Miassite is a mineral made of rhodium and sulfur , with the stoichiometric formula Rh17S15. It was named after the Miass River in the Urals. [ 1 ] It is a superconductor, and an unconventional one. Naturally occurring miassite is too brittle, so it is made in a lab for superconductor research. [ 2 ]
Its ability to be an unconventional superconductor was discovered at Ames National Laboratory in 2024. [ 3 ]
Miassite, covellite , parkerite , and palladseite occur in nature and are also made in labs as superconductors. Miassite is the only one found to also have unconventional superconductivity. [ 4 ] | https://en.wikipedia.org/wiki/Miassite |
MicMac is an open-source software for photogrammetry developed by the French National Geographic Institute . [ 1 ]
| https://en.wikipedia.org/wiki/MicMac_(software) |
In mathematics , the mice problem is a continuous pursuit–evasion problem in which a number of mice (or insects, dogs, missiles, etc.) are considered to be placed at the corners of a regular polygon . In the classic setup, each then begins to move towards its immediate neighbour (clockwise or anticlockwise). The goal is often to find out at what time the mice meet.
The most common version has the mice starting at the corners of a unit square, moving at unit speed. In this case they meet after a time of one unit, because the distance between two neighboring mice always decreases at a speed of one unit. More generally, for a regular polygon of n unit-length sides, the distance between neighboring mice decreases at a speed of 1 − cos(2π/n), so they meet after a time of 1/(1 − cos(2π/n)). [ 1 ] [ 2 ]
For all regular polygons, each mouse traces out a pursuit curve in the shape of a logarithmic spiral . These curves meet in the center of the polygon. [ 3 ]
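The closed-form meeting time 1/(1 − cos(2π/n)) can be checked against a direct numerical integration of the pursuit. The sketch below (function names and step size are choices of this example, not from the source) takes a small Euler step for each mouse toward its anticlockwise neighbour until the mice nearly coincide:

```python
import math

def meeting_time(n):
    """Analytic meeting time for n mice on a regular n-gon with
    unit-length sides and unit speed: 1 / (1 - cos(2*pi/n))."""
    return 1.0 / (1.0 - math.cos(2.0 * math.pi / n))

def simulate(n, dt=1e-4, eps=1e-3):
    """Euler-integrate the pursuit; return the elapsed time when
    neighbouring mice are closer than eps."""
    r = 1.0 / (2.0 * math.sin(math.pi / n))   # circumradius for unit side
    pts = [(r * math.cos(2 * math.pi * k / n),
            r * math.sin(2 * math.pi * k / n)) for k in range(n)]
    t = 0.0
    while math.hypot(pts[1][0] - pts[0][0], pts[1][1] - pts[0][1]) > eps:
        new = []
        for k in range(n):
            x, y = pts[k]
            tx, ty = pts[(k + 1) % n]          # chase the next mouse
            d = math.hypot(tx - x, ty - y)
            new.append((x + dt * (tx - x) / d, y + dt * (ty - y) / d))
        pts, t = new, t + dt
    return t

print(round(meeting_time(4), 6))   # → 1.0 (square)
print(round(meeting_time(3), 6))   # → 0.666667 (triangle, 2/3)
```

The simulated time for the square converges to the analytic value of 1 as the step size and stopping tolerance shrink.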
In Dara Ó Briain: School of Hard Sums , the mice problem is discussed. Instead of 4 mice, 4 ballroom dancers are used. [ 4 ]
| https://en.wikipedia.org/wiki/Mice_problem |
A micellar cubic phase is a lyotropic liquid crystal phase formed when the concentration of micelles dispersed in a solvent (usually water) is sufficiently high that they are forced to pack into a structure with long-range positional (translational) order. For example, spherical micelles may pack into a body-centered cubic lattice. Normal topology micellar cubic phases, denoted by the symbol I 1 , are the first lyotropic liquid crystalline phases that are formed by type I amphiphiles . The amphiphiles' hydrocarbon tails are contained on the inside of the micelle, and hence the polar-apolar interface of the aggregates has a positive mean curvature , by definition (it curves away from the polar phase). The first pure surfactant system found to exhibit three different type I (oil-in-water) micellar cubic phases was the dodecaoxyethylene mono-n-dodecyl ether (C12EO12)/water system. [ 1 ]
Inverse topology micellar cubic phases (such as the Fd3m phase) are observed for some type II amphiphiles at very high amphiphile concentrations. These aggregates, in which water is the minority phase, have a polar-apolar interface with a negative mean curvature . The structures of the normal topology micellar cubic phases that are formed by some types of amphiphiles (e.g. the oligoethyleneoxide monoalkyl ether series of non-ionic surfactants) are the subject of debate. Micellar cubic phases are isotropic phases but are distinguished from micellar solutions by their very high viscosity. When thin film samples of micellar cubic phases are viewed under a polarising microscope they appear dark and featureless. Small air bubbles trapped in these preparations tend to appear highly distorted and occasionally have faceted surfaces. A reversed micellar cubic phase has also been observed, although it is much less common: a reverse micellar cubic phase with Fd3m (Q227) symmetry was found to form in a ternary system of an amphiphilic diblock copolymer (EO17BO10, where EO represents ethylene oxide and BO represents butylene oxide), water, and p-xylene. [ 2 ] | https://en.wikipedia.org/wiki/Micellar_cubic |
Micellar electrokinetic chromatography ( MEKC ) is a chromatography technique used in analytical chemistry . It is a modification of capillary electrophoresis (CE), extending its functionality to neutral analytes, [ 1 ] where the samples are separated by differential partitioning between micelles (pseudo-stationary phase) and a surrounding aqueous buffer solution (mobile phase). [ 2 ]
The basic set-up and detection methods used for MEKC are the same as those used in CE. The difference is that the solution contains a surfactant at a concentration that is greater than the critical micelle concentration (CMC). Above this concentration, surfactant monomers are in equilibrium with micelles.
In most applications, MEKC is performed in open capillaries under alkaline conditions to generate a strong electroosmotic flow . Sodium dodecyl sulfate (SDS) is the most commonly used surfactant in MEKC applications. The anionic character of the sulfate groups of SDS causes the surfactant and micelles to have electrophoretic mobility that is counter to the direction of the strong electroosmotic flow . As a result, the surfactant monomers and micelles migrate quite slowly, though their net movement is still toward the cathode . [ 3 ] During a MEKC separation, analytes distribute themselves between the hydrophobic interior of the micelle and hydrophilic buffer solution as shown in figure 1 .
Analytes that are insoluble in the interior of micelles should migrate at the electroosmotic flow velocity, u_o, and be detected at the retention time of the buffer, t_M. Analytes that solubilize completely within the micelles (analytes that are highly hydrophobic) should migrate at the micelle velocity, u_c, and elute at the final elution time, t_c. [ 4 ]
The micelle velocity is defined by:

u_c = u_o + u_p

where u_p is the electrophoretic velocity of a micelle. [ 4 ]
The retention time of a given sample should depend on the capacity factor, k′:

k′ = n_c / n_w

where n_c is the total number of moles of solute in the micelle and n_w is the total moles in the aqueous phase. [ 4 ] The retention time of a solute should then fall within the range:

t_M ≤ t_r ≤ t_c
Charged analytes have a more complex interaction in the capillary because they exhibit electrophoretic mobility, engage in electrostatic interactions with the micelle, and participate in hydrophobic partitioning. [ 5 ]
The fraction of the sample in the aqueous phase, R, is given by:

R = (u_s − u_c) / (u_o − u_c)

where u_s is the migration velocity of the solute. [ 4 ] The value R can also be expressed in terms of the capacity factor:

R = 1 / (1 + k′)
Using the relationship between velocity, tube length from the injection end to the detector cell ( L ), and retention time, u_o = L/t_M, u_c = L/t_c and u_s = L/t_r, a relationship between the capacity factor and retention times can be formulated: [ 5 ]

k′ = (t_r − t_M) / ( t_M (1 − t_r/t_c) )

The extra term enclosed in parentheses accounts for the partial mobility of the hydrophobic phase in MEKC. [ 5 ] This equation resembles the expression derived for k′ in conventional packed bed chromatography:

k′ = (t_r − t_M) / t_M
A rearrangement of the previous equation gives an expression for the retention time: [ 6 ]

t_r = t_M (1 + k′) / ( 1 + (t_M/t_c) k′ )
From this equation it can be seen that all analytes that partition strongly into the micellar phase (where k′ is essentially infinite) migrate at the same time, t_c. In conventional chromatography, separation of similar compounds can be improved by gradient elution. In MEKC, however, other techniques must be used to extend the elution range and separate strongly retained analytes. [ 5 ]
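These relations are easy to work with numerically. The sketch below assumes the standard MEKC retention expressions from Terabe's treatment, k′ = (t_r − t_M)/(t_M(1 − t_r/t_c)) and its inverse; the retention times used are purely illustrative:

```python
def mekc_k(t_r, t_m, t_c):
    """Retention factor from migration times:
    k = (t_r - t_m) / (t_m * (1 - t_r / t_c)).
    As t_c -> infinity this reduces to the conventional
    chromatographic expression (t_r - t_m) / t_m."""
    return (t_r - t_m) / (t_m * (1.0 - t_r / t_c))

def mekc_tr(k, t_m, t_c):
    """Inverse relation: migration time of an analyte with
    retention factor k, bounded between t_m and t_c."""
    return t_m * (1.0 + k) / (1.0 + (t_m / t_c) * k)

# Illustrative times (minutes): buffer marker t_m, micelle marker t_c
t_m, t_c = 2.0, 10.0
k = mekc_k(5.0, t_m, t_c)                # analyte detected at 5.0 min
print(round(k, 3))                       # → 3.0
print(round(mekc_tr(k, t_m, t_c), 3))    # → 5.0 (round trip)
print(round(mekc_tr(1e6, t_m, t_c), 3))  # very hydrophobic: elutes near t_c
```

Note the limiting behavior: as k grows without bound, the predicted migration time approaches t_c, which is the "all analytes co-elute with the micelles" situation described above.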
Elution ranges can be extended by several techniques, including the use of organic modifiers, cyclodextrins , and mixed micelle systems. Short-chain alcohols or acetonitrile can be used as organic modifiers that decrease t_M and k′ to improve the resolution of analytes that co-elute with the micellar phase. These agents, however, may alter the level of the EOF. Cyclodextrins are cyclic polysaccharides that form inclusion complexes with the analyte, causing competitive hydrophobic partitioning. Since analyte-cyclodextrin complexes are neutral, they migrate toward the cathode at a higher velocity than the negatively charged micelles. Mixed micelle systems, such as the one formed by combining SDS with the non-ionic surfactant Brij-35, can also be used to alter the selectivity of MEKC. [ 5 ]
The simplicity and efficiency of MEKC have made it an attractive technique for a variety of applications. Further improvements can be made to the selectivity of MEKC by adding chiral selectors or chiral surfactants to the system. Unfortunately, this technique is not suitable for protein analysis because proteins are generally too large to partition into a surfactant micelle and tend to bind to surfactant monomers to form SDS-protein complexes. [ 7 ]
Recent applications of MEKC include the analysis of uncharged pesticides , [ 8 ] essential and branched-chain amino acids in nutraceutical products, [ 9 ] and the hydrocarbon and alcohol contents of the marjoram herb. [ 10 ]
MEKC has also been explored for its potential in combinatorial chemical analysis. The advent of combinatorial chemistry has enabled medicinal chemists to synthesize and identify large numbers of potential drugs in relatively short periods of time. The small sample and solvent requirements and the high resolving power of MEKC have enabled the technique to be used to analyze large numbers of compounds quickly and with good resolution.
Traditional methods of analysis, like high-performance liquid chromatography (HPLC), can be used to identify the purity of a combinatorial library, but assays need to be rapid with good resolution for all components to provide useful information for the chemist. [ 11 ] The introduction of surfactant to traditional capillary electrophoresis instrumentation has dramatically expanded the scope of analytes that can be separated by capillary electrophoresis.
MEKC can also be used in routine quality control of antibiotics in pharmaceuticals or feedstuffs. [ 12 ] | https://en.wikipedia.org/wiki/Micellar_electrokinetic_chromatography |
Micellar liquid chromatography ( MLC ) is a form of reversed phase liquid chromatography that uses an aqueous micellar solution as the mobile phase. [ 1 ]
The use of micelles in high performance liquid chromatography was first introduced by Armstrong and Henry in 1980. [ 2 ] [ 3 ] The technique is used mainly to enhance the retention and selectivity of various solutes that would otherwise be inseparable or poorly resolved. Micellar liquid chromatography (MLC) has been used in a variety of applications including separation of mixtures of charged and neutral solutes, direct injection of serum and other physiological fluids, analysis of pharmaceutical compounds , separation of enantiomers , analysis of inorganic and organometallic compounds, and a host of others.
One of the main drawbacks of the technique is the reduced efficiency caused by the micelles. Despite the sometimes poor efficiency, MLC is a better choice than ion-exchange LC or ion-pairing LC for the separation of charged molecules and of mixtures of charged and neutral species . [ 1 ] The aspects discussed below include the theory of MLC, the use of models to predict its retention characteristics, the effect of micelles on efficiency and selectivity, and general applications of MLC.
Reverse phase high-performance liquid chromatography (RP-HPLC) involves a non- polar stationary phase, often a hydrocarbon chain , and a polar mobile or liquid phase. The mobile phase generally consists of an aqueous portion with an organic addition, such as methanol or acetonitrile . When a solution of analytes is injected into the system, the components begin to partition out of the mobile phase and interact with the stationary phase. Each component interacts with the stationary phase in a different manner depending upon its polarity and hydrophobicity . In reverse phase HPLC, the solute with the greatest polarity will interact less with the stationary phase and spend more time in the mobile phase. As the polarity of the components decreases, the time spent in the column increases. Thus, a separation of components is achieved based on polarity. [ 4 ] The addition of micelles to the mobile phase introduces a third phase into which the solutes may partition.
Micelles are composed of surfactant , or detergent, monomers with a hydrophobic moiety , or tail, on one end, and a hydrophilic moiety, or head group, on the other. The polar head group may be anionic , cationic , zwitterionic , or non-ionic. When the concentration of a surfactant in solution reaches its critical micelle concentration (CMC), it forms micelles which are aggregates of the monomers. The CMC is different for each surfactant, as is the number of monomers which make up the micelle, termed the aggregation number (AN). [ 5 ] Table 1 lists some common detergents used to form micelles along with their CMC and AN where available.
Many of the characteristics of micelles differ from those of bulk solvents. For example, the micelles are, by nature, spatially heterogeneous with a hydrocarbon, nearly anhydrous core and a highly solvated , polar head group. They have a high surface-to-volume ratio due to their small size and generally spherical shape. Their surrounding environment ( pH , ionic strength, buffer ion, presence of a co-solvent, and temperature ) has an influence on their size, shape, critical micelle concentration, aggregation number and other properties. [ 6 ]
Another important property of micelles is the Krafft point , the temperature at which the solubility of the surfactant is equal to its CMC. For HPLC applications involving micelles, it is best to choose a surfactant with a low Krafft point and a low CMC. A high CMC would require a high concentration of surfactant, which would increase the viscosity of the mobile phase, an undesirable condition. Additionally, the Krafft point should be well below room temperature to avoid having to heat the mobile phase. To avoid potential interference with absorption detectors, a surfactant should also have a small molar absorptivity at the chosen wavelength of analysis. Light scattering is not a concern because of the small size of the micelle (a few nanometers ). [ 1 ]
The effect of organic additives on micellar properties is another important consideration. A small amount of organic solvent is often added to the mobile phase to help improve efficiency and to improve separations of compounds. Care needs to be taken when determining how much organic to add. Too high a concentration of the organic may cause the micelle to disperse, as it relies on hydrophobic effects for its formation. The maximum concentration of organic depends on the organic solvent itself, and on the micelle. This information is generally not known precisely, but a generally accepted practice is to keep the volume percentage of organic below 15–20%. [ 1 ]
Fischer and Jandera [ 7 ] studied the effect of changing the concentration of methanol on CMC values for three commonly used surfactants: two cationic surfactants, hexadecyltrimethylammonium bromide (CTAB) and N-(α-carbethoxypentadecyl)trimethylammonium bromide ( Septonex ), and one anionic surfactant, sodium dodecyl sulphate (SDS). Generally speaking, the CMC increased as the concentration of methanol increased. It was concluded that the distribution of the surfactant between the bulk mobile phase and the micellar phase shifts toward the bulk as the methanol concentration increases. For CTAB, the rise in CMC is greatest from 0–10% methanol and is nearly constant from 10–20%; above 20% methanol, the micelles disaggregate and no longer exist. For SDS, the CMC values remain unaffected below 10% methanol, but begin to increase as the methanol concentration is raised further; disaggregation occurs above 30% methanol. Finally, for Septonex, only a slight increase in CMC is observed up to 20%, with disaggregation occurring above 25%. [ 7 ]
As noted above, the mobile phase in MLC consists of micelles in an aqueous solvent, usually with a small amount of organic modifier added to complete the mobile phase. A typical reversed-phase alkyl -bonded stationary phase is used. The first discussion of the thermodynamics involved in the retention mechanism was published by Armstrong and Nome in 1981. [ 8 ] In MLC, three partition coefficients must be taken into account: the solute partitions between the water and the stationary phase (KSW), between the water and the micelles (KMW), and between the micelles and the stationary phase (KSM).
Armstrong and Nome derived an equation describing the partition coefficients in terms of the retention factor (formerly the capacity factor), k′. In HPLC, the retention factor represents the molar ratio of the solute in the stationary phase to that in the mobile phase, and is easily measured from the retention times of the compound and of an unretained compound. The equation, as rewritten by Guermouche et al., [ 9 ] can be put in the linear form:

1/k′ = (KMW / (φ·KSW)) · CM + 1 / (φ·KSW)

where φ is the phase ratio of the column and CM is the concentration of micellized surfactant. A plot of 1/k′ versus CM gives a straight line in which KSW can be calculated from the intercept and KMW can be obtained from the ratio of the slope to the intercept. Finally, KSM can be obtained from the ratio of the other two partition coefficients:

KSM = KSW / KMW
As can be observed from Figure 1, KMW is independent of any effects from the stationary phase, assuming the same micellar mobile phase. [ 9 ]
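The graphical analysis just described can be sketched in code. Everything here is synthetic: the "true" constants are invented for illustration, 1/k′ data are generated from the simplified linear relation 1/k′ = (KMW/(φ·KSW))·CM + 1/(φ·KSW), and a least-squares fit then recovers φ·KSW from the intercept and KMW from the slope-to-intercept ratio:

```python
# Hypothetical "true" constants (illustrative assumptions only)
phi_Ksw_true = 50.0    # phi * K_SW
Kmw_true = 120.0       # K_MW

cm = [0.01, 0.02, 0.04, 0.06, 0.08]   # micellized surfactant, mol/L
inv_k = [(Kmw_true / phi_Ksw_true) * c + 1.0 / phi_Ksw_true for c in cm]

# ordinary least-squares line: inv_k = slope * cm + intercept
n = len(cm)
mx, my = sum(cm) / n, sum(inv_k) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cm, inv_k))
         / sum((x - mx) ** 2 for x in cm))
intercept = my - slope * mx

phi_Ksw = 1.0 / intercept   # phase ratio times K_SW, from the intercept
Kmw = slope / intercept     # K_MW, from the slope-to-intercept ratio
Ksm = phi_Ksw / Kmw         # K_SM (up to phi), ratio of the other two

print(round(phi_Ksw, 3), round(Kmw, 3))   # → 50.0 120.0
```

With real data the points scatter about the line, but the same intercept and slope-to-intercept analysis applies.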
The validity of the retention mechanism proposed by Armstrong and Nome has been successfully and repeatedly confirmed experimentally. However, some variations and alternate theories have also been proposed. Jandera and Fischer [ 10 ] developed equations to describe the dependence of retention behavior on the change in micellar concentrations. They found that the retention of most compounds tested decreased with increasing concentrations of micelles. From this, it can be surmised that the compounds associate with the micelles, as they spend less time associated with the stationary phase. [ 10 ]
Foley proposed a retention model similar to that of Armstrong and Nome: a general model for secondary chemical equilibria in liquid chromatography. [ 11 ] While this model was developed in a previous reference and could describe any secondary chemical equilibrium, such as acid-base equilibria or ion-pairing, Foley further refined it for MLC. When an equilibrant (X), in this case surfactant, is added to the mobile phase, a secondary equilibrium is created in which an analyte exists both as free analyte (A) and complexed with the equilibrant (AX). The two forms are retained by the stationary phase to different extents, allowing the retention to be varied by adjusting the concentration of equilibrant (micelles). [ 11 ]
The resulting equation, solved for the retention factor in terms of the association constant, is much the same as that of Armstrong and Nome:

k′ = (k′_A + k′_AX · K_AX [X]) / (1 + K_AX [X])

where k′_A and k′_AX are the retention factors of the free and complexed analyte, [X] is the equilibrant concentration, and K_AX is the analyte-equilibrant association constant.
Foley used the above equation to determine the solute-micelle association constants and free solute retention factors for a variety of solutes with different surfactants and stationary phases. From this data, it is possible to predict the type and optimum surfactant concentrations needed for a given solute or solutes. [ 11 ]
Foley has not been the only researcher interested in determining solute-micelle association constants. A review article by Marina and Garcia, with 53 references, discusses the usefulness of obtaining solute-micelle association constants. [ 12 ] The association constants for two solutes can be used to help understand the retention mechanism. The separation factor of two solutes, α, can be expressed as KSM1/KSM2. If the experimental α coincides with the ratio of the two solute-micelle partition coefficients, it can be assumed that their retention occurs through a direct transfer from the micellar phase to the stationary phase. In addition, calculation of α allows the separation selectivity to be predicted before the analysis is performed, provided the two coefficients are known. [ 12 ]
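As a small worked example of this idea (all coefficients invented for illustration), a predicted separation factor α = KSM1/KSM2 can be computed and compared with a hypothetical experimental value to test whether a direct micelle-to-stationary-phase transfer mechanism is consistent with the data:

```python
def separation_factor(k_sm1, k_sm2):
    """Selectivity alpha = K_SM1 / K_SM2, reported conventionally
    as a value >= 1."""
    a = k_sm1 / k_sm2
    return a if a >= 1.0 else 1.0 / a

# Invented micelle->stationary-phase coefficients for two solutes
alpha_pred = separation_factor(4.2, 2.1)
alpha_exp = 1.95        # hypothetical measured separation factor

# agreement within ~5% is taken here as consistent with direct
# micelle-to-stationary-phase transfer
print(round(alpha_pred, 3))                             # → 2.0
print(abs(alpha_pred - alpha_exp) / alpha_exp < 0.05)   # → True
```

The 5% tolerance is an arbitrary choice for the sketch; in practice the comparison would weigh the experimental uncertainty in both constants.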
The desire to predict retention behavior and selectivity has led to the development of several mathematical models. [ 13 ] Changes in pH, surfactant concentration, and concentration of organic modifier all play a significant role in determining the chromatographic separation. Often one or more of these parameters needs to be optimized to achieve the desired separation, yet the optimum must take all three variables into account simultaneously. The review by Garcia-Alvarez-Coque et al. mentions several successful models for varying scenarios, a few of which are noted here. The classic models by Armstrong and Nome and by Foley describe the general cases. Foley's model applies to many cases and has been experimentally verified for ionic, neutral, polar, and nonpolar solutes; anionic, cationic, and non-ionic surfactants; and C8, C18, and cyano stationary phases. The model begins to deviate for very strongly and very weakly retained solutes: strongly retained solutes may become irreversibly bound to the stationary phase, while weakly retained solutes may elute in the column void volume. [ 13 ]
Other models, proposed by Arunyanart and Cline-Love and by Rodgers and Khaledi, describe the effect of pH on the retention of weak acids and bases. These authors derived equations relating pH and micellar concentration to retention. As the pH varies, sigmoidal behavior is observed for the retention of acidic and basic species, and this model has been shown to predict retention behavior accurately. [ 13 ] Still other models predict behavior in hybrid micellar systems using equations or by modeling behavior based on controlled experimentation. Additionally, models accounting for the simultaneous effects of pH, micelle concentration, and organic-modifier concentration have been suggested. These models allow further optimization of the separation of weak acids and bases. [ 13 ]
One research group, Rukhadze et al., [ 14 ] derived a first-order linear relationship describing the influence of micelle concentration, organic concentration, and pH on the selectivity and resolution of seven barbiturates . The researchers found that a second-order mathematical equation fit the data more precisely. The derivations and experimental details are beyond the scope of this discussion. The model succeeded in predicting the experimental conditions necessary to separate compounds that are traditionally difficult to resolve. [ 14 ]
Jandera, Fischer, and Effenberger approached the modeling problem in yet another way. [ 15 ] Their model is based on lipophilicity and polarity indices of solutes. The lipophilicity index relates a given solute to a hypothetical number of carbon atoms in an alkyl chain and depends on a calibration series determined experimentally; it should be independent of the stationary phase and the organic-modifier concentration. The polarity index measures the polarity of the solute-solvent interactions and depends strongly on the organic solvent, and somewhat on the polar groups present in the stationary phase. Twenty-three compounds were analyzed with varying mobile phases and compared to the lipophilicity and polarity indices. The results showed that the model could be applied to MLC, but better predictive behavior was found at surfactant concentrations below the CMC, i.e. in the sub-micellar range. [ 15 ]
A final type of model based on the molecular properties of a solute is a branch of quantitative structure-activity relationships (QSAR). QSAR studies attempt to correlate the biological activity of drugs , or a class of drugs, with their structures. The normally accepted means of uptake for a drug, or its metabolite, is through partitioning into lipid bilayers . The descriptor most often used in QSAR to determine the hydrophobicity of a compound is the octanol -water partition coefficient, log P. [ 16 ] MLC provides an attractive and practical alternative. When micelles are added to a mobile phase, many similarities exist between the micellar mobile phase/stationary phase system and the biological membrane/water interface. In MLC, the stationary phase becomes modified by the adsorption of surfactant monomers, which are structurally similar to the membranous hydrocarbon chains in the biological model. Additionally, the hydrophilic/hydrophobic interactions of the micelles are similar to those in the polar regions of a membrane. Thus, the development of quantitative retention-activity relationships (QRAR) has become widespread. [ 17 ]
Escuder-Gilabert et al. [ 18 ] tested three different QRAR retention models on ionic compounds. Several classes of compounds were tested including catecholamines , local anesthetics , diuretics , and amino acids . The best model relating log K and log P was found to be one in which the total molar charge of a compound at a given pH is included as a variable. This model proved to give fairly accurate predictions of log P, R > 0.9. [ 18 ] Other studies have been performed which develop predictive QRAR models for tricyclic antidepressants [ 17 ] and barbiturates. [ 16 ]
The main limitation of MLC is the reduction in efficiency (peak broadening) observed when purely aqueous micellar mobile phases are used. [ 19 ] Several explanations for the poor efficiency have been proposed: poor wetting of the stationary phase by the micellar aqueous mobile phase, slow mass transfer between the micelles and the stationary phase, and poor mass transfer within the stationary phase. To enhance efficiency, the most common approaches have been the addition of small amounts of isopropyl alcohol and an increase in temperature. A review by Berthod [ 19 ] studied the combined theories presented above and applied the Knox equation to independently determine the cause of the reduced efficiency. The Knox equation is commonly used in HPLC to describe the different contributions to the overall band broadening of a solute. It is expressed as:

h = A·ν^(1/3) + B/ν + C·ν

where h is the reduced plate height, ν is the reduced flow velocity, and the A, B, and C terms represent the contributions of flow anisotropy (eddy diffusion), longitudinal diffusion, and mass-transfer resistance, respectively.
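The Knox dependence h = A·ν^(1/3) + B/ν + C·ν can be explored numerically. The coefficients below are illustrative assumptions (values of roughly A ≈ 1, B ≈ 2, C ≈ 0.05 are often quoted for well-packed columns, but these particular numbers are not from the source); a coarse scan locates the reduced velocity that minimizes the reduced plate height:

```python
def knox_h(nu, A, B, C):
    """Reduced plate height from the Knox equation:
    h = A*nu**(1/3) + B/nu + C*nu."""
    return A * nu ** (1.0 / 3.0) + B / nu + C * nu

# Illustrative coefficients (assumed, not measured values)
A, B, C = 1.0, 2.0, 0.05

# coarse scan for the reduced velocity giving minimum plate height
nus = [0.1 * i for i in range(1, 501)]
h_min, nu_opt = min((knox_h(nu, A, B, C), nu) for nu in nus)
print(f"minimum h = {h_min:.3f} at reduced velocity {nu_opt:.1f}")
# → minimum h = 2.258 at reduced velocity 3.1
```

This is the sense in which "reducing the flow rate to one closely matched to that derived from the Knox equation" optimizes efficiency: operating near the minimum of the h(ν) curve minimizes band broadening.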
Berthod's use of the Knox equation to determine experimentally which of the proposed theories was most correct led him to the following conclusions. The flow anisotropy in the micellar phase seems to be much greater than in traditional hydro-organic mobile phases of similar viscosity . This is likely due to partial clogging of the stationary-phase pores by adsorbed surfactant molecules. Raising the column temperature served both to decrease the viscosity of the mobile phase and to reduce the amount of adsorbed surfactant. Both results reduce the A term and the amount of eddy diffusion , and thereby increase efficiency. [ 19 ]
The increase in the B term, related to longitudinal diffusion, is associated with a decrease in the solute diffusion coefficient in the mobile phase, DM, due to the presence of the micelles, and with an increase in the retention factor, k′. Surfactant adsorption on the stationary phase likewise causes a dramatic decrease in the solute diffusion coefficient in the stationary phase, DS. An increase in temperature, coupled with the addition of alcohol to the mobile phase, drastically decreases the amount of adsorbed surfactant; both actions reduce the C term caused by slow mass transfer from the stationary phase to the mobile phase. Further optimization of efficiency can be gained by reducing the flow rate to one close to the optimum derived from the Knox equation. Overall, the three proposed mechanisms all appear to contribute to the poor efficiency observed, and can be partially countered by the addition of organic modifiers, particularly alcohols, and by increasing the column temperature. [ 19 ]
Despite the reduced efficiency versus reversed-phase HPLC, hundreds of applications have been reported using MLC. One of the most advantageous is the ability to directly inject physiological fluids. Micelles can solubilize proteins, which enables MLC to be used to analyze untreated biological fluids such as plasma , serum, and urine . [ 1 ] Martinez et al. [ 20 ] found MLC to be highly useful in analyzing a class of drugs called β-antagonists ( beta-blockers ) in urine samples. The main advantage of MLC with this type of sample is the great time savings in sample preparation. Alternative methods of analysis, including reversed-phase HPLC, require lengthy extraction and sample work-up procedures before analysis can begin. With MLC, direct injection is often possible, with retention times of less than 15 minutes for the separation of up to nine β-antagonists. [ 20 ]
Another application compared reversed-phase HPLC with MLC for the analysis of desferrioxamine in serum. [ 21 ] Desferrioxamine (DFO) is a drug commonly used to remove excess iron in patients with chronic or acute iron overload. The analysis of DFO along with its chelated complexes, Fe(III)-DFO and Al(III)-DFO, had proven difficult at best in previous attempts. This study found that direct injection of the serum was possible with MLC, whereas an ultrafiltration step was necessary in HPLC. However, MLC had difficulty separating the chelated DFO complexes and lacked sensitivity for DFO itself. The researchers found that, in this case, reversed-phase HPLC was the better, more sensitive technique despite the time savings of direct injection. [ 21 ]
Analysis of pharmaceuticals by MLC is also gaining popularity. The selectivity and peak shape obtained with MLC are much improved over commonly used ion-pair chromatography. [ 22 ] MLC mimics, yet enhances, the selectivity offered by ion-pairing reagents for the separation of active ingredients in pharmaceutical drugs . For basic drugs, MLC reduces the excessive peak tailing frequently observed in ion-pairing. Hydrophilic drugs, which are often unretained using conventional HPLC, are retained by MLC due to solubilization into the micelles. Drugs commonly found in cold medications such as acetaminophen , L-ascorbic acid , phenylpropanolamine HCl, tipepidine hibenzate, and chlorpheniramine maleate have been successfully separated with good peak shape using MLC, as have basic narcotic drugs such as codeine and morphine . [ 22 ]
Another novel application of MLC involves the separation and analysis of inorganic compounds , mostly simple ions. This is a relatively new area for MLC, but has seen some promising results. [ 23 ] MLC has been observed to provide better selectivity for inorganic ions than ion-exchange or ion-pairing chromatography. While this application is still in the early stages of development, the possibility exists for novel, much-enhanced separations of inorganic species. [ 23 ]
Since the technique was first reported in 1980, micellar liquid chromatography has been used in hundreds of applications. This micelle-mediated technique provides unique opportunities for solving complicated separation problems. Despite its poor efficiency, MLC has been used successfully in many applications, and it appears extremely advantageous for physiological fluids, pharmaceuticals, and even inorganic ions. The technique has proven superior to ion-pairing and ion-exchange chromatography for many applications. As new approaches are developed to combat its poor efficiency, the use of MLC is sure to spread and gain wider acceptance. | https://en.wikipedia.org/wiki/Micellar_liquid_chromatography |
A micelle ( / m aɪ ˈ s ɛ l / ) or micella ( / m aɪ ˈ s ɛ l ə / ) ( pl. micelles or micellae , respectively) is an aggregate (or supramolecular assembly ) of surfactant amphipathic lipid molecules dispersed in a liquid, forming a colloidal suspension (also known as associated colloidal system). [ 4 ] A typical micelle in water forms an aggregate with the hydrophilic "head" regions in contact with surrounding solvent , sequestering the hydrophobic single-tail regions in the micelle centre.
This phase is caused by the packing behavior of single-tail lipids in a bilayer . The difficulty in filling the volume of the interior of a bilayer, while accommodating the area per head group forced on the molecule by the hydration of the lipid head group, leads to the formation of the micelle. This type of micelle is known as a normal-phase micelle (or oil-in-water micelle). Inverse micelles have the head groups at the centre with the tails extending out (or water-in-oil micelle).
Micelles are approximately spherical in shape. Other shapes, such as ellipsoids, cylinders, and bilayers, are also possible. The shape and size of a micelle are a function of the molecular geometry of its surfactant molecules and solution conditions such as surfactant concentration, temperature , pH , and ionic strength . The process of forming micelles is known as micellisation and forms part of the phase behaviour of many lipids according to their polymorphism . [ 5 ]
The ability of a soapy solution to act as a detergent has been recognized for centuries. However, it was only at the beginning of the twentieth century that the constitution of such solutions was scientifically studied. Pioneering work in this area was carried out by James William McBain at the University of Bristol . As early as 1913, he postulated the existence of "colloidal ions" to explain the good electrolytic conductivity of sodium palmitate solutions. [ 6 ] These highly mobile, spontaneously formed clusters came to be called micelles, a term borrowed from biology and popularized by G.S. Hartley in his classic book Paraffin Chain Salts: A Study in Micelle Formation . [ 7 ] The term micelle was coined in nineteenth century scientific literature as the ‑elle diminutive of the Latin word mica (particle), conveying a new word for "tiny particle". [ 8 ]
Individual surfactant molecules that are in the system but are not part of a micelle are called " monomers ". Micelles represent a molecular assembly , in which the individual components are thermodynamically in equilibrium with monomers of the same species in the surrounding medium. In water, the hydrophilic "heads" of surfactant molecules are always in contact with the solvent, regardless of whether the surfactants exist as monomers or as part of a micelle. However, the lipophilic "tails" of surfactant molecules have less contact with water when they are part of a micelle; this is the basis for the energetic drive for micelle formation. In a micelle, the hydrophobic tails of several surfactant molecules assemble into an oil-like core, the most stable form of which has no contact with water. By contrast, surfactant monomers are surrounded by water molecules that create a "cage" or solvation shell connected by hydrogen bonds . This water cage is similar to a clathrate , has an ice -like crystal structure, and can be characterized according to the hydrophobic effect. The extent of lipid solubility is determined by the unfavorable entropy contribution due to the ordering of the water structure according to the hydrophobic effect.
Micelles composed of ionic surfactants have an electrostatic attraction to the ions that surround them in solution, the latter known as counterions . Although the closest counterions partially mask a charged micelle (by up to 92%), the effects of micelle charge affect the structure of the surrounding solvent at appreciable distances from the micelle. Ionic micelles influence many properties of the mixture, including its electrical conductivity. Adding salts to a colloid containing micelles can decrease the strength of electrostatic interactions and lead to the formation of larger ionic micelles. [ 9 ] This is more accurately seen from the point of view of an effective charge in hydration of the system.
Micelles form only when the concentration of surfactant is greater than the critical micelle concentration (CMC), and the temperature of the system is greater than the critical micelle temperature, or Krafft temperature . The formation of micelles can be understood using thermodynamics : Micelles can form spontaneously because of a balance between entropy and enthalpy . In water, the hydrophobic effect is the driving force for micelle formation, despite the fact that assembling surfactant molecules is unfavorable in terms of both enthalpy and entropy of the system. At very low concentrations of the surfactant, only monomers are present in solution. As the concentration of the surfactant is increased, a point is reached at which the unfavorable entropy contribution, from clustering the hydrophobic tails of the molecules, is overcome by a gain in entropy due to release of the solvation shells around the surfactant tails. At this point, the lipid tails of a part of the surfactants must be segregated from the water. Hence, they start to form micelles. In broad terms, above the CMC, the loss of entropy due to assembly of the surfactant molecules is less than the gain in entropy by setting free the water molecules that were "trapped" in the solvation shells of the surfactant monomers. Also important are enthalpic considerations, such as the electrostatic interactions that occur between the charged parts of surfactants.
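The role of the CMC described above can be sketched numerically with the simple pseudo-phase approximation: below the CMC only monomers exist, and above it the monomer concentration stays pinned at the CMC while the excess surfactant micellizes. This is a minimal illustrative sketch; the helper name and the example concentrations are assumptions, not figures from this article:

```python
def micellized_fraction(c_total, cmc):
    """Pseudo-phase approximation: monomer concentration is capped at the CMC,
    and all excess surfactant is assumed to be micellized."""
    if c_total <= cmc:
        return 0.0
    return (c_total - cmc) / c_total

# Illustrative numbers: a surfactant with a CMC of ~8 mM, at 20 mM total.
print(micellized_fraction(20.0, 8.0))  # → 0.6
```

The approximation captures the sharp onset of micellization: the fraction is exactly zero up to the CMC and then grows with total concentration.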
The micelle packing parameter equation is utilized to help "predict molecular self-assembly in surfactant solutions": [ 10 ] P = v o / ( a e ℓ o ) {\displaystyle P=v_{o}/(a_{e}\ell _{o})}
where v o {\displaystyle v_{o}} is the surfactant tail volume, ℓ o {\displaystyle \ell _{o}} is the tail length, and a e {\displaystyle a_{e}} is the equilibrium area per molecule at the aggregate surface.
The concept of micelles was introduced to describe the core–corona aggregates of small surfactant molecules; however, it has also been extended to describe aggregates of amphiphilic block copolymers in selective solvents. [ 11 ] [ 12 ] It is important to know the difference between these two systems. The major difference between these two types of aggregates is in the size of their building blocks. Surfactant molecules have a molecular weight of generally a few hundred grams per mole, while block copolymers are generally one or two orders of magnitude larger. Moreover, thanks to their larger hydrophilic and hydrophobic parts, block copolymers can have a much more pronounced amphiphilic nature than surfactant molecules.
Because of these differences in the building blocks, some block copolymer micelles behave like surfactant ones, while others do not. It is therefore necessary to distinguish between the two situations: the former are classed as dynamic micelles, while the latter are called kinetically frozen micelles.
Certain amphiphilic block copolymer micelles display behavior similar to that of surfactant micelles. These are generally called dynamic micelles and are characterized by the same relaxation processes assigned to surfactant exchange and micelle scission/recombination. Although the relaxation processes are the same between the two types of micelles, the kinetics of unimer exchange are very different. While in surfactant systems the unimers leave and join the micelles through a diffusion -controlled process, for copolymers the entry rate constant is slower than that of a diffusion-controlled process. The rate of this process was found to decrease as a power law in the degree of polymerization of the hydrophobic block, with exponent 2/3. This difference is due to the coiling of the hydrophobic block of a copolymer exiting the core of a micelle. [ 13 ]
Block copolymers which form dynamic micelles are some of the tri-block poloxamers under the right conditions.
When block copolymer micelles do not display the characteristic relaxation processes of surfactant micelles, they are called kinetically frozen micelles . This can occur in two ways: when the unimers forming the micelles are not soluble in the solvent of the micelle solution, or when the core-forming blocks are glassy at the temperature at which the micelles are found. A special example in which both of these conditions are valid is that of polystyrene-b-poly(ethylene oxide). This block copolymer is characterized by the high hydrophobicity of the core-forming block, PS , which causes the unimers to be insoluble in water. Moreover, PS has a high glass transition temperature which is, depending on the molecular weight, higher than room temperature. Thanks to these two characteristics, a water solution of PS-PEO micelles of sufficiently high molecular weight can be considered kinetically frozen. This means that none of the relaxation processes, which would drive the micelle solution towards thermodynamic equilibrium, are possible. [ 14 ] Pioneering work on these micelles was done by Adi Eisenberg. [ 15 ] It was also shown how the lack of relaxation processes allowed great freedom in the possible morphologies formed. [ 16 ] [ 17 ] Moreover, the stability against dilution and vast range of morphologies of kinetically frozen micelles make them particularly interesting, for example, for the development of long-circulating drug delivery nanoparticles. [ 18 ]
In a non-polar solvent, it is the exposure of the hydrophilic head groups to the surrounding solvent that is energetically unfavourable, giving rise to a water-in-oil system. In this case, the hydrophilic groups are sequestered in the micelle core and the hydrophobic groups extend away from the center. These inverse micelles are proportionally less likely to form on increasing headgroup charge, since hydrophilic sequestration would create highly unfavorable electrostatic interactions.
It is well established that for many surfactant/solvent systems a small fraction of the inverse micelles spontaneously acquire a net charge of +q e or -q e . This charging takes place through a disproportionation/comproportionation mechanism rather than a dissociation/association mechanism and the equilibrium constant for this reaction is on the order of 10 −4 to 10 −11 , which means about every 1 in 100 to 1 in 100 000 micelles will be charged. [ 19 ]
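The quoted charged fractions follow from the disproportionation stoichiometry: for 2M ⇌ M⁺ + M⁻ with equal fractions x of positively and negatively charged micelles, K = [M⁺][M⁻]/[M]² ≈ x², so x ≈ √K. A quick sketch of that order-of-magnitude arithmetic (the square-root relation roughly reproduces the 1-in-100 to 1-in-100 000 range quoted above):

```python
import math

def charged_fraction(K):
    """Disproportionation 2M <-> M+ + M-: K ≈ x**2, so x ≈ sqrt(K)."""
    return math.sqrt(K)

for K in (1e-4, 1e-10):
    one_in = 1 / charged_fraction(K)
    print(f"K = {K:.0e}: roughly 1 in {one_in:,.0f} micelles carry each charge")
```

For K = 10⁻⁴ this gives about 1 charged micelle per 100; toward the low end of the quoted K range the fraction drops to roughly 1 per 10⁵.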
Supermicelle is a hierarchical micelle structure ( supramolecular assembly ) where individual components are also micelles. Supermicelles are formed via bottom-up chemical approaches, such as self-assembly of long cylindrical micelles into radial cross-, star- or dandelion -like patterns in a specially selected solvent; solid nanoparticles may be added to the solution to act as nucleation centers and form the central core of the supermicelle. The stems of the primary cylindrical micelles are composed of various block copolymers connected by strong covalent bonds ; within the supermicelle structure they are loosely held together by hydrogen bonds , electrostatic or solvophobic interactions. [ 20 ] [ 21 ]
When surfactants are present above the critical micelle concentration (CMC), they can act as emulsifiers that will allow a compound that is normally insoluble (in the solvent being used) to dissolve. This occurs because the insoluble species can be incorporated into the micelle core, which is itself solubilized in the bulk solvent by virtue of the head groups' favorable interactions with solvent species. The most common example of this phenomenon is detergents , which clean poorly soluble lipophilic material (such as oils and waxes) that cannot be removed by water alone. Detergents also clean by lowering the surface tension of water, making it easier to remove material from a surface. The emulsifying property of surfactants is also the basis for emulsion polymerization .
Micelles may also have important roles in chemical reactions. Micellar chemistry uses the interior of micelles to harbor chemical reactions, which in some cases can make multi-step chemical synthesis more feasible. [ 22 ] [ 23 ] Doing so can increase reaction yield, create conditions more favorable to specific reaction products (e.g. hydrophobic molecules), and reduce required solvents, side products, and required conditions (e.g. extreme pH). Because of these benefits, micellar chemistry is considered a form of green chemistry . [ 24 ] However, micelle formation may also inhibit chemical reactions, such as when reacting molecules form micelles that shield a molecular component vulnerable to oxidation. [ 25 ]
The use of cationic micelles of cetrimonium chloride , benzethonium chloride , and cetylpyridinium chloride can accelerate chemical reactions between negatively charged compounds (such as DNA or Coenzyme A ) in an aqueous environment up to 5 million times. [ 26 ] Unlike conventional micellar catalysis, [ 27 ] the reactions occur solely on the charged micelles' surface.
Micelle formation is essential for the absorption of fat-soluble vitamins and complicated lipids within the human body. Bile salts formed in the liver and secreted by the gall bladder allow micelles of fatty acids to form. This allows the absorption of complicated lipids (e.g., lecithin) and lipid-soluble vitamins (A, D, E, and K) within the micelle by the small intestine.
During the process of milk-clotting, proteases act on the soluble portion of caseins , κ-casein , thus originating an unstable micellar state that results in clot formation.
Micelles, like gold nanoparticles, can also be used for targeted drug delivery . [ 28 ] | https://en.wikipedia.org/wiki/Micelle |
In mathematics, Michael's theorem gives sufficient conditions for a regular topological space (in fact, for a T 1 -space ) to be paracompact .
A family E i {\displaystyle E_{i}} of subsets of a topological space is said to be closure-preserving if for every subfamily E i j {\displaystyle E_{i_{j}}} , the union of the closures equals the closure of the union: {\displaystyle \bigcup _{j}{\overline {E_{i_{j}}}}={\overline {\bigcup _{j}E_{i_{j}}}}} .
For example, a locally finite family of subsets has this property. With this terminology, the theorem states: [ 1 ]
Theorem — Let X {\displaystyle X} be a regular-Hausdorff topological space. Then the following are equivalent.
Frequently, the theorem is stated in the following form:
Corollary — [ 2 ] A regular-Hausdorff topological space is paracompact if and only if each open cover has a refinement that is a countable union of locally finite families of open sets.
In particular, a regular-Hausdorff Lindelöf space is paracompact. The proof of the theorem uses the following result which does not need regularity:
Proposition — [ 3 ] Let X be a T 1 -space . If X satisfies property 3 in the theorem, then X is paracompact.
The proof of the proposition uses the following general lemma:
Lemma — [ 4 ] Let X be a topological space. If each open cover of X admits a locally finite closed refinement, then it is paracompact. Also, each open cover that is a countable union of locally finite sets has a locally finite refinement, not necessarily open.
| https://en.wikipedia.org/wiki/Michael's_theorem_on_paracompact_spaces |
The Michael Brin Prize in Dynamical Systems , abbreviated as the Brin Prize , is awarded to mathematicians who have made outstanding advances in the field of dynamical systems and are within 14 years of their PhD. [ 1 ] The prize is endowed by and named after Michael Brin, [ 1 ] whose son Sergey Brin [ 2 ] is a co-founder of Google . Michael Brin is a retired mathematician at the University of Maryland and a specialist in dynamical systems. [ 3 ]
The first prize was awarded in 2008; between 2009 and 2017 it was awarded biennially, and since 2017 it has been awarded annually. Artur Avila , the 2011 awardee, went on to win the Fields Medal in 2014. [ 4 ]
Since 2016, a Brin Prize for young mathematicians has also been awarded, given to mathematicians within 4 years of their PhD. | https://en.wikipedia.org/wiki/Michael_Brin_Prize_in_Dynamical_Systems |
Michael Bühl is a professor of Computational and Theoretical Chemistry in the School of Chemistry, University of St. Andrews . He has published work on the performance of various density functionals , [ 1 ] modelling thermal and medium effects, [ 2 ] [ 3 ] transition-metal NMR of metalloenzymes, [ 4 ] modelling of homogeneous catalysis , [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] and molecular dynamics of transition metal complexes . [ 12 ]
Bühl was born in 1962. [ 13 ] He earned his PhD at the University of Erlangen-Nuremberg 's Institute for Organic Chemistry (Institut für organische Chemie), where his thesis advisor was Paul von Ragué Schleyer . In 1992, he worked as a post-doctoral researcher with Henry F. Schaefer III ( University of Georgia ). He was an Oberassistent at the Institute of Organic Chemistry, University of Zürich between 1993 and 1999. In 1999, he also worked at Max-Planck-Institut für Kohlenforschung , Mülheim . He was on the faculty at the University of Zürich from 1998 to 2000 and then at University of Wuppertal from 2000 to 2008. He has been Chair of Computational Chemistry at the University of St. Andrews since 2008.
Bühl's group applies the tools of computational quantum chemistry to study a variety of chemical and biochemical systems and their properties, focussing on transition-metal and f-element chemistry, homogeneous and bio-catalysis , and NMR properties. The methods employed are mostly rooted in density-functional theory (DFT), including quantum-mechanical / molecular-mechanical ( QM/MM ) calculations and first-principles molecular dynamics simulations. [ 14 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Michael_Bühl |
Michael Christopher Wendl is a mathematician and biomedical engineer who has worked on DNA sequencing theory , [ 4 ] covering and matching problems in probability, and theoretical fluid mechanics, and who co-wrote Phred . [ 5 ] He was a scientist on the Human Genome Project and has done bioinformatics and biostatistics work in cancer. Wendl is of ethnic German heritage and is the son of the aerospace engineer Michael J. Wendl . [ 6 ]
The problem of low Reynolds number flow in the gap between 2 infinite cylinders, so-called Couette flow , was solved in 1845 by Stokes . [ 7 ] Wendl reported the generalization of this solution for finite-length cylinders, [ 3 ] [ 8 ] which can actually be built for experimental work, in 1999, as a series of modified Bessel functions I 1 {\displaystyle I_{1}} and K 1 {\displaystyle K_{1}} . He also examined a variety of other low Reynolds number rotational devices and shear-driven devices, including a general form of the unsteady disk flow problem, for which the velocity profile is: [ 9 ]
u ( r , z , t ) = ℜ ( e i σ t [ r ⋅ sinh ( 1 + i ) β z sinh ( 1 + i ) β ϕ ] + 2 ϕ ∑ j = 1 ∞ ( − 1 ) j ⋅ α j 2 ⋅ sin ( α j z ) ⋅ I 1 ( i R σ + α j 2 r ) ( i R σ + α j 2 ) ⋅ I 1 ( i R σ + α j 2 ) ) {\displaystyle u(r,z,t)=\Re \left(e^{i\sigma t}\left[{\frac {r\cdot \sinh(1+i)\beta z}{\sinh(1+i)\beta \phi }}\right]+{\frac {2}{\phi }}\sum _{j=1}^{\infty }{\frac {(-1)^{j}\cdot \alpha _{j}^{2}\cdot \sin(\alpha _{j}z)\cdot I_{1}\left({\sqrt {iR\sigma +\alpha _{j}^{2}}}r\right)}{(iR\sigma +\alpha _{j}^{2})\cdot I_{1}\left({\sqrt {iR\sigma +\alpha _{j}^{2}}}\right)}}\right)}
where σ {\displaystyle \sigma } , R {\displaystyle R} , β {\displaystyle \beta } , and ϕ {\displaystyle \phi } are physical parameters, α j {\displaystyle \alpha _{j}} are eigenvalues, and ( r , z , t ) {\displaystyle (r,z,t)} are coordinates. This result united prior-published special cases for steady flow, infinite disks, etc. [ 9 ]
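The series above can be evaluated numerically with a hand-rolled modified Bessel function I₁. This sketch simply transcribes the displayed formula under stated assumptions: the eigenvalues are taken as α_j = jπ/ϕ (the text does not specify them), and unit values are used for the physical parameters. It then checks that the profile vanishes at z = 0 and reduces to r·cos(σt) at the driven disk z = ϕ:

```python
import cmath, math

def I1(z, terms=80):
    """Modified Bessel function I1 via its power series (fine for moderate |z|)."""
    total = 0j
    term = z / 2  # k = 0 term: (z/2)^1 / (0! * 1!)
    for k in range(terms):
        total += term
        term *= (z / 2) ** 2 / ((k + 1) * (k + 2))
    return total

def u(r, z, t, R=1.0, sigma=1.0, beta=1.0, phi=1.0, modes=10):
    """Transcription of the displayed series; alpha_j = j*pi/phi is an assumption."""
    drive = cmath.exp(1j * sigma * t) * r * cmath.sinh((1 + 1j) * beta * z) \
        / cmath.sinh((1 + 1j) * beta * phi)
    series = 0j
    for j in range(1, modes + 1):
        a = j * math.pi / phi
        q = cmath.sqrt(1j * R * sigma + a * a)
        series += (-1) ** j * a * a * math.sin(a * z) * I1(q * r) \
            / ((1j * R * sigma + a * a) * I1(q))
    return (drive + (2 / phi) * series).real

# Boundary checks: u vanishes at z = 0 and matches r*cos(sigma*t) at z = phi.
print(u(0.5, 0.0, 0.3), u(0.5, 1.0, 0.3) - 0.5 * math.cos(0.3))
```

With the assumed eigenvalues, every series term carries a factor sin(α_j z), so both boundary values follow term by term from the hyperbolic-sine prefactor alone.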
Wendl examined a number of matching and covering problems in combinatorial probability, especially as these problems apply to molecular biology. He determined the distribution of match counts of pairs of integer multisets in terms of Bell polynomials , [ 10 ] a problem directly relevant to physical mapping of DNA . Prior to this, investigators had used a number of ad-hoc quantifiers, like the Sulston score , which idealized match trials as being independent. His result for the multiple-group birthday proposition [ 11 ] solves various related "collision problems", e.g. some types of P2P searching . [ 12 ] He has also examined a variety of 1-dimensional covering problems (see review by Cyril Domb [ 13 ] ), generalizing the basic configuration to forms relevant to molecular biology. [ 14 ] [ 15 ] His covering investigation of rare DNA variants with Richard K. Wilson [ 16 ] played a role in designing the 1000 Genomes Project . [ 17 ]
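The "multiple-group birthday" flavor of collision problem can be illustrated with an exact two-group computation. This generic sketch is mine, not Wendl's published formula: condition on the number of distinct birthdays occupied by the first group, then require every member of the second group to avoid them:

```python
def inter_group_match_prob(m, n, d=365):
    """Exact P(someone in a group of m shares a birthday with a group of n),
    all birthdays uniform over d days, computed by conditioning on the
    number of distinct days occupied by the first group."""
    p = {0: 1.0}  # distribution of distinct-day count after 0 draws
    for _ in range(m):
        q = {}
        for k, pk in p.items():
            q[k] = q.get(k, 0.0) + pk * k / d              # repeat an occupied day
            q[k + 1] = q.get(k + 1, 0.0) + pk * (d - k) / d  # hit a new day
        p = q
    # The second group must avoid all k occupied days.
    no_match = sum(pk * ((d - k) / d) ** n for k, pk in p.items())
    return 1.0 - no_match

print(inter_group_match_prob(1, 1))  # ≈ 1/365
```

The recurrence sidesteps the false independence assumption mentioned above: cross-group match events share the first group's birthday set, so they are only conditionally independent given the distinct-day count.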
Wendl co-wrote Phred , a widely used DNA trace analyzer that converted raw output stream of early DNA sequence machines to sequence strings. [ 18 ] [ 19 ] He has contributed extensively to biostatistical analysis of cancer studies [ 20 ] [ 21 ] and to the bioinformatics toolbase, [ 22 ] collaborating frequently with Li Ding , Elaine Mardis , and Richard K. Wilson .
Wendl is of ethnic German heritage, originating from the Banat region of the old Austro-Hungarian Empire , and is a historian of Danube-Swabian folk music . [ 23 ] He is the son of the aerospace engineer Michael J. Wendl . [ 6 ] He is married to the former Pamela Bjerkness of Chicago. [ 24 ] | https://en.wikipedia.org/wiki/Michael_Christopher_Wendl |
Sir Michael Anthony Eardley Dummett FBA ( / ˈ d ʌ m ɪ t / ; 27 June 1925 – 27 December 2011) was an English academic described as "among the most significant British philosophers of the last century and a leading campaigner for racial tolerance and equality ." [ 3 ] He was, until 1992, Wykeham Professor of Logic at the University of Oxford . He wrote on the history of analytic philosophy , notably as an interpreter of Frege , and made original contributions particularly in the philosophies of mathematics , logic , language and metaphysics .
He was known for his work on truth and meaning and their implications to debates between realism and anti-realism , a term he helped to popularize. In mathematical logic , he developed an intermediate logic , a logical system intermediate between classical logic and intuitionistic logic that had already been studied by Kurt Gödel : the Gödel–Dummett logic . In voting theory , he devised the Quota Borda system of proportional voting, based on the Borda count , and conjectured the Gibbard–Satterthwaite theorem together with Robin Farquharson ; he also devised the condition of proportionality for solid coalitions . Besides his main work in analytic philosophy , he also wrote extensively on the history of card games , particularly on tarot card games .
He was married to the political activist Ann Dummett from 1951 until his death in 2011.
Born 27 June 1925 at his parents' house, 56, York Terrace , Marylebone , London, Dummett was the son of George Herbert Dummett (1880 – 12 November 1969), later of Shepherd's Cottage, Curridge , Berkshire, a silk merchant and rayon dealer, and Mabel Iris (1893–1980), daughter of the civil servant and conservationist Sir Sainthill Eardley-Wilmot (himself grandson of the politician Sir John Eardley-Wilmot, 1st Baronet ). [ 4 ] [ 5 ] [ 6 ] He studied at Sandroyd School in Wiltshire , at Winchester College as a scholar, and at Christ Church, Oxford , which awarded him a major scholarship in 1943. He was called up for military service that year and served until 1947, first as a private in the Royal Artillery , then in the Intelligence Corps in India and Malaya. In 1950 he graduated with a first in Politics, Philosophy and Economics from Oxford and was elected a Prize Fellow of All Souls College, Oxford . [ 7 ] [ 8 ]
Dummett was a research fellow at All Souls College, Oxford until 1979, and also Reader in Philosophy of Mathematics at Oxford University from 1962 to 1974. In 1979, he became Wykeham Professor of Logic at Oxford, a post he held until retiring in 1992. During his term as Wykeham Professor, he held a Fellowship at New College, Oxford . He has also held teaching posts at Birmingham University , UC Berkeley , Stanford University , Princeton University , and Harvard University . He won the Rolf Schock prize in 1995, [ 9 ] and was knighted in 1999. He was the 2010 winner of the Lauener Prize for an Outstanding Œuvre in Analytical Philosophy. [ 10 ]
During his career at Oxford, Dummett supervised many philosophers who went on to distinguished careers, including Peter Carruthers , Adrian Moore , Ian Rumfitt , and Crispin Wright .
Dummett's work on the German philosopher Frege has been acclaimed. His first book Frege: Philosophy of Language (1973), written over many years, is seen as a classic. It was instrumental in the rediscovery of Frege's work, and influenced a generation of British philosophers.
In his 1963 paper "Realism", he popularised a controversial approach to understanding the historical dispute between realist and various non-realist philosophies, such as idealism , nominalism , and irrealism . [ 11 ] He classed all the latter as anti-realist and argued that the fundamental disagreement between realist and anti-realist was over the nature of truth.
For Dummett, realism is best understood as semantic realism , i.e. the view that every declarative sentence in one's language is bivalent (determinately true or false) and evidence-transcendent (independent of our means of coming to know which), [ 12 ] [ 2 ] while anti-realism rejects this view in favour of a concept of knowable (or assertible) truth. [ 13 ] Historically, these debates had been understood as disagreements about whether a certain type of entity objectively exists or not. Thus we may speak of realism or anti-realism with respect to other minds, the past, the future, universals, mathematical entities (such as natural numbers ), moral categories, the material world, or even thought. The novelty of Dummett's approach consisted in seeing these disputes as at base analogous to the dispute between intuitionism and Platonism in the philosophy of mathematics .
Dummett espoused semantic anti-realism , a position suggesting that truth cannot serve as the central notion in the theory of meaning and must be replaced by verifiability . [ 14 ] Semantic anti-realism is sometimes related to semantic inferentialism . [ 15 ]
Dummett was politically active, through his work as a campaigner against racism. He let his philosophical career stall in order to influence civil rights for minorities during what he saw as a crucial period of reform in the late 1960s. He also worked on the theory of voting , which led to his introduction of the Quota Borda system .
Dummett drew heavily on his work in this area in writing his book On Immigration and Refugees , an account of what justice demands of states in relation to movement between states . In that book, Dummett argues that the vast majority of opposition to immigration has been founded on racism, and says that this has especially been so in the UK. He argued in favour of open borders and mass migration, except when states were "under special threat" and could therefore refuse entry.
In 1954, in Germany, Dummett studied what had survived of Frege's Nachlass . [ 16 ] [ 17 ] He later recounted how he had been deeply shocked to discover from diary fragments that the man he had "revered" as "an absolutely rational man" was, at the end of his life, a "virulent" anti-Semite of "extreme right-wing opinions". [ 18 ] [ 16 ]
In 1955–1956, while in Berkeley, California , Dummett and his wife joined the NAACP . In June 1956 he met Martin Luther King Jr. while visiting San Francisco, and heard from him that Alistair Cooke was providing the British public with what King called "biased and hostile reports" of the Civil Rights Movement , and specifically of the Montgomery bus boycott . Dummett travelled to Montgomery and wrote his own account. However, The Guardian refused to publish Dummett's article refuting Cooke's version of the Montgomery events, even in a shortened account as a Letter to the Editor; the BBC likewise refused to publish it. [ 19 ]
Dummett and Robin Farquharson published influential articles on the theory of voting, in particular conjecturing that deterministic voting rules with more than three issues faced endemic strategic voting . [ 20 ] The Dummett–Farquharson conjecture was proved by Allan Gibbard , [ 21 ] a philosopher and former student of Kenneth J. Arrow and John Rawls , and by the economist Mark A. Satterthwaite. [ 22 ]
After the establishment of the Farquharson–Dummett conjecture by Gibbard and Satterthwaite, Dummett contributed three proofs of the Gibbard–Satterthwaite theorem in a monograph on voting. He also wrote a shorter overview of the theory of voting, for the educated public. [ citation needed ]
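Dummett's Quota Borda system builds on the Borda count, in which each ballot awards its i-th choice (0-based) among c candidates c − 1 − i points. A minimal sketch of the plain Borda count (not QBS itself, which adds quota rules for solid coalitions) shows how it rewards broadly ranked candidates:

```python
def borda_winner(ballots):
    """Classic Borda count: with c candidates, a ballot awards its i-th
    choice (0-based) c - 1 - i points; the highest total wins."""
    scores = {}
    for ranking in ballots:
        c = len(ranking)
        for i, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (c - 1 - i)
    return max(scores, key=scores.get), scores

# 3 voters rank A > B > C, 2 voters rank B > C > A: the plurality winner is A,
# but broad second-place support makes B the Borda winner.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
print(borda_winner(ballots))  # → ('B', {'A': 6, 'B': 7, 'C': 2})
```

The example also hints at why strategic voting matters here: the A-voters could bury B below C on their ballots to flip the outcome, the kind of manipulation the Dummett–Farquharson conjecture showed to be endemic.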
Dummett was a scholar in the field of card-game history, with numerous books and articles to his credit. He was a founding member of the International Playing-Card Society , in whose journal The Playing-Card he regularly published opinions, research and reviews of current literature on the subject; he was also a founder of the Accademia del Tarocchino Bolognese in Bologna . His historical work on the use of the tarot pack in card games , The Game of Tarot: From Ferrara to Salt Lake City , attempted to establish that the invention of Tarot could be set in 15th-century Italy . He laid the foundation for most subsequent research on the game of tarot , including exhaustive accounts of the rules of all hitherto known forms of the game. Sylvia Mann goes as far as to say that The Game of Tarot "is the most important book on cards ever written." [ 23 ]
Dummett's analysis of the historical evidence suggested that fortune-telling and occult interpretations were unknown before the 18th century. During most of their recorded history, he wrote, Tarot cards were used to play a popular trick-taking game which is still enjoyed in much of Europe. Dummett showed that the middle of the 18th century saw a great development in the game of Tarot, including a modernized deck with French suit-signs, and without the medieval allegories that interest occultists. This coincided with a growth in Tarot's popularity. "The hundred years between about 1730 and 1830 were the heyday of the game of Tarot; it was played not only in northern Italy , eastern France , Switzerland , Germany and Austro-Hungary , but also in Belgium , the Netherlands , Denmark , Sweden and even Russia . Not only was it, in these areas, a famous game with many devotees: it was also, during that period, more truly an international game than it had ever been before or than it has ever been since...." [ 24 ]
In 1987, Dummett collaborated with Giordano Berti and Andrea Vitali on the project of a great Tarot exhibition at Castello Estense in Ferrara . On that occasion he wrote some texts for the catalogue of the exhibition. [ 25 ]
In 1944, Dummett was received into the Roman Catholic Church and remained a practising Catholic. Throughout his career, Dummett published articles on various issues then facing the Catholic Church, mainly in the English Dominican journal New Blackfriars . Dummett published an essay in the bulletin of the Adoremus Society on the subject of liturgy, [ 26 ] and a philosophical essay defending the intelligibility of the Catholic Church's teaching on the Eucharist . [ 27 ]
In October 1987, one of his contributions to New Blackfriars sparked controversy by seemingly attacking currents of Catholic theology that appeared to him to diverge from orthodox Catholicism and "imply that, from the very earliest times, the Catholic Church, claiming to have a mission from God to safeguard divinely revealed truth, has taught and insisted on the acceptance of falsehoods." [ 28 ] Dummett argued that "the divergence which now obtains between what the Catholic Church purports to believe and what large or important sections of it in fact believe ought, in my view, to be tolerated no longer: not if there is to be a rationale for belonging to that Church; not if there is to be any hope of reunion with the other half of Christendom; not if the Catholic Church is not to be a laughing-stock in the eyes of the world." [ 28 ] A debate on these remarks continued for months, with the theologian Nicholas Lash [ 29 ] and the historian Eamon Duffy among the contributors. [ 30 ]
Dummett retired in 1992 and was knighted in 1999 for "services to philosophy and to racial justice". He received the Lakatos Award in the philosophy of science in 1994 and the Rolf Schock Prize for logic and philosophy in 1995. He was elected Fellow of the British Academy in 1968, resigned in 1984, and was re-elected in 1995. [ 6 ]
Dummett died on 27 December 2011 aged 86, leaving his wife Ann (married in 1951, died in 2012) and three sons and two daughters. A son and a daughter predeceased them. [ 31 ] He is buried at Wolvercote Cemetery , Oxford. [ 6 ]
Notable articles and exhibition catalogues include "Tarot Triumphant: Tracing the Tarot" in FMR , ( Franco Maria Ricci International ), January/February 1985; Pattern Sheets published by the International Playing Card Society ; with Giordano Berti and Andrea Vitali, the catalogue Tarocchi: Gioco e magia alla Corte degli Estensi (Bologna, Nuova Alfa Editorale, 1987).
For more complete publication details see the "Bibliography of the Writings of Michael Dummett" in R. E. Auxier and L. E. Hahn (eds.) The Philosophy of Michael Dummett (2007). | https://en.wikipedia.org/wiki/Michael_Dummett |
Member of the National Academy of Sciences; Ruth Kirstein Award; Carl Branden Award; Emily M. Gray Award; American Society for Biochemistry and Molecular Biology Award for Exemplary Contributions to Education; Mentor Award of the American Association for the Advancement of Science; American Society for Microbiology Hinton Award for Mentoring
Michael F. Summers is the Robert E. Meyerhoff Chair for Excellence in Research and Mentoring and a distinguished professor of chemistry and biochemistry at the University of Maryland, Baltimore County . [ 1 ] [ 2 ] He serves as editor-in-chief of the Journal of Molecular Biology . [ 3 ] He has been an HHMI Investigator since 1994 and a member of the National Academy of Sciences since 2016. [ 4 ] [ 1 ]
Michael F. Summers earned his A.A. degree from St. Petersburg Junior College in 1978, and then a B.S. in chemistry from the University of West Florida in 1980. He then earned his Ph.D. in Bioinorganic Chemistry from Emory University in 1984. [ 4 ]
From 1984 to 1987, he was a postdoctoral fellow at the NIH under Dr. Adrian Bax . [ 2 ] [ 4 ] [ 5 ] He has been a member of the UMBC faculty since 1987.
His career has focused on using structural approaches, particularly NMR spectroscopy, to study the protein, RNA, and macromolecular interactions involved in HIV-1 genome packaging and virus assembly. [ 6 ] [ 4 ] He has also been a major proponent of retaining minority students in the sciences through undergraduate involvement in research and through the Meyerhoff Scholars Program . [ 6 ] [ 7 ] [ 8 ] With HHMI, he is also involved in adapting the Meyerhoff Scholars Program at other schools, such as Penn State and UNC . [ 9 ]
Michael Grieves is an expert in product life-cycle management (PLM). His work focuses on virtual product development, including digital twins , engineering, systems engineering, complex systems, manufacturing (especially additive manufacturing), and operational sustainment. [ 1 ] [ 2 ] [ 3 ] [ 4 ] He has published on digital twins and related topics, [ 5 ] and was one of the early advocates of the approach. [ 2 ] [ 3 ] [ 6 ]
Grieves earned his B.S. Computer Engineering from Michigan State University , an MBA from Oakland University , and his doctorate from Case Western Reserve University . [ 1 ]
Grieves was executive director and chief scientist for the Digital Twin Institute [ 1 ] and he has been on boards of several public companies in the United States, Japan, and China, [ 1 ] such as Longhai Steel Inc. [ 7 ]
At present he serves as chief scientist of advanced manufacturing and executive vice president of operations at the Florida Institute of Technology . [ 8 ] | https://en.wikipedia.org/wiki/Michael_Grieves |
Michael O’Keeffe (born April 3, 1934) is a British-American chemist. He is currently Regents’ Professor Emeritus in the School of Molecular Sciences at Arizona State University . As a scientist, he is particularly known for his contributions to the field of reticular chemistry . In 2019, he received the Gregori Aminoff Prize in Crystallography from the Royal Swedish Academy of Sciences .
Michael O’Keeffe was born in Bury St Edmunds , Suffolk, England , on 3 April 1934. He was one of four children of Dr. E. Joseph O’Keeffe, an immigrant from Ireland, and Marjorie G. O’Keeffe (née Marten). From 1942 to 1951 he attended Prior Park College (Bath) and from 1951 to 1957 the University of Bristol , receiving a B.Sc. in chemistry (1954) and a Ph.D. (1958, mentor Frank S. Stone ). He spent 1958–1959 at Philips Natuurkundig Laboratorium (group of Evert W. Gorter ), then did postdoctoral research at Indiana University (mentor Walter J. Moore ), 1960–62. He subsequently became a U.S. citizen.
In 1963, he joined Arizona State University , where he is now Regents’ Professor of Chemistry. His early research was devoted to the study of defects , conductivity and diffusion in solids, particularly solid electrolytes . His more recent research is devoted to the theory of periodic structures relevant to the development of a taxonomy of such structures and its application to materials design and description: nets ( periodic graphs ), tilings ( periodic tessellations ), and weavings (a higher-dimensional version of braid theory ). In collaboration with Omar Yaghi , O’Keeffe developed reticular chemistry , a new branch of chemistry that links molecular fragments of well-defined shapes with strong bonds to build symmetrical open structures such as metal-organic frameworks (MOFs), zeolitic imidazolate frameworks (ZIFs), and covalent organic frameworks (COFs). Together with Olaf Delgado-Friedrichs , he has developed the Reticular Chemistry Structure Resource (RCSR), [ 1 ] a compendium of structures relevant to the design of materials on the molecular level. O'Keeffe has published three books, including one of the standard monographs on periodic structures, [ 2 ] and more than 300 refereed papers. His work is highly cited (over 100,000 citations and an h -index over 100), and he was third in the Clarivate Highly Cited Researchers list of the Top 100 Chemists, 2000–2010. [ 3 ]
Among his honors are: the 2019 Gregori Aminoff Prize ( Royal Swedish Academy ); [ 4 ] [ 5 ] the Bernal Distinguished Lecturer, University of Limerick , 2017; the World Class Professorship, KAIST , Korea, 2013; Newcomb Cleveland Prize from the American Association for the Advancement of Science in 2007; Regents' Professor , Arizona State University 1994; and D. Sc. “for excellence in published research” University of Bristol , 1976. | https://en.wikipedia.org/wiki/Michael_O'Keeffe_(chemist) |
Michael Peter Barnett (24 March 1929 – 13 March 2012) was a British theoretical chemist and computer scientist . [ 1 ] He developed mathematical and computer techniques for quantum chemical problems, and some of the earliest software for several other kinds of computer application. After his early days in London, Essex and Lancashire, he went to King's College, London , in 1945, the Royal Radar Establishment in Malvern in 1953, IBM United Kingdom in 1955, the University of Wisconsin Department of Chemistry in 1957, and the Massachusetts Institute of Technology Solid State and Molecular Theory Group in 1958.
At MIT he was an associate professor of physics and director of the Cooperative Computing Laboratory . He returned to England, to the Institute of Computer Science of the University of London in 1964, and then back to United States the following year. He worked in industry, and taught at Columbia University 1975–77 and Brooklyn College, City University of New York , 1977–96, retiring as an emeritus professor. After retirement he focused on symbolic calculation in quantum chemistry and nuclear magnetic resonance .
Barnett spent most of the World War II years near Fleetwood in Lancashire. He attended Baines' Grammar School in Poulton-le-Fylde , then went to King's College, London in 1945, where he received a BSc in chemistry in 1948 and, in 1952, a PhD for work in the theoretical physics department with Charles Coulson , which he continued on a one-year post-doctoral fellowship. His assigned project was to determine if electrostatic forces could account for the energy needed to make two parts of an ethane molecule rotate around the bond that joins them. [ 2 ]
This work required the evaluation of certain mathematical objects – molecular integrals over Slater orbitals . Barnett extended some earlier work by Charles Coulson [ 3 ] by discovering some recurrence formulas , [ 2 ] [ 4 ] [ 5 ] that are part of a method of analysis and computation frequently referred to as the Barnett-Coulson expansion. [ 6 ] [ 7 ] Molecular integrals remain a significant problem in quantum chemistry [ 8 ] and continued to be one of Barnett's main interests. [ 9 ]
Two years after Barnett started this work, he was invited to be one of the twenty-five participants in a conference that was organised by Robert Mulliken , sponsored by the National Academy of Sciences and known, from its venue, as the Shelter Island Conference on Quantum Mechanics in Valence Theory . [ 10 ] [ 11 ] Barnett's attendance was enabled by the British Rayon Research Association , which supported his post-graduate work. [ 12 ]
At the Royal Radar Establishment , Barnett held a Senior Government Fellowship. He worked on aspects of theoretical solid state physics , that included the properties of organic semiconductors . [ 13 ] As part of his work at IBM United Kingdom, he directed an IBM model 650 computer centre. He directed and participated in numerous projects that included (1) calculating DNA structures from crystallographic data, [ 14 ] and (2) simulations to plan the location and operation of dams and reservoirs on the River Nile , working with Humphry Morrice, the hydrological advisor to the Government of the Sudan, and his predecessor, Nimmo Allen. [ 15 ] [ 16 ]
In 1957, Barnett accepted an invitation from Joseph Hirschfelder , [ 17 ] in the Chemistry department of the University of Wisconsin–Madison , to work on mathematical theories of combustion and detonation . [ 18 ]
In 1958, John Clarke Slater invited Barnett to join his Solid State and Molecular Theory Group . He was made an associate professor of physics in 1960 and, in 1962, set up an IBM 709 installation, the Cooperative Computing Laboratory (CCL). This supported heavy computations by several groups at MIT. [ 19 ] The SSMTG used much of the time for molecular and solid state research, attracting many post-doctoral workers from the United Kingdom and Canada. [ 20 ]
The calculations of quantum chemistry involve approximate solutions of the Schrödinger equation. Many methods for computing these require molecular integrals defined for systems of 2, 3 and 4 atoms. The 4-atom (or 4-centre) integrals are by far the most difficult. By extending the methods of his PhD papers, Barnett developed a detailed methodology for evaluating all of these integrals. [ 21 ] These were coded in FORTRAN , in software that was available to the IBM mainframe community through the SHARE organisation. [ 20 ] Members of the SSMTG who developed and used these programs included Donald Ellis, [ 22 ] Russell Pitzer and Donald Merrifield .
In 1960, Barnett started to extend a technique he had learned from Frank Boys to program a computer to construct coded mathematical formulas. [ 23 ] He needed a way to typeset these. A Photon machine, equipped with paper, provided an immediate solution. Barnett developed software to typeset computer output, and applied this to documents containing mathematical formulas and to a wide range of other typesetting problems. He produced books for the MIT Libraries , [ 24 ] and, with Imre Izsák , the Smithsonian Astrophysical Observatory . [ 25 ] The work of his team and the parallel work of other groups through 1964 is described in his monograph. [ 26 ]
Barnett also began to develop his ideas on cognitive modelling, as a member of Frank Schmitt 's seminar on biological memory. [ 27 ] He wrote on river simulation [ 28 ] as a member of the Harvard Water Resources seminar. He, John Iliffe , Robert Futrelle, [ 29 ] Paul Fehder, George Coulouris and other members of the CCL worked on parsing , [ 30 ] text processing (the precursor of word processing ), [ 31 ] programming language constructs, [ 32 ] scientific visualisation , [ 33 ] and further topics that melded into the computer science of later years.
In 1963, Barnett accepted an appointment as reader in information processing at the Institute of Computer Science in the University of London , [ 34 ] and, while he was still at Massachusetts Institute of Technology, the Department of Scientific and Industrial Research (DSIR) awarded him a grant, to be taken up in London, to continue his work on computer typesetting, that was publicised by the director, Richard A. Buckingham . [ 35 ] His return received further publicity as a "reverse brain drain". [ 36 ] [ 37 ] He worked extensively with printing trade union officials and the staff of training colleges, to provide understanding of the new methods and their potential (pages 208–218 of his book). [ 26 ] His concern with social aspects of technological innovation is noted in a detailed book review. [ 38 ] He served on the Information Committee of the DSIR . [ 39 ]
Asked about university research in England, in a BBC interview on his arrival in 1964, he said "the trouble was deeper than money ... Frustration is caused by concentration of power in the hands of a few." [ 40 ] His concern about entrepreneurial activity in academe deepened (Section 10.6 of his book [ 26 ] ).
After a year at the Institute of Computer Science, Barnett went back to the US. He joined the newly formed Graphic Systems Division of RCA , to create software for commercial computer typesetting. RCA acquired the US rights to the Digiset machine of Rudolf Hell and marketed an adaptation as the Videocomp. About 50 were sold. [ 41 ] Barnett designed the algorithmic markup language PAGE-1 to express complicated formats in full page composition. [ 42 ] This was used for a wide range of typeset products that included, over the years, the Social Sciences Index of the H. W. Wilson Company and several other publications excerpted in a later review paper. [ 43 ] The application to database publishing led Barnett to devise and implement a programming language, which he called SNAP, to express file handling operations as sequences of grammatical English sentences. [ 44 ]
In 1969, Barnett joined the H. W. Wilson Company , a publisher of bibliographic tools for libraries, to automate the production of these. He designed and introduced the system that was used to produce the Social Sciences Index for about 10 years. He had also started to teach courses on library automation at the Columbia School of Library Service. [ 45 ] He joined the Columbia faculty full-time in 1975.
In 1977, Barnett moved to the Department of Computer and Information Science at Brooklyn College of the City University of New York , retiring as professor emeritus in 1996. Whilst at CUNY, he directed a major NSF-funded project to develop computer-generated printed matter for undergraduate teaching. [ 43 ] He wrote software that incorporated pictures in documents that were typeset using PAGE-1. [ 43  ] He wrote several books with his three teenage children, Gabrielle, Simon and Graham, aimed at the home market. These dealt with the production of computer graphics on early personal computers, including the Commodore 64 , [ 46 ] the Apple II , [ 47 ] and the IBM PC , [ 48 ] and the use of elementary algorithms . [ 49 ]
In 1989, Barnett started to spend part of his time as a visiting scientist at the John von Neumann National Supercomputer Center, [ 50 ] [ 51 ] located on the outskirts of Princeton and run by a consortium of universities. He restarted work on molecular integrals, using the power of the supercomputer to go beyond the possibilities of the 1960s. After his retirement from CUNY, he continued to explore applications of symbolic calculation to molecular integrals, nuclear magnetic resonance , and other topics. [ 52 ] | https://en.wikipedia.org/wiki/Michael_P._Barnett |
Michael Spivey (commonly known as Mike Spivey ) is a British computer scientist at the University of Oxford .
Spivey was born in 1960 and educated at Archbishop Holgate's Grammar School in York , England. He studied mathematics at Christ's College, Cambridge and then undertook a DPhil in computer science on the Z notation at Wolfson College, Oxford and the Programming Research Group , part of the Oxford University Computing Laboratory.
Mike Spivey is a University Lecturer in Computation at the Oxford University Department of Computer Science and Misys and Anderson Fellow of Computer Science at Oriel College, Oxford . [ 1 ] His main areas of research interest are compilers and programming languages , especially logic programming . He wrote an Oberon-2 compiler. [ 2 ]
Michael Stifel or Styfel (1487 – April 19, 1567) was a German monk, Protestant reformer and mathematician . He was an Augustinian who became an early supporter of Martin Luther . He was later appointed professor of mathematics at Jena University .
Stifel was born in Esslingen am Neckar in southern Germany. He joined the Order of Saint Augustine and was ordained a priest in 1511. Tensions in the abbey grew after he published the poem Von der Christförmigen, rechtgegründeten leer Doctoris Martini Luthers (1522, i.e. On the Christian, righteous doctrine of Doctor Martin Luther ) and came into conflict with Thomas Murner . Stifel then left for Frankfurt , and soon went to Mansfeld , where he began his mathematical studies. In 1524, upon a recommendation by Luther, Stifel was called by the Jörger family to serve at their residence, Tollet Castle in Tollet (close to Grieskirchen , Upper Austria ). [ 1 ] Due to the tense situation in the Archduchy of Austria in the wake of the execution of Leonhard Kaiser in Schärding , Stifel returned to Wittenberg in 1527. At this time Stifel started writing a book collecting letter transcripts of Martin Luther, completed in 1534. [ 2 ]
By intercession of Martin Luther, Stifel became minister in Lochau (now Annaburg). Luther also confirmed his marriage to the widow of his predecessor in the ministry. Michael Stifel was fascinated by the properties and possibilities of numbers; he studied number theory and numerology . He also practised "Wortrechnung" (word-calculation), studying the statistical properties of letters and words in the Bible (a common method at that time). In 1532, Stifel anonymously published " Ein Rechenbuchlin vom EndChrist. Apocalyps in Apocalypsim " (A Book of Arithmetic about the AntiChrist. A Revelation in the Revelation), which predicted that Judgement Day would occur and the world would end at 8 am on October 19, 1533. The German sayings "to talk a Stiefel" and "to calculate a Stiefel" (Stiefel is the German word for boot), meaning to say or calculate something along an absurd line of reasoning, can be traced back to this incident. [ 3 ] When this prediction failed, he made no further predictions.
In 1535 he became minister in Holzdorf near Wittenberg and stayed there for 12 years. He studied "Die Coss" (the first algebra book written in German) by Christoph Rudolff and Euclid's Elements in the Latin edition by Campanus of Novara . Jacob Milich supported his scientific development and encouraged him to write a comprehensive work on arithmetic and algebra. [ 4 ] [ 5 ] In 1541 he registered for mathematics at the University of Wittenberg [ 6 ] to extend his mathematical knowledge. In 1558 Stifel became the first professor of mathematics at the newly founded University of Jena . [ 7 ]
Stifel's most important work, Arithmetica integra (1544), contained important innovations in mathematical notation . It has the first use in Europe of multiplication by juxtaposition (with no symbol between the terms). He was the first to use the term " exponent ", and he also stated the following rules for calculating powers: q m q n = q m + n {\displaystyle q^{m}q^{n}=q^{m+n}} and q m q n = q m − n {\displaystyle {\tfrac {q^{m}}{q^{n}}}=q^{m-n}} . [ 8 ]
The book contains a table of integers and powers of 2 that some have considered to be an early version of a logarithmic table.
Stifel explicitly points out that multiplication and division in the (lower) geometric series can be mapped to addition and subtraction in the (upper) arithmetic series. On the following page 250, he shows examples that also use negative exponents. He also realized that carrying this out fully would create a great deal of work; he wrote that marvelous books could be written on this subject, but that he himself would refrain and keep his eyes shut. [ 9 ] [ 10 ] [ 11 ]
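Stifel's correspondence between the two series is the germ of the logarithm. A minimal sketch (hypothetical code, not from Stifel) of how his table of exponents and powers of 2 turns multiplication into addition:

```python
# Stifel's table: an arithmetic series (exponents) paired with a
# geometric series (powers of 2). Multiplication in the geometric
# series corresponds to addition in the arithmetic series -- the
# core idea later developed into logarithms.
exponents = list(range(-3, 7))          # arithmetic series, with negatives
powers = [2.0 ** e for e in exponents]  # geometric series

def multiply_via_addition(a, b):
    """Multiply two powers of 2 by adding their exponents in the table."""
    i, j = powers.index(a), powers.index(b)
    return powers[exponents.index(exponents[i] + exponents[j])]

assert multiply_via_addition(8.0, 4.0) == 32.0   # 3 + 2 -> 5, i.e. 32
assert multiply_via_addition(0.5, 16.0) == 8.0   # negative exponents work too
```

The function name `multiply_via_addition` is an illustrative invention; Stifel worked only with the written table.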
Stifel was the first to give a standard method for solving quadratic equations . He was able to reduce the different cases then known to a single case, because he used both positive and negative coefficients. He called his method/rule AMASIAS; the letters A, M, A/S, I, A/S each represent a single operation step in solving a quadratic equation. Stifel, however, avoided exhibiting the negative results. [ 12 ] [ 13 ]
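Stifel's reduction of the separate cases to one rule, by admitting negative coefficients, amounts in modern terms to the quadratic formula. A minimal illustrative sketch (the function name and the handling of negative roots are ours, not Stifel's, who reported only positive solutions):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, sorted ascending.

    Allowing the coefficients to be positive or negative lets one
    formula cover all the cases earlier authors treated separately.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    r = math.sqrt(disc)
    return sorted({(-b + r) / (2 * a), (-b - r) / (2 * a)})

assert solve_quadratic(1, -5, 6) == [2.0, 3.0]   # x^2 - 5x + 6 = 0
assert solve_quadratic(1, 0, -4) == [-2.0, 2.0]  # a negative-coefficient case
```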
Another topic dealt with in the Arithmetica integra is negative numbers (which Stifel calls numeri absurdi ). At the time, authorities rejected negative numbers as preposterous. Stifel, however, treated negative numbers on an equal footing with other numbers. He also discussed the properties of irrational numbers , and whether the irrationals are real numbers or only fictitious (AI page 103). Stifel found them very useful for mathematics, and not dispensable. Further topics were a method of calculating roots of higher order by using binomial coefficients [ 14 ] and sequences.
Michael Charles Zerner (January 1, 1940 – February 2, 2000) was an American theoretical chemist , professor at the University of Guelph from 1970 to 1981 and University of Florida from 1981 to 2000. Zerner earned his Ph.D. under Martin Gouterman at Harvard, working with the spectroscopy of porphyrins. He conceived and wrote a quantum chemistry program, known as BIGSPEC or ZINDO, for calculating electronic spectra of big molecules. In 1996 Zerner was diagnosed with liver cancer, and died on February 2, 2000, survived by his wife and two children.
In organic chemistry , the Michael reaction or Michael 1,4 addition is a reaction between a Michael donor (an enolate or other nucleophile ) and a Michael acceptor (usually an α,β-unsaturated carbonyl ) to produce a Michael adduct by creating a carbon-carbon bond at the acceptor's β-carbon . [ 1 ] [ 2 ] It belongs to the larger class of conjugate additions and is widely used for the mild formation of carbon–carbon bonds. [ 3 ]
The Michael addition is an important atom-economical method for diastereoselective and enantioselective C–C bond formation, and many asymmetric variants exist. [ 4 ] [ 5 ] [ 6 ]
In this general Michael addition scheme, either or both of R and R' on the nucleophile (the Michael donor) represent electron-withdrawing substituents such as acyl , cyano , nitro , or sulfone groups, which make the adjacent methylene hydrogen acidic enough to form a carbanion when reacted with the base , B: . For the alkene (the Michael acceptor), the R" substituent is usually a carbonyl , which makes the compound an α,β-unsaturated carbonyl compound (either an enone or an enal ), or R" may be any electron withdrawing group.
As originally defined by Arthur Michael , [ 7 ] [ 8 ] the reaction is the addition of an enolate of a ketone or aldehyde to an α,β-unsaturated carbonyl compound at the β carbon. The current definition of the Michael reaction has broadened to include nucleophiles other than enolates . [ 9 ] Some examples of nucleophiles include doubly stabilized carbon nucleophiles such as beta-ketoesters, malonates , and beta-cyanoesters. The resulting product contains a highly useful 1,5-dioxygenated pattern. Non-carbon nucleophiles such as water, alcohols , amines , and enamines can also react with an α,β-unsaturated carbonyl in a 1,4-addition. [ 10 ]
Some authors have broadened the definition of the Michael addition to essentially refer to any 1,4-addition reaction of α,β-unsaturated carbonyl compounds. Others, however, insist that such a usage is an abuse of terminology, and limit the Michael addition to the formation of carbon–carbon bonds through the addition of carbon nucleophiles. The terms oxa-Michael reaction and aza-Michael reaction [ 2 ] have been used to refer to the 1,4-addition of oxygen and nitrogen nucleophiles, respectively. The Michael reaction has also been associated with 1,6-addition reactions. [ 11 ]
In the reaction mechanism , compound 1 is the nucleophile: [ 3 ]
Deprotonation of 1 by a base leads to carbanion 2 , stabilized by its electron-withdrawing groups. Structures 2a to 2c are three resonance structures that can be drawn for this species, two of which have enolate ions. This nucleophile reacts with the electrophilic alkene 3 to form 4 in a conjugate addition reaction . Finally, enolate 4 abstracts a proton from protonated base (or solvent) to produce 5 .
The reaction is dominated by orbital, rather than electrostatic, considerations. The HOMO of stabilized enolates has a large coefficient on the central carbon atom, while the LUMO of many α,β-unsaturated carbonyl compounds has a large coefficient on the β-carbon. Thus, both reactants can be considered soft . These polarized frontier orbitals are of similar energy, and react efficiently to form a new carbon–carbon bond. [ 12 ]
Like the aldol addition , the Michael reaction may proceed via an enol , silyl enol ether in the Mukaiyama–Michael addition , or more usually, enolate nucleophile. In the latter case, the stabilized carbonyl compound is deprotonated with a strong base (hard enolization) or with a Lewis acid and a weak base (soft enolization). The resulting enolate attacks the activated olefin with 1,4- regioselectivity , forming a carbon–carbon bond. This also transfers the enolate to the electrophile . Since the electrophile is much less acidic than the nucleophile, rapid proton transfer usually transfers the enolate back to the nucleophile if the product is enolizable; however, one may take advantage of the new locus of nucleophilicity if a suitable electrophile is pendant. Depending on the relative acidities of the nucleophile and product, the reaction may be catalytic in base. In most cases, the reaction is irreversible at low temperature.
The research done by Arthur Michael in 1887 at Tufts University was prompted by an 1884 publication by Conrad & Guthzeit on the reaction of ethyl 2,3-dibromopropionate with diethyl sodiomalonate forming a cyclopropane derivative [ 13 ] (now recognized as involving two successive substitution reactions).
Michael was able to obtain the same product by replacing the propionate with the ethyl ester of 2-bromoacrylic acid, and realized that this reaction could only work by assuming an addition reaction to the double bond of the acrylic acid . He then confirmed this assumption by reacting diethyl malonate with the ethyl ester of cinnamic acid , forming the first Michael adduct: [ 14 ]
In the same year Rainer Ludwig Claisen claimed priority for the invention. [ 15 ] He and T. Komnenos had observed addition products to double bonds as side-products earlier in 1883 while investigating condensation reactions of malonic acid with aldehydes . [ 16 ] However, according to biographer Takashi Tokoroyama, this claim is without merit. [ 14 ]
Researchers have expanded the scope of Michael additions to include elements of chirality via asymmetric versions of the reaction. The most common methods involve chiral phase transfer catalysis , such as quaternary ammonium salts derived from the Cinchona alkaloids ; or organocatalysis , which is activated by enamine or iminium with chiral secondary amines, usually derived from proline . [ 17 ]
In the reaction between cyclohexanone and β-nitrostyrene sketched below, the base proline is derivatized and works in conjunction with a protic acid such as p -toluenesulfonic acid : [ 18 ]
Syn addition is favored with 99% ee . In the transition state believed to be responsible for this selectivity, the enamine (formed between the proline nitrogen and the cycloketone) and β-nitrostyrene are co-facial with the nitro group hydrogen bonded to the protonated amine in the proline side group.
A well-known Michael reaction is the synthesis of warfarin from 4-hydroxycoumarin and benzylideneacetone first reported by Link in 1944: [ 19 ]
Several asymmetric versions of this reaction exist using chiral catalysts. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ]
Classical examples of the Michael reaction are the reaction between diethyl malonate (Michael donor) and diethyl fumarate (Michael acceptor), [ 26 ] that of diethyl malonate and mesityl oxide (forming Dimedone ), [ 27 ] that of diethyl malonate and methyl crotonate , [ 28 ] that of 2-nitropropane and methyl acrylate , [ 29 ] that of ethyl phenylcyanoacetate and acrylonitrile [ 30 ] and that of nitropropane and methyl vinyl ketone . [ 31 ]
A classic tandem sequence of Michael and aldol additions is the Robinson annulation .
In the Mukaiyama–Michael addition , the nucleophile is a silyl enol ether and the catalyst is usually titanium tetrachloride : [ 32 ] [ 33 ]
The 1,6-Michael reaction proceeds via nucleophilic attack on the δ-carbon of an α,β,γ,δ-diunsaturated Michael acceptor. [ 34 ] [ 35 ] The 1,6-addition mechanism is similar to the 1,4-addition, except that the nucleophilic attack occurs at the δ-carbon of the Michael acceptor. [ 35 ] However, research shows that organocatalysis often favours the 1,4-addition. [ 34 ] In many syntheses where 1,6-addition was favoured, the substrate contained certain structural features. [ 35 ] Research has shown that catalysts can also influence the regioselectivity and enantioselectivity of a 1,6-addition reaction. [ 35 ]
For example, the image below shows the addition of ethylmagnesium bromide to ethyl sorbate 1 using a copper catalyst with a reversed josiphos ( R,S )-(–)-3 ligand. [ 35 ] This reaction produced the 1,6-addition product 2 in 0% yield, the 1,6-addition product 3 in approximately 99% yield, and the 1,4-addition product 4 in less than 2% yield. This particular catalyst and set of reaction conditions led to the mostly regioselective and enantioselective 1,6-Michael addition of ethyl sorbate 1 to product 3 .
A Michael reaction is used as a mechanistic step by many covalent inhibitor drugs. Cancer drugs such as ibrutinib, osimertinib, and rociletinib have an acrylamide functional group that serves as a Michael acceptor. This acceptor reacts with a Michael donor in the active site of an enzyme . This is a viable cancer treatment because the target enzyme is inhibited following the Michael reaction. [ 36 ]
Source: [ 2 ]
All polymerization reactions have three basic steps: initiation, propagation, and termination. The initiation step is the Michael addition of the nucleophile to a monomer . The resultant species undergoes a Michael addition with another monomer, with the latter acting as an acceptor. This extends the chain by forming another nucleophilic species to act as a donor for the next addition. This process repeats until the reaction is quenched by chain termination. [ 37 ] The original Michael donor can be a neutral donor such as amines , thiols , and alkoxides , or alkyl ligands bound to a metal. [ 38 ]
Linear step growth polymerizations are some of the earliest applications of the Michael reaction in polymer synthesis. A wide variety of Michael donors and acceptors have been used to synthesize a diverse range of polymers. Examples of such polymers include poly(amido amine), poly(amino ester), poly(imido sulfide), poly(ester sulfide), poly(aspartamide), poly(imido ether), poly(amino quinone), poly(enone sulfide) and poly(enamine ketone).
For example, linear step growth polymerization produces the redox-active poly(amino quinone), which serves as an anti-corrosion coating on various metal surfaces. [ 39 ] Another example includes network polymers , which are used for drug delivery, high performance composites, and coatings. These network polymers are synthesized using a dual chain growth, photo-induced radical and step growth Michael addition system. | https://en.wikipedia.org/wiki/Michael_addition_reaction
The Michaelis–Arbuzov reaction (also called the Arbuzov reaction ) is the chemical reaction of a trivalent phosphorus ester with an alkyl halide to form a pentavalent phosphorus species and another alkyl halide. The picture below shows the most common types of substrates undergoing the Arbuzov reaction; phosphite esters ( 1 ) react to form phosphonates ( 2 ), phosphonites ( 3 ) react to form phosphinates ( 4 ) and phosphinites ( 5 ) react to form phosphine oxides ( 6 ).
The reaction was discovered by August Michaelis in 1898, [ 1 ] and greatly explored by Aleksandr Arbuzov soon thereafter. [ 2 ] [ 3 ] This reaction is widely used for the synthesis of various phosphonates, phosphinates , and phosphine oxides . Several reviews have been published. [ 4 ] [ 5 ] The reaction also occurs for coordinated phosphite ligands, as illustrated by the demethylation of {(C 5 H 5 )Co[(CH 3 O) 3 P] 3 } 2+ to give {(C 5 H 5 )Co[(CH 3 O) 2 PO] 3 } − , which is called the Klaui ligand .
The Michaelis–Arbuzov reaction is initiated by the S N 2 attack of the nucleophilic phosphorus species ( 1 , a phosphite) on the electrophilic alkyl halide ( 2 ) to give a phosphonium salt as an intermediate ( 3 ). These intermediates are occasionally stable enough to be isolated; for example, triaryl phosphites do not react to form the phosphonate without thermal cleavage of the intermediate (200 °C), or cleavage by alcohols or bases. The displaced halide anion then usually reacts via another S N 2 reaction on one of the R 1 carbons, displacing the oxygen atom to give the desired phosphonate ( 4 ) and another alkyl halide ( 5 ). This mechanism is supported by the observation that chiral R 1 groups undergo inversion of configuration at the carbon center attacked by the halide anion, as expected for an S N 2 reaction. [ 6 ] Evidence also exists for a carbocation-based mechanism of dealkylation similar to an S N 1 reaction , in which the R 1 group first dissociates from the phosphonium salt and is then attacked by the anion. [ 5 ] Phosphite esters with tertiary alkyl halide groups can undergo the reaction, which would be unexpected if only an S N 2 mechanism were operating. Further support for this S N 1 -type mechanism comes from the use of the Arbuzov reaction in the synthesis of neopentyl halides, a class of compounds that are notoriously unreactive towards S N 2 reactions. By the principle of microscopic reversibility , the inertness of neopentyl halides towards S N 2 reactions indicates that an S N 2 mechanism is unlikely in their synthesis by this reaction. Substrates that can react through neither an S N 2 nor an S N 1 pathway generally do not react; these include vinyl and aryl groups. For example, the triaryl phosphites mentioned above generally do not react because they form stable phosphonium salts.
Since aryl groups do not undergo S N 1 and S N 2 type mechanisms, triaryl phosphites lack a low energy pathway for decomposition of the phosphonium salt. An allylic rearrangement mechanism ( S N 2' ) has also been implicated in allyl and propargyl halides.
Stereochemical experiments on cyclic phosphites using 31 P NMR have revealed that both pentavalent phosphorane and tetravalent phosphonium intermediates, in chemical equilibrium, are involved in the dealkylation step of the reaction. The decomposition of these intermediates is driven primarily by the nucleophilicity of the anion. In many instances the intermediate phosphonium salts are sufficiently stable to be isolated when the anion is weakly nucleophilic, such as with tetrafluoroborate or triflate anions.
As a general guideline, the reactivity of the organic halide component can be ranked as follows, from most reactive to least reactive:
In general, tertiary alkyl halides, aryl halides and vinyl halides do not react. There are notable exceptions to this trend, including 1,2-dichloroethene and trityl halides. Some activated aryl halides, often involving heterocycles, have been known to undergo the reaction. Iodobenzene and substituted derivatives have been known to undergo the reaction under photolytic conditions. Secondary alkyl halides often do not react well, producing alkenes as side-products. Allyl and propargyl halides are also reactive, but can proceed through an S N 2 or an S N 2' mechanism. Reactions with primary alkyl halides and acyl halides generally proceed smoothly. Interestingly, carbon tetrachloride undergoes the reaction only a single time, while chloroform is inert to the reaction conditions. When a halide atom is present in the ester chain on the phosphorus atom, isomerization to the corresponding Arbuzov product has been observed without addition of an alkyl halide.
The Perkow reaction is a competing reaction pathway for α-bromo- and α-chloroketones. Under the reaction conditions a mixture of the Perkow product and the normal Arbuzov product forms, usually significantly favoring the Perkow product. Higher reaction temperatures can shift the outcome towards the Arbuzov product. The reaction of α-iodoketones gives only the Arbuzov product. [ 7 ] Other methods of producing β-ketophosphonates have been developed. [ 8 ]
The reaction of trivalent phosphorus compounds with alkyl fluorides is abnormal. One example of this reactivity is shown below.
The general form of the trivalent phosphorus reagent can be considered as follows: ABP − OR {\displaystyle {\ce {ABP-OR}}} with A and B generally being alkyl, alkoxy or aryloxy groups. Electron-withdrawing groups are known to slow down the rate of the reaction, and electron-donating groups to increase it. This is consistent with initial attack of the phosphorus reagent on the alkyl halide being the rate-determining step of the reaction. The reaction proceeds smoothly when the R group is aliphatic. When all of A, B and R are aryl groups, a stable phosphonium salt is formed and the reaction proceeds no further under normal conditions. Heating to higher temperatures in the presence of alcohols has been known to give the isomerization product. Cyclic phosphites generally react to eject the non-cyclic OR group, though for some five-membered rings additional heating is required to afford the final cyclic product. [ 5 ]
Phosphite salts (e.g. R = Na) can also undergo the reaction, with precipitation of the corresponding sodium halide salt. Amidophosphites and silyloxyphosphites have also been used, to yield amidophosphonates and phosphinic acids. [ 5 ]
An Arbuzov type rearrangement can also occur where the O from an OR group acts as the leaving group in the initial S N 2 attack of the phosphorus. This is only known to occur when A and B are Cl. [ 5 ]
Phosphite esters are the least reactive class of reagents used in this reaction. They react to produce phosphonates. They require the most heating for the reaction to occur (120–160 °C is common). This high temperature allows fractional distillation to be employed to remove the alkyl halide produced, though an excess of the starting alkyl halide can also be used. Solvents are often not used for this reaction, though there is precedent for improved selectivity when one is employed. [ 5 ]
Phosphonites are generally more reactive than phosphite esters. They react to produce phosphinates. Heating is also required for the reaction, but pyrolysis of the ester to an acid is a common side reaction. The poor availability of substituted phosphonites limits the usage of this class of reagent in the Arbuzov reaction. Hydroxy , thiol , carboxylic acid , primary and secondary amine functional groups cannot be used with phosphonites in the reaction as they all react with the phosphonite. [ 5 ]
Phosphinites are the most reactive class of reagents used in this reaction. They react to produce phosphine oxides. They often require very little heating (45 °C) for the reaction to occur and have been known to self-isomerize without the presence of alkyl halides. [ 5 ]
The Arbuzov rearrangement generally has no thio analogue, except when the phosphorus is substituted with strongly electron-donating groups. [ 9 ] | https://en.wikipedia.org/wiki/Michaelis–Arbuzov_reaction
The Michaelis–Becker reaction is the reaction of a hydrogen (thio) phosphonate with a base, followed by a nucleophilic substitution of phosphorus on a haloalkane , to give an alkyl (thio)phosphonate. [ 1 ] Yields of this reaction are often lower than the corresponding Michaelis–Arbuzov reaction . [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Michaelis–Becker_reaction |
In biochemistry , Michaelis–Menten kinetics , named after Leonor Michaelis and Maud Menten , is the simplest case of enzyme kinetics , applied to enzyme-catalysed reactions involving the transformation of one substrate into one product. It takes the form of a differential equation describing the reaction rate v {\displaystyle v} (rate of formation of product P, with concentration p {\displaystyle p} ) as a function of a {\displaystyle a} , the concentration of the substrate A (using the symbols recommended by the IUBMB ). [ 1 ] [ 2 ] [ 3 ] [ 4 ] Its formula is given by the Michaelis–Menten equation : v = V a K m + a {\displaystyle v={\frac {Va}{K_{\mathrm {m} }+a}}}
V {\displaystyle V} , which is often written as V max {\displaystyle V_{\max }} , [ 5 ] represents the limiting rate approached by the system at saturating substrate concentration for a given enzyme concentration. The Michaelis constant K m {\displaystyle K_{\mathrm {m} }} has units of concentration, and for a given reaction is equal to the concentration of substrate at which the reaction rate is half of V {\displaystyle V} . [ 6 ] Biochemical reactions involving a single substrate are often assumed to follow Michaelis–Menten kinetics, without regard to the model's underlying assumptions. Only a small proportion of enzyme-catalysed reactions have just one substrate, but the equation still often applies if only one substrate concentration is varied.
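These two properties of the equation can be illustrated with a minimal sketch; the parameter values below are illustrative assumptions, not taken from the article:

```python
def mm_rate(a, V, Km):
    """Michaelis–Menten rate: v = V * a / (Km + a)."""
    return V * a / (Km + a)

V, Km = 10.0, 2.0            # assumed illustrative values
v_half = mm_rate(Km, V, Km)  # at a = Km the rate is exactly half the limiting rate
assert abs(v_half - V / 2) < 1e-12
assert mm_rate(1e9, V, Km) < V   # V is approached at saturation, never attained
```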
The plot of v {\displaystyle v} against a {\displaystyle a} has often been called a "Michaelis–Menten plot", even recently, [ 7 ] [ 8 ] [ 9 ] but this is misleading, because Michaelis and Menten did not use such a plot. Instead, they plotted v {\displaystyle v} against log a {\displaystyle \log a} , which has some advantages over the usual ways of plotting Michaelis–Menten data. It has v {\displaystyle v} as the dependent variable, and thus does not distort the experimental errors in v {\displaystyle v} . Michaelis and Menten did not attempt to estimate V {\displaystyle V} directly from the limit approached at high log a {\displaystyle \log a} , something difficult to do accurately with data obtained with modern techniques, and almost impossible with their data. Instead they took advantage of the fact that the curve is almost straight in the middle range and has a maximum slope of 0.576 V {\displaystyle 0.576V} i.e. 0.25 ln 10 ⋅ V {\displaystyle 0.25\ln 10\cdot V} . With an accurate value of V {\displaystyle V} it was easy to determine log K m {\displaystyle \log K_{\mathrm {m} }} from the point on the curve corresponding to 0.5 V {\displaystyle 0.5V} .
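The stated maximum slope of the semi-logarithmic plot can be checked numerically; the maximum occurs at a = K m, and the values below are illustrative assumptions:

```python
import math

def mm_rate(a, V, Km):
    return V * a / (Km + a)

V, Km = 1.0, 1.0   # assumed illustrative values
h = 1e-6
# Central difference of v with respect to log10(a), evaluated at a = Km,
# where the semi-logarithmic curve is steepest.
slope = (mm_rate(10 ** h * Km, V, Km) - mm_rate(10 ** (-h) * Km, V, Km)) / (2 * h)
assert abs(slope - 0.25 * math.log(10) * V) < 1e-6   # = 0.576 V
```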
This plot is virtually never used today for estimating V {\displaystyle V} and K m {\displaystyle K_{\mathrm {m} }} , but it remains of major interest because it has another valuable property: it allows the properties of isoenzymes catalysing the same reaction, but active in very different ranges of substrate concentration, to be compared on a single plot. For example, the four mammalian isoenzymes of hexokinase are half-saturated by glucose at concentrations ranging from about 0.02 mM for hexokinase A (brain hexokinase) to about 50 mM for hexokinase D ("glucokinase", liver hexokinase), more than a 2000-fold range. It would be impossible to show a kinetic comparison between the four isoenzymes on one of the usual plots, but it is easily done on a semi-logarithmic plot. [ 10 ]
A decade before Michaelis and Menten , Victor Henri found that enzyme reactions could be explained by assuming a binding interaction between the enzyme and the substrate. [ 11 ] His work was taken up by Michaelis and Menten, who investigated the kinetics of invertase , an enzyme that catalyzes the hydrolysis of sucrose into glucose and fructose . [ 12 ] In 1913 they proposed a mathematical model of the reaction. [ 13 ] It involves an enzyme E binding to a substrate A to form a complex EA that releases a product P regenerating the original form of the enzyme. [ 6 ] This may be represented schematically as
where k + 1 {\displaystyle k_{\mathrm {+1} }} (forward rate constant), k − 1 {\displaystyle k_{\mathrm {-1} }} (reverse rate constant), and k c a t {\displaystyle k_{\mathrm {cat} }} (catalytic rate constant) denote the rate constants , [ 14 ] the double arrows between A (substrate) and EA (enzyme-substrate complex) represent the fact that enzyme-substrate binding is a reversible process, and the single forward arrow represents the formation of P (product).
Under certain assumptions – such as the enzyme concentration being much less than the substrate concentration – the rate of product formation is given by v = V a K m + a = k c a t e 0 a K m + a {\displaystyle v={\frac {Va}{K_{\mathrm {m} }+a}}={\frac {k_{\mathrm {cat} }e_{0}a}{K_{\mathrm {m} }+a}}}
in which e 0 {\displaystyle e_{0}} is the initial enzyme concentration. The reaction order depends on the relative size of the two terms in the denominator. At low substrate concentration a ≪ K m {\displaystyle a\ll K_{\mathrm {m} }} , so that the rate v = k c a t e 0 a K m {\displaystyle v={\frac {k_{\mathrm {cat} }e_{0}a}{K_{\mathrm {m} }}}} varies linearly with substrate concentration a {\displaystyle a} ( first-order kinetics in a {\displaystyle a} ). [ 15 ] However at higher a {\displaystyle a} , with a ≫ K m {\displaystyle a\gg K_{\mathrm {m} }} , the reaction approaches independence of a {\displaystyle a} (zero-order kinetics in a {\displaystyle a} ), [ 15 ] asymptotically approaching the limiting rate V m a x = k c a t e 0 {\displaystyle V_{\mathrm {max} }=k_{\mathrm {cat} }e_{0}} . This rate, which is never attained, refers to the hypothetical case in which all enzyme molecules are bound to substrate. k c a t {\displaystyle k_{\mathrm {cat} }} , known as the turnover number or catalytic constant , normally expressed in s –1 , is the limiting number of substrate molecules converted to product per enzyme molecule per unit of time. Further addition of substrate would not increase the rate, and the enzyme is said to be saturated.
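The first-order and zero-order limits described above can be verified numerically; the constants below are illustrative assumptions:

```python
kcat, e0, Km = 100.0, 1.0e-3, 2.0   # assumed illustrative values
V = kcat * e0                       # limiting rate

def v(a):
    return kcat * e0 * a / (Km + a)

a_low = Km / 1e4                    # a << Km: rate is first order in a
assert abs(v(a_low) - (kcat * e0 / Km) * a_low) / v(a_low) < 1e-3
a_high = Km * 1e4                   # a >> Km: rate is zero order in a
assert abs(v(a_high) - V) / V < 1e-3
```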
The Michaelis constant K m {\displaystyle K_{\mathrm {m} }} is not affected by the concentration or purity of an enzyme. [ 16 ] Its value depends both on the identity of the enzyme and that of the substrate, as well as conditions such as temperature and pH.
The model is used in a variety of biochemical situations other than enzyme-substrate interaction, including antigen–antibody binding , DNA–DNA hybridization , and protein–protein interaction . [ 17 ] [ 18 ] It can be used to characterize a generic biochemical reaction, in the same way that the Langmuir equation can be used to model generic adsorption of biomolecular species. [ 18 ] When an empirical equation of this form is applied to microbial growth, it is sometimes called a Monod equation .
Michaelis–Menten kinetics have also been applied to a variety of topics outside of biochemical reactions, [ 14 ] including alveolar clearance of dusts, [ 19 ] the richness of species pools, [ 20 ] clearance of blood alcohol , [ 21 ] the photosynthesis-irradiance relationship, and bacterial phage infection. [ 22 ]
The equation can also be used to describe the relationship between ion channel conductivity and ligand concentration, [ 23 ] and also, for example, to limiting nutrients and phytoplankton growth in the global ocean. [ 24 ]
The specificity constant k cat / K m {\displaystyle k_{\text{cat}}/K_{\mathrm {m} }} (also known as the catalytic efficiency ) is a measure of how efficiently an enzyme converts a substrate into product. Although it is the ratio of k cat {\displaystyle k_{\text{cat}}} and K m {\displaystyle K_{\mathrm {m} }} it is a parameter in its own right, more fundamental than K m {\displaystyle K_{\mathrm {m} }} . Diffusion limited enzymes , such as fumarase , work at the theoretical upper limit of 10 8 – 10 10 M −1 s −1 , limited by diffusion of substrate into the active site . [ 25 ]
If we symbolize the specificity constant for a particular substrate A as k A = k cat / K m {\displaystyle k_{\mathrm {A} }=k_{\text{cat}}/K_{\mathrm {m} }} the Michaelis–Menten equation can be written in terms of k A {\displaystyle k_{\mathrm {A} }} and K m {\displaystyle K_{\mathrm {m} }} as follows: v = k A e 0 a 1 + a / K m {\displaystyle v={\frac {k_{\mathrm {A} }e_{0}a}{1+a/K_{\mathrm {m} }}}}
At small values of the substrate concentration this approximates to a first-order dependence of the rate on the substrate concentration: v ≈ k A e 0 a {\displaystyle v\approx k_{\mathrm {A} }e_{0}a}
Conversely it approaches a zero-order dependence on a {\displaystyle a} when the substrate concentration is high: v ≈ k c a t e 0 = V {\displaystyle v\approx k_{\mathrm {cat} }e_{0}=V}
The capacity of an enzyme to distinguish between two competing substrates that both follow Michaelis–Menten kinetics depends only on the specificity constant, and not on either k cat {\displaystyle k_{\text{cat}}} or K m {\displaystyle K_{\mathrm {m} }} alone. Putting k A {\displaystyle k_{\mathrm {A} }} for substrate A {\displaystyle \mathrm {A} } and k A ′ {\displaystyle k_{\mathrm {A'} }} for a competing substrate A ′ {\displaystyle \mathrm {A'} } , then the two rates when both are present simultaneously are as follows: v A = k A e 0 a 1 + a / K m A + a ′ / K m A ′ {\displaystyle v_{\mathrm {A} }={\frac {k_{\mathrm {A} }e_{0}a}{1+a/K_{\mathrm {mA} }+a'/K_{\mathrm {mA'} }}}} and v A ′ = k A ′ e 0 a ′ 1 + a / K m A + a ′ / K m A ′ {\displaystyle v_{\mathrm {A'} }={\frac {k_{\mathrm {A'} }e_{0}a'}{1+a/K_{\mathrm {mA} }+a'/K_{\mathrm {mA'} }}}}
Although both denominators contain the Michaelis constants they are the same, and thus cancel when one equation is divided by the other: v A v A ′ = k A a k A ′ a ′ {\displaystyle {\frac {v_{\mathrm {A} }}{v_{\mathrm {A'} }}}={\frac {k_{\mathrm {A} }a}{k_{\mathrm {A'} }a'}}}
and so the ratio of rates depends only on the concentrations of the two substrates and their specificity constants.
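This cancellation is easy to verify numerically. The sketch below assumes the standard competitive-substrate rate laws with a shared denominator; all values are illustrative assumptions:

```python
# Two competing substrates A and A' share the same denominator in their rate
# laws, so the ratio of rates depends only on the specificity constants and
# the substrate concentrations.
e0 = 1.0e-3                 # assumed enzyme concentration
kA, Km_A = 5.0, 1.0         # specificity constant and Km for A (illustrative)
kA2, Km_A2 = 2.0, 3.0       # specificity constant and Km for A'
a, a2 = 0.7, 1.9            # substrate concentrations

denom = 1 + a / Km_A + a2 / Km_A2
vA = kA * e0 * a / denom
vA2 = kA2 * e0 * a2 / denom
assert abs(vA / vA2 - (kA * a) / (kA2 * a2)) < 1e-12
```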
As the equation originated with Henri , not with Michaelis and Menten , it is more accurate to call it the Henri–Michaelis–Menten equation, [ 26 ] though it was Michaelis and Menten who realized that analysing reactions in terms of initial rates would be simpler, and as a result more productive, than analysing the time course of reaction, as Henri had attempted. Although Henri derived the equation he made no attempt to apply it. In addition, Michaelis and Menten understood the need for buffers to control the pH, but Henri did not.
Parameter values vary widely between enzymes. Some examples are as follows: [ 27 ]
In their analysis, Michaelis and Menten (and also Henri) assumed that the substrate is in instantaneous chemical equilibrium with the complex, which implies [ 13 ] [ 28 ] k + 1 e a = k − 1 x {\displaystyle k_{+1}ea=k_{-1}x}
in which e is the concentration of free enzyme (not the total concentration) and x is the concentration of enzyme-substrate complex EA.
Conservation of enzyme requires that [ 28 ] e + x = e 0 {\displaystyle e+x=e_{0}}
where e 0 {\displaystyle e_{0}} is now the total enzyme concentration. After combining the two expressions some straightforward algebra leads to the following expression for the concentration of the enzyme-substrate complex: x = e 0 a K d i s s + a {\displaystyle x={\frac {e_{0}a}{K_{\mathrm {diss} }+a}}}
where K d i s s = k − 1 / k + 1 {\displaystyle K_{\mathrm {diss} }=k_{-1}/k_{+1}} is the dissociation constant of the enzyme-substrate complex. Hence the rate equation is the Michaelis–Menten equation, [ 28 ] v = k + 2 e 0 a K d i s s + a {\displaystyle v={\frac {k_{+2}e_{0}a}{K_{\mathrm {diss} }+a}}}
where k + 2 {\displaystyle k_{+2}} corresponds to the catalytic constant k c a t {\displaystyle k_{\mathrm {cat} }} and the limiting rate is V m a x = k + 2 e 0 = k c a t e 0 {\displaystyle V_{\mathrm {max} }=k_{+2}e_{0}=k_{\mathrm {cat} }e_{0}} . Likewise with the assumption of equilibrium the Michaelis constant K m = K d i s s {\displaystyle K_{\mathrm {m} }=K_{\mathrm {diss} }} .
When studying urease at about the same time as Michaelis and Menten were studying invertase, Donald Van Slyke and G. E. Cullen [ 29 ] made essentially the opposite assumption, treating the first step not as an equilibrium but as an irreversible second-order reaction with rate constant k + 1 {\displaystyle k_{+1}} . As their approach is never used today it is sufficient to give their final rate equation: v = k + 2 e 0 a k + 2 / k + 1 + a {\displaystyle v={\frac {k_{+2}e_{0}a}{k_{+2}/k_{+1}+a}}}
and to note that it is functionally indistinguishable from the Henri–Michaelis–Menten equation. One cannot tell from inspection of the kinetic behaviour whether K m {\displaystyle K_{\mathrm {m} }} is equal to k + 2 / k + 1 {\displaystyle k_{+2}/k_{+1}} or to k − 1 / k + 1 {\displaystyle k_{-1}/k_{+1}} or to something else.
G. E. Briggs and J. B. S. Haldane undertook an analysis that harmonized the approaches of Michaelis and Menten and of Van Slyke and Cullen, [ 30 ] [ 31 ] and is taken as the basic approach to enzyme kinetics today. They assumed that the concentration of the intermediate complex does not change on the time scale over which product formation is measured. [ 32 ] This assumption means that k + 1 e a = k − 1 x + k c a t x = ( k − 1 + k c a t ) x {\displaystyle k_{+1}ea=k_{-1}x+k_{\mathrm {cat} }x=(k_{-1}+k_{\mathrm {cat} })x} . The resulting rate equation is as follows: v = k c a t e 0 a K m + a {\displaystyle v={\frac {k_{\mathrm {cat} }e_{0}a}{K_{\mathrm {m} }+a}}}
where K m = k − 1 + k c a t k + 1 {\displaystyle K_{\mathrm {m} }={\frac {k_{-1}+k_{\mathrm {cat} }}{k_{+1}}}}
This is the generalized definition of the Michaelis constant. [ 33 ]
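The relationship between the generalized constant and the two earlier limiting cases can be checked numerically; the rate constants below are illustrative assumptions, not taken from the article:

```python
# Generalized (Briggs–Haldane) Michaelis constant and its two limiting cases.
# Rate constants are illustrative assumptions (units: M^-1 s^-1, s^-1, s^-1).
k_plus1, k_minus1, k_cat = 1.0e6, 1.0e3, 1.0e2

Km = (k_minus1 + k_cat) / k_plus1   # Briggs–Haldane steady-state definition
K_diss = k_minus1 / k_plus1         # equilibrium (Michaelis–Menten) limit
K_vs = k_cat / k_plus1              # Van Slyke–Cullen limit

# The generalized constant is exactly the sum of the two limiting expressions,
# and approaches K_diss when k_minus1 >> k_cat.
assert abs(Km - (K_diss + K_vs)) < 1e-15
assert abs(Km - K_diss) / K_diss < 0.11   # here k_minus1 = 10 * k_cat
```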
All of the derivations given treat the initial binding step in terms of the law of mass action , which assumes free diffusion through the solution. However, in the environment of a living cell where there is a high concentration of proteins , the cytoplasm often behaves more like a viscous gel than a free-flowing liquid, limiting molecular movements by diffusion and altering reaction rates. [ 34 ] Note, however, that although this gel-like structure severely restricts large molecules like proteins, its effect on small molecules, like many of the metabolites that participate in central metabolism, is very much smaller. [ 35 ] In practice, therefore, treating the movement of substrates in terms of diffusion is not likely to produce major errors. Nonetheless, Schnell and Turner consider it more appropriate to model the cytoplasm as a fractal , in order to capture its limited-mobility kinetics. [ 36 ]
Determining the parameters of the Michaelis–Menten equation typically involves running a series of enzyme assays at varying substrate concentrations a {\displaystyle a} , and measuring the initial reaction rates v {\displaystyle v} , i.e. the reaction rates are measured after a time period short enough for it to be assumed that the enzyme-substrate complex has formed, but that the substrate concentration remains almost constant, and so the equilibrium or quasi-steady-state approximation remains valid. [ 37 ] By plotting reaction rate against concentration, and using nonlinear regression of the Michaelis–Menten equation with correct weighting based on known error distribution properties of the rates, the parameters may be obtained.
Before computing facilities to perform nonlinear regression became available, graphical methods involving linearisation of the equation were used. A number of these were proposed, including the Eadie–Hofstee plot of v {\displaystyle v} against v / a {\displaystyle v/a} , [ 38 ] [ 39 ] the Hanes plot of a / v {\displaystyle a/v} against a {\displaystyle a} , [ 40 ] and the Lineweaver–Burk plot (also known as the double-reciprocal plot ) of 1 / v {\displaystyle 1/v} against 1 / a {\displaystyle 1/a} . [ 41 ] Of these, [ 42 ] the Hanes plot is the most accurate when v {\displaystyle v} is subject to errors with uniform standard deviation. [ 43 ] From the point of view of visualizing the data the Eadie–Hofstee plot has an important property: the entire possible range of v {\displaystyle v} values from 0 {\displaystyle 0} to V {\displaystyle V} occupies a finite range of ordinate scale, making it impossible to choose axes that conceal a poor experimental design.
However, while useful for visualization, all three linear plots distort the error structure of the data and provide less precise estimates of V {\displaystyle V} and K m {\displaystyle K_{\mathrm {m} }} than correctly weighted non-linear regression. Assuming an error ε ( v ) {\displaystyle \varepsilon (v)} on v {\displaystyle v} , an inverse representation leads to an error of ε ( v ) / v 2 {\displaystyle \varepsilon (v)/v^{2}} on 1 / v {\displaystyle 1/v} ( Propagation of uncertainty ), implying that linear regression of the double-reciprocal plot should include weights of v 4 {\displaystyle v^{4}} . This was well understood by Lineweaver and Burk, [ 41 ] who had consulted the eminent statistician W. Edwards Deming before analysing their data. [ 44 ] Unlike nearly all workers since, Burk made an experimental study of the error distribution, finding it consistent with a uniform standard error in v {\displaystyle v} , before deciding on the appropriate weights. [ 45 ] This aspect of the work of Lineweaver and Burk received virtually no attention at the time, and was subsequently forgotten.
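As a concrete illustration of how the linear transformation maps onto the parameters, the sketch below fits a double-reciprocal line to exact, noise-free data (parameter values are assumed for illustration); with real, noisy data the weighting issues just described apply:

```python
# Lineweaver–Burk: 1/v = (Km/V)*(1/a) + 1/V, so an ordinary least-squares line
# through exact double-reciprocal data returns the parameters.
V_true, Km_true = 8.0, 3.0          # assumed illustrative values
a_vals = [0.5, 1.0, 2.0, 4.0, 8.0]
v_vals = [V_true * a / (Km_true + a) for a in a_vals]

x = [1 / a for a in a_vals]
y = [1 / v for v in v_vals]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

V_est = 1 / intercept               # intercept = 1/V
Km_est = slope * V_est              # slope = Km/V
assert abs(V_est - V_true) < 1e-9 and abs(Km_est - Km_true) < 1e-9
```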
The direct linear plot is a graphical method in which the observations are represented by straight lines in parameter space, with axes K m {\displaystyle K_{\mathrm {m} }} and V {\displaystyle V} : each line is drawn with an intercept of − a {\displaystyle -a} on the K m {\displaystyle K_{\mathrm {m} }} axis and v {\displaystyle v} on the V {\displaystyle V} axis. The point of intersection of the lines for different observations yields the values of K m {\displaystyle K_{\mathrm {m} }} and V {\displaystyle V} . [ 46 ]
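The intersection construction can be sketched directly: each observation defines a line in parameter space, and with exact data two such lines cross at the true values (the parameters below are illustrative assumptions):

```python
# Direct linear plot: observation (a, v) defines the line V = v * (1 + Km/a)
# in (Km, V) parameter space, i.e. intercepts -a on the Km axis and v on the
# V axis. Two exact observations intersect at the true parameter values.
V_true, Km_true = 8.0, 3.0          # assumed illustrative values
a1, a2 = 1.0, 4.0
v1 = V_true * a1 / (Km_true + a1)
v2 = V_true * a2 / (Km_true + a2)

Km_est = (v2 - v1) / (v1 / a1 - v2 / a2)   # abscissa of the intersection
V_est = v1 * (1 + Km_est / a1)             # ordinate of the intersection
assert abs(Km_est - Km_true) < 1e-9 and abs(V_est - V_true) < 1e-9
```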
Many authors, for example Greco and Hakala, [ 47 ] have claimed that non-linear regression is always superior to regression of the linear forms of the Michaelis–Menten equation. However, that is correct only if the appropriate weighting scheme is used, preferably on the basis of experimental investigation, something that is almost never done. As noted above, Burk [ 45 ] carried out the appropriate investigation, and found that the error structure of his data was consistent with a uniform standard deviation in v {\displaystyle v} . More recent studies found that a uniform coefficient of variation (standard deviation expressed as a percentage) was closer to the truth with the techniques in use in the 1970s. [ 48 ] [ 49 ] However, this truth may be more complicated than any dependence on v {\displaystyle v} alone can represent. [ 50 ]
Uniform standard deviation of v {\displaystyle v} . If the rates are considered to have a uniform standard deviation the appropriate weight for every v {\displaystyle v} value for non-linear regression is 1. If the double-reciprocal plot is used each value of 1 / v {\displaystyle 1/v} should have a weight of v 4 {\displaystyle v^{4}} , whereas if the Hanes plot is used each value of a / v {\displaystyle a/v} should have a weight of v 4 / a 2 {\displaystyle v^{4}/a^{2}} .
Uniform coefficient of variation of v {\displaystyle v} . If the rates are considered to have a uniform coefficient of variation the appropriate weight for every v {\displaystyle v} value for non-linear regression is 1 / v 2 {\displaystyle 1/v^{2}} . If the double-reciprocal plot is used each value of 1 / v {\displaystyle 1/v} should have a weight of v 2 {\displaystyle v^{2}} , whereas if the Hanes plot is used each value of a / v {\displaystyle a/v} should have a weight of v 2 / a 2 {\displaystyle v^{2}/a^{2}} .
Ideally the v {\displaystyle v} in each of these cases should be the true value, but that is always unknown. However, after a preliminary estimation one can use the calculated values v ^ {\displaystyle {\hat {v}}} for refining the estimation. In practice the error structure of enzyme kinetic data is very rarely investigated experimentally, and is therefore almost never known, but simply assumed. It is, however, possible to form an impression of the error structure from internal evidence in the data. [ 51 ] This is tedious to do by hand, but can readily be done by computer.
Santiago Schnell and Claudio Mendoza suggested a closed-form solution for the time-course kinetics analysis of the Michaelis–Menten kinetics, expressed in terms of the Lambert W function . [ 52 ] Namely, a K m = W ( F ( t ) ) {\displaystyle {\frac {a}{K_{\mathrm {m} }}}=W{\bigl (}F(t){\bigr )}}
where W is the Lambert W function and F ( t ) = a 0 K m exp ( a 0 K m − V t K m ) {\displaystyle F(t)={\frac {a_{0}}{K_{\mathrm {m} }}}\exp \left({\frac {a_{0}}{K_{\mathrm {m} }}}-{\frac {Vt}{K_{\mathrm {m} }}}\right)} , in which a 0 {\displaystyle a_{0}} is the initial substrate concentration.
The above equation, known nowadays as the Schnell-Mendoza equation, [ 53 ] has been used to estimate V {\displaystyle V} and K m {\displaystyle K_{\mathrm {m} }} from time course data. [ 54 ] [ 55 ]
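The closed form can be checked against the implicit integrated rate law K m ln(a 0 /a) + (a 0 − a) = V t. In practice one would use `scipy.special.lambertw`; the small Newton iteration below stands in for it so the sketch is self-contained, and all parameter values are illustrative assumptions:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function (w * exp(w) = x, x >= 0),
    computed by Newton iteration as a stand-in for scipy.special.lambertw."""
    w = math.log1p(x)   # starting guess, adequate for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Assumed illustrative values: limiting rate, Michaelis constant,
# initial substrate concentration, and a time point.
V, Km, a0, t = 2.0, 1.5, 4.0, 0.8
a = Km * lambert_w((a0 / Km) * math.exp(a0 / Km - V * t / Km))

# The closed form must satisfy the implicit integrated rate law:
#   Km * ln(a0 / a) + (a0 - a) = V * t
assert abs(Km * math.log(a0 / a) + (a0 - a) - V * t) < 1e-9
```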
Only a small minority of enzyme-catalysed reactions have just one substrate, and even if the number is increased by treating two-substrate reactions in which one substrate is water as one-substrate reactions the number is still small. One might accordingly suppose that the Michaelis–Menten equation, normally written with just one substrate, is of limited usefulness. This supposition is misleading, however. One of the common equations for a two-substrate reaction can be written as follows to express v {\displaystyle v} in terms of two substrate concentrations a {\displaystyle a} and b {\displaystyle b} : v = V a b K i A K m B + K m B a + K m A b + a b {\displaystyle v={\frac {Vab}{K_{\mathrm {iA} }K_{\mathrm {mB} }+K_{\mathrm {mB} }a+K_{\mathrm {mA} }b+ab}}}
where V {\displaystyle V} is the limiting rate and the other symbols represent kinetic constants. Suppose now that a {\displaystyle a} is varied with b {\displaystyle b} held constant. Then it is convenient to reorganize the equation as follows: v = [ V b / ( K m B + b ) ] a ( K i A K m B + K m A b ) / ( K m B + b ) + a {\displaystyle v={\frac {[Vb/(K_{\mathrm {mB} }+b)]\,a}{(K_{\mathrm {iA} }K_{\mathrm {mB} }+K_{\mathrm {mA} }b)/(K_{\mathrm {mB} }+b)+a}}}
This has exactly the form of the Michaelis–Menten equation v = V a p p a K m a p p + a {\displaystyle v={\frac {V^{\mathrm {app} }a}{K_{\mathrm {m} }^{\mathrm {app} }+a}}}
with apparent values V a p p {\displaystyle V^{\mathrm {app} }} and K m a p p {\displaystyle K_{\mathrm {m} }^{\mathrm {app} }} defined as follows: V a p p = V b K m B + b {\displaystyle V^{\mathrm {app} }={\frac {Vb}{K_{\mathrm {mB} }+b}}} and K m a p p = K i A K m B + K m A b K m B + b {\displaystyle K_{\mathrm {m} }^{\mathrm {app} }={\frac {K_{\mathrm {iA} }K_{\mathrm {mB} }+K_{\mathrm {mA} }b}{K_{\mathrm {mB} }+b}}}
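Assuming a ternary-complex rate law of this general form (all constants below are illustrative assumptions), one can verify numerically that holding the second substrate constant collapses it to a Michaelis–Menten dependence on the first:

```python
# A common two-substrate (ternary-complex) rate law; constants assumed:
#   v = V*a*b / (KiA*KmB + KmB*a + KmA*b + a*b)
V, KmA, KmB, KiA = 10.0, 1.0, 2.0, 0.5
b = 3.0   # second substrate held constant

def v_two_substrate(a):
    return V * a * b / (KiA * KmB + KmB * a + KmA * b + a * b)

# With b constant this is Michaelis–Menten in a, with apparent constants:
V_app = V * b / (KmB + b)
Km_app = (KiA * KmB + KmA * b) / (KmB + b)

for a in (0.1, 1.0, 5.0, 50.0):
    assert abs(v_two_substrate(a) - V_app * a / (Km_app + a)) < 1e-12
```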
The linear (simple) types of inhibition can be classified in terms of the general equation for mixed inhibition at an inhibitor concentration i {\displaystyle i} : v = V a K m ( 1 + i / K i c ) + a ( 1 + i / K i u ) {\displaystyle v={\frac {Va}{K_{\mathrm {m} }(1+i/K_{\mathrm {ic} })+a(1+i/K_{\mathrm {iu} })}}}
in which K i c {\displaystyle K_{\mathrm {ic} }} is the competitive inhibition constant and K i u {\displaystyle K_{\mathrm {iu} }} is the uncompetitive inhibition constant . This equation includes the other types of inhibition as special cases: competitive inhibition ( K i u → ∞ {\displaystyle K_{\mathrm {iu} }\to \infty } ), uncompetitive inhibition ( K i c → ∞ {\displaystyle K_{\mathrm {ic} }\to \infty } ) and pure non-competitive inhibition ( K i c = K i u {\displaystyle K_{\mathrm {ic} }=K_{\mathrm {iu} }} ).
Pure non-competitive inhibition is very rare, being mainly confined to effects of protons and some metal ions. Cleland recognized this, and he redefined noncompetitive to mean mixed . [ 57 ] Some authors have followed him in this respect, but not all, so when reading any publication one needs to check what definition the authors are using.
In all cases the kinetic equations have the form of the Michaelis–Menten equation with apparent constants, as can be seen by writing the equation above as follows: v = V a p p a K m a p p + a {\displaystyle v={\frac {V^{\mathrm {app} }a}{K_{\mathrm {m} }^{\mathrm {app} }+a}}}
with apparent values V a p p {\displaystyle V^{\mathrm {app} }} and K m a p p {\displaystyle K_{\mathrm {m} }^{\mathrm {app} }} defined as follows: V a p p = V 1 + i / K i u {\displaystyle V^{\mathrm {app} }={\frac {V}{1+i/K_{\mathrm {iu} }}}} and K m a p p = K m ( 1 + i / K i c ) 1 + i / K i u {\displaystyle K_{\mathrm {m} }^{\mathrm {app} }={\frac {K_{\mathrm {m} }(1+i/K_{\mathrm {ic} })}{1+i/K_{\mathrm {iu} }}}} | https://en.wikipedia.org/wiki/Michaelis–Menten_kinetics
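The reduction of mixed inhibition to Michaelis–Menten form with apparent constants can be verified numerically; the rate law below is the standard mixed-inhibition form and all constants are illustrative assumptions:

```python
# Standard mixed-inhibition rate law (constants are illustrative assumptions):
#   v = V*a / (Km*(1 + i/Kic) + a*(1 + i/Kiu))
V, Km = 10.0, 2.0
Kic, Kiu = 1.0, 4.0
i = 0.5   # inhibitor concentration

def v_inhibited(a):
    return V * a / (Km * (1 + i / Kic) + a * (1 + i / Kiu))

# The same curve written as Michaelis–Menten with apparent constants:
V_app = V / (1 + i / Kiu)
Km_app = Km * (1 + i / Kic) / (1 + i / Kiu)

for a in (0.2, 1.0, 5.0, 25.0):
    assert abs(v_inhibited(a) - V_app * a / (Km_app + a)) < 1e-12
```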
Michel André Kervaire (26 April 1927 – 19 November 2007) was a French mathematician who made significant contributions to topology and algebra .
He introduced the Kervaire semi-characteristic . He was the first to show the existence of topological n - manifolds with no differentiable structure (using the Kervaire invariant ), and (with John Milnor ) computed the number of exotic spheres in dimensions greater than four, known as Kervaire–Milnor groups . He is also well known for fundamental contributions to high-dimensional knot theory . The solution of the Kervaire invariant problem was announced by Michael Hopkins in Edinburgh on 21 April 2009.
He was the son of André Kervaire (a French industrialist) and Nelly Derancourt. After completing high school in France , Kervaire pursued his studies at ETH Zurich (1947–1952), receiving a Ph.D. in 1955. His thesis, entitled Courbure intégrale généralisée et homotopie , was written under the direction of Heinz Hopf and Beno Eckmann . [ 1 ]
Kervaire was a professor at New York University 's Courant Institute from 1959 to 1971, and then at the University of Geneva from 1971 to 1997, when he retired. [ 2 ] He received an honorary doctorate from the University of Neuchâtel in 1986; he was also an honorary member of the Swiss Mathematical Society . [ 3 ] | https://en.wikipedia.org/wiki/Michel_Kervaire |
Michel Pouchard (born 23 January 1938 in Avrillé-les-Ponceaux ) is a French chemist specialising in the physico-chemistry of inorganic solids.
After studying at the David high school in Angers and at the faculties of science of the University of Rennes and the University of Bordeaux , [ 1 ] Michel Pouchard specialized in the physico-chemistry of inorganic solids: oxides of transition metals , electronic properties ( magnetism , insulator-to-metal transitions), electrochemistry (materials for energy, membranes, and electrodes, in particular for SOFC fuel cells), and nanocrystalline silicon, as well as the science of functional materials.
A trainee and then research associate at the CNRS from 1960 to 1967 (and director of the materials technology dissemination department at the CNRS from 1975 to 1984), he was a lecturer at the Faculty of Sciences of the University of Bordeaux from 1967 to 1970, then professor at the University of Bordeaux I from 1970 to 1992 (professor emeritus from 2004). From 1992 to 2002 he was a professor at the Institut universitaire de France (of which he was a director from 1993 to 1997). [ 1 ]
He was elected a member of the French Academy of sciences on 16 November 1992. [ 2 ] He is also a member of the Academy of Technologies, [ 3 ] the French Society of Chemistry, [ 4 ] the Academia europaea (1998) [ 5 ] and the Leopoldina Academy (Germany) (2000).
Michel Pouchard is the author of nearly 400 articles published in leading journals in solid-state chemistry and materials science, and of some fifteen patents. [ 6 ] [ 2 ]
Langevin Prize of the French Academy of sciences (1977) | https://en.wikipedia.org/wiki/Michel_Pouchard |
Michel Rolle (21 April 1652 – 8 November 1719) was a French mathematician . He is best known for Rolle's theorem (1691). He is also the co-inventor in Europe [ 1 ] of Gaussian elimination (1690).
Rolle was born in Ambert , Basse-Auvergne . The son of a shopkeeper, he received only an elementary education. He married early and as a young man struggled to support his family on the meager wages of a transcriber for notaries and attorneys. In spite of his financial problems and minimal education, Rolle studied algebra and Diophantine analysis (a branch of number theory) on his own. He moved from Ambert to Paris in 1675.
Rolle's fortune changed dramatically in 1682 when he published an elegant solution of a difficult, unsolved problem in Diophantine analysis. The public recognition of his achievement led to a patronage under minister Louvois, a job as an elementary mathematics teacher, and eventually to a short-term administrative post in the Ministry of War. In 1685 he joined the Académie des Sciences in a very low-level position for which he received no regular salary until 1699, when he was promoted to a salaried position as a pensionnaire géomètre. This was a distinguished post: of the 70 members of the academy, only 20 were paid. [ 2 ] He had already been given a pension by Jean-Baptiste Colbert after solving one of Jacques Ozanam 's problems. He remained at the academy until he died of apoplexy in 1719.
While Rolle's forte was always Diophantine analysis, his most important work was a book on the algebra of equations, called Traité d'algèbre , published in 1690. In that book Rolle firmly established the notation for the n th root of a real number, and proved a polynomial version of the theorem that today bears his name. ( Rolle's theorem was named by Giusto Bellavitis in 1846.)
Rolle was one of the most vocal early antagonists of calculus – ironically so, because Rolle's theorem is essential for basic proofs in calculus. He strove intently to demonstrate that it gave erroneous results and was based on unsound reasoning. He quarreled so vehemently on the subject that the Académie des Sciences was forced to intervene on several occasions.
Among his several achievements, Rolle helped advance the currently accepted ordering of negative numbers: Descartes, for example, viewed −2 as smaller than −5. Rolle preceded most of his contemporaries by adopting the current convention in 1691.
Rolle died in Paris. No contemporary portrait of him is known.
Rolle was an early critic of infinitesimal calculus , arguing that it was inaccurate, based upon unsound reasoning, and a collection of ingenious fallacies, [ 3 ] but he later changed his opinion. [ 3 ]
In 1690, Rolle published Traité d'Algebre. It contains the first published description in Europe of the Gaussian elimination algorithm, which Rolle called the method of substitution. [ 4 ] Some examples of the method had appeared earlier in algebra books, and Isaac Newton had described it in his lecture notes, but Newton's lesson was not published until 1707. Rolle's statement of the method seems to have gone unnoticed: the treatment of Gaussian elimination in 18th- and 19th-century algebra textbooks owes more to Newton than to Rolle.
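In modern terms (not Rolle's original notation), the "method of substitution" amounts to solving one equation for one unknown at a time, substituting it into the remaining equations, and then back-substituting. A minimal sketch, with partial pivoting added for numerical safety:

```python
def solve_by_elimination(A, b):
    """Solve A x = b by Gaussian elimination with back-substitution.

    A modern restatement of the 'method of substitution': each forward
    step eliminates one unknown from the remaining equations."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x
```

For example, the system 2x + y = 3, x + 3y = 5 yields x = 0.8, y = 1.4.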
Rolle is best known for Rolle's theorem in differential calculus. Rolle had used the result in 1690, and he proved it (by the standards of the time) in 1691. Given his animosity to infinitesimals, it is fitting that the result was couched in terms of algebra rather than analysis. [ 2 ] Only in the 18th century was the theorem interpreted as a fundamental result in differential calculus; indeed, it is needed to prove both the mean value theorem and the existence of Taylor series . As the importance of the theorem grew, so did interest in identifying its origin, and it was finally named Rolle's theorem in the 19th century. Barrow-Green remarks that the theorem might well have been named for someone else had a few copies of Rolle's 1691 publication not survived.
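Rolle's theorem states that if f is continuous on [a, b], differentiable on (a, b), and f(a) = f(b), then f'(c) = 0 for some c in (a, b). A numerical illustration of locating such a point (the helper rolle_point and its bisection strategy are illustrative assumptions, valid only when f' changes sign exactly once in the interval):

```python
def rolle_point(f, a, b, tol=1e-10):
    """Given f with f(a) == f(b), locate a point c in (a, b) where
    f'(c) = 0, by bisecting on the sign of a central-difference
    approximation to f'.  Assumes f' changes sign once in (a, b)."""
    h = 1e-6
    df = lambda x: (f(x + h) - f(x - h)) / (2 * h)
    lo, hi = a + h, b - h
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if df(lo) * df(mid) <= 0:
            hi = mid  # sign change lies in [lo, mid]
        else:
            lo = mid
    return (lo + hi) / 2
```

For f(x) = x(x − 1) on [0, 1], where f(0) = f(1) = 0, this locates c = 1/2, the point guaranteed by the theorem.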
In a criticism of infinitesimal calculus that predated George Berkeley 's, Rolle presented a series of papers at the French academy, alleging that the use of the methods of infinitesimal calculus leads to errors. Specifically, he presented an explicit algebraic curve, and alleged that some of its local minima are missed when one applies the methods of infinitesimal calculus. Pierre Varignon responded by pointing out that Rolle had misrepresented the curve, and that the alleged local minima are in fact singular points with a vertical tangent. [ 5 ] | https://en.wikipedia.org/wiki/Michel_Rolle |
Michel Van den Bergh (born 25 July 1960) is a Belgian mathematician and professor at the Vrije Universiteit Brussel and does research at Hasselt University . His research interest is on the fundamental relationship between algebra and geometry . In 2003, he was awarded the Francqui Prize on Exact Sciences.
Van den Bergh obtained his Ph.D. in mathematics from the University of Antwerp in 1985, with thesis Algebraic Elements in Finite Dimensional Division Algebras written under the direction of Fred Van Oystaeyen and Jan Maria Hendrik Van Geel. [ 1 ]
The Michelangelo Hand is a fully articulated robotic hand prosthesis developed by the German prosthetics company Ottobock and its American partner Advanced Arm Dynamics. It is the first prosthesis to feature an electronically actuated thumb which mimics natural human hand movements. [ 1 ] [ 2 ] [ 3 ] The Michelangelo Hand can be used for a variety of delicate everyday tasks, was first fitted to an Austrian elective-amputee in July 2010 [ 4 ] [ 5 ] and has been in use by military and civilian amputees in the United States and United Kingdom since 2011. [ 2 ] [ 3 ] [ 6 ]
The Michelangelo Hand's development was begun by the German prosthetics manufacturer Ottobock. In 2008, the American company Advanced Arm Dynamics became involved with testing and further refinement of the prosthesis. [ 1 ]
The prosthesis is battery-powered and can be used for up to 20 hours between charges. [ 2 ] Constructed of metal and plastic, it is designed with a natural, anthropomorphic aesthetic, and can be custom-fitted for each user. Its motions are controlled by built-in electrodes , which detect the movements of the user's remaining arm muscles and interpret them using electromyography software. [ 1 ] The fingers can form numerous naturalistic configurations to hold, grip or pinch objects. [ 7 ] The Michelangelo Hand is capable of moving with enough precision to conduct delicate tasks such as cooking, ironing , and opening a toothpaste tube, [ 1 ] but can also exert enough strength to use an automobile's steering wheel . Skin-toned cosmetic gloves are also available for the prosthesis. [ 8 ] In 2013, the Michelangelo Hand had a unit cost of around £47,000 (US$73,800). [ 2 ]
Austrian electrician Patrick Mayrhofer suffered serious injuries to his hands at the age of 20 when he touched a 6000-volt power line in February 2008. After unsuccessful attempts to reconstruct his left hand, it was amputated below the elbow in July 2010 [ 9 ] and he became the first patient in the world to be fitted with a Michelangelo Hand. [ 4 ] [ 5 ] [ 10 ] [ 11 ] He joined Ottobock three years later, helping its customers learn to use their prostheses. [ 12 ] Having started para-snowboarding in 2012, [ 5 ] [ 9 ] Mayrhofer was named Paralympic Austrian Sports Personality of the Year [ 10 ] after winning a gold medal in banked slalom at the 2015 Para-Snowboard World Championships. [ 13 ] He went on to win the Paralympic silver medal in banked slalom at the 2018 Winter Paralympics . [ 14 ]
Numerous American soldiers who suffered limb amputation in combat have received Michelangelo Hands since 2011. In January 2012, Matt Rezink of Wisconsin became the first American civilian to receive a unit. [ 6 ] In January 2013, Chris Taylor, a British service engineer who had lost his right hand in a jet ski accident in 2009, became the first UK citizen to be fitted with a Michelangelo Hand. [ 2 ] By 2013, the hand was offered by several British prosthetic services companies, including Dorset Orthopaedic. [ 15 ] | https://en.wikipedia.org/wiki/Michelangelo_Hand |
Michele Mosca is co-founder and deputy director of the Institute for Quantum Computing at the University of Waterloo , researcher and founding member of the Perimeter Institute for Theoretical Physics , and professor of mathematics in the department of Combinatorics & Optimization at the University of Waterloo. He has held a Tier 2 Canada Research Chair in Quantum Computation since January 2002, and has been a scholar for the Canadian Institute for Advanced Research since September 2003. [ 1 ] [ 2 ] Mosca's principal research interests concern the design of quantum algorithms , but he is also known for his early work on NMR quantum computation together with Jonathan A. Jones .
Mosca received a B.Math degree from the University of Waterloo in 1995. In 1996 he received a Commonwealth Scholarship to attend Wolfson College , Oxford University , where he received his M.Sc. degree in mathematics and foundations of computer science. On another scholarship (and while holding a fellowship ), Mosca received his D.Phil degree on the topic of quantum computer algorithms, also at the University of Oxford. [ 1 ]
In the field of cryptography , Mosca's theorem addresses the question of how soon an organization needs to act in order to protect its data from the threat of quantum computers . A quantum computer, once developed, would have the capacity to break the types of cryptography that have been widely used throughout the world, such as RSA . Although this is a known risk, no one knows exactly when a quantum computer will be created. Mosca's theorem provides a risk assessment framework [ 3 ] that can help organizations identify how quickly they need to start migrating to new methods of quantum-safe cryptography .
Mosca's theorem was first proposed in the paper "Cybersecurity in an era with quantum computers: will we be ready?" by Mosca. [ 4 ] He proposed that if X + Y > Z, then organizations need to worry about the impact of quantum computers on their data. In this formula, X is the amount of time a given piece of data needs to remain secure (shelf life); Y is how long it will take an organization to implement post-quantum cryptographic solutions (migration time); and Z is how long it will be before a sufficiently strong quantum computer exists (threat timeline). [ 5 ] [ 6 ] [ 7 ]
While the value of Z is unknown, many national information technology organizations predict the year 2030 [ 8 ] or 2035. [ 9 ] Given the complexity of migrating to post-quantum cryptography , Mosca's theorem suggests that most organizations need to be transitioning soon, or are perhaps behind schedule.
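The inequality is simple enough to state directly in code. The function names below are invented for illustration; the logic is exactly the X + Y > Z test described above:

```python
def mosca_at_risk(shelf_life, migration_time, threat_timeline):
    """Mosca's inequality: data is at risk if X + Y > Z, where
    X = shelf life, Y = migration time, Z = threat timeline (years)."""
    return shelf_life + migration_time > threat_timeline

def years_to_act(shelf_life, migration_time, threat_timeline):
    """Slack before migration must begin; negative means already late."""
    return threat_timeline - (shelf_life + migration_time)
```

For example, data that must stay secret for 10 years, with a 5-year migration, is already at risk if a sufficiently strong quantum computer is 12 years away; with a 2-year shelf life and 3-year migration, the same organization has 7 years of slack.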
Mosca's theorem helped justify the National Institute of Standards and Technology 's 2016 strategy to establish a handful of PQC algorithms with the international community. [ 10 ]
In combustion , the Michelson–Sivashinsky equation describes the evolution of a premixed flame front subjected to the Darrieus–Landau instability , in the small heat release approximation. The equation was derived by Gregory Sivashinsky in 1977, [ 1 ] who, along with Daniel M. Michelson, presented numerical solutions of the equation in the same year. [ 2 ] Let the planar flame front, in a suitable frame of reference, lie in the xy-plane; the evolution of the front is then described by the amplitude function u(x, t) (where x = (x, y)) measuring the deviation from the planar shape. The Michelson–Sivashinsky equation reads [ 3 ]

∂u/∂t + (1/2)|∇u|² = ν ∇²u + I(u),   with I(e^{ik·x}) = |k| e^{ik·x},
where ν is a constant. Incorporating also the Rayleigh–Taylor instability of the flame, one obtains the Rakib–Sivashinsky equation (named after Z. Rakib and Gregory Sivashinsky ), [ 4 ]
where ⟨u⟩(t) denotes the spatial average of u, a time-dependent function, and γ is another constant.
In the absence of gravity, the equation admits an explicit solution, called the N-pole solution because the equation admits a pole decomposition, as shown by Olivier Thual, Uriel Frisch and Michel Hénon in 1988. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Consider the 1d equation
where û is the Fourier transform of u. This has a solution of the form [ 5 ] [ 9 ]
where z_n(t) (which appear in complex-conjugate pairs) are poles in the complex plane. In the case of a periodic solution with period 2π, it is sufficient to consider poles whose real parts lie between 0 and 2π. In this case, we have
These poles are interesting because in physical space, they correspond to locations of the cusps forming in the flame front. [ 10 ]
In 1995, [ 11 ] John W. Dold and Guy Joulin generalised the Michelson–Sivashinsky equation by introducing the second-order time derivative, which is consistent with the quadratic nature of the dispersion relation for the Darrieus–Landau instability . The Dold–Joulin equation is given by
where the non-local integral operator I is defined by I(e^{ik·x}) = |k| e^{ik·x}.
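The action of this non-local operator is easy to demonstrate numerically: in Fourier space it simply multiplies each mode by |k|. The sketch below (the name landau_operator and the discretization details are illustrative assumptions, not taken from the cited papers) applies it to samples of a 2π-periodic function with a plain discrete Fourier transform:

```python
import cmath
import math

def landau_operator(u):
    """Apply the operator defined by I(e^{ikx}) = |k| e^{ikx} to samples
    of a 2*pi-periodic function, via a plain O(n^2) discrete Fourier
    transform (illustrative only; a real solver would use an FFT)."""
    n = len(u)
    # forward DFT: u_hat[k] = sum_j u[j] exp(-2*pi*i*j*k/n)
    u_hat = [sum(u[j] * cmath.exp(-2j * math.pi * j * k / n) for j in range(n))
             for k in range(n)]
    # multiply mode k by |k|, mapping k > n//2 to negative frequency k - n
    for k in range(n):
        freq = k if k <= n // 2 else k - n
        u_hat[k] *= abs(freq)
    # inverse DFT (result is real for real input)
    return [sum(u_hat[k] * cmath.exp(2j * math.pi * j * k / n)
                for k in range(n)).real / n
            for j in range(n)]
```

A quick check: since cos(3x) is a combination of the modes k = ±3, applying the operator returns 3·cos(3x).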
In 1992, [ 12 ] Guy Joulin and Pierre Cambray extended the Michelson–Sivashinsky equation to include higher-order correction terms, following an earlier, incorrect attempt to derive such an equation by Gregory Sivashinsky and Paul Clavin . [ 13 ] The Joulin–Cambray equation, in dimensional form, reads
Michiei Oto is a molecular biologist and an expert on the application of biotechnology to genetic testing . [ 1 ] [ 2 ] [ 3 ] He was the first to propose gene literacy education. [ 4 ]
Oto was born in Japan. He received a bachelor's degree in biochemistry from Chiba University in 1980 and a Ph.D. from the School of Medicine at Tokyo Medical and Dental University . He is the department director of biotechnology at Tokyo Technical College and a visiting lecturer at Tokyo University of Agriculture and Technology , Maebashi Institute of Technology and Kogakuin University .
The Michigan Life Sciences Corridor (MLSC) is a $ 1 billion biotechnology initiative in the U.S. state of Michigan .
The MLSC invests in biotech research at four Michigan institutions: the University of Michigan in Ann Arbor ; Michigan State University in East Lansing ; Wayne State University in Detroit ; and the Van Andel Institute in Grand Rapids .
The Michigan Economic Development Corporation administers the program, which began in 1999 with money from the state's settlement with the tobacco industry . The goal is that, by the time the program's fund distributions are completed in 2019, the investments in high-tech research will have notably expanded the state's economic base.
In 1998, the State of Michigan, along with 45 other states, reached the $ 8.5 billion Tobacco Master Settlement Agreement , a settlement with the U.S. tobacco industry. [ 1 ] Former Governor John Engler created the Michigan Life Sciences Corridor in 1999 when he signed Public Act 120 of 1999. [ 2 ] The bill appropriated money from the state's settlement with the tobacco industry to fund biotech research at four of Michigan's largest research institutions. [ 3 ]
Under the management of the Michigan Economic Development Corporation, the MLSC allocated $1 billion over the course of 20 years, including $50 million in 1999 to fund research on aging . [ 4 ] The following year, the MLSC awarded $100 million to 63 Michigan universities. [ 5 ] In 2002, Governor Jennifer Granholm incorporated the MLSC into the Michigan Technology Tri-Corridor, adding funding for homeland security and alternative fuel research. [ 6 ]
In 2009, the University of Michigan added a 30-building, 174-acre (0.70 km²) North Campus Research Complex by acquiring the former Pfizer pharmaceutical corporation facility. [ 7 ]
A BioEnterprise Midwest Healthcare Venture report found that Michigan attracted $451.8 million in new biotechnology venture capital investments from 2005 to 2009. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Michigan_Life_Sciences_Corridor |
The University of Michigan Spin Physics Center focuses on studies of spin effects in high-energy polarized proton–proton elastic and inelastic scattering. These polarized scattering experiments use world-class solid and jet polarized proton targets, which are developed, upgraded and tested at the center. The center has obtained a record density of about 10¹² spin-polarized hydrogen atoms per cm³.
The center also led the development of the world's first accelerated polarized beams, at the 12 GeV Argonne ZGS (in 1973) and then at the 28 GeV Brookhaven AGS . It led pioneering experiments at the IUCF Cooler Ring from 1988 until its 2003 shutdown, which developed and tested the Siberian snakes and spin flippers now used to accelerate, store and use high energy polarized proton beams .
The center also leads the International SPIN Collaboration and its proton polarization know-how is used in many experiments worldwide.
In 1978 the center found that protons with parallel spins interact much more strongly than protons with anti-parallel spins. [ 1 ] [ 2 ] According to quantum chromodynamics , the interaction of proton beams with parallel and anti-parallel spins should be the same. Sheldon Glashow called this effect "the thorn in the side of QCD". [ 3 ]
The effect remains unexplained. In 2005 Stanley Brodsky called it "one of the unsolved mysteries in hadronic physics". [ 3 ]
The Michigan Tech Research Institute (MTRI) is a research center of Michigan Technological University located in Ann Arbor, Michigan . [ 1 ] The institute specializes in advancing the state of the art in remote sensing and information technology for a variety of applications.
MTRI has its heritage in the branch of the Environmental Research Institute of Michigan (ERIM) that remained not-for-profit and developed into the Altarum Institute after ERIM's organization was divided by a corporate takeover in the late 1990s. It became part of Michigan Tech in 2006 and includes research programs related to national security, protecting and evaluating critical infrastructure, bioinformatics, earth sciences, and environmental processes, including transportation.
The Michigan Terminal System ( MTS ) is one of the first time-sharing computer operating systems . [ 1 ] Created in 1967 at the University of Michigan for use on IBM S/360 -67, S/370 and compatible mainframe computers , it was developed and used by a consortium of eight universities in the United States , Canada , and the United Kingdom over a period of 33 years (1967 to 1999). [ 2 ]
The University of Michigan Multiprogramming Supervisor (UMMPS) was initially developed by the staff of the academic computing center at the University of Michigan for operation of the IBM S/360-67, S/370 and compatible computers. The software may be described as a multiprogramming , multiprocessing , virtual memory , time-sharing supervisor that runs multiple resident, reentrant programs. Among these programs is the Michigan Terminal System (MTS) for command interpretation, execution control, file management, and accounting. End-users interact with the computing resources through MTS using terminal, batch, and server oriented facilities. [ 2 ]
The name MTS refers to:
MTS was used on a production basis at about 13 sites in the United States , Canada , the United Kingdom , Brazil , and possibly in Yugoslavia and at several more sites on a trial or benchmarking basis. MTS was developed and maintained by a core group of eight universities included in the MTS Consortium .
The University of Michigan announced in 1988 that "Reliable MTS service will be provided as long as there are users requiring it ... MTS may be phased out after alternatives are able to meet users' computing requirements". [ 3 ] It ceased operating MTS for end-users on June 30, 1996. [ 4 ] By that time, most services had moved to client/server-based computing systems, typically Unix for servers and various Mac, PC, and Unix flavors for clients. The University of Michigan shut down its MTS system for the last time on May 30, 1997. [ 5 ]
Rensselaer Polytechnic Institute (RPI) is believed to be the last site to use MTS in a production environment. RPI retired MTS in June 1999. [ 6 ]
Today, MTS still runs using IBM S/370 emulators such as Hercules , Sim390, [ 7 ] and FLEX-ES. [ 8 ]
In the mid-1960s, the University of Michigan was providing batch processing services on IBM 7090 hardware under the control of the University of Michigan Executive System (UMES), but was interested in offering interactive services using time-sharing . [ 9 ] At that time the work that computers could perform was limited by their small real memory capacity. When IBM introduced its System/360 family of computers in the mid-1960s, it did not provide a solution for this limitation and within IBM there were conflicting views about the importance of and need to support time-sharing.
A paper titled Program and Addressing Structure in a Time-Sharing Environment by Bruce Arden , Bernard Galler , Frank Westervelt (all associate directors at UM's academic Computing Center), and Tom O'Brian building upon some basic ideas developed at the Massachusetts Institute of Technology (MIT) was published in January 1966. [ 10 ] The paper outlined a virtual memory architecture using dynamic address translation (DAT) that could be used to implement time-sharing.
After a year of negotiations and design studies, IBM agreed to make a one-of-a-kind version of its S/360-65 mainframe computer with dynamic address translation (DAT) features that would support virtual memory and accommodate UM's desire to support time-sharing. The computer was dubbed the Model S/360-65M. [ 9 ] The "M" stood for Michigan. But IBM initially decided not to supply a time-sharing operating system for the machine. Meanwhile, a number of other institutions heard about the project, including General Motors , the Massachusetts Institute of Technology 's (MIT) Lincoln Laboratory , Princeton University , and Carnegie Institute of Technology (later Carnegie Mellon University ). They were all intrigued by the time-sharing idea and expressed interest in ordering the modified IBM S/360 series machines. With this demonstrated interest IBM changed the computer's model number to S/360-67 and made it a supported product. [ 1 ] With requests for over 100 new model S/360-67s IBM realized there was a market for time-sharing, and agreed to develop a new time-sharing operating system called TSS/360 (TSS stood for Time-sharing System) for delivery at roughly the same time as the first model S/360-67.
While waiting for the Model 65M to arrive, U of M Computing Center personnel were able to perform early time-sharing experiments using an IBM System/360 Model 50 that was funded by the ARPA CONCOMP (Conversational Use of Computers) Project. [ 11 ] The time-sharing experiment began as a "half-page of code written out on a kitchen table" combined with a small multi-programming system, LLMPS from MIT's Lincoln Laboratory , [ 1 ] which was modified and became the U of M Multi-Programming Supervisor (UMMPS) which in turn ran the MTS job program. This earliest incarnation of MTS was intended as a throw-away system used to gain experience with the new IBM S/360 hardware and which would be discarded when IBM's TSS/360 operating system became available.
Development of TSS took longer than anticipated, its delivery date was delayed, and it was not yet available when the S/360-67 (serial number 2) arrived at the Computing Center in January 1967. [ 12 ] At this time UM had to decide whether to return the Model 67 and select another mainframe or to develop MTS as an interim system for use until TSS was ready. The decision was to continue development of MTS and the staff moved their initial development work from the Model 50 to the Model 67. TSS development was eventually canceled by IBM, then reinstated, and then canceled again. But by this time UM liked the system they had developed, it was no longer considered interim, and MTS would be used at U of M and other sites for 33 years.
MTS was developed, maintained, and used by a consortium of eight universities in the US, Canada, and the United Kingdom: [ 2 ] [ 13 ]
Several sites ran more than one MTS system: NUMAC ran two (first at Newcastle and later at Durham), Michigan ran three in the mid-1980s (UM for Maize, UB for Blue, and HG at Human Genetics), UBC ran three or four at different times (MTS-G, MTS-L, MTS-A, and MTS-I for general, library, administration, and instruction).
Each of the MTS sites made contributions to the development of MTS, sometimes by taking the lead in the design and implementation of a new feature and at other times by refining, enhancing, and critiquing work done elsewhere. Many MTS components are the work of multiple people at multiple sites. [ 19 ]
In the early days collaboration between the MTS sites was accomplished through a combination of face-to-face site visits, phone calls, the exchange of documents and magnetic tapes by snail mail , and informal get-togethers at SHARE or other meetings. Later, e-mail, computer conferencing using CONFER and *Forum, network file transfer, and e-mail attachments supplemented and eventually largely replaced the earlier methods.
The members of the MTS Consortium produced a series of 82 MTS Newsletters between 1971 and 1982 to help coordinate MTS development. [ 20 ]
Starting at UBC in 1974 [ 21 ] the MTS Consortium held annual MTS Workshops at one of the member sites. The workshops were informal, but included papers submitted in advance and Proceedings published after-the-fact that included session summaries. [ 22 ] In the mid-1980s several Western Workshops were held with participation by a subset of the MTS sites (UBC, SFU, UQV, UM, and possibly RPI).
The annual workshops continued even after MTS development work began to taper off. Called simply the "community workshop", they continued until the mid-1990s to share expertise and common experiences in providing computing services, even though MTS was no longer the primary source for computing on their campuses and some had stopped running MTS entirely.
In addition to the eight MTS Consortium sites that were involved in its development, MTS was run at a number of other sites, including: [ 13 ]
A copy of MTS was also sent to the University of Sarajevo , Yugoslavia, though whether or not it was ever installed is not known.
INRIA , the French national institute for research in computer science and control in Grenoble, France ran MTS on a trial basis, as did the University of Waterloo in Ontario, Canada, Southern Illinois University , the Naval Postgraduate School , Amdahl Corporation , ST Systems for McGill University Hospitals, Stanford University , and University of Illinois in the United States, and a few other sites.
In theory MTS will run on the IBM S/360-67, any model of the IBM S/370 series that includes virtual memory, and their successors. MTS has been run on the following computers in production, benchmarking, or trial configurations: [ 2 ]
The University of Michigan installed and ran MTS on the first IBM S/360-67 outside of IBM (serial number 2) in 1967, the second Amdahl 470V/6 (serial number 2) in 1975, [ 26 ] [ 27 ] the first Amdahl 5860 (serial number 1) in 1982, and the first factory shipped IBM 3090–400 in 1986. [ 28 ] NUMAC ran MTS on the first S/360-67 in the UK and very likely the first in Europe. [ 29 ] The University of British Columbia (UBC) took the lead in converting MTS to run on the IBM S/370 series (an IBM S/370-168) in 1974. The University of Alberta installed the first Amdahl 470V/6 in Canada (serial number P5) in 1975. [ 16 ] By 1978 NUMAC (at University of Newcastle upon Tyne and University of Durham) had moved main MTS activity on to its IBM S/370 series (an IBM S/370-168).
MTS was designed to support up to four processors on the IBM S/360-67 , although IBM only produced one (simplex and half-duplex) and two (duplex) processor configurations of the Model 67. In 1984 RPI updated MTS to support up to 32 processors in the IBM S/370-XA (Extended Addressing) hardware series, although 6 processors is likely the largest configuration actually used. [ 30 ] MTS supports the IBM Vector Facility , [ 31 ] available as an option on the IBM 3090 and ES/9000 systems.
In early 1967 running on the single processor IBM S/360-67 at UM without virtual memory support, MTS was typically supporting 5 simultaneous terminal sessions and one batch job. [ 2 ] In November 1967 after virtual memory support was added, MTS running on the same IBM S/360-67 was simultaneously supporting 50 terminal sessions and up to 5 batch jobs. [ 2 ] In August 1968 a dual processor IBM S/360-67 replaced the single processor system, supporting roughly 70 terminal and up to 8 batch jobs. [ 32 ] By late 1991 MTS at UM was running on an IBM ES/9000-720 supporting over 600 simultaneous terminal sessions and from 3 to 8 batch jobs. [ 2 ]
MTS can be IPL -ed under VM/370 , and some MTS sites did so, but most ran MTS on native hardware without using a virtual machine .
Some of the notable features of MTS include: [ 33 ]
The following are some of the notable programs developed for MTS: [ 46 ]
The following are some of the notable programs ported to MTS from other systems: [ 46 ]
MTS supports a rich set of programming languages, some developed for MTS and others ported from other systems: [ 46 ]
UMMPS, the supervisor, has complete control of the hardware and manages a collection of job programs. [ 32 ] One of the job programs is MTS, the job program with which most users interact. [ 2 ] MTS operates as a collection of command language subsystems (CLSs). One of the CLSs allows for the execution of user programs. MTS provides a collection of system subroutines that are available to CLSs, user programs, and MTS itself. [ 41 ] Among other things these system subroutines provide standard access to Device Support Routines (DSRs), the components that perform device dependent input/output.
The lists that follow are quite University of Michigan-centric. Most other MTS sites used some of this material, but they also produced their own manuals, memos, reports, and newsletters tailored to the needs of their site.
The manual series MTS: The Michigan Terminal System , was published from 1967 through 1991, in volumes 1 through 23, which were updated and reissued irregularly. [ 20 ] Initial releases of the volumes did not always occur in numeric order and volumes occasionally changed names when they were updated or republished. In general, the higher the number, the more specialized the volume.
The earliest versions of MTS Volume I and II had a different organization and content from the MTS volumes that followed and included some internal as well as end user documentation. The second edition from December 1967 covered:
The following MTS Volumes were published by the University of Michigan Computing Center [ 2 ] and are available as PDFs: [ 107 ] [ 108 ] [ 109 ] [ 110 ]
Various aspects of MTS at the University of Michigan were documented in a series of Computing Center Memos (CCMemos) [ 108 ] [ 113 ] which were published irregularly from 1967 through 1987, numbered 2 through 924, though not necessarily in chronological order. Numbers 2 through 599 are general memos about various software and hardware; the 600 series are the Consultant's Notes series—short memos for beginning to intermediate users; the 800 series covers issues relating to the Xerox 9700 printer, text processing, and typesetting; and the 900 series covers microcomputers. There was no 700 series. In 1989 this series continued as Reference Memos with less of a focus on MTS. [ 114 ] [ 115 ]
A long run of newsletters targeted to end-users at the University of Michigan with the titles Computing Center News , Computing Center Newsletter , U-M Computing News , and the Information Technology Digest were published starting in 1971. [ 108 ] [ 113 ]
There was also introductory material presented in the User Guide , MTS User Guide , and Tutorial series, including: [ 108 ]
The following materials were not widely distributed, but were included in MTS Distributions: [ 20 ] [ 107 ] [ 109 ]
The University of Michigan released MTS on magnetic tape on an irregular basis. [ 20 ] There were full and partial distributions, where full distributions (D1.0, D2.0, ...) included all of the MTS components and partial distributions (D1.1, D1.2, D2.1, D2.2, ...) included just the components that had changed since the last full or partial distribution. Distributions 1.0 through 3.1 supported the IBM S/360 Model 67, distribution 3.2 supported both the IBM S/360-67 and the IBM S/370 architecture, and distributions D4.0 through D6.0 supported just the IBM S/370 architecture and its extensions.
MTS distributions included the updates needed to run licensed program products and other proprietary software under MTS, but not the base proprietary software itself, which had to be obtained separately from the owners. Except for IBM's Assembler H, none of the licensed programs were required to run MTS.
The last MTS distribution was D6.0 released in April 1988. It consisted of 10,003 files on six 6250 bpi magnetic tapes. After 1988, distribution of MTS components was done in an ad hoc fashion using network file transfer.
To allow new sites to get started from scratch, two additional magnetic tapes were made available: an IPLable boot tape that contained a minimalist version of MTS plus the DASDI and DISKCOPY utilities, which could be used to initialize and restore a one-disk-pack starter version of MTS from the second magnetic tape. In the earliest days of MTS, the standalone TSS DASDI and DUMP/RESTORE utilities, rather than MTS itself, were used to create the one-disk starter system.
There were also less formal redistributions where individual sites would send magnetic tapes containing new or updated work to a coordinating site. That site would copy the material to a common magnetic tape (RD1, RD2, ...), and send copies of the tape out to all of the sites. The contents of most of the redistribution tapes seem to have been lost.
Today, complete materials from the six full and the ten partial MTS distributions as well as from two redistributions created between 1968 and 1988 are available from the Bitsavers Software archive [ 122 ] [ 123 ] and from the University of Michigan's Deep Blue digital archive. [ 124 ] [ 125 ]
Working with the D6.0 distribution materials, it is possible to create an IPLable version of MTS. A new D6.0A distribution of MTS makes this easier. [ 126 ] D6.0A is based on the D6.0 version of MTS from 1988 with various fixes and updates to make operation under Hercules in 2012 smoother. In the future, an IPLable version of MTS will be made available based upon the version of MTS that was in use at the University of Michigan in 1996 shortly before MTS was shut down. [ 123 ]
As of December 22, 2011, the MTS Distribution materials are freely available under the terms of the Creative Commons Attribution 3.0 Unported License (CC BY 3.0). [ 127 ]
In its earliest days MTS was made available for free without the need for a license to sites that were interested in running MTS and which seemed to have the knowledgeable staff required to support it.
In the mid-1980s licensing arrangements were formalized with the University of Michigan acting as agent for and granting licenses on behalf of the MTS Consortium. [ 128 ] MTS licenses were available to academic organizations for an annual fee of $5,000, to other non-profit organizations for $10,000, and to commercial organizations for $25,000. The license restricted MTS from being used to provide commercial computing services. The licensees received a copy of the full set of MTS distribution tapes, any incremental distributions prepared during the year, written installation instructions, two copies of the current user documentation, and a very limited amount of assistance.
Only a few organizations licensed MTS. Several licensed MTS in order to run a single program such as CONFER. The fees collected were used to offset some of the common expenses of the MTS Consortium. | https://en.wikipedia.org/wiki/Michigan_Terminal_System |
MicrOmega-IR is an infrared hyperspectral microscope that is part of the science payload on board the European Rosalind Franklin rover, [ 2 ] tasked with searching for biosignatures on Mars. The rover is planned to be launched no earlier than 2028. MicrOmega-IR will analyse in situ the powder material derived from crushed samples collected by the rover's core drill . [ 3 ] [ 4 ]
The name MicrOmega is derived from its French full name, Micro observatoire pour la minéralogie, l'eau, les glaces et l'activité ("micro-observatory for mineralogy, water, ices and activity"); [ 1 ] IR stands for infrared . It was developed by France's Institut d'Astrophysique Spatiale at the CNRS . France has also flown MicrOmega on other missions, such as the 2011 Fobos-Grunt mission and the Hayabusa2 MASCOT mobile lander that explored asteroid Ryugu in 2018. [ 5 ] France is also developing a variant called the MacrOmega Near-IR Spectrometer for the Martian Moons Exploration (MMX) lander, a Japanese sample-return mission to Mars' moon Phobos . [ 6 ]
The Principal Investigator of the MicrOmega-IR for the Rosalind Franklin rover is Jean-Pierre Bibring, a French astronomer and planetary scientist at the Institut d'Astrophysique Spatiale . Co-PIs are astrobiologists Frances Westall and Nicolas Thomas. [ 7 ]
MicrOmega was developed by a consortium including: [ 8 ]
MicrOmega-IR is a visible and infrared hyperspectral microscope that is designed to characterize the texture and composition of crushed samples presented to the instrument. [ 9 ] Its objective is to study mineral grain assemblages in detail to try to unravel their geological origin, structure and composition, including potential organics . [ 9 ] These data will be vital for interpreting past and present geological processes and environments on Mars. Because MicrOmega-IR is an imaging instrument, it can also be used to identify grains that are particularly interesting, and assign them as targets for Raman and MOMA observations. [ 9 ]
It is composed of two microscopes: MicrOmega/VIS, with a spatial sampling of approximately 4 μm, working in 4 colors in the visible range; and MicrOmega/NIR, a hyperspectral microscope working in the spectral range 0.95–3.65 μm with a spatial sampling of 20 μm per pixel. [ 10 ] Its main supporting components include: [ 11 ]
The IR instrument uses a 320 × 256-pixel HgCdTe (mercury cadmium telluride) matrix detector, the Sofradir Mars SW. [ 12 ]
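As a rough cross-check of the figures above, the 20 μm spatial sampling and the detector dimensions imply the following sampled area. This is simple arithmetic from the quoted numbers only; the instrument's actual usable field of view may differ.

```python
# Figures from the text: 20 um NIR spatial sampling, 320 x 256 detector.
SAMPLING_UM = 20
PIXELS_X, PIXELS_Y = 320, 256

fov_x_mm = PIXELS_X * SAMPLING_UM / 1000
fov_y_mm = PIXELS_Y * SAMPLING_UM / 1000
print(f"Implied sampled area: {fov_x_mm} mm x {fov_y_mm} mm")  # 6.4 mm x 5.12 mm
```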
Examples of materials for identification, if present: [ 13 ] | https://en.wikipedia.org/wiki/MicrOmega-IR |
The Perkin-Elmer Micralign was a family of aligners introduced in 1973. Micralign was the first projection aligner, a concept that dramatically improved semiconductor fabrication . According to the Chip History Center, it "literally made the modern IC industry". [ 1 ]
The Micralign addressed a significant problem in the early integrated circuit (IC) industry, that the vast majority of ICs printed contained defects that rendered them useless. On average, about 1 in 10 complex ICs produced would be operational, a 10% yield . The Micralign improved this to over 50%, and as great as 70% in many applications. In doing so, the price of microprocessors and dynamic RAM products fell about 10 times between 1974 and 1978, [ citation needed ] by which time the Micralign had become practically universal in the high-end market.
Initially expecting to sell perhaps 50 units, Perkin-Elmer eventually sold about 2,000, [ a ] making it by far the largest vendor in the semiconductor fabrication equipment space through the second half of the 1970s and early 1980s. The business was formed into the Microlithography Division, which by 1980 had the largest income of any Perkin-Elmer division and provided the majority of the company's profits.
The company was slow to respond to the challenge of the stepper , which replaced the projection aligners in most roles starting in the mid-1980s. Their move to extreme ultraviolet as a response failed, as the technology was not mature. Another attempt, buying a European stepper company, did nothing to reverse their fortunes. In 1990, Perkin-Elmer sold the division to Silicon Valley Group , which is today part of ASML Holding .
Integrated circuits (ICs) are produced in a multi-step process known as photolithography . The process begins with thin disks of highly pure silicon being sawn from a crystalline cylinder known as a boule . After initial processing, these disks are known as wafers . The IC consists of one or more layers of lines and areas patterned onto the surface of the wafer. [ 3 ]
The wafers are coated in a chemical known as photoresist . One layer of the ultimate chip design is printed on a "mask", similar to a stencil . The mask is placed over the wafer and an ultraviolet (UV) lamp, typically a mercury arc lamp , is shone on the mask. Depending on the process, areas of the photoresist that are exposed to the light either harden or soften, and then the softer areas are washed away using a solvent . The result is a duplication of the pattern from the mask onto the surface of the wafer. Chemical processing is then used on the pattern to give it the desired electrical qualities. [ 3 ]
This entire process is repeated several times to build up the complete IC design. Each step uses a different design on a different mask. The features are measured in micrometres, so any previous design already deposited has to be precisely aligned with the new mask that will be applied. This is the purpose of the aligner, a task that was originally completed manually using a microscope . [ 3 ]
There is a strong economic argument to use larger wafers, as more individual ICs can be patterned on the surface and produced in a single series of operations, thereby producing more chips during the same period of time. However, larger wafers give rise to significant optical issues; focussing the light over the area while maintaining very high uniformity was a major challenge. By the early 1970s, wafers had been about 2.5 inches in diameter for some time and were just moving to 3 inches, but existing optical systems were having problems with this size. Every time a new wafer size was introduced, the optical systems had to be redesigned from scratch. [ 4 ]
In the 1960s, the most common way to hold the mask during the exposure processes was to use a contact aligner. As the name implies, the purpose of this device was to precisely align the mask between each patterning step, and once aligned, hold the mask directly on the surface of the wafer. The reason for holding the mask on the wafer was that at the scale of the lines being drawn, diffraction of the light around the edges of the lines on the mask would blur the image if there was any distance between the mask and the wafer. [ 5 ]
There were significant problems with the contact-mask concept. One of the most annoying was that any dust that reached the aligner's interior might stick to the mask and would be imaged on subsequent wafers as if it were part of the pattern. Equally annoying was that uncured photoresist would stick to the mask, and when the mask was lifted, it would pull off the top surface from the wafer, destroying that wafer and once again adding spurious images on the mask. Any one error might not be an issue because only the ICs in that location will be affected, but eventually, enough errors will be picked up that the mask is no longer useful. [ 6 ]
Places like TI were buying masks, literally by the truckload, using them six to ten times, then putting them in the landfill.
As a result of issues like these, masks generally lasted only a dozen times before having to be replaced. To supply the required number of masks, copies of the original mask were repeatedly printed using conventional silver halide photography on photographic stock, which was then used in the machine. The thermal stability of these masks during exposure to bright light caused distortions, which were not a concern in the early days but became an issue as feature sizes continued to shrink. This forced a move from film to glass masks, further increasing costs. [ 7 ]
Because any particular wafer could be damaged at any given masking step, the chance that any one wafer would make it through to production without damage was a function of the number of steps. [ 8 ] This limited the complexity of the IC designs in spite of the designers being able to make use of many more layers. Microprocessors , in particular, were complex multi-layer designs that had extremely low yield, with perhaps 1 in 10 of the patterns on a wafer delivering a working chip. [ 9 ]
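The yield argument above can be sketched numerically: if each masking step independently succeeds with some probability, the overall yield decays geometrically with the number of steps. The per-step figures below are invented for illustration, not taken from the article.

```python
def compound_yield(per_step_yield: float, steps: int) -> float:
    """Probability that a pattern survives all masking steps undamaged,
    assuming each step succeeds independently with the same probability."""
    return per_step_yield ** steps

# With a 90% per-step yield, a simple 3-layer device still does reasonably
# well, but a complex 10-layer design fares far worse:
print(round(compound_yield(0.90, 3), 3))   # 0.729
print(round(compound_yield(0.90, 10), 3))  # 0.349
```

This is why multi-layer designs such as microprocessors suffered disproportionately under contact printing, and why removing per-step damage paid off most for the most complex chips.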
The Micralign traces its history to a 1967 contract with the US Air Force for a higher-resolution aligner. At the time, the Air Force was one of the largest users of ICs, which were used in many of their missile systems, notably the Minuteman missile . The cost, and especially time to market, was a significant problem that the Air Force was interested in improving. [ 10 ]
There was a second type of aligner in use, the proximity aligner. As the name implies, these held the mask in close proximity to the wafer rather than in direct contact. This improved the life of the mask and allowed a more complex design, but had the downside that diffraction effects limited its use to relatively large features compared to the contact aligners. More annoying was the fact that the mask had to be aligned in three axes to make it perfectly flat relative to the wafer, which was a very slow process, and had to hold the mask in such a way that it didn't sag. [ 10 ]
The Air Force had worked with Perkin-Elmer for many years on reconnaissance optics, and the Air Force Materiel Command at Wright-Patterson Air Force Base offered them a contract to see whether they could improve the proximity masking system. [ 10 ] The result was the Microprojector. The key to the design was a 16-element lens system that produced an extremely focused light source. The resulting system could produce 2.5 μm features, 100 millionths of an inch, equal to the best contact aligners. [ 9 ]
Although the system was effective, meeting the goals set by the Air Force, it was not practical. [ 11 ] With a large number of lenses, dispersion was a significant problem, which they addressed by filtering out everything but a single band of UV only 200 angstroms wide (the G-line), throwing away the majority of the light coming from the 1,000 W lamp. This made the exposure times even longer than existing proximity designs. [ 9 ]
Another significant problem was that the filters removed the visible light as well as UV, which made it impossible for the operators to view the chips during the alignment process. To solve this problem, they added an image intensifier system that produced a visible image from the UV that could be used during alignment, but this added to the unit's cost. [ 9 ]
Harold Hemstreet, manager of what was then the Electro-Optical Division, felt that Perkin-Elmer could improve on the Microprojector. He called on Abe Offner, the company's main optical designer, to come up with a solution. Offner decided to explore systems that would focus the light using mirrors instead of lenses, thus avoiding the problem of dispersion. Mirrors suffer from another problem, aberration , which makes it difficult to focus near the edges of the mirror. Combined with the desire to move to the larger 3-inch wafers, a mirror would be a difficult solution in spite of its advantages. [ 9 ]
Offner's solution was to use only a small portion of the mirror system to image the mask, a section where the focus was guaranteed to be correct. This was along a thin ring running about halfway out from the center of the primary mirror. That meant only this sliver of the mask's image was properly focussed. This could be used if the resulting light was magnified to the size of the mask, but Rod Scott suggested that it instead be used by scanning the sliver of light across the mask. [ 12 ]
Scanning requires the light to shine on the photoresist for the same time as it would for the entire wafer in a contact aligner, so this implied that a scanner would be much slower to operate, as it imaged only a small portion at a time. However, because the mirror was achromatic, the entire output of the lamp could be used, rather than just a small window of frequencies. In the end, the two effects offset each other, and the new system's imaging time was as good as contact systems. [ 9 ]
John Bossung built a proof-of-concept system that copied a mask onto a photographic slide. This won another $100,000 contract from the Air Force to produce a working example. [ 13 ]
The $100,000 would not be enough to bring such a system to commercial production, so Hemstreet had to persuade management to fund development. At the time, another division was asking for funds to develop a laser letterpress, a high-speed currency printing system, and Hemstreet had to argue they should be funded instead of that project. [ 14 ] When the board of directors asked about the potential market, he suggested that the company might sell 50 of the systems, which was laughed at as no one could imagine a requirement for 50 such machines. [ 15 ] Nevertheless, Hemstreet managed to win approval for the project. [ 16 ]
In May 1971 a production team was formed, led by Jere Buckley, a mechanical designer, and Dave Markle, an optical engineer. Offner's original design required the mask and wafer to be scanned horizontally in precisely the same motion as the mask passed over the active area of the mirror system. This appeared to be fantastically difficult to arrange with the required precision. [ 13 ] They developed a new layout where both the mask and wafer were held on opposite ends of a C-shaped holder, at right angles to the main mirror. New mirrors reflected the light through right angles so vertical motion of the holder was translated into horizontal scanning over the main mirror, and a roof prism flipped the final image so that the mask and wafer did not produce mirror images. By making the C-shaped holder large enough, rotating the assembly produced a facsimile of horizontal scanning that was more than accurate enough for the desired resolution. A flexure bearing was used to provide super-smooth rotational motion. Perkin-Elmer boasted that one could throw a handful of sand into the mechanism and it would still work perfectly. [ 17 ] There is no record of the scanner ever failing. [ 18 ]
The basic mechanical design was completed by November 1971. The next step was to come up with a lamp that could efficiently light the curved section of the mirror. They called Ray Paquette at Advanced Radiation Corporation , and after working on it for about two hours he had produced a sample of a curved lamp. Offner then designed a new collimator that worked with the curved shape. Because almost all of the light from the lamp was being used, scanning took 10 to 12 seconds, a dramatic improvement over older systems. The next problem was how to align the mask, as the system focussed only UV light. This was solved by adding a dielectric coating that reflected the UV but not visible light. A separate lamp was used during the alignment process, with the light passing through the optics to the microscope that the operator used to align the mask. [ 17 ]
The product was set to launch in the summer of 1973. In a pre-launch sales effort, the company ran a series of wafers for Texas Instruments , which they then used as their "golden wafers" to show to potential clients. They showed the wafers to Raytheon who rejected them, National Semiconductor who were impressed, and Fairchild Semiconductor who produced electron microscope images of the wafers which showed they had "horrible edges". By the time they returned to company headquarters in Norwalk, Raytheon had indicated that the problem might not be with the aligner itself, but the photoresist layers. They sent one of their experienced operators to Perkin-Elmer and began sorting out the practical problems of fabrication that the company had not had to deal with previously. [ 6 ]
The first sale of what was now known as the Micralign 100 was in 1974 to Texas Instruments, which paid $98,000 for the machine, equivalent to $624,833 in 2024, about three times that of existing high-end contact aligners. [ 19 ] Sales to Intel and Raytheon followed. Intel kept their system secret, and were able to introduce new products, notably memory devices, at prices no one else could touch. The secret finally leaked out when various Intel workers left the company. [ 20 ]
The sales pitch to early customers was simple; they could use their existing glass master masks, or "reticles", without the need to print working masks at all. The masks would last 100,000 uses instead of 10. By the next year, the company was in full-out production and had a year-long backlog of orders. By 1976, they were selling 30 a month. [ 21 ] The only issue found during initial use was that the longer exposures led to new issues with thermal expansion, which was cured by moving from conventional soda-lime glass to borosilicate glass for the masks. [ 22 ] [ b ]
The real advantage was not a reduction in mask costs, but improved yield. A 1975 report by a 3rd party research firm outlined the impressive advantages; because the contact problems with dirt and sticking emulsion were eliminated, yields had improved dramatically. For simple single-layer ICs like the 7400-series , yields improved from 75 percent with contact printing to 90 percent with the Micralign. Results were more dramatic for larger chips; a typical four-function calculator chip yielded 30 percent using contact printing, Micralign yielded 65 percent. [ 6 ]
Microprocessors were only truly useful after the introduction of the Micralign. [ 23 ] The Intel 8088 had yields of about 20% on older systems, improving to 60% on the Micralign. [ 24 ] Other microprocessors were designed from the start specifically for fabrication on the Micralign. The Motorola 6800 was produced using contact aligners and sold for $295 in single units. Chuck Peddle found customers would not buy it at that cost and designed a low-cost replacement. When Motorola management refused to fund development, he left and moved to MOS Technology . Their MOS 6502 was designed specifically with the Micralign in mind, with a combination of high yield and smaller feature set allowing them to hit their design cost of $5 per unit. They introduced the 6502 only a year after the 6800, selling it for $25 in singles, and sold the subsequent 6507 with their RIOT support IC to Atari for a total of $12 per pair. [ 25 ]
Several improvements were introduced into the line to adapt to changes in the IC market. One of the first, on the Model 110, was the addition of an automated wafer loader, which allowed the operators to rapidly mask many wafers in a row.
The Model 111 was a single-wafer model that replaced the 100, and could be adapted for use with 2-, 2.5- or 3-inch wafers, and optionally 4×4-, 3.5×3.5- or 3×3-inch masks. The Model 120 was a 111 with automatic wafer loading. The 130 worked with 100 mm wafers and 5×5-inch masks on a single wafer system, and the 140 added wafer loading to the 130. [ 26 ] Any existing model could be adapted to other wafer and mask sizes, or add wafer loading, through conversion kits. [ 27 ]
The second-generation Micralign was introduced in 1979. This offered higher resolutions and the ability to work with larger wafers, but also cost much more at $250,000, equivalent to $1,083,104 in 2024. This higher price was offset by its ability to print more chips per wafer, due to the smaller feature sizes. [ 28 ] 1981's Model 500 increased throughput to 100 wafers an hour, which offset its $675,000 price, equivalent to $2,334,581 in 2024. [ 28 ]
By the early 1980s, Perkin-Elmer was firmly in control of the majority of the aligner market, in spite of concerted efforts on the part of many companies to enter the space. Between 1976 and 1980, overall company sales tripled to $966 million, equivalent to $3,686,483,845 in 2024, of which $104 million was from the Microlithography Division, making it the single largest division of the company, and by far the most profitable. [ 28 ]
While Perkin-Elmer was introducing the Micralign, several other companies were working on different solutions to the same basic problem of focussing a light across the ever-growing wafers. GCA, formerly Geophysical Corporation of America , had been working on a concept that exposed only a small part of the wafer at a time, reducing the image of the mask about 10-to-1 so it could shine more light through a much larger mask and make up for the fact that it used only a single band of UV light. IBM had purchased one at about the same time the Micralign came to market, but gave up on the system and concluded it could never work. [ 29 ]
By 1981, GCA had solved the problems in the stepper system. During that period, the chip industry had continually moved to denser features and more complex designs. The Micralign was running out of resolution, while the additional magnification in the GCA system allowed it to operate at finer feature sizes. With roughly the same speed that the Micralign ended sales of contact printers, GCA's stepper ended sales of the Micralign. Perkin-Elmer had simply not listened to its customers who were clamoring for higher resolution, and ignored the research and development of newer systems. [ 30 ]
Instead of steppers, the Model 600 bet on deep ultraviolet (DUV) as a solution to the resolution problem. IBM used these to run a memory chip series, but no one else had an effective photoresist that worked in DUV, and few other customers purchased the system. [ 31 ] [ 32 ] Steppers were far slower than the Micralign and much more expensive, so sales started very slowly, [ 28 ] but by the mid-1980s the stepper was rapidly taking over the market. [ 33 ]
In an effort to stay in the market, in 1984 Perkin-Elmer purchased Censor, a stepper company from Liechtenstein . The product never made major inroads in the market, and in spite of GCA's bankruptcy in 1987, Perkin-Elmer decided to give up on the Microlithography Division and put it on the market in April 1989, along with their electron-beam lithography (EBL) division. The EBL work quickly sold, but the aligner division lingered. In 1990 it was purchased by the Silicon Valley Group (SVGL) in a multi-way deal involving IBM whose involvement was brokered by Nikon . [ 34 ] SVGL was purchased by ASML Holding in 2001. [ 35 ] | https://en.wikipedia.org/wiki/Micralign |
The micro-atmosphere method is an antimicrobial sensitivity testing method involving the use of potentially bacteriostatic or fungicidal compounds which are obtained from the volatile oils of plants, such as citronella grass . This method involves the use of essential oils , a growth medium , a selection of bacterial or fungal cultures, and an incubator .
In this microbiological procedure, theoretically, the antibacterial or anti-fungal activity of the volatile oils from a chosen plant may be tested against a selection of gram-positive and gram-negative bacteria or a species of fungus . The growth of the bacteria or fungi is then monitored at regular intervals to measure the bacteriostatic or anti-fungal activity of the volatile oils. In some cases, complete inhibition of growth of the bacteria tested can be observed. [ citation needed ]
Before testing, the essential oils are first diluted to produce solutions of varying concentrations. In this way, the minimum inhibitory concentration can be determined in order to identify the most cost-effective antimicrobial agent. [ 1 ]
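As a sketch of how a minimum inhibitory concentration (MIC) is read off a dilution series (the lowest tested concentration at which no growth is observed), here is a minimal example; the concentrations and growth outcomes below are invented for illustration.

```python
def mic(results):
    """results maps concentration -> whether growth was observed.
    Returns the lowest tested concentration with no growth, or None
    if every tested concentration allowed growth."""
    inhibitory = [conc for conc, grew in results.items() if not grew]
    return min(inhibitory) if inhibitory else None

# Invented two-fold dilution series (e.g. % v/v): growth stops at 1.0
series = {4.0: False, 2.0: False, 1.0: False, 0.5: True, 0.25: True}
print(mic(series))  # 1.0
```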
Until recently, the antibacterial activity of essential oils was primarily evaluated through direct contact between the pathogen and the antimicrobial agent, using diffusion and dilution methods; [ 1 ] [ 2 ] however, the role of essential oils in the vapour phase as antimicrobial agents is gaining increasing significance. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ]
This method was developed on the premise that essential oil vapours exert critical biological activity. These methods offer rapid screening protocols for the antimicrobial assessment of plant essential oils. It has been suggested that essential oils in the vapour phase possess the greatest degree of antimicrobial activity, since the active constituents are highly volatile in nature and the vapour is therefore the contributing attribute for the biological activity. [ 8 ] Each individual constituent has a different volatility; when the mixtures are introduced into a free, non-saturated state in a closed micro-environment, the volatile constituents disperse at differing rates in the vapour phase within the headspace, according to their degree of volatility, until they reach equilibrium. [ 11 ]
Preliminary research involving the essential oils of citronella yielded promising results when tested against a selection of Gram-positive bacteria and Gram-negative bacteria. The use of such extracts could be explored particularly in the development of cost-effective treatments for respiratory illness , specifically those caused by bacterial or fungal infection. Preliminary tests exhibited complete inhibition of the growth of certain bacterial strains. [ 1 ]
In separate research, the micro-atmosphere method was used to investigate the anti-fungal efficacy of the essential oils of cinnamon , a plant belonging to the genus Cinnamomum .
The procedure was employed to ensure that the active films, i.e. the material containing the active compound, did not come into direct contact with the tested fungal suspension. [ 12 ]
The micro-atmosphere method was performed in order to evaluate the indirect effects of active films against P. digitatum . Addition of 0.5% cinnamon essential oil led to 12% inhibition of fungal growth, and higher anti-fungal effects were obtained by adding larger amounts of the essential oil: inhibition of fungal growth of 28% and 50% was observed for films incorporating 1.5% and 3% essential oil, respectively.
In this particular study, the potential anti-fungal agent was subjected to both the disk diffusion test and the micro-atmosphere method, for comparison and to indicate how the active compound might be utilized. The active compound exhibited stronger anti-fungal effects in the disc diffusion test than in the micro-atmosphere assays. This can be attributed to the fact that in the disc diffusion test, both direct contact and migration of active compounds from the film induce the observed antimicrobial effects, whereas in the micro-atmosphere method only the migration of the volatile compounds to the headspace can cause the anti-fungal effect. [ 13 ]
Micro-combustion is the sequence of exothermic chemical reactions between a fuel and an oxidant, accompanied by the production of heat and conversion of chemical species, at the micro scale. The release of heat can result in the production of light in the form of either glowing or a flame . Fuels of interest often include organic compounds (especially hydrocarbons ) in the gas, liquid or solid phase. The major problem of micro-combustion is the high surface-to-volume ratio : as this ratio increases, heat loss to the walls of the combustor increases, which leads to flame quenching .
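The scaling problem described above can be made concrete with a short sketch: for a cylindrical combustor the surface-to-volume ratio grows as the radius shrinks, so wall heat loss dominates at small scales. The dimensions below are illustrative, not taken from any cited device.

```python
import math

def surface_to_volume(radius_m, length_m):
    """S/V for a closed cylinder; equals 2/r + 2/L."""
    surface = 2 * math.pi * radius_m * length_m + 2 * math.pi * radius_m ** 2
    volume = math.pi * radius_m ** 2 * length_m
    return surface / volume

macro = surface_to_volume(radius_m=0.05, length_m=0.5)    # lab-scale burner
micro = surface_to_volume(radius_m=0.001, length_m=0.01)  # micro-combustor
print(micro / macro)  # the micro device has a far larger S/V ratio
```

Because S/V scales essentially as 1/radius, a millimetre-scale combustor loses proportionally far more heat through its walls than a centimetre-scale one, which is why quenching becomes the central design issue.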
The development of miniaturized products such as microrobots , notebook computers , micro-aerial vehicles and other small-scale devices is becoming increasingly important in daily life. There is a growing interest in developing small-scale combustors to power these micro-devices due to their inherent advantages of higher energy density , higher heat and mass transfer coefficients and shorter recharge times compared to electrochemical batteries . [ 1 ] [ 2 ] The energy density of hydrocarbon fuels is 20-50 times higher than that of the most advanced Li-ion electrochemical batteries. The concept of the micro-heat engine was proposed by Epstein and Senturia in 1997. [ 3 ] Since then, a substantial amount of work has been done towards the development and application of such small-scale devices to generate power through the combustion of hydrocarbon fuels. Micro-combustors are an attractive alternative to batteries; however, they have a large surface-area-to-volume ratio, due to which a significant amount of heat is transferred through the walls, which can lead to flame quenching . [ 4 ] The increased rate of heat transfer through the solid walls is, however, advantageous in the case of steam reformers used for hydrogen production. [ 5 ]
B. Khandelwal et al. experimentally studied the flame stability limits and other characteristics of a two-staged micro-combustor. [ 6 ] They found that the staged combustor leads to higher flame stability limits and also offers higher temperature profiles, which would be helpful in utilizing the heat produced by combustion. Maruta et al. experimentally studied the flame propagation characteristics of premixed methane-air mixtures in a 2.0 mm diameter straight quartz channel with a positive wall temperature gradient along the flow direction. [ 7 ] This was a simple one-dimensional configuration for studying flame stabilization characteristics in microchannels. Other researchers have studied the flame stabilization behavior and combustion performance in a Swiss roll combustor, [ 8 ] micro-gas turbine engines, [ 9 ] a micro-thermo-photovoltaic system, [ 10 ] a free piston knock engine, [ 11 ] a micro-tube combustor, [ 12 ] radial channel combustors, [ 13 ] and various other types of micro-combustor. [ 14 ] [ 15 ]
Micro-compounding is the mixing or processing of polymer formulations in the melt on a small scale, typically millilitres. It is popular in research and development because it gives faster, more reliable results with smaller samples at lower cost. Its applications include pharmaceutical , biomedical , and nutritional areas.
Micro-compounding is typically performed with a tabletop twin-screw micro-compounder or micro-extruder with a working volume of 5 or 15 millilitres. With such small volumes it is difficult to achieve sufficient mixing in a continuous extruder; therefore, micro-compounders typically have a batch (recirculation) mode and a conical screw shape.
The L/D of a continuous twin screw extruder is mimicked in a batch micro-compounder by the recirculation mixing time, which is controlled by a manual valve. With this valve, the recirculation can be interrupted to unload the formulation into a strand die, an injection moulder , a film device or a fiber line. Typical recirculation times are one to three minutes, depending on the ease of dispersive and distributive mixing of the formulation. [ citation needed ]
Micro-compounding can now produce films, fibers, and test samples (rods, rings, tablets) from mixtures as small as 5 ml in less than ten minutes. The small footprint requires less lab space than a parallel twin screw extruder. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] One micro-extruder was developed to test whether drug-delivery formulations enable improved bioavailability of poorly soluble drugs or the sustained release of sensitive, water-labile active ingredients. [ clarification needed ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
Microencapsulation is a process in which tiny particles or droplets are surrounded by a coating to give small capsules, with useful properties. [ 1 ] [ 2 ] In general, it is used to incorporate food ingredients , [ 3 ] enzymes , cells or other materials on a micro metric scale. Microencapsulation can also be used to enclose solids , liquids , or gases inside a micrometric wall made of hard or soft soluble film, in order to reduce dosing frequency and prevent the degradation of pharmaceuticals . [ 4 ]
In its simplest form, a microcapsule is a small sphere comprising a near-uniform wall enclosing some material. The enclosed material in the microcapsule is referred to as the core, internal phase, or fill, whereas the wall is sometimes called a shell, coating, or membrane. Some materials like lipids and polymers , such as alginate , may be used as a mixture to trap the material of interest inside. Most microcapsules have pores with diameters between a few nanometers and a few micrometers. Materials generally used for coating include natural and synthetic polymers, waxes and resins.
The definition has been expanded, and includes most foods, where the encapsulation of flavors is the most common. [ 5 ] The technique of microencapsulation depends on the physical and chemical properties of the material to be encapsulated. [ 6 ]
Many microcapsules however bear little resemblance to these simple spheres. The core may be a crystal , a jagged adsorbent particle, an emulsion , a Pickering emulsion , a suspension of solids, or a suspension of smaller microcapsules. The microcapsule even may have multiple walls.
Microcapsule : Hollow microparticle composed of a solid shell surrounding a core-forming space available to permanently or temporarily entrapped substances.
Note : The substances can be flavour compounds, pharmaceuticals, pesticides, dyes, or similar materials.
Ionotropic gelation occurs when units of uronic acid in the chains of the polymer alginate crosslink with multivalent cations. These may include calcium, zinc, iron and aluminium.
Coacervation-phase separation consists of three steps carried out under continuous agitation.
In interfacial polycondensation, the two reactants in a polycondensation meet at an interface and react rapidly. The basis of this method is the classical Schotten-Baumann reaction between an acid chloride and a compound containing an active hydrogen atom, such as an amine or alcohol ; the products include polyesters , polyurea and polyurethane . Under the right conditions, thin flexible walls form rapidly at the interface. For example, a solution of a pesticide and a diacid chloride is emulsified in water, and an aqueous solution containing an amine and a polyfunctional isocyanate is added. Base is present to neutralize the acid formed during the reaction. Condensed polymer walls form instantaneously at the interface of the emulsion droplets.
Interfacial cross-linking is derived from interfacial polycondensation, and was developed to avoid the use of toxic diamines, for pharmaceutical or cosmetic applications. In this method, the small bifunctional monomer containing active hydrogen atoms is replaced by a biosourced polymer, such as a protein. When the reaction is performed at the interface of an emulsion, the acid chloride reacts with the various functional groups of the protein, leading to the formation of a membrane. The method is very versatile, and the properties of the microcapsules (size, porosity, degradability, mechanical resistance) can be customized.
In a few microencapsulation processes, the direct polymerization of a single monomer is carried out on the particle surface. In one process, for example, cellulose fibers are encapsulated in polyethylene while immersed in dry toluene . Usual deposition rates are about 0.5 μm/min, and coating thickness ranges from 0.2 to 75 μm (0.0079 to 2.95 mils). The coating is uniform, even over sharp projections. Protein microcapsules are biocompatible and biodegradable , and the presence of the protein backbone renders the membrane more resistant and elastic than those obtained by interfacial polycondensation.
In a number of processes, a core material is embedded in a polymeric matrix during formation of the particles. A simple method of this type is spray-drying, in which the particle is formed by evaporation of the solvent from the matrix material. However, the solidification of the matrix can also be caused by a chemical change.
Even when the aim of a microencapsulation application is the isolation of the core from its surroundings, the wall must be ruptured at the time of use. Many walls are ruptured easily by pressure or shear stress , as in the case of breaking dye particles during writing to form a copy. Capsule contents may be released by melting the wall, or dissolving it under particular conditions, as in the case of an enteric drug coating . [ 7 ] In other systems, the wall is broken by solvent action, enzyme attack, chemical reaction, hydrolysis , or slow disintegration.
Microencapsulation can be used to slow the release of a drug into the body. This may permit one controlled release dose to substitute for several doses of non-encapsulated drug and also may decrease toxic side effects for some drugs by preventing high initial concentrations in the blood. There is usually a certain desired release pattern. In some cases, it is zero-order, i.e. the release rate is constant. In this case, the microcapsules deliver a fixed amount of drug per minute or hour during the period of their effectiveness. This can occur as long as a solid reservoir or dissolving drug is maintained in the microcapsule.
A more typical release pattern is first-order in which the rate decreases exponentially with time until the drug source is exhausted. In this situation, a fixed amount of drug is in solution inside the microcapsule. The concentration difference between the inside and the outside of the capsule decreases continually as the drug diffuses.
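The two release patterns described above can be illustrated with a short sketch: zero-order release is constant while a solid reservoir persists, whereas first-order release decays exponentially as the concentration gradient falls. The rate constants and drug load below are illustrative.

```python
import math

def zero_order_released(k, t, total):
    """Cumulative amount released at time t at constant rate k,
    capped at the total encapsulated load."""
    return min(k * t, total)

def first_order_released(k, t, total):
    """Cumulative first-order release: M(t) = M_total * (1 - exp(-k t))."""
    return total * (1.0 - math.exp(-k * t))

total = 100.0  # mg of encapsulated drug (hypothetical)
for t in (0, 1, 5, 10):
    print(t,
          zero_order_released(10.0, t, total),
          round(first_order_released(0.3, t, total), 1))
```

The zero-order profile climbs linearly until the reservoir is exhausted; the first-order profile releases fastest at the start and asymptotically approaches the total load.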
Nevertheless, other mechanisms may take part in the release of the encapsulated material, including biodegradation, osmotic pressure, diffusion, etc. Each depends on the composition of the capsule and the environment it is in. The release of the material may therefore be affected by several mechanisms acting simultaneously. [ 8 ]
Applications of micro-encapsulation are numerous. It is mainly used to increase the stability and life of the product being encapsulated, facilitate the manipulation of the product and provide for the controlled release of the contents. | https://en.wikipedia.org/wiki/Micro-encapsulation |
Micro-incineration or microincineration is a technique used to determine the nature and distribution of mineral elements in biological cells , biological tissues and organs. Slide preparations of tissues can be used. Examples include calcium (Ca), potassium (K), sodium (Na), magnesium (Mg), iron (Fe), and silicon (Si).
The organic matter is vaporised by heating, and the nature and position of the mineral ash are determined microscopically. Aqueous- or cryo-fixed tissue materials can also be examined by transmission and scanning electron microscopy (TEM and SEM).
The ashing procedure produces oxidised cellular residues rich in Na 2 O , CaO, MgO , Fe 2 O 3 , SiO 2 , Ca 3 (PO 4 ) 2 , Mg 3 (PO 4 ) 2 , etc., which are detected by X-ray microanalysis with a 2-4-fold gain in sensitivity after incineration of the sample, due to the increased mineral concentration and reduced nonspecific background radiation.
This biophysics -related article is a stub . You can help Wikipedia by expanding it .
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Micro-incineration |
A micro-loop heat pipe (MLHP) is a miniature loop heat pipe in which the radius of curvature of the liquid meniscus in the evaporator is of the same order of magnitude as the micro grooves' dimensions, or a miniature loop heat pipe which has been fabricated using microfabrication techniques. [ 1 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Micro-loop_heat_pipe |
The theory of micro-mechanics of failure aims to explain the failure of continuous fiber reinforced composites by micro-scale analysis of stresses within each constituent material (such as fiber and matrix), and of the stresses at the interfaces between those constituents, calculated from the macro stresses at the ply level. [ 1 ]
As a completely mechanics-based failure theory, the theory is expected to provide more accurate analyses than those obtained with phenomenological models such as the Tsai-Wu [ 2 ] and Hashin [ 3 ] [ 4 ] failure criteria, being able to distinguish the critical constituent in the critical ply of a composite laminate .
The basic concept of the micro-mechanics of failure (MMF) theory is to perform a hierarchy of micromechanical analyses, starting from mechanical behavior of constituents (the fiber, the matrix, and the interface), then going on to the mechanical behavior of a ply, of a laminate, and eventually of an entire structure.
At the constituent level, three elements are required to fully characterize each constituent:
The constituents and a unidirectional lamina are linked via a proper micromechanical model, so that ply properties can be derived from constituent properties, and on the other hand, micro stresses at the constituent level can be calculated from macro stresses at the ply level.
Starting from the constituent level, it is necessary to devise a proper method to organize all three constituents such that the microstructure of a UD lamina is well described. In reality, all fibers in a UD ply are aligned longitudinally; however, in the cross-sectional view, the distribution of fibers is random, and there is no distinguishable regular pattern in which fibers are arrayed. To avoid the complication caused by the random arrangement of fibers, an idealization of the fiber arrangement in a UD lamina is performed, and the result is a regular fiber packing pattern. Two regular fiber packing patterns are considered: the square array and the hexagonal array. Either array can be viewed as a repetition of a single element, named the unit cell or representative volume element (RVE), which consists of all three constituents. With periodic boundary conditions applied, [ 5 ] a unit cell is able to respond to external loadings in the same way that the whole array does. Therefore, a unit cell model is sufficient to represent the microstructure of a UD ply.
Stress distribution at the laminate level due to external loadings applied to the structure can be acquired using finite element analysis (FEA) . Stresses at the ply level can be obtained through transformation of laminate stresses from the laminate coordinate system to the ply coordinate system. To further calculate micro stresses at the constituent level, the unit cell model is employed. Micro stresses σ {\displaystyle \sigma } at any point within the fiber/matrix, and micro surface tractions t {\displaystyle t} at any interfacial point, are related to the ply stresses σ ¯ {\displaystyle {\bar {\sigma }}} as well as the temperature increment Δ T {\displaystyle \Delta T} through: [ 6 ]
σ = M σ ¯ + A Δ T {\displaystyle \sigma =M{\bar {\sigma }}+A\,\Delta T} (for points in the fiber or matrix), t = M σ ¯ + A Δ T {\displaystyle t=M{\bar {\sigma }}+A\,\Delta T} (for interfacial points)
Here σ {\displaystyle \sigma } , σ ¯ {\displaystyle {\bar {\sigma }}} , and t {\displaystyle t} are column vectors with 6, 6, and 3 components, respectively. Subscripts serve as indications of constituents, i.e. f {\displaystyle {\mathrm {f} }} for fiber, m {\displaystyle {\mathrm {m} }} for matrix, and i {\displaystyle {\mathrm {i} }} for interface. M {\displaystyle M} and A {\displaystyle A} are respectively called stress amplification factors (SAF) for macro stresses and for temperature increment. The SAF serves as a conversion factor between macro stresses at the ply level and micro stresses at the constituent level. For a micro point in fiber or matrix, M {\displaystyle M} is a 6×6 matrix while A {\displaystyle A} has the dimension of 6×1; for an interfacial point, respective dimensions of M {\displaystyle M} and A {\displaystyle A} are 3×6 and 3×1. The value of each single term in the SAF for a micro material point is determined through FEA of the unit cell model under given macroscopic loading conditions. The definition of SAF is valid not only for constituents having linear elastic behavior and constant coefficients of thermal expansion (CTE) , but also for those possessing complex constitutive relations and variable CTEs .
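The stress amplification relation described above (micro stress obtained from macro ply stress and temperature increment via the SAFs) can be sketched numerically. The 6×6 matrix M and 6×1 vector A below are illustrative placeholders; in practice, every entry comes from FEA of the unit cell model.

```python
import numpy as np

# Hypothetical SAFs for one micro material point in the matrix
M = np.eye(6) * 1.2    # stress amplification factor matrix (placeholder)
A = np.full(6, -0.05)  # thermal amplification factors (placeholder)

macro_stress = np.array([100.0, 10.0, 0.0, 0.0, 0.0, 5.0])  # ply stresses, MPa
delta_T = -100.0                                            # cool-down, K

# micro stress = M . macro stress + A . delta_T
micro_stress = M @ macro_stress + A * delta_T
print(micro_stress)  # six micro stress components at the constituent point
```

The resulting micro stress vector would then be fed into the constituent failure criteria described below the SAF definitions.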
Fiber is taken as transversely isotropic, and there are two alternative failure criteria for it: [ 1 ] a simple maximum stress criterion and a quadratic failure criterion extended from Tsai-Wu failure criterion :
The coefficients involved in the quadratic failure criterion are defined as follows:
where X f {\displaystyle X_{\mathrm {f} }} , X f ′ {\displaystyle X_{\mathrm {f} }^{\prime }} , Y f {\displaystyle Y_{\mathrm {f} }} , Y f ′ {\displaystyle Y_{\mathrm {f} }^{\prime }} , S f 4 {\displaystyle S_{\mathrm {f} 4}} , and S f 6 {\displaystyle S_{\mathrm {f} 6}} denote longitudinal tensile, longitudinal compressive, transverse tensile, transverse compressive, transverse (or through-thickness) shear, and in-plane shear strength of the fiber, respectively.
Stresses used in the two preceding criteria should be micro stresses in the fiber, expressed in a coordinate system in which the 1-direction signifies the longitudinal direction of the fiber.
The polymeric matrix is assumed to be isotropic and exhibits a higher strength under uniaxial compression than under uniaxial tension. A modified version of von Mises failure criterion suggested by Christensen [ 7 ] is adopted for the matrix:
Here T m {\displaystyle {T}_{\mathrm {m} }} and C m {\displaystyle {C}_{\mathrm {m} }} represent matrix tensile and compressive strength , respectively; whereas σ M i s e s {\displaystyle \sigma _{Mises}} and I 1 {\displaystyle {\mathrm {I} }_{1}} are von Mises equivalent stress and the first stress invariant of micro stresses at a point within matrix, respectively.
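The matrix criterion can be illustrated with a short sketch. The exact expression is not reproduced in the text above; the form implemented here is one common statement of Christensen's modification of von Mises, (1/T<sub>m</sub> − 1/C<sub>m</sub>)·I₁ + σ²_Mises/(T<sub>m</sub>·C<sub>m</sub>) reaching 1 at failure, and the strength values are illustrative.

```python
import math

def von_mises(s11, s22, s33, s12, s13, s23):
    """Von Mises equivalent stress from six stress components."""
    return math.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2
                            + (s33 - s11) ** 2)
                     + 3.0 * (s12 ** 2 + s13 ** 2 + s23 ** 2))

def matrix_failure_index(stress, Tm, Cm):
    """Christensen-type index: failure when the index reaches 1."""
    s11, s22, s33, s12, s13, s23 = stress
    I1 = s11 + s22 + s33                 # first stress invariant
    vm = von_mises(*stress)
    return (1.0 / Tm - 1.0 / Cm) * I1 + vm ** 2 / (Tm * Cm)

Tm, Cm = 80.0, 160.0  # hypothetical matrix strengths, MPa
# Sanity checks: uniaxial tension at Tm and compression at Cm both give 1
print(matrix_failure_index((80.0, 0, 0, 0, 0, 0), Tm, Cm))
print(matrix_failure_index((-160.0, 0, 0, 0, 0, 0), Tm, Cm))
```

A useful property of this form is that it automatically calibrates to both the tensile and the compressive strength of the matrix, reflecting the tension/compression asymmetry mentioned above.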
The fiber-matrix interface features traction-separation behavior, and the failure criterion dedicated to it takes the following form: [ 8 ]
( ⟨ t n ⟩ Y n ) 2 + ( t s Y s ) 2 = 1 {\displaystyle {\begin{array}{lcl}\left({\cfrac {\left\langle {t}_{n}\right\rangle }{{Y}_{n}}}\right)^{2}+\left({\cfrac {{t}_{s}}{{Y}_{s}}}\right)^{2}=1\end{array}}}
where t n {\displaystyle {t}_{n}} and t s {\displaystyle {t}_{s}} are normal (perpendicular to the interface) and shear (tangential to the interface) interfacial tractions, with Y n {\displaystyle {Y}_{n}} and Y s {\displaystyle {Y}_{s}} being their corresponding strengths. The angle brackets ( Macaulay brackets ) imply that a pure compressive normal traction does not contribute to interface failure.
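The interface criterion above translates directly into code. The following sketch (with hypothetical strength values) evaluates the failure index, using the Macaulay bracket so that a compressive normal traction does not contribute:

```python
def macaulay(x):
    """Macaulay bracket: <x> = x if x > 0, else 0."""
    return x if x > 0.0 else 0.0

def interface_failure_index(t_n, t_s, Y_n, Y_s):
    """(<t_n>/Y_n)^2 + (t_s/Y_s)^2; failure when the index reaches 1."""
    return (macaulay(t_n) / Y_n) ** 2 + (t_s / Y_s) ** 2

Y_n, Y_s = 50.0, 70.0  # hypothetical normal and shear strengths, MPa
print(interface_failure_index(30.0, 40.0, Y_n, Y_s))   # tension plus shear
print(interface_failure_index(-30.0, 40.0, Y_n, Y_s))  # compression ignored
```

The second call shows the effect of the Macaulay bracket: with a compressive normal traction only the shear term remains.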
These are interacting failure criteria, in which more than one stress component is used to evaluate the different failure modes. These criteria were originally developed for unidirectional polymeric composites, and hence applications to other types of laminates and non-polymeric composites involve significant approximations. Usually, the Hashin criteria are implemented within a two-dimensional classical lamination approach for point stress calculations, with ply discounting as the material degradation model. Failure indices for the Hashin criteria are related to fibre and matrix failures and involve four failure modes. The criteria are extended to three-dimensional problems, where the maximum stress criterion is used for the transverse normal stress component.
The failure modes included in Hashin's criteria are as follows.
where σ ij denote the stress components, and the tensile and compressive allowable strengths of the lamina are denoted by subscripts T and C, respectively. X T , Y T and Z T denote the allowable tensile strengths in the three respective material directions; similarly, X C , Y C and Z C denote the allowable compressive strengths. Further, S 12 , S 13 and S 23 denote the allowable shear strengths in the respective principal material directions.
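One of the four Hashin modes can be sketched in code. The expression used here is the commonly quoted fibre tension mode, (σ₁₁/X_T)² + (σ₁₂² + σ₁₃²)/S₁₂² reaching 1 at failure; the other modes follow the same pattern with their own allowables, and the strength values below are illustrative.

```python
def hashin_fibre_tension(s11, s12, s13, XT, S12):
    """Hashin fibre-tension failure index (active only for s11 > 0);
    failure is predicted when the index reaches 1."""
    if s11 <= 0.0:
        return 0.0  # mode not active under longitudinal compression
    return (s11 / XT) ** 2 + (s12 ** 2 + s13 ** 2) / S12 ** 2

XT, S12 = 1500.0, 70.0  # hypothetical allowables, MPa
print(hashin_fibre_tension(1500.0, 0.0, 0.0, XT, S12))  # exactly at failure
print(hashin_fibre_tension(750.0, 35.0, 0.0, XT, S12))  # interacting terms
```

In a ply-discounting scheme, a point whose index reaches 1 in this mode would have its fibre-dominated stiffnesses degraded before re-analysis.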
Endeavors have been made to incorporate MMF with multiple progressive damage models and fatigue models for strength and life prediction of composite structures subjected to static or dynamic loadings. | https://en.wikipedia.org/wiki/Micro-mechanics_of_failure |
The micro-pulling-down (μ-PD) method is a crystal growth technique based on continuous transport of the melted substance through micro-channel(s) made in a crucible bottom. Continuous solidification of the melt proceeds at a liquid/solid interface positioned below the crucible. In the steady state, both the melt and the crystal are pulled down with constant (but generally different) velocities .
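The relation between the two pulling velocities follows from a steady-state mass balance (an illustration, not from the source): the melt flowing through the channel must equal the crystal solidifying below it, ρ_liquid · A_channel · v_melt = ρ_solid · A_crystal · v_crystal. All numbers below are hypothetical.

```python
import math

def crystal_pulling_rate(v_melt, d_channel, d_crystal, rho_l, rho_s):
    """Steady-state crystal pulling rate from conservation of mass."""
    a_channel = math.pi * (d_channel / 2) ** 2
    a_crystal = math.pi * (d_crystal / 2) ** 2
    return v_melt * (rho_l * a_channel) / (rho_s * a_crystal)

# Hypothetical geometry: 0.5 mm channel feeding a 1.0 mm fibre crystal,
# with equal liquid and solid densities for simplicity
v = crystal_pulling_rate(v_melt=4.0, d_channel=0.5, d_crystal=1.0,
                         rho_l=1.0, rho_s=1.0)
print(v)  # mm/min: the crystal is pulled more slowly than the melt flows
```

This is why the melt and crystal velocities in μ-PD are constant but generally different: they are coupled through the channel/crystal cross-sections and the density change on solidification.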
Many different types of crystal are grown by this technique, including Y 3 Al 5 O 12 , Si , Si-Ge , LiNbO 3 , α-Al 2 O 3 , Y 2 O 3 , Sc 2 O 3 , LiF , CaF 2 , BaF 2 , etc. [ 1 ] [ 2 ]
The standard routine procedure used in the growth of most μ-PD crystals is well developed. The general stages of the growth include: [ 3 ]
This science article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Micro-pulling-down |
Micro-spatially offset Raman spectroscopy ( micro-SORS ) is an analytical technique developed in 2014 that combines SORS with microscopy . [ 1 ] The technique derives its sublayer-resolving properties from its parent technique, SORS . [ 2 ] The main difference between SORS and micro-SORS is the spatial resolution: while SORS is suited to the analysis of millimetric layers, micro-SORS is able to resolve thin, micrometric-scale layers. Like SORS, micro-SORS is able to preferentially collect the Raman photons generated under the surface in turbid (diffusely scattering ) media. In this way, it is possible to reconstruct the chemical makeup of a micrometric multi-layered turbid system in a non-destructive way. Micro-SORS is particularly useful when dealing with precious or unique objects, as in the cultural heritage field, forensic science or biomedical applications, where a non-destructive molecular characterization constitutes a great advantage. [ 3 ]
To date, micro-SORS has been mainly used to characterize biological materials such as bones, [ 4 ] blood, [ 5 ] [ 6 ] and Cultural Heritage materials, especially paint stratigraphies. [ 7 ] [ 8 ] [ 9 ] Other materials have been studied with this technique including polymers, industrial paper and wheat seeds. [ 3 ]
Micro-SORS was developed on a conventional micro-Raman instrument, and portable micro-SORS prototypes are currently under further optimization to enable in-situ measurements and avoid the need of sampling. [ 10 ] [ 11 ] [ 12 ]
In turbid media, the depth-resolving power of confocal Raman microscopy is restricted by the optical properties of these materials. [ 7 ] In such materials, Raman photons generated at different depths emerge at the surface after a number of scattering events. The Raman photons generated in the sub-surface emerge on the surface laterally displaced from the incident light position, and this displacement is statistically proportional to the depth at which the Raman photon was generated. Micro-SORS preferentially collects these displaced photons coming from the sub-surface by enlarging ( defocusing ) or separating the laser excitation and collection zones ( full micro-SORS ). [ 13 ]
Defocusing is the most basic variant of the technique; it does not provide a complete separation between the excitation and collection zones, rendering this variant less effective. [ 13 ] Nonetheless, defocused measurements have the great advantage that they can easily be performed with a conventional micro-Raman instrument without any hardware or software modifications. Defocusing consists of enlarging the excitation and collection zones, which is achieved by moving the microscope objective out of focus (Δz movements) from the surface of the object or sample under analysis. [ 1 ] The Δz movements typically range from a few tens of micrometres to two millimetres, depending on the number and thicknesses of the materials.
This more sophisticated micro-SORS variant provides a complete separation of the laser excitation and collection zones (Δx offset), which requires a hardware or software modification to a conventional Raman microscope. The separation can be achieved by using an external probe or fibre optics to deliver the laser, [ 13 ] by displacing the laser spot by moving the beam-steer alignment mirrors, [ 5 ] [ 6 ] by using a spatially resolved CCD , [ 4 ] by using a digital micro-mirror device ( DMD ), [ 14 ] by moving the tip of the Raman detection fibre to perform an off-confocal detection of the signal, [ 15 ] or by combining hyperspectral SORS and defocusing micro-SORS. [ 16 ] Full micro-SORS has been proven more effective in terms of both penetration depth into the sample and relative enhancement of the sublayer signal. [ 13 ]
To reconstruct the micro-layer succession, it is necessary to collect a conventional Raman spectrum and at least one micro-SORS spectrum; the acquisition of several spectra at gradually increasing defocusing distances or spatial offsets is usually the best way to approach unknown materials. A comparison among the acquired spectra reveals the layer composition: in defocused or spatially offset spectra, the signals of the sub-surface layers appear or are intensified relative to the surface signal. Data treatment such as spectral normalization or subtraction is commonly used to better visualize the layer sequence.
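The scaled-subtraction treatment mentioned above can be illustrated with synthetic spectra (all band positions and intensities below are hypothetical): each spectrum is scaled at a band unique to the surface layer, and the in-focus spectrum is then subtracted so that mainly the sublayer contribution remains.

```python
import numpy as np

def gaussian(x, centre, width):
    """Simple Gaussian band shape."""
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

x = np.linspace(0.0, 100.0, 1001)          # arbitrary spectral axis
surface_band, sublayer_band = 30.0, 70.0   # hypothetical band positions

# Synthetic spectra: the defocused one carries relatively more sublayer signal
in_focus  = 1.0 * gaussian(x, surface_band, 2.0) + 0.1 * gaussian(x, sublayer_band, 2.0)
defocused = 0.6 * gaussian(x, surface_band, 2.0) + 0.3 * gaussian(x, sublayer_band, 2.0)

# Scale the defocused spectrum so the surface band matches, then subtract
i_surf = int(np.argmin(np.abs(x - surface_band)))
scale = in_focus[i_surf] / defocused[i_surf]
residual = scale * defocused - in_focus

# The residual spectrum is dominated by the sublayer band
print(x[np.argmax(residual)])
```

After subtraction, the surface band cancels and the strongest remaining feature sits at the sublayer band, which is the qualitative behaviour exploited when reading a defocusing series.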
The layers' thickness can be estimated after calibration on a well characterized sample set with a known thickness. [ 17 ]
Non-destructivity is a major goal for conservation scientists , due to the intrinsic value of cultural heritage objects. Micro-SORS was developed to address the need for a non-destructive analytical technique with high chemical specificity for the analysis of thin painted layers. In painted artworks, the painted film is typically obtained by superimposing thin (micrometric-scale) turbid pigmented layers, and their chemical characterization is essential to detect the presence of degradation products, to gain information about the artistic technique, and for dating and authentication purposes. To date, micro-SORS has been successfully used to characterize the paint stratigraphy in polychrome sculptures, painted plasters, [ 7 ] painted cards [ 9 ] and contemporary street-art mural paintings. [ 8 ]
Microspectrophotometry is the measurement of the spectra of microscopic samples using different wavelengths of electromagnetic radiation (e.g. ultraviolet , visible and near infrared ). It is accomplished with microspectrophotometers , cytospectrophotometers , microfluorometers , Raman microspectrophotometers , etc. A microspectrophotometer can be configured to measure transmittance , absorbance , reflectance , light polarization , or fluorescence (or other types of luminescence such as photoluminescence ) of sample areas less than a micrometer in diameter, through a modified optical microscope.
The main reason to use microspectrophotometry is the ability to measure the optical spectra of samples with a spatial resolution on the micron scale. Optical spectra may be acquired of either microscopic samples or larger samples with a micron-scale spatial resolution. Another reason microspectrophotometry is useful is that measurements are made without destroying the samples. This is important when dealing with stained/unstained histological or cytochemical biological sections, when measuring film thickness in semi-conductor integrated circuits , when matching paints and fibers ( forensic science ), when studying gems and coal ( geology ), and in paint/ink/color analysis in paint chemistry or art-work.
An advantage of the microscope spectrometer is its ability to use microscope apertures to precisely control the area of sample analysis. Flat capillaries can be used for analyzing small liquid samples, up to about 10 microliters in volume. Quartz- or mirror-based optics can be used for studying samples from the ultraviolet (UV), down to 200 nm, to the near infrared (NIR), up to 2100 nm. Samples that emit electromagnetic radiation via fluorescence, phosphorescence or photoluminescence when exposed to light can be quantitatively investigated using a variety of excitation and barrier filters. A variety of observations can be made on samples of interest by using different illumination sources such as halogen, xenon, deuterium and mercury lamps. Plane-polarized light can also be used for studying birefringent samples.
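The transmittance and absorbance measurements mentioned above are related by the standard logarithmic definition, A = −log₁₀(T) with T = I_sample / I_reference. A minimal sketch (with illustrative intensities):

```python
import math

def absorbance(i_sample, i_reference):
    """Absorbance from measured and reference intensities: A = -log10(T)."""
    transmittance = i_sample / i_reference
    return -math.log10(transmittance)

# 10% transmittance corresponds to an absorbance of 1.0
print(absorbance(10.0, 100.0))
```

In a microspectrophotometer this calculation is repeated at every wavelength, with the reference intensity taken through a blank area of the same optical path.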
MicroEmulator (also MicroEMU ) is a free and open-source, platform-independent J2ME emulator that allows running MIDlets (applications and games) on any device with a compatible JVM . It is written in pure Java as an implementation of J2ME in J2SE . [ 4 ] [ 5 ] [ 6 ]
In November 2001, MicroEmulator project has been created on SourceForge .
On 31 March 2006, MicroEmulator version 1.0 has been released.
In November 2009, project moved to code.google.com , [ 5 ] and after Google closed it, development moved to GitHub . [ 6 ]
On 10 January 2010, the last stable version, 2.0.4, was released.
On 24 May 2013, the last preview version, 3.0.0-SNAPSHOT.112, was released.
After 2014, the MicroEMU technology was acquired by the company All My Web Needs, and all of MicroEmulator's documentation and binary builds were removed from the official site. [ 7 ] [ 8 ]
All sources and binaries previously released on SourceForge, Google Code and GitHub remain available as open source, but development has stalled since then. [ 4 ] [ 5 ] [ 6 ]
By default, MicroEmulator does not load all of the JSRs it is distributed with; the user must instead load them at each launch via custom commands. [ 11 ]
By default, MicroEmulator does not load the JSR 75 library, which is required to grant MIDlets access to the file system.
To grant file system access, the config2.xml file (on Linux, in the ~/.microemulator/ folder) must include an <extensions> block after the </windows> tag: [ 12 ]
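The referenced configuration snippet is not reproduced in this text. As a rough, hypothetical sketch only (the element and class names below are assumptions, not taken from MicroEmulator's documentation), such a block would have the general shape:

```xml
<!-- Hypothetical sketch: element and class names are assumptions -->
<extensions>
  <extension>
    <className>org.microemu.cldc.file.FileSystem</className>
  </extension>
</extensions>
```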
MicroEmulator must then be launched with the JSR 75 library loaded. [ 13 ] On Linux, the launch command to add to the microemulator.desktop file is:
On Windows, the : (colon) classpath separator in the command must be replaced with a ; (semicolon).
To load more libraries, the path of each additional library must be appended to the classpath in the launch command.
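The original launch command is not preserved in this text. As a hypothetical sketch (installation paths, jar names, and the main class are assumptions, not taken from MicroEmulator's documentation), the Exec line of microemulator.desktop could look like:

```ini
# Hypothetical sketch for microemulator.desktop: paths, jar names and the
# main class are assumptions. Further libraries would be appended to the
# classpath, separated by ':' on Linux.
Exec=java -cp /opt/microemulator/microemulator.jar:/opt/microemulator/lib/microemu-jsr-75.jar org.microemu.app.Main
```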
MicroEmulator allows conversion of any J2ME app into a Java applet that can be embedded in a web page. This feature is used for demonstrating apps and game demos on vendors' sites, but it requires a JVM and the Java Web Start plugin to be installed on the user's PC or device. [ 14 ] [ 15 ]
MicroEmulator allows interface customization with skins called "devices" (see the "Options > Select device..." menu) and is distributed with a few "devices":
Each "device" skin consists of XML files that store definitions of window size, key layout and assignments (by scancode), text rendering options, etc. Optionally, a skin can include image textures for the "device" background and key animations for key press and release. All files of a "device" skin must be packed into a ZIP or JAR archive, and it is possible to include several "devices" in a single package. [ 16 ] [ 17 ]
The screen can be switched between portrait and landscape (rotated) orientation. Additionally, it is possible to show the current MIDlet screen scaled (×2, ×3 or ×4) in a separate floating window.
MicroEmulator has official support for the Android platform. [ 29 ] It is also possible to convert J2ME MIDlet JAR packages into standalone APK files. [ 30 ]
J2ME Loader is an enhanced fork of MicroEmulator for Android. [ 31 ] [ 32 ]
JL-Mod is an enhanced fork of J2ME Loader with support for the Mascot Capsule 3D API. [ 33 ] [ 34 ]
MicroEmulator has been ported to iOS , but installing it on an iPhone or other iOS device requires jailbreaking . [ 35 ] [ 36 ] [ 37 ] [ 38 ]
MicroEmulator officially supports Mac OS; there is also a package in the MacPorts repository. [ 39 ]
MicroEmulator has official support for the Maemo platform, and custom MicroEmulator "device" skins (themed after Nokia S60 smartphones with 240×320 and 640×360 displays) have been made for the Nokia N900 . [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ]
Here is a command to launch MicroEmulator on Maemo with the JSR 75 library loaded, granting MIDlets file system access:
KarinME is a MicroEmulator front-end launcher for the MeeGo/Harmattan platform, with a GUI written in QML . [ 47 ] [ 48 ] [ 49 ]
mpowerplayer SDK is a freeware enhanced fork of MicroEmulator, initially created for Mac OS as a J2ME MIDP 1.0 emulator; it later became a platform-independent J2ME MIDP 2.0 emulator with its own implementations of M3G (JSR 184) and SVG (JSR 226). [ 50 ] [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] [ 56 ]
WMA (JSR 120) has been implemented for mpowerplayer SDK as an open-source library. [ 57 ]
Development stalled after the release of mpowerplayer SDK version 2.0.1185 in 2007. A ZIP of the latest distribution package is available for download from the archived official website on the Wayback Machine . [ 21 ] | https://en.wikipedia.org/wiki/MicroEmulator |
MicroPort is a multinational medical technology developer and manufacturer headquartered primarily in Shanghai, China . It mainly designs and produces medical devices for a range of fields including cardiology , interventional radiology , orthopedics , electrophysiology , and surgical management . [ 2 ] MicroPort is considered one of the global Medtech Big 100 and is consistently ranked as the leading spender on research and development by percentage of revenue. [ 3 ] [ 4 ]
MicroPort was founded in 1998 by Zhaohua Chang, who currently serves as CEO, chairman, and Director. [ 5 ] The company rose to prominence from the early success of its coronary stent line due to its focus on serving the needs of the Chinese device market. [ 6 ] [ 7 ] It is now one of the top global manufacturers of cardiac interventional devices. [ 8 ] Notably, it produces the world's first and only commercially available targeted drug-eluting stent system, which uses a significantly smaller amount of drug than traditional drug-eluting stents while maintaining effectiveness. [ 9 ] [ 10 ] [ 11 ] [ 12 ] As of early 2018, MicroPort is one of the few medical device companies still developing a commercial coronary bioresorbable stent, with ongoing clinical trials. [ 13 ] [ 14 ] [ 15 ]
Beginning in the 2010s, MicroPort has rapidly expanded around the world via international acquisitions into other medical device industries, including orthopedics and cardiac rhythm management. [ 16 ] [ 17 ] These acquisitions have been followed by substantial local investment, including a US$398 million investment in 2019 to develop pacemakers and defibrillators in France . [ 18 ]
In 2022, MicroPort established its US headquarters in Irvine, California , with facilities that include a manufacturing base and innovation center. [ 19 ] As of 2022, its principal business is valued at over US$6.5 billion. [ 20 ]
In 2014, MicroPort expanded operations in the United States by acquiring Wright Medical's OrthoRecon business, becoming the 6th-largest international producer of orthopedic devices at the time. [ 21 ] MicroPort's orthopedic business is based in Arlington, Tennessee , and in 2018 expanded into India . [ 22 ]
In 2018, MicroPort and LivaNova closed the sale of LivaNova's cardiac rhythm management business for $190M. [ 17 ]
In 2018, MicroPort purchased Lombard Medical, a UK-based endovascular device company, from bankruptcy after it defaulted on loans in early 2018. [ 23 ]
In 2021, MicroPort purchased Hemovent GmbH, a German-based manufacturer of extracorporeal life support systems. [ 24 ] | https://en.wikipedia.org/wiki/MicroPort |
Micro ribonucleic acid ( microRNA , miRNA , μRNA ) are small, single-stranded, non-coding RNA molecules containing 21–23 nucleotides . [ 1 ] Found in plants, animals, and even some viruses, miRNAs are involved in RNA silencing and post-transcriptional regulation of gene expression . [ 2 ] [ 3 ] miRNAs base-pair to complementary sequences in messenger RNA (mRNA) molecules, [ 4 ] then silence said mRNA molecules by one or more of the following processes: [ 1 ] [ 5 ]
In cells of humans and other animals, miRNAs primarily act by destabilizing the mRNA. [ 6 ] [ 7 ]
miRNAs resemble the small interfering RNAs (siRNAs) of the RNA interference (RNAi) pathway, except miRNAs derive from regions of RNA transcripts that fold back on themselves to form short stem-loops (hairpins), whereas siRNAs derive from longer regions of double-stranded RNA . [ 2 ] The human genome may encode over 1900 miRNAs. [ 8 ] [ 9 ] However, only about 500 human miRNAs represent bona fide miRNAs in the manually curated miRNA gene database MirGeneDB . [ 10 ]
miRNAs are abundant in many mammalian cell types. [ 11 ] [ 12 ] They appear to target about 60% of the genes of humans and other mammals. [ 13 ] [ 14 ] Many miRNAs are evolutionarily conserved, which implies that they have important biological functions. [ 15 ] [ 1 ] For example, 90 families of miRNAs have been conserved since at least the common ancestor of mammals and fish, and most of these conserved miRNAs have important functions, as shown by studies in which genes for one or more members of a family have been knocked out in mice. [ 1 ]
In 2024, American scientists Victor Ambros and Gary Ruvkun were awarded the Nobel Prize in Physiology or Medicine for their work on the discovery of miRNA and its role in post-transcriptional gene regulation . [ 16 ] [ 17 ] [ 18 ]
The first miRNA was discovered in the early 1990s. [ 19 ] However, miRNAs were not recognized as a distinct class of biological regulators until the early 2000s. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] Research revealed different sets of miRNAs expressed in different cell types and tissues [ 12 ] [ 25 ] and multiple roles for miRNAs in plant and animal development and in many other biological processes. [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] Aberrant miRNA expression is implicated in disease states, and miRNA-based therapies are under investigation. [ 33 ] [ 34 ] [ 35 ] [ 36 ]
The first miRNA was discovered in 1993 by a group led by Victor Ambros and including Lee and Feinbaum. However, additional insight into its mode of action required simultaneously published work by Gary Ruvkun 's team, including Wightman and Ha. [ 19 ] [ 37 ] These groups published back-to-back papers on the lin-4 gene, which was known to control the timing of C. elegans larval development by repressing the lin-14 gene. When Lee et al. isolated the lin-4 miRNA, they found that instead of producing an mRNA encoding a protein, it produced short non-coding RNAs , one of which was a ~22-nucleotide RNA that contained sequences partially complementary to multiple sequences in the 3' UTR of the lin-14 mRNA. [ 19 ] This complementarity was proposed to inhibit the translation of the lin-14 mRNA into the LIN-14 protein. At the time, the lin-4 small RNA was thought to be a nematode idiosyncrasy.
In 2000, a second small RNA was characterized: let-7 RNA, which represses lin-41 to promote a later developmental transition in C. elegans . [ 20 ] The let-7 RNA was found to be conserved in many species, leading to the suggestion that let-7 RNA and additional "small temporal RNAs" might regulate the timing of development in diverse animals, including humans. [ 21 ]
A year later, the lin-4 and let-7 RNAs were found to be part of a large class of small RNAs present in C. elegans , Drosophila and human cells. [ 22 ] [ 23 ] [ 24 ] The many RNAs of this class resembled the lin-4 and let-7 RNAs, except their expression patterns were usually inconsistent with a role in regulating the timing of development. This suggested that most might function in other types of regulatory pathways. At this point, researchers started using the term "microRNA" to refer to this class of small regulatory RNAs. [ 22 ] [ 23 ] [ 24 ]
The first human disease associated with deregulation of miRNAs was chronic lymphocytic leukemia . In this disorder, the miRNAs have a dual role working as both tumor suppressors and oncogenes. [ 38 ]
Under a standard nomenclature system, names are assigned to experimentally confirmed miRNAs before publication. [ 39 ] [ 40 ] The prefix "miR" is followed by a dash and a number, the latter often indicating order of naming. For example, miR-124 was named and likely discovered prior to miR-456. A capitalized "miR-" refers to the mature form of the miRNA, while the uncapitalized "mir-" refers to the pre-miRNA and the pri-miRNA. [ 41 ] The genes encoding miRNAs are also named using the same three-letter prefix according to the conventions of each organism's gene nomenclature. For example, the official miRNA gene names in some organisms are mir-1 in C. elegans and Drosophila, Mir1 in Rattus norvegicus , and MIR25 in human.
miRNAs with nearly identical sequences except for one or two nucleotides are annotated with an additional lower-case letter. For example, miR-124a is closely related to miR-124b.
Pre-miRNAs, pri-miRNAs and genes that lead to 100% identical mature miRNAs but that are located at different places in the genome are indicated with an additional dash-number suffix. For example, the pre-miRNAs hsa-mir-194-1 and hsa-mir-194-2 lead to an identical mature miRNA (hsa-miR-194) but are from genes located in different genome regions.
Species of origin is designated with a three-letter prefix, e.g., hsa-miR-124 is a human ( Homo sapiens ) miRNA and oar-miR-124 is a sheep ( Ovis aries ) miRNA. Other common prefixes include "v" for viral (miRNA encoded by a viral genome) and "d" for Drosophila miRNA (a fruit fly commonly studied in genetic research).
When two mature microRNAs originate from opposite arms of the same pre-miRNA and are found in roughly similar amounts, they are denoted with a -3p or -5p suffix. (In the past, this distinction was also made with "s" ( sense ) and "as" (antisense)). However, the mature microRNA found from one arm of the hairpin is usually much more abundant than that found from the other arm, [ 2 ] in which case, an asterisk following the name indicates the mature species found at low levels from the opposite arm of a hairpin. For example, miR-124 and miR-124* share a pre-miRNA hairpin, but much more miR-124 is found in the cell.
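The naming conventions above are regular enough to be parsed mechanically. The following Python sketch (a hypothetical helper, not part of any official tool; it covers only the patterns described in this section, not special names such as lin-4 or let-7) splits a miRNA name into its components:

```python
import re

# Hypothetical helper covering the conventions described above:
# [species-]miR|mir-NUMBER[letter][-locus][-5p|-3p][*]
NAME_RE = re.compile(
    r"^(?:(?P<species>[a-z]{1,4})-)?"  # species prefix, e.g. "hsa"
    r"(?P<form>miR|mir)-"              # "miR" = mature, "mir" = pre-/pri-miRNA
    r"(?P<number>\d+)"                 # family number, e.g. 124
    r"(?P<letter>[a-z])?"              # paralog letter, e.g. "a" in miR-124a
    r"(?:-(?P<locus>\d+))?"            # genomic-locus suffix, e.g. mir-194-1
    r"(?:-(?P<arm>[35]p))?"            # arm suffix, -5p or -3p
    r"(?P<star>\*)?$"                  # older notation for the minor arm
)

def parse_mirna_name(name: str) -> dict:
    """Split a miRNA name such as 'hsa-miR-124-3p' into its parts."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized miRNA name: {name!r}")
    parts = m.groupdict()
    parts["mature"] = parts["form"] == "miR"  # capitalization convention
    return parts

info = parse_mirna_name("hsa-miR-124-3p")
print(info["species"], info["number"], info["arm"])  # → hsa 124 3p
```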
Plant miRNAs usually have near-perfect pairing with their mRNA targets, which induces gene repression through cleavage of the target transcripts. [ 26 ] [ 42 ] In contrast, animal miRNAs are able to recognize their target mRNAs by using as few as 6–8 nucleotides (the seed region) at the 5' end of the miRNA, [ 13 ] [ 43 ] [ 44 ] which is not enough pairing to induce cleavage of the target mRNAs. [ 4 ] Combinatorial regulation is a feature of miRNA regulation in animals. [ 4 ] [ 45 ] A given miRNA may have hundreds of different mRNA targets, and a given target might be regulated by multiple miRNAs. [ 14 ] [ 46 ]
Estimates of the average number of unique messenger RNAs that are targets for repression by a typical miRNA vary, depending on the estimation method, [ 47 ] but multiple approaches show that mammalian miRNAs can have many unique targets. For example, an analysis of the miRNAs highly conserved in vertebrates shows that each has, on average, roughly 400 conserved targets. [ 14 ] Likewise, experiments show that a single miRNA species can reduce the stability of hundreds of unique messenger RNAs. [ 48 ] Other experiments show that a single miRNA species may repress the production of hundreds of proteins, but that this repression often is relatively mild (much less than 2-fold). [ 49 ] [ 50 ]
As many as 40% of miRNA genes may lie in the introns or even exons of other genes. [ 51 ] These are usually, though not exclusively, found in a sense orientation, [ 52 ] [ 53 ] and thus usually are regulated together with their host genes. [ 51 ] [ 54 ] [ 55 ]
The DNA template is not the final word on mature miRNA production: 6% of human miRNAs show RNA editing ( IsomiRs ), the site-specific modification of RNA sequences to yield products different from those encoded by their DNA. This increases the diversity and scope of miRNA action beyond that implied by the genome alone.
miRNA genes are usually transcribed by RNA polymerase II (Pol II). [ 56 ] [ 57 ] The polymerase often binds to a promoter found near the DNA sequence, encoding what will become the hairpin loop of the pre-miRNA. The resulting transcript is capped with a specially modified nucleotide at the 5' end, polyadenylated with multiple adenosines (a poly(A) tail), [ 56 ] [ 52 ] and spliced . Animal miRNAs are initially transcribed as part of one arm of an ~80 nucleotide RNA hairpin that in turn forms part of a several hundred nucleotide-long miRNA precursor termed a pri-miRNA. [ 56 ] [ 52 ] When a hairpin precursor is found in the 3' UTR, a transcript may serve as a pri-miRNA and a mRNA. [ 52 ] RNA polymerase III (Pol III) transcribes some miRNAs, especially those with upstream Alu sequences , transfer RNAs (tRNAs), and mammalian wide interspersed repeat (MWIR) promoter units. [ 58 ]
A single pri-miRNA may contain from one to six miRNA precursors. These hairpin loop structures are composed of about 70 nucleotides each. Each hairpin is flanked by sequences necessary for efficient processing.
The double-stranded RNA (dsRNA) structure of the hairpins in a pri-miRNA is recognized by a nuclear protein known as DiGeorge Syndrome Critical Region 8 (DGCR8 or "Pasha" in invertebrates ), named for its association with DiGeorge Syndrome . DGCR8 associates with the enzyme Drosha , a protein that cuts RNA, to form the Microprocessor complex . [ 59 ] [ 60 ] In this complex, DGCR8 orients the catalytic RNase III domain of Drosha to liberate hairpins from pri-miRNAs by cleaving RNA about eleven nucleotides from the hairpin base (one helical dsRNA turn into the stem). [ 61 ] [ 62 ] The resulting product, termed a pre-miRNA (precursor miRNA), has a two-nucleotide overhang at its 3' end, with 3' hydroxyl and 5' phosphate groups. Sequence motifs downstream of the pre-miRNA that are important for efficient processing have been identified. [ 63 ] [ 64 ] [ 65 ]
Pre-miRNAs that are spliced directly out of introns, bypassing the Microprocessor complex, are known as " mirtrons ." [ 66 ] Mirtrons have been found in Drosophila , C. elegans , and mammals. [ 66 ] [ 67 ]
As many as 16% of pre-miRNAs may be altered through nuclear RNA editing . [ 68 ] [ 69 ] [ 70 ] Most commonly, enzymes known as adenosine deaminases acting on RNA (ADARs) catalyze adenosine to inosine (A to I) transitions. RNA editing can halt nuclear processing (for example, of pri-miR-142, leading to degradation by the ribonuclease Tudor-SN) and alter downstream processes including cytoplasmic miRNA processing and target specificity (e.g., by changing the seed region of miR-376 in the central nervous system). [ 68 ]
Pre-miRNA hairpins are exported from the nucleus in a process involving the nucleocytoplasmic shuttler Exportin-5 . This protein, a member of the karyopherin family , recognizes a two-nucleotide overhang left by the RNase III enzyme Drosha at the 3' end of the pre-miRNA hairpin. Exportin-5-mediated transport to the cytoplasm is energy-dependent, using guanosine triphosphate (GTP) bound to the Ran protein. [ 71 ]
In the cytoplasm , the pre-miRNA hairpin is cleaved by the RNase III enzyme Dicer . [ 72 ] This endoribonuclease interacts with 5' and 3' ends of the hairpin [ 73 ] and cuts away the loop joining the 3' and 5' arms, yielding an imperfect miRNA:miRNA* duplex about 22 nucleotides in length. [ 72 ] Overall hairpin length and loop size influence the efficiency of Dicer processing. The imperfect nature of the miRNA:miRNA* pairing also affects cleavage. [ 72 ] [ 74 ] Some of the G-rich pre-miRNAs can potentially adopt the G-quadruplex structure as an alternative to the canonical hairpin structure. For example, human pre-miRNA 92b adopts a G-quadruplex structure which is resistant to the Dicer mediated cleavage in the cytoplasm . [ 75 ] Although either strand of the duplex may potentially act as a functional miRNA, only one strand is usually incorporated into the RNA-induced silencing complex (RISC) where the miRNA and its mRNA target interact.
While the majority of miRNAs are located within the cell, some miRNAs, commonly known as circulating miRNAs or extracellular miRNAs, have also been found in extracellular environment, including various biological fluids and cell culture media. [ 76 ] [ 77 ]
miRNA biogenesis in plants differs from animal biogenesis mainly in the steps of nuclear processing and export. Instead of being cleaved by two different enzymes, once inside and once outside the nucleus, both cleavages of the plant miRNA are performed by a Dicer homolog, called Dicer-like1 (DL1). DL1 is expressed only in the nucleus of plant cells, which indicates that both reactions take place inside the nucleus. Before plant miRNA:miRNA* duplexes are transported out of the nucleus, their 3' overhangs are methylated by an RNA methyltransferase protein called Hua-Enhancer1 (HEN1). The duplexes are then transported out of the nucleus to the cytoplasm by a protein called Hasty (HST), an Exportin 5 homolog, where they disassemble and the mature miRNA is incorporated into the RISC. [ 78 ]
The mature miRNA is part of an active RNA-induced silencing complex (RISC) containing Dicer and many associated proteins. [ 79 ] RISC is also known as a microRNA ribonucleoprotein complex (miRNP); [ 80 ] A RISC with incorporated miRNA is sometimes referred to as a "miRISC."
Dicer processing of the pre-miRNA is thought to be coupled with unwinding of the duplex. Generally, only one strand is incorporated into the miRISC, selected on the basis of its thermodynamic instability and weaker base-pairing on the 5' end relative to the other strand. [ 81 ] [ 82 ] [ 83 ] The position of the hairpin may also influence strand choice. [ 84 ] The other strand, called the passenger strand due to its lower levels in the steady state, is denoted with an asterisk (*) and is normally degraded. In some cases, both strands of the duplex are viable and become functional miRNA that target different mRNA populations. [ 85 ]
Members of the Argonaute (Ago) protein family are central to RISC function. Argonautes are needed for miRNA-induced silencing and contain two conserved RNA binding domains: a PAZ domain that can bind the single stranded 3' end of the mature miRNA and a PIWI domain that structurally resembles ribonuclease-H and functions to interact with the 5' end of the guide strand. They bind the mature miRNA and orient it for interaction with a target mRNA. Some argonautes, for example human Ago2, cleave target transcripts directly; argonautes may also recruit additional proteins to achieve translational repression. [ 86 ] The human genome encodes eight argonaute proteins divided by sequence similarities into two families: AGO (with four members present in all mammalian cells and called E1F2C/hAgo in humans), and PIWI (found in the germline and hematopoietic stem cells). [ 80 ] [ 86 ]
Additional RISC components include TRBP [human immunodeficiency virus (HIV) transactivating response RNA (TAR) binding protein], [ 87 ] PACT (protein activator of the interferon -induced protein kinase ), the SMN complex, fragile X mental retardation protein (FMRP), Tudor staphylococcal nuclease-domain-containing protein (Tudor-SN), the putative DNA helicase MOV10 , and the RNA recognition motif containing protein TNRC6B . [ 71 ] [ 88 ] [ 89 ]
Gene silencing may occur either via mRNA degradation or preventing mRNA from being translated. For example, miR16 contains a sequence complementary to the AU-rich element [ 90 ] found in the 3'UTR of many unstable mRNAs, such as TNF alpha or GM-CSF . [ 91 ] It has been demonstrated that given complete complementarity between the miRNA and target mRNA sequence, Ago2 can cleave the mRNA and lead to direct mRNA degradation. In the absence of complementarity, silencing is achieved by preventing translation. [ 48 ] The relation of a miRNA and its target mRNA can be based on the simple negative regulation of a target mRNA, but it seems that a common scenario is the use of a "coherent feed-forward loop", a "mutual negative feedback loop" (also termed a double negative loop) or a "positive feedback/feed-forward loop". Some miRNAs work as buffers of random gene expression changes arising due to stochastic events in transcription, translation and protein stability. Such regulation is typically achieved by virtue of negative feedback loops or incoherent feed-forward loops that uncouple protein output from mRNA transcription.
Turnover of mature miRNA is needed for rapid changes in miRNA expression profiles. During miRNA maturation in the cytoplasm, uptake by the Argonaute protein is thought to stabilize the guide strand, while the opposite (* or "passenger") strand is preferentially destroyed. In what has been called a "Use it or lose it" strategy, Argonaute may preferentially retain miRNAs with many targets over miRNAs with few or no targets, leading to degradation of the non-targeting molecules. [ 92 ]
Decay of mature miRNAs in Caenorhabditis elegans is mediated by the 5'-to-3' exoribonuclease XRN2 , also known as Rat1p. [ 93 ] In plants, SDN (small RNA degrading nuclease) family members degrade miRNAs in the opposite (3'-to-5') direction. Similar enzymes are encoded in animal genomes, but their roles have not been described. [ 92 ]
Several miRNA modifications affect miRNA stability. As indicated by work in the model organism Arabidopsis thaliana (thale cress), mature plant miRNAs appear to be stabilized by the addition of methyl moieties at the 3' end. The 2'-O-conjugated methyl groups block the addition of uracil (U) residues by uridyltransferase enzymes, a modification that may be associated with miRNA degradation. However, uridylation may also protect some miRNAs; the consequences of this modification are incompletely understood. Uridylation of some animal miRNAs has been reported. Both plant and animal miRNAs may be altered by addition of adenine (A) residues to the 3' end of the miRNA. An extra A added to the end of mammalian miR-122 , a liver-enriched miRNA important in hepatitis C , stabilizes the molecule and plant miRNAs ending with an adenine residue have slower decay rates. [ 92 ]
The function of miRNAs appears to be in gene regulation. For that purpose, a miRNA is complementary to a part of one or more messenger RNAs (mRNAs). Animal miRNAs are usually complementary to a site in the 3' UTR whereas plant miRNAs are usually complementary to coding regions of mRNAs. [ 95 ] Perfect or near perfect base pairing with the target RNA promotes cleavage of the RNA. [ 96 ] This is the primary mode of plant miRNAs. [ 97 ] In animals the match-ups are imperfect.
For partially complementary microRNAs to recognise their targets, nucleotides 2–7 of the miRNA (its 'seed region' [ 13 ] [ 43 ] ) must be perfectly complementary. [ 98 ] Animal miRNAs inhibit protein translation of the target mRNA [ 99 ] (this is present but less common in plants). [ 97 ] Partially complementary microRNAs can also speed up deadenylation , causing mRNAs to be degraded sooner. [ 100 ] While degradation of miRNA-targeted mRNA is well documented, whether or not translational repression is accomplished through mRNA degradation, translational inhibition, or a combination of the two is hotly debated. Recent work on miR-430 in zebrafish, as well as on bantam-miRNA and miR-9 in Drosophila cultured cells, shows that translational repression is caused by the disruption of translation initiation , independent of mRNA deadenylation. [ 101 ] [ 102 ]
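As a toy illustration of the seed rule described above (greatly simplified: real target-prediction tools also weigh site context, conservation, and pairing outside the seed, and the 3' UTR sequence below is invented for the example), a 3' UTR can be scanned for sites perfectly complementary to miRNA nucleotides 2–7:

```python
# Toy sketch of seed-site scanning: find positions in a 3' UTR that are
# perfectly Watson-Crick complementary to the miRNA seed (nucleotides 2-7).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    seed = mirna[1:7]  # nucleotides 2-7, counted from the miRNA 5' end
    # A target site pairs antiparallel to the seed, so search for its
    # reverse complement in the UTR.
    site = "".join(COMPLEMENT[nt] for nt in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"    # a let-7-like mature miRNA sequence
utr = "AAACUACCUCAAAAGAACUACCUCAGG"  # invented 3' UTR with two seed sites
print(seed_match_sites(mirna, utr))  # → [4, 18]
```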
miRNAs occasionally also cause histone modification and DNA methylation of promoter sites, which affects the expression of target genes. [ 103 ] [ 104 ]
Nine mechanisms of miRNA action are described and assembled in a unified mathematical model: [ 94 ]
It is often impossible to discern these mechanisms using experimental data about stationary reaction rates. Nevertheless, they are differentiated in dynamics and have different kinetic signatures . [ 94 ]
Unlike plant microRNAs, animal microRNAs target diverse genes. [ 43 ] However, genes involved in functions common to all cells, such as gene expression, have relatively fewer microRNA target sites and seem to be under selection to avoid targeting by microRNAs. [ 105 ] There is a strong correlation between ITPR gene regulation and mir-92 and mir-19. [ 106 ]
dsRNA can also activate gene expression , a mechanism that has been termed "small RNA-induced gene activation" or RNAa . dsRNAs targeting gene promoters can induce potent transcriptional activation of associated genes. This was demonstrated in human cells using synthetic dsRNAs termed small activating RNAs (saRNAs), [ 107 ] but has also been demonstrated for endogenous microRNA. [ 108 ]
Interactions between microRNAs and complementary sequences on genes and even pseudogenes that share sequence homology are thought to be a back channel of communication regulating expression levels between paralogous genes (genes having a similar structure indicating divergence from a common ancestral gene). Given the name "competing endogenous RNAs" ( ceRNAs ), these microRNAs bind to "microRNA response elements" on genes and pseudogenes and may provide another explanation for the persistence of non-coding DNA . [ 109 ]
miRNAs are also found as extracellular circulating miRNAs . [ 110 ] Circulating miRNAs are released into body fluids including blood and cerebrospinal fluid and have the potential to serve as biomarkers in a number of diseases. [ 110 ] [ 111 ] Some research shows that the mRNA cargo of exosomes may have a role in implantation: it can weaken or support the adhesion between trophoblast and endometrium by down-regulating or up-regulating the expression of genes involved in adhesion/invasion. [ 112 ]
Moreover, miRNAs such as miR-183/96/182 seem to play a key role in circadian rhythm . [ 113 ]
miRNAs are well conserved in both plants and animals, and are thought to be a vital and evolutionarily ancient component of gene regulation. [ 114 ] [ 115 ] [ 116 ] [ 117 ] [ 118 ] While core components of the microRNA pathway are conserved between plants and animals , miRNA repertoires in the two kingdoms appear to have emerged independently with different primary modes of action. [ 119 ] [ 120 ]
microRNAs are useful phylogenetic markers because of their apparently low rate of evolution. [ 121 ] microRNAs' origin as a regulatory mechanism developed from previous RNAi machinery that was initially used as a defense against exogenous genetic material such as viruses. [ 122 ] Their origin may have permitted the development of morphological innovation, and by making gene expression more specific and 'fine-tunable', permitted the genesis of complex organs [ 123 ] and perhaps, ultimately, complex life. [ 118 ] Rapid bursts of morphological innovation are generally associated with a high rate of microRNA accumulation. [ 121 ] [ 123 ]
New microRNAs are created in multiple ways. Novel microRNAs can originate from the random formation of hairpins in "non-coding" sections of DNA (i.e. introns or intergene regions), but also by the duplication and modification of existing microRNAs. [ 124 ] microRNAs can also form from inverted duplications of protein-coding sequences, which allows for the creation of a foldback hairpin structure. [ 125 ] The rate of evolution (i.e. nucleotide substitution) in recently originated microRNAs is comparable to that elsewhere in the non-coding DNA, implying evolution by neutral drift; however, older microRNAs have a much lower rate of change (often less than one substitution per hundred million years), [ 118 ] suggesting that once a microRNA gains a function, it undergoes purifying selection. [ 124 ] Individual regions within an miRNA gene face different evolutionary pressures, where regions that are vital for processing and function have higher levels of conservation. [ 126 ] At this point, a microRNA is rarely lost from an animal's genome, [ 118 ] although newer microRNAs (thus presumably non-functional) are frequently lost. [ 124 ] In Arabidopsis thaliana , the net flux of miRNA genes has been predicted to be between 1.2 and 3.3 genes per million years. [ 127 ] This makes them a valuable phylogenetic marker, and they are being looked upon as a possible solution to outstanding phylogenetic problems such as the relationships of arthropods . [ 128 ] On the other hand, in multiple cases microRNAs correlate poorly with phylogeny, and it is possible that their phylogenetic concordance largely reflects a limited sampling of microRNAs. [ 129 ]
microRNAs feature in the genomes of most eukaryotic organisms, from the brown algae [ 130 ] to the animals. However, the difference in how these microRNAs function and the way they are processed suggests that microRNAs arose independently in plants and animals. [ 131 ]
Focusing on the animals, the genome of Mnemiopsis leidyi [ 132 ] appears to lack recognizable microRNAs, as well as the nuclear proteins Drosha and Pasha , which are critical to canonical microRNA biogenesis. It is the only animal thus far reported to be missing Drosha. MicroRNAs play a vital role in the regulation of gene expression in all non-ctenophore animals investigated thus far except for Trichoplax adhaerens , the first known member of the phylum Placozoa . [ 133 ]
Across all species, in excess of 5000 different miRNAs had been identified by March 2010. [ 134 ] Whilst short RNA sequences (50 – hundreds of base pairs) of a broadly comparable function occur in bacteria, bacteria lack true microRNAs. [ 135 ]
As researchers focused on miRNA expression in physiological and pathological processes, various technical variables related to microRNA isolation emerged. The stability of stored miRNA samples has been questioned. [ 77 ] microRNAs degrade much more easily than mRNAs, partly because of their length, but also because of ubiquitously present RNases . This makes it necessary to cool samples on ice and to use RNase -free equipment. [ 136 ]
microRNA expression can be quantified in a two-step polymerase chain reaction process of modified RT-PCR followed by quantitative PCR . Variations of this method achieve absolute or relative quantification. [ 137 ] miRNAs can also be hybridized to microarrays , slides or chips with probes to hundreds or thousands of miRNA targets, so that relative levels of miRNAs can be determined in different samples. [ 138 ] microRNAs can be both discovered and profiled by high-throughput sequencing methods ( microRNA sequencing ). [ 139 ] The activity of an miRNA can be experimentally inhibited using a locked nucleic acid (LNA) oligo , a Morpholino oligo [ 140 ] [ 141 ] or a 2'-O-methyl RNA oligo. [ 142 ] A specific miRNA can be silenced by a complementary antagomir . microRNA maturation can be inhibited at several points by steric-blocking oligos. [ 143 ] The miRNA target site of an mRNA transcript can also be blocked by a steric-blocking oligo. [ 144 ] For the "in situ" detection of miRNA, LNA [ 145 ] or Morpholino [ 146 ] probes can be used. The locked conformation of LNA results in enhanced hybridization properties and increases sensitivity and selectivity, making it ideal for detection of short miRNA. [ 147 ]
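The relative-quantification variant of this two-step RT-qPCR approach is commonly evaluated with the 2^-ΔΔCt method. The sketch below illustrates that calculation; the Ct values and the choice of reference RNA are hypothetical.

```python
# Relative miRNA quantification by the 2^-ddCt (Livak) method,
# as used after two-step RT-qPCR. All Ct values are hypothetical.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target miRNA in a sample vs. a control,
    normalized to a reference RNA (e.g. a stable small RNA)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: the target miRNA amplifies two cycles earlier in the
# sample than in the control (same reference Ct) -> 4-fold up.
fold = relative_expression(24.0, 20.0, 26.0, 20.0)
print(fold)  # 4.0
```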
High-throughput quantification of miRNAs is error prone because of the larger variance (compared to mRNAs ) that comes with methodological problems. mRNA expression is therefore often analyzed to check for miRNA effects on their levels (e.g. in [ 148 ] ). Databases can be used to pair mRNA and miRNA data based on predictions of miRNA targets from their base sequence. [ 149 ] [ 150 ] While this is usually done after miRNAs of interest have been detected (e.g. because of high expression levels), analysis tools that integrate mRNA and miRNA expression information have been proposed. [ 151 ] [ 152 ]
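One way such integration can work is to flag predicted miRNA–target pairs whose expression changes are inversely related. A minimal sketch, with hypothetical miRNA names, gene names, and fold changes:

```python
# Pairing miRNA and mRNA expression data with predicted targets:
# a miRNA effect is supported when the miRNA goes up while its
# predicted target mRNA goes down (or vice versa). All names and
# log2 fold changes below are hypothetical.

mirna_log2fc = {"miR-A": 2.1, "miR-B": -0.1}
mrna_log2fc = {"GENE1": -1.8, "GENE2": 0.2, "GENE3": -2.5}
predicted_targets = {"miR-A": ["GENE1", "GENE2"], "miR-B": ["GENE3"]}

def supported_pairs(mirna_fc, mrna_fc, targets, threshold=1.0):
    """Return (miRNA, mRNA) pairs whose fold changes are inversely
    related beyond the given log2 threshold."""
    pairs = []
    for mir, genes in targets.items():
        for gene in genes:
            up_down = mirna_fc[mir] >= threshold and mrna_fc[gene] <= -threshold
            down_up = mirna_fc[mir] <= -threshold and mrna_fc[gene] >= threshold
            if up_down or down_up:
                pairs.append((mir, gene))
    return pairs

print(supported_pairs(mirna_log2fc, mrna_log2fc, predicted_targets))
# [('miR-A', 'GENE1')]
```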
Just as miRNA is involved in the normal functioning of eukaryotic cells, so has dysregulation of miRNA been associated with disease. A manually curated, publicly available database, miR2Disease, documents known relationships between miRNA dysregulation and human disease. [ 153 ]
A mutation in the seed region of miR-96 causes hereditary progressive hearing loss. [ 154 ]
A mutation in the seed region of miR-184 causes hereditary keratoconus with anterior polar cataract. [ 155 ]
Deletion of the miR-17~92 cluster causes skeletal and growth defects. [ 156 ]
The first human disease known to be associated with miRNA deregulation was chronic lymphocytic leukemia . [ 157 ] Many other miRNAs also have links with cancer and accordingly are sometimes referred to as " oncomirs ". [ 158 ] In malignant B cells miRNAs participate in pathways fundamental to B cell development like B-cell receptor (BCR) signalling, B-cell migration/adhesion, cell-cell interactions in immune niches and the production and class-switching of immunoglobulins. MiRNAs influence B cell maturation, generation of pre-, marginal zone, follicular, B1, plasma and memory B cells. [ 159 ]
Another role for miRNAs in cancer is the use of their expression levels for prognosis. In NSCLC samples, low miR-324a levels may serve as an indicator of poor survival. [ 160 ] Either high miR-185 or low miR-133b levels may correlate with metastasis and poor survival in colorectal cancer . [ 161 ]
Furthermore, specific miRNAs may be associated with certain histological subtypes of colorectal cancer. For instance, expression levels of miR-205 and miR-373 have been shown to be increased in mucinous colorectal cancers and mucin-producing Ulcerative Colitis-associated colon cancers, but not in sporadic colonic adenocarcinoma that lack mucinous components. [ 162 ] In-vitro studies suggested that miR-205 and miR-373 may functionally induce different features of mucinous-associated neoplastic progression in intestinal epithelial cells. [ 162 ]
Hepatocellular carcinoma cell proliferation may arise from miR-21 interaction with MAP2K3, a tumor suppressor gene. [ 163 ] Optimal treatment for cancer involves accurately identifying patients for risk-stratified therapy. Those with a rapid response to initial treatment may benefit from truncated treatment regimens, showing the value of accurate disease response measures. Cell-free circulating miRNAs (cimiRNAs) are highly stable in blood, are overexpressed in cancer and are quantifiable within the diagnostic laboratory. In classical Hodgkin lymphoma , plasma miR-21, miR-494, and miR-1973 are promising disease response biomarkers. [ 164 ] Circulating miRNAs have the potential to assist clinical decision making and aid interpretation of positron emission tomography combined with computerized tomography . Such assays can be performed at each consultation to assess disease response and detect relapse.
MicroRNAs have the potential to be used as tools or targets for treatment of different cancers. [ 165 ] The specific microRNA, miR-506 has been found to work as a tumor antagonist in several studies. A significant number of cervical cancer samples were found to have downregulated expression of miR-506. Additionally, miR-506 works to promote apoptosis of cervical cancer cells, through its direct target hedgehog pathway transcription factor, Gli3. [ 166 ] [ 167 ]
Many miRNAs can directly target and inhibit cell cycle genes to control cell proliferation . A new strategy for tumor treatment is to inhibit tumor cell proliferation by repairing the defective miRNA pathway in tumors. [ 168 ] Cancer is caused by the accumulation of mutations from either DNA damage or uncorrected errors in DNA replication . [ 169 ] Defects in DNA repair cause the accumulation of mutations, which can lead to cancer. [ 170 ] Several genes involved in DNA repair are regulated by microRNAs. [ 171 ]
Germline mutations in DNA repair genes cause only 2–5% of colon cancer cases. [ 172 ] However, altered expression of microRNAs, causing DNA repair deficiencies, is frequently associated with cancers and may be an important causal factor. Among 68 sporadic colon cancers with reduced expression of the DNA mismatch repair protein MLH1 , most were found to be deficient due to epigenetic methylation of the CpG island of the MLH1 gene. [ 173 ] However, up to 15% of MLH1-deficiencies in sporadic colon cancers appeared to be due to over-expression of the microRNA miR-155, which represses MLH1 expression. [ 174 ]
In 29–66% [ 175 ] [ 176 ] of glioblastomas , DNA repair is deficient due to epigenetic methylation of the MGMT gene, which reduces protein expression of MGMT. However, for 28% of glioblastomas, the MGMT protein is deficient, but the MGMT promoter is not methylated. [ 175 ] In glioblastomas without methylated MGMT promoters, the level of microRNA miR-181d is inversely correlated with protein expression of MGMT and the direct target of miR-181d is the MGMT mRNA 3'UTR (the three prime untranslated region of MGMT mRNA). [ 175 ] Thus, in 28% of glioblastomas, increased expression of miR-181d and reduced expression of DNA repair enzyme MGMT may be a causal factor.
HMGA proteins (HMGA1a, HMGA1b and HMGA2) are implicated in cancer, and expression of these proteins is regulated by microRNAs. HMGA expression is almost undetectable in differentiated adult tissues, but is elevated in many cancers. HMGA proteins are polypeptides of ~100 amino acid residues characterized by a modular sequence organization. These proteins have three highly positively charged regions, termed AT hooks , that bind the minor groove of AT-rich DNA stretches in specific regions of DNA. Human neoplasias, including thyroid, prostatic, cervical, colorectal, pancreatic and ovarian carcinomas, show a strong increase of HMGA1a and HMGA1b proteins. [ 177 ] Transgenic mice with HMGA1 targeted to lymphoid cells develop aggressive lymphoma, showing that high HMGA1 expression is associated with cancers and that HMGA1 can act as an oncogene. [ 178 ] HMGA2 protein specifically targets the promoter of ERCC1 , thus reducing expression of this DNA repair gene. [ 179 ] ERCC1 protein expression was deficient in 100% of 47 evaluated colon cancers (though the extent to which HMGA2 was involved is not known). [ 180 ]
Single nucleotide polymorphisms (SNPs) can alter the binding of miRNAs to 3'UTRs, as in the case of hsa-mir-181a and hsa-mir-181b binding to the CDON tumor suppressor gene. [ 181 ]
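The effect of such a SNP can be illustrated with a simple seed-match check: a single base change in the 3'UTR site abolishes complementarity to the miRNA seed (positions 2–8). All sequences below are hypothetical, not the actual miR-181 site.

```python
# How a 3'UTR SNP can abolish a miRNA seed match.
# The miRNA and UTR sequences are hypothetical.

def seed_site(mirna):
    """Reverse complement (in DNA) of the miRNA seed (positions 2-8)."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]  # seed region: 1-based positions 2-8
    return "".join(comp[b] for b in reversed(seed))

def has_seed_match(mirna, utr):
    return seed_site(mirna) in utr

mirna = "UAGCUUAUCAGACUGAUGUUGA"   # hypothetical mature miRNA
utr_ref = "CCATAAGCTACCTG"         # reference 3'UTR fragment (DNA)
utr_snp = "CCATAAGGTACCTG"         # same fragment with a C>G SNP in the site

print(has_seed_match(mirna, utr_ref))  # True
print(has_seed_match(mirna, utr_snp))  # False
```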
The global role of miRNA function in the heart has been addressed by conditionally inhibiting miRNA maturation in the murine heart. This revealed that miRNAs play an essential role during its development. [ 182 ] [ 183 ] miRNA expression profiling studies demonstrate that expression levels of specific miRNAs change in diseased human hearts, pointing to their involvement in cardiomyopathies . [ 184 ] [ 185 ] [ 186 ] Furthermore, animal studies on specific miRNAs identified distinct roles for miRNAs both during heart development and under pathological conditions, including the regulation of key factors important for cardiogenesis, the hypertrophic growth response and cardiac conductance. [ 183 ] [ 187 ] [ 188 ] [ 189 ] [ 190 ] Another role for miRNA in cardiovascular diseases is to use their expression levels for diagnosis, prognosis or risk stratification. [ 191 ] miRNAs in animal models have also been linked to cholesterol metabolism and regulation.
Murine microRNA-712 is a potential biomarker (i.e. predictor) for atherosclerosis , a cardiovascular disease of the arterial wall associated with lipid retention and inflammation. [ 192 ] Non-laminar blood flow also correlates with development of atherosclerosis as mechanosensors of endothelial cells respond to the shear force of disturbed flow (d-flow). [ 193 ] A number of pro-atherogenic genes including matrix metalloproteinases (MMPs) are upregulated by d-flow, [ 193 ] mediating pro-inflammatory and pro-angiogenic signals. These findings were observed in ligated carotid arteries of mice to mimic the effects of d-flow. Within 24 hours, pre-existing immature miR-712 formed mature miR-712 suggesting that miR-712 is flow-sensitive. [ 193 ] Coinciding with these results, miR-712 is also upregulated in endothelial cells exposed to naturally occurring d-flow in the greater curvature of the aortic arch. [ 193 ]
The precursor sequence of miR-712 is generated from the murine ribosomal RN45s gene at the internal transcribed spacer region 2 (ITS2). [ 193 ] XRN1 is an exonuclease that degrades the ITS2 region during processing of RN45s. [ 193 ] Reduction of XRN1 under d-flow conditions therefore leads to the accumulation of miR-712. [ 193 ]
MiR-712 targets tissue inhibitor of metalloproteinases 3 (TIMP3). [ 193 ] TIMPs normally regulate activity of matrix metalloproteinases (MMPs) which degrade the extracellular matrix (ECM). Arterial ECM is mainly composed of collagen and elastin fibers, providing the structural support and recoil properties of arteries. [ 194 ] These fibers play a critical role in regulation of vascular inflammation and permeability, which are important in the development of atherosclerosis. [ 195 ] Expressed by endothelial cells, TIMP3 is the only ECM-bound TIMP. [ 194 ] A decrease in TIMP3 expression results in an increase of ECM degradation in the presence of d-flow. Consistent with these findings, inhibition of pre-miR712 increases expression of TIMP3 in cells, even when exposed to turbulent flow. [ 193 ]
TIMP3 also decreases the expression of TNFα (a pro-inflammatory regulator) during turbulent flow. [ 193 ] Activity of TNFα in turbulent flow was measured by the expression of TNFα-converting enzyme (TACE) in blood. TNFα decreased if miR-712 was inhibited or TIMP3 overexpressed, [ 193 ] suggesting that miR-712 and TIMP3 regulate TACE activity in turbulent flow conditions.
Anti-miR-712 effectively suppresses d-flow-induced miR-712 expression and increases TIMP3 expression. [ 193 ] Anti-miR-712 also inhibits vascular hyperpermeability, thereby significantly reducing atherosclerosis lesion development and immune cell infiltration. [ 193 ]
The human homolog of miR-712 was found on the RN45s homolog gene, which maintains similar miRNAs to mice. [ 193 ] Human miR-205 shares a similar sequence with murine miR-712 and is conserved across most vertebrates. [ 193 ] MiR-205 and miR-712 also share more than 50% of their cell signaling targets, including TIMP3. [ 193 ]
When tested, d-flow decreased the expression of XRN1 in human endothelial cells as it did in mouse endothelial cells, indicating a potentially common role of XRN1 in humans. [ 193 ]
Targeted deletion of Dicer in the FoxD1 -derived renal progenitor cells in a murine model resulted in a complex renal phenotype including expansion of nephron progenitors, fewer renin cells, smooth muscle arterioles , progressive mesangial loss and glomerular aneurysms. [ 196 ] High throughput whole transcriptome profiling of the FoxD1-Dicer knockout mouse model revealed ectopic upregulation of the pro-apoptotic gene Bcl2L11 (Bim) and dysregulation of the p53 pathway with increase in p53 effector genes including Bax , Trp53inp1 , Jun, Cdkn1a , Mmp2 , and Arid3a . p53 protein levels remained unchanged, suggesting that FoxD1 stromal miRNAs directly repress p53-effector genes. Using a lineage tracing approach followed by fluorescence-activated cell sorting , miRNA profiling of the FoxD1-derived cells not only comprehensively defined the transcriptional landscape of miRNAs that are critical for vascular development, but also identified key miRNAs that are likely to modulate the renal phenotype in its absence. These miRNAs include miRs-10a, 18a, 19b, 24, 30c, 92a, 106a, 130a, 152, 181a, 214, 222, 302a, 370, and 381 that regulate Bcl2L11 (Bim) and miRs-15b, 18a, 21, 30c, 92a, 106a, 125b-5p, 145, 214, 222, 296-5p and 302a that regulate p53-effector genes. Consistent with the profiling results, ectopic apoptosis was observed in the cellular derivatives of the FoxD1-derived progenitor lineage, reiterating the importance of renal stromal miRNAs in cellular homeostasis. [ 196 ]
MiRNAs are crucial for the healthy development and function of the nervous system . [ 197 ] Previous studies demonstrate that miRNAs can regulate neuronal differentiation and maturation at various stages. [ 198 ] MiRNAs also play important roles in synaptic development [ 199 ] (such as dendritogenesis or spine morphogenesis) and synaptic plasticity [ 200 ] (contributing to learning and memory). Elimination of miRNA formation in mice by experimental silencing of Dicer has led to pathological outcomes, such as reduced neuronal size, motor abnormalities (when silenced in striatal neurons [ 201 ] ), and neurodegeneration (when silenced in forebrain neurons [ 202 ] ). Altered miRNA expression has been found in neurodegenerative diseases (such as Alzheimer's disease , Parkinson's disease , and Huntington's disease [ 203 ] ) as well as many psychiatric disorders (including epilepsy , [ 204 ] schizophrenia , major depression , bipolar disorder , and anxiety disorders [ 205 ] [ 206 ] [ 207 ] ).
According to the Centers for Disease Control and Prevention, stroke is one of the leading causes of death and long-term disability in America. 87% of cases are ischemic strokes , which result from blockage of an artery in the brain that carries oxygen-rich blood. The obstruction of blood flow means the brain cannot receive necessary nutrients, such as oxygen and glucose, or remove wastes, such as carbon dioxide. [ 208 ] [ 209 ] miRNAs play a role in post-transcriptional gene silencing by targeting genes in the pathogenesis of cerebral ischemia, such as those in the inflammatory, angiogenesis, and apoptotic pathways. [ 210 ]
The vital role of miRNAs in gene expression is significant to addiction , specifically alcoholism . [ 211 ] Chronic alcohol abuse results in persistent changes in brain function mediated in part by alterations in gene expression . [ 211 ] Global regulation of many downstream genes by miRNAs is significant for the reorganization of synaptic connections and the long-term neural adaptations that underlie the behavioral change from alcohol consumption to withdrawal and/or dependence . [ 212 ] Up to 35 different miRNAs have been found to be altered in the alcoholic post-mortem brain, all of which target genes involved in the regulation of the cell cycle , apoptosis , cell adhesion , nervous system development and cell signaling . [ 211 ] Altered miRNA levels were found in the medial prefrontal cortex of alcohol-dependent mice, suggesting a role for miRNA in orchestrating translational imbalances and the creation of differentially expressed proteins within an area of the brain where complex cognitive behavior and decision making likely originate. [ 213 ]
miRNAs can be either upregulated or downregulated in response to chronic alcohol use. miR-206 expression increased in the prefrontal cortex of alcohol-dependent rats, targeting the transcription factor brain-derived neurotrophic factor ( BDNF ) and ultimately reducing its expression. BDNF plays a critical role in the formation and maturation of new neurons and synapses, suggesting a possible implication in synapse growth/ synaptic plasticity in alcohol abusers. [ 214 ] miR-155 , important in regulating alcohol-induced neuroinflammation responses, was found to be upregulated, suggesting the role of microglia and inflammatory cytokines in alcohol pathophysiology. [ 215 ] Downregulation of miR-382 was found in the nucleus accumbens , a structure in the basal forebrain significant in regulating feelings of reward that power motivational habits. miR-382 is the target for the dopamine receptor D1 (DRD1), and its overexpression results in the upregulation of DRD1 and delta fosB , a transcription factor that activates a series of transcription events in the nucleus accumbens that ultimately result in addictive behaviors. [ 216 ] Alternatively, overexpressing miR-382 resulted in attenuated drinking and the inhibition of DRD1 and delta fosB upregulation in rat models of alcoholism, demonstrating the possibility of using miRNA-targeted pharmaceuticals in treatments. [ 216 ]
miRNAs play crucial roles in the regulation of stem cell progenitors differentiating into adipocytes . [ 217 ] Studies to determine the role pluripotent stem cells play in adipogenesis were carried out in the immortalized human bone marrow -derived stromal cell line hMSC-Tert20. [ 218 ] Decreased expression of miR-155 , miR-221 , and miR-222 has been found during the adipogenic programming of both immortalized and primary hMSCs, suggesting that they act as negative regulators of differentiation. Conversely, ectopic expression of miRNAs 155 , 221 , and 222 significantly inhibited adipogenesis and repressed induction of the master regulators PPARγ and CCAAT/enhancer-binding protein alpha ( CEBPA ). [ 219 ] This paves the way for possible genetic obesity treatments.
Another class of miRNAs that regulate insulin resistance , obesity , and diabetes is the let-7 family. Let-7 accumulates in human tissues during the course of aging . [ 220 ] When let-7 was ectopically overexpressed to mimic accelerated aging, mice became insulin-resistant, and thus more prone to high-fat-diet-induced obesity and diabetes . [ 221 ] In contrast, when let-7 was inhibited by injections of let-7-specific antagomirs , mice became more insulin-sensitive and remarkably resistant to high-fat-diet-induced obesity and diabetes. Not only could let-7 inhibition prevent obesity and diabetes, it could also reverse and cure the condition. [ 222 ] These experimental findings suggest that let-7 inhibition could represent a new therapy for obesity and type 2 diabetes.
miRNAs also play crucial roles in the regulation of complex enzymatic cascades including the hemostatic blood coagulation system . [ 223 ] Large-scale studies of functional miRNA targeting have recently uncovered rational therapeutic targets in the hemostatic system. [ 224 ] [ 225 ] miRNAs have also been directly linked to calcium homeostasis in the endoplasmic reticulum , which is critical for cell differentiation in early development. [ 226 ]
miRNAs are considered to be key regulators of many developmental, homeostatic, and immune processes in plants. [ 227 ] Their roles in plant development include shoot apical meristem development, leaf growth, flower formation, seed production, or root expansion. [ 228 ] [ 229 ] [ 230 ] [ 231 ] In addition, they play a complex role in responses to various abiotic stresses comprising heat stress, low-temperature stress, drought stress, light stress, or gamma radiation exposure. [ 227 ]
Viral microRNAs play an important role in the regulation of gene expression of viral and/or host genes to benefit the virus . Hence, miRNAs play a key role in host–virus interactions and pathogenesis of viral diseases . [ 232 ] [ 233 ] The expression of transcription activators by human herpesvirus-6 DNA is believed to be regulated by viral miRNA. [ 234 ]
miRNAs can bind to target messenger RNA (mRNA) transcripts of protein-coding genes and negatively control their translation or cause mRNA degradation. It is of key importance to identify the miRNA targets accurately. [ 235 ] A comparison of the predictive performance of eighteen in silico algorithms is available. [ 236 ] Large scale studies of functional miRNA targeting suggest that many functional miRNAs can be missed by target prediction algorithms. [ 224 ] | https://en.wikipedia.org/wiki/MicroRNA |
MicroRNA sequencing (miRNA-seq) , a type of RNA-Seq , is the use of next-generation sequencing or massively parallel high-throughput DNA sequencing to sequence microRNAs , also called miRNAs. miRNA-seq differs from other forms of RNA-seq in that input material is often enriched for small RNAs. miRNA-seq allows researchers to examine tissue-specific expression patterns, disease associations, and isoforms of miRNAs, and to discover previously uncharacterized miRNAs. Evidence that dysregulated miRNAs play a role in diseases such as cancer [ 1 ] has positioned miRNA-seq to potentially become an important tool in the future for diagnostics and prognostics as costs continue to decrease. [ 2 ] Like other miRNA profiling technologies, miRNA-Seq has both advantages (sequence-independence, coverage) and disadvantages (high cost, infrastructure requirements, run length, and potential artifacts ). [ 3 ]
MicroRNAs (miRNAs) are a family of small ribonucleic acids, 21-25 nucleotides in length, that modulate protein expression through transcript degradation, inhibition of translation , or sequestering transcripts. [ 4 ] [ 5 ] [ 6 ] The first miRNA to be discovered, lin-4 , was found in a genetic mutagenesis screen to identify molecular elements controlling post-embryonic development of the nematode Caenorhabditis elegans . [ 7 ] The lin-4 gene encoded a 22 nucleotide RNA with conserved complementary binding sites in the 3’-untranslated region of the lin-14 mRNA transcript [ 8 ] and downregulated LIN-14 protein expression. [ 9 ] miRNAs are now thought to be involved in the regulation of many developmental and biological processes, including haematopoiesis ( miR-181 in Mus musculus [ 10 ] ), lipid metabolism ( miR-14 in Drosophila melanogaster [ 11 ] ) and neuronal development ( lsy-6 in Caenorhabditis elegans [ 12 ] ). [ 6 ] These discoveries necessitated development of techniques able to identify and characterize miRNAs, such as miRNA-seq.
MicroRNA sequencing (miRNA-seq) was developed to take advantage of next-generation sequencing or massively parallel high-throughput sequencing technologies in order to find novel miRNAs and their expression profiles in a given sample. miRNA sequencing in and of itself is not a new idea: initial methods used Sanger sequencing . Sequencing preparation involved creating libraries by cloning of DNA reverse transcribed from endogenous small RNAs of 21–25 bp, size-selected by column and gel electrophoresis . [ 13 ] However, this method is costly in time and resources, as each clone has to be individually amplified and prepared for sequencing, and it inadvertently favors miRNAs that are highly expressed. [ 6 ] Next-generation sequencing eliminates the need for the sequence-specific hybridization probes required in DNA microarray analysis as well as the laborious cloning required by the Sanger method. Additionally, next-generation sequencing platforms facilitate the sequencing of large pools of small RNAs in a single run. [ 14 ]
miRNA-seq can be performed using a variety of sequencing platforms. The first analysis of small RNAs using miRNA-seq methods examined approximately 1.4 million small RNAs from the model plant Arabidopsis thaliana using Lynx Therapeutics' Massively Parallel Signature Sequencing (MPSS) sequencing platform. This study demonstrated the potential of novel, high-throughput sequencing technologies for the study of small RNAs, and it showed that genomes generate large numbers of small RNAs with plants as particularly rich sources of small RNAs. [ 15 ] Later studies used other sequencing technologies, such as a study in C. elegans which identified 18 novel miRNA genes as well as a new class of nematode small RNAs termed 21U-RNAs . [ 16 ] Another study comparing small RNA profiles of human cervical tumours and normal tissue, utilized the Illumina (company) Genome Analyzer to identify 64 novel human miRNA genes as well as 67 differentially expressed miRNAs. [ 17 ] Applied Biosystems SOLiD sequencing platform has also been used to examine the prognostic value of miRNAs in detecting human breast cancer. [ 18 ]
Sequence library construction can be performed using a variety of different kits depending on the high-throughput sequencing platform being employed. However, there are several common steps for small RNA sequencing preparation. [ 19 ] [ 20 ]
Total RNA Isolation
In a given sample, all RNA is extracted and isolated using a guanidinium isothiocyanate/phenol/chloroform (GITC/phenol) method or a commercial product such as Trizol ( Invitrogen ) reagent. A starting quantity of 50–100 μg of total RNA (1 g of tissue typically yields 1 mg of total RNA) is usually required for gel purification and size selection. [ 20 ] RNA quality is also measured, for example by running an RNA chip on the Caliper LabChipGX ( Caliper Life Sciences ).
Size Fractionation of small RNAs by Gel Electrophoresis
Isolated RNA is run on a denaturing polyacrylamide gel. An imaging method, such as radioactive 5'-32P-labeled oligonucleotides run alongside a size ladder, is used to identify the section of the gel containing RNA of the appropriate size, reducing the amount of material ultimately sequenced. This step need not necessarily be carried out before the ligation and reverse transcription steps outlined below. [ 19 ] [ 20 ]
Ligation
The ligation step adds DNA adaptors to both ends of the small RNAs, which act as primer binding sites during reverse transcription and PCR amplification. An adenylated single-strand DNA 3' adaptor followed by a 5' adaptor is ligated to the small RNAs using a ligating enzyme such as T4 RNA ligase 2. The adaptors are designed to capture small RNAs with a 5' phosphate group, characteristic of microRNAs, rather than RNA degradation products with a 5' hydroxyl group. [ 19 ] [ 20 ]
Reverse Transcription and PCR Amplification
This step converts the small adaptor ligated RNAs into cDNA clones used in the sequencing reaction. There are many commercial kits available that will carry out this step using some form of reverse transcriptase . PCR is then carried out to amplify the pool of cDNA sequences. Primers designed with unique nucleotide tags can also be used in this step to create ID tags in pooled library multiplex sequencing. [ 19 ] [ 20 ]
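When unique nucleotide tags are used for pooled (multiplexed) sequencing, the resulting reads must later be assigned back to their samples. A minimal demultiplexing sketch, with hypothetical 4-nt barcodes and reads:

```python
# Demultiplexing a pooled small-RNA library by the nucleotide ID
# tags added with the PCR primers. Barcodes and reads below are
# hypothetical 4-nt tags at the 5' end of each read.

barcodes = {"ACGT": "sample_1", "TGCA": "sample_2"}

def demultiplex(reads, barcodes, tag_len=4):
    """Assign each read to a sample by its 5' tag and strip the tag."""
    by_sample = {name: [] for name in barcodes.values()}
    for read in reads:
        tag, insert = read[:tag_len], read[tag_len:]
        if tag in barcodes:          # reads with unknown tags are discarded
            by_sample[barcodes[tag]].append(insert)
    return by_sample

reads = ["ACGTTAGCTTATCAGACTGATG", "TGCAAACATTCAACGCTGTCGG", "NNNNAAAA"]
print(demultiplex(reads, barcodes))
```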
The actual RNA sequencing varies significantly depending on the platform used. Three common next-generation sequencing [ 21 ] platforms are Pyrosequencing on the 454 Life Sciences platform, [ 22 ] polymerase-based sequence-by-synthesis on the Illumina (company) platform, [ 23 ] or sequencing by ligation on the ABI Solid Sequencing platform. [ 24 ]
Central to miRNA-seq data analysis is the ability to (1) obtain miRNA abundance levels from sequence reads, (2) discover novel miRNAs, (3) determine differentially expressed miRNAs, and (4) identify their associated mRNA gene targets.
miRNAs may be preferentially expressed in certain cell types, tissues, stages of development, or in particular disease states such as cancer. [ 1 ] Since deep sequencing (miRNA-seq) generates millions of reads from a given sample, it allows miRNAs to be profiled, whether by quantifying their absolute abundance or by discovering their variants (known as isomiRs [ 25 ] ). Note that because the average sequence read is longer than the average miRNA (17–25 nt), the 3' and 5' ends of a miRNA should be found on the same read.
There are several miRNA abundance quantification algorithms. [ 21 ] [ 26 ] Their general steps are as follows: [ 27 ]
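Whatever the specific algorithm, the core of abundance quantification is trimming the 3' adaptor from each read and counting matches to known mature miRNA sequences. A simplified sketch (exact matching only; the adaptor, reads, and reference miRNA are hypothetical):

```python
# Minimal sketch of miRNA abundance quantification from reads:
# trim the 3' adaptor, then count matches to known mature miRNAs.
# The adaptor, reads and reference sequences are hypothetical.

ADAPTER = "TGGAATTC"
known_mirnas = {"miR-X": "TAGCTTATCAGACTGATGTTGA"}

def trim_adapter(read, adapter=ADAPTER):
    """Remove everything from the first adaptor occurrence onward."""
    i = read.find(adapter)
    return read[:i] if i >= 0 else read

def count_mirnas(reads, reference):
    counts = {name: 0 for name in reference}
    for read in reads:
        insert = trim_adapter(read)
        for name, seq in reference.items():
            if insert == seq:
                counts[name] += 1
    return counts

reads = [
    "TAGCTTATCAGACTGATGTTGA" + ADAPTER + "CCTT",
    "TAGCTTATCAGACTGATGTTGA" + ADAPTER,
    "AAAACCCCGGGGTTTT" + ADAPTER,   # not a known miRNA
]
print(count_mirnas(reads, known_mirnas))  # {'miR-X': 2}
```

Real pipelines additionally allow mismatches and length variants, and map reads to precursor hairpins rather than exact mature sequences.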
Another advantage of miRNA-seq is that it allows the discovery of novel miRNAs that may have eluded traditional screening and profiling methods. [ 27 ] There are several novel miRNA discovery algorithms. Their general steps are as follows:
After the abundances of miRNAs are quantified for each sample, their expression levels can be compared between samples. One can then identify miRNAs that are preferentially expressed at particular time points, or in particular tissues or disease states. After normalizing for the number of mapped reads between samples, a host of statistical tests (like those used in gene expression profiling ) can be used to determine differential expression.
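The normalization step can be sketched as scaling each sample's counts to counts per million (CPM) before computing fold changes; a real analysis would follow this with a formal statistical test. Counts and miRNA names below are hypothetical.

```python
# Normalizing miRNA read counts to counts per million (CPM) before
# comparing samples, then taking log2 fold changes. A pseudocount
# avoids division by zero. All counts are hypothetical.
import math

def cpm(counts):
    total = sum(counts.values())
    return {m: c * 1_000_000 / total for m, c in counts.items()}

def log2_fold_changes(sample_a, sample_b, pseudocount=1.0):
    a, b = cpm(sample_a), cpm(sample_b)
    return {m: math.log2((a[m] + pseudocount) / (b[m] + pseudocount))
            for m in sample_a}

tumour = {"miR-X": 800, "miR-Y": 200}
normal = {"miR-X": 400, "miR-Y": 600}

fc = log2_fold_changes(tumour, normal)
print(round(fc["miR-X"], 2))  # 1.0 (up in the tumour sample)
```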
Identifying a miRNA's mRNA targets provides an understanding of the genes, or networks of genes, whose expression it regulates. [ 31 ] Public databases provide predictions of miRNA targets, but to better distinguish true positives from false positives, miRNA-seq data can be integrated with mRNA-seq data to observe functional miRNA:mRNA pairs. RNA22, [ 32 ] TargetScan , [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] miRanda, [ 39 ] and PicTar [ 40 ] are software designed for this purpose.
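Seed-based predictors such as TargetScan classify canonical target sites by how the 3'UTR matches the miRNA seed: a match to seed positions 2–7 alone (6mer), extended by a match to miRNA position 8 (7mer-m8), by an A opposite position 1 (7mer-A1), or both (8mer). A sketch of that classification, using hypothetical sequences:

```python
# Classifying canonical seed-match site types (6mer, 7mer-A1,
# 7mer-m8, 8mer), as used by seed-based target predictors.
# The miRNA and 3'UTR sequences below are hypothetical.

COMP = {"A": "T", "U": "A", "G": "C", "C": "G"}

def rc(seq):
    """Reverse complement of an RNA sequence, written as DNA."""
    return "".join(COMP[b] for b in reversed(seq))

def site_types(mirna, utr):
    """Find seed matches in a 3'UTR (DNA, 5'->3') and classify them."""
    core = rc(mirna[1:7])   # match to seed positions 2-7
    m8 = COMP[mirna[7]]     # match to miRNA position 8 (5' of the core)
    sites = []
    i = utr.find(core)
    while i != -1:
        has_m8 = i > 0 and utr[i - 1] == m8
        has_a1 = utr[i + len(core):i + len(core) + 1] == "A"
        if has_m8 and has_a1:
            kind = "8mer"
        elif has_m8:
            kind = "7mer-m8"
        elif has_a1:
            kind = "7mer-A1"
        else:
            kind = "6mer"
        sites.append((i, kind))
        i = utr.find(core, i + 1)
    return sites

mirna = "UAGCUUAUCAGACUGAUGUUGA"          # hypothetical mature miRNA
print(site_types(mirna, "CCATAAGCTACC"))  # [(3, '8mer')]
print(site_types(mirna, "CCATAAGCTCCC"))  # [(3, '7mer-m8')]
```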
The general steps are:
Many miRNAs function to direct cleavage of their mRNA targets; this is particularly true in plants, and thus high-throughput sequencing methods have been developed to take advantage of this property of miRNAs by sequencing the uncapped 3' ends of cleaved or degraded mRNAs. These methods are known as Degradome sequencing or PARE. [ 41 ] [ 42 ] Validation of target cleavage in specific mRNAs is typically performed using a modified version of 5' Rapid Amplification of cDNA Ends with a gene-specific primer.
miRNA-seq has revealed novel miRNAs that previously eluded traditional miRNA profiling methods. Examples of such findings include embryonic stem cells, [ 25 ] chicken embryos, [ 43 ] acute lymphoblastic leukaemia, [ 44 ] diffuse large B-cell lymphoma and B cells, [ 45 ] acute myeloid leukemia, [ 46 ] and lung cancer. [ 47 ]
MicroRNAs are important regulators of almost all cellular processes such as survival, proliferation , and differentiation . Consequently, it is not unexpected that miRNAs are involved in various aspects of cancer through the regulation of onco- and tumor suppressor gene expression. In combination with the development of high-throughput profiling methods, miRNAs have been identified as biomarkers for cancer classification, response to therapy, and prognosis. [ 48 ] Additionally, because miRNAs regulate gene expression they can also reveal perturbations in important regulatory networks that may be driving a particular disorder. [ 48 ] Several applications of miRNAs as biomarkers and predictors of disease are given below.
α This is not a comprehensive list of miRNAs involved with these malignancies.
The disadvantages of using miRNA-seq over other methods of miRNA profiling are that it is more expensive, generally requires a larger amount of total RNA, involves extensive amplification, and is more time-consuming than microarray and qPCR methods. [ 3 ] In addition, miRNA-seq library preparation methods appear to represent the miRNA complement with systematic biases, which prevents accurate determination of miRNA abundance. [ 64 ] At the same time, the approach is hybridization-independent and therefore does not require a priori sequence information. Because of this, one can obtain sequences of novel miRNAs and miRNA isoforms (isomiRs), distinguish sequentially similar miRNAs, and identify point mutations. [ 65 ]
| https://en.wikipedia.org/wiki/MicroRNA_sequencing
MicroTCA (short for Micro Telecommunications Computing Architecture , also: μTCA ) is a modular, open standard, created and maintained by the PCI Industrial Computer Manufacturers Group (PICMG). It provides the electrical, mechanical, thermal and management specifications to create a switched fabric computer system, using Advanced Mezzanine Cards (AMC), connected directly to a backplane . MicroTCA is a descendant of the AdvancedTCA standard. [ 1 ]
The rapid expansion of mobile telecommunications and their associated services (such as text messages) at the beginning of the millennium increased the demand for processing power in telecommunication systems. The existing "carrier grade" (see RAS ) computing architectures were not fit to house the high-performance processors of the time. [ 2 ] In order to meet those demands, about 100 companies worked together in PICMG, resulting in the Advanced Telecommunications Computing Architecture (AdvancedTCA, ATCA), published in 2002.
After the introduction of AdvancedTCA, a standard was developed to cater to smaller telecommunications systems at the edge of the network. [ 1 ] This standard was geared towards more compact, less expensive systems, without cutting back on reliability or data throughput. This standard, called MicroTCA, was ratified in 2006.
After its release, MicroTCA migrated into non-telecommunication sectors, like defence, avionics and science. This resulted in extensions to the base standard, called modules.
The base specification defines properties common to all other modules and was ratified July 6, 2006. [ 3 ] This includes:
A second revision of the base specification was ratified January 16, 2020, containing corrections as well as alterations necessary to implement higher-speed Ethernet fabrics , like 10GBASE-KR and 40GBASE-KR4 . [ 4 ]
This module adds specifications for ruggedized systems, using forced air for cooling. Possible scenarios for MicroTCA.1-based systems include outside plant telecom, industrial and aerospace environments. [ 5 ]
This module adds specifications for more stringent requirements with regards to temperature, shock, vibration and other environmental conditions. These specifications are geared towards use in outside plant telecom, machine and transport industry, as well as military airborne, shipboard and ground mobile equipment. [ 6 ] MicroTCA.2 allows the use of air- and conduction-cooled AMC-modules.
This module adds specifications for even more stringent requirements with regards to temperature, shock, vibration and other environmental conditions. These specifications are geared towards use in outside plant telecom, machine and transport industry, as well as military airborne, shipboard and ground mobile equipment. [ 7 ] MicroTCA.3 requires the use of conduction-cooled AMC-modules.
This module extends the AMC with a Rear Transition Module (RTM), increasing PCB-space and modularity. AMC and RTM are connected with a connector, located in zone 3, defined in MicroTCA.0. [ 8 ] These specifications are geared towards use in large-scale scientific devices, like particle accelerators or telescopes .
The card cage (also: shelf, crate) houses all the other components and as such has two primary functions:
There exist a wide array of card cages. They usually differ in:
The backplane is a printed circuit board , mounted directly into the card cage. It connects all other components of a MicroTCA system to each other and provides power, data access and management access to them.
Two types of power are distributed over the backplane, Management Power (+3.3 V) and Payload Power (+12 V). Unlike typical backplanes, where power is distributed to all components via a common "powerplane" in the PCB, on a MicroTCA backplane, Management and Payload Power are distributed to each component individually. While Management Power is provided to each module connected to a powered backplane, Payload Power has to be granted by the MicroTCA Carrier Hub (MCH), after ensuring that the module is MicroTCA-compatible.
The standard defines various communication buses, which the backplane can/should provide:
The Cooling Unit (CU) provides controlled air flow in air-flow-cooled card cages. It usually consists of an array of fans and a controller, which is connected to the backplane. The MicroTCA Carrier Hub (MCH) can read out temperature sensors (if present) and fan speed, as well as change fan speed, via IPMI. The Cooling Unit is usually fitted to a specific card cage. Some CUs are easily detachable (e.g. for cleaning or replacement), while other card cages come with integrated, non-detachable CUs.
The Power Module (PM, also: Power Supply) converts the AC power from the power line to the +3.3 V Management Power (MP) and +12 V Payload Power (PP), both of which are DC . There exist a variety of power modules, which differ in:
The power module senses the presence of a module in a slot via a specified pin in the module connector, and immediately provides that module with management power. Payload power is managed by the MicroTCA Carrier Hub (MCH), which communicates with the power module via IPMI.
The power module uses its own type of connector, and can thus only be installed into designated slots, which in turn cannot carry any other type of module. Some card cages provide an additional power module slot for redundancy. In such a case, one slot is the primary, which provides power by default, and the other is the secondary, providing power only if the primary does not.
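The redundancy rule described here can be sketched as a trivial selection function. The names and structure below are illustrative assumptions, not taken from the PICMG specification:

```python
# Hedged sketch of the primary/secondary power module rule: the primary
# supplies payload power by default, and the secondary takes over only
# when the primary fails. Purely illustrative, not specification logic.
def active_power_module(primary_ok: bool, secondary_ok: bool):
    """Return which power module (if any) supplies payload power."""
    if primary_ok:
        return "primary"
    if secondary_ok:
        return "secondary"
    return None  # no payload power available

print(active_power_module(True, True))    # primary
print(active_power_module(False, True))   # secondary
```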
The MicroTCA Carrier Hub (MCH) is the central managing device of a MicroTCA card cage. It manages power distribution and cooling. It usually also provides Gigabit Ethernet and/or PCIe/Serial RapidIO switching. Some MCHs additionally provide clocking. As the name indicates, they are the hub of various star topologies (e.g. for Ethernet, PCIe) on the backplane and thus require dedicated slot(s). Some backplanes support two MCHs for redundancy. In this case there are two MCH slots, with one being designated primary, and one secondary.
Advanced Mezzanine Card (AMC) is a standard for hot-pluggable PCBs. It was originally developed for use in AdvancedTCA systems. The standard specifies:
There is a huge variation in the functionalities an AMC can fulfill:
The Rear Transition Module (RTM) was added in the MicroTCA.4 standard. It is connected directly to an AMC via a connector located in zone 3, requiring a double-width AMC and RTM. An RTM has about the same dimensions as an AMC, basically doubling the available PCB space per slot in an MTCA.4 card cage. Its power is provided by the AMC; thus an RTM cannot operate on its own, but requires a paired AMC.
The zone 3 connector is freely configurable electrically, so a mechanically fitting AMC-RTM pair may still be electrically incompatible. To avoid damage from such incompatibility, a mechanical code-pin was added to MTCA.4-compatible AMCs and RTMs, mechanically preventing the installation of an electrically incompatible RTM onto an AMC.
The functionality of RTMs includes, but is not limited to: | https://en.wikipedia.org/wiki/MicroTCA |
Micro armour (or micro armor ) refers to scale models made of lead, pewter, die-cast metal or plastic, usually used for wargaming purposes. Variations of the name include mini armour, microscale, mini tanks, miniature armour, miniature tanks, micro tanks, minitanks , minifigs, armour figurines and tank figurines. Micro armour is a sub-category of model military vehicle miniature figures used for military simulation , miniature wargaming , scale models , dioramas and collecting .
The specific term "micro armour" originated and was trademarked by GHQ founder Gregory Dean Scott in 1967 [ 1 ] for a line of metal 1:285 scale armour miniatures. GHQ also published Micro Armour: The Game - WWII in 2001 [ 2 ] some 34 years after founding the company. Early on, a competing company called C in C offered 1:285 scale micro armour starting in 1974.
Currently, games such as Flames of War and Axis & Allies Miniatures are widely popular and use 1:100 scale mini armour figurines and 15 mm infantry.
Micro and mini armour consists primarily of the following scales, from smallest to largest: [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Beyond squad-level scale there is half-platoon scale, platoon scale, company scale, battalion scale and division scale.
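As a quick illustration of what these scale ratios mean in practice, a scale of 1:N shrinks real dimensions by a factor of N. The vehicle length used below is an assumed example, not a figure from the article:

```python
# Scale-model length conversion: a model at scale 1:N represents the real
# vehicle shrunk by a factor of N (vehicle length below is illustrative).
def model_length_mm(real_length_m, scale_denominator):
    """Length of a scale model in mm, given the real length in metres."""
    return real_length_m * 1000.0 / scale_denominator

# A roughly 7 m tank hull at common micro/mini armour scales:
for denom in (285, 144, 100):
    print(f"1:{denom}: {model_length_mm(7.0, denom):.1f} mm")
```

This makes the size difference concrete: the same tank is about 24.6 mm long at 1:285 but 70 mm long at the 1:100 scale used by games such as Flames of War.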
Micro armour is usually differentiated from tabletop games based on human-shaped heroic scale / infantry skirmish game scale figures (even if the high and low ends of each respective category overlap) because the scales used by most micro armour games are smaller (armour skirmish game scale) and the represented playing field larger, though not nearly as large as in naval wargaming . In typical micro armour based games (such as Micro Armour: The Game - WWII [ 9 ] ) a single tank represents a typical military unit. In contrast, in larger-scale games using 20 mm - 54 mm heroic scale / infantry skirmish game scale miniatures (such as MechWarrior , Warhammer 40,000 , Warhammer , AT-43 , Warmachine or Dungeons & Dragons ), a tank would be unusually large and more akin to units like dragons or large catapults, which human-sized units must gang up against to defeat. Infantry skirmish games such as The Face of Battle [ 10 ] and I Ain't Been Shot, Mum! [ 11 ] demonstrate this very well, as they are designed to be played with 15mm, 20mm and 25/28mm scale figures, as contrasted with 6mm - 12.5mm armour skirmish figures.
Early (pre-1990) games using lead or pewter miniature armour (for World War II and modern battle simulation) included Angriff! by Z&M Publishing Enterprises (1968) and (1972), [ 12 ] [ 13 ] Fast Rules by Armored Operations Society (1970) published later by Guidon Games (1972), Tractics by Guidon Games (1971) later by TSR, Inc. (1975), War Games Rules Armour & Infantry 1925-1950 by Wargames Research Group (WRG) (1973), Panzer Warfare by TSR, Inc. (1975), [ 14 ] Kampfgruppe by Historical Alternatives Game Co. (1979), [ 15 ] Corps Commander: OMG & Korps Commander by Table Top Games (1986) and Command Decision by Game Designers' Workshop (1986).
There were also some science fiction -based games that used micro armour, such as Starguard by Reviresco (1974), [ 16 ] Ogre by Steve Jackson Games (1977), [ 17 ] Striker by Game Designers' Workshop (1981), Classic BattleTech by FASA (1984) and Space Marine by Games Workshop (1989). [ 18 ]
Recent (1990 and later) games include Tide of Iron , Flames of War , Axis & Allies Miniatures , Micro Armour: The Game - WWII , [ 19 ] Heavy Gear , Blitzkrieg Commander , [ 20 ] Dirtside II , [ 21 ] Crossfire , I Ain't Been Shot, Mum! , Cold War Commander , Megablitz , Panzer War , [ 22 ] Panzertruppe [ 23 ] Panzer Miniatures , [ 24 ] Panzer Marsch , [ 25 ] First Watch . [ 26 ] Jagdpanzer , [ 27 ] Command Decision - Test of Battle 4th Edition , [ 28 ] World Tank Campaigns , [ 29 ] BGMR Modern Rules . [ 30 ]
Metal (and some plastic) gaming pieces are traditionally manufactured by companies such as:
Recent plastic and diecast metal series intended for collecting and made in the 1:144 scale are manufactured by companies such as: | https://en.wikipedia.org/wiki/Micro_armour |
Micro carbon residue , commonly known as "MCR", is a laboratory test used to determine the amount of carbonaceous residue formed after evaporation and pyrolysis of petroleum materials under certain conditions. The test provides an indication of a material's coke -forming tendencies. [ 1 ] [ 2 ] [ 3 ] The test results are equivalent to those obtained from the Conradson Carbon Residue test. [ 1 ] [ 4 ]
A quantity of sample is weighed, placed in a glass vial, and heated to 500 °C. Heating is performed in a controlled manner, for a specific period of time, and under an inert ( nitrogen ) atmosphere . The sample experiences coking reactions, with volatiles formed being swept away by the nitrogen. The carbonaceous residue remaining is reported as a mass percent of the original sample, and noted as “carbon residue (micro).” [ 1 ]
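The reported value can be illustrated with a back-of-the-envelope calculation. The sample masses below are invented for illustration; the actual test follows the standardized heating procedure:

```python
# Illustrative calculation (not part of the standard): the carbonaceous
# residue remaining after pyrolysis at 500 °C is reported as a mass
# percent of the original sample.
def micro_carbon_residue(sample_mass_g, residue_mass_g):
    """Return carbon residue (micro) as a mass percent of the original sample."""
    return 100.0 * residue_mass_g / sample_mass_g

# e.g. 1.5 g of heavy petroleum sample leaving 0.12 g of coke-like residue
print(f"{micro_carbon_residue(1.5, 0.12):.1f} % carbon residue (micro)")  # 8.0 %
```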
Micro carbon residue offers the same range of applicability as the test to which it is equivalent, Conradson Carbon Residue. Advantages of MCR include better control of test conditions, smaller samples, and less operator attention. [ 1 ] Applications include: | https://en.wikipedia.org/wiki/Micro_carbon_residue |
A micro gallery was a computer-based guide to archives and museum collections, first developed for the collections at the National Gallery in London, UK. [ 1 ] [ 2 ] It was developed over three years by the company Cognitive Applications and opened in July 1991 as part of the facilities in the Sainsbury Wing . Visitors could use the system to determine which pictures they would like to see in the gallery, and it was possible to print out personalised information for use during the visit. The Micro Gallery ran for 14 years, and a CD-ROM with similar facilities was produced.
In 1995, a similar facility was produced for the National Gallery of Art in Washington, D.C., USA. [ 3 ] [ 4 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Micro_gallery |
Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels , which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramic. [ 1 ]
Microchannel heat exchangers can be used for many applications including:
Investigation of microscale thermal devices is motivated by the single phase internal flow correlation for convective heat transfer:
Where h {\displaystyle h} is the heat transfer coefficient , N u c {\displaystyle {\mathit {Nu}}_{c}} is the Nusselt number , k {\displaystyle k} is the thermal conductivity of the fluid and d {\displaystyle d} is the hydraulic diameter of the channel or duct. In internal laminar flows , the Nusselt number becomes a constant; this is a result which can be arrived at analytically. For round tubes, N u c = 3.657 {\displaystyle {\mathit {Nu}}_{c}=3.657} for the case of a constant wall temperature and N u c = 4.364 {\displaystyle {\mathit {Nu}}_{c}=4.364} for the case of constant heat flux. [ 6 ] The latter value increases to 140/17 = 8.23 for flat parallel plates. [ 2 ] As the Reynolds number is proportional to the hydraulic diameter, fluid flow in channels of small hydraulic diameter will predominantly be laminar in character. This correlation therefore indicates that the heat transfer coefficient increases as the channel diameter decreases. Should the hydraulic diameter in forced convection be on the order of tens or hundreds of micrometres, an extremely high heat transfer coefficient should result.
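The scaling argument can be made concrete with a short numerical sketch of the correlation h = Nu·k/d, using the constant-wall-temperature Nusselt number and an assumed thermal conductivity for water (k ≈ 0.6 W/(m·K), a typical textbook value, not from the article):

```python
# Numerical sketch of h = Nu * k / d for laminar flow with constant wall
# temperature (Nu = 3.657). Fluid properties are assumed (water near room
# temperature, k ~ 0.6 W/(m*K)).
def heat_transfer_coefficient(nu, k, d):
    """h in W/(m^2*K) for Nusselt number nu, conductivity k in W/(m*K), diameter d in m."""
    return nu * k / d

K_WATER = 0.6  # W/(m*K), assumed
for d in (10e-3, 1e-3, 100e-6, 10e-6):  # hydraulic diameters from 10 mm down to 10 um
    h = heat_transfer_coefficient(3.657, K_WATER, d)
    print(f"d = {d * 1e6:8.0f} um -> h = {h:10.0f} W/(m^2*K)")
```

Shrinking the diameter from 10 mm to 10 µm raises h by three orders of magnitude, which is the motivation behind microscale thermal devices stated above.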
This hypothesis was initially investigated by Tuckerman and Pease. [ 7 ] Their positive results led to further research ranging from classical investigations of single channel heat transfer [ 8 ] to more applied investigations in parallel micro-channel and micro scale plate fin heat exchangers . Recent work in the field has focused on the potential of two-phase flows at the micro-scale. [ 9 ] [ 10 ] [ 11 ]
Just like "conventional" or "macro scale" heat exchangers , micro heat exchangers have one, two or even three [ 12 ] fluidic flows. In the case of one fluidic flow, heat can be transferred to the fluid (each of the fluids can be a gas , a liquid , or a multiphase flow ) from electrically powered heater cartridges, or removed from the fluid by electrically powered elements like Peltier chillers. In the case of two fluidic flows, micro heat exchangers are usually classified by the orientation of the fluidic flows to another as "cross flow" or " counter flow " devices. If a chemical reaction is conducted inside a micro heat exchanger, the latter is also called a microreactor . | https://en.wikipedia.org/wiki/Micro_heat_exchanger |
Micro miniature (also called micro art or micro sculpture ) is a fine art form. Micro miniatures are made with the assistance of microscopes or eye surgeons' tools. [ 1 ] The form originated at the end of the 20th century. [ 2 ]
The National Museum of Toys and Miniatures in Kansas City, Missouri has micro-miniatures in their permanent collection and in their exhibit titled Micro Curiosities , a permanent exhibit displaying the work of several fine-scale miniature artists.
The Metropolitan Museum of Art in New York City holds a micro-miniature basket made by a Pomo Native American artist around 1910. [ 3 ]
The Museum of Jurassic Technology in Culver City, California has a collection of the microminiatures of the Armenian artist Hagop Sandaldjian in their permanent exhibition, The Eye of the Needle . [ 4 ] [ 5 ]
The Museum of Miniatures located in Prague focuses on works of microminiature art. It features the work of Edward Ter Ghazarian, Anatoly Konenko, Nikolai Aldunin among others. [ 1 ]
The Museum of Microminiatures in St. Petersburg includes micro-miniature work by Vladimir Aniskin of Novosibirsk, Siberia, as well as Nikolai Aldunin of Moscow. [ 6 ]
Ermann, Lynn. They have jobs on the slide: Microscopic art , The Washington Post, February 14, 1999 | https://en.wikipedia.org/wiki/Micro_miniature |
Micro pitting is a fatigue failure of the surface of a material commonly seen in rolling bearings and gears . [ 1 ] It is also known as grey staining , micro spalling or frosting .
The difference between pitting and micropitting is the size of the pits left after surface fatigue . Pits formed by micropitting are approximately 10–20 μm in depth; to the unaided eye, micropitting appears dull, etched or stained, with patches of gray. [ 2 ] Normal pitting creates larger and more visible pits. Micropits originate from the local contact of asperities caused by improper lubrication .
In a normal bearing the surfaces are separated by a layer of oil ; this is known as elastohydrodynamic (EHD) lubrication . If the thickness of the EHD film is of the same order of magnitude as the surface roughness , the surface topographies are able to interact and cause micro pitting. A thin EHD film may be caused by excess load or temperature, a lower oil viscosity than required, low speed, or water in the oil. Water in the oil can make micro pitting worse by causing hydrogen embrittlement of the surface. Micro pitting occurs only under poor EHD lubrication conditions, and although it can affect all types of gears, it is particularly troublesome in heavily loaded gears with hardened teeth. [ 3 ] [ 1 ]
A surface with a deep scratch might break exactly at the scratch when stress is applied. The surface roughness can be imagined as a composite of many very small scratches, so high surface roughness decreases the stability of heavily stressed parts. To get a good overview of the surface, an areal scan ( Surface metrology ) gives more information than a measurement along a single profile (profilometer). To quantify the surface roughness, ISO 25178 can be used.
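As an illustration of an areal roughness parameter of the kind defined in ISO 25178, the arithmetic mean height Sa can be computed from a height map as the mean absolute deviation from the mean plane. The grid below is a made-up toy example; a real measurement would involve form removal and filtering first:

```python
# Hedged sketch of the arithmetic-mean areal roughness Sa (ISO 25178 style):
# mean absolute deviation of surface heights from the mean plane, computed
# on a small invented grid of height samples (in micrometres).
def sa_roughness(heights):
    """Sa = mean |z - mean(z)| over all points of an areal height map."""
    flat = [z for row in heights for z in row]
    mean = sum(flat) / len(flat)
    return sum(abs(z - mean) for z in flat) / len(flat)

height_map = [
    [0.2, -0.1, 0.3],
    [-0.2, 0.1, -0.3],
]  # um, illustrative
print(f"Sa = {sa_roughness(height_map):.3f} um")  # Sa = 0.200 um
```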
ISO/TS 6336-22 contains a method for a calculation of risk of micropitting in gear sets. [ 4 ] [ 1 ]
This corrosion -related article is a stub . You can help Wikipedia by expanding it .
This article about materials science is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Micro_pitting |
Micro power sources and nano power sources are components of RFID , MEMS , microsystems and nanosystems used for energy and power generation, harvesting from the ambient environment, storage and conversion.
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Micro_power_source |
Micro process engineering is the science of conducting chemical or physical processes ( unit operations ) inside small volumes, typically inside channels with diameters of less than 1 mm (microchannels) or other structures with sub-millimeter dimensions. These processes are usually carried out in continuous flow mode, as opposed to batch production , allowing a throughput high enough to make micro process engineering a tool for chemical production. Micro process engineering is therefore not to be confused with microchemistry , which deals with very small overall quantities of matter.
The subfield of micro process engineering that deals with chemical reactions carried out in microstructured reactors or " microreactors " is also known as microreaction technology .
The unique advantages of microstructured reactors or microreactors are enhanced heat transfer, due to the large surface area-to-volume ratio , and enhanced mass transfer . For example, the length scale of diffusion processes is comparable to that of microchannels or even shorter, and efficient mixing of reactants can be achieved within very short times (typically milliseconds). The good heat transfer properties allow precise temperature control of reactions. For example, highly exothermic reactions can be conducted almost isothermally when the microstructured reactor contains a second set of microchannels ("cooling passage"), fluidically separated from the reaction channels ("reaction passage"), through which a flow of cold fluid with sufficiently high heat capacity is maintained. It is also possible to change the temperature of microstructured reactors very rapidly to intentionally achieve non-isothermal behaviour.
While the dimensions of the individual channels are small, a micro process engineering device ("microstructured reactor") can contain many thousands of such channels, and the overall size of a microstructured reactor can be on the scale of meters. The objective of micro process engineering is not primarily to miniaturize production plants, but to increase the yields and selectivities of chemical reactions, thus reducing the cost of chemical production. This goal can be achieved either by using chemical reactions that cannot be conducted in larger volumes, or by running chemical reactions at parameters (temperatures, pressures, concentrations) that are inaccessible in larger volumes due to safety constraints. For example, the detonation of a stoichiometric mixture of two volume units of hydrogen gas and one volume unit of oxygen gas does not propagate in microchannels with a sufficiently small diameter. This property is referred to as the " intrinsic safety " of microstructured reactors. The improvement of yields and selectivities by using novel reactions or running reactions at more extreme parameters is known as "process intensification".
Historically, micro process engineering originated around the 1980s, when mechanical micromachining methods developed for the fabrication of uranium isotope separation nozzles were first applied to the manufacturing of compact heat exchangers at the Karlsruhe (Nuclear) Research Center . | https://en.wikipedia.org/wiki/Micro_process_engineering |
A microactuator is a microscopic servomechanism that supplies and transmits a measured amount of energy for the operation of another mechanism or system. As for any general actuator , the following standards have to be met:
For microactuators, there are two requirements in addition.
The basic principle can be described by the expression for mechanical work W = F → ⋅ Δ r → {\displaystyle W={\overrightarrow {F}}\cdot \Delta {\overrightarrow {r}}} , since an actuator manipulates positions and therefore a force is needed. For different kinds of microactuators, different physical principles are applied. | https://en.wikipedia.org/wiki/Microactuator
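The work expression above can be evaluated directly as a dot product. The force and stroke values below are assumed for illustration only, chosen to be on a plausible microactuator scale:

```python
# Minimal illustration of mechanical work as the dot product of force and
# displacement vectors, W = F . dr (values are invented for illustration).
def work(force, displacement):
    """W = F . dr for 3-component vectors, in joules if F is in N and dr in m."""
    return sum(f * dr for f, dr in zip(force, displacement))

F = (1e-6, 0.0, 0.0)   # 1 uN along x (assumed microactuator-scale force)
dr = (5e-6, 0.0, 0.0)  # 5 um stroke along x
print(work(F, dr))     # on the order of 5e-12 J
```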
Microalgae do not settle by gravity; therefore, expensive harvesting techniques must be applied. This is a major bottleneck of microalgal technology. Bioflocculation of microalgae and bacteria addresses this.
MaB-flocs, or Microalgal Bacterial flocs, settle by gravity, up to a density of 20 g per liter. This is a major improvement for microalgal technology for wastewater treatment .
Currently, MaB-flocs are being applied for sewage treatment at lab and pilot scale in Germany , New Zealand and Belgium . The idea is to scavenge nutrients such as nitrogen and phosphorus from the wastewater , sometimes combined with flue gas treatment.
Nutritional evaluation of such microbial protein or single cell protein as an unconventional protein feedstuff or ingredient in artificial animal feeds has gained much importance lately. [ 1 ] Its nutritional strengths and bottlenecks have been described extensively in recent years. [ 2 ]
The integration of Microalgal Bacterial (MaB) flocs into sustainable agricultural practices presents an innovative approach to enhancing the nutritional content of food sources, particularly in terms of omega-3 fatty acids. Omega-3 fatty acids, such as Eicosapentaenoic acid (EPA) and Docosahexaenoic acid (DHA), are essential fats that humans must obtain from their diet. These fats are crucial for brain health, maintaining the health of cell membranes, and supporting cardiovascular health.
MaB-flocs, comprising both microalgae and bacteria, have shown promise in wastewater treatment applications by effectively removing nutrients such as nitrogen and phosphorus. Microalgae, a key component of MaB-flocs, are known for their ability to accumulate high levels of omega-3 fatty acids. This positions MaB-flocs as a potential sustainable source of these essential nutrients. The cultivation of microalgae within MaB-flocs for omega-3 production offers a dual benefit: improving water quality through nutrient removal and providing a source of essential dietary fats. [ 3 ]
Current research focuses on optimizing the growth conditions of MaB-flocs to maximize the yield of omega-3 fatty acids. This includes investigating the effects of various environmental parameters, such as light intensity, temperature, and pH, on the fatty acid profile of microalgae within the flocs. Additionally, the feasibility of harvesting omega-3-rich microalgae from MaB-flocs for use in food and feed applications is being explored. [ 4 ]
While the direct contribution of MaB-flocs to disease prevention through omega-3 production requires further research, their potential to serve as a sustainable source of these essential nutrients is clear. As the global demand for omega-3 fatty acids continues to rise, MaB-flocs represent a promising avenue for environmentally friendly production of these vital dietary components. | https://en.wikipedia.org/wiki/Microalgal_bacterial_flocs |
Microanalysis is the chemical identification and quantitative analysis of very small amounts of chemical substances (generally less than 10 mg or 1 ml) or very small surfaces of material (generally less than 1 cm 2 ). One of the pioneers in the microanalysis of chemical elements was the Austrian Nobel Prize winner Fritz Pregl . [ 1 ]
The most known methods used in microanalysis include:
Compared to normal analysis methods, microanalysis:
This article about analytical chemistry is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Microanalysis |
A microarray is a multiplex lab-on-a-chip . [ 1 ] Its purpose is to detect thousands of biological interactions simultaneously. It is a two-dimensional array on a solid substrate —usually a glass slide or silicon thin-film cell —that assays (tests) large amounts of biological material using high-throughput screening with miniaturized, multiplexed and parallel processing and detection methods. The concept and methodology of microarrays was first introduced and illustrated in antibody microarrays (also referred to as antibody matrix ) by Tse Wen Chang in 1983 in a scientific publication [ 2 ] and a series of patents. [ 3 ] [ 4 ] [ 5 ] The " gene chip " industry started to grow significantly after the 1995 Science Magazine article by the Ron Davis and Pat Brown labs at Stanford University. [ 6 ] With the establishment of companies, such as Affymetrix , Agilent , Applied Microarrays, Arrayjet, Illumina , and others, the technology of DNA microarrays has become the most sophisticated and the most widely used, while the use of protein, peptide and carbohydrate microarrays [ 7 ] is expanding.
Types of microarrays include:
People in the field of CMOS biotechnology are developing new kinds of microarrays. Once fed magnetic nanoparticles , individual cells can be moved independently and simultaneously on a microarray of magnetic coils. A microarray of nuclear magnetic resonance microcoils is under development. [ 8 ]
A large number of technologies underlie the microarray platform, including the material substrates, [ 9 ] spotting of biomolecular arrays, [ 10 ] and the microfluidic packaging of the arrays. [ 11 ] Microarrays can be categorized by how they physically isolate each element of the array, by spotting (making small physical wells), on-chip synthesis (synthesizing the target DNA probes adhered directly on the array), or bead-based (adhering samples to barcoded beads randomly distributed across the array). [ 12 ]
The initial publication on a microarray production process dates back to 1995, when 48 cDNAs of a plant were printed on a glass slide of the type typically used for light microscopy; modern microarrays, on the other hand, now include thousands of probes and different carriers with coatings. The fabrication of a microarray requires both biological and physical resources, including sample libraries, printers, and slide substrates, though all procedures and solutions depend on the fabrication technique employed. The basic principle of the microarray is the printing of small spots of solutions containing different species of the probe on a slide several thousand times. [ 13 ]
Modern printers are HEPA -filtered and operate in controlled humidity and temperature surroundings, typically around 25°C and 50% humidity. Early microarrays were printed directly onto the surface using printer pins, which deposit the samples in a user-defined pattern on the slide. Modern methods are faster, generate less cross-contamination, and produce better spot morphology. For high-density microarrays, the surface to which the probes are printed must be clean, dust-free and hydrophobic. Slide coatings include poly-L-lysine, aminosilane, epoxy and others, including manufacturers' solutions, and are chosen based on the type of sample used. Ongoing efforts to advance microarray technology aim to create uniform, dense arrays while reducing the necessary volume of solution and minimizing contamination or damage. [ 13 ] [ 14 ]
For the manufacturing process, a sample library which contains all relevant information is needed. In the early stages of microarray technology, the sole sample used was DNA , obtained from commonly available clone libraries and amplified via bacterial vectors. Modern approaches no longer use just DNA as a sample, but also proteins, antibodies, antigens, glycans, cell lysates and other small molecules. All samples used are presynthesized, regularly updated, and straightforward to maintain. Array fabrication techniques include contact printing, lithography, and non-contact and cell-free printing. [ 14 ]
Contact printing methods include pin printing, microstamping, and flow printing. Pin printing is the oldest and still the most widely adopted methodology in DNA microarray contact printing. This technique uses pin types such as solid pins and split or quill pins to load and deliver the sample solution directly onto solid microarray surfaces. Microstamping offers an alternative to the commonly used pin printing and is also referred to as soft lithography , a term that in principle covers several related pattern-transfer technologies using patterned polymer monolithic substrates, the most prominent being microstamping. In contrast to pin printing, microstamping is a more parallel deposition method with less individuality: a set of stamps is loaded with reagents and prints these reagent solutions identically. [ 15 ]
Lithography encompasses various methods, such as photolithography, interference lithography, laser writing, electron-beam lithography, and dip-pen lithography. The most widely used and researched method remains photolithography, in which photolithographic masks are used to direct specific nucleotides to the surface. UV light is passed through the mask, which acts as a filter to either transmit or block the light from the chemically protected microarray surface. Where the UV light has been blocked, the area remains protected from the addition of nucleotides, whereas in areas exposed to UV light, further nucleotides can be added. With this method, high-quality custom arrays with a very high density of DNA features can be produced using a compact device with few moving parts. [ 16 ] [ 17 ]
Non-contact printing methods range from photochemistry -based printing to electro-printing and droplet dispensing. In contrast to the other methods, non-contact printing involves no contact between the surface and the stamp, pin, or other dispenser. The main advantages are reduced contamination, less cleaning, and a steadily increasing throughput. Many of these methods can load the probes in parallel, allowing multiple arrays to be produced simultaneously. [ 14 ] [ 15 ]
In cell-free systems, transcription and translation are carried out in situ, making the cloning and expression of proteins in host cells unnecessary, because no intact cells are needed. The molecule of interest is synthesized directly onto the surface of a solid support. These assays allow high-throughput analysis in a controlled environment without the interferences associated with intact cells. [ 18 ] | https://en.wikipedia.org/wiki/Microarray |
Microarray analysis techniques are used in interpreting the data generated from experiments on DNA ( Gene chip analysis ), RNA, and protein microarrays , which allow researchers to investigate the expression state of a large number of genes – in many cases, an organism's entire genome – in a single experiment. [ 1 ] Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult – if not impossible – to analyze without the help of computer programs.
Microarray data analysis is the final step in reading and processing data produced by a microarray chip. Samples undergo various processes including purification and scanning using the microchip, which then produces a large amount of data that requires processing via computer software. It involves several distinct steps, as outlined in the image below. Changing any one of the steps will change the outcome of the analysis, so the MAQC Project [ 2 ] was created to identify a set of standard strategies. Companies exist that use the MAQC protocols to perform a complete analysis. [ 3 ]
Most microarray manufacturers, such as Affymetrix and Agilent , [ 4 ] provide commercial data analysis software alongside their microarray products. There are also open source options that utilize a variety of methods for analyzing microarray data.
Comparing two different arrays or two different samples hybridized to the same array generally involves making adjustments for systematic errors introduced by differences in procedures and dye intensity effects. Dye normalization for two color arrays is often achieved by local regression . LIMMA provides a set of tools for background correction and scaling, as well as an option to average on-slide duplicate spots. [ 5 ] A common method for evaluating how well an array is normalized is to plot an MA plot of the data. MA plots can be produced using programs and languages such as R and MATLAB. [ 6 ] [ 7 ]
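As an illustration of the MA-plot quantities, here is a minimal numpy sketch; the function name and the small pseudocount offset are illustrative choices, not part of any of the packages named above:

```python
import numpy as np

def ma_values(red, green, eps=1.0):
    """Compute MA-plot coordinates for a two-color array.

    M is the log2 ratio of the two channels and A the average log2
    intensity; a small offset guards against taking the log of zero.
    """
    r = np.asarray(red, dtype=float) + eps
    g = np.asarray(green, dtype=float) + eps
    m = np.log2(r) - np.log2(g)          # log-ratio (y-axis)
    a = 0.5 * (np.log2(r) + np.log2(g))  # average intensity (x-axis)
    return m, a

# A well-normalized array scatters M symmetrically around zero over
# the whole range of A; here every spot is about twofold up.
m, a = ma_values([200, 800, 1600], [100, 400, 800])
```

Plotting a against m with any plotting library gives the MA plot; dye bias then shows up as a systematic drift of M away from zero, which local regression removes.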
Raw Affy data contains about twenty probes for the same RNA target. Half of these are "mismatch spots", which do not precisely match the target sequence. These can theoretically measure the amount of nonspecific binding for a given target. Robust Multi-array Average (RMA) [ 8 ] is a normalization approach that does not take advantage of these mismatch spots but still must summarize the perfect matches through median polish . [ 9 ] The median polish algorithm, although robust, behaves differently depending on the number of samples analyzed. [ 10 ] Quantile normalization , also part of RMA, is one sensible approach to normalize a batch of arrays in order to make further comparisons meaningful.
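The quantile normalization step can be sketched in a few lines of numpy; this is a simplified illustration that resolves ties arbitrarily, not the RMA implementation itself:

```python
import numpy as np

def quantile_normalize(x):
    """Give every array (column) an identical intensity distribution.

    Each column's sorted values are replaced by the mean of the sorted
    values across all columns; rows are genes, columns are arrays.
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, axis=0)                  # per-array ranking
    reference = np.sort(x, axis=0).mean(axis=1)    # mean quantiles
    out = np.empty_like(x)
    for j in range(x.shape[1]):
        out[order[:, j], j] = reference
    return out
```

After this step every array shares the same empirical distribution, so between-array comparisons reflect rank changes rather than global intensity shifts.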
The current Affymetrix MAS5 algorithm, which uses both perfect match and mismatch probes, continues to enjoy popularity and to do well in head-to-head tests. [ 11 ]
Factor analysis for Robust Microarray Summarization (FARMS) [ 12 ] is a model-based technique for summarizing array data at the perfect match probe level. It is based on a factor analysis model for which a Bayesian maximum a posteriori method optimizes the model parameters under the assumption of Gaussian measurement noise. According to the Affycomp benchmark, [ 13 ] FARMS outperformed all other summarization methods with respect to sensitivity and specificity.
Many strategies exist to identify array probes that show an unusual level of over-expression or under-expression. The simplest one is to call "significant" any probe that differs by an average of at least twofold between treatment groups. More sophisticated approaches are often related to t-tests or other mechanisms that take both effect size and variability into account. Curiously, the p-values associated with particular genes do not reproduce well between replicate experiments, and lists generated by straight fold change perform much better. [ 14 ] [ 15 ] This represents an extremely important observation, since the point of performing experiments has to do with predicting general behavior. The MAQC group recommends using a fold change assessment plus a non-stringent p-value cutoff, further pointing out that changes in the background correction and scaling process have only a minimal impact on the rank order of fold change differences, but a substantial impact on p-values. [ 14 ]
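The simple twofold rule described above can be written down directly. This plain-numpy sketch (function and variable names are illustrative) calls genes by fold change and orders them by magnitude, leaving any non-stringent p-value filter to a separate step:

```python
import numpy as np

def rank_by_fold_change(treat, control, fc_cutoff=2.0):
    """Rank genes by fold change between two groups (genes x replicates).

    Calls "significant" any gene whose mean expression differs by at
    least `fc_cutoff`-fold, then orders the calls by the magnitude of
    the log2 fold change. Values are assumed to be on a linear
    (not log) scale.
    """
    mt = np.asarray(treat, float).mean(axis=1)
    mc = np.asarray(control, float).mean(axis=1)
    log_fc = np.log2(mt / mc)
    called = np.abs(log_fc) >= np.log2(fc_cutoff)   # at least twofold
    order = np.argsort(-np.abs(log_fc))             # strongest change first
    return [int(i) for i in order if called[i]]

# genes 0 and 2 change about 4x and 2x; gene 1 is essentially flat
genes = rank_by_fold_change([[400, 440], [100, 110], [205, 195]],
                            [[100, 110], [100, 95], [100, 100]])
```

Ordering by fold change, rather than by p-value, follows the reproducibility observation cited above.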
Clustering is a data mining technique used to group genes having similar expression patterns. Hierarchical clustering and k-means clustering are widely used techniques in microarray analysis.
Hierarchical clustering is a statistical method for finding relatively homogeneous clusters. Hierarchical clustering consists of two separate phases. Initially, a distance matrix containing all the pairwise distances between the genes is calculated. Pearson's correlation and Spearman's correlation are often used as dissimilarity estimates, but other methods, like Manhattan distance or Euclidean distance , can also be applied. Given the number of distance measures available and their influence on the clustering results, several studies have compared and evaluated different distance measures for the clustering of microarray data, considering their intrinsic properties and robustness to noise. [ 16 ] [ 17 ] [ 18 ] After calculation of the initial distance matrix, the hierarchical clustering algorithm either (A) joins iteratively the two closest clusters starting from single data points (agglomerative, bottom-up approach, which is the more commonly used), or (B) partitions clusters iteratively starting from the complete set (divisive, top-down approach). After each step, a new distance matrix between the newly formed clusters and the other clusters is recalculated. Agglomerative hierarchical clustering methods differ mainly in the linkage criterion used, such as single, complete, or average linkage.
Different studies have already shown empirically that the single-linkage clustering algorithm produces poor results when applied to gene expression microarray data and thus should be avoided. [ 18 ] [ 19 ]
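A toy version of the agglomerative (bottom-up) procedure, using correlation distance and average linkage, makes the merge loop concrete; this is an illustrative sketch, and real analyses would use an optimized library implementation:

```python
import numpy as np

def correlation_distance(x):
    """Pairwise 1 - Pearson correlation between rows (genes)."""
    return 1.0 - np.corrcoef(x)

def average_linkage(dist):
    """Minimal agglomerative clustering with average linkage.

    Repeatedly merges the two closest clusters, returning the merge
    order as (cluster_a, cluster_b) tuples of gene indices.
    """
    clusters = {i: [i] for i in range(len(dist))}
    merges = []
    while len(clusters) > 1:
        keys = list(clusters)
        best = None
        for i, a in enumerate(keys):
            for b in keys[i + 1:]:
                # average pairwise distance between cluster members
                d = np.mean([dist[p, q]
                             for p in clusters[a] for q in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        merges.append((tuple(clusters[a]), tuple(clusters[b])))
        clusters[a] = clusters[a] + clusters.pop(b)
    return merges
```

Average linkage is shown here because, as noted above, single linkage tends to perform poorly on expression data.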
K-means clustering is an algorithm for grouping genes or samples into K groups based on expression pattern. Grouping is done by minimizing the sum of the squares of distances between the data and the corresponding cluster centroid . Thus the purpose of K-means clustering is to classify data based on similar expression. [ 20 ] The K-means clustering algorithm and some of its variants (including k-medoids ) have been shown to produce good results for gene expression data (at least better than hierarchical clustering methods). Empirical comparisons of k-means , k-medoids , hierarchical methods, and different distance measures can be found in the literature. [ 18 ] [ 19 ]
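A bare-bones k-means loop makes the centroid-update idea concrete; this is an illustrative sketch, and library implementations add smarter initialization and empty-cluster handling:

```python
import numpy as np

def kmeans(x, k, iters=100, seed=0):
    """Plain k-means: assign each gene to its nearest centroid, then
    move each centroid to the mean of its members, until stable.

    This minimizes the within-cluster sum of squared distances
    described above.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance of every point to every centroid
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([x[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Each iteration can only decrease the within-cluster sum of squares, so the loop terminates once assignments stop changing.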
Commercial systems for gene network analysis such as Ingenuity [ 21 ] and Pathway studio [ 22 ] create visual representations of differentially expressed genes based on current scientific literature. Non-commercial tools such as FunRich, [ 23 ] GenMAPP and Moksiskaan also aid in organizing and visualizing gene network data procured from one or several microarray experiments. A wide variety of microarray analysis tools are available through Bioconductor written in the R programming language . The frequently cited SAM module and other microarray tools [ 24 ] are available through Stanford University. Another set is available from Harvard and MIT. [ 25 ]
Specialized software tools for statistical analysis to determine the extent of over- or under-expression of a gene in a microarray experiment relative to a reference state have also been developed to aid in identifying genes or gene sets associated with particular phenotypes . One such method of analysis, known as Gene Set Enrichment Analysis (GSEA), uses a Kolmogorov-Smirnov -style statistic to identify groups of genes that are regulated together. [ 1 ] This third-party statistics package offers the user information on the genes or gene sets of interest, including links to entries in databases such as NCBI's GenBank and curated databases such as Biocarta [ 26 ] and Gene Ontology . Protein complex enrichment analysis tool (COMPLEAT) provides similar enrichment analysis at the level of protein complexes. [ 27 ] The tool can identify the dynamic regulation of protein complexes under different conditions or time points. Related systems, PAINT [ 28 ] and SCOPE, [ 29 ] perform a statistical analysis on gene promoter regions, identifying over- and under-representation of previously identified transcription factor response elements. Another statistical analysis tool is Rank Sum Statistics for Gene Set Collections (RssGsc), which uses rank sum probability distribution functions to find gene sets that explain experimental data. [ 30 ] A further approach is contextual meta-analysis, i.e. finding out how a gene cluster responds to a variety of experimental contexts. Genevestigator is a public tool to perform contextual meta-analysis across contexts such as anatomical parts, stages of development, and response to diseases, chemicals, stresses, and neoplasms .
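The Kolmogorov-Smirnov-style running-sum statistic at the heart of GSEA-like tools can be illustrated in a few lines. This is the unweighted form only; the published method additionally weights hits by their correlation with the phenotype:

```python
def enrichment_score(ranked_genes, gene_set):
    """Kolmogorov-Smirnov-style running-sum statistic.

    Walk down the ranked gene list, stepping up on gene-set hits and
    down on misses; the enrichment score is the largest deviation of
    the running sum from zero.
    """
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    up, down = 1.0 / n_hit, 1.0 / n_miss
    running, best = 0.0, 0.0
    for h in hits:
        running += up if h else -down
        if abs(running) > abs(best):
            best = running
    return best

# A set concentrated at the top of the ranking scores near +1;
# one concentrated at the bottom scores near -1.
es = enrichment_score(["a", "b", "c", "d", "e", "f"], {"a", "b"})
```

Significance of the score is then assessed by recomputing it over permuted phenotype labels, as with other permutation-based methods described in this article.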
Significance analysis of microarrays (SAM) is a statistical technique , established in 2001 by Virginia Tusher, Robert Tibshirani and Gilbert Chu , for determining whether changes in gene expression are statistically significant. With the advent of DNA microarrays , it is now possible to measure the expression of thousands of genes in a single hybridization experiment. The data generated is considerable, and a method for sorting out what is significant and what isn't is essential. SAM is distributed by Stanford University in an R-package . [ 31 ]
SAM identifies statistically significant genes by carrying out gene-specific t-tests and computes a statistic d j for each gene j , which measures the strength of the relationship between gene expression and a response variable. [ 32 ] [ 33 ] [ 34 ] This analysis uses non-parametric statistics , since the data may not follow a normal distribution . The response variable describes and groups the data based on experimental conditions. In this method, repeated permutations of the data are used to determine if the expression of any gene is significantly related to the response. The use of permutation-based analysis accounts for correlations in genes and avoids parametric assumptions about the distribution of individual genes. This is an advantage over other techniques (e.g., ANOVA and Bonferroni ), which assume equal variance and/or independence of genes. [ 35 ]
The number of permutations is set by the user when inputting the values for the data set to run SAM.
Types: [ 32 ]
SAM calculates a test statistic for relative difference in gene expression based on permutation analysis of expression data and calculates a false discovery rate. The principal calculations of the program are illustrated below. [ 32 ] [ 33 ] [ 34 ]
The s 0 constant is chosen to minimize the coefficient of variation of d i . r i is equal to the expression levels (x) for gene i under y experimental conditions.
F a l s e d i s c o v e r y r a t e ( F D R ) = M e d i a n ( o r 90 t h p e r c e n t i l e ) o f # o f f a l s e l y c a l l e d g e n e s N u m b e r o f g e n e s c a l l e d s i g n i f i c a n t {\displaystyle \mathrm {False\ discovery\ rate\ (FDR)={\frac {Median\ (or\ 90^{th}\ percentile)\ of\ \#\ of\ falsely\ called\ genes}{Number\ of\ genes\ called\ significant}}} }
Fold changes (t) are specified to guarantee genes called significant change at least a pre-specified amount. This means that the absolute value of the average expression levels of a gene under each of two conditions must be greater than the fold change (t) to be called positive and less than the inverse of the fold change (t) to be called negative.
The SAM algorithm can be stated as:
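A compressed, hedged numpy sketch of the overall procedure follows. The exchangeability constant s 0 is fixed here rather than optimized, balanced permutations and other refinements of the published method are omitted, and the names are illustrative:

```python
import numpy as np

def sam_d(a, b, s0=0.1):
    """Relative difference d_i = r_i / (s_i + s0) per gene, where r_i
    is the mean difference between conditions, s_i the pooled standard
    error, and s0 a small constant (optimized in the real method)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = b.mean(axis=1) - a.mean(axis=1)
    na, nb = a.shape[1], b.shape[1]
    pooled = ((a - a.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) + \
             ((b - b.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    s = np.sqrt((1.0 / na + 1.0 / nb) * pooled / (na + nb - 2))
    return r / (s + s0)

def sam(a, b, n_perm=200, delta=1.0, seed=0):
    """SAM outline: (1) compute and sort the observed d_i; (2) recompute
    d_i over permuted condition labels to get expected order statistics;
    (3) call genes whose observed d deviates from expected by more than
    delta; (4) estimate the FDR as the median number of falsely called
    genes across permutations divided by the number called."""
    rng = np.random.default_rng(seed)
    x = np.hstack([a, b])
    na = np.asarray(a).shape[1]
    obs = np.sort(sam_d(a, b))
    perm = np.empty((n_perm, len(obs)))
    for k in range(n_perm):
        cols = rng.permutation(x.shape[1])
        perm[k] = np.sort(sam_d(x[:, cols[:na]], x[:, cols[na:]]))
    expected = perm.mean(axis=0)
    called = np.abs(obs - expected) > delta
    false_calls = (np.abs(perm - expected) > delta).sum(axis=1)
    fdr = np.median(false_calls) / max(called.sum(), 1)
    return called.sum(), fdr
```

Raising delta trades sensitivity for a lower estimated FDR, which is how the user tunes the gene list in SAM-style analyses.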
Entire arrays may have obvious flaws detectable by visual inspection, pairwise comparisons to arrays in the same experimental group, or by analysis of RNA degradation. [ 39 ] Results may improve by removing these arrays from the analysis entirely.
Depending on the type of array, signal related to nonspecific binding of the fluorophore can be subtracted to achieve better results. One approach involves subtracting the average signal intensity of the area between spots. A variety of tools for background correction and further analysis are available from TIGR, [ 40 ] Agilent ( GeneSpring ), [ 41 ] and Ocimum Bio Solutions (Genowiz). [ 42 ]
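The between-spot subtraction approach amounts to very little code. In this sketch the floor value is an illustrative safeguard for downstream log-transforms, not a feature of any named tool:

```python
import numpy as np

def background_correct(spot_fg, local_bg, floor=1.0):
    """Subtract an estimate of local background from spot foreground.

    `local_bg` would come from the average intensity of the area
    between spots; results are clipped at a small floor so that
    subsequent log-transforms stay defined.
    """
    corrected = np.asarray(spot_fg, float) - np.asarray(local_bg, float)
    return np.maximum(corrected, floor)

# the third spot is dimmer than its surroundings and gets clipped
signal = background_correct([500, 120, 80], [100, 100, 100])
```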
Visual identification of local artifacts, such as printing or washing defects, may likewise suggest the removal of individual spots. This can take a substantial amount of time depending on the quality of array manufacture. In addition, some procedures call for the elimination of all spots with an expression value below a certain intensity threshold. | https://en.wikipedia.org/wiki/Microarray_analysis_techniques |
A microarray database is a repository containing microarray gene expression data. The key uses of a microarray database are to store the measurement data, manage a searchable index, and make the data available to other applications for analysis and interpretation (either directly, or via user downloads).
Microarray databases can fall into two distinct classes:
Some of the best-known public, curated microarray databases are: | https://en.wikipedia.org/wiki/Microarray_databases |
Microautophagy is one of the three common forms of autophagic pathway , but unlike macroautophagy and chaperone-mediated autophagy , it is mediated—in mammals by lysosomal action or in plants and fungi by vacuolar action—by direct engulfment of the cytoplasmic cargo. Cytoplasmic material is trapped in the lysosome/vacuole by a random process of membrane invagination.
The microautophagic pathway is especially important for the survival of cells under conditions of starvation or nitrogen deprivation, or after treatment with rapamycin . Although it is generally a non-selective process, there are three special cases of selective microautophagy: micropexophagy, piecemeal microautophagy of the nucleus, and micromitophagy, all of which are activated only under specific conditions. [ 1 ]
Microautophagy, together with macroautophagy, is necessary for nutrient recycling under starvation. By degrading lipids incorporated into vesicles, microautophagy regulates the composition of the lysosomal / vacuolar membrane. [ 1 ] The microautophagic pathway also functions as one of the mechanisms of glycogen delivery into the lysosomes . [ 2 ] This autophagic pathway engulfs multivesicular bodies formed after endocytosis and therefore plays a role in membrane protein turnover. [ 3 ] Microautophagy is also connected with the maintenance of organellar size, the composition of biological membranes , cell survival under nitrogen restriction, and the transition from starvation-induced growth arrest to logarithmic growth. [ 1 ]
The non-selective microautophagic process can be dissected into five distinct steps. The majority of experiments were done in yeast (vacuolar invaginations), but the molecular principles appear to be more general. [ 1 ]
Invagination is a constitutive process, but its frequency is dramatically increased during periods of starvation. It is a tubular process through which the autophagic tube is formed. [ 4 ]
Formation of the autophagic tubes is mediated through Atg7-dependent ubiquitin -like conjugation (Ublc) or via the vacuolar transporter chaperone ( VTC ) molecular complex, which acts in a calmodulin -dependent manner. Calmodulin involvement in tube formation is a calcium-independent process. [ 5 ] [ 6 ]
The mechanism of vesicle formation is based on lateral sorting. The changed composition of membrane molecules ( lipid enrichment in the autophagic tubes due to removal of transmembrane proteins ) leads to spontaneous vesicle formation via a phase separation mechanism. [ 4 ]
The process of microautophagic vesicle formation is similar to the process of multivesicular body formation. [ 7 ]
Enlargement of the vesicle is mediated by enzymes binding inside the still-unclosed vesicle. Essentially, this process is the reverse of endocytosis . It is followed by pinching off of the vesicle into the lysosomal/vacuolar lumen, a process independent of SNARE proteins. [ 8 ]
The vesicle moves freely in the lumen and is degraded over time by hydrolases (e.g., Atg15p). Nutrients are then released by Atg22p. [ 1 ]
The process of non-selective microautophagy can be observed in all types of eukaryotic cells . Selective microautophagy, on the other hand, is commonly observed in yeast cells.
Three types of selective microautophagy can be distinguished: micropexophagy, piecemeal microautophagy of the nucleus, and micromitophagy. [ 1 ] [ 9 ] | https://en.wikipedia.org/wiki/Microautophagy |
Microbacteriaceae is a family of bacteria of the order Actinomycetales . [ 1 ] They are Gram-positive soil organisms.
The family Microbacteriaceae comprises the following genera: [ 2 ]
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature [ 2 ] and the phylogeny is based on whole-genome sequences. [ 3 ] [ a ]
Pseudoclavibacter
Gulosibacter
Leucobacter
Microbacterium
Agrococcus
Rhodoluna
Curtobacterium
Salinibacterium
Clavibacter
Rathayibacter
Agreia
Cnuibacter
Herbiconiux
Aurantimicrobium
Mycetocola
Okibacterium
Plantibacter
Leifsonia
Gryllotalpicola
Humibacter
Glaciibacter
Cryobacterium
Microterricola
Agromyces
" Tropherymataceae "
This Actinomycetota -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Microbacteriaceae |
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology . It is provided by the American Society for Microbiology , Washington DC , United States .
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education , Microbiology Education and Microbe . Around 40% of the materials are free to educators and students; the remainder require a subscription. As of 2016 [update] the service is suspended.
This microbiology -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MicrobeLibrary |
Microbeads, also called Ugelstad particles [ 1 ] [ 2 ] [ 3 ] after the Norwegian chemist , professor John Ugelstad , who invented them in 1977 and patented the method in 1978, [ 4 ] are uniform polymer particles , typically 0.5 to 500 microns in diameter. Bio-reactive molecules can be absorbed or coupled to their surface, and used to separate biological materials such as cells , proteins , or nucleic acids .
Microbeads have been used for isolation and handling of specific material or molecules, as well as for analyzing sensitive molecules, or those that are in low abundance, e.g. in miniaturized and automated settings.
Microbeads were created when John Ugelstad managed to form polystyrene beads of uniform spherical size at the Norwegian University of Science and Technology (NTNU) [ 5 ] in 1977. [ 4 ] A few years later, he created superparamagnetic microbeads ( Dynabeads ), which exhibit magnetic properties when placed in a magnetic field. When they are removed from the magnetic field, there is no residual magnetism , which led to the development of magnetic separation technology. Other processes such as centrifugation , filtration , columns, or precipitation are not needed.
Microbeads display a large surface area per volume. This, together with uniformity of size and shape, provides for very good accessibility and fast liquid-phase reaction kinetics , and rapid and efficient binding.
Black polyethylene microspheres can have magnetic or conductive functionality, and have uses in electronic devices, EMI shielding, and microscopy techniques. [ 6 ] [ 7 ]
Fluorescent polyethylene microspheres are commonly used to run blind tests on laboratory and industrial processes, in order to develop proper methods and minimize cross-contamination of equipment and materials. Microspheres that appear to be invisible in the daylight can be illuminated to display a bright fluorescent response under UV light . [ 8 ]
Colored polyethylene microspheres are used for fluid flow visualization to enable observation and characterization of flow of particles in a device or be used as visible markers in microscopy and biotechnology. [ 9 ]
Microbeads serve as the main tool for bio-magnetic separations. A range of patented processes and applications have been developed based on the use of microbeads in academic and industrial research. Microbeads are pre-coupled with a ligand ; a biomolecule such as antibody , streptavidin , protein , antigen , DNA / RNA , or other molecule. There are three steps involved in the magnetic separation process: binding of the target to the ligand on the bead surface, magnetic capture and washing of the bead-bound target, and release of the purified target from the beads.
Microbeads are used for cell isolation and cell expansion. Proteins and protein complexes can be separated; e.g., in immunoprecipitation protocols. Molecular studies and diagnostics also benefit from microbeads (e.g. immunoassay IVD and nucleic acid IVD). When microbeads are coupled with streptavidin , they offer a very efficient way to isolate any biotinylated molecule. This is frequently used in DNA / RNA binding protein studies, sequencing , and to prepare single stranded templates. Gene expression analysis also benefits from microbeads, such as isolating mRNA for transcriptional analysis.
There are many uses for microbeads, mostly for biotechnology and biomedical research . Microbeads and magnetic separation technology have enabled a range of innovative methods to benefit research on disease prevention, medicine, and other fields to improve the human condition. | https://en.wikipedia.org/wiki/Microbead_(research) |
A microbeam is a narrow beam of radiation , of micrometer or sub-micrometer dimensions. Together with integrated imaging techniques, microbeams allow precisely defined quantities of damage to be introduced at precisely defined locations. Thus, the microbeam is a tool for investigators to study intra- and inter-cellular mechanisms of damage signal transduction .
Essentially, an automated imaging system locates user-specified targets, and these targets are sequentially irradiated, one by one, with a highly-focused radiation beam. Targets can be single cells , sub-cellular locations , or precise locations in 3D tissues. Key features of a microbeam are throughput, precision, and accuracy . While irradiating targeted regions, the system must guarantee that adjacent locations receive no energy deposition.
The first microbeam facilities were developed in the mid-90s. These facilities were a response to challenges in studying radiobiological processes using broadbeam exposures. Microbeams were originally designed to address two main issues: [ 1 ]
Additionally, microbeams were seen as ideal vehicles to investigate the mechanisms of radiation response.
At the time it was believed that radiation damage to cells was entirely the result of damage to DNA . Charged particle microbeams could probe the radiation sensitivity of the nucleus, which at the time appeared not to be uniformly sensitive. Experiments performed at microbeam facilities have since shown the existence of a bystander effect . A bystander effect is any biological response to radiation in cells or tissues that did not experience a radiation traversal. These "bystander" cells are neighbors of cells that have experienced a traversal. The mechanism for the bystander effect is believed to be due to cell-to-cell communication. The exact nature of this communication is an area of active research for many groups.
At the low doses of relevance to environmental radiation exposure, individual cells only rarely experience traversals by an ionizing particle and almost never experience more than one traversal. For example, in the case of domestic radon exposure, cancer risk estimation involves epidemiological studies of uranium miners. These miners inhale radon gas, which then undergoes radioactive decay , emitting an alpha particle. This alpha particle traverses the cells of the bronchial epithelium, potentially causing cancer. The average lifetime radon exposure of these miners is high enough that cancer risk estimates are driven by data on individuals whose target bronchial cells are subjected to multiple alpha particle traversals. On the other hand, for an average house occupant, about 1 in 2,500 target bronchial cells will be exposed per year to a single alpha particle, but fewer than 1 in 10⁷ of these cells will experience traversals by more than one particle. Therefore, in order to extrapolate from miner to environmental exposures, it is necessary to be able to extrapolate from the effects of multiple traversals to the effects of single traversals of a particle.
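Because particle traversals are random and independent, the number of hits per cell follows Poisson statistics, and the figures quoted above can be checked directly:

```python
import math

def poisson_p(k, lam):
    """P(exactly k traversals) for Poisson-distributed particle hits."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Mean traversals per target bronchial cell per year for an average
# house occupant, as quoted above: about 1 in 2,500.
lam = 1 / 2500
p_one = poisson_p(1, lam)
p_multi = 1 - poisson_p(0, lam) - p_one   # two or more traversals

# p_one stays close to 1 in 2,500, while p_multi is on the order of
# lam**2 / 2, i.e. fewer than 1 in 10**7 cells see more than one
# particle -- consistent with the numbers in the text.
```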
Due to the random distribution of particle tracks, the biological effects of an exact number (particularly one) of particles cannot practically be simulated in the laboratory using conventional broadbeam exposures. Microbeam techniques can overcome this limitation by delivering an exact number (one or more) of particles per cell nucleus. True single-particle irradiations should allow measurement of the effects of exactly one alpha particle traversal, relative to multiple traversals. The application of such systems to low frequency processes such as oncogenic transformation depends very much on the technology involved. With an irradiation rate of at least 5,000 cells per hour, experiments with yields of the order of 10⁻⁴ can practically be accomplished. Hence, high throughput is a desired quality for microbeam systems.
The first microbeam facilities delivered charged particles. A charged particle microbeam facility must meet the following basic requirements: [ 2 ]
Beam spots with diameter down to about two micrometres can be obtained by collimating the beam with pinhole apertures or with a drawn capillary. Sub-micrometre beam spot sizes have been achieved by focusing the beam using various combinations of electrostatic or magnetic lenses. Both methods are used at present.
A vacuum window is necessary in order to perform microbeam experiments on living cells. Generally, this is accomplished with the use of a vacuum-tight window made of a polymer a few micrometres thick, or of silicon nitride 100-500 nm thick.
Cells must be identified and targeted with a high degree of accuracy. This can be accomplished using cell staining and fluorescence microscopy or without staining through the use of techniques such as quantitative phase microscopy or phase contrast microscopy. Ultimately, the objective is to recognize cells, target them, and move them into position for irradiation as fast as possible. Throughputs of up to 15,000 cells per hour have been achieved.
Particles must be counted with a high degree of detection efficiency in order to guarantee that a specific number of ions are delivered to a single cell. Generally, detectors can be placed before or after the target to be irradiated. If the detector is placed after the target, the beam must have sufficient energy to traverse the target and reach the detector. If the detector is placed before the target, the detector must have a minimal effect on the beam. When the desired number of particles are detected, the beam is either deflected or shut off.
Living cells must be maintained under conditions that do not stress the cell, causing an unwanted biological response. Normally, cells must be attached to a substrate so that their position can be determined by the imaging system. Recent advancements in beam position control and high speed imaging have made flow through systems possible ( Flow and Shoot ).
Some facilities have developed or are developing soft x-ray microbeams. In these systems, zone plates are used to focus characteristic x rays generated from a target hit by a charged particle beam. When using synchrotron x-rays as a source, an x-ray microbeam can be obtained by cutting the beam with a precise slit system, owing to the high directionality of synchrotron radiation .
Many biological endpoints have been studied including oncogenic transformation, apoptosis , mutations , and chromosomal aberrations .
There have been nine international workshops, held approximately once every two years, on Microbeam Probes of Cellular Radiation Response. These workshops serve as an opportunity for microbeam personnel to come together and share ideas. The proceedings of the workshops serve as an excellent reference on the state of microbeam-related science. | https://en.wikipedia.org/wiki/Microbeam |
MicrobesOnline is a publicly and freely accessible website that hosts multiple comparative genomic tools for comparing microbial species at the genomic, transcriptomic and functional levels. [ 1 ] [ 2 ] MicrobesOnline was developed by the Virtual Institute for Microbial Stress and Survival, which is based at the Lawrence Berkeley National Laboratory in Berkeley, California. The site was launched in 2005, with regular updates until 2011.
The main aim of MicrobesOnline is to provide an easy-to-use resource that integrates a wealth of data from multiple sources. This integrated platform facilitates studies in comparative genomics , metabolic pathway analysis, genome composition, functional genomics as well as in protein domain and family data. It also provides tools to search or browse the database with genes, species, sequences, orthologous groups , gene ontology (GO) terms or pathway keywords, etc. Another one of its main features is the Gene Cart, which allows users to keep a record of their genes of interest. One of the highlights of the database is the overall navigation accessibility and interconnection between the tools.
The development of high-throughput methods for genome sequencing has brought about a wealth of data that requires sophisticated bioinformatics tools for its analysis and interpretation. [ 3 ] Numerous tools now exist to study genomic sequence data and extract information from different perspectives. However, the lack of unified nomenclature and standardised protocols across tools makes direct comparison of their results very difficult. [ 4 ] Additionally, the user is forced to constantly switch between various websites or software packages, adjusting the format of their data to fit individual requirements. MicrobesOnline was developed with the aim of integrating the capacities of different tools into a unified platform for easy comparison between analysis results, with a focus on prokaryote species and basal eukaryotes .
MicrobesOnline hosts genomic, gene expression and fitness data for a wide range of microbial species. Genomic data is available for 1752 bacteria , 94 archaea and 119 eukaryotes, for a total of 3707 genomes, 2842 of which are marked as being complete. Gene expression data is available for 113 species, and fitness data is available for 4 organisms. [ 5 ]
MicrobesOnline provides diverse tools for searching, analysing and integrating information related to bacterial genomes for applications in four major areas: genetic information, functional genomics, comparative genomics and metabolic pathway studies. [ 6 ] The homepage of MicrobesOnline is the portal for accessing its functions, and includes six main sections: the top navigation elements, a genome selector, examples of the tutorial based on E. coli K-12, a link to the Genome-Linked Application for Metabolic Maps (GLAMM), website highlights and the “about MicrobesOnline” list. As an ongoing project, the authors of MicrobesOnline state that the tools for data analysis and the support of more data types will be expanded. [ 7 ]
Information on microbial genes stored in MicrobesOnline includes sequences ( genes , transcripts and proteins ), genomic loci , gene annotations and some sequence statistics. This information can be accessed through three features displayed on the homepage of MicrobesOnline: the sequence search and advanced search in the top navigation section, and the genome selector. For the sequence search tool, MicrobesOnline integrates BLAT , FastHMM and FastBLAST [ 8 ] to search protein sequences, and uses MEGABLAST to search nucleotide sequences. [ 9 ] It also provides a link to BLAST as an alternative way of searching sequences. The advanced search tool, on the other hand, enables a user to search genetic information by categories, custom query, wild-card search and field-specific search, using the gene name, the description, the cluster of orthologous groups (COGs) id, the GO term, the KEGG enzyme commission (EC) number, etc. as keywords.
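The seed-and-extend strategy behind nucleotide search tools such as MEGABLAST can be illustrated with a toy sketch: find exact k-mer seeds shared between query and subject, then extend each seed while the bases keep matching. The sequences and function below are hypothetical illustrations of the principle, not the MicrobesOnline or BLAST implementation:

```python
def kmer_positions(seq, k):
    """Index every k-mer in seq by its start positions."""
    index = {}
    for i in range(len(seq) - k + 1):
        index.setdefault(seq[i:i + k], []).append(i)
    return index

def seed_and_extend(query, subject, k=4):
    """Return the longest exact match found by extending shared k-mer seeds."""
    index = kmer_positions(subject, k)
    best = ""
    for qi in range(len(query) - k + 1):
        seed = query[qi:qi + k]
        for si in (index.get(seed) or []):
            # extend the seed to the right while bases keep matching
            length = k
            while (qi + length < len(query) and si + length < len(subject)
                   and query[qi + length] == subject[si + length]):
                length += 1
            if length > len(best):
                best = query[qi:qi + length]
    return best

# Toy example: a short query against a longer subject sequence
subject = "ATGGCGTACGTTAGCATGCGT"
query = "CGTACGTTAGC"
print(seed_and_extend(query, subject))  # CGTACGTTAGC
```

Real tools differ mainly in seed length (MEGABLAST uses a much larger word size) and in allowing gapped, scored extensions rather than exact matching.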
The “genomes selected” box of the genome selector lists genomes added from the favourite genome list on the left or the ones searched by keywords. On the right side of the genome selector, four actions can be applied after selecting genomes: the “find genes” interface searches the gene name in the selected genomes and displays results in the gene list view; the “info” button lists a brief summary of selected genomes in the Summary View; the “GO” button opens a GO Browser called VertiGo which tabulates the number of genes under different GO items; finally, the “pathway” button initiates a pathway browser that illustrates the complete pathways of all organisms in the MicrobesOnline database.
In addition, the genome names shown in the summary view lead to a single-genome data view that presents a wealth of information about the selected genome. In the gene list view, the links “G O D H S T B...” lead the user to a locus information tool, where detailed information such as operon & regulon , domains & families, sequences, annotations, etc. is shown.
An important feature for storing a user's work is the Gene Cart. Many web pages of MicrobesOnline displaying genetic information contain a link to add genes of interest to the session gene cart, which is available to all users. This is a temporary gene cart, and as such it loses its contents when the user closes the web browser. Genes in the session gene cart can be saved to the permanent gene cart, which is only available to registered users after logging in.
One goal of setting up MicrobesOnline is to store functional information on microbial genomes. Such information includes gene ontology and microarray-based gene expression profiles, which can be accessed through two interfaces, the GO Browser and the Expression Data Viewer, respectively. The GO Browser provides links to genes organised by gene ontology terms, and the Expression Data Viewer provides both access to expression profiles and information on experimental conditions.
The GO Browser, also known as VertiGo, is used by MicrobesOnline to search and visualise the GO hierarchy, a unified vocabulary that describes properties of gene products, covering cellular components, molecular functions and biological processes. The Genome Selector of the MicrobesOnline homepage provides a direct way to browse the GO hierarchy of the selected genomes, as well as to obtain a list of genes under a selected GO term, which can then be added to the session gene cart for further analysis.
The Expression Data Viewer is an interface for searching and inspecting microarray-based gene expression experiments and expression profiles . It consists of several components: an experiment browser for searching specific experiments in selected genomes under selected experimental conditions, an expression experiment viewer providing details of each microarray experiment, a gene expression viewer showing a heat map of the expression levels of the selected gene and genes in the same operon , and finally, a profile search tool for searching gene expression profiles. The Expression Data Viewer can be accessed in three ways: the “Browse Functional Data” in the navigator bar, the “Gene Expression Data” on the homepage and the “Gene expression” list in the single-genome data view, where the expression data are available. The single-genome data view can also show a protein-protein interaction browser that allows the inspection of interaction complexes and the download of expression data (e.g. Escherichia coli str. K-12 substr. MG1655). Furthermore, the user can launch a MultiExperiment Viewer (MeV) in the single-genome data view for analysing and visualising expression data.
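The heat map such a viewer displays typically shows per-gene normalised expression values across conditions. A minimal sketch of one common normalisation (z-scoring each gene's profile), using made-up expression values rather than MicrobesOnline data:

```python
import math

def zscore_profile(values):
    """Normalise one gene's expression across conditions to mean 0, sd 1."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    if sd == 0:
        return [0.0] * len(values)  # flat profile: no signal
    return [(v - mean) / sd for v in values]

# Hypothetical log2 expression levels of two genes in four conditions
expression = {
    "geneA": [2.0, 4.0, 6.0, 8.0],   # steadily induced
    "geneB": [5.0, 5.0, 5.0, 5.0],   # unchanged
}
heatmap_rows = {gene: zscore_profile(v) for gene, v in expression.items()}
print(heatmap_rows["geneB"])  # [0.0, 0.0, 0.0, 0.0]
```

Each row of `heatmap_rows` would then be mapped to a colour scale, so induced and repressed conditions stand out regardless of a gene's absolute expression level.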
MicrobesOnline stores information of gene homology and phylogeny for comparative genomic studies, which can be accessed through two interfaces. The first one is the Tree Browser, which draws a species tree or a gene tree for the selected gene and its gene neighbourhood. The second one is the Orthology Browser, which is an extension of the Genome Browser and demonstrates the selected gene within the context of its gene neighbourhood aligned with orthologs in other selected genomes. [ 10 ] Both browsers provide options to save a gene in the session gene cart for further analysis.
The tree browser can be accessed by searching for a gene with the Find Genes tool on the homepage using its VIMSS id (e.g. VIMSS15779). Once the gene context view has been accessed through the “Browse genomes by trees” option, a gene tree and a gene context diagram are displayed. In addition, the “View species tree” option opens a species tree view, which shows a species tree alongside the gene tree. The tree browser also enables users to choose both genes and genomes according to their similarity, and can reveal horizontal gene transfers among genomes.
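The gene trees that such a browser displays can be built from pairwise sequence distances. A minimal UPGMA-style clustering sketch, with hypothetical gene labels and distances (illustrating the general idea, not necessarily the algorithm MicrobesOnline uses):

```python
def upgma(labels, dist):
    """Greedy UPGMA: repeatedly merge the closest pair of clusters.

    `dist` maps frozenset({a, b}) -> distance. Returns a nested-tuple tree.
    """
    clusters = {label: 1 for label in labels}  # cluster -> number of leaves
    trees = {label: label for label in labels}
    while len(clusters) > 1:
        # find the closest pair among the current clusters
        pair = min(
            (frozenset((a, b)) for a in clusters for b in clusters if a != b),
            key=lambda p: dist[p],
        )
        a, b = tuple(pair)
        merged = a + "," + b
        na, nb = clusters[a], clusters[b]
        # distance from the merged cluster to every other: size-weighted mean
        for c in clusters:
            if c not in (a, b):
                d = (dist[frozenset((a, c))] * na
                     + dist[frozenset((b, c))] * nb) / (na + nb)
                dist[frozenset((merged, c))] = d
        trees[merged] = (trees[a], trees[b])
        clusters[merged] = na + nb
        del clusters[a], clusters[b]
    return trees[merged]

# Hypothetical pairwise distances between three gene sequences
d = {
    frozenset(("g1", "g2")): 0.10,
    frozenset(("g1", "g3")): 0.40,
    frozenset(("g2", "g3")): 0.42,
}
tree = upgma(["g1", "g2", "g3"], d)
print(tree)  # nested grouping, e.g. (('g1', 'g2'), 'g3')
```

Here g1 and g2 are merged first because their distance is smallest, reproducing the intuition that the browser's trees group the most similar sequences together.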
The Orthology Browser displays orthologs of genomes compared to the query genome by choosing multiple genomes from the “Select Organism(s) to Display” box.
The locus information can be viewed through the “view genes” option, and this gene can be added to the session gene cart, or its gene expression data (including the heatmap) can be downloaded. Alternatively, a gene context view appears when browsing genomes by trees.
The Pathway Browser lets users navigate the Kyoto Encyclopedia of Genes and Genomes (KEGG) [ 11 ] pathway maps, displaying the predicted presence or absence of enzymes for up to two selected genomes. The map of a particular pathway and a comparison between two kinds of microbes can be shown in the pathway browser. The enzyme commission number (e.g. 3.1.3.25) provides a link to the gene list view, which shows information on the selected enzyme and allows the user to add genes to the session gene cart.
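The presence/absence comparison such a pathway browser performs amounts to set operations on EC numbers. The sketch below uses illustrative EC numbers and made-up genome annotations, not data taken from KEGG or MicrobesOnline:

```python
# Hypothetical sets of EC numbers annotated in two genomes
genome_a = {"2.7.1.1", "5.3.1.9", "2.7.1.11", "4.1.2.13"}
genome_b = {"2.7.1.1", "5.3.1.9", "4.1.2.13", "3.1.3.25"}

# Enzymes a toy pathway map would colour as "required"
pathway = {"2.7.1.1", "5.3.1.9", "2.7.1.11"}

shared = pathway & genome_a & genome_b       # present in both genomes
only_a = (pathway & genome_a) - genome_b     # present only in genome A
missing = pathway - (genome_a | genome_b)    # absent from both

print(sorted(shared))   # ['2.7.1.1', '5.3.1.9']
print(sorted(only_a))   # ['2.7.1.11']
print(sorted(missing))  # []
```

In the real browser, each of these sets would be rendered as a different colour on the KEGG map for the selected pair of genomes.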
The GLAMM is another tool for searching and visualising metabolic pathways in a unified web interface. It helps users to identify or construct novel, transgenic pathways. [ 12 ]
MicrobesOnline has integrated numerous tools for analysing sequences, gene expression profiles and protein-protein interactions into an interface called Bioinformatics Workbench, which is accessed via gene carts. Analyses currently supported include multiple sequence alignments , construction of phylogenetic trees , motif searches and scans, summaries of gene expression profiles and protein-protein interactions. In order to save computational resources, a user is allowed to run two concurrent jobs for at most four hours and all results are saved temporarily until the session is terminated. [ 13 ] Results can be shared with other users or groups via the resource access control tool.
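A motif scan, one of the workbench analyses listed above, can be sketched as a sliding-window comparison against a consensus with a tolerated number of mismatches. The promoter sequence and motif below are made up for illustration; real workbench scans use statistical motif models rather than simple mismatch counts:

```python
def motif_hits(sequence, motif, max_mismatches=1):
    """Return start positions where `motif` matches with <= max_mismatches."""
    hits = []
    for i in range(len(sequence) - len(motif) + 1):
        window = sequence[i:i + len(motif)]
        mismatches = sum(a != b for a, b in zip(window, motif))
        if mismatches <= max_mismatches:
            hits.append(i)
    return hits

# Hypothetical promoter sequence scanned for a TATAAT-like box
print(motif_hits("GGTATAATCCTATGATG", "TATAAT"))  # [2, 10]
```

The first hit is an exact match, the second differs at one position, showing why tolerance for mismatches matters when scanning degenerate regulatory motifs.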
MicrobesOnline is built on the integration of the data of an array of databases that manage different aspects of its capabilities. A comprehensive list is as follows: [ 14 ]
MicrobesOnline was updated every 3 to 9 months from 2007 to 2011, during which new features as well as data for new species were added. However, there have been no new release notes since March 2011. [ 39 ]
MicrobesOnline is compatible with other similar platforms of integrated microbe data, such as IMG and RegTransBase , given that standard identifiers of genes are maintained throughout the database. [ 40 ]
There have been other efforts to create a unified platform for prokaryote analysis tools; however, most of them focus on one set of analysis types. A few examples of these focused databases include those with an emphasis on metabolic data analysis (Microme [ 41 ] ), comparative genomics (MBGD [ 42 ] and the OMA Browser [ 43 ] ), regulons and transcription factors (RegPrecise [ 44 ] ), and comparative functional genomics (Pathline [ 45 ] ), among many others. However, notable efforts have been made by other teams to create comprehensive platforms that largely overlap with the capabilities of MicrobesOnline. MicroScope [ 46 ] and the Integrated Microbial Genomes System [ 47 ] [ 48 ] (IMG) are examples of popular and recently updated databases (as of September 2014).
metaMicrobesOnline [ 49 ] was compiled by the same developers as MicrobesOnline and extends its capacities by focusing on the phylogenetic analysis of metagenomes . With a web interface similar to MicrobesOnline, the user can toggle between the two sites via the “switch to” link on the homepage. | https://en.wikipedia.org/wiki/MicrobesOnline
Microbes and Man is a popularising book by the English microbiologist John Postgate FRS [ 1 ] on the role of microorganisms in human society , first published in 1969, and still in print in 2017. Critics called it a "classic" [ 2 ] and "a pleasure to read". [ 3 ]
The book is structured as follows: [ 4 ]
The 4th edition has 32 illustrations, ranging from photographs of microscopic algae , protozoa , fungi , viruses and bacteria , to the macroscopic effects of microbes such as a sulphur-forming lake in Libya and fish killed by bacterial reduction of sulphate in water. [ 4 ]
The book has been translated into nine languages: Arabic, Chinese, Czech, French, German, Japanese, Polish, Portuguese, and Spanish. [ 5 ] [ 6 ]
The Guardian described the book as "a passionate case for the importance of micro-organisms". [ 7 ]
In his textbook Essential Microbiology , Stuart Hogg recommends the book to readers who want a general overview of microbes and their uses , stating "there can be no better starting point than John Postgate's classic". [ 2 ]
New Scientist described the book as "a pleasure to read from first page to last. It is a literal statement. Start to read it and the first page, describing the astonishing dispersion of microbes, from the upper atmosphere to the depths of the sea, will provide any reader with enough wonder and excitement to take them through to the last page and the surface of Venus." [ 3 ] The magazine commented that Postgate's "admirable, elegantly written and painlessly informative book" came close to losing its alliterative title, at the hands of "militant feminists " at Penguin Books editing the paperback version in 1986. [ a ] [ 8 ]
Dennis R. Schneider, reviewing the 3rd edition in 1992 for Cell , described the book as having "succinctly and carefully explained examples of how microorganisms affect our lives ... one of the classics of popular science", standing alongside classics like Rosebury's Life of Man and De Kruif's Microbe Hunters . Schneider wrote that the book's Britishness "'colours' the text", but Postgate's emphasis on the beneficial and not just the harmful effects of microbes was welcome and admirably explored. He noted few errors, but objected to Postgate's assertion that AIDS "originated by transmission from a primate ", for which there was at that time no evidence. Schneider would have liked a "better and longer" account of molecular biology . His chief criticism, however, was that by the 1990s the book no longer had an audience, since "the Victorian ideal of the educated middle class has vanished into the wasteland of broken families, double digit unemployment and a damaged educational system". All the same, he found the book "of value and beauty (except perhaps to the publisher)". [ 9 ]
Charles W. Kim, reviewing the 3rd edition for The Quarterly Review of Biology , stated that "If the author's intent was to present the impact of the ubiquitous microorganisms on the environment and humans, he has succeeded admirably", describing Postgate's style as "unique". [ 10 ]
D. Roy Cullimore, in his Practical Atlas for Bacterial Identification , comments that all four editions were "easy reading", addressing the challenges that microbes presented to human society. He suggests that "ideally" all four books be read in sequence for an overview of the development of microbiology in half a century. [ 11 ] | https://en.wikipedia.org/wiki/Microbes_and_Man |
Microbial DNA barcoding is the use of DNA metabarcoding to characterize a mixture of microorganisms . DNA metabarcoding is a method of DNA barcoding that uses universal genetic markers to identify DNA of a mixture of organisms. [ 1 ]
Using metabarcoding to assess microbial communities has a long history. In 1972, Carl Woese , Mitchell Sogin and Stephen Sogin first tried to detect several families within bacteria using the 5S rRNA gene. [ 2 ] Only a few years later, a new tree of life with three domains was proposed, again by Woese and colleagues, who were the first to use the small subunit of the ribosomal RNA (SSU rRNA) gene to distinguish between bacteria, archaea and eukaryotes . [ 3 ] From this approach, the SSU rRNA gene became the most frequently used genetic marker for both prokaryotes (16S rRNA) and eukaryotes ( 18S rRNA ). The tedious process of cloning these DNA fragments for sequencing was accelerated by the steady improvement of sequencing technologies. With the development of high-throughput sequencing (HTS) in the early 2000s and the ability to handle this massive amount of data using modern bioinformatics and cluster algorithms, investigating microbial life became much easier.
Genetic diversity varies from species to species. It is therefore possible to identify distinct species by recovering a short DNA sequence from a standard part of the genome. This short sequence is defined as the barcode sequence. A part of the genome suitable as a barcode should show high variation between different species , but little variation between individuals of the same species, so that species can be distinguished reliably. [ 4 ] [ 5 ] For both bacteria and archaea the 16S rRNA/rDNA gene is used. It is a common housekeeping gene in all prokaryotic organisms and is therefore used as a standard barcode to assess prokaryotic diversity. For protists, the corresponding 18S rRNA/rDNA gene is used. [ 6 ] To distinguish different species of fungi, the ITS ( Internal Transcribed Spacer ) region of the ribosomal cistron is used. [ 7 ]
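The barcode requirement described above (between-species distances larger than within-species distances, the so-called barcoding gap) can be checked directly on candidate sequences. A minimal sketch with made-up aligned fragments, two individuals per species:

```python
def p_distance(a, b):
    """Fraction of differing positions between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical aligned barcode fragments: two individuals per species
species = {
    "sp1": ["ACGTACGTAC", "ACGTACGTAT"],
    "sp2": ["ACGTTTGTAC", "ACGTTTGTAC"],
}

within = [p_distance(*species["sp1"]), p_distance(*species["sp2"])]
between = [p_distance(a, b) for a in species["sp1"] for b in species["sp2"]]

# A usable barcode shows a "gap": every between-species distance exceeds
# every within-species distance.
print(max(within) < min(between))  # True
```

For a real marker evaluation, the same comparison would be run over many individuals and species, and the overlap (or gap) between the two distance distributions examined.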
The existing diversity of the microbial world has not yet been unraveled completely, although we know that it is mainly composed of bacteria, fungi and unicellular eukaryotes. [ 4 ] Taxonomic identification of microbial eukaryotes requires exceedingly skillful expertise and is often difficult due to the small size of the organisms, fragmented individuals, hidden diversity and cryptic species . [ 8 ] [ 9 ] Furthermore, prokaryotes simply cannot be taxonomically assigned using traditional methods like microscopy , because they are too small and morphologically indistinguishable. Via DNA metabarcoding, it is therefore possible to identify organisms without taxonomic expertise by matching short high-throughput sequencing (HTS)-derived gene fragments to a reference sequence database, e.g. NCBI . [ 10 ] These qualities make DNA barcoding a cost-effective, reliable and less time-consuming method, compared to traditional ones, to meet the increasing need for large-scale environmental assessments.
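Matching HTS-derived fragments to a reference database can be sketched as an alignment-free k-mer comparison: the fragment is assigned to the reference sequence with which it shares the most k-mers. The reference entries below are invented for illustration, not real NCBI records:

```python
def kmers(seq, k=4):
    """Set of all overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def assign_taxon(fragment, reference, k=4):
    """Return the reference taxon sharing the most k-mers with the fragment."""
    frag = kmers(fragment, k)
    return max(reference, key=lambda taxon: len(frag & kmers(reference[taxon], k)))

# Toy reference database: taxon name -> barcode sequence
reference = {
    "Escherichia coli": "ATGGCTAGCTAGGCTAACGT",
    "Bacillus subtilis": "TTGACCGGTTAACCGGTTAA",
}
print(assign_taxon("GCTAGCTAGGCT", reference))  # Escherichia coli
```

Production classifiers work on the same principle but add statistical confidence estimates and use curated reference sets many orders of magnitude larger.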
Many studies have followed the first usage by Woese et al. and now cover a variety of applications. Metabarcoding is not only used in biological or ecological research: in medicine and human biology, bacterial barcodes are used, e.g. to investigate the microbiome and bacterial colonization of the human gut in normal and obese twins [ 11 ] or in comparison studies of newborn, child and adult gut bacteria composition. [ 12 ] Additionally, barcoding plays a major role in the biomonitoring of e.g. rivers and streams [ 13 ] and in grassland restoration. [ 14 ] Conservation parasitology, environmental parasitology and paleoparasitology also rely on barcoding as a useful tool in disease investigation and management. [ 15 ]
Cyanobacteria are a group of photosynthetic prokaryotes . As in other prokaryotes, the taxonomy of cyanobacteria using DNA sequences is mostly based on similarity within the 16S ribosomal gene. [ 16 ] Thus, the most common barcode used for identification of cyanobacteria is the 16S rDNA marker. While it is difficult to define species within prokaryotic organisms, the 16S marker can be used for determining individual operational taxonomic units (OTUs). In some cases, these OTUs can also be linked to traditionally defined species and can therefore be considered a reliable representation of evolutionary relationships. [ 17 ]
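OTU delimitation from 16S sequences is commonly done by clustering reads at a fixed identity threshold (conventionally around 97% for 16S). A minimal greedy-clustering sketch with artificially short, made-up reads; real pipelines use dedicated clustering tools and much longer sequences:

```python
def identity(a, b):
    """Fraction of identical positions between two equal-length aligned reads."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otus(reads, threshold=0.9):
    """Assign each read to the first OTU centroid it matches, else open a new OTU."""
    centroids = []
    assignment = []
    for read in reads:
        for idx, centroid in enumerate(centroids):
            if identity(read, centroid) >= threshold:
                assignment.append(idx)
                break
        else:
            centroids.append(read)       # read becomes a new OTU centroid
            assignment.append(len(centroids) - 1)
    return assignment

# Toy aligned fragments; threshold lowered to 90% because reads are only 10 bp
reads = ["ACGTACGTAC", "ACGTACGTAT", "TTTTACGGCC", "TTTTACGGCA"]
print(greedy_otus(reads))  # [0, 0, 1, 1]
```

The first two reads fall into one OTU and the last two into another, mirroring how near-identical 16S sequences are collapsed into a single taxonomic unit.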
However, when analyzing the taxonomic structure or biodiversity of a whole cyanobacterial community (see DNA metabarcoding ), it is more informative to use markers specific for cyanobacteria. Universal 16S bacterial primers have been used successfully to isolate cyanobacterial rDNA from environmental samples , but they also recover many bacterial sequences. [ 18 ] [ 19 ] Cyanobacteria-specific [ 20 ] or phyto-specific 16S markers are commonly used for focusing on cyanobacteria only. [ 21 ] A few sets of such primers have been tested for barcoding or metabarcoding of environmental samples and gave good results, screening out the majority of non-photosynthetic or non-cyanobacterial organisms. [ 22 ] [ 21 ] [ 23 ] [ 24 ]
The number of sequenced cyanobacterial genomes available in databases is increasing. [ 25 ] Besides the 16S marker, phylogenetic studies can therefore also include more variable sequences, such as sequences of protein -coding genes (gyrB, rpoC, rpoD, [ 26 ] rbcL, hetR, [ 27 ] psbA, [ 28 ] [ 29 ] rnpB, [ 30 ] nifH, [ 31 ] nifD [ 32 ] ), the internal transcribed spacer of ribosomal RNA genes (16S-23S rRNA-ITS) [ 33 ] [ 25 ] or the phycocyanin intergenic spacer (PC-IGS). [ 33 ] However, nifD and nifH can only be used for identification of nitrogen-fixing cyanobacterial strains.
DNA barcoding of cyanobacteria can be applied in various ecological, evolutionary and taxonomical studies. Some examples include assessment of cyanobacterial diversity and community structure, [ 34 ] identification of harmful cyanobacteria in ecologically and economically important waterbodies [ 35 ] and assessment of cyanobacterial symbionts in marine invertebrates . [ 24 ] It has the potential to serve as part of routine monitoring programs for the occurrence of cyanobacteria, as well as for early detection of potentially toxic species in waterbodies. This might help us detect harmful species before they start to form blooms and thus improve our water management strategies. Species identification based on environmental DNA could be particularly useful for cyanobacteria, as traditional identification using microscopy is challenging: their morphological characteristics, which are the basis for species delimitation, vary under different growth conditions. [ 20 ] [ 36 ] Identification under the microscope is also time-consuming and therefore relatively costly. Molecular methods can detect much lower concentrations of cyanobacterial cells in a sample than traditional identification methods.
A reference database is a collection of DNA sequences, each assigned to a species or a function. It can be used to link sequences obtained by molecular methods to a pre-existing taxonomy. General databases like the NCBI platform include all kinds of sequences, from whole genomes to specific marker genes of all organisms. There are also platforms where only sequences from a distinct group of organisms are stored, e.g. the UNITE database [ 37 ] exclusively for fungal sequences or the PR2 database solely for protist ribosomal sequences. [ 38 ] Some databases are curated, which allows taxonomic assignment with higher accuracy than using uncurated databases as a reference. | https://en.wikipedia.org/wiki/Microbial_DNA_barcoding
Microbial art , [ 1 ] agar art , [ 2 ] or germ art [ 3 ] is artwork created by culturing microorganisms in certain patterns. [ 4 ] The microbes used can be bacteria , yeast , fungi , or less commonly, protists . The microbes can be chosen for their natural colours or engineered to express fluorescent proteins and viewed under ultraviolet light to make them fluoresce in colour.
Agar plates are used as a canvas, while pigmented or fluorescent bacteria and yeasts represent paint. In order to preserve a piece of microbial art after a sufficient incubation, the microbe culture is sealed with epoxy . [ 2 ]
Microbe species can be artistically chosen for their natural colours to form a palette. Suitable species of bacteria (with their colours) include Bacillus subtilis (cream to brown), Chromobacterium violaceum (violet), Escherichia coli (colourless), Micrococcus luteus (yellow), Micrococcus roseus (pink), Proteus mirabilis , Pseudomonas aeruginosa (brown), Pseudomonas fluorescens (naturally blue-green fluorescent with pyoverdine ), Serratia marcescens (pink or orange), Staphylococcus aureus (yellow), and Vibrio fischeri ( bioluminescent ). [ 5 ]
Yeast species – which are fungi – used in microbial art include Saccharomyces cerevisiae (yellow–white) Aspergillus flavus (yellow–green spores), Aspergillus ochraceus (yellow), Aureobasidium pullulans (black), Candida albicans (whitish buff), Candida sake , Candida sp. (whitish), Cladosporium herbarum (brown to black), Cladosporium resinae , Epicoccum nigrum (yellow, orange, red, brown, and black), Fusarium sp., Rhodotorula sp., and Scopulariopsis brevicaulis . [ 5 ] [ a ]
Protist species used in microbial art include Euglena gracilis ( photosynthetic , green) and Physarum polycephalum (yellow–green). [ 5 ]
A technique called "bacteriography" involves selectively killing certain areas of a bacterial culture with radiation in order to produce artistic patterns. After incubation, the culture is sealed with acrylic . [ 6 ]
The type of medium in the agar plates is also important. CHROMagar Candida is a differential medium used to identify different Candida species. When grown on this medium, C. albicans is light green, C. tropicalis is steel blue with purple around the edges, and C. krusei is rose pink with white around the edges. [ 7 ] However, on a different medium, C. tropicalis forms maroon colonies. [ 8 ] The color of the medium itself can also be changed by microbes. In TCBS agar , bromothymol blue and thymol blue turn yellow when the pH decreases, such as when bacteria consume sucrose . In this way, the background color of the medium can be changed from dark green to light yellow. [ 9 ]
Alexander Fleming , who is commonly credited with the discovery of penicillin in 1928, was known for creating germ paintings. [ 3 ] Throughout his career, Fleming’s paintings became more colorful as he came to know more microbial species. He would incorporate them into his paintings of ballerinas, families, and other images. [ 10 ]
The biochemist Roger Tsien won the 2008 Nobel Prize in Chemistry for his contributions to knowledge of green fluorescent protein (GFP) that has been used to create art-like works. [ 11 ]
The American Society for Microbiology hosts an annual contest for microbial art: the Agar Art Contest. [ 2 ] The contest was organized after a picture of a Christmas tree, made by Rositsa Tashkova, went viral in 2014. [ 12 ] The 2015 edition received 85 submissions, of which the microbial artwork created by Mehmet Berkmen and Maria Peñil, called Neurons , won first place. [ 13 ] The artwork used yellow Nesterenkonia and orange Deinococcus and Sphingomonas . [ 14 ] [ 15 ]
In 2020, the ASM received over 200 submissions, and awarded first place to Joanne Dungo for her multi-plate creation titled "The Gardener." [ 16 ] | https://en.wikipedia.org/wiki/Microbial_art |
Microbial biodegradation is the use of bioremediation and biotransformation methods to harness the naturally occurring ability of microbial xenobiotic metabolism to degrade, transform or accumulate environmental pollutants, including hydrocarbons (e.g. oil), polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), heterocyclic compounds (such as pyridine or quinoline ), pharmaceutical substances, radionuclides and metals.
Interest in the microbial biodegradation of pollutants has intensified in recent years, [ 1 ] [ 2 ] and major methodological breakthroughs have enabled detailed genomic, metagenomic, proteomic, bioinformatic and other high-throughput analyses of environmentally relevant microorganisms , providing new insights into biodegradative pathways and the ability of organisms to adapt to changing environmental conditions.
Biological processes play a major role in the removal of contaminants and take advantage of the catabolic versatility of microorganisms to degrade or convert such compounds. In environmental microbiology , genome -based global studies are increasing the understanding of metabolic and regulatory networks, as well as providing new information on the evolution of degradation pathways and molecular adaptation strategies to changing environmental conditions.
The increasing amount of bacterial genomic data provides new opportunities for understanding the genetic and molecular bases of the degradation of organic pollutants . [ 3 ] Aromatic compounds are among the most persistent of these pollutants and lessons can be learned from the recent genomic studies of Burkholderia xenovorans LB400 and Rhodococcus sp. strain RHA1, two of the largest bacterial genomes completely sequenced to date. These studies have helped expand our understanding of bacterial catabolism , non-catabolic physiological adaptation to organic compounds , and the evolution of large bacterial genomes . First, the metabolic pathways from phylogenetically diverse isolates are very similar with respect to overall organization. Thus, as originally noted in pseudomonads , a large number of "peripheral aromatic" pathways funnel a range of natural and xenobiotic compounds into a restricted number of "central aromatic" pathways. Nevertheless, these pathways are genetically organized in genus-specific fashions, as exemplified by the β-ketoadipate and Paa pathways. Comparative genomic studies further reveal that some pathways are more widespread than initially thought. Thus, the Box and Paa pathways illustrate the prevalence of non-oxygenolytic ring-cleavage strategies in aerobic aromatic degradation processes. Functional genomic studies have been useful in establishing that even organisms harboring high numbers of homologous enzymes seem to contain few examples of true redundancy. For example, the multiplicity of ring-cleaving dioxygenases in certain rhodococcal isolates may be attributed to the cryptic aromatic catabolism of different terpenoids and steroids. Finally, analyses have indicated that recent genetic flux appears to have played a more significant role in the evolution of some large genomes, such as LB400's, than others.
However, the emerging trend is that the large gene repertoires of potent pollutant degraders such as LB400 and RHA1 have evolved principally through more ancient processes. That this is true in such phylogenetically diverse species is remarkable and further suggests the ancient origin of this catabolic capacity. [ 4 ]
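The "funneling" of many peripheral aromatic pathways into a few central ones, described above, can be pictured as a simple many-to-few mapping. The substrate-to-intermediate pairs below are textbook-style examples chosen for illustration, not a curated pathway database:

```python
# Toy illustration of "funneling": many peripheral aromatic pathways
# converge on a few central intermediates.
peripheral = {
    "benzoate": "catechol",
    "phenol": "catechol",
    "naphthalene": "catechol",
    "4-hydroxybenzoate": "protocatechuate",
    "vanillate": "protocatechuate",
}
central = sorted(set(peripheral.values()))
print(len(peripheral), "peripheral substrates funnel into",
      len(central), "central intermediates:", central)
```

The many-to-few shape of this mapping is the point: a genome needs one ring-cleavage route per central intermediate, not one per substrate.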
Anaerobic microbial mineralization of recalcitrant organic pollutants is of great environmental significance and involves intriguing novel biochemical reactions. [ 5 ] In particular, hydrocarbons and halogenated compounds have long been doubted to be degradable in the absence of oxygen, but the isolation of hitherto unknown anaerobic hydrocarbon-degrading and reductively dehalogenating bacteria during the last decades provided ultimate proof for these processes in nature. While such research involved mostly chlorinated compounds initially, recent studies have revealed reductive dehalogenation of bromine and iodine moieties in aromatic pesticides. [ 6 ] Other reactions, such as biologically induced abiotic reduction by soil minerals, [ 7 ] has been shown to deactivate relatively persistent aniline-based herbicides far more rapidly than observed in aerobic environments. Many novel biochemical reactions were discovered enabling the respective metabolic pathways, but progress in the molecular understanding of these bacteria was rather slow, since genetic systems are not readily applicable for most of them. However, with the increasing application of genomics in the field of environmental microbiology , a new and promising perspective is now at hand to obtain molecular insights into these new metabolic properties. Several complete genome sequences were determined during the last few years from bacteria capable of anaerobic organic pollutant degradation. The ~4.7 Mb genome of the facultative denitrifying Aromatoleum aromaticum strain EbN1 was the first to be determined for an anaerobic hydrocarbon degrader (using toluene or ethylbenzene as substrates ). The genome sequence revealed about two dozen gene clusters (including several paralogs ) coding for a complex catabolic network for anaerobic and aerobic degradation of aromatic compounds. The genome sequence forms the basis for current detailed studies on regulation of pathways and enzyme structures. 
Further genomes of anaerobic hydrocarbon degrading bacteria were recently completed for the iron-reducing species Geobacter metallireducens (accession nr. NC_007517) and the perchlorate-reducing Dechloromonas aromatica (accession nr. NC_007298), but these are not yet evaluated in formal publications. Complete genomes were also determined for bacteria capable of anaerobic degradation of halogenated hydrocarbons by halorespiration : the ~1.4 Mb genomes of Dehalococcoides ethenogenes strain 195 and Dehalococcoides sp. strain CBDB1 and the ~5.7 Mb genome of Desulfitobacterium hafniense strain Y51. Characteristic for all these bacteria is the presence of multiple paralogous genes for reductive dehalogenases, implicating a wider dehalogenating spectrum of the organisms than previously known. Moreover, genome sequences provided unprecedented insights into the evolution of reductive dehalogenation and differing strategies for niche adaptation. [ 8 ]
Recently, it has become apparent that some organisms, including Desulfitobacterium chlororespirans , originally evaluated for halorespiration on chlorophenols, can also use certain brominated compounds, such as the herbicide bromoxynil and its major metabolite, as electron acceptors for growth. Iodinated compounds may be dehalogenated as well, though the process may not satisfy the need for an electron acceptor. [ 6 ]
Bioavailability , the amount of a substance that is physicochemically accessible to microorganisms, is a key factor in the efficient biodegradation of pollutants. O'Loughlin et al. (2000) [ 9 ] showed that, with the exception of kaolinite clay, most soil clays and cation exchange resins attenuated biodegradation of 2-picoline by Arthrobacter sp. strain R1, as a result of adsorption of the substrate to the clays. Chemotaxis , the directed movement of motile organisms towards or away from chemicals in the environment, is an important physiological response that may contribute to effective catabolism of molecules in the environment. In addition, mechanisms for the intracellular accumulation of aromatic molecules via various transport mechanisms are also important. [ 10 ]
Petroleum oil contains aromatic compounds that are toxic to most life forms. Episodic and chronic pollution of the environment by oil causes major disruption to the local ecological environment. Marine environments in particular are especially vulnerable, as oil spills near coastal regions and in the open sea are difficult to contain and make mitigation efforts more complicated. In addition to pollution through human activities, approximately 250 million litres of petroleum enter the marine environment every year from natural seepages. [ 11 ] Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a recently discovered group of specialists, the hydrocarbonoclastic bacteria (HCB). [ 12 ] Alcanivorax borkumensis was the first HCB to have its genome sequenced. [ 13 ] In addition to hydrocarbons, crude oil often contains various heterocyclic compounds , such as pyridine, which appear to be degraded by similar mechanisms to hydrocarbons. [ 14 ]
Many synthetic steroid compounds, such as some sex hormones, frequently appear in municipal and industrial wastewaters, acting as environmental pollutants whose strong metabolic activities negatively affect ecosystems. Since these compounds are common carbon sources for many different microorganisms, their aerobic and anaerobic mineralization has been extensively studied. The interest of these studies lies in the biotechnological applications of sterol-transforming enzymes for the industrial synthesis of sex hormones and corticoids. Very recently, the catabolism of cholesterol has acquired high relevance because it is involved in the infectivity of the pathogen Mycobacterium tuberculosis ( Mtb ). [ 1 ] [ 15 ] Mtb causes tuberculosis disease, and it has been demonstrated that novel enzyme architectures have evolved to bind and modify steroid compounds like cholesterol in this organism and in other steroid-utilizing bacteria as well. [ 16 ] [ 17 ] These new enzymes might be of interest for their potential in the chemical modification of steroid substrates.
Sustainable development requires the promotion of environmental management and a constant search for new technologies to treat vast quantities of wastes generated by increasing anthropogenic activities. Biotreatment, the processing of wastes using living organisms, is an environmentally friendly, relatively simple and cost-effective alternative to physico-chemical clean-up options. Confined environments, such as bioreactors , have been engineered to overcome the physical, chemical and biological limiting factors of biotreatment processes in highly controlled systems. The great versatility in the design of confined environments allows the treatment of a wide range of wastes under optimized conditions. A correct assessment requires the consideration of various microorganisms, with their variety of genomes and expressed transcripts and proteins, so a great number of analyses are often required. Using traditional genomic techniques, such assessments are limited and time-consuming. However, several high-throughput techniques originally developed for medical studies can be applied to assess biotreatment in confined environments. [ 18 ]
The study of the fate of persistent organic chemicals in the environment has revealed a large reservoir of enzymatic reactions with a large potential in preparative organic synthesis, which has already been exploited for a number of oxygenases on pilot and even on industrial scale. Novel catalysts can be obtained from metagenomic libraries and DNA sequence based approaches. Our increasing capabilities in adapting the catalysts to specific reactions and process requirements by rational and random mutagenesis broadens the scope for application in the fine chemical industry, but also in the field of biodegradation . In many cases, these catalysts need to be exploited in whole cell bioconversions or in fermentations , calling for system-wide approaches to understanding strain physiology and metabolism and rational approaches to the engineering of whole cells as they are increasingly put forward in the area of systems biotechnology and synthetic biology. [ 19 ]
In the ecosystem, different substrates are attacked at different rates by consortia of organisms from different kingdoms. Aspergillus and other moulds play an important role in these consortia because they are adept at recycling starches, hemicelluloses, celluloses, pectins and other sugar polymers. Some aspergilli are capable of degrading more refractory compounds such as fats, oils, chitin, and keratin. Maximum decomposition occurs when there is sufficient nitrogen, phosphorus and other essential inorganic nutrients. Fungi also provide food for many soil organisms. [ 20 ]
For Aspergillus the process of degradation is the means of obtaining nutrients. When these moulds degrade human-made substrates, the process usually is called biodeterioration. Both paper and textiles (cotton, jute, and linen) are particularly vulnerable to Aspergillus degradation. Our artistic heritage is also subject to Aspergillus assault. To give but one example, after Florence in Italy flooded in 1966, 74% of the isolates from a damaged Ghirlandaio fresco in the Ognissanti church were Aspergillus versicolor . [ 21 ] | https://en.wikipedia.org/wiki/Microbial_biodegradation |
Microbial biogeography is a subset of biogeography , a field that concerns the distribution of organisms across space and time. [ 1 ] Although biogeography traditionally focused on plants and larger animals, recent studies have broadened this field to include distribution patterns of microorganisms . This extension of biogeography to smaller scales—known as "microbial biogeography"—is enabled by ongoing advances in genetic technologies.
The aim of microbial biogeography is to reveal where microorganisms live, at what abundance, and why. Microbial biogeography can therefore provide insight into the underlying mechanisms that generate and hinder biodiversity . [ 2 ] Microbial biogeography also enables predictions of where certain organisms can survive and how they respond to changing environments, making it applicable to several other fields such as climate change research.
Schewiakoff (1893) theorized about the cosmopolitan habitat of free-living protozoans. [ 3 ] In 1934, Lourens Baas Becking , based on his own research in California's salt lakes, as well as work by others on salt lakes worldwide, [ 4 ] concluded that "everything is everywhere, but the environment selects". [ 5 ] Baas Becking attributed the first half of this hypothesis to his colleague Martinus Beijerinck (1913). [ 6 ] [ 7 ]
Baas Becking's hypothesis of cosmopolitan microbial distribution would later be challenged by other works. [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The biogeography of macro-organisms (i.e., plants and animals that can be seen with the naked eye) has been studied since the eighteenth century. For macro-organisms, biogeographical patterns (i.e., which organism assemblages appear in specific places and times) appear to arise from both past and current environments. For example, polar bears live in the Arctic but not the Antarctic , while the reverse is true for penguins ; although both polar bears and penguins have adapted to cold climates over many generations (the result of past environments), the distance and warmer climates between the north and south poles prevent these species from spreading to the opposite hemisphere (the result of current environments). This demonstrates the biogeographical pattern known as "isolation by geographic distance", by which the limited ability of a species to physically disperse across space (rather than any selective genetic reasons) restricts the geographical range over which it can be found. [ citation needed ]
The biogeography of microorganisms (i.e., organisms that cannot be seen with the naked eye, such as fungi and bacteria) is an emerging field enabled by ongoing advancements in genetic technologies, in particular cheaper DNA sequencing with higher throughput that now allows analysis of global datasets on microbial biology at the molecular level. When scientists began studying microbial biogeography, they anticipated a lack of biogeographic patterns due to the high dispersibility and large population sizes of microbes, which were expected to ultimately render geographical distance irrelevant. Indeed, in microbial ecology the oft-repeated saying by Lourens Baas Becking that "everything is everywhere, but the environment selects" has come to mean that as long as the environment is ecologically appropriate, geological barriers are irrelevant. [ 12 ] However, recent studies show clear evidence for biogeographical patterns in microbial life, which challenge this common interpretation: the existence of microbial biogeographic patterns disputes the idea that "everything is everywhere" while also supporting the idea that environmental selection includes geography as well as historical events that can leave lasting signatures on microbial communities. [ 2 ]
Microbial biogeographic patterns are often similar to those of macro-organisms. Microbes generally follow well-known patterns such as the distance decay relationship, the abundance-range relationship, and Rapoport's rule . [ 13 ] [ 14 ] This is surprising given the many disparities between microorganisms and macro-organisms, in particular their size ( micrometers vs. meters), time between generations (minutes vs. years), and dispersibility (global vs. local). However, important differences between the biogeographical patterns of microorganisms and macro-organisms do exist, and likely result from differences in their underlying biogeographic processes (e.g., drift, dispersal , selection, and mutation ). For example, dispersal is an important biogeographical process for both microbes and larger organisms, but small microbes can disperse across much greater ranges and at much greater speeds by traveling through the atmosphere (for larger animals dispersal is much more constrained due to their size). [ 2 ] As a result, many microbial species can be found in both northern and southern hemispheres, while larger animals are typically found only at one pole rather than both. [ 15 ] Furthermore, microorganisms, such as bacteria, are affected by conditions at very small scales that may differ from the scales that are typically considered for macro-organisms. For example, soil bacterial diversity is shaped by the carbon input and connectivity in microscale aqueous habitats. [ 16 ]
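The distance decay relationship mentioned above can be made concrete with a small numerical sketch. The Python snippet below (all OTU abundance profiles are invented, purely for illustration) computes Bray–Curtis similarity between a reference community and hypothetical communities at increasing geographic distance, showing similarity falling as taxa turn over:

```python
# Sketch of a distance-decay curve using Bray-Curtis similarity.
# All site data below are invented for illustration.

def bray_curtis_similarity(a, b):
    """1 minus Bray-Curtis dissimilarity for two abundance vectors."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 2 * shared / (sum(a) + sum(b))

# Hypothetical OTU abundance tables for sites at increasing distance
# from a reference site (taxonomic turnover grows with distance).
reference = [30, 25, 20, 15, 10]
sites = {
    1: [28, 26, 19, 16, 11],    # 1 km away: nearly identical
    100: [35, 10, 25, 5, 25],   # 100 km: partial turnover
    10000: [2, 60, 1, 30, 7],   # 10,000 km: strong turnover
}

for km, community in sites.items():
    s = bray_curtis_similarity(reference, community)
    print(f"{km:>6} km: similarity = {s:.2f}")
```

With these made-up profiles, similarity drops from 0.97 at 1 km to 0.75 at 100 km and 0.50 at 10,000 km — the qualitative shape of a distance-decay curve.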
Larger organisms tend to exhibit latitudinal gradients in species diversity , with greater biodiversity existing in the tropics and decreasing toward temperate and polar regions. In contrast, studies on indoor fungal communities [ 14 ] and global topsoil microbiomes [ 17 ] found microbial biodiversity to be significantly higher in temperate zones than in the tropics. Interestingly, different buildings exhibited the same indoor fungal composition in any given location, where similarity increased with proximity. [ 14 ] Thus, despite human efforts to control indoor climates, outside environments appear to be the strongest determinant of indoor fungal composition. [ 14 ] On the other hand, the strong biogeographical pattern of soil bacteria is typically attributed to changes in environmental factors such as soil pH. [ 18 ] [ 19 ] However, soil pH may be a biogeographical proxy [ 18 ] that is affected by a soil's climatic water balance, [ 20 ] which mediates carbon inputs and the connectivity of bacterial aqueous habitats. [ 16 ] [ 21 ]
Certain microbial populations exist in opposite hemispheres and at complementary latitudes. These 'bipolar' (or 'antitropical') distributions are much rarer in macro-organisms; although macro-organisms exhibit latitude gradients, 'isolation by geographic distance' prevents bipolar distributions (e.g., polar bears are not found at both poles). In contrast, a study on marine surface bacteria [ 15 ] showed not only a latitude gradient, but also complementary distributions with similar populations at both poles, suggesting no "isolation by geographic distance". This is likely due to differences in the underlying biogeographic process of dispersal, as microbes tend to disperse at high rates and over great distances by traveling through the atmosphere. [ citation needed ]
Microbial diversity can exhibit striking seasonal patterns at a single geographical location. This is largely due to dormancy, a microbial feature not seen in larger animals that allows microbial community composition to fluctuate in relative abundance of persistent species (rather than actual species present). This is known as the "seed-bank hypothesis" [ 22 ] and has implications for our understanding of ecological resilience and thresholds to change. [ 23 ]
Panspermia suggests that life can be distributed throughout outer space via comets , asteroids , and meteoroids . Panspermia assumes that life can survive the harsh space environment, which features vacuum conditions, intense radiation, extreme temperatures, and a dearth of available nutrients. Many microorganisms are able to evade such stressors by forming spores or entering a state of low-metabolic dormancy. [ 24 ] Studies in microbial biogeography have even shown that the ability of microbes to enter and successfully emerge from dormancy when their respective environmental conditions are favorable contributes to the high levels of microbial biodiversity observed in almost all ecosystems . [ 25 ] Thus microbial biogeography can be applied to panspermia as it predicts that microbes are able to protect themselves from the harsh space environment, emerge when conditions become favorable, and take advantage of their dormancy capability to enhance biodiversity wherever they may land. [ citation needed ]
Directed panspermia is the deliberate transport of microorganisms to colonize another planet . If aiming to colonize an Earth-like environment, microbial biogeography can inform decisions on the biological payload of such a mission. In particular, microbes exhibit latitudinal ranges according to Rapoport's rule , which states that organisms living at lower latitudes (near the equator ) are found within smaller latitude ranges than those living at higher latitudes (near the poles). Thus the ideal biological payload would include widespread, higher-latitude microorganisms that can tolerate a wider range of climates. This is not necessarily the obvious choice, as these widespread organisms are also rare in microbial communities and tend to be weaker competitors when faced with endemic organisms. Still, they can survive in a range of climates and thus would be ideal for inhabiting otherwise lifeless Earth-like planets with uncertain environmental conditions. Extremophiles , although tough enough to withstand the space environment, may not be ideal for directed panspermia as any given extremophile species requires a very specific climate to survive. However, if the target was closer to Earth, such as a planet or moon in our Solar System , it may be possible to select a specific extremophile species for the well-defined target environment. [ citation needed ] | https://en.wikipedia.org/wiki/Microbial_biogeography |
Microbial cell factory is an approach to bioengineering which considers microbial cells as a production facility in which the optimization process largely depends on metabolic engineering . [ 1 ] MCFs are a subset of cell factories, which comprise engineered microbes and plant cells. [ 2 ] In the 1980s and 1990s, MCFs were originally conceived to improve productivity of cellular systems and metabolite yields through strain engineering . [ 3 ] A MCF develops native and nonnative metabolites through targeted strain design. [ 4 ] In addition, MCFs can shorten the synthesis cycle while reducing the difficulty of product separation. [ 5 ]
Prior to MCFs, scientists employed traditional engineering techniques to produce various commodities. These methodologies include modifying metabolic pathways , eliminating enzymes, or the balancing of ATP to drive metabolic flux. [ 6 ] However, when these approaches were applied for industrial productions, they could not withstand the industrial environments that consisted of toxins and fluctuating temperatures. [ 6 ] Ultimately, the techniques were never able to scale up and output bio-products that were obtained in the laboratory. [ 7 ]
Thus, MCFs were developed by using a heterologous biosynthesis pathway in a microbial host. [ 8 ] As a host, MCFs take in various substrates and convert them into valuable compounds. [ 9 ] These products range from fuels, chemicals, and food ingredients to pharmaceuticals. [ 10 ]
In microbial cells, the cell walls are either Gram-positive or Gram-negative, a classification based on the Gram stain test . Gram-positive cell walls have a thick peptidoglycan layer and no outer lipid membrane, while Gram-negative bacteria have a thin peptidoglycan layer and an outer lipid membrane. [ 11 ] Although a thick Gram-positive cell wall is advantageous, it is easier to attack, as the peptidoglycan layer absorbs antibiotics and cleaning products. A Gram-negative cell wall is more resistant to such attacks and more difficult to destroy.
The membranes of microbial cells are bilayers composed of phospholipids . [ 12 ] The phospholipids may vary in chain length and branching. Ultimately, the phospholipid composition determines the membrane properties, such as fluidity and charge, that regulate interactions with nearby proteins. In addition, the membrane governs the development of the cell's morphology and cell size. [ 13 ] Escherichia coli is often utilized as a baseline to differentiate and define the membranes of MCFs. [ 14 ]
The nucleoid forms an irregularly shaped region within a prokaryotic cell, containing all or most of the genetic material needed for reproduction. [ 15 ] The nucleoid controls the activity of the MCF and the reproduction of both the cell and its products.
Current methods of programming MCFs utilize strain engineering, which relies on random mutagenesis. [ 16 ] In addition, the conventional techniques are labor-intensive, time-consuming, and difficult to analyze. [ 16 ] This has led many scientific trials to utilize genome editing tools to improve MCFs, such as ZFNs , TALENs , and CRISPR . These approaches allow genetic manipulation and analysis, specifically by creating double-stranded breaks within a genome sequence.
Zinc-finger nucleases (ZFNs) were the first genome editing tool able to target any genomic site. By inducing a double-stranded break, ZFNs can facilitate targeted editing. However, when employed to reinforce MCFs, ZFNs have an unusually low success rate. In various trials, the ZFNs were unable to obtain a three-finger array, or the triplet could not be assembled into a new sequence. [ 16 ] [ 17 ] Thus, incorporation of ZFNs into MCFs has remained strenuous and costly.
Transcription activator-like effector nucleases (TALENs) work in a similar manner to ZFNs, but TALENs are based on fusion proteins. TALENs have been applied to numerous hosts, such as yeast and zebrafish. [ 18 ] Many developments have explored fairyTALE, a liquid-phase-synthesis TALEN platform, to create nucleases, activators, and repressors for MCFs. [ 19 ] Although TALENs have fewer obstacles than ZFNs, they are still troublesome, as assembling large quantities of repeats into an array remains a significant problem. [ 20 ]
Clustered regularly interspaced palindromic repeats (CRISPR) and the associated proteins (Cas) have become one of the most popular genome editing tools due to their efficiency and low cost. CRISPR/Cas9 has been utilized to enhance MCFs in yeast and in bacteria such as E. coli . [ 21 ] When optimizing yeast, the CRISPR/Cas9 system from S. pyogenes has been found to be the most influential strategy. For E. coli , studies have determined a strategy preventing genome instability to be the most robust metabolic engineering approach, regardless of the specific methodology. [ 21 ]
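One concrete reason the S. pyogenes Cas9 system is convenient for strain engineering is its simple targeting rule: a 20-nt protospacer must be immediately followed by an "NGG" PAM on the targeted strand. The sketch below (Python; the DNA string is hypothetical, not from any real genome) scans one strand for candidate target sites under that rule:

```python
# Illustrative sketch: locating candidate S. pyogenes Cas9 target sites
# on one strand of a DNA sequence. A usable target here is a 20-nt
# protospacer immediately followed by an "NGG" PAM.
# The demo sequence is invented for demonstration.

def find_cas9_targets(seq):
    """Return (protospacer, pam, position) tuples for NGG PAM sites."""
    targets = []
    for i in range(20, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":            # N-G-G motif
            targets.append((seq[i - 20:i], pam, i - 20))
    return targets

demo = "ATGCGTACCGTTAGCATTACGATCCTGGAGCTTAAGGCATCGA"
for spacer, pam, pos in find_cas9_targets(demo):
    print(f"pos {pos:2d}: {spacer} | PAM {pam}")
```

A real guide-design tool would also scan the reverse complement and score off-target risk; this sketch only shows the PAM-adjacency rule itself.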
The most significant advantage of MCFs is the ability to be utilized in industrial environments with minimal limitations. Through metabolic engineering, MCFs rely on innovative strategic tools for the development and optimization of metabolic and gene regulatory networks for efficient production. [ 22 ] Going from lab to large-scale development involves consideration of three factors: product yield, productivity, and product titre. [ 22 ] A common dilemma, however, is the trade-off between product yield and productivity: maximizing productivity ultimately lowers product yield, and vice versa.
To combat this issue, strategies have been developed to maximize all three factors. One of the most common techniques is utilizing fed-batch culture . Fed-batch culture is, in the broadest sense, defined as an operational technique in biotechnological processes where one or more nutrients (substrates) are fed (supplied) to the bioreactor during cultivation and in which the product(s) remain in the bioreactor until the end of the run. [ 23 ] Another method is utilizing a continuous cultivation strategy. The premise behind continuous cultivation is to maintain a steady-state cell metabolism over long periods of time. [ 24 ] By having multiple approaches for MCF, companies may customize each process to their specific product(s).
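The fed-batch idea can be illustrated with a toy simulation using Monod growth kinetics (Python; all parameter values are illustrative assumptions under a constant-volume simplification, not drawn from any real process). Feeding substrate continuously keeps its concentration low while biomass accumulates well beyond what the starting medium alone could support:

```python
# Minimal constant-volume sketch of a fed-batch culture with Monod
# growth kinetics, integrated by forward Euler.
# All parameters are illustrative, not from any specific process.

mu_max, Ks, Yxs = 0.4, 0.5, 0.5   # 1/h, g/L, g biomass per g substrate
X, S = 0.1, 0.0                    # biomass and substrate (g/L)
feed = 0.3                         # substrate feed rate (g/L/h)
dt, hours = 0.01, 24.0

t = 0.0
while t < hours:
    mu = mu_max * S / (Ks + S)     # specific growth rate (Monod)
    dX = mu * X                    # biomass growth
    dS = feed - mu * X / Yxs       # fed substrate minus consumption
    X += dX * dt
    S = max(S + dS * dt, 0.0)
    t += dt

print(f"final biomass: {X:.2f} g/L, residual substrate: {S:.3f} g/L")
```

After an initial exponential phase, consumption catches up with the feed and the culture settles into substrate-limited, nearly linear growth — the residual substrate stays low while biomass keeps rising, which is the operating regime fed-batch processes aim for.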
The commercialization of MCFs has ranged from chemicals to biofuels. | https://en.wikipedia.org/wiki/Microbial_cell_factory |
A microbial consortium , or microbial community , is two or more bacterial or microbial groups living symbiotically . [ 1 ] [ 2 ] Consortia can be endosymbiotic or ectosymbiotic , or occasionally may be both. The protist Mixotricha paradoxa , itself an endosymbiont of the Mastotermes darwiniensis termite, is always found as a consortium of at least one endosymbiotic coccus , multiple ectosymbiotic species of flagellate or ciliate bacteria, and at least one species of helical Treponema bacteria that forms the basis of Mixotricha protists' locomotion. [ 3 ]
The concept of a consortium was first introduced by Johannes Reinke in 1872, [ 4 ] [ 5 ] and in 1877 the term symbiosis was introduced and later expanded on. Evidence for symbiosis between microbes strongly suggests it to have been a necessary precursor of the evolution of land plants and for their transition from algal communities in the sea to land. [ 6 ]
Microbes hold promising application potential to raise the efficiency of bioprocesses when dealing with substances that are resistant to decomposition. [ 8 ] [ 9 ] A large number of microorganisms have been isolated based on their ability to degrade recalcitrant materials such as lignocellulose and polyurethanes. [ 10 ] [ 11 ] In many cases of degradation efficiency, microbial consortia have been found superior when compared to single strains. [ 12 ] For example, novel thermophilic consortia of Brevibacillus spp. and Aneurinibacillus sp. have been isolated from the environment to enhance polymer degradation. [ 13 ]
Two approaches exist to obtain microbial consortia, involving either (i) synthetic assembly from scratch by combining several isolated strains, [ 14 ] or (ii) obtainment of complex microbial communities from environmental samples. [ 15 ] For the latter, an enrichment process is often used to obtain the desired microbial consortia. [ 16 ] [ 17 ] [ 18 ] For instance, a termite gut-derived consortium showing high xylanase activity was enriched on raw wheat straw as the sole carbon source, and was able to transform lignocellulose into carboxylates under anaerobic conditions. [ 19 ]
Relatively high diversity levels are still observed despite the use of enrichment steps when working from environmental samples, [ 18 ] likely due to the high functional redundancy observed in environmental microbial communities, which is a key asset of their functional stability. [ 20 ] [ 21 ] This intrinsic diversity may stand as a bottleneck in attempts to move forward to practical application due to (i) a potential negative correlation with efficiency, [ 22 ] (ii) the presence of microbial cheaters that contribute nothing to degradation, (iii) security threats posed by the presence of known or unknown pathogens, and (iv) risks of losing the properties of interest if they are supported by rare taxa. [ 23 ]
Utilization of microbial consortia with less complexity, but equal efficiency, can lead to more controlled and optimized industrial processes. [ 24 ] For instance, a large proportion of functional genes were remarkably altered and the efficiency of diesel biodegradation was increased by reducing the biodiversity of a microbial community from diesel-contaminated soils. [ 25 ] Therefore, it is crucial to find reliable strategies to narrow down the diversity toward optimized microbial consortia gained from environmental samples. A reductive-screening approach was applied to construct effective minimal microbial consortia for lignocellulose degradation based on different metabolic functional groups. [ 24 ] Additionally, artificial selection approaches (dilution, toxicity, and heat) have been also employed to obtain bacterial consortia. [ 26 ] Among them, dilution-to-extinction has already proven its efficiency for obtaining functional microbial consortia from seawater and rumen liquor . [ 27 ] [ 28 ] [ 29 ] Dilution-to-extinction is expected to provide more advantages compared to conventional isolation and assembly as it (i) generates many microbial combinations ready to be screened, (ii) includes strains from the initial microbial pool that might be lost due to cultivation/isolation biases, and (iii) ensures that all microbes are physically present and interacting spontaneously. [ 30 ] [ 23 ]
Microbialites are lithified microbial mats that grow in benthic freshwater and marine environments. Microbialites are the earliest known fossilised evidence of life, dating back 3.7 billion years. [ citation needed ] Today modern microbialites are scarce, and are formed mainly by Pseudomonadota (formerly Proteobacteria), cyanobacteria , sulphate-reducing bacteria , diatoms , and microalgae . [ citation needed ] These microorganisms produce adhesive compounds that cement sand and join other rocky materials to form mineral " microbial mats ". The mats build layer by layer, growing gradually over time. [ citation needed ]
Although various studies have shown that single microorganisms can exert beneficial effects on plants, it is increasingly evident that when a microbial consortium — two or more interacting microorganisms — is involved, additive or synergistic results can be expected. This occurs, in part, due to the fact that multiple species can perform a variety of tasks in an ecosystem like the plant root rhizosphere . Beneficial mechanisms of plant growth stimulation include enhanced nutrient availability, phytohormone modulation, biocontrol , and biotic and abiotic stress tolerance, exerted by different microbial players within the rhizosphere, such as plant-growth-promoting bacteria (PGPB) and fungi such as Trichoderma and Mycorrhizae . [ 31 ]
The diagram on the right illustrates that rhizosphere microorganisms like plant-growth-promoting bacteria (PGPB), arbuscular mycorrhizal fungi (AMF), and fungi from the genus Trichoderma spp. can establish beneficial interactions with plants, promoting plant growth and development, increasing the plant defense system against pathogens, promoting nutrient uptake, and enhancing tolerance to different environmental stresses. Rhizosphere microorganisms can influence one another, and the resulting consortia of PGPB + PGPB (e.g., a nitrogen-fixing bacterium such as Rhizobium spp. and Pseudomonas fluorescens ), AMF + PGPB, and Trichoderma + PGPB may have synergistic effects on plant growth and fitness, providing the plant with enhanced benefits to overcome biotic and abiotic stress. Dashed arrows indicate beneficial interactions between AMF and Trichoderma. [ 31 ]
The capacity of microbes to degrade recalcitrant materials has been extensively explored for environmental remediation and industrial production. Significant achievements have been made with single strains, but focus is now going toward the use of microbial consortia owing to their functional stability and efficiency. However, assembly of simplified microbial consortia (SMC) from complex environmental communities is still far from trivial due to large diversity and the effect of biotic interactions . [ 23 ]
Keratins are recalcitrant fibrous materials with cross-linked components, representing the most abundant proteins in epithelial cells . [ 32 ] They are estimated to have considerable economic value after biodegradation . [ 33 ] An efficient keratinolytic microbial consortium (KMCG6) was previously enriched from an environmental sample through cultivation in keratin medium. [ 18 ] Despite reducing the microbial diversity during the enrichment process, KMCG6 still included several OTUs scattered amongst seven bacterial genera. [ 23 ]
In 2020, Kang et al., using a strategy based on enrichment and dilution-to-extinction cultures, extracted from this original consortium (KMCG6) simplified microbial consortia (SMC) with fewer species but similar keratinolytic activity. [ 23 ] Serial dilutions were performed on a keratinolytic microbial consortium pre-enriched from a soil sample. An appropriate dilution regime (10^9) was selected to construct a SMC library from the enriched microbial consortium. Further sequencing analysis and keratinolytic activity assays demonstrated that the obtained SMC displayed reduced microbial diversity, together with varied taxonomic compositions and biodegradation capabilities. More importantly, several SMC possessed equivalent levels of keratinolytic efficiency compared to the initial consortium, showing that simplification can be achieved without loss of function and efficiency. [ 23 ]
As shown in the diagram on the right, the workflow for this study included four steps: (1) Enrichment for the desired traits (e.g., keratinolytic activity) by selection in keratin medium, where keratin is the sole carbon source. This process was evaluated by functional assessments (cell density, enzyme activity, and ratio of the residual substrate) and compositional analysis. (2) Serial dilutions were conducted on the enriched effective microbial consortia. Six dilutions were prepared, from dilution 10^2 to 10^10, with 24 replicates. The dissimilarity between dilutions was evaluated by Euclidean distance calculation based on functional assessment criteria. (3) Library construction was done from the dilution offering the optimal dissimilarity among replicates; dilution 10^9 was selected to construct the SMC library in this case. (4) Selection of the most promising SMC is based on functional and compositional characterization. [ 23 ]
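The dilution-to-extinction step can be mimicked in silico. The toy simulation below (Python; the consortium composition and taxon names are invented for illustration) draws ever-smaller inocula from a source community and shows how rare taxa tend to drop out, leaving a simplified consortium:

```python
# Sketch of the dilution-to-extinction idea: sampling fewer and fewer
# cells from a community tends to exclude rare taxa, yielding simplified
# consortia. The community proportions below are invented.
import random

random.seed(42)

# Hypothetical source consortium: taxon -> relative abundance
community = {"A": 0.50, "B": 0.30, "C": 0.15, "D": 0.04, "E": 0.01}
taxa = list(community)
weights = list(community.values())

def inoculate(n_cells):
    """Draw n_cells from the community; return the set of taxa present."""
    return set(random.choices(taxa, weights=weights, k=n_cells))

for n in (10000, 100, 10, 3):
    richness = len(inoculate(n))
    print(f"{n:>5} cells -> {richness} taxa")
```

With only a few cells per inoculum, the rarest members (here "D" and "E") are usually lost, which is exactly how serial dilution produces replicate SMC with fewer species that can then be screened for retained function.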
Consortia are commonly found in humans, with the predominant examples being the skin consortium and the intestinal consortium which provide protection and aid in human nutrition. Additionally, bacteria have been identified as existing within the brain (previously believed to be sterile), with metagenomic evidence suggesting the species found may be enteric in origin. [ 34 ] [ 35 ] As the species found appear to be well-established, have no discernible impact on human health, and are species known to form consortia when found in the gut, it is highly likely they have also formed a symbiotic consortium within the brain. [ 36 ]
Synthetic microbial consortia (commonly called co-cultures) are multi-population systems that can contain a diverse range of microbial species and can be tailored to serve a variety of industrial and ecological interests. For synthetic biology , consortia extend the ability to engineer novel cell behaviors to the population level. Consortia are the norm rather than the exception in nature, and generally prove more robust than monocultures. [ 37 ] Just over 7,000 species of bacteria have been cultured and identified to date. Most of the estimated 1.2 million remaining bacterial species have yet to be cultured and identified, in part because they cannot be cultured axenically . [ 38 ] When designing synthetic consortia, or editing naturally occurring consortia, synthetic biologists track pH, temperature, initial metabolic profiles, incubation times, growth rate, and other pertinent variables. [ 37 ]
Microorganisms engage in a wide variety of social interactions, including cooperation . A cooperative behavior is one that benefits an individual (the recipient) other than the one performing the behavior (the actor). [ 1 ] This article outlines the various forms of cooperative interactions ( mutualism and altruism ) seen in microbial systems, as well as the benefits that might have driven the evolution of these complex behaviors.
Microorganisms, or microbes, span all three domains of life – bacteria , archaea , and many unicellular eukaryotes including some fungi and protists . Typically defined as unicellular life forms that can only be observed with a microscope, microorganisms were the first cellular life forms, and were critical for creating the conditions for the evolution of more complex multicellular forms.
Although microbes are too small to see with the naked eye, they represent the overwhelming majority of biological diversity, and thus serve as an excellent system to study evolutionary questions. One such topic that scientists have examined in microbes is the evolution of social behaviors, including cooperation. A cooperative interaction benefits a recipient, and is selected for on that basis. In microbial systems, cells belonging to the same taxon have been documented engaging in cooperative interactions to perform a wide range of complex multicellular behaviors such as dispersal, foraging, construction of biofilms , reproduction, chemical warfare, and signaling. This article will outline the various forms of cooperative interactions seen in microbial systems, as well as the benefits that might have driven the evolution of these complex behaviors.
From an evolutionary point of view, a behavior is social if it has fitness consequences for both the individual that performs that behavior (the actor) and another individual (the recipient). Hamilton first categorized social behaviors according to whether the consequences they entail for the actor and recipient are beneficial (increase direct fitness) or costly (decrease direct fitness). [ 2 ] Based on Hamilton's definition, there are four unique types of social interactions: mutualism (+/+), selfishness (+/−), altruism (−/+), and spite (−/−) (Table 1). Mutualism and altruism are considered cooperative interactions because they are beneficial to the recipient, and will be the focus of this article.
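Hamilton's fourfold classification can be expressed as a simple lookup on the signs of the two fitness effects. The sketch below is only illustrative; the function name is ours, not standard terminology:

```python
def classify_interaction(actor_effect, recipient_effect):
    """Classify a social behavior by the sign of its direct-fitness
    effect on the actor and on the recipient (after Hamilton 1964)."""
    key = (actor_effect > 0, recipient_effect > 0)
    return {
        (True, True): "mutualism",     # +/+
        (True, False): "selfishness",  # +/-
        (False, True): "altruism",     # -/+
        (False, False): "spite",       # -/-
    }[key]

print(classify_interaction(+1, +1))  # mutualism
print(classify_interaction(-1, +1))  # altruism
```

Only the first and third cases (mutualism and altruism) benefit the recipient, which is why they are grouped as cooperative interactions.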
Explaining cooperation remains one of the greatest challenges for evolutionary biology, regardless of whether the behavior is considered mutually beneficial or altruistic. According to classical evolutionary theory, an organism will only behave in ways that maximize its own fitness . Therefore, the origin of cooperative interactions, or actions by individuals that result in other individuals receiving fitness benefits, seems counterintuitive.
Theoretical explanations for the evolution of cooperation can be broadly classified into two categories: direct fitness benefits or indirect fitness benefits. This follows from Hamilton's 1964 insight that individuals gain inclusive fitness directly through their impact on their own reproduction (direct fitness effects), as well as through their impact on the reproduction of individuals with related genes (indirect fitness effects). [ 2 ]
Perhaps the most common cooperative interactions seen in microbial systems are mutually beneficial . Mutually beneficial social interactions provide a direct fitness benefit to both individuals involved, which outweighs any cost of performing the behaviour. [ 3 ] For an individual microbe, mutualistic behaviour is most often performed because it increases the individual's own fitness. In a community, however, microorganisms interact on a large scale so that the population can persist, which in turn increases each individual's fitness. [ 4 ]
Most of the time, organisms partaking in these behaviours have a shared interest in cooperation. In microbial systems, this is often seen in the production of metabolically expensive molecules known as public goods. Many microbes, especially bacteria, produce numerous public goods that are released into the extracellular environment. Because these molecules diffuse away, they can be used by neighbouring organisms, even though they were produced at the individual's expense.
A well-known example of mutually beneficial microbial interaction involves the production of siderophores . Siderophores are iron-scavenging molecules produced by many microbial taxa , including bacteria and fungi. These molecules are chelating agents and play an important role in facilitating the uptake and metabolism of iron, which normally exists in the environment in an insoluble form. [ 5 ] To access this limiting resource, cells manufacture these molecules and secrete them into the extracellular space. [ 6 ] Once released, a siderophore sequesters iron, forming a complex that is recognized by bacterial cell receptors; the complex can then be transported into the cell and reduced, making the iron metabolically accessible. Siderophore production is often used as an example of mutualism because the compounds are not restricted to individual use: as long as an organism possesses a receptor for the siderophore–Fe(III) complex, it can take up and utilize the complex. [ 7 ]
Several explanations have been proposed for the evolution of mutually beneficial interactions. Most importantly, for the production of public goods to be evolutionarily beneficial, the behaviour must provide a direct benefit to the reproductive performance of the actor that outweighs the cost of performing it. [ 5 ] This is most often seen in the form of a direct fitness benefit. Because bacteria are most often found in colonies, neighbouring bacteria are likely to share genetic commonality; by increasing the chances for a nearby bacterium to grow and divide, an actor also increases the propagation of its own genetic material. In the case of siderophores, a positive correlation was found between relatedness among bacterial lineages and siderophore production. [ 6 ]
Microbial communities are not only interested in the survival and productivity of their own species, however. In a mixed community, different bacterial species have been found to adapt to different food sources, including the waste products of other species, in order to stave off unnecessary competition. [ 8 ] This allows heightened efficiency for the community as a whole.
Having a balanced community is very important for microbial success. In the case of siderophore production, there must be equilibrium between the microbes that spend their energy to produce the chelating agents, and those that can utilize xenosiderophores. Otherwise, the exploitative microbes would eventually out-compete the producers, leaving a community with no organisms able to produce siderophores, and thus, unable to survive in low iron conditions. This ability to balance between the two populations is currently being researched. It is thought to be due to the presence of low-affinity receptors on the non-producers, or producers generating a toxin-mediated interference mechanism. [ 9 ]
While the production of public goods aims to benefit all individuals, it also leads to the evolution of cheaters, or individuals that do not pay the cost of producing a good but still receive its benefits (Figure 1). In order to minimize fitness costs, natural selection will favor individuals that do not secrete while taking advantage of the secretions of their neighbors. In a population of siderophore-secreting cells, non-secreting mutant cells do not pay the cost of secretion, but still gain the same benefit as their wild-type neighbors. Griffin et al. (2004) investigated the social nature of siderophore production in Pseudomonas aeruginosa . [ 10 ] When cells grown in pure culture were placed in an iron-limiting environment, populations of siderophore-secreting cells ( wild-type ) outcompeted populations of mutant non-secretors; siderophore production is therefore beneficial when iron is limiting. However, when the same populations were placed in an iron-rich environment, the mutant population outcompeted the wild-type population, demonstrating that siderophore production is metabolically costly. Finally, when wild-type and mutant bacteria were placed together in a mixed population, the mutants gained the benefit of siderophore production without paying the cost, and hence increased in frequency. This concept is commonly referred to as the tragedy of the commons .
The prisoner's dilemma game is another way that evolutionary biologists explain the presence of cheating in cooperative microbial systems. Originally framed by Merrill Flood and Melvin Dresher in 1950, the prisoner's dilemma is a fundamental problem in game theory , demonstrating that two individuals might not cooperate even when it is in both their best interests to do so. In the dilemma, two individuals each choose whether to cooperate with the other or to cheat. Cooperation by both individuals gives the greatest average payoff, but if one individual cheats while the other cooperates, the cheater obtains a greater individual payoff. If the game is played only once, cheating is the superior strategy, because defecting yields a higher payoff regardless of what the opponent does. However, in biologically realistic situations, with repeated interactions (games), mutations, and heterogeneous environments, there is often no single stable solution, and the success of individual strategies can vary in endless periodic or chaotic cycles. The specific solution to the game depends critically on how iterations are implemented and how pay-offs are translated into population and community dynamics.
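The one-shot logic can be checked directly with a standard payoff matrix satisfying T > R > P > S; the specific numbers below are the conventional textbook values, not taken from the article:

```python
# Conventional Prisoner's Dilemma payoffs: T > R > P > S.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

def payoff(me, other):
    """Payoff to `me` ('C' = cooperate, 'D' = defect) against `other`."""
    return {("C", "C"): R, ("C", "D"): S,
            ("D", "C"): T, ("D", "D"): P}[(me, other)]

# Defection strictly dominates in a single game...
assert payoff("D", "C") > payoff("C", "C")   # T > R
assert payoff("D", "D") > payoff("C", "D")   # P > S
# ...yet over many repeated rounds, mutual cooperation outscores
# mutual defection, which is why iterated play can sustain cooperators.
rounds = 10
assert rounds * payoff("C", "C") > rounds * payoff("D", "D")
```

The same asymmetry underlies the microbial case: a non-secretor "defects" for a short-term gain, but a population of defectors does worse than a population of cooperators.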
In the bacterium Escherichia coli , a prisoner's dilemma situation can be observed when mutants exhibiting a Growth Advantage in Stationary Phase (GASP) phenotype [ 11 ] compete with a wild-type (WT) strain in batch culture. [ 12 ] In such batch-culture settings, where the growth environment is homogenized by shaking the cultures, WT cells cooperate by arresting bacterial growth in order to prevent ecological collapse, while the GASP mutants continue to grow by defecting from the wild-type regulatory mechanism. As a consequence of this defection from the self-regulation of growth, the GASP cells reach higher cell densities in the short term, but the population collapses in the long run due to the tragedy of the commons (Figure 1). In contrast, although WT cells do not achieve such high population densities, their populations are sustainable at the same density in the long term.
As predicted by theory, [ 13 ] in a spatial setting such as that implemented experimentally with microfluidic chips, coexistence between the two strains is possible due to the localization of interactions and the spatial segregation of cheaters. [ 14 ] When provided with such a spatial environment, bacteria can self-organize into dynamic patterns of cell aggregation and desegregation that ensure cooperator WT cells can reap the benefits of cooperation (Figure 2).
Greig & Travisano (2004) addressed these ideas with an experimental study on the yeast Saccharomyces cerevisiae . [ 15 ] S. cerevisiae possesses multiple genes that each produce invertase , an enzyme secreted to digest sucrose outside the cell. As discussed above, this public-good production creates the potential for individual cells to cheat by stealing the sugar digested by their neighbors without contributing the enzyme themselves. Greig & Travisano (2004) measured the fitness of a cheater type (with a reduced number of invertase genes) relative to a cooperator (with all possible invertase genes). [ 15 ] By manipulating the level of social interaction within the community through varying the population density, they found that the cheater is less fit than the cooperator at low levels of sociality, but more fit in dense communities. They therefore propose that selection for "cheating" causes natural variation in the number of invertase genes an individual may possess, and that this variation reflects constant adaptation to an ever-changing biotic environment arising from the instability of cooperative interactions.
The second type of cooperative interaction is altruism : interactions that are beneficial to the recipient but costly to the actor (−/+). Justifying the evolutionary benefit of altruistic behavior is a highly debated topic. A common justification is that such behaviors provide an indirect benefit because they are directed towards other individuals who carry the cooperative gene. [ 2 ] The simplest and most common reason for two individuals to share genes is for them to be genealogical relatives (kin), so this is often termed kin selection . [ 16 ] According to Hamilton, an altruistic act is evolutionarily beneficial if the relatedness of the individual that profits from the act is higher than the cost/benefit ratio the act imposes. This rationale is referred to as Hamilton's rule .
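Hamilton's rule states that an altruistic allele is favored when r·b > c, i.e., when relatedness r exceeds the cost-to-benefit ratio c/b. A minimal numeric check (the function name and example values are illustrative):

```python
def hamilton_favored(r, b, c):
    """True if altruism is favored under Hamilton's rule: r*b > c.

    r -- genetic relatedness of recipient to actor (0..1)
    b -- fitness benefit to the recipient
    c -- fitness cost to the actor
    """
    return r * b > c

# Helping a full sibling (r = 0.5) pays only if the recipient's
# benefit is more than twice the actor's cost.
assert hamilton_favored(r=0.5, b=3.0, c=1.0)      # 1.5 > 1.0: favored
assert not hamilton_favored(r=0.5, b=1.5, c=1.0)  # 0.75 < 1.0: not favored
```

In clonal microbial populations r approaches 1, so even costly behaviors with modest benefits can satisfy the rule.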
Natural selection normally favors a gene if it increases reproduction, because the offspring share copies of that gene. However, a gene can also be favored if it aids other relatives, who also share copies. Therefore, by helping a close relative reproduce, an individual is still passing on its own genes to the next generation, albeit indirectly. Hamilton pointed out that kin selection could occur via two mechanisms: (a) kin discrimination , when cooperation is preferentially directed toward relatives, and (b) limited dispersal (population viscosity), which keeps relatives in spatial proximity to one another, allowing cooperation to be directed indiscriminately toward all neighbors (who tend to be relatives). [ 2 ] In microbial systems, these two mechanisms are equally important. For example, most microbial populations often begin from a small number of colonizers. Because most microbes reproduce asexually , close genetic relatives will surround cells as the population grows. These clonal populations often result in an extremely high density, especially in terrestrial systems. Therefore, the probability that a cell's altruistic behavior will benefit a close relative is extremely high.
While altruistic behaviors are most common between individuals with high genetic relatedness, it is not completely necessary. Altruistic behaviors can also be evolutionarily beneficial if the cooperation is directed towards individuals who share the gene of interest, regardless of whether this is due to coancestry or some other mechanism. [ 17 ] An example of this is known as the " green beard " mechanism, and requires a single gene (or a number of tightly linked genes) that both causes the cooperative behavior and can be recognized by other individuals due to a distinctive phenotypic marker, such as a green beard. [ 2 ]
The most studied slime mold from this perspective is Dictyostelium discoideum , a predator of bacteria that is common in the soil. When starving, the usually solitary single-celled amoebae aggregate and form a multicellular slug that can contain 10^4–10^6 cells. This slug migrates to the soil surface, where it transforms into a fruiting body composed of a spherical tip of spores and a stalk of nonviable stalk cells that hold the spores aloft (Figure 2). Roughly 20% of the cells develop into the non-reproductive stalk, elevating the spores and aiding their dispersal. [ 18 ]
Programmed cell death (PCD) is another suggested form of microbial altruistic behavior. Although programmed cell death (also known as apoptosis or autolysis ) clearly provides no direct fitness benefit, it can be evolutionarily adaptive if it provides indirect benefits to individuals with high genetic relatedness ( kin selection ). Several altruistic roles have been suggested for PCD, such as providing resources that can be used by other cells for growth and survival in Saccharomyces cerevisiae . [ 19 ] [ 20 ] While using kin selection to explain the evolutionary benefits of PCD is common, the reasoning contains some inherent problems. Charlesworth (1978) noted that it is extremely hard for a gene causing suicide to spread because only relatives that do NOT share the gene would ultimately benefit. [ 21 ] A possible solution to this problem in microbes is that selection could favor a low probability of PCD among a large population of cells, possibly depending upon individual condition, environmental conditions, or signaling.
The integration of cooperative and communicative interactions appears to be extremely important to microbes; for example, 6–10% of all genes in the bacterium Pseudomonas aeruginosa are controlled by cell-cell signaling systems. [ 22 ] One way that microbes communicate and organize with each other in order to partake in more advanced cooperative interactions is through quorum sensing . Quorum sensing describes the phenomenon in which the accumulation of signaling molecules in the surrounding environment enables a single cell to assess the number of individuals (cell density), so that the population as a whole can make a coordinated response. The interaction is fairly common among bacterial taxa and involves the secretion by individual cells of 'signaling' molecules, called autoinducers or pheromones . These bacteria also have a receptor that can specifically detect the signaling molecule. When the inducer binds the receptor, it activates transcription of certain genes, including those for inducer synthesis. Because a bacterium is unlikely to detect its own secreted inducer, gene transcription is activated only when the cell encounters signaling molecules secreted by other cells in its environment.

When only a few other bacteria of the same kind are in the vicinity, diffusion reduces the concentration of the inducer in the surrounding medium to almost zero, so the bacteria produce little inducer. However, as the population grows, the concentration of the inducer passes a threshold, causing more inducer to be synthesized. This forms a positive feedback loop , and the receptor becomes fully activated. Activation of the receptor induces the upregulation of other specific genes, causing all of the cells to begin transcription at approximately the same time. In other words, when the local concentration of these molecules reaches a threshold, the cells respond by switching on particular genes.
In this way individual cells can sense the local density of bacteria, so that the population as a whole can make a coordinated response. [ 23 ]
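The density-dependent switch described above can be caricatured with a discrete-time model in which each cell secretes inducer at a basal rate, the inducer decays by diffusion, and production is boosted once the concentration crosses a threshold. All parameter values and names here are hypothetical, chosen only to make the qualitative behavior visible:

```python
def quorum_response(n_cells, steps=200, basal=0.01, boost=0.5,
                    decay=0.1, threshold=1.0):
    """Final inducer concentration for a population of n_cells.

    Positive feedback: once the concentration exceeds `threshold`,
    every cell additionally secretes at rate `boost`."""
    c = 0.0
    for _ in range(steps):
        rate = basal + (boost if c > threshold else 0.0)
        c += n_cells * rate - decay * c
    return c

low = quorum_response(n_cells=2)    # dilution keeps inducer below threshold
high = quorum_response(n_cells=50)  # dense population trips the feedback loop
print(low < 1.0 < high)  # True: only the dense population "switches on"
```

Sparse populations settle at a sub-threshold steady state (basal secretion balanced by decay), while dense populations cross the threshold and lock into the high-production state, which is the essence of the quorum-sensing switch.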
In many situations, the cost bacterial cells pay to coordinate behaviors outweighs the benefits unless there is a sufficient number of collaborators. For instance, the bioluminescent luciferase produced by Vibrio fischeri would not be visible if it were produced by a single cell. By using quorum sensing to limit luciferase production to situations when cell populations are large, V. fischeri cells avoid wasting energy on the production of a useless product. In many situations, bacterial activities such as the production of public goods are only worthwhile as a joint activity by a sufficient number of collaborators. Regulation by quorum sensing allows cells to express the appropriate behavior only when it is effective, saving resources under low-density conditions. Quorum sensing has therefore been interpreted as a bacterial communication system for coordinating behaviors at the population level.
The opportunistic bacterium Pseudomonas aeruginosa also uses quorum sensing to coordinate the formation of biofilms , swarming motility , exopolysaccharide production, and cell aggregation. [ 24 ] These bacteria can grow within a host without harming it until they reach a certain concentration; they then become aggressive, their numbers sufficient to overcome the host's immune system, and form a biofilm, leading to disease within the host. Another form of gene regulation that allows the bacteria to rapidly adapt to surrounding changes is environmental signaling. Recent studies have discovered that anaerobiosis can significantly impact the major regulatory circuit of quorum sensing. This link between quorum sensing and anaerobiosis has a significant impact on the production of virulence factors by this organism. [ 25 ] It is hoped that therapeutic enzymatic degradation of the signaling molecules will prevent the formation of such biofilms and possibly weaken established biofilms. Disrupting the signalling process in this way is called quorum inhibition.
S. pneumoniae has evolved a complex, cooperative quorum-sensing system that regulates the production of bacteriocins as well as entry into the competent state necessary for natural genetic transformation . [ 26 ] In naturally competent S. pneumoniae , the competent state, rather than being a constitutive property, is induced by a peptide pheromone via a quorum-sensing mechanism. [ 27 ] The induction of competence causes release of DNA from a sub-fraction of the S. pneumoniae population, most probably by cell lysis. Subsequently, the majority of the S. pneumoniae cells that are induced to competence act as recipients and take up the DNA released by the donors. [ 27 ] Natural transformation in S. pneumoniae is likely a natural cooperative mechanism for promoting genetic recombination, similar to sex in higher organisms. [ 27 ] In spite of the availability of effective vaccines, S. pneumoniae is responsible for the deaths of more than a million people yearly. [ 28 ]
V. cholerae has the ability to communicate strongly at the cellular level, and this process is recognized as a form of cooperative quorum-sensing. [ 29 ] [ 30 ] Two different stimuli that are encountered in the small intestine, the absence of oxygen and the presence of host-produced bile salts , affect V. cholerae quorum sensing function and therefore its pathogenicity. [ 31 ] Cooperative quorum sensing likely contributes to natural genetic transformation , a process that includes the uptake of V. cholerae extracellular DNA by ( competent ) V. cholerae cells. [ 32 ] V. cholerae is a bacterial pathogen that causes cholera , a disease that is associated with severe contagious diarrhea that affects millions of people globally.
At least 80 species of bacteria appear to be capable of transformation, about evenly divided between Gram-positive and Gram-negative bacteria. [ 33 ]
While the evolution of cooperative interactions allowed microbial taxa to increase their fitness, it is hypothesized that cooperation provided a proximate cause to other major evolutionary transitions , including the evolution of multicellularity . [ 34 ] This idea, often referred to as the Colonial Theory, was first proposed by Haeckel in 1874, and claims that the symbiosis of many organisms of the same species (unlike the symbiotic theory, which suggests the symbiosis of different species) led to a multicellular organism. In a few instances, multicellularity occurs by cells separating and then rejoining (e.g., cellular slime molds) whereas for the majority of multicellular types, multicellularity occurs as a consequence of cells failing to separate following division. [ 35 ] The mechanism of this latter colony formation can be as simple as incomplete cytokinesis, though multicellularity is also typically considered to involve cellular differentiation. [ 36 ]
The advantage of the Colonial Theory hypothesis is that colonial organisation has been seen to arise independently numerous times (in 16 different protoctistan phyla). For instance, during food shortages Dictyostelium discoideum cells group together in a colony that moves as one to a new location. Some of these cells then slightly differentiate from each other. Other examples of colonial organisation in protozoa are the Volvocaceae , such as Eudorina and Volvox . However, it can often be hard to separate colonial protists from true multicellular organisms, as the two concepts are not distinct. This problem plagues most hypotheses of how multicellularisation could have occurred.
Microbial corrosion , also known as microbiologically influenced corrosion (MIC) , microbially induced corrosion (MIC) or biocorrosion , occurs when microbes affect the electrochemical environment of the surface on which they are fixed. This usually involves the formation of a biofilm , which can either increase the corrosion of the surface or, in a process called microbial corrosion inhibition, protect the surface from corrosion.
As every surface exposed to the environment is in some way also exposed to microbes, [ 1 ] microbial corrosion causes trillions of dollars in damage around the globe annually. [ citation needed ]
Microbes can locally create hypoxic conditions at the metal surface under a biofilm and contribute to the formation of anodic ( oxidation ) and cathodic ( reduction ) sites initiating electrochemical potential differences and electrochemical corrosion. They can also act by either releasing byproducts from their cellular metabolism that corrode metals, or preventing normal corrosion inhibitors from functioning and leaving surfaces open to attack from other environmental factors. [ 2 ]
Some sulfate-reducing bacteria produce hydrogen sulfide , which can cause sulfide stress cracking . Acidithiobacillus bacteria produce sulfuric acid ; Acidithiobacillus thiooxidans frequently damages sewer pipes. Ferrobacillus ferrooxidans directly oxidizes iron to iron oxides and iron hydroxides ; the rusticles forming on the RMS Titanic wreck are caused by bacterial activity. Other bacteria produce various acids , both organic and mineral, or ammonia .
In the presence of oxygen, aerobic bacteria such as Acidithiobacillus thiooxidans , Thiobacillus thioparus , and Thiobacillus concretivorus , all three widely present in the environment, are the common corrosion-causing agents resulting in biogenic sulfide corrosion .
In the absence of oxygen, anaerobic bacteria , especially Desulfovibrio and Desulfotomaculum , are common. Desulfovibrio salixigens requires a sodium chloride concentration of at least 2.5%, but D. vulgaris and D. desulfuricans can grow in both fresh and salt water. D. africanus is another common corrosion-causing microorganism. The genus Desulfotomaculum comprises sulfate-reducing spore-forming bacteria; Dtm. orientis and Dtm. nigrificans are involved in corrosion processes. Sulfate-reducers require a reducing environment; an electrode potential lower than −100 mV is required for them to thrive. However, even a small amount of produced hydrogen sulfide can achieve this shift, so the growth, once started, tends to accelerate. [ citation needed ]
Layers of anaerobic bacteria can exist in the inner parts of the corrosion deposits, while the outer parts are inhabited by aerobic bacteria.
Some bacteria are able to utilize hydrogen formed during cathodic corrosion processes.
Bacterial colonies and deposits can form concentration cells , causing and enhancing galvanic corrosion . [ 3 ]
Bacterial corrosion may appear in the form of pitting corrosion , for example in pipelines of the oil and gas industry. [ 4 ] Anaerobic corrosion is evident as layers of metal sulfides and a hydrogen sulfide smell. On cast iron , the result may be graphitic corrosion, a form of selective leaching in which iron is consumed by the bacteria, leaving a graphite matrix with low mechanical strength in place.
Various corrosion inhibitors can be used to combat microbial corrosion. Formulae based on benzalkonium chloride are common in the oilfield industry.
Microbial corrosion can also apply to plastics , concrete , and many other materials. Two examples are nylon-eating bacteria and plastic-eating bacteria.
Fungi can cause microbial corrosion of concrete. With adequate environmental factors, such as humidity, temperature, and organic carbon sources, fungi will produce colonies on concrete. Some fungi can reproduce asexually. This common process among fungi allows many new fungal spores to quickly spread to new environments, developing entire colonies where nothing existed. These colonies and the new spores produced use hyphae to absorb environmental nutrients.
Hyphae are extremely thin, only 2 to 6 micrometers in diameter. Fungal hyphae reach deep into minuscule holes, cracks, and crevices in concrete, areas that contain the moisture and nutrients the fungus survives on. As more hyphae force their way into these tiny cracks, the pressure causes the gaps to expand, similar to the way water freezing in tiny holes and cracks causes them to widen. This mechanical pressure enables cracks to expand, letting more moisture inside, so the fungi gain more nutrients and travel deeper into the concrete structure. By altering their environment, fungi break down concrete and its alkaline layer, providing ideal conditions for corrosion-causing bacteria to further degrade concrete structures.
Another way fungi cause corrosion of concrete is through the organic acids they naturally produce. These organic acids react chemically with calcium ions (Ca2+) in the concrete, producing water-soluble salts. The calcium is thereby released, causing extensive damage to the structure over time. Because fungi expel digestive juices to gain nutrients, the structure they grow on begins to dissolve; this is no different for concrete when fungi such as Fusarium take root. One experiment compared the corrosion caused by the bacterium Thiobacillus with that caused by the fungus Fusarium . Both groups of organisms were given adequate conditions to grow, along with an equal piece of concrete. After 147 days, Thiobacillus had caused an 18% mass reduction, while Fusarium had caused a 24% mass reduction in the same time frame, showcasing the fungus's corrosive abilities.
Bhattacharyya [ 5 ] studied three separate fungi known to cause concrete corrosion: Aspergillus tamarii , Aspergillus niger , and Fusarium . Aspergillus tamarii was the most destructive of the three: it widens and deepens cracks, takes root quickly and efficiently, and promotes the formation of calcium oxalate. Calcium oxalate formation speeds up calcium ion leaching, which lowers the overall strength of the concrete. In 90 days, exposure to the fungus resulted in a mass reduction of 7.2% in the concrete. Aspergillus niger was the second worst offender of the three, followed by Fusarium , which can lower the mass of concrete by 6.2 grams in a single year and cause the pH to drop from 12 to 8 in the same time frame. [ 6 ]
Hydrocarbon-utilizing microorganisms, mostly Cladosporium resinae , Pseudomonas aeruginosa , and sulfate-reducing bacteria , colloquially known as "HUM bugs", are commonly present in jet fuel . They live at the water-fuel interface of water droplets, form dark black, brown, or green gel-like mats, and cause microbial corrosion of the aircraft fuel system: they consume plastic and rubber parts and attack metal parts with their acidic metabolic products. Because of their appearance they are sometimes incorrectly called algae. FSII is added to fuel as a growth retardant. About 250 kinds of bacteria can live in jet fuel, but fewer than a dozen are meaningfully harmful. [ 7 ]
Microorganisms can negatively affect [ how? ] radioactive elements confined in nuclear waste . [ citation needed ]
Several environmental factors stimulate the corrosion and deterioration of concrete, including freezing conditions, radiation exposure, and repeated heat, freeze-thaw, and wet-dry cycles. Cycles that mechanically break down concrete, such as freeze-thaw cycles, are especially destructive. All of these open pathways for microbes to colonize the material, further eroding and weakening concrete structures. Rising damage to urban sewer systems and coastal cities has prompted closer study of how to protect concrete from microbes.
Halting the damage done by microbes requires a thorough understanding of the microbes that cause corrosion: which species make up the community and how they break down structural concrete. Environmental stressors on structures often promote microbial corrosion by bacteria, Archaea, algae, and fungi. These microorganisms depend on their environment to provide the moisture, pH levels, and resources they need to reproduce.
The pH of concrete strongly influences which microbes can reproduce on it and how much damage they do. A concrete surface is alkaline, making it difficult for microbes to become established. However, chemical processes driven by the environment and by the microorganisms themselves alter the concrete: environmental conditions combined with carbonation caused by certain microbes lower its pH. A few microbes excrete metabolites that can shift the pH from 12 to 8. At the lower pH, many more microorganisms can survive on the concrete, accelerating corrosion. This is a particular problem because many concrete-attacking microbes survive in anaerobic conditions; sewers, for example, are low in oxygen and rich in nitrogen and sulfurous gases, making them ideal for microbes that metabolize those gases. [ 5 ]
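Because the pH scale is logarithmic, the drop from 12 to 8 described above corresponds to a 10,000-fold increase in hydrogen-ion concentration at the concrete surface. A minimal illustration:

```python
# pH is -log10 of the hydrogen-ion concentration [H+] in mol/L, so a
# drop from pH 12 to pH 8 is a 10**(12 - 8) = 10,000-fold increase in
# acidity at the concrete surface.

def h_concentration(ph: float) -> float:
    """Hydrogen-ion concentration in mol/L for a given pH."""
    return 10.0 ** -ph

fold_increase = h_concentration(8) / h_concentration(12)
print(f"[H+] increases {fold_increase:.0f}-fold")  # prints "10000-fold"
```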
Sewer network structures are prone to biodeterioration caused by microorganisms associated with the sulfur cycle. The phenomenon can be severely damaging and was first described by Olmstead and Hamlin in 1900 [ 8 ] for a brick sewer in Los Angeles: the mortar joints between the bricks had disintegrated and the ironwork was heavily rusted. The mortar had swollen to two to three times its original volume, destroying or loosening some of the bricks.
Around 9% of the damage described in sewer networks can be ascribed to the successive action of two kinds of microorganisms. [ 9 ] Sulfate-reducing bacteria (SRB) can grow in relatively thick layers of sedimentary sludge and sand (typically 1 mm thick) that accumulate at the bottom of pipes under anoxic conditions. They grow using oxidized sulfur compounds in the effluent as electron acceptors and excrete hydrogen sulfide (H 2 S). This gas is emitted into the aerial part of the pipe and can attack the structure in two ways: directly, by reacting with the material and lowering its pH, or indirectly, by serving as a nutrient for sulfur-oxidizing bacteria (SOB), which grow in oxic conditions and produce biogenic sulfuric acid. [ 10 ] The structure is then subjected to a biogenic sulfuric acid attack. Materials such as calcium aluminate cements , PVC , or vitrified clay pipe may be substituted for ordinary concrete or steel sewers, which are not resistant in these environments. The yeast Rhodotorula mucilaginosa can reduce mild steel corrosion in water by taking up dissolved oxygen. [ 7 ]
Many methods have been developed for the restriction of microbial corrosion. The primary challenge has been finding ways to prevent or stop microbial growth without negatively impacting the surrounding environment. The list below provides an overview of some of the tactics that have been used or that are in development.
Rao and Mulky [ 2 ] developed an extensive list of methods to limit the growth of microbes and therefore microbial corrosion.
Though microorganisms are often responsible for corrosion, they can also protect surfaces from it. [ 12 ] For example, oxidation is a common cause of corrosion; if a susceptible surface is covered by a biofilm that takes up and consumes oxygen, that surface is protected from oxidative corrosion. Biofilms can also release antimicrobial compounds, which is helpful when the biofilm itself is not corrosive and can deter microbes that would be. A biofilm also forms a barrier between a surface and the surrounding ecosystem, so as long as it has no adverse effects of its own, it can serve as protection against corrosion. [ 11 ] Because protective biofilms need not harm the surrounding ecosystem, they are potentially one of the most environmentally benign mechanisms for corrosion inhibition. They can also alter conditions at a metal surface so that the metal is less likely to be damaged, preventing corrosion. [ 2 ] | https://en.wikipedia.org/wiki/Microbial_corrosion
A microbial cyst is a resting or dormant stage of a microorganism that can be thought of as a state of suspended animation in which the cell's metabolic processes slow and it ceases activities such as feeding and locomotion. Many groups of single-celled, microscopic organisms, or microbes, [ 1 ] possess the ability to enter this dormant state.
Encystment, the process of cyst formation, can function as a method for dispersal and as a way for an organism to survive in unfavorable environmental conditions. These two functions can be combined when a microbe needs to be able to survive harsh conditions between habitable environments (such as between hosts) in order to disperse. Cysts can also be sites for nuclear reorganization and cell division, and in parasitic species they are often the infectious stage between hosts. When the encysted microbe reaches an environment favorable to its growth and survival, the cyst wall breaks down by a process known as excystation . [ 2 ]
Environmental conditions that may trigger encystment include, but are not limited to: lack of nutrients or oxygen, extreme temperatures, desiccation , adverse pH, and the presence of toxic chemicals, all of which are not conducive to the growth of the microbe. [ 3 ] [ 4 ]
The idea that microbes could temporarily assume an alternate state of being to withstand changes in environmental conditions began with Antonie van Leeuwenhoek’s 1702 study on Animalcules , currently known as rotifers : [ 5 ]
“'I have often placed the Animalcules I have before described out of the water, not leaving the quantity of a grain of sand adjoining to them, in order to see whether when all the water about them was evaporated and they were exposed to the air their bodies would burst, as I had often seen in other Animalcules. But now I found that when almost all the water was evaporated, so that the creature could no longer be covered with water, nor move itself as usual, it then contracted itself into an oval figure, and in that state it remained, nor could I perceive that the moisture evaporated from its body, for it preserved its oval and round shape, unhurt." [ 5 ]
Leeuwenhoek later continued his work with rotifers and discovered that when he returned the dried bodies to their preferred aquatic conditions, they resumed their original shape and began swimming again. [ 5 ] These observations did not gain traction with the general microbiological community of the time, and the phenomenon as Leeuwenhoek observed it was never given a name. [ 5 ]
In 1743, John Turberville Needham observed the revival of the encysted larval stage of the wheat parasite, Anguillulina tritici, and later published these findings in New Microscopical Discoveries (1745). [ 5 ] Several others repeated and expanded upon this work, informally referring to their studies on the “phenomenon of reviviscence.” [ 5 ]
In the late 1850s, reviviscence became embroiled in the debate surrounding the theory of spontaneous generation of life, leading two highly involved scientists on either side of the issue to call upon the Biological Society of France for an independent review of their opposing conclusions on the matter. Doyere, who believed rotifers could be desiccated and revitalized, and Pouchet, who believed they could not, allowed independent observers of various scientific backgrounds to observe and attempt to replicate their findings. The resulting report leaned toward the arguments made by Pouchet, with notable dissension from the main author who blamed his framing of the issue in the report on fear of religious retribution. Despite the attempt by Doyere and Pouchet to conclude debate on the topic of resurrection, investigations continued. [ 5 ]
In 1872, Wilhelm Preyer introduced the term ‘ anabiosis ’ (return to life) to describe the revitalization of viable, lifeless organisms to an active state. This was later followed by Schmidt’s 1948 proposal of the term ‘abiosis,’ leading to some confusion among terms describing the beginning of life from non-living elements, viable lifelessness, and nonliving components that are necessary for life. [ 5 ]
As part of his 1959 review of Leeuwenhoek’s original findings and the evolution of the science surrounding microbial cysts and other forms of metabolic suspension, D. Keilin proposed the term ‘ cryptobiosis ’ (latent life) to describe:
“...the state of an organism when it shows no visible signs of life and when its metabolic activity becomes hardly measurable, or comes reversibly to a standstill.” [ 5 ]
As microbial research expanded rapidly, details of ciliated protist physiology and cyst formation raised interest in the role of encystment in the life cycle of ciliates and other microbes. [ 6 ] The realization that no single category of microscopic organism ‘owns’ the ability to form metabolically dormant cysts motivates the term ‘microbial cyst’ to describe the structure in all its forms. Also important to the term is the distinction between endospores and microbial cysts as different forms of cryptobiosis or dormancy. Endospores exhibit more extreme isolation from their environment in terms of cell-wall thickness, impermeability to substrates, and the presence of dipicolinic acid , a compound known to confer resistance to extreme heat. [ 7 ] Microbial cysts have been likened to modified vegetative cells with the addition of a specialized capsule. [ 7 ] Importantly, encystment is a process observed to precede cell division, [ 8 ] whereas the formation of an endospore involves non-reproductive cellular division. Study of the encystment process was mostly confined to the 1970s and '80s, so its genetic mechanisms and additional defining characteristics remain poorly understood, though cysts are generally thought to follow a different formation sequence than endospores. [ 9 ]
Indicators of cyst formation in ciliated protists include varying degrees of ciliature resorption, with some ciliates losing both cilia and the membranous structures supporting them while others maintain kinetosomes and/or microtubular structures. De novo synthesis of cyst wall precursors in the endoplasmic reticulum also frequently indicates that a ciliate is undergoing encystment. [ 10 ]
The composition of the cyst wall is variable in different organisms.
In bacteria (for instance, Azotobacter sp. ), encystment occurs through changes in the cell wall : the cytoplasm contracts and the cell wall thickens. Various members of the family Azotobacteraceae have been shown to survive in an encysted form for up to 24 years. The extremophile Rhodospirillum centenum , an anoxygenic, photosynthetic, nitrogen-fixing bacterium that grows in hot springs, also forms cysts in response to desiccation. [ 12 ] Bacteria do not always form a single cyst; several patterns of cyst formation are known. Rhodospirillum centenum can vary the number of cells enclosed per cyst, usually from four to ten, depending on the environment. [ 12 ]
Some species of filamentous cyanobacteria form heterocysts to escape oxygen concentrations detrimental to their nitrogen-fixing processes. This process is distinct from other types of microbial cyst formation in that heterocysts are often produced in a repeating pattern within a filament of vegetative cells, and once formed, a heterocyst cannot return to the vegetative state. [ 13 ]
Protists, especially protozoan parasites, are often exposed to very harsh conditions at various stages in their life cycle. For example, Entamoeba histolytica , a common intestinal parasite that causes dysentery , has to endure the highly acidic environment of the stomach before it reaches the intestine and various unpredictable conditions like desiccation and lack of nutrients while it is outside the host. [ 14 ] An encysted form is well suited to survive such extreme conditions, although protozoan cysts are less resistant to adverse conditions compared to bacterial cysts. [ 3 ] Cytoplasmic dehydration, high autophagic activity, nuclear condensation, and decrease of cell volume are all indicators of encystment initiation in ciliated protists. [ 10 ] In addition to survival, the chemical composition of certain protozoan cyst walls may play a role in their dispersal. The sialyl groups present in the cyst wall of Entamoeba histolytica confer a net negative charge to the cyst which prevents its attachment to the intestinal wall [ 11 ] thus causing its elimination in the feces. Other protozoan intestinal parasites like Giardia lamblia and Cryptosporidium also produce cysts as part of their life cycle (see oocyst ). Due to the hard outer shell of the cyst, Cryptosporidium and Giardia are resistant to common disinfectants used by water treatment facilities such as chlorine. [ 15 ] In some protozoans, the unicellular organism multiplies during or after encystment and releases multiple trophozoites upon excystation. [ 14 ]
Many additional species of protists have been shown to exhibit encystment when confronted with unfavorable environmental conditions. [ 10 ]
Rotifers also produce diapause cysts, which differ from quiescent (environmentally triggered) cysts in that their formation begins before environmental conditions deteriorate to unfavorable levels, and the dormant state may extend past the restoration of conditions ideal for microbial life. [ 16 ] [ 17 ] Food-limited females of some Synchaeta pectinata strains produce unfertilized diapausing eggs with a thicker shell. Fertilized diapausing eggs can be produced in both food-limited and non-food-limited conditions, indicative of a bet-hedging strategy for food availability or perhaps an adaptation to variation in food levels over a growing season. [ 18 ]
While the cyst component itself is not pathogenic, the formation of a cyst is what gives Giardia its primary tool of survival and its ability to spread from host to host. Ingestion of contaminated water, foods, or fecal matter gives rise to the most commonly diagnosed intestinal disease, giardiasis . [ 8 ]
Whereas encystment was previously believed to serve a purpose only for the organism itself, protozoan cysts have been found to have a harboring effect: common pathogenic bacteria can take refuge in the cysts of free-living protozoa. Survival times for bacteria in these cysts range from a few days to a few months in harsh environments. [ 19 ] Not all bacteria survive within a protozoan cyst; many species are digested by the protozoan as it undergoes cystic growth. [ 20 ] | https://en.wikipedia.org/wiki/Microbial_cyst
Microbial cytology is the study of the microscopic and submicroscopic details of microorganisms . [ 1 ] The word "microbial" (1880-85) derives from Greek mīkro- ("small") and bíos ("life"); "cytology" (1857) derives from Greek kytos ("hollow, as a cell or container") and -logy ("the study of"). [ 2 ] In microbial cytology, cells collected from part of the body are examined under a microscope. Its main purpose is to understand the structure of cells and how they form and operate. [ 3 ] | https://en.wikipedia.org/wiki/Microbial_cytology
Microbial dark matter [ 1 ] [ 2 ] (MDM) comprises the vast majority of microbial organisms (usually bacteria and archaea ) that microbiologists are unable to culture in the laboratory, due to lack of knowledge or ability to supply the required growth conditions. Microbial dark matter is analogous to the dark matter of physics and cosmology due to its elusiveness in research and importance to our understanding of biological diversity. Microbial dark matter can be found ubiquitously and abundantly across multiple ecosystems, but remains difficult to study due to difficulties in detecting and culturing these species, posing challenges to research efforts. [ 3 ] It is difficult to estimate its relative magnitude, but the accepted gross estimate is that as little as one percent of microbial species in a given ecological niche are culturable. In recent years, more effort has been directed towards deciphering microbial dark matter by means of recovering genome DNA sequences from environmental samples via culture independent methods such as single cell genomics [ 4 ] and metagenomics . [ 5 ] These studies have enabled insights into the evolutionary history and the metabolism of the sequenced genomes, [ 6 ] [ 7 ] providing valuable knowledge required for the cultivation of microbial dark matter lineages. However, microbial dark matter research remains comparatively undeveloped and is hypothesized to provide insight into processes radically different from known biology, new understandings of microbial communities, and increasing understanding of how life survives in extreme environments. [ 8 ]
The contemporary understanding of microbial dark matter emerged from a field long constrained by its reliance on culturing methods, which left a large amount of microbial diversity undiscovered. In the late 20th century, new molecular techniques led to a surge in the discovery of uncultured microbes. Despite this newfound diversity, the large majority of microbial species remain uncharacterized, [ 9 ] a fact underscored in the early 21st century when advanced genomic sequencing techniques uncovered far more microbial diversity than previously suspected. [ 8 ]
Metagenomics is a technique that enables DNA to be sequenced directly from samples of microbial environments. It allows the genetic material of unknown microbes to be identified without reliance on culturing. Metagenomics differs from other microbial methods in that it characterizes bulk environmental samples as a whole rather than individual isolates. The technique has expanded understanding of microbial functions in ecosystems through the discovery of new genes and metabolic pathways. [ 10 ]
Methods of single-cell genomics have shown promise in supporting metagenomics approaches by allowing the study of individual microbial cells isolated from their natural environments, a method which has been employed to uncover the genomic and functional diversity within microbial communities, particularly those that cannot be cultured. Single-cell techniques have also successfully identified numerous new branches on the tree of life, providing insight into the gaps of current phylogenetic understanding and metabolic potential of these organisms. [ 11 ]
Despite the rise of successful culture-independent methods in dark matter research, improvements in culturing techniques remain both relevant and necessary to further current understanding of MDM microbes. Developments such as highly specific growth media that mimic natural microbial environments and the co-culturing of synergistic microbial species have made it possible to study previously unculturable microbes. These advancements also facilitate the application of MDM research to biotechnological and physiological uses. [ 12 ]
Genomic studies produce vast amounts of data whose analysis requires advanced computational tools. The subdiscipline of bioinformatics uses computational methods to assemble genomes and analyze metabolic pathways. In recent years, research on artificial intelligence and machine learning has produced new ways to predict the behavior of microbial species from their genetic data. [ 13 ] These computational developments have furthered understanding of the structure and dynamics of microbial communities.
It has been suggested that certain microbial dark matter genetic material could belong to a new (i.e., fourth) domain of life, [ 14 ] [ 15 ] although other explanations (e.g., viral origin) are also possible, which has ties with the issue of a hypothetical shadow biosphere . [ 16 ]
| https://en.wikipedia.org/wiki/Microbial_dark_matter
A microbial desalination cell (MDC) is a biological electrochemical system that uses electro-active bacteria to power the desalination of water in situ , exploiting the bacteria's natural anode-cathode potential gradient and thus creating an internal supercapacitor . Water scarcity has become a worldwide problem, as only 0.3% of the Earth's water is usable for human consumption, while over 99% is sequestered in oceans, glaciers, brackish waters, and biomass. [ 1 ] Applications of electrocoagulation such as microbial desalination cells can desalinate and sterilize formerly unusable water, rendering it suitable for safe supply. Microbial desalination cells derive from microbial fuel cells but no longer require a mediator, relying instead on the charged components of the internal sludge to power the desalination process. [ 2 ] Microbial desalination cells therefore do not require additional bacteria to mediate the catabolism of the substrate during biofilm oxidation on the anodic side of the capacitor. MDCs and other bio-electrical systems are favored over reverse osmosis , nanofiltration , and other desalination systems because of their lower costs, energy demands, and environmental impacts. [ 3 ]
An MDC is constructed similarly to a microbial fuel cell : two electrode chambers, containing an anode and a cathode, flank a third chamber that is separated from them by an anion exchange membrane (AEM) on the anode side and a cation exchange membrane (CEM) on the cathode side, and an external circuit connects the electrodes, with anaerobic processes at the anode and aerobic processes at the cathode. Organic matter from the sludge proliferates in the anode chamber and creates a biofilm that generates an electric current. The biofilm, adhering tightly to the anode, oxidizes the pollutants in the sludge, freeing electrons and protons from the bio-sludge and creating a current that is collected by the electrodes and carried through the circuit. [ 4 ] The electrical current is produced by the potential difference between the anode and cathode that arises from the aerobic nature of the cathode chamber. [ 5 ]
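The charge carried by the external circuit bounds how much salt the middle chamber can shed: in an idealized MDC, each electron passing through the circuit drives one Na+ across the CEM and one Cl- across the AEM. A sketch of this Faraday's-law bound; the current and duration below are hypothetical illustration values, not figures from the source:

```python
# Idealized (100% charge-transfer efficiency) estimate of NaCl removal
# from an MDC's middle chamber: moles of NaCl removed = charge passed
# through the external circuit / Faraday constant.
FARADAY = 96485.0  # C per mol of electrons
M_NACL = 58.44     # g/mol, molar mass of NaCl

def nacl_removed_grams(current_a: float, hours: float) -> float:
    """Ideal upper bound on NaCl (grams) removed at a given current."""
    charge = current_a * hours * 3600.0  # coulombs
    return (charge / FARADAY) * M_NACL

# Hypothetical example: 5 mA sustained for one day.
print(f"{nacl_removed_grams(0.005, 24):.3f} g")
```

Real cells fall short of this bound because some charge is carried by other ions and by back-diffusion, which is one reason the membrane-scaling losses discussed below matter.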
MDCs are used in seawater desalination primarily as a pretreatment for electrodialysis (ED), owing to inefficient salinity removal caused by biofouling and membrane scaling from the complex ion composition. Studies show that the efficacy of MDC systems diminishes over 5,000 hours due to membrane scaling, such as calcium and potassium accumulation, which increases ohmic resistance and reduces ion exchange through the membrane. However, using MDCs as a pretreatment for electrodialysis reduces system time by 25% and energy expenditure by 45.3%. [ 6 ] Reducing external resistance increases desalination efficiency to as high as 74%, as demonstrated in upflow microbial desalination cells (UMDCs), but also increases membrane scaling on the ion exchange membranes through calcium and magnesium accumulation, raising internal ohmic resistance and decreasing overall seawater desalination. With an osmotic MFC (OsMFC) applied in conjunction with the UMDC as an initial pretreatment for biosolid removal and desalination, 85% of oxygen demand and approximately 97% of salts were removed after secondary treatment. Subsequent treatment by traditional BES systems such as electrodialysis can then desalinate more effectively, with energy demands offset by the output energy of the MDC pretreatment. [ 6 ]
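Removal fractions from sequential treatment stages compound multiplicatively: if a pretreatment stage removes a fraction r1 of the salt and a downstream stage removes r2 of what remains, overall removal is 1 - (1 - r1)(1 - r2). A small illustration; the stage fractions below are hypothetical, and the 74% and ~97% figures above refer to specific experimental configurations, not to these inputs:

```python
# Overall removal across sequential stages: what remains after each
# stage is multiplied by (1 - r) for that stage's removal fraction r.
def overall_removal(*stage_fractions: float) -> float:
    remaining = 1.0
    for r in stage_fractions:
        remaining *= (1.0 - r)
    return 1.0 - remaining

# Hypothetical: 74% removal in an MDC stage, then 90% in electrodialysis.
print(f"{overall_removal(0.74, 0.90):.3f}")  # prints "0.974"
```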
As MDCs have low electrical conductivity in the desalination chamber and no additional energy is applied to the system, electron-conductive resins are added to improve conductivity, decrease internal resistance, and speed the desalination of brackish waters. [ 7 ] Brackish waters are low in salinity with a high amount of total dissolved solids, which makes it difficult to maintain strong electrical currents owing to increased internal resistance in the cell. MDCs also experience problems with ion saturation in the anode chamber, which can be combatted by using a microbial capacitive desalination cell (MCDC). MCDCs are analogous to MDCs except that the cation membrane is modified with activated carbon cloth, permitting the free exchange of protons across both chambers of the cell and increasing desalination efficiency. [ 8 ]
Increasing agricultural development is associated with elevated nitrogen concentrations in surrounding soil and groundwater due to runoff of fertilizers and agricultural byproducts. The submerged microbial desalination-denitrification cell (SMDDC) was developed to remove nitrogen and salt from subsurface water without requiring additional compounds as electron donors, producing both net energy and clean, desalinated, denitrified water. [ 9 ] In contrast to the typical MDC design, the SMDDC has no middle desalination chamber; it contains only an anode and a cathode chamber, separated by a polycarbonate plate and parallel to the exterior AEM and CEM, respectively. Nitrate enters the anode chamber through the AEM with synthetic groundwater and is then carried as an effluent through the external loop to the cathode chamber, where it is reduced to nitrogen at the cathode with the sodium influent. A wastewater feeding tank pumps water to the anodic chamber, where the anodic biofilm oxidizes the sludge. As in the original MDC configuration, the SMDDC includes an external circuit through which electrons freed by oxidation of the sludge are driven to the cathodic chamber. The toxic and pathogenic content of the wastewater is thus separated simultaneously with the denitrification of the groundwater, and the treated water is filtered out as a usable effluent. Nitrate removal was highest when an external voltage (0.8 V) was applied to the circuit, transporting ions to the anodic chamber and reducing nitrate via heterotrophic denitrification. [ 10 ] | https://en.wikipedia.org/wiki/Microbial_desalination_cell
Microbial ecology (or environmental microbiology ) is the discipline that studies the interactions of microorganisms with their environment. [ 2 ] Microorganisms are known to form important, and sometimes harmful, ecological relationships within their own species and with other species. [ 2 ] Many scientists have studied the relationship between nature and microorganisms, including Martinus Beijerinck , Sergei Winogradsky , Louis Pasteur , Robert Koch , Lorenz Hiltner , and Dionicia Gamboa , [ 3 ] [ 4 ] [ 5 ] [ 6 ] in order to understand the specific roles these microorganisms play in biological and chemical pathways and how they have evolved. Several biotechnologies now allow scientists to analyze the biological and chemical properties of these microorganisms. [ 7 ]
Many of these microorganisms are known to form symbiotic relationships with other organisms in their environment. [ 8 ] These include mutualism , commensalism , amensalism , and parasitism . [ 9 ] [ 10 ]
In addition, it has been discovered that certain substances in the environment can kill microorganisms, thus preventing them from interacting with their environment. These substances are called antimicrobial substances. These can be antibiotic , antifungal , or antiviral . [ 11 ]
Martinus Beijerinck invented the enrichment culture , a fundamental method of studying microbes from the environment. Sergei Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context—making him among the first students of microbial ecology and environmental microbiology—discovering chemosynthesis and developing the Winogradsky column in the process. [ 12 ] : 644
Louis Pasteur was a French chemist who established key microbial principles still used today: microbial fermentation, pasteurization, germ theory, and vaccines. [ 13 ] These principles serve as a foundation for understanding the relationship between microbes and their environment. [ 13 ] For example, Pasteur disproved the theory of spontaneous generation, the belief that life arises from nonliving material. [ 14 ] Pasteur maintained that life can come only from life, not from nonliving material. [ 15 ] This supported the idea that microorganisms are responsible for microbial growth in any environment. [ 15 ]
Robert Koch was a physician-scientist who used oil-immersion lenses and a condenser with his microscopes to improve the imaging of bacteria. [ 16 ] This made Koch the first to publish photographs of bacteria. As a result, Koch was able to study wound infections in animals at the microscopic level. [ 16 ] He could distinguish distinct bacterial species, which led him to believe that the best way to study a disease is to focus on a specific pathogen. [ 16 ] In 1879, Koch began developing "pure" cultures to grow bacterial colonies. [ 16 ] These advancements enabled Koch to address the cholera epidemic in India in 1883. [ 16 ] His laboratory techniques and materials led him to conclude that the use of unfiltered water was spreading cholera, since it contained bacteria that cause intestinal harm in humans. [ 16 ]
Lorenz Hiltner is known as one of the pioneers of microbial ecology. [ 4 ] His research focused on how microbes in the rhizosphere provide nutrients to plants; Hiltner held that the quality of plant products results from the microflora of the plant's roots. [ 4 ] Among his contributions to plant nutrition and soil bacteriology was the creation of antimicrobial seeds coated with mercury chloride, [ 4 ] intended to protect the seeds from the harmful effects of pathogenic fungi . In addition, he recognized the bacteria responsible for the nitrogen cycle : denitrification , nitrification , and nitrogen fixation . [ 4 ]
Dionicia Gamboa is a prime example of how scientists are still trying to understand the relationship between microorganisms and nature. [ 6 ] Gamboa is a Peruvian biologist who has dedicated her career to treating the microorganisms that cause malaria and leishmaniasis . [ 6 ] In 2009, Gamboa and her colleagues published a paper on treating different strains of these microorganisms using plant extracts from the Amazon. [ 6 ] In addition, Gamboa has studied ways to accurately detect malaria and leishmaniasis microorganisms in humans using PCR and serology . [ 17 ] Her studies have helped clarify the epidemiology of these microorganisms, in order to reduce human contact with them and their harmful effects. [ 17 ]
Microorganisms are the backbone of all ecosystems , even in areas where photosynthesis cannot take place. For example, chemosynthetic microorganisms are the primary producers in extreme environments, such as high-temperature geothermal environments. [ 18 ] In these extreme conditions, chemosynthetic microbes provide energy and carbon to other organisms. Chemosynthetic microorganisms gain energy by oxidizing inorganic compounds such as hydrogen, nitrite, ammonia, sulfur and iron(II). These organisms can be found in both aerobic and anaerobic environments. [ 19 ]
The nitrogen cycle , phosphorus cycle , sulphur cycle , and carbon cycle also depend on microorganisms. Each cycle involves microorganisms in certain processes. [ 20 ] For example, nitrogen gas makes up 78% of the Earth's atmosphere, but it is almost chemically inert and, as a result, unavailable to most organisms. It has to be converted biologically to an available form by microorganisms, through nitrogen fixation . [ 21 ] Through these biogeochemical cycles, microorganisms make nutrients such as nitrogen, phosphorus and potassium available in the soil. [ 22 ] Microorganisms play a role in solubilizing phosphate, improving soil health, and promoting plant growth. [ 23 ]
Microbial interactions are found in bioremediation , a technology that removes contaminants from soil [ 24 ] and wastewater [ 25 ] using microorganisms. [ 26 ] [ 27 ] Examples of microorganisms that play a role in bioremediation include Pseudomonas , Bacillus , Arthrobacter , Corynebacterium , Methosinus , Rhodococcus , Stereum hirsutum , methanogens , Aspergillus niger , Pleurotus ostreatus, Rhizopus arrhizus , Azotobacter , Alcaligenes , Phormidium valderium , and Ganoderma applantus . [ 28 ]
Due to high levels of horizontal gene transfer among microbial communities, [ 29 ] microbial ecology is also important to the studies of evolution . [ 30 ]
Mutualism is a close relationship between two different species in which each has a positive effect on the other. In mutualism, each partner provides a service to the other and receives a service in return. [ 31 ] Mutualism in microbial ecology is a relationship between microbial species and other species (for example, humans) that allows both sides to benefit. [ 32 ] Microorganisms form mutualistic relationships with other microorganisms, plants or animals. One example of a microbe-microbe interaction is syntrophy , also known as cross-feeding, [ 33 ] of which Methanobacterium omelianskii is a classical example. [ 34 ] [ 35 ] This consortium is formed by an ethanol-fermenting organism and a methanogen . The ethanol-fermenting organism provides the archaeal partner with the H 2 that the methanogen needs in order to grow and produce methane. [ 36 ] [ 35 ] Syntrophy has been hypothesized to play a significant role in energy- and nutrient-limited environments, such as the deep subsurface, where it can help a microbial community with diverse functional properties to survive, grow and produce the maximum amount of energy. [ 37 ] [ 38 ] Anaerobic oxidation of methane (AOM) is carried out by a mutualistic consortium of a sulfate-reducing bacterium and an anaerobic methane-oxidizing archaeon . [ 39 ] [ 40 ] The reaction used by the bacterial partner for the production of H 2 is endergonic (and so thermodynamically unfavorable); however, when coupled to the reaction used by the archaeal partner, the overall reaction becomes exergonic . [ 36 ] Thus the two organisms are in a mutualistic relationship which allows them to grow and thrive in an environment that would be deadly for either species alone. Lichen is an example of a symbiotic organism. [ 35 ]
Microorganisms also engage in mutualistic relationships with plants; a typical example is the arbuscular mycorrhizal (AM) relationship, a symbiosis between plants and fungi. [ 9 ] This relationship begins when chemical signals are exchanged between the plant and the fungus, leading to the metabolic stimulation of the fungus. [ 41 ] [ 42 ] The fungus then penetrates the epidermis of the plant's root and extends its highly branched hyphae into the cortical cells of the plant. [ 9 ] In this relationship, the fungus supplies the plant with phosphate and nitrogen obtained from the soil, while the plant in return provides the fungus with carbohydrates and lipids obtained from photosynthesis. [ 43 ] Microorganisms are also involved in mutualistic relationships with mammals such as humans. While the host provides shelter and nutrients to the microorganisms, the microorganisms provide benefits such as aiding the development of the host's gastrointestinal tract and protecting the host from other, detrimental microorganisms. [ 44 ]
Commensalism, literally meaning "eating from the same table", is very common in the microbial world. [ 45 ] It is a relationship between two species in which one species benefits while the other is neither harmed nor helped. [ 10 ] Metabolic products of one microbial population are used by another microbial population without gain or harm for the first population. There are many "pairs" of microbial species in which one performs the oxidation and the other the reduction of the same chemical compound. For example, methanogens produce methane by reducing CO 2 to CH 4 , while methanotrophs oxidise methane back to CO 2 . [ 46 ]
Amensalism (also commonly known as antagonism) is a type of symbiotic relationship where one species/organism is harmed while the other remains unaffected. [ 32 ] One example of such a relationship that takes place in microbial ecology is between the microbial species Lactobacillus casei and Pseudomonas taetrolens . [ 47 ] When co-existing in an environment, Pseudomonas taetrolens shows inhibited growth and decreased production of lactobionic acid (its main product) most likely due to the byproducts created by Lactobacillus casei during its production of lactic acid. [ 48 ]
Certain microorganisms are known to have host-parasite interactions with other organisms. For example, phytopathogenic fungi are known to infect and damage plants. [ 49 ] Phytopathogenic fungi are a major issue in agriculture because they can infect their hosts through the root system. [ 49 ] This is a particular problem because the symptoms of the infection are not easily detected. [ 49 ] Another example of a parasitic organism is the nematode . [ 50 ] These organisms are known to cause river blindness and lymphatic filariasis in humans. [ 50 ] They are transmitted to hosts by mosquito species from the genera Aedes , Anopheles , and Culex . [ 50 ]
Antimicrobials are substances capable of killing microorganisms. An antimicrobial can be an antibacterial (antibiotic), antifungal or antiviral substance, and most of these substances are natural products or have been derived from natural products. [ 11 ] Natural products are therefore vital in the discovery of pharmaceutical agents. [ 51 ] [ 52 ] Most naturally obtained antibiotics are produced by organisms of the phylum Actinobacteria, and the genus Streptomyces is responsible for most of the antibiotic substances produced by Actinobacteria. [ 53 ] [ 54 ] These natural products with antimicrobial properties belong to the terpenoids , spirotetronates, tetracenediones, lactams , and other groups of compounds; examples include napyradiomycin, nomimicin, formicamycin, and isoikarugamycin. [ 55 ] [ 56 ] [ 57 ] [ 58 ] Some metals, particularly copper , silver , and gold , also have antimicrobial properties. Using antimicrobial copper-alloy touch surfaces is a technique that has begun to be used in the 21st century to prevent the transmission of bacteria. [ 59 ] [ 60 ] Silver nanoparticles have also begun to be incorporated into building surfaces and fabrics, although concerns have been raised about the potential side-effects of the tiny particles on human health. [ 61 ] Because of the antimicrobial properties certain metals possess, products such as medical devices are made using those metals. [ 60 ]
Microbial electrochemical technologies (METs) use microorganisms as electrochemical catalysts , merging microbial metabolism with electrochemical processes for the production of bioelectricity , biofuels , H 2 and other valuable chemicals. [ 1 ] Microbial fuel cells (MFCs) and microbial electrolysis cells (MECs) are prominent examples of METs. While MFCs are used to generate electricity from organic matter, typically in association with wastewater treatment , MECs use electricity to drive chemical reactions such as the production of H 2 or methane . Recently, microbial electrosynthesis cells (MES) have also emerged as a promising MET, in which valuable chemicals can be produced in the cathode compartment. [ 2 ] [ 3 ] [ 4 ] Other MET applications include the microbial remediation cell, microbial desalination cell , microbial solar cell , and microbial chemical cell. [ 5 ] [ 6 ] [ 7 ]
The use of microbial cells to produce electricity was first described by M.C. Potter in 1911, with the finding that " the disintegration of organic compounds by microorganisms is accompanied by the liberation of electrical energy ". [ 8 ] A noteworthy addition to MFC research was made by B. Cohen in 1931, [ 9 ] who created a stack of microbial half fuel cells connected in series, capable of producing over 35 V at a current of 0.2 mA. Two breakthroughs were made in the late 1980s, when two of the first known bacteria capable of transporting electrons from the cell interior to extracellular metal oxides without artificial redox mediators were isolated: Shewanella (formerly Alteromonas ) oneidensis MR-1 [ 10 ] and Geobacter sulfurreducens PCA. In the late 1990s, Kim et al . [ 11 ] showed that the Fe(III)-reducing bacterium S. oneidensis MR-1 was electrochemically active and could generate electricity in an MFC without any added electron mediators. These findings set the basis for the development of electromicrobiology, and the field of MFCs began. However, due to low power generation , it was doubtful whether MFCs could find practical application in the reduction of wastewater organics . This view changed when it was established that domestic wastewater could be treated to practical limits while simultaneously producing power. [ 12 ] Furthermore, power densities two orders of magnitude higher were demonstrated in an MFC using glucose, without the need for exogenous chemical mediators. [ 13 ] Building upon these works, practical applications of MFCs began to be developed at a very fast pace, with the major goal being a large-scale technology for the treatment of domestic, industrial, and other types of wastewater. [ 14 ]
In 2004, extracellular electron uptake (EEU) from cathodes to microbes ( Geobacter spp.) was established with an attached biofilm , in which fumarate was reduced to succinate . [ 15 ] This reversal of the direction of electron transport generated the research field of MES. In 2010, Nevin et al. discovered that the acetogenic microorganism Sporomusa ovata can convert CO 2 to acetic acid in MES cells by taking up electrons from the cathode electrode. [ 16 ] In the following years, partly due to growing concerns about greenhouse gas emissions , the field of CO 2 bioelectroconversion in MES cells flourished. Several autotrophic microorganisms showed the ability to capture electrons from the cathode, either directly or through mediators. [ 17 ] Besides specific microbial species , it was shown that CO 2 -reducing communities can be enriched in MES cells from inoculum sources such as sewage sludge, digester sludge or marine/river sediments. [ 18 ] [ 19 ] [ 20 ] In the following decade, technical improvements increased the acetate production rate from a few to hundreds of g/m 2 cathode /d. [ 21 ] MES cells have also proven promising for converting CO 2 into biomethane, with production rates up to 200 L CH 4 /m 2 cathode /d. [ 22 ] Furthermore, the MES scope was expanded to target more valuable products, including ethanol and caproate . [ 23 ]
There are various mechanisms by which bacteria exchange electrons with an electrode. These include a "direct" process, in which redox components located on the cell surface, which can be multiheme cytochromes or nanofilaments , contact the solid surface directly (Figure 1A, C and D), [ 24 ] [ 25 ] [ 26 ] [ 27 ] and an "indirect" process mediated by soluble redox mediators that cyclically shuttle electrons between cells and electrodes [ 28 ] [ 29 ] [ 30 ] (Figure 1B). Electron shuttles can be humic substances that are not produced by the cells, [ 31 ] or secondary metabolites that are produced by the organisms, including phenazines [ 32 ] [ 33 ] and flavins. [ 34 ] [ 35 ] In addition, some primary metabolites of bacteria, such as sulphur species and H 2 , can convey electrons towards extracellular electron acceptors . Besides the heme cofactors in multiheme cytochromes , flavin mononucleotide was also shown to enhance the rate of electron transfer in some outer membrane cytochromes as a redox cofactor. [ 36 ] Because electrons are transferred from the interior to the exterior of microbial cells across the cellular membrane during EET, positively charged ions need to simultaneously move in the same direction as the electron flow to maintain charge neutrality (Figure 1A). [ 37 ]
A bioelectrochemical system (BES) is the device used in METs. A classic BES such as the MFC is typically composed of two sections (Figure 2): an anodic and a cathodic section, separated by a selectively permeable proton /cation exchange membrane or a salt bridge . In an MFC, the anodic section contains microbes that work as biocatalysts under anaerobic conditions in the anolyte , while the cathodic section contains the electron acceptor (e.g. oxygen). Electrons generated from the oxidation of organic compounds are conveyed to the anode, either directly [ 38 ] via ' nanowires ' [ 39 ] or outer-membrane proteins , or indirectly using electron-shuttling agents. These electrons reach the cathode through an external circuit and, for every electron conducted, protons react at the cathode to complete the reaction and sustain the electric current. [ 40 ] There are numerous types of BES reactors, but broadly they all share the same operating principles. Various designs and configurations have been established to optimize the assembly of the three basic elements (anode, cathode and separator) in a functioning system. [ 41 ] The performance of BESs varies significantly with their design. Table 1 shows a summary of the major BES components and associated materials for their construction.
Table 1. Major components of MFC
Pumping, aeration , and solids handling are the major energy-consuming processes in wastewater treatment. Aeration alone can account for 50% of the operating costs at a typical wastewater treatment plant, so eliminating these costs can save a large amount of energy. MFCs used in wastewater treatment therefore offer, besides electricity generation, energy savings in these processes, which is a great advantage. The MFC process is anaerobic, and sludge production in an anaerobic process is approximately 1/5 of that in an aerobic process; using MFCs could thus reduce solids production at a wastewater treatment plant, ultimately reducing significant operating costs for solids handling. Moreover, this technology has seen a nearly exponential increase in power production since the start of this century, a trend that reflects a growing appreciation among engineers that the technology is approaching practical application.
The treatment of wastewater by MFC technologies is promising because the treatment process becomes a means of producing energy in the form of electricity, rather than an energy expenditure. MFCs were used for the determination of lactate in water by Kim and coworkers, [ 42 ] who later showed that electricity production in an MFC could be sustained by starch in an industrial wastewater. A great variety of substrates have been used in MFCs for electricity production, varying from pure compounds to the complex mixtures of organic matter present in wastewater. The application of MFCs to the biotreatment of wastewater has also achieved effective conversion of organic matter in wastewater into electricity, with about 40–90% COD and BOD reduction. [ 43 ] The energy that could be captured from wastewater is not enough to power a city, but it could be large enough to run a treatment plant; with continuing advances, harnessing this power could lead to energy sustainability of the wastewater infrastructure.
Benthic MFCs generate power through the microbial oxidation of organic substrates in anoxic marine sediments, coupled to the reduction of oxygen in the overlying water column. Electrons are generated by the metabolism of microorganisms naturally occurring in the sediments, so benthic MFCs do not require the addition of any exogenous microorganisms or electron shuttles. [ 44 ] [ 45 ] Weather buoys have obtained their entire power from benthic MFCs, allowing them to operate continuously and independently, without the need to replace batteries. Benthic MFCs can be operated for several years with no decrease in power output. Researchers have estimated that a benthic MFC could provide power indefinitely, at the same power levels and the same cost as an enclosed deep-sea lead acid battery could deliver for one year.
Nitrogen and phosphorus are considered major pollutants in wastewater, and their removal and recovery are required for sustainable treatment systems. Nitrogen is conventionally removed by biological nitrification and denitrification processes, which involve very high energy and cost in wastewater treatment. BESs have good potential for the profitable recovery of ammonium nitrogen from waste streams rich in nitrogen, such as urine, swine liquor, digester liquor and landfill leachate . [ 46 ] Phosphorus in wastewater is conventionally recovered by bacteria as polyphosphate granules, Fe-P or struvite . Cusick et al. achieved struvite production in a BES by employing a single-chamber MEC, in which up to 40% of soluble phosphate was recovered by struvite precipitation at a rate of 0.3–0.9 g/m 2 /h. [ 47 ] Other phosphorus recovery in BESs has involved the exchange of hydroxide ions generated by the cathode reaction with phosphate ions from wastewater, which removed 52.4 ± 9.8% of the phosphate. [ 48 ]
BESs support both oxidation- and reduction-based processes for the remediation of underground contaminants. In contrast to conventional biological or chemical treatment processes, BESs employ one or more electrodes and need not be closed reactors. The solid electrodes work as inexhaustible electron acceptors/donors that stimulate the microbial transformation of pollutants into non-toxic or less toxic forms; for example, the biodegradation of toxins can be enhanced with concomitant bioelectricity production. [ 49 ] Complex petroleum organics, such as the BTEX compounds ( benzene , toluene , xylenes , and ethylbenzene ), can be bioremediated using BES systems. Morris et al. reported that diesel (C8–C25) degradation was improved by 164% by introducing electrodes without power input. [ 50 ] Investigations of biodiesel , phenol , total petroleum hydrocarbons , polycyclic aromatic hydrocarbons (PAHs), 1,2-dichloroethane , pyridine , etc., have also been reported, validating BESs as a practical technology for degrading petroleum hydrocarbons with simultaneous current generation. [ 51 ] [ 52 ] Chlorinated solvents such as trichloroethene and tetrachloroethene , known for their high toxicity or carcinogenicity, have been reported to degrade at negatively polarized solid-state electrodes, which donate electrons with or without electron shuttles. [ 53 ] The removal of nitrate , a common groundwater contaminant, has also been demonstrated, either alone (Cecconet, 2018) or in combination with other co-contaminants such as arsenite . [ 54 ] In comparison to traditional denitrification, which involves heterotrophic denitrifying bacteria, denitrification by BESs involves autotrophic denitrifying bacteria, which can take up electrons from the electrodes.
Consequently, biocathodes in BESs have been developed for denitrification, resulting in efficient reduction of nitrate/nitrite at low energy cost in both groundwater and wastewater. [ 55 ] [ 56 ] In other studies, reduction of perchlorate , [ 57 ] Cr(VI), [ 58 ] Cu(II), and radioactive uranium [ 59 ] has also been achieved in BESs with the cathode as electron donor. The major benefit of using a solid electrode as an electron donor, instead of a soluble one, is that the reduced contaminant (e.g., U(VI) reduced to U(IV)) forms a stable precipitate at the electrode. Not only groundwater but also soil bioremediation has been explored using BESs; for example, the successful removal of herbicides and antibiotics from soil has been demonstrated. [ 60 ] [ 61 ]
MES, a type of BES, can employ electricity to drive the synthesis of fuels and high-value chemicals by using microbes as cathodic catalysts, which also results in the treatment of waste streams (Fig. 3). [ 62 ] The dual benefits of this system are carbon sequestration and the production of value-added chemicals. [ 63 ] A wide range of valuable compounds have been produced by MES, such as H 2 , acetate, CH 4 , ethanol, butanol , and H 2 O 2 . [ 64 ] [ 65 ] [ 66 ] [ 67 ] The product spectrum in MES is largely governed by the biocathode material (carbon- or metal-based), the microorganisms involved, the reduction potentials and redox mediator activity, and operating conditions including pH, temperature and pressure. [ 68 ] [ 69 ] Potentials between -0.6 and -1.0 V vs SHE are typically applied to MES inoculated with mixed cultures to ensure production of hydrogen at the cathode, which is then taken up by acetogenic and methanogenic microorganisms to reduce CO 2 . [ 70 ] CO 2 reduction at less negative potentials, even above the theoretical potential of -0.4 V vs SHE, has been demonstrated for specific microorganisms such as Sporomusa , although it is still debated whether this is to be attributed to direct electron uptake from the cathode or to favorable thermodynamics at the electrode surface. [ 71 ] Most studies on MES have been performed under ambient (around 20 °C) or mesophilic (around 35 °C) conditions, but the process has been demonstrated to be feasible under thermophilic conditions (50-70 °C). [ 72 ] Neutral or slightly acidic pH (5.5-7.0) was shown to be optimal for CO 2 conversion to acetic acid, although a lower pH, or the use of inhibitors such as bromoethane sulphonic acid (BESA), is required to avoid the onset of methanogenesis . [ 73 ] The chemical compounds obtained from MES can be used as precursors for the production of downstream industrial products such as polymers, diesel- or kerosene-like products, plasticizers , and lubricating agents in many industries. [ 74 ]
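The acetate production rates quoted above can be related to cathode current density through Faraday's law, since reducing CO 2 to acetic acid (2 CO 2 + 8 H + + 8 e − → CH 3 COOH + 2 H 2 O) consumes eight electrons per molecule. The sketch below is illustrative only; the current density and coulombic efficiency are assumed values, not figures from the literature cited here:

```python
# Sketch: Faraday-limited acetate production rate in an MES cathode,
# assuming all electrons go into CO2 reduction to acetate
# (2 CO2 + 8 H+ + 8 e- -> CH3COOH + 2 H2O, i.e. 8 e- per acetate).

F = 96485.0        # Faraday constant, C/mol e-
M_ACETATE = 60.05  # molar mass of acetic acid, g/mol

def acetate_rate(current_density_a_m2, coulombic_eff=1.0):
    """Grams of acetate per m2 of cathode per day at a given current density."""
    charge_per_day = current_density_a_m2 * 24 * 3600     # C per m2 per day
    mol_acetate = coulombic_eff * charge_per_day / (8 * F)
    return mol_acetate * M_ACETATE

# Assumed illustrative current density of 10 A/m2
print(f"{acetate_rate(10):.1f} g acetate/m2/d at 10 A/m2")
```

At an assumed 10 A/m 2 this Faraday-limited maximum is roughly 67 g/m 2 /d, which illustrates why the reported rates of hundreds of g/m 2 /d imply current densities of tens of A/m 2 .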
Figure. 3. Schematics of MES showing treatment of waste streams and formation of high value products.
Many organic compounds, such as acetate , butyrate , and lactate, exist in the effluents of wastewater plants and fermentation units. These organics are valued products, but due to their low concentrations their extraction is not cost-effective. MES has therefore been employed for the conversion of these short-chain carboxylic acids to longer-chain acids and other useful products. [ 75 ] [ 76 ] Although higher-value compounds can be obtained from low-cost feeds, studies are required to determine whether controlling the redox potential and supplying current to cathodes is economically feasible compared with current technologies. Nevertheless, further improvements in this technology platform can help overcome many of the fundamental challenges of a future bioeconomy .
When used for hydrogen production, the MEC needs to be supplemented by an external power source to overcome the energy barrier of converting all organic material into carbon dioxide and hydrogen gas. A standard MFC is converted into a hydrogen-producing MEC by supplying > 0.14 V. [ 77 ] Hydrogen bubbles form at the cathode and are collected for use as a fuel source. [ 78 ] Although electricity is consumed rather than generated as in normal MFCs, this method of producing hydrogen is efficient because more than 90% of the protons and electrons generated by the bacteria at the anode are turned into hydrogen gas. [ 79 ] Hydrogen can be accumulated and stored for later use, overcoming the inherently low power output of MFCs. [ 80 ]
The concept of microbial electrochemical reduction involves the conversion of carbon dioxide, the non-energy-rich component of the biogas produced in an anaerobic digester, into the energy-rich component, methane. This reduction proceeds through the reaction of carbon dioxide with protons and electrons (from electricity) in an MES. [ 81 ] This is otherwise known as Power-to-Gas technology, which allows electrochemical units to act as carbon sinks for industrial waste and, more importantly, industrial CO 2 emissions . [ 82 ] Power-to-Gas technology potentially generates biogas of a similar grade to natural gas without the need to remove CO 2 using expensive techniques such as amine scrubbing or pressure swing adsorption . [ 83 ]
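The underlying stoichiometry of this reduction is CO 2 + 8 H + + 8 e − → CH 4 + 2 H 2 O, i.e. eight electrons per molecule of methane, so the attainable methane volume scales directly with cathode current. The numbers below (cell current, electron-capture efficiency, molar gas volume) are assumed for illustration only:

```python
# Sketch: methane producible per day from a given MES cathode current,
# via CO2 + 8 H+ + 8 e- -> CH4 + 2 H2O (8 e- per CH4).

F = 96485.0   # Faraday constant, C/mol e-
V_M = 24.0    # approximate molar gas volume near room temperature, L/mol

def methane_volume_per_day(current_a, electron_capture_eff=0.8):
    """Litres of CH4 per day producible at a given cathode current."""
    charge = current_a * 24 * 3600                      # coulombs per day
    mol_ch4 = electron_capture_eff * charge / (8 * F)   # mol CH4 per day
    return mol_ch4 * V_M

# Assumed illustrative current of 5 A and 80% electron capture
print(f"{methane_volume_per_day(5.0):.1f} L CH4/day at 5 A")
```

With these assumed numbers the cell would yield on the order of 10 L CH 4 per day, showing why the cathode areas and currents must be large for the production rates (up to 200 L CH 4 /m 2 cathode /d) cited earlier.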
Desalination of sea water and brackish water used for drinking water has always presented significant problems because of the amount of energy required to remove the dissolved salts from the water. By using an adapted MFC, this process could proceed with no external electrical energy input. When adding a third chamber in between the two electrodes of a standard MFC and filling it with sea water, the cell's positive and negative electrodes attract the positive and negative salt ions, respectively, and the salt can be filtered out from the sea water using semi-permeable membranes . [ 84 ] Salt removal efficiencies of up to 90% have been recorded in laboratory work. [ 85 ]
MFCs have applications in the monitoring and control of biological waste treatment units, because the coulombic yield of an MFC correlates with the strength of the organic matter in the wastewater and can therefore serve as a biosensor reading. [ 86 ] Systems based on the microorganism Shewanella show promise as sensors for quantifying the biological oxygen demand in sewage. [ 87 ] [ 88 ] This concept can readily be expanded to detect other compounds that can act as electron donors for electricity production, such as hydrogen or aromatic contaminants. [ 89 ] Such sensors could also be extremely useful as indicators of toxicants in rivers, at the inlet of wastewater treatment plants, to detect pollution or illegal dumping, or for research on polluted sites. [ 90 ] [ 91 ]
With the development of micro-electronics and related disciplines, the power requirement of electronic devices has drastically decreased. MFCs can run low-power sensors that collect data in remote areas. Anaerobic bacteria that naturally grow in the sediment produce a small current that can be used to charge a capacitor, storing energy for the sensor. One major advantage of using an MFC in remote sensing rather than a traditional battery is that the bacteria reproduce, giving the MFC a significantly longer lifetime than traditional batteries. [ 92 ] The sensor can thus be left alone in a remote area for many years without maintenance. Extensive research toward developing reliable MFCs for this purpose is focused mostly on selecting suitable organic and inorganic substances that could be used as sources of energy. [ 93 ] Microbial current production is also applicable to bioelectrochemical sensors for drug screening against biofilms [ 94 ] [ 95 ] or for wastewater-based epidemiology. [ 96 ]
A microbial electrolysis cell ( MEC ) is a technology related to microbial fuel cells (MFCs). Whilst MFCs produce an electric current from the microbial decomposition of organic compounds, MECs partially reverse the process to generate hydrogen or methane from organic material by applying an electric current. [ 1 ] The electric current would ideally be produced by a renewable source of power. The hydrogen or methane produced can be used to generate electricity by means of an additional PEM fuel cell or an internal combustion engine.
MEC systems are based on a number of components:
Microorganisms – are attached to the anode. The identity of the microorganisms determines the products and efficiency of the MEC.
Materials – The anode material in an MEC can be the same as in an MFC, such as carbon cloth, carbon paper, graphite felt, graphite granules or graphite brushes. Platinum can be used as a catalyst to reduce the overpotential required for hydrogen production ; its high cost is driving research into biocathodes as an alternative. Stainless steel plates have also been used as an alternative cathode and anode material. [ 2 ] Other components include membranes (although some MECs are membraneless), and tubing and gas collection systems. [ 3 ]
Electrogenic microorganisms consuming an energy source (such as acetic acid ) release electrons and protons, creating an electrical potential of up to 0.3 volts. In a conventional MFC, this voltage is used to generate electrical power. In an MEC, an additional voltage is supplied to the cell from an outside source, and the combined voltage is sufficient to reduce protons, producing hydrogen gas. Because part of the energy for this reduction is derived from bacterial activity, the total electrical energy that has to be supplied is less than for the electrolysis of water in the absence of microbes. Hydrogen production has reached up to 3.12 m 3 H 2 /m 3 d with an input voltage of 0.8 volts. The efficiency of hydrogen production depends on which organic substances are used: lactic and acetic acid achieve 82% efficiency, while the values for untreated cellulose or glucose are close to 63%. The efficiency of conventional water electrolysis is 60 to 70 percent. Because MECs convert otherwise unusable biomass into usable hydrogen, they can produce 144% more usable energy than they consume as electrical energy. Depending on the organisms present at the cathode, MECs can also produce methane by a related mechanism.
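The energy balance described above can be illustrated with a back-of-the-envelope calculation. In the sketch below, only the 0.8 V applied voltage and the idea that part of the energy comes from the substrate are taken from the text; the current, duration, and cathodic recovery fraction are assumed illustrative values:

```python
# Sketch: electrical energy supplied to an MEC vs. the chemical energy content
# of the hydrogen it produces. Hydrogen formation takes 2 e- per H2 molecule.

F = 96485.0        # Faraday constant, C/mol e-
HHV_H2 = 285.8e3   # higher heating value of H2, J/mol

current = 0.5            # A, assumed steady cell current
applied_voltage = 0.8    # V, external voltage supplied (figure from the text)
t = 24 * 3600.0          # s, one day of operation
cathodic_recovery = 0.9  # assumed fraction of electrons ending up in H2

charge = current * t                          # total coulombs transferred
n_h2 = cathodic_recovery * charge / (2 * F)   # mol H2 produced

electrical_in = applied_voltage * charge      # J of electrical energy supplied
h2_energy_out = n_h2 * HHV_H2                 # J stored in the hydrogen

print(f"H2 produced:          {n_h2:.3f} mol")
print(f"Electrical energy in: {electrical_in / 1e3:.1f} kJ")
print(f"H2 energy content:    {h2_energy_out / 1e3:.1f} kJ")
print(f"Ratio out/in:         {h2_energy_out / electrical_in:.2f}")
```

With these assumed numbers the hydrogen carries roughly 1.7 times the electrical energy supplied, the balance coming from the organic substrate, which is consistent with the greater-than-100% figure quoted above.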
Calculations: Overall hydrogen recovery was calculated as R H 2 = C E R Cat . The Coulombic efficiency is C E = ( n CE / n th ), where n th is the moles of hydrogen that could theoretically be produced and n CE = C P /(2 F ) is the moles of hydrogen that could be produced from the measured current; C P is the total coulombs, calculated by integrating the current over time, F is Faraday's constant, and 2 is the moles of electrons per mole of hydrogen. The cathodic hydrogen recovery was calculated as R Cat = n H2 / n CE , where n H2 is the total moles of hydrogen produced. Hydrogen yield ( Y H2 ) was calculated as Y H2 = n H2 / n s , where n s is the substrate removal calculated on the basis of chemical oxygen demand. [ 4 ]
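These definitions translate directly into code. The sketch below implements the stated formulas; the current trace and the measured quantities fed to it are assumed example values, not data from the cited study:

```python
F = 96485.0  # Faraday constant, C/mol e-

def hydrogen_recovery(t, i, n_h2, n_th, n_s):
    """MEC performance metrics from a measured current trace.

    t    : time points, s
    i    : measured current at each time point, A
    n_h2 : total moles of H2 actually collected
    n_th : moles of H2 theoretically producible from the substrate
    n_s  : substrate removal in mol-H2 equivalents (from COD)
    """
    # C_P: total coulombs, trapezoidal integration of current over time
    c_p = sum((i[k] + i[k + 1]) / 2 * (t[k + 1] - t[k])
              for k in range(len(t) - 1))
    n_ce = c_p / (2 * F)   # mol H2 producible from the measured current
    c_e = n_ce / n_th      # Coulombic efficiency, C_E
    r_cat = n_h2 / n_ce    # cathodic hydrogen recovery, R_Cat
    r_h2 = c_e * r_cat     # overall hydrogen recovery, R_H2 = C_E * R_Cat
    y_h2 = n_h2 / n_s      # hydrogen yield, Y_H2
    return {"C_E": c_e, "R_Cat": r_cat, "R_H2": r_h2, "Y_H2": y_h2}

# Assumed example: constant 0.1 A held for 10 h, 0.015 mol H2 collected
t = [k * 360.0 for k in range(101)]   # 0 .. 36000 s
i = [0.1] * len(t)
print(hydrogen_recovery(t, i, n_h2=0.015, n_th=0.025, n_s=0.030))
```

Note that with these definitions R H 2 = C E · R Cat reduces to n H2 / n th , i.e. the fraction of the substrate's theoretical hydrogen that is actually collected.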
Hydrogen and methane can both be used as alternatives to fossil fuels in internal combustion engines or for power generation. Like MFCs or bioethanol production plants, MECs have the potential to convert waste organic matter into a valuable energy source. Hydrogen can also be combined with the nitrogen in the air to produce ammonia, which can be used to make ammonium fertilizer. Ammonia has been proposed as a practical alternative to fossil fuel for internal combustion engines. [ 5 ] | https://en.wikipedia.org/wiki/Microbial_electrolysis_cell |
Microbial electrosynthesis (MES) is a form of microbial electrocatalysis in which electrons are supplied to living microorganisms via a cathode in an electrochemical cell by applying an electric current . The electrons are then used by the microorganisms to reduce carbon dioxide to yield industrially relevant products. The electric current would ideally be produced by a renewable source of power. [ 1 ] This process is the opposite to that employed in a microbial fuel cell , in which microorganisms transfer electrons from the oxidation of compounds to an anode to generate an electric current.
Microbial electrosynthesis (MES) is related to microbial electrolysis cells (MEC). Both use the interactions of microorganisms with a cathode to reduce chemical compounds. In MECs, an electrical power source is used to augment the electrical potential produced by the microorganisms consuming a source of chemical energy such as acetic acid . The combined potential provided by the power source and the microorganisms is then sufficient to reduce hydrogen ions to molecular hydrogen . [ 2 ] The mechanism of MES is not well understood, but the potential products include alcohols and organic acids. [ 3 ] MES can be combined with MEC in a single reaction vessel, where substrate consumed by the microorganisms provides a voltage potential that is lowered as the microbe ages. [ 4 ] "MES has gained increasing attention as it promises to use renewable (electric) energy and biogenic feedstock for a bio-based economy." [ 5 ]
Microbial electrosynthesis may be used to produce fuel from carbon dioxide using electrical energy generated by either traditional power stations or renewable electricity generation. It may also be used to produce speciality chemicals such as drug precursors through microbially assisted electrocatalysis . [ 6 ]
Microbial electrosynthesis can also be used to "power" plants. Plants can then be grown without sunlight. [ 7 ] [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Microbial_electrosynthesis |
Microbial food cultures are live bacteria , yeasts or moulds used in food production. Microbial food cultures carry out the fermentation process in foodstuffs. Used by humans since the Neolithic period (around 10,000 years BC), [ 1 ] fermentation helps to preserve perishable foods and to improve their nutritional and organoleptic qualities (in this case, taste , sight , smell , touch ). As of 1995, fermented food represented between one quarter and one third of food consumed in Central Europe . [ 2 ] More than 260 different species of microbial food culture have been identified and described for their beneficial use in fermented food products globally, [ 3 ] underscoring the importance of their use.
The scientific rationale of the function of microbes in fermentation started to be built with the discoveries of Louis Pasteur in the second half of the 19th century. [ 4 ] [ 5 ] Extensive scientific study continues to characterize microbial food cultures traditionally used in food fermentation taxonomically , physiologically , biochemically and genetically . This allows better understanding and improvement of traditional food processing and opens up new fields of applications.
Microorganisms are the earliest form of life on earth, first evolving more than three billion years ago. [ 6 ] [ 7 ] [ 8 ] Our ancestors discovered how to harness the power of microorganisms to make new foods, [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] even if they did not know the science behind what they were doing.
Milestones
1665— Robert Hooke and Antoni Van Leeuwenhoek first observe and describe microorganisms. [ 14 ]
1857–1876— Louis Pasteur proves the function of microorganisms in lactic and alcoholic fermentation. [ 15 ]
1881— Emil Christian Hansen isolates Saccharomyces carlsbergensis , a pure yeast culture, which is today widely used in brewing of lager beers . [ 16 ]
1889–1896— Herbert William Conn , Vilhelm Storch and Hermann Weigmann demonstrate that bacteria are responsible for the acidification of milk and of cream. [ 17 ]
1897— Eduard von Freudenreich isolates Lactobacillus brevis . [ 18 ]
1919—Sigurd Orla-Jensen classifies lactic acid bacteria on the basis of the bacteria's physiological response patterns. [ 19 ]
Starting from the 1970s—production of the first industrial concentrated cultures, frozen or freeze-dried, for the direct inoculation of processed milk , improving the regularity of production processes.
Microbial food cultures preserve food through formation of inhibitory metabolites such as organic acids ( lactic acid , acetic acid , formic acid , propionic acid ), ethanol , bacteriocins , etc., often in combination with a decrease of water activity (by drying or use of salt). [ 20 ] [ 21 ] Further, microbial food cultures help to improve food safety through inhibition of pathogens [ 22 ] [ 23 ] or removal of toxic compounds. [ 24 ] Microbial food cultures also improve the nutritional value [ 25 ] [ 26 ] and organoleptic quality of the food. [ 27 ] [ 28 ] [ 29 ] [ 30 ]
The microbial food cultures used in food fermentation can be divided into three major groups: bacteria , yeasts and moulds .
Bacterial food cultures can be divided into starter cultures and probiotics .
Starter cultures have mainly a technological function in food manufacturing. They are used as food ingredients at one or more stages in the manufacturing process and develop the desired metabolic activity during the fermentation or ripening process. They contribute to one or more unique properties of a foodstuff, especially in regard to taste, flavour, colour, texture, safety, preservation, nutritional value, wholesomeness and/or health benefits. [ 31 ] [ 32 ] [ 33 ]
Probiotics have a functional role, which refers to the ability of certain microbes to confer health benefits to the consumer. [ 34 ] [ 35 ]
Generally, the bacteria used as starter culture are not the same used as probiotics. There are, however, cases when one bacterium can be used both as starter culture and as probiotic. [ 36 ] [ 37 ] The scientific community is presently trying to deepen understanding of the roles played by microbes in food processing and human health. [ 38 ] [ 39 ]
The most important bacteria in food manufacturing are Lactobacillus species, belonging to the group of lactic acid bacteria . [ 40 ]
Bacterial food cultures are responsible for the aroma, taste and texture of cheeses and fermented milk products such as yogurts , ayran , doogh , skyr or ymer . They contribute to developing the flavour and colour of such fermented products as salami , pepperoni and dried ham . Lactic acid bacteria convert the unstable malic acid [ 41 ] that is naturally present in wine into the stable lactic acid. This malolactic fermentation gives the stability that is characteristic of high-quality wines that improve on storage. [ 42 ]
Lactic acid bacteria are also used in food supplements as probiotics which help to restore the balance in human intestinal biota . [ 43 ]
The most familiar yeast in food production, Saccharomyces cerevisiae , has been used in brewing and baking for thousands of years. [ citation needed ]
S. cerevisiae feeds on the sugars present in the bread dough and produces the gas carbon dioxide . This forms bubbles within the dough, causing it to expand and the bread to rise.
Several different yeasts are used in brewing beer, where they ferment the sugars present in malted barley to produce alcohol . [ 44 ] One of the most common is S. cerevisiae . The same strain of S. cerevisiae which can also be used in breadmaking is used to make ale -type beers. It is known as a top-fermenting yeast because it creates a foam on the top of the brew. Bottom-fermenting yeasts, such as S. pastorianus , are more commonly used to make lagers . [ 45 ] They ferment more of the sugars in the mixture than top-fermenting yeasts, which gives a cleaner taste.
The alcohol in wine is formed by the fermentation of the sugars in grape juice, with carbon dioxide as a by-product . Yeast is naturally present on grapeskins, and this alone can be sufficient for the fermentation of sugars to alcohol to occur. A pure yeast culture, most often S. cerevisiae , is usually added to ensure the fermentation is reliable. [ 46 ] Other yeast cultures like Pichia , Torulaspora and Kluyveromyces are naturally present or added to create special flavours in the wine. Sparkling wine , including champagne , is made by adding further yeast to the wine when it is bottled. The carbon dioxide formed in this second fermentation is trapped as bubbles. [ 47 ]
Yeasts are also used to produce kefir products, [ 48 ] semi-soft ripened cheeses and fermented soy drinks. [ 49 ]
Three main types of cheese rely on moulds for their characteristic properties: blue cheese , soft ripened cheese (such as camembert and brie ) and rind-washed cheese (such as époisses and taleggio ).
To make blue cheese, the cheese is treated with a mould, usually Penicillium roqueforti , while it is still in the loosely pressed curd form. As the cheese matures, the mould grows, creating blue veins within it which gives the cheese its characteristic flavour. Examples include stilton , roquefort and gorgonzola . [ 50 ]
Soft ripened cheese such as brie and camembert are made by allowing P. camemberti to grow on the outside of the cheese, which causes them to age from the outside in. The mould forms a soft white crust, and the interior becomes runny with a strong flavour. [ 51 ]
Rind-washed cheeses like limburger also ripen inwards, but here, as the name suggests, they are washed with brine and other ingredients such as beer and wine which contain mould. This also makes them attractive to bacteria, which add to the flavour. [ 52 ]
Traditionally, inoculations of sausages with moulds were done with the indigenous biota of the slaughters. Different moulds (such as P. chrysogenum and P. nalgiovense ) can be used to ripen surfaces of sausages. The mould cultures develop the aroma and improve the texture of the sausages. They also contribute to shortening of the ripening period and preserving the natural quality. This expands the shelf life of the meat product. [ 53 ] [ 54 ] [ 55 ]
In the past, soy sauce was made by mixing soybeans and other grains with a mould ( Aspergillus oryzae or A. sojae ) and yeast. This mixture was then left to ferment in the sun. [ 56 ] Today soy sauce is made under controlled conditions. The key flavour ingredients formed in this process are salts of the amino acid glutamic acid , notably monosodium glutamate . [ 57 ]
The industrial production of microbial food cultures is carried out after a careful selection process and under strictly controlled conditions. First, the microbiology laboratory, where the original strains are kept, prepares the inoculation material, which is a small quantity of microbes of a single (pure) strain. Then, the inoculation material is multiplied and grown either in fermenters (liquid) or on a surface (solid) under defined and monitored conditions. Grown cells of the pure culture are harvested, sometimes blended with other cultures and, finally, formulated (preserved) for subsequent transportation and storage. They are sold in liquid, frozen or freeze-dried formats. [ 58 ]
Another and traditional way of starting a food fermentation is often referred to as spontaneous fermentation. Cultures come from raw milk , i.e. milk that has not undergone any sanitation treatment or from the reuse of a fraction of the previous production (back-slopping). [ 59 ] The composition of such cultures is complex and extremely variable. [ 60 ] The use of such techniques is steadily decreasing in developed countries. Some countries even prohibit the back-slopping technique because of the "potential to magnify pathogen loads to very dangerous levels". [ 61 ]
Microbial protein (MP) can be created with micro-algae, bacteria, yeasts and microfungi ( mycoprotein ). [ 62 ]
Examples of already available (commercialized) MP products include:
It can substitute for meat and feed, mitigating environmental impacts of meat and other animal-based products . [ 62 ] It could also substitute for animal-based protein supplements . [ 65 ]
Researchers are working on improving the sustainability and economics of microbial protein production and on solving challenges in scaling up to industrial production. [ 64 ]
A study found that solar-energy -driven production of microbial foods from direct air capture substantially outperforms agricultural cultivation of staple crops in terms of land use . Growing such food from air yielded 10 times more protein and at least twice the calories than growing soybeans with the same amount of land. [ 66 ] [ 67 ] [ 68 ]
A study complementing life-cycle assessment studies showed substantial deforestation reduction (56%) and climate change mitigation if only 20% of per-capita beef were replaced by microbial protein (see above ) by 2050. [ 69 ]
Single cell protein (SCP) can substitute for conventional protein feed. Land shortage and environmental calamities such as droughts or floods are not a bottleneck in SCP production. [ 70 ] [ additional citation(s) needed ]
Microbial food cultures are considered as traditional food ingredients and are permitted in the production of foodstuffs all over the world under general food laws.
Commercially available microbial food cultures are sold as preparations: formulations consisting of concentrates of one or more microbial species and/or strains, together with unavoidable media components carried over from the fermentation and components necessary for their survival, storage, standardisation, and application in the food production process.
Safety of microbial food cultures, depending on their characteristics and use, can be based on genus, species or strain levels.
The first (non-exhaustive) inventory of microorganisms with a documented history of use [ 71 ] in food was compiled in 2001 by the International Dairy Federation (IDF) and the European Food and Feed Cultures Association (EFFCA) . [ 72 ]
In 2012, this inventory was updated. It now covers a wide range of food applications (including dairy, fish, meat, beverages and vinegar) and features a reviewed taxonomy of microorganisms. [ 3 ]
In the United States of America, microbial food cultures are regulated under the Food, Drug and Cosmetic Act . Section 409 of the 1958 Food Additives Amendment of the Food, Drug and Cosmetic Act, [ 73 ] exempts from the definition of food additives substances generally recognized by experts as safe ( GRAS ) under conditions of their intended use. These substances do not require premarket approval by the US Food and Drug Administration . [ 74 ]
Because there are various ways to obtain GRAS status for microbial food cultures, there is no exhaustive list of microbial food cultures having GRAS status in the US. [ 3 ] [ 75 ]
Within the European Union , microbial food cultures are regarded as food ingredients and are regulated by Regulation 178/2002, [ 76 ] commonly referred to as the General Food Law. [ 77 ]
Since 2007, the European Food Safety Authority (EFSA) has been maintaining a list of microorganisms having qualified presumption of safety (QPS). [ 78 ] The QPS list covers only a limited number of microorganisms, which have been referred to EFSA for safety assessment. [ 79 ] [ 80 ] It has been conceived as an internal evaluation tool for microorganisms used in the food production chain (e.g. feed cultures, cell factories producing enzymes or additives, plant protection) that need an evaluation by EFSA scientific panels before being marketed in the EU. Microbial food cultures with a long history of safe use are, however, considered to be traditional food ingredients and are legally permitted for use in human food without EFSA evaluation.
From 1974 to 2010 Denmark required premarket approval of microbial food cultures. The positive list of microbial food cultures is available on the website of the Danish Veterinary and Food Administration. [ 81 ]
In 2010, the regulation changed. Approval is no longer needed but a notification should be made to the Veterinary and Food Administration. [ 82 ] | https://en.wikipedia.org/wiki/Microbial_food_cultures |
The microbial food web refers to the combined trophic interactions among microbes in aquatic environments. These microbes include viruses , bacteria , algae , and heterotrophic protists (such as ciliates and flagellates ). [ 1 ] In aquatic ecosystems, microbial food webs are essential because they form the basis for the cycling of nutrients and energy. These webs are vital to the stability and production of ecosystems in a variety of aquatic environments, including lakes, rivers, and oceans. By converting dissolved organic carbon (DOC) and other nutrients into biomass that larger organisms may eat, microbial food webs maintain higher trophic levels. Thus, these webs are crucial for energy flow and nutrient cycling in both freshwater and marine ecosystems. [ 2 ]
In aquatic environments, microbes constitute the base of the food web . Single celled photosynthetic organisms such as diatoms and cyanobacteria are generally the most important primary producers in the open ocean. Many of these cells, especially cyanobacteria, are too small to be captured and consumed by small crustaceans and planktonic larvae . Instead, these cells are consumed by phagotrophic protists which are readily consumed by larger organisms. [ 3 ]
Viruses
Aquatic ecosystems are full of viruses, which are essential for managing microbial populations. By infecting and lysing bacterial cells (bacteriophages) and, to a lesser extent, planktonic algae or phytoplankton (phycoviruses), viruses release particulate and dissolved organic carbon (DOC) back into the environment, where it can be used by other microorganisms. This mechanism, called the viral shunt, promotes nutrient recycling, aids in the control of microbial populations, and acts to reduce the population of bacteria in the microbial food web. [ 4 ]
Bacteria
In the microbial food web, bacteria play a crucial role in breaking down organic materials and recycling nutrients. They transform DOC into bacterial biomass so that protists and other higher trophic levels can consume it. Additionally, bacteria take part in the nitrogen and carbon cycles, among other biogeochemical cycles. [ 4 ]
Algae
In aquatic ecosystems, single-celled photosynthetic organisms like cyanobacteria and diatoms are the main producers. Through the process of photosynthesis, they transform sunlight into chemical energy and create organic matter, which is the foundation of the food chain. Cyanobacteria are particularly significant in nutrient-poor environments because of their capacity to fix atmospheric nitrogen. Algal cells can also release DOC into the environment. One reason phytoplankton release DOC, termed "unbalanced growth", is that when essential nutrients (e.g. nitrogen and phosphorus ) are limiting, the carbon produced during photosynthesis cannot be used for the synthesis of proteins (and subsequent cell growth) for lack of the nutrients needed to build macromolecules . The excess photosynthate, or DOC, is then released, or exuded. [ 3 ]
Heterotrophic Protists
In the microbial food web, protists including ciliates and flagellates are significant consumers. By consuming bacteria, algae, and other tiny particles, they move nutrients and energy up the food chain. Larger creatures like zooplankton feed on these protists in turn. [ 3 ]
The food web's microbial interactions are varied and diverse. Predation, competition, and symbiotic connections are some of these interactions. For instance, certain bacteria and algae create mutualistic relationships in which the bacteria give the algae vital nutrients, and the algae give the bacteria organic carbon. Microbial communities can be shaped by competition for resources like light and nutrients, which can affect their makeup and functionality. [ 5 ]
Environmental factors that have a significant impact on microbial food webs include temperature, availability of light, and nutrient concentrations. Microbe development and metabolic rates are influenced by temperature, and photosynthetic organisms are impacted by light availability. The availability of nutrients, especially phosphorus and nitrogen, might restrict the growth and productivity of microorganisms. For instance, during times of nitrogen constraint, phytoplankton may emit DOC, a phenomenon referred to as imbalanced growth. [ 6 ]
A major impact of human activity on microbial food webs is eutrophication , pollution, and climate change . The activities of microbial communities can be disturbed by pollutants like pesticides and heavy metals. Microbial growth and dispersal are impacted by temperature and precipitation changes brought about by climate change. The entire aquatic food chain may be impacted by eutrophication, which is brought on by nutrient runoff from cities and farms. Eutrophication can also result in toxic algal blooms and hypoxic conditions. [ 7 ]
Technological developments have completely changed the way that microbial food webs are studied. By analyzing genetic material from environmental samples, researchers can get insights into the diversity and roles of microbial communities using metagenomics. The utilization of remote sensing technology facilitates the large-scale monitoring of environmental variables and microbial activity, consequently augmenting our comprehension of microbial dynamics across various ecosystems. [ 8 ]
The microbial loop describes a pathway in the microbial food web where DOC is returned to higher trophic levels via the incorporation into bacterial biomass. This loop makes sure that the DOC created by photosynthetic organisms is used by heterotrophic bacteria and then moves up the food chain, which is crucial for sustaining the flow of nutrients and energy within the ecosystem. [ 7 ]
By facilitating the transfer of nutrients and energy, microbial food webs are essential for the health and stability of aquatic ecosystems. It is crucial to comprehend these complex relationships to address environmental issues and advance sustainable management of aquatic resources. Technological developments keep expanding our understanding and illuminating the complex mechanisms that support life in the oceans of our planet. | https://en.wikipedia.org/wiki/Microbial_food_web |
Microbial genetics is a subject area within microbiology and genetic engineering . Microbial genetics studies microorganisms for different purposes. The microorganisms observed are bacteria and archaea; some fungi and protozoa are also studied in this field. The studies of microorganisms involve the genotype and its expression system. Genotypes are the inherited compositions of an organism. (Austin, "Genotype," n.d.) Genetic engineering is a field of work and study within microbial genetics. [ 1 ] Its central process is recombinant DNA technology, [ 1 ] which involves creating recombinant DNA molecules by manipulating a DNA sequence. [ 1 ] The DNA created is then introduced into a host organism. Cloning is also an example of genetic engineering. [ 1 ]
Since the discovery of microorganisms by Robert Hooke and Antoni van Leeuwenhoek during the period 1665-1885 [ 2 ] they have been used to study many processes and have had applications in various areas of study in genetics.
For example, microorganisms' rapid growth rates and short generation times are used by scientists to study evolution. The discoveries of Robert Hooke and Antoni van Leeuwenhoek involved depictions, observations, and descriptions of microorganisms. [ 3 ] Mucor is the microfungus that Hooke presented and depicted, [ 4 ] his contribution being that Mucor was the first microorganism to be illustrated. Antoni van Leeuwenhoek contributed scientific observations and descriptions of microscopic protozoa and bacteria. [ 4 ] These contributions were accomplished with a simple microscope, which led to the understanding of microbes today and continues to advance scientists' understanding. [ 5 ] Microbial genetics also has applications in studying processes and pathways that are similar to those found in humans, such as drug metabolism . [ 6 ]
Microbial genetics can draw on Charles Darwin's work, and scientists have continued to study his theories by using microbes, [ 7 ] specifically his theory of natural selection. Studying evolution with microbial genetics involves looking at evolutionary balance, [ 1 ] for example by studying natural selection or drift in microbes. [ 7 ] This knowledge is applied by looking for the presence or absence of particular pathways, genes, and functions. [ 7 ] Once the subject is observed, scientists may compare it to the sequence of a conserved gene. [ 1 ] Studying microbial evolution in this way cannot give a time scale for when the evolution took place, [ 7 ] but it does reveal the rates and outcomes of evolution. Studying the relationship between microbes and the environment is a key component of microbial genetic evolution. [ 8 ]
Bacteria have been on this planet for approximately 3.5 billion years, and are classified by their shape. [ 9 ] Bacterial genetics studies the mechanisms of their heritable information, their chromosomes , plasmids , transposons , and phages . [ 10 ]
Gene transfer systems that have been extensively studied in bacteria include genetic transformation , conjugation and transduction . Natural transformation is a bacterial adaptation for DNA transfer between two cells through the intervening medium. The uptake of donor DNA and its recombinational incorporation into the recipient chromosome depends on the expression of numerous bacterial genes whose products direct this process. [ 11 ] [ 12 ] In general, transformation is a complex, energy-requiring developmental process that appears to be an adaptation for repairing DNA damage. [ 13 ]
Bacterial conjugation is the transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. Bacterial conjugation has been extensively studied in Escherichia coli , but also occurs in other bacteria such as Mycobacterium smegmatis . Conjugation requires stable and extended contact between a donor and a recipient strain, is DNase resistant, and the transferred DNA is incorporated into the recipient chromosome by homologous recombination . E. coli conjugation is mediated by expression of plasmid genes, whereas mycobacterial conjugation is mediated by genes on the bacterial chromosome. [ 14 ]
Transduction is the process by which foreign DNA is introduced into a cell by a virus or viral vector . Transduction is a common tool used by molecular biologists to stably introduce a foreign gene into a host cell's genome .
Archaea is a domain of organisms that are prokaryotic and single-celled, and are thought to have developed 4 billion years ago. They have no cell nucleus or any other organelles inside their cells. Archaea replicate asexually in a process known as binary fission: the single circular chromosome is replicated, the two copies separate, and the cell divides, producing two haploid daughter cells. Archaeal chromosomes replicate from multiple origins of replication. [ 15 ] Archaea are motile, bearing flagella , tail-like structures. [ 16 ] They share a common ancestor with bacteria , but are more closely related to eukaryotes than to bacteria. [ 17 ] Some Archaea are able to survive extreme environments, which leads to many applications in the field of genetics. One such application is the use of archaeal enzymes, which are better able to survive harsh conditions in vitro . [ 18 ]
Gene transfer and genetic exchange have been studied in the halophilic archaeon Halobacterium volcanii and the hyperthermophilic archaeons Sulfolobus solfataricus and Sulfolobus acidocaldarius . H. volcanii forms cytoplasmic bridges between cells that appear to be used for transfer of DNA from one cell to another in either direction. [ 19 ] When S. solfataricus and S. acidocaldarius are exposed to DNA damaging agents, species-specific cellular aggregation is induced. Cellular aggregation mediates chromosomal marker exchange and genetic recombination with high frequency. Cellular aggregation is thought to enhance species-specific DNA transfer between Sulfolobus cells in order to provide increased repair of damaged DNA by means of homologous recombination . [ 20 ] [ 21 ] [ 22 ] Archaea are divided into three subgroups: methanogens , halophiles , and thermoacidophiles . The first group, methanogens, are archaebacteria that live in swamps and marshes as well as in the gut of humans. They also play a major role in the decay and decomposition of dead organisms. Methanogens are anaerobic organisms, which are killed when they are exposed to oxygen. The second subgroup, halophiles, are organisms present in areas with high salt concentration, like the Great Salt Lake and the Dead Sea. The third subgroup, thermoacidophiles, also called thermophiles, are organisms that live in acidic areas. They are present in areas with low pH levels, like hot springs and geysers; many thermophiles are found in Yellowstone National Park. [ 23 ]
Archaeal genetics is the study of genes in single nucleus-free cells. [ 24 ] Archaea have single, circular chromosomes that contain multiple origins of replication for initiation of DNA synthesis. [ 25 ] DNA replication in Archaea involves similar processes, including initiation, elongation, and termination. The primase used to synthesize an RNA primer differs from that of eukaryotes: the archaeal primase is a highly derived version of the RNA recognition motif (RRM). [ 25 ] Archaea have been proposed to derive from Gram-positive bacteria; both have a single lipid bilayer and are resistant to many antibiotics. Archaea are similar to mitochondria in eukaryotes in that they release energy as adenosine triphosphate (ATP) through the chemical reactions of metabolism. [ 25 ] Some archaea, known as phototrophic archaea, use the sun's energy to produce ATP; ATP synthase is used in photophosphorylation to convert light energy into ATP. [ 15 ]
Archaea and bacteria are structurally similar even though they are not closely related in the tree of life. The shapes of both bacterial and archaeal cells vary from a spherical shape, known as coccus, to a rod shape, known as bacillus. Both also lack an internal membrane and have a cell wall that assists the cell in maintaining its shape. Although archaeal cells have cell walls, these do not contain peptidoglycan, and archaea do not produce cellulose or chitin. Archaea are most closely related to eukaryotes, as shown by tRNA features present in archaea but not in bacteria, and archaea have ribosomes like those of eukaryotes for protein synthesis. [ 26 ] Aside from morphology, there are other differences between these domains. Archaea that live in extreme and harsh environments (including those with low pH), such as salt lakes, oceans, and the gut of ruminants and humans, are also known as extremophiles. In contrast, bacteria are found in various areas, such as plants, animals, soil, and rocks. [ 27 ]
Fungi can be both multicellular and unicellular organisms, and are distinguished from other microbes by the way they obtain nutrients. Fungi secrete enzymes into their surroundings, to break down organic matter. [ 9 ] Fungal genetics uses yeast , and filamentous fungi as model organisms for eukaryotic genetic research, including cell cycle regulation, chromatin structure and gene regulation . [ 28 ]
Studies of the fungus Neurospora crassa have contributed substantially to understanding how genes work. N. crassa is a type of red bread mold of the phylum Ascomycota . It is used as a model organism because it is easy to grow and has a haploid life cycle that makes genetic analysis simple since recessive traits will show up in the offspring. Analysis of genetic recombination is facilitated by the ordered arrangement of the products of meiosis in ascospores . In its natural environment, N. crassa lives mainly in tropical and sub-tropical regions. It often can be found growing on dead plant matter after fires.
Neurospora was used by Edward Tatum and George Beadle in their experiments [ 29 ] for which they won the Nobel Prize in Physiology or Medicine in 1958. The results of these experiments led directly to the one gene-one enzyme hypothesis that specific genes code for specific proteins . This concept proved to be the starting point of what became molecular genetics and all the developments that have followed from it. [ 30 ]
Saccharomyces cerevisiae is a yeast of the phylum Ascomycota . During vegetative growth that ordinarily occurs when nutrients are abundant, S. cerevisiae reproduces by mitosis as diploid cells. However, when starved, these cells undergo meiosis to form haploid spores . [ 31 ] Mating occurs when haploid cells of opposite mating types MATa and MATα come into contact. Ruderfer et al. [ 32 ] pointed out that, in nature, such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus , the sac that contains the cells directly produced by a single meiosis , and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type. An analysis of the ancestry of natural S. cerevisiae strains concluded that outcrossing occurs very infrequently (only about once every 50,000 cell divisions). [ 32 ] The relative rarity in nature of meiotic events that result from outcrossing suggests that the possible long-term benefits of outcrossing (e.g. generation of diversity) are unlikely to be sufficient for generally maintaining sex from one generation to the next. Rather, a short-term benefit, such as meiotic recombinational repair of DNA damages caused by stressful conditions (such as starvation) [ 33 ] may be the key to the maintenance of sex in S. cerevisiae .
Candida albicans is a diploid fungus that grows both as a yeast and as a filament . C. albicans is the most common fungal pathogen in humans. It causes both debilitating mucosal infections and potentially life-threatening systemic infections. C. albicans has maintained an elaborate, but largely hidden, mating apparatus. [ 34 ] Johnson [ 34 ] suggested that mating strategies may allow C. albicans to survive in the hostile environment of a mammalian host.
Among the 250 known species of aspergilli , about 33% have an identified sexual state. [ 35 ] Among those Aspergillus species that exhibit a sexual cycle the overwhelming majority in nature are homothallic (self-fertilizing). [ 35 ] Selfing in the homothallic fungus Aspergillus nidulans involves activation of the same mating pathways characteristic of sex in outcrossing species, i.e. self-fertilization does not bypass required pathways for outcrossing sex but instead requires activation of these pathways within a single individual. [ 36 ] Fusion of haploid nuclei occurs within reproductive structures termed cleistothecia , in which the diploid zygote undergoes meiotic divisions to yield haploid ascospores .
Protozoa are unicellular organisms that have nuclei and ultramicroscopic cellular bodies within their cytoplasm. [ 9 ] One aspect of protozoa of particular interest to human geneticists is their flagella , which are very similar to human sperm flagella.
Studies of Paramecium have contributed to our understanding of the function of meiosis. Like all ciliates , Paramecium has a polyploid macronucleus , and one or more diploid micronuclei . The macronucleus controls non-reproductive cell functions, expressing the genes needed for daily functioning. The micronucleus is the generative, or germline nucleus, containing the genetic material that is passed along from one generation to the next. [ 37 ]
In the asexual fission phase of growth, during which cell divisions occur by mitosis rather than meiosis , clonal aging occurs leading to a gradual loss of vitality. In some species, such as the well studied Paramecium tetraurelia , the asexual line of clonally aging paramecia loses vitality and expires after about 200 fissions if the cells fail to undergo meiosis followed by either autogamy (self-fertilization) or conjugation (outcrossing) (see aging in Paramecium ). DNA damage increases dramatically during successive clonal cell divisions and is a likely cause of clonal aging in P. tetraurelia . [ 38 ] [ 39 ] [ 40 ]
When clonally aged P. tetraurelia are stimulated to undergo meiosis in association with either autogamy or conjugation , the progeny are rejuvenated, and are able to have many more mitotic binary fission divisions. During either of these processes the micronuclei of the cell(s) undergo meiosis, the old macronucleus disintegrates and a new macronucleus is formed by replication of the micronuclear DNA that had recently undergone meiosis. There is apparently little, if any, DNA damage in the new macronucleus, suggesting that rejuvenation is associated with the repair of these damages in the micronucleus during meiosis. [ citation needed ]
Viruses are capsid -encoding organisms composed of proteins and nucleic acids that can self-assemble after replication in a host cell using the host's replication machinery. [ 41 ] There is a disagreement in science about whether viruses are living due to their lack of ribosomes . [ 41 ] Comprehending the viral genome is important not only for studies in genetics but also for understanding their pathogenic properties. [ 42 ]
Many types of virus are capable of genetic recombination. When two or more individual viruses of the same type infect a cell, their genomes may recombine with each other to produce recombinant virus progeny. Both DNA and RNA viruses can undergo recombination.
When two or more viruses, each containing lethal genomic damage, infect the same host cell, the virus genomes often can pair with each other and undergo homologous recombinational repair to produce viable progeny. [ 43 ] [ 44 ] This process is known as multiplicity reactivation. [ 43 ] [ 45 ] Enzymes employed in multiplicity reactivation are functionally homologous to enzymes employed in bacterial and eukaryotic recombinational repair. Multiplicity reactivation has been found to occur with pathogenic viruses including influenza virus, HIV-1, adenovirus, simian virus 40, vaccinia virus, reovirus, poliovirus and herpes simplex virus, as well as numerous bacteriophages. [ 45 ]
Any living organism can contract a virus; viruses are obligate intracellular parasites that exploit the resources of a host in order to replicate. Once the human body detects a virus, it produces immune cells that attack the virus and the cells it has infected. [ 46 ] A virus can affect any part of the body, causing a wide range of illnesses such as the flu, the common cold, and sexually transmitted diseases. [ 46 ] The flu, formally known as influenza, is an airborne virus that travels through tiny droplets and attacks the human respiratory system. People initially infected with this virus pass the infection on through normal day-to-day activities such as talking and sneezing. Unlike the common cold, the flu affects people almost immediately after contact with the virus. Its symptoms are similar to those of the common cold but much worse: body aches, sore throat, headache, cold sweats, muscle aches and fatigue are among the many symptoms that accompany the virus. [ 47 ] A viral infection of the upper respiratory tract results in the common cold. [ 48 ] With symptoms like sore throat, sneezing, low fever, and a cough, the common cold is usually harmless and tends to clear up within a week or so. The common cold virus is also spread through the air but can be passed through direct contact as well. This infection takes a few days to develop symptoms; it is a gradual process, unlike the flu. [ 48 ]
Microbes are ideally suited for biochemical and genetic studies and have made huge contributions to these fields of science, such as the demonstrations that DNA is the genetic material, [ 49 ] [ 50 ] that the gene has a simple linear structure, [ 51 ] that the genetic code is a triplet code, [ 52 ] and that gene expression is regulated by specific genetic processes. [ 53 ] Jacques Monod and François Jacob used Escherichia coli , a type of bacterium, to develop the operon model of gene expression , which laid the foundation for the study of gene expression and regulation. [ 54 ] Furthermore, the hereditary processes of single-celled eukaryotic microorganisms are similar to those in multicellular organisms, allowing researchers to gather information on this process as well. [ 55 ] Another bacterium that has greatly contributed to the field of genetics is Thermus aquaticus , a bacterium that tolerates high temperatures. From this microbe scientists isolated the enzyme Taq polymerase , which is now used in the powerful experimental technique of the polymerase chain reaction (PCR). [ 56 ] Additionally, the development of recombinant DNA technology through the use of bacteria has led to the birth of modern genetic engineering and biotechnology . [ 9 ]
Using microbes, protocols were developed to insert genes into bacterial plasmids , taking advantage of their fast reproduction to make biofactories for the gene of interest. Such genetically engineered bacteria can produce pharmaceuticals such as insulin , human growth hormone , interferons and blood clotting factors . [ 9 ] These biofactories are typically much cheaper to operate and maintain than alternative procedures for producing pharmaceuticals: in effect, they are millions of tiny pharmaceutical machines that require only basic raw materials and the right environment to produce a large amount of product. The incorporation of the human insulin gene alone has had profound impacts on the medical industry. It is thought that biofactories might be the ultimate key to reducing the price of expensive life-saving pharmaceutical compounds.
Microbes synthesize a variety of enzymes for industrial applications, such as fermented foods, laboratory test reagents, dairy products (such as rennin ), and even clothing (such as the Trichoderma fungus, whose enzyme is used to give jeans a stone-washed appearance). [ 9 ]
There is currently potential for microbes to be used as an alternative to petroleum-based surfactants. Microbial surfactants have the same kinds of hydrophilic and hydrophobic functional groups as their petroleum-based counterparts, but they have numerous advantages over their competition. Microbial amphiphilic compounds have a robust tendency to remain functional in extreme environments, such as areas with high heat or extreme pH, all while being biodegradable and less toxic to the environment. This efficient and cheap method of production could be the solution to the ever-increasing global consumption of surfactants. Ironically, the application for bio-based surfactants with the most demand is the oil industry, which uses surfactants in general production as well as in the development of specific oil compositions. [ 57 ]
Microbes are an abundant source of lipases , which have a wide variety of industrial and consumer applications. Because enzymes perform a wide variety of functions inside the cells of living things, they can be put to similar uses on a larger scale. Microbial enzymes are typically preferred for mass production due to the wide variety of functions available and their ease of mass production. Plant and animal enzymes are typically too expensive to be mass-produced, although this is not always the case, especially for plant enzymes. Industrial applications of lipases generally use the enzyme as a more efficient and cost-effective catalyst in the production of commercially valuable chemicals from fats and oils, because lipases retain their specific properties under mild, easy-to-maintain conditions and work at an increased rate. Other already successful applications of lipolytic enzymes include the production of biofuels, polymers, non-stereoisomeric pharmaceuticals, agricultural compounds, and flavor-enhancing compounds. [ 58 ]
With regard to industrial optimization, a benefit of the biofactory method of production is the ability to direct optimization by means of directed evolution. The efficiency and specificity of production increase over time as artificial selection is imposed. This method of improving efficiency is nothing new in agriculture, but it is a relatively new concept in industrial production. It is thought that this method will be far superior to conventional industrial methods because optimization proceeds on multiple fronts. The first front is that the microorganisms that make up biofactories can be evolved to suit production needs. The second is the conventional optimization brought about by the integration of advancing technologies. This combination of conventional and biological advancement is only now being utilized and provides a virtually limitless number of applications. [ 59 ]
Microbial hyaluronic acid production refers to the process by which microorganisms , such as bacteria and yeast , are utilized in fermentation to synthesize hyaluronic acid (HA). [ 1 ] HA is used in a wide range of medical, cosmetic , and biological products because of its high moisture retention and viscoelasticity qualities. [ 2 ] HA had originally been extracted from rooster combs in limited quantities. [ 3 ] However, challenges such as low yields, high production costs, and ethical issues associated with animal-derived HA has driven the development of microbial production methods for HA. [ 4 ]
Although other methods exist, such as chemical synthesis and modification, chemoenzymatic synthesis, and enzymatic synthesis, microbial fermentation is preferred for producing HA because of its economic advantages. [ 5 ]
Some bacteria, like Streptococcus , develop an extracellular capsule that contains HA. This capsule functions as a molecular mimic to elude the host's immune system during the infection process, in addition to providing adherence and protection. [ 6 ] Streptococcus zooepidemicus was used for the first commercial HA fermentation and remains the most widely used bacterium because it provides high yields, although it is a pathogenic microorganism. [ 7 ]
HA production in S. zooepidemicus is encoded by the hasA , hasB , hasC , hasD and hasE genes. [ 8 ]
These genes encode enzymes involved in HA biosynthesis and transport, including a bifunctional pyrophosphorylase.
Because S. zooepidemicus is pathogenic, genetically modified producers have been developed, such as Kluyveromyces lactis , [ 14 ] Lactococcus lactis , [ 15 ] Bacillus subtilis , [ 16 ] Escherichia coli , [ 17 ] and Corynebacterium glutamicum . [ 18 ] [ 19 ]
During HA production, intermediates are drawn from pathways essential to supporting cell growth, such as the production of organic acids and polysaccharides. [ 20 ] HA is not an essential metabolite, and it competes with other metabolites for the carbon flux in the cell. [ 4 ] The reduction potential of S. zooepidemicus may play a role in hyaluronic acid production, because 2 NAD + are consumed during the synthesis of one monomer. Although NAD + does not control HA synthesis when NADH oxidase is over-expressed, [ 21 ] it has a major role in biomass formation.
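As a rough illustration of this cofactor demand, the sketch below converts the stated stoichiometry (2 NAD + reduced per HA monomer) into NAD + turnover per gram of polymer. The repeat-unit mass of about 379 g/mol (the glucuronic acid plus N-acetylglucosamine disaccharide, minus the water lost in the glycosidic bonds) is an added assumption for illustration, not a value from the source.

```python
# Back-of-the-envelope cofactor demand for HA synthesis.
# Assumption: each disaccharide repeat unit of the HA polymer
# (glucuronic acid + N-acetylglucosamine, ~379 g/mol once polymerized)
# consumes 2 NAD+, as stated in the text for one monomer.

REPEAT_UNIT_MASS = 379.3   # g/mol, assumed mass of the disaccharide repeat unit
NAD_PER_UNIT = 2           # NAD+ reduced per disaccharide (from the text)

def nad_per_gram_ha(grams: float = 1.0) -> float:
    """Moles of NAD+ reduced to NADH in synthesizing `grams` of HA polymer."""
    moles_of_units = grams / REPEAT_UNIT_MASS
    return moles_of_units * NAD_PER_UNIT

print(f"{nad_per_gram_ha():.4f} mol NAD+ per g HA")
```

This figure, on the order of 5 mmol NAD + per gram of HA, illustrates why the cell's redox balance matters when carbon is partitioned between HA and biomass.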
Some studies have shown that a balanced intracellular concentration of precursors, such as UDP-acetylglucosamine, together with balanced fluxes, yields higher molecular weight HA. [ 22 ] [ 23 ] Enzymes of S. zooepidemicus such as hyaluronidase [ 24 ] and β-glucuronidase [ 25 ] decrease the yield of HA, and deleting the genes encoding these enzymes increases HA concentration. [ 24 ] [ 25 ]
On the other hand, some enzymes induce HA production, such as sucrose-6-phosphate hydrolase [ 26 ] and hyaluronan synthase. [ 27 ] Combining approaches that target both types of enzymes is a good strategy for high-yield HA production. [ 20 ]
HA is produced around the cell , where it serves the bacterium as a barrier against the host immune system. Only about 8% of the HA remains attached to the cell by the time the cells reach stationary phase . Surfactants such as sodium dodecyl sulfate (SDS) are used to recover this cell-attached product. [ 28 ] Hyaluronan synthase , a membrane-bound enzyme, is one of the factors that reduces the production of HA; it limits hyaluronic acid production by affecting cell morphology. [ 28 ]
Organic acids formed during HA production by S. zooepidemicus cause the pH to decrease. [ 20 ] Although HA production without pH control is cheaper, pH-controlled fermentation is preferred because it provides higher hyaluronic acid yields. [ 29 ] [ 30 ]
Temperature affects both the yield and the molecular weight of HA. [ 31 ] HA production increases when bacterial cells are grown above 37°C, whereas fermentation below 32°C gives a lower HA yield but a higher molecular weight. [ 30 ]
Although S. zooepidemicus is an aerotolerant anaerobe , hyaluronic acid production is affected by oxygen because the NADH/NAD + balance of the cells changes with the amount of oxygen. Controlling oxygen during cultivation via the agitation rate increases both HA yield and molecular weight. [ 32 ]
The carbon source is one of the media components that affects the production of microbial HA. [ 20 ] Glucose [ 33 ] [ 34 ] is the most commonly used carbon source for HA production, but molasses , [ 35 ] sucrose , [ 36 ] and maltose [ 32 ] are also used for microbial production.
HA production also requires many amino acids in the culture medium; the concentration of the nitrogen source therefore plays a key role. [ 37 ]
Microbial inoculants , also known as soil inoculants or bioinoculants , are agricultural amendments that use beneficial rhizospheric or endophytic microbes to promote plant health. Many of the microbes involved form symbiotic relationships with the target crops, where both parties benefit ( mutualism ). While microbial inoculants are applied to improve plant nutrition, they can also be used to promote plant growth by stimulating plant hormone production. [ 1 ] [ 2 ] Although bacterial and fungal inoculants are common, inoculation with archaea to promote plant growth is being increasingly studied. [ 3 ]
Research into the benefits of inoculants in agriculture extends beyond their capacity as biofertilizers . Microbial inoculants can induce systemic acquired resistance (SAR) of crop species to several common crop diseases (provides resistance against pathogens). So far SAR has been demonstrated for powdery mildew ( Blumeria graminis f. sp. hordei , Heitefuss, 2001), take-all ( Gaeumannomyces graminis var. tritici , Khaosaad et al. , 2007), leaf spot ( Pseudomonas syringae , Ramos Solano et al. , 2008) and root rot ( Fusarium culmorum , Waller et al. 2005).
However, it is increasingly recognized that microbial inoculants often modify the soil microbial community (Mawarda et al. , 2020). Additionally, recent research (2024) suggests that as few as one in nine commercial products are beneficial. Common problems are crop mortality, unlabeled fertilizers, and non-viability (products dead on arrival). A global study found mycorrhizal colonization to be less than 10% when commercial products are used, [ 4 ] meaning that much of the estimated 836 million USD spent annually on commercial inoculants could be better spent.
The rhizobacteria commonly applied as inoculants include nitrogen-fixers, phosphate-solubilisers and other root-associated beneficial bacteria which enhance the availability of the macronutrients nitrogen and phosphorus to the host plant. Such bacteria are commonly referred to as plant growth promoting rhizobacteria (PGPR).
The most commonly applied rhizobacteria are Rhizobium and closely related genera. Rhizobium are nitrogen-fixing bacteria that form symbiotic associations within nodules on the roots of legumes . This increases host nitrogen nutrition and is important to the cultivation of soybeans, chickpeas and many other leguminous crops. For non-leguminous crops, Azospirillum has been demonstrated to be beneficial in some cases for nitrogen fixation and plant nutrition. [ 1 ]
For cereal crops, diazotrophic rhizobacteria have increased plant growth, [ 5 ] grain yield (Caballero-Mellado et al. , 1992), nitrogen and phosphorus uptake, [ 5 ] and nitrogen (Caballero-Mellado et al. , 1992), phosphorus (Caballero-Mellado et al. , 1992; Belimov et al. , 1995) and potassium content (Caballero-Mellado et al. , 1992). Rhizobia live in root nodules and are associated with legumes.
To improve phosphorus nutrition, the use of phosphate-solubilising bacteria (PSB) such as Agrobacterium radiobacter has also received attention (Belimov et al. , 1995a; 1995b; Singh & Kapoor, 1999). As the name suggests, PSB are free-living bacteria that break down inorganic soil phosphates to simpler forms that enable uptake by plants.
A symbiotic relationship between fungi and plant roots is referred to as a mycorrhizal association. [ 6 ] This symbiotic relationship is present in nearly all land plants and gives both the plant and the fungus advantages for survival. [ 6 ] The plant can give upwards of 5-30% of its energy production to the fungus in exchange for an increased root absorptive area provided by hyphae, which gives the plant access to nutrients it would otherwise not be able to attain. [ 6 ] [ 7 ] The two most common mycorrhizae are arbuscular mycorrhizae and ectomycorrhizae . Ectomycorrhizal associations are most commonly found in woody species and have fewer implications for agricultural systems. [ 8 ]
Arbuscular mycorrhiza (AM) has received attention as a potential agricultural amendment for its ability to access phosphorus and provide it to the host plant. [ 8 ] In a reduced-fertilization greenhouse system inoculated with a mixture of AM fungi and rhizobacteria , tomato yields equal to those obtained at 100% fertilization were attained at 70% fertilization. [ 9 ] This 30% reduction in fertilizer application can aid in the reduction of nutrient pollution , and help prolong finite mineral resources such as phosphorus ( Peak phosphorus ). Other effects include increases in salinity tolerance, [ 10 ] drought tolerance, [ 11 ] and resistance to trace metal toxicity. [ 12 ]
Fungal inoculation alone can benefit host plants. Inoculation paired with other amendments can further improve conditions. Arbuscular mycorrhizal inoculation combined with compost is a common household amendment for personal gardens, agriculture, and nurseries. It has been observed that this pairing can also promote microbial functions in soils that have been affected by mining . [ 13 ]
Certain fungal partners do best in specific ecotones or with certain crops. Arbuscular mycorrhizal inoculation paired with plant growth promoting bacteria resulted in a higher yield and quicker maturation in upland rice paddies. [ 14 ]
Maize growth improved after an amendment of arbuscular mycorrhizae and biochar . This amendment can also decrease cadmium uptake by crops. [ 15 ]
Fungal inoculants can be used with or without additional amendments in private gardens, homesteads, agricultural production, native nurseries, and land restoration projects.
The combination of strains of Plant Growth Promoting Rhizobacteria (PGPR) has been shown to benefit rice and barley. [ 16 ] [ 17 ] The main benefit from dual inoculation is increased plant nutrient uptake from both soil and fertilizer. [ 16 ] Multiple strains of inoculant have also been demonstrated to increase total nitrogenase activity compared to single strains of inoculants, even when only one strain is diazotrophic . [ 16 ] [ 18 ] [ 19 ]
PGPR and arbuscular mycorrhizae in combination can be useful in increasing wheat growth in nutrient poor soil [ 20 ] and improving nitrogen-extraction from fertilised soils. [ 21 ] | https://en.wikipedia.org/wiki/Microbial_inoculant |
The microbial loop describes a trophic pathway where, in aquatic systems, dissolved organic carbon (DOC) is returned to higher trophic levels via its incorporation into bacterial biomass, and then coupled with the classic food chain formed by phytoplankton - zooplankton - nekton . In soil systems, the microbial loop refers to soil carbon . The term microbial loop was coined by Farooq Azam , Tom Fenchel et al. [ 1 ] in 1983 to include the role played by bacteria in the carbon and nutrient cycles of the marine environment.
In general, dissolved organic carbon (DOC) is introduced into the ocean environment from bacterial lysis , the leakage or exudation of fixed carbon from phytoplankton (e.g., mucilaginous exopolymer from diatoms ), sudden cell senescence , sloppy feeding by zooplankton, the excretion of waste products by aquatic animals, or the breakdown or dissolution of organic particles from terrestrial plants and soils. [ 2 ] Bacteria in the microbial loop decompose this particulate detritus to utilize this energy-rich matter for growth. Since more than 95% of organic matter in marine ecosystems consists of polymeric, high molecular weight (HMW) compounds (e.g., protein, polysaccharides, lipids), only a small portion of total dissolved organic matter (DOM) is readily utilizable to most marine organisms at higher trophic levels. This means that dissolved organic carbon is not available directly to most marine organisms; marine bacteria introduce this organic carbon into the food web, resulting in additional energy becoming available to higher trophic levels. Recently the term " microbial food web " has been substituted for the term "microbial loop".
Prior to the discovery of the microbial loop, the classic view of marine food webs was one of a linear chain from phytoplankton to nekton . Generally, marine bacteria were not thought to be significant consumers of organic matter (including carbon), although they were known to exist. However, the view of a marine pelagic food web was challenged during the 1970s and 1980s by Pomeroy and Azam, who suggested the alternative pathway of carbon flow from bacteria to protozoans to metazoans . [ 3 ] [ 1 ]
Early work in marine ecology that investigated the role of bacteria in oceanic environments concluded that their role was very minor. Traditional methods of counting bacteria (e.g., culturing on agar plates ) yielded counts far below the true ambient abundance of bacteria in seawater. Developments in technology for counting bacteria have led to an understanding of the significant importance of marine bacteria in oceanic environments.
In the 1970s, the alternative technique of direct microscopic counting was developed by Francisco et al. (1973) and Hobbie et al. (1977). Bacterial cells were counted with an epifluorescence microscope , producing what is called an " acridine orange direct count" (AODC). This led to a reassessment of the large concentration of bacteria in seawater, which was found to be more than was expected (typically on the order of 1 million per milliliter). Also, development of the "bacterial productivity assay" showed that a large fraction (i.e. 50%) of net primary production (NPP) was processed by marine bacteria.
In 1974, Larry Pomeroy published a paper in BioScience entitled "The Ocean's Food Web: A Changing Paradigm", where the key role of microbes in ocean productivity was highlighted. [ 3 ] In the early 1980s, Azam and a panel of top ocean scientists published the synthesis of their discussion in the journal Marine Ecology Progress Series entitled "The Ecological Role of Water Column Microbes in the Sea". The term 'microbial loop' was introduced in this paper, which noted that the bacteria-consuming protists were in the same size class as phytoplankton and likely an important component of the diet of planktonic crustaceans . [ 1 ]
Evidence accumulated since this time has indicated that some of these bacterivorous protists (such as ciliates ) are actually selectively preyed upon by these copepods . In 1986, Prochlorococcus , which is found in high abundance in oligotrophic areas of the ocean, was discovered by Sallie W. Chisholm , Robert J. Olson, and other collaborators, [ 6 ] [ 7 ] although there had been several earlier records of very small cyanobacteria containing chlorophyll b in the ocean. [ 4 ] [ 5 ] Stemming from this discovery, researchers observed the changing role of marine bacteria along a nutrient gradient from eutrophic to oligotrophic areas in the ocean.
The efficiency of the microbial loop is determined by the density of marine bacteria within it. [ 8 ] It has become clear that bacterial density is mainly controlled by the grazing activity of small protozoans, such as various taxonomic groups of flagellates. Also, viral infection causes bacterial lysis, which releases cell contents back into the dissolved organic matter (DOM) pool, lowering the overall efficiency of the microbial loop. Mortality from viral infection is of almost the same magnitude as that from protozoan grazing. However, compared to protozoan grazing, the effect of viral lysis can be very different because lysis is highly host-specific to each marine bacterium. Together, protozoan grazing and viral infection balance the major fraction of bacterial growth. In addition, the microbial loop dominates in oligotrophic waters rather than in eutrophic areas, where the classical plankton food chain predominates due to the frequent fresh supply of mineral nutrients (e.g., the spring bloom in temperate waters and upwelling areas). The magnitude of the efficiency of the microbial loop can be determined by measuring bacterial incorporation of radiolabeled substrates (such as tritiated thymidine or leucine).
The microbial loop is of particular importance in increasing the efficiency of the marine food web via the utilization of dissolved organic matter (DOM), which is typically unavailable to most marine organisms. In this sense, the process aids in recycling of organic matter and nutrients and mediates the transfer of energy above the thermocline . More than 30% of dissolved organic carbon (DOC) incorporated into bacteria is respired and released as carbon dioxide . The other main effect of the microbial loop in the water column is that it accelerates mineralization through regenerating production in nutrient-limited environments (e.g. oligotrophic waters). In general, the entire microbial loop is to some extent typically five to ten times the mass of all multicellular marine organisms in the marine ecosystem. Marine bacteria are the base of the food web in most oceanic environments, and they improve the trophic efficiency of both marine food webs and important aquatic processes (such as the productivity of fisheries and the amount of carbon exported to the ocean floor). Therefore, the microbial loop, together with primary production, controls the productivity of marine systems in the ocean.
Many planktonic bacteria are motile, using a flagellum to propel themselves and chemotaxis to locate, move toward, and attach to a point source of dissolved organic matter (DOM), where fast-growing cells digest all or part of the particle. Accumulation at such patches is directly observable within just a few minutes. The water column can therefore be considered, to some extent, a spatially organized system at small scales rather than a completely mixed one. This patch formation affects the biologically mediated transfer of matter and energy in the microbial loop.
More recently, the microbial loop has come to be considered more extended. [ 9 ] Chemical compounds typical of bacteria (such as DNA, lipids, and sugars), and similar C:N ratios per particle, are found in microparticles formed abiotically. Microparticles are a potentially attractive food source for bacterivorous plankton. If this is the case, the microbial loop can be extended by a pathway of direct transfer of dissolved organic matter (DOM) to higher trophic levels via abiotic microparticle formation. This has ecological importance in two ways. First, it occurs without carbon loss and makes organic matter available more efficiently to phagotrophic organisms, rather than only to heterotrophic bacteria. Furthermore, abiotic transformation in the extended microbial loop depends only on temperature and the capacity of DOM to aggregate, whereas biotic transformation depends on its biological availability. [ 9 ]
Soil ecosystems are highly complex and subject to different landscape-scale perturbations that govern whether soil carbon is retained or released to the atmosphere. [ 11 ] The ultimate fate of soil organic carbon is a function of the combined activities of plants and below ground organisms, including soil microbes. Although soil microorganisms are known to support a plethora of biogeochemical functions related to carbon cycling, [ 12 ] the vast majority of the soil microbiome remains uncultivated and has largely cryptic functions. [ 13 ] Only a mere fraction of soil microbial life has been catalogued to date, although new soil microbes [ 13 ] and viruses are increasingly being discovered. [ 14 ] This lack of knowledge results in uncertainty of the contribution of soil microorganisms to soil organic carbon cycling and hinders construction of accurate predictive models for global carbon flux under climate change . [ 15 ] [ 10 ]
The lack of information concerning the soil microbiome metabolic potential makes it particularly challenging to accurately account for the shifts in microbial activities that occur in response to environmental change. For example, plant-derived carbon inputs can prime microbial activity to decompose existing soil organic carbon at rates higher than model expectations, resulting in error within predictive models of carbon fluxes. [ 16 ] [ 10 ]
To account for this, a conceptual model known as the microbial carbon pump, illustrated in the diagram on the right, has been developed to define how soil microorganisms transform and stabilise soil organic matter. [ 17 ] As shown in the diagram, carbon dioxide in the atmosphere is fixed by plants (or autotrophic microorganisms) and added to soil through processes such as (1) root exudation of low-molecular weight simple carbon compounds, or deposition of leaf and root litter leading to accumulation of complex plant polysaccharides. (2) Through these processes, carbon is made bioavailable to the microbial metabolic "factory" and subsequently is either (3) respired to the atmosphere or (4) enters the stable carbon pool as microbial necromass. The exact balance of carbon efflux versus persistence is a function of several factors, including aboveground plant community composition and root exudate profiles, environmental variables, and collective microbial phenotypes (i.e., the metaphenome). [ 18 ] [ 10 ]
In this model, microbial metabolic activities for carbon turnover are segregated into two categories: ex vivo modification, referring to transformation of plant-derived carbon by extracellular enzymes, and in vivo turnover, for intracellular carbon used in microbial biomass turnover or deposited as dead microbial biomass, referred to as necromass. The contrasting impacts of catabolic activities that release soil organic carbon as carbon dioxide (CO 2 ), versus anabolic pathways that produce stable carbon compounds, control net carbon retention rates. In particular, microbial carbon sequestration represents an underrepresented aspect of soil carbon flux that the microbial carbon pump model attempts to address. [ 17 ] A related area of uncertainty is how the type of plant-derived carbon enhances microbial soil organic carbon storage or alternatively accelerates soil organic carbon decomposition. [ 19 ] For example, leaf litter and needle litter serve as sources of carbon for microbial growth in forest soils, but litter chemistry and pH vary by vegetation type (e.g., between root and foliar litter [ 20 ] or between deciduous and coniferous forest litter). In turn, these biochemical differences influence soil organic carbon levels through changing decomposition dynamics. [ 21 ] Also, increased diversity of plant communities increases rates of rhizodeposition, stimulating microbial activity and soil organic carbon storage, [ 22 ] although soils eventually reach a saturation point beyond which they cannot store additional carbon. [ 23 ] [ 10 ]
A microbial mat is a multi-layered sheet or biofilm of microbial colonies , composed mainly of bacteria and/or archaea . Microbial mats grow at interfaces between different types of material, mostly on submerged or moist surfaces , but a few survive in deserts. [ 1 ] A few are found as endosymbionts of animals .
Although only a few centimetres thick at most, microbial mats create a wide range of internal chemical environments, and hence generally consist of layers of microorganisms that can feed on or at least tolerate the dominant chemicals at their level and which are usually of closely related species. In moist conditions mats are usually held together by slimy substances secreted by the microorganisms. In many cases some of the bacteria form tangled webs of filaments which make the mat tougher. The best known physical forms are flat mats and stubby pillars called stromatolites , but there are also spherical forms.
Microbial mats are the earliest form of life on Earth for which there is good fossil evidence, from 3,500 million years ago , and have been the most important members and maintainers of the planet's ecosystems . Originally they depended on hydrothermal vents for energy and chemical "food", but the development of photosynthesis allowed mats to proliferate outside of these environments by utilizing a more widely available energy source, sunlight. The final and most significant stage of this liberation was the development of oxygen-producing photosynthesis, since the main chemical inputs for this are carbon dioxide and water.
As a result, microbial mats began to produce the atmosphere we know today, in which free oxygen is a vital component. At around the same time they may also have been the birthplace of the more complex eukaryote type of cell , of which all multicellular organisms are composed. [ 2 ] Microbial mats were abundant on the shallow seabed until the Cambrian substrate revolution , when animals living in shallow seas increased their burrowing capabilities and thus broke up the surfaces of mats and let oxygenated water into the deeper layers, poisoning the oxygen-intolerant microorganisms that lived there. Although this revolution drove mats off soft floors of shallow seas, they still flourish in many environments where burrowing is limited or impossible, including rocky seabeds and shores, and hyper-saline and brackish lagoons. They are found also on the floors of the deep oceans.
Because of microbial mats' ability to use almost anything as "food", there is considerable interest in industrial uses of mats, especially for water treatment and for cleaning up pollution .
Microbial mats may also be referred to as algal mats and bacterial mats. They are a type of biofilm that is large enough to see with the naked eye and robust enough to survive moderate physical stresses. These colonies of bacteria form on surfaces at many types of interface , for example between water and the sediment or rock at the bottom, between air and rock or sediment, between soil and bed-rock, etc. Such interfaces form vertical chemical gradients , i.e. vertical variations in chemical composition, which make different levels suitable for different types of bacteria and thus divide microbial mats into layers, which may be sharply defined or may merge more gradually into each other. [ 3 ] A variety of microbes are able to transcend the limits of diffusion by using "nanowires" to shuttle electrons from their metabolic reactions up to two centimetres deep in the sediment – for example, electrons can be transferred from reactions involving hydrogen sulfide deeper within the sediment to oxygen in the water, which acts as an electron acceptor. [ 4 ]
The best-known types of microbial mat may be flat laminated mats, which form on approximately horizontal surfaces, and stromatolites , stubby pillars built as the microbes slowly move upwards to avoid being smothered by sediment deposited on them by water. However, there are also spherical mats, some on the outside of pellets of rock or other firm material and others inside spheres of sediment. [ 3 ]
A microbial mat consists of several layers, each of which is dominated by specific types of microorganisms , mainly bacteria . Although the composition of individual mats varies depending on the environment, as a general rule the by-products of each group of microorganisms serve as "food" for other groups. In effect each mat forms its own food chain , with one or a few groups at the top of the food chain as their by-products are not consumed by other groups. Different types of microorganism dominate different layers based on their comparative advantage for living in that layer. In other words, they live in positions where they can out-perform other groups rather than where they would absolutely be most comfortable — ecological relationships between different groups are a combination of competition and co-operation. Since the metabolic capabilities of bacteria (what they can "eat" and what conditions they can tolerate) generally depend on their phylogeny (i.e. the most closely related groups have the most similar metabolisms), the different layers of a mat are divided both by their different metabolic contributions to the community and by their phylogenetic relationships.
In a wet environment where sunlight is the main source of energy, the uppermost layers are generally dominated by aerobic photosynthesizing cyanobacteria (blue-green bacteria whose color is caused by their chlorophyll ), while the lowest layers are generally dominated by anaerobic sulfate-reducing bacteria . [ 5 ] Sometimes there are intermediate layers (oxygenated only in the daytime) inhabited by facultatively anaerobic bacteria. For example, in hypersaline ponds near Guerrero Negro (Mexico), various kinds of mats have been explored. Some mats have a middle purple layer inhabited by photosynthesizing purple bacteria. [ 6 ] Others have a white layer inhabited by chemotrophic sulfur-oxidizing bacteria, and beneath it an olive layer inhabited by photosynthesizing green sulfur bacteria and heterotrophic bacteria. [ 7 ] This layer structure is not fixed over the course of a day: some species of cyanobacteria migrate to deeper layers in the morning and return in the evening, to avoid intense sunlight and UV radiation at mid-day. [ 7 ] [ 8 ]
Microbial mats are generally held together and bound to their substrates by slimy extracellular polymeric substances which they secrete. In many cases some of the bacteria form filaments (threads), which tangle and thus increase the colonies' structural strength, especially if the filaments have sheaths (tough outer coverings). [ 3 ]
This combination of slime and tangled threads attracts other microorganisms which become part of the mat community, for example protozoa , some of which feed on the mat-forming bacteria, and diatoms , which often seal the surfaces of submerged microbial mats with thin, parchment -like coverings. [ 3 ]
Marine mats may grow to a few centimeters in thickness, of which only the top few millimeters are oxygenated. [ 9 ]
Underwater microbial mats have been described as layers that live by exploiting and to some extent modifying local chemical gradients , i.e. variations in the chemical composition. Thinner, less complex biofilms live in many sub-aerial environments, for example on rocks, on mineral particles such as sand, and within soil . They have to survive for long periods without liquid water, often in a dormant state. Microbial mats that live in tidal zones, such as those found in the Sippewissett salt marsh , often contain a large proportion of similar microorganisms that can survive for several hours without water. [ 3 ]
Microbial mats and less complex types of biofilm are found at temperature ranges from –40 °C to +120 °C, because variations in pressure affect the temperatures at which water remains liquid. [ 3 ]
They even appear as endosymbionts in some animals, for example in the hindguts of some echinoids . [ 10 ]
Microbial mats use all of the types of metabolism and feeding strategy that have evolved on Earth—anoxygenic and oxygenic photosynthesis ; anaerobic and aerobic chemotrophy (using chemicals rather than sunshine as a source of energy); organic and inorganic respiration and fermentation (i.e. converting food into energy with and without using oxygen in the process); autotrophy (producing food from inorganic compounds) and heterotrophy (producing food only from organic compounds, by some combination of predation and detritivory ). [ 3 ]
Most sedimentary rocks and ore deposits have grown by a reef -like build-up rather than by "falling" out of the water, and this build-up has been at least influenced and perhaps sometimes caused by the actions of microbes. Stromatolites , bioherms (domes or columns similar internally to stromatolites) and biostromes (distinct sheets of sediment) are among such microbe-influenced build-ups. [ 3 ] Other types of microbial mat have created wrinkled "elephant skin" textures in marine sediments, although it was many years before these textures were recognized as trace fossils of mats. [ 12 ] Microbial mats have increased the concentration of metal in many ore deposits, and without this it would not be feasible to mine them—examples include iron (both sulfide and oxide ores), uranium, copper, silver and gold deposits. [ 3 ]
Microbial mats are among the oldest clear signs of life, as microbially induced sedimentary structures (MISS) formed 3,480 million years ago have been found in Western Australia . [ 3 ] [ 13 ] [ 14 ] At that early stage the mats' structure may already have been similar to that of modern mats that do not include photosynthesizing bacteria. It is even possible that non-photosynthesizing mats were present as early as 4,000 million years ago . If so, their energy source would have been hydrothermal vents (high-pressure hot springs around submerged volcanoes ), and the evolutionary split between bacteria and archaea may also have occurred around this time. [ 15 ]
The earliest mats may have been small, single-species biofilms of chemotrophs that relied on hydrothermal vents to supply both energy and chemical "food". Within a short time (by geological standards) the build-up of dead microorganisms would have created an ecological niche for scavenging heterotrophs , possibly methane-emitting and sulfate-reducing organisms that would have formed new layers in the mats and enriched their supply of biologically useful chemicals. [ 15 ]
It is generally thought that photosynthesis , the biological generation of chemical energy from light, evolved shortly after 3,000 million years ago (3 billion). [ 15 ] However, an isotope analysis suggests that oxygenic photosynthesis may have been widespread as early as 3,500 million years ago . [ 15 ] There are several different types of photosynthetic reaction, and analysis of bacterial DNA indicates that photosynthesis first arose in anoxygenic purple bacteria , while the oxygenic photosynthesis seen in cyanobacteria and much later in plants was the last to evolve. [ 16 ]
The earliest photosynthesis may have been powered by infra-red light, using modified versions of pigments whose original function was to detect infra-red heat emissions from hydrothermal vents. The development of photosynthetic energy generation enabled the microorganisms first to colonize wider areas around vents and then to use sunlight as an energy source. The role of the hydrothermal vents was now limited to supplying reduced metals into the oceans as a whole rather than being the main supporters of life in specific locations. [ 16 ] Heterotrophic scavengers would have accompanied the photosynthesizers in their migration out of the "hydrothermal ghetto". [ 15 ]
The evolution of purple bacteria, which do not produce or use oxygen but can tolerate it, enabled mats to colonize areas that locally had relatively high concentrations of oxygen, which is toxic to organisms that are not adapted to it. [ 17 ] Microbial mats could have been separated into oxidized and reduced layers. [ 15 ]
The last major stage in the evolution of microbial mats was the appearance of cyanobacteria , photosynthesizers which both produce and use oxygen . This gave undersea mats their typical modern structure: an oxygen-rich top layer of cyanobacteria; a layer of photosynthesizing purple bacteria that could tolerate oxygen; and oxygen-free, H 2 S -dominated lower layers of heterotrophic scavengers, mainly methane-emitting and sulfate-reducing organisms. [ 15 ]
It is estimated that the appearance of oxygenic photosynthesis increased biological productivity by a factor of between 100 and 1,000. All photosynthetic reactions require a reducing agent , but the significance of oxygenic photosynthesis is that it uses water as a reducing agent, and water is much more plentiful than the geologically produced reducing agents on which photosynthesis previously depended. The resulting increases in the populations of photosynthesizing bacteria in the top layers of microbial mats would have caused corresponding population increases among the chemotrophic and heterotrophic microorganisms that inhabited the lower layers and which fed respectively on the by-products of the photosynthesizers and on the corpses and/or living bodies of the other mat organisms. These increases would have made microbial mats the planet's dominant ecosystems. From this point onwards life itself would have produced significantly more of the resources it needed than did geochemical processes. [ 18 ]
Oxygenic photosynthesis in microbial mats would also have increased the free oxygen content of the Earth's atmosphere, both directly by emitting oxygen and because the mats emitted molecular hydrogen (H 2 ), some of which would have escaped from the Earth's atmosphere before it could re-combine with free oxygen to form more water. Microbial mats thus likely played a major role in the evolution of organisms which could first tolerate free oxygen and then use it as an energy source. [ 18 ] Oxygen is toxic to organisms that are not adapted to it, but greatly increases the metabolic efficiency of oxygen-adapted organisms [ 17 ] — for example anaerobic fermentation produces a net yield of two molecules of adenosine triphosphate , cells' internal "fuel", per molecule of glucose , while aerobic respiration produces a net yield of 36. [ 19 ] The oxygenation of the atmosphere was a prerequisite for the evolution of the more complex eukaryote type of cell, from which all multicellular organisms are built. [ 20 ]
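The yield figures above make the efficiency gap easy to quantify. A minimal sketch, using only the net yields of 2 and 36 ATP per glucose quoted in the text:

```python
# Net ATP yield per molecule of glucose, as given in the text.
ATP_FERMENTATION = 2   # anaerobic fermentation (substrate-level phosphorylation only)
ATP_AEROBIC = 36       # aerobic respiration (glycolysis + citric acid cycle + electron transport)

advantage = ATP_AEROBIC / ATP_FERMENTATION
print(f"Aerobic respiration yields {advantage:.0f}x more ATP per glucose")  # 18x
```

This 18-fold difference is the metabolic payoff that made oxygen adaptation worthwhile despite oxygen's toxicity to unadapted organisms.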
Cyanobacteria have the most complete biochemical "toolkits" of all the mat-forming organisms: the photosynthesis mechanisms of both green bacteria and purple bacteria; oxygen production; and the Calvin cycle , which converts carbon dioxide and water into carbohydrates and sugars . It is likely that they acquired many of these sub-systems from existing mat organisms, by some combination of horizontal gene transfer and endosymbiosis followed by fusion. Whatever the causes, cyanobacteria are the most self-sufficient of the mat organisms and were well-adapted to strike out on their own both as floating mats and as the first of the phytoplankton , which forms the basis of most marine food chains . [ 15 ]
The time at which eukaryotes first appeared is still uncertain: there is reasonable evidence that fossils dated between 1,600 million years ago and 2,100 million years ago represent eukaryotes, [ 21 ] but the presence of steranes in Australian shales may indicate that eukaryotes were present 2,700 million years ago . [ 22 ] There is still debate about the origins of eukaryotes, and many of the theories focus on the idea that a bacterium first became an endosymbiont of an anaerobic archaeon and then fused with it to become one organism. If such endosymbiosis was an important factor, microbial mats would have encouraged it. [ 2 ] There are two known variations of this scenario:
Microbial mats from ~ 1,200 million years ago provide the first evidence of life in the terrestrial realm. [ 24 ]
The Ediacara biota are the earliest widely accepted evidence of multicellular animals. Most Ediacaran strata with the "elephant skin" texture characteristic of microbial mats contain fossils, and Ediacaran fossils are hardly ever found in beds that do not contain these microbial mats. [ 25 ] Adolf Seilacher categorized the animals as: "mat encrusters", which were permanently attached to the mat; "mat scratchers", which grazed the surface of the mat without destroying it; "mat stickers", suspension feeders that were partially embedded in the mat; and "undermat miners", which burrowed underneath the mat and fed on decomposing mat material. [ 26 ]
In the Early Cambrian, however, organisms began to burrow vertically for protection or food, breaking down the microbial mats, and thus allowing water and oxygen to penetrate a considerable distance below the surface and kill the oxygen-intolerant microorganisms in the lower layers. As a result of this Cambrian substrate revolution , marine microbial mats are confined to environments in which burrowing is non-existent or negligible: [ 27 ] very harsh environments, such as hyper-saline lagoons or brackish estuaries, which are uninhabitable for the burrowing organisms that broke up the mats; [ 28 ] rocky "floors" which the burrowers cannot penetrate; [ 27 ] the depths of the oceans, where burrowing activity today is at a similar level to that in the shallow coastal seas before the revolution. [ 27 ]
Although the Cambrian substrate revolution opened up new niches for animals, it was not catastrophic for microbial mats, but it did greatly reduce their extent.
Most fossils preserve only the hard parts of organisms, e.g. shells. The rare cases where soft-bodied fossils are preserved (the remains of soft-bodied organisms and also of the soft parts of organisms for which only hard parts such as shells are usually found) are extremely valuable because they provide information about organisms that are hardly ever fossilized and much more information than is usually available about those for which only the hard parts are usually preserved. [ 29 ] Microbial mats help to preserve soft-bodied fossils by:
The ability of microbial mat communities to use a vast range of "foods" has recently led to interest in industrial uses. There have been trials of microbial mats for purifying water, both for human use and in fish farming , [ 31 ] [ 32 ] and studies of their potential for cleaning up oil spills . [ 33 ] As a result of the growing commercial potential, there have been applications for and grants of patents relating to the growing, installation and use of microbial mats, mainly for cleaning up pollutants and waste products. [ 34 ] | https://en.wikipedia.org/wiki/Microbial_mat |
Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon ) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe's ecological niche , and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles.
All microbial metabolisms can be arranged according to three principles:
1. How the organism obtains carbon for synthesizing cell mass: [ 1 ]
2. How the organism obtains reducing equivalents (hydrogen atoms or electrons) used either in energy conservation or in biosynthetic reactions:
3. How the organism obtains energy for living and growing:
In practice, these terms are almost freely combined. Typical examples are as follows:
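How these three axes combine into composite terms can be shown with a short, purely illustrative Python helper (the Greek roots are standard nomenclature; the function and dictionary names are hypothetical):

```python
# Compose a standard metabolic classification term from the three axes:
# energy source (photo-/chemo-), electron source (litho-/organo-),
# and carbon source (auto-/hetero-).
ENERGY = {"light": "photo", "chemical": "chemo"}
ELECTRONS = {"inorganic": "litho", "organic": "organo"}
CARBON = {"CO2": "auto", "organic": "hetero"}

def metabolic_term(energy: str, electrons: str, carbon: str) -> str:
    """Build the composite term, e.g. 'chemolithoautotroph'."""
    return ENERGY[energy] + ELECTRONS[electrons] + CARBON[carbon] + "troph"

# Cyanobacteria: light energy, inorganic electron donor (water), CO2 fixation.
print(metabolic_term("light", "inorganic", "CO2"))       # photolithoautotroph
# E. coli growing on glucose: all three roles filled by organic compounds.
print(metabolic_term("chemical", "organic", "organic"))  # chemoorganoheterotroph
```

The sketch makes explicit why the terms combine "almost freely": each of the three axes is chosen independently, giving eight possible composite terms in total.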
Some microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. Heterotrophic microbes live off nutrients that they scavenge from living hosts (as commensals or parasites ) or find in dead organic matter of all kinds ( saprophages ). Microbial metabolism is the main contributor to the bodily decay of all organisms after death. Many eukaryotic microorganisms are heterotrophic by predation or parasitism , properties also found in some bacteria such as Bdellovibrio (an intracellular parasite of other bacteria, causing death of its victims) and Myxobacteria such as Myxococcus (predators of other bacteria, which are killed and consumed by cooperating swarms of many single cells). Most pathogenic bacteria can be viewed as heterotrophic parasites of humans or of the other eukaryotic species they affect. Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose , chitin or lignin , which are generally indigestible to larger animals. Generally, the oxidative breakdown of large polymers to carbon dioxide ( mineralization ) requires several different organisms: one breaking down the polymer into its constituent monomers, one able to use the monomers while excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. Some organisms are even able to degrade more recalcitrant compounds such as petroleum compounds or pesticides, making them useful in bioremediation .
Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e.g. using glycolysis (also called the EMP pathway ) for sugar metabolism and the citric acid cycle to degrade acetate , producing energy in the form of ATP and reducing power in the form of NADH or quinols . These basic pathways are well conserved because they are also involved in the biosynthesis of many conserved building blocks needed for cell growth (sometimes in the reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. A well-studied example is sugar metabolism via the keto-deoxy-phosphogluconate pathway (also called the ED pathway ) in Pseudomonas . Moreover, there is a third alternative sugar-catabolic pathway used by some bacteria, the pentose phosphate pathway . The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. It is also noteworthy that the mitochondrion , the small membrane-bound intracellular organelle that is the site of eukaryotic oxygen-using energy metabolism, arose from the endosymbiosis of a bacterium related to obligate intracellular Rickettsia , and also to plant-associated Rhizobium or Agrobacterium . Therefore, it is not surprising that all mitochondriate eukaryotes share metabolic properties with these Pseudomonadota . Most microbes respire (use an electron transport chain ), although oxygen is not the only terminal electron acceptor that may be used. As discussed below, the use of terminal electron acceptors other than oxygen has important biogeochemical consequences.
Fermentation is a specific type of heterotrophic metabolism that uses organic carbon instead of oxygen as a terminal electron acceptor. This means that these organisms do not use an electron transport chain to oxidize NADH to NAD + and therefore must have an alternative method of using this reducing power and maintaining a supply of NAD + for the proper functioning of normal metabolic pathways (e.g. glycolysis). As oxygen is not required, fermentative organisms are anaerobic . Many organisms can use fermentation under anaerobic conditions and aerobic respiration when oxygen is present. These organisms are facultative anaerobes . To avoid the overproduction of NADH, obligately fermentative organisms usually do not have a complete citric acid cycle. Instead of using an ATP synthase as in respiration , ATP in fermentative organisms is produced by substrate-level phosphorylation where a phosphate group is transferred from a high-energy organic compound to ADP to form ATP. As a result of the need to produce high energy phosphate-containing organic compounds (generally in the form of Coenzyme A -esters) fermentative organisms use NADH and other cofactors to produce many different reduced metabolic by-products, often including hydrogen gas ( H 2 ). These reduced organic compounds are generally small organic acids and alcohols derived from pyruvate , the end product of glycolysis . Examples include ethanol , acetate , lactate , and butyrate . Fermentative organisms are very important industrially and are used to make many different types of food products. The different metabolic end products produced by each specific bacterial species are responsible for the different tastes and properties of each food.
Not all fermentative organisms use substrate-level phosphorylation . Instead, some organisms are able to couple the oxidation of low-energy organic compounds directly to the formation of a proton motive force or sodium-motive force and therefore ATP synthesis . Examples of these unusual forms of fermentation include succinate fermentation by Propionigenium modestum and oxalate fermentation by Oxalobacter formigenes . These reactions are extremely low-energy yielding. Humans and other higher animals also use fermentation to produce lactate from excess NADH, although this is not the major form of metabolism as it is in fermentative microorganisms.
Methylotrophy refers to the ability of an organism to use C1-compounds as energy sources. These compounds include methanol , methyl amines , formaldehyde , and formate . Several other less common substrates may also be used for metabolism, all of which lack carbon-carbon bonds. Examples of methylotrophs include the bacteria Methylomonas and Methylobacter . Methanotrophs are a specific type of methylotroph that are also able to use methane ( CH 4 ) as a carbon source, oxidizing it sequentially to methanol ( CH 3 OH ), formaldehyde ( CH 2 O ), formate ( HCOO − ), and carbon dioxide ( CO 2 ), with the first step catalyzed by the enzyme methane monooxygenase . As oxygen is required for this process, all (conventional) methanotrophs are obligate aerobes . Reducing power in the form of quinones and NADH is produced during these oxidations to generate a proton motive force and therefore ATP. Methylotrophs and methanotrophs are not considered autotrophic, because they are able to incorporate some of the oxidized methane (or other metabolites) into cellular carbon before it is completely oxidized to CO 2 (at the level of formaldehyde), using either the serine pathway ( Methylosinus , Methylocystis ) or the ribulose monophosphate pathway ( Methylococcus ), depending on the species of methylotroph.
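The sequential oxidation of methane described above can be tracked through the formal oxidation state of the carbon atom, which rises by +2 (two electrons removed) at each step. A small illustrative sketch, assuming only standard oxidation-state chemistry (not taken from the source):

```python
# Stepwise methanotrophic oxidation: each step raises carbon's
# formal oxidation state by +2, i.e. removes two electrons.
pathway = [
    ("CH4",   -4),  # methane (oxidized first, by methane monooxygenase)
    ("CH3OH", -2),  # methanol
    ("CH2O",   0),  # formaldehyde (branch point for carbon assimilation)
    ("HCOO-", +2),  # formate
    ("CO2",   +4),  # carbon dioxide
]

for (c1, s1), (c2, s2) in zip(pathway, pathway[1:]):
    assert s2 - s1 == 2, "each oxidation step transfers two electrons"
    print(f"{c1} ({s1:+d}) -> {c2} ({s2:+d})")
```

The four two-electron steps are the source of the reducing power (quinones and NADH) mentioned above, and the zero-oxidation-state intermediate, formaldehyde, is where carbon is diverted into biosynthesis via the serine or ribulose monophosphate pathways.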
In addition to aerobic methylotrophy, methane can also be oxidized anaerobically. This occurs by a consortium of sulfate-reducing bacteria and relatives of methanogenic Archaea working syntrophically (see below). Little is currently known about the biochemistry and ecology of this process.
Methanogenesis is the biological production of methane. It is carried out by methanogens, strictly anaerobic Archaea such as Methanococcus , Methanocaldococcus , Methanobacterium , Methanothermus , Methanosarcina , Methanosaeta and Methanopyrus . The biochemistry of methanogenesis is unique in nature in its use of a number of unusual cofactors, such as coenzyme M and methanofuran , to sequentially reduce methanogenic substrates to methane. [ 4 ] These cofactors are responsible (among other things) for the establishment of a proton gradient across the outer membrane thereby driving ATP synthesis. Several types of methanogenesis occur, differing in the starting compounds oxidized. Some methanogens reduce carbon dioxide (CO 2 ) to methane ( CH 4 ) using electrons (most often) from hydrogen gas ( H 2 ) chemolithoautotrophically. These methanogens can often be found in environments containing fermentative organisms. The tight association of methanogens and fermentative bacteria can be considered to be syntrophic (see below) because the methanogens, which rely on the fermentors for hydrogen, relieve feedback inhibition of the fermentors by the build-up of excess hydrogen that would otherwise inhibit their growth. This type of syntrophic relationship is specifically known as interspecies hydrogen transfer . A second group of methanogens use methanol ( CH 3 OH ) as a substrate for methanogenesis. These are chemoorganotrophic, but still autotrophic in using CO 2 as their only carbon source. The biochemistry of this process is quite different from that of the carbon dioxide-reducing methanogens. Lastly, a third group of methanogens produce both methane and carbon dioxide from acetate ( CH 3 COO − ) with the acetate being split between the two carbons. These acetate-cleaving organisms are the only chemoorganoheterotrophic methanogens. All autotrophic methanogens use a variation of the reductive acetyl-CoA pathway to fix CO 2 and obtain cellular carbon.
Syntrophy, in the context of microbial metabolism, refers to the pairing of multiple species to achieve a chemical reaction that, on its own, would be energetically unfavorable. The best studied example of this process is the oxidation of fermentative end products (such as acetate, ethanol and butyrate ) by organisms such as Syntrophomonas . Alone, the oxidation of butyrate to acetate and hydrogen gas is energetically unfavorable. However, when a hydrogenotrophic (hydrogen-using) methanogen is present the use of the hydrogen gas will significantly lower the concentration of hydrogen (down to 10 −5 atm) and thereby shift the equilibrium of the butyrate oxidation reaction under standard conditions (ΔGº') to non-standard conditions (ΔG'). Because the concentration of one product is lowered, the reaction is "pulled" towards the products and shifted towards net energetically favorable conditions (for butyrate oxidation: ΔGº'= +48.2 kJ/mol, but ΔG' = -8.9 kJ/mol at 10 −5 atm hydrogen and even lower if also the initially produced acetate is further metabolized by methanogens). Conversely, the available free energy from methanogenesis is lowered from ΔGº'= -131 kJ/mol under standard conditions to ΔG' = -17 kJ/mol at 10 −5 atm hydrogen. This is an example of interspecies hydrogen transfer. In this way, low energy-yielding carbon sources can be used by a consortium of organisms to achieve further degradation and eventual mineralization of these compounds. These reactions help prevent the excess sequestration of carbon over geologic time scales, releasing it back to the biosphere in usable forms such as methane and CO 2 .
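The quoted free-energy shifts follow directly from the relation ΔG' = ΔG°' + RT ln Q. The sketch below is not from the source; it assumes the standard stoichiometries (butyrate − + 2 H 2 O → 2 acetate − + H + + 2 H 2 for butyrate oxidation, 4 H 2 + CO 2 → CH 4 + 2 H 2 O for methanogenesis) and that all species other than H 2 are at standard concentrations:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # K

# Butyrate oxidation releases 2 H2, so Q contains p_H2 squared.
p_H2 = 1e-5    # atm; H2 kept low by hydrogenotrophic methanogens
dG_butyrate = 48.2 + R * T * math.log(p_H2 ** 2)
print(f"butyrate oxidation: {dG_butyrate:.1f} kJ/mol")   # ~ -8.9

# Methanogenesis consumes 4 H2, so Q contains 1 / p_H2**4
# and the energy yield shrinks as H2 is drawn down.
dG_methano = -131.0 + R * T * math.log(1 / p_H2 ** 4)
print(f"methanogenesis:     {dG_methano:.1f} kJ/mol")    # ~ -17
```

Setting `p_H2 = 1.0` recovers the standard values (+48.2 and −131 kJ/mol), showing that only the shared hydrogen pool flips the sign of the butyrate reaction.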
Aerobic metabolism occurs in Bacteria, Archaea and Eucarya. Although most bacterial species are anaerobic, many are facultative or obligate aerobes. The majority of archaeal species live in extreme environments that are often highly anaerobic. There are, however, several cases of aerobic archaea such as Halobacterium , Thermoplasma , Sulfolobus and Pyrobaculum . Most of the known eukaryotes carry out aerobic metabolism within their mitochondria , organelles that originated by symbiogenesis from prokaryotes . All aerobic organisms contain oxidases of the cytochrome oxidase super family, but some members of the Pseudomonadota ( E. coli and Acetobacter ) can also use an unrelated cytochrome bd complex as a respiratory terminal oxidase. [ 5 ]
While aerobic organisms during respiration use oxygen as a terminal electron acceptor , anaerobic organisms use other electron acceptors. These inorganic compounds release less energy in cellular respiration , which leads to slower growth rates than aerobes. Many facultative anaerobes can use either oxygen or alternative terminal electron acceptors for respiration depending on the environmental conditions.
Most respiring anaerobes are heterotrophs, although some do live autotrophically. All of the processes described below are dissimilative, meaning that they are used during energy production and not to provide nutrients for the cell (assimilative). Assimilative pathways for many forms of anaerobic respiration are also known.
Denitrification is the utilization of nitrate ( NO − 3 ) as a terminal electron acceptor. It is a widespread process that is used by many members of the Pseudomonadota. Many facultative anaerobes use denitrification because nitrate, like oxygen, has a high reduction potential. Many denitrifying bacteria can also use ferric iron ( Fe 3+ ) and some organic electron acceptors . Denitrification involves the stepwise reduction of nitrate to nitrite ( NO − 2 ), nitric oxide (NO), nitrous oxide ( N 2 O ), and dinitrogen ( N 2 ) by the enzymes nitrate reductase , nitrite reductase , nitric oxide reductase, and nitrous oxide reductase, respectively. Protons are transported across the membrane by the initial NADH reductase, quinones, and nitrous oxide reductase to produce the electrochemical gradient critical for respiration. Some organisms (e.g. E. coli ) only produce nitrate reductase and therefore can accomplish only the first reduction leading to the accumulation of nitrite. Others (e.g. Paracoccus denitrificans or Pseudomonas stutzeri ) reduce nitrate completely. Complete denitrification is an environmentally significant process because some intermediates of denitrification (nitric oxide and nitrous oxide) are important greenhouse gases that react with sunlight and ozone to produce nitric acid, a component of acid rain . Denitrification is also important in biological wastewater treatment where it is used to reduce the amount of nitrogen released into the environment thereby reducing eutrophication . Denitrification can be determined via a nitrate reductase test .
Dissimilatory sulfate reduction is a relatively energetically poor process used by many Gram-negative bacteria found within the Thermodesulfobacteriota , Gram-positive organisms relating to Desulfotomaculum or the archaeon Archaeoglobus . Hydrogen sulfide ( H 2 S ) is produced as a metabolic end product. Sulfate reduction requires both electron donors and energy.
Many sulfate reducers are organotrophic, using carbon compounds such as lactate and pyruvate (among many others) as electron donors , [ 6 ] while others are lithotrophic, using hydrogen gas ( H 2 ) as an electron donor. [ 7 ] Some unusual autotrophic sulfate-reducing bacteria (e.g. Desulfotignum phosphitoxidans ) can use phosphite ( HPO − 3 ) as an electron donor [ 8 ] whereas others (e.g. Desulfovibrio sulfodismutans , Desulfocapsa thiozymogenes , Desulfocapsa sulfoexigens ) are capable of sulfur disproportionation (splitting one compound into two different compounds, in this case an electron donor and an electron acceptor) using elemental sulfur (S 0 ), sulfite ( SO 2− 3 ), and thiosulfate ( S 2 O 2− 3 ) to produce both hydrogen sulfide ( H 2 S ) and sulfate ( SO 2− 4 ). [ 9 ]
All sulfate-reducing organisms are strict anaerobes. Because sulfate is energetically stable, before it can be metabolized it must first be activated by adenylation to form APS (adenosine 5'-phosphosulfate) thereby consuming ATP. The APS is then reduced by the enzyme APS reductase to form sulfite ( SO 2− 3 ) and AMP . In organisms that use carbon compounds as electron donors, the ATP consumed is accounted for by fermentation of the carbon substrate. The hydrogen produced during fermentation is actually what drives respiration during sulfate reduction.
Acetogenesis is a type of microbial metabolism that uses hydrogen ( H 2 ) as an electron donor and carbon dioxide (CO 2 ) as an electron acceptor to produce acetate, the same electron donors and acceptors used in methanogenesis (see above). Bacteria that can autotrophically synthesize acetate are called homoacetogens. Carbon dioxide reduction in all homoacetogens occurs by the acetyl-CoA pathway. This pathway is also used for carbon fixation by autotrophic sulfate-reducing bacteria and hydrogenotrophic methanogens. Often homoacetogens can also be fermentative, using the hydrogen and carbon dioxide produced as a result of fermentation to produce acetate, which is secreted as an end product.
Ferric iron ( Fe 3+ ) is a widespread anaerobic terminal electron acceptor both for autotrophic and heterotrophic organisms. Electron flow in these organisms is similar to that in electron transport chains ending in oxygen or nitrate, except that in ferric iron-reducing organisms the final enzyme in this system is a ferric iron reductase. Model organisms include Shewanella putrefaciens and Geobacter metallireducens . Since some ferric iron-reducing bacteria (e.g. G. metallireducens ) can use toxic hydrocarbons such as toluene as a carbon source, there is significant interest in using these organisms as bioremediation agents in ferric iron-rich contaminated aquifers .
Although ferric iron is the most prevalent inorganic electron acceptor, a number of organisms (including the iron-reducing bacteria mentioned above) can use other inorganic ions in anaerobic respiration. While these processes may often be less significant ecologically, they are of considerable interest for bioremediation, especially when heavy metals or radionuclides are used as electron acceptors. Examples include:
A number of organisms, instead of using inorganic compounds as terminal electron acceptors, are able to use organic compounds to accept electrons from respiration. Examples include:
TMAO is a chemical commonly produced by fish , and when reduced to TMA produces a strong odor. DMSO is a common marine and freshwater chemical which is also odiferous when reduced to DMS. Reductive dechlorination is the process by which chlorinated organic compounds are reduced to form their non-chlorinated end products. As chlorinated organic compounds are often important (and difficult to degrade) environmental pollutants, reductive dechlorination is an important process in bioremediation.
Chemolithotrophy is a type of metabolism where energy is obtained from the oxidation of inorganic compounds. Most chemolithotrophic organisms are also autotrophic. There are two major objectives to chemolithotrophy: the generation of energy (ATP) and the generation of reducing power (NADH).
Many organisms are capable of using hydrogen ( H 2 ) as a source of energy. While several mechanisms of anaerobic hydrogen oxidation have been mentioned previously (e.g. sulfate reducing- and acetogenic bacteria), the chemical energy of hydrogen can be used in the aerobic Knallgas reaction: [ 10 ]

2 H 2 + O 2 → 2 H 2 O
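As a back-of-the-envelope check (not from the source; ΔG°f = −237.1 kJ/mol for liquid water is a standard tabulated value), the energy yield of the Knallgas reaction can be computed from free energies of formation:

```python
# Standard Gibbs free energies of formation, kJ/mol (textbook values).
# Elements in their standard state (H2, O2 gas) are zero by definition.
dGf = {"H2": 0.0, "O2": 0.0, "H2O": -237.1}  # H2O as liquid water

# Knallgas reaction: 2 H2 + O2 -> 2 H2O
dG_rxn = 2 * dGf["H2O"] - (2 * dGf["H2"] + dGf["O2"])
print(f"{dG_rxn:.1f} kJ per 2 mol H2")      # -474.2
print(f"{dG_rxn / 2:.1f} kJ per mol H2")    # -237.1
```

The large negative ΔG° is what makes aerobic hydrogen oxidation such an attractive energy source compared with the anaerobic routes mentioned above.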
In these organisms, hydrogen is oxidized by a membrane-bound hydrogenase causing proton pumping via electron transfer to various quinones and cytochromes . In many organisms, a second cytoplasmic hydrogenase is used to generate reducing power in the form of NADH, which is subsequently used to fix carbon dioxide via the Calvin cycle . Hydrogen-oxidizing organisms, such as Cupriavidus necator (formerly Ralstonia eutropha ), often inhabit oxic-anoxic interfaces in nature to take advantage of the hydrogen produced by anaerobic fermentative organisms while still maintaining a supply of oxygen. [ 11 ]
Sulfur oxidation involves the oxidation of reduced sulfur compounds (such as sulfide H 2 S ), inorganic sulfur (S), and thiosulfate ( S 2 O 2− 3 ) to form sulfuric acid ( H 2 SO 4 ). A classic example of a sulfur-oxidizing bacterium is Beggiatoa , a microbe originally described by Sergei Winogradsky , one of the founders of environmental microbiology . Another example is Paracoccus . Generally, the oxidation of sulfide occurs in stages, with inorganic sulfur being stored either inside or outside of the cell until needed. This two-step process occurs because sulfide is energetically a better electron donor than inorganic sulfur or thiosulfate, allowing for a greater number of protons to be translocated across the membrane. Sulfur-oxidizing organisms generate reducing power for carbon dioxide fixation via the Calvin cycle using reverse electron flow , an energy-requiring process that pushes the electrons against their thermodynamic gradient to produce NADH. Biochemically, reduced sulfur compounds are converted to sulfite ( SO 2− 3 ) and subsequently converted to sulfate ( SO 2− 4 ) by the enzyme sulfite oxidase . [ 12 ] Some organisms, however, accomplish the same oxidation using a reversal of the APS reductase system used by sulfate-reducing bacteria (see above ). In all cases the energy liberated is transferred to the electron transport chain for ATP and NADH production. [ 12 ] In addition to aerobic sulfur oxidation, some organisms (e.g. Thiobacillus denitrificans ) use nitrate ( NO − 3 ) as a terminal electron acceptor and therefore grow anaerobically.
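Why sulfide is the best electron donor can be rationalized by simple electron bookkeeping over sulfur oxidation states (a sketch, not from the source; the oxidation states themselves are standard chemistry):

```python
# Formal oxidation state of sulfur in each substrate (per S atom);
# thiosulfate's two inequivalent sulfurs average to +2.
oxidation_state = {
    "sulfide (H2S)": -2,
    "elemental sulfur (S0)": 0,
    "thiosulfate (S2O3 2-)": 2,
    "sulfite (SO3 2-)": 4,
    "sulfate (SO4 2-)": 6,
}

# Electrons released per sulfur atom on full oxidation to sulfate (+6):
electrons_to_sulfate = {name: 6 - state for name, state in oxidation_state.items()}
for name, e in electrons_to_sulfate.items():
    print(f"{name}: {e} e-")
# Sulfide releases 8 e- per atom, the most of any substrate, consistent
# with the staged strategy of oxidizing sulfide first and stockpiling
# elemental sulfur for later oxidation.
```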
Ferrous iron is a soluble form of iron that is stable at extremely low pHs or under anaerobic conditions. Under aerobic, moderate pH conditions ferrous iron is oxidized spontaneously to the ferric ( Fe 3+ ) form and is hydrolyzed abiotically to insoluble ferric hydroxide ( Fe(OH) 3 ). There are three distinct types of ferrous iron-oxidizing microbes. The first are acidophiles , such as the bacteria Acidithiobacillus ferrooxidans and Leptospirillum ferrooxidans , as well as the archaeon Ferroplasma . These microbes oxidize iron in environments that have a very low pH and are important in acid mine drainage . The second type of microbes oxidize ferrous iron at near-neutral pH. These micro-organisms (for example Gallionella ferruginea , Leptothrix ochracea , or Mariprofundus ferrooxydans ) live at the oxic-anoxic interfaces and are microaerophiles. The third type of iron-oxidizing microbes are anaerobic photosynthetic bacteria such as Rhodopseudomonas , [ 13 ] which use ferrous iron to produce NADH for autotrophic carbon dioxide fixation. Biochemically, aerobic iron oxidation is a very energetically poor process which therefore requires large amounts of iron to be oxidized by the enzyme rusticyanin to facilitate the formation of proton motive force. Like sulfur oxidation, reverse electron flow must be used to form the NADH used for carbon dioxide fixation via the Calvin cycle.
Nitrification is the process by which ammonia ( NH 3 ) is converted to nitrate ( NO − 3 ). Nitrification is actually the net result of two distinct processes: oxidation of ammonia to nitrite ( NO − 2 ) by nitrosifying bacteria (e.g. Nitrosomonas ) and oxidation of nitrite to nitrate by the nitrite-oxidizing bacteria (e.g. Nitrobacter ). Both of these processes are extremely energetically poor leading to very slow growth rates for both types of organisms. Biochemically, ammonia oxidation occurs by the stepwise oxidation of ammonia to hydroxylamine ( NH 2 OH ) by the enzyme ammonia monooxygenase in the cytoplasm , followed by the oxidation of hydroxylamine to nitrite by the enzyme hydroxylamine oxidoreductase in the periplasm .
Electron and proton cycling are very complex but as a net result only one proton is translocated across the membrane per molecule of ammonia oxidized. Nitrite oxidation is much simpler, with nitrite being oxidized by the enzyme nitrite oxidoreductase coupled to proton translocation by a very short electron transport chain, again leading to very low growth rates for these organisms. Oxygen is required in both ammonia and nitrite oxidation, meaning that both nitrosifying and nitrite-oxidizing bacteria are aerobes. As in sulfur and iron oxidation, NADH for carbon dioxide fixation using the Calvin cycle is generated by reverse electron flow, thereby placing a further metabolic burden on an already energy-poor process.
In 2015, two groups independently showed the microbial genus Nitrospira is capable of complete nitrification ( Comammox ). [ 14 ] [ 15 ]
Anammox stands for anaerobic ammonia oxidation and the organisms responsible were relatively recently discovered, in the late 1990s. [ 16 ] This form of metabolism occurs in members of the Planctomycetota (e.g. " Candidatus Brocadia anammoxidans ") and involves the coupling of ammonia oxidation to nitrite reduction. As oxygen is not required for this process, these organisms are strict anaerobes. Hydrazine ( N 2 H 4 – rocket fuel) is produced as an intermediate during anammox metabolism. To deal with the high toxicity of hydrazine, anammox bacteria contain a hydrazine-containing intracellular organelle called the anammoxasome, surrounded by a highly compact (and unusual) ladderane lipid membrane. These lipids are unique in nature, as is the use of hydrazine as a metabolic intermediate. Anammox organisms are autotrophs although the mechanism for carbon dioxide fixation is unclear. Because of this property, these organisms could be used to remove nitrogen in industrial wastewater treatment processes. [ 17 ] Anammox has also been shown to have widespread occurrence in anaerobic aquatic systems and has been speculated to account for approximately 50% of nitrogen gas production in the ocean. [ 18 ]
In July 2020, researchers reported the discovery of a chemolithoautotrophic bacterial culture that feeds on the metal manganese, found after performing unrelated experiments, and named its bacterial species Candidatus Manganitrophus noduliformans and Ramlibacter lithotrophicus . [ 19 ] [ 20 ] [ 21 ]
Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates , lipids , and proteins . Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis. [ 22 ] Phototrophic bacteria are found in the phyla " Cyanobacteria ", Chlorobiota , Pseudomonadota , Chloroflexota , and Bacillota . [ 23 ] Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth . Because chloroplasts were derived from a lineage of the Cyanobacteria, the general principles of metabolism in these endosymbionts can also be applied to chloroplasts. [ 24 ] In addition to oxygenic photosynthesis, many bacteria can also photosynthesize anaerobically, typically using sulfide ( H 2 S ) as an electron donor to produce sulfate. Inorganic sulfur ( S 0 ), thiosulfate ( S 2 O 2− 3 ) and ferrous iron ( Fe 2+ ) can also be used by some organisms. Phylogenetically, all oxygenic photosynthetic bacteria are Cyanobacteria, while anoxygenic photosynthetic bacteria belong to the purple bacteria (Pseudomonadota), green sulfur bacteria (e.g., Chlorobium ), green non-sulfur bacteria (e.g., Chloroflexus ), or the heliobacteria (Low %G+C Gram positives). In addition to these organisms, some microbes (e.g. the Archaeon Halobacterium or the bacterium Roseobacter , among others) can utilize light to produce energy using the enzyme bacteriorhodopsin , a light-driven proton pump. However, there are no known Archaea that carry out photosynthesis. [ 23 ]
As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Pseudomonadota), thylakoid membranes ("Cyanobacteria"), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids , allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches . Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization.
Biochemically, anoxygenic photosynthesis is very different from oxygenic photosynthesis. Cyanobacteria (and by extension, chloroplasts) use the Z scheme of electron flow in which electrons eventually are used to form NADH. Two different reaction centers (photosystems) are used and proton motive force is generated both by using cyclic electron flow and the quinone pool. In anoxygenic photosynthetic bacteria, electron flow is cyclic, with all electrons used in photosynthesis eventually being transferred back to the single reaction center. A proton motive force is generated using only the quinone pool. In heliobacteria, Green sulfur, and Green non-sulfur bacteria, NADH is formed using the protein ferredoxin , an energetically favorable reaction. In purple bacteria, NADH is formed by reverse electron flow due to the lower chemical potential of this reaction center. In all cases, however, a proton motive force is generated and used to drive ATP production via an ATPase.
Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus ) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen (see below).
Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere , dinitrogen gas ( N 2 ) is generally biologically inaccessible due to its high activation energy . Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia ( NH 3 ), which is easily assimilated by all organisms. [ 25 ] These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.
Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase , responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. Examples include:
The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N 2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen. | https://en.wikipedia.org/wiki/Microbial_metabolism |
Microbial oxidation of sulfur refers to the process by which microorganisms oxidize reduced sulfur compounds to obtain energy, often supporting autotrophic carbon fixation. This process is primarily carried out by chemolithoautotrophic sulfur-oxidizing prokaryotes , which use compounds such as hydrogen sulfide (H₂S), elemental sulfur (S⁰), thiosulfate (S₂O₃²⁻), and sulfite (SO₃²⁻) as electron donors. The oxidation of these substrates is typically coupled to the reduction of oxygen (O₂) or nitrate (NO₃⁻) as terminal electron acceptors. [ 1 ] [ 2 ] Under anaerobic conditions, some sulfur-oxidizing bacteria can use alternative oxidants, and certain phototrophic sulfur oxidizers derive energy from light while using sulfide or elemental sulfur as electron sources. [ 3 ]
Several key microbial groups involved in sulfur oxidation include genera such as Beggiatoa , Thiobacillus , Acidithiobacillus , and Sulfurimonas , each adapted to specific redox conditions and environmental niches. [ 4 ] [ 5 ] [ 6 ] Metabolic pathways such as the Sox (sulfur oxidation) system, the reverse dissimilatory sulfite reductase (rDSR) pathway, and the SQR (sulfide:quinone oxidoreductase) pathway are the central mechanisms through which these microbes mediate sulfur transformations. [ 7 ] [ 8 ]
Microbial sulfur oxidation plays a major role in the biogeochemical cycling of sulfur and contributes to nutrient dynamics in environments hosting both abundant reduced sulfur species and low concentrations of oxygen. These include marine sediments, hydrothermal vents, cold seeps, sulfidic caves, oxygen minimum zones (OMZs), and stratified water columns. [ 9 ] Microbial communities are structured by local biogeochemical gradients and their sulfur-oxidizing activity links carbon and nitrogen cycling in suboxic or anoxic environments. [ 10 ] Through their metabolic versatility and ecological distribution, sulfur-oxidizing microorganisms help maintain redox balance and influence the chemistry of their surrounding environments, supporting broader ecosystem functioning. [ 11 ] [ 12 ]
The oxidation of hydrogen sulfide is a significant environmental process, particularly in the context of Earth's history, during which oceanic conditions were often characterized by very low oxygen and high sulfide concentrations. Modern analog ecosystems include deep marine basins, for instance the Black Sea, the Cariaco Trench and the Santa Barbara Basin. Other zones of the ocean that experience periodic anoxic and sulfidic conditions are the upwelling zones off the coasts of Chile and Namibia, and hydrothermal vents, which are a key source of H 2 S to the ocean. [ 13 ] Sulfur-oxidizing microorganisms (SOM) are thus restricted to upper sediment layers in these environments, where oxygen and nitrate are more readily available. The SOM may play an important yet unconsidered role in carbon sequestration , [ 14 ] since some models [ 15 ] and experiments with Gammaproteobacteria [ 16 ] [ 17 ] have suggested that sulfur-dependent carbon fixation in marine sediments could be responsible for almost half of total dark carbon fixation in the oceans. Further, they may have been critical to the evolution of eukaryotic organisms, given that sulfur metabolism is hypothesized to have driven the formation of the symbiotic associations that sustained eukaryotes (see below). [ 18 ]
Although the biological oxidation of reduced sulfur compounds competes with abiotic chemical reactions (e.g. the iron-mediated oxidation of sulfide to iron sulfide (FeS) or pyrite (FeS 2 )), [ 19 ] thermodynamic and kinetic considerations suggest that biological oxidation far exceeds the chemical oxidation of sulfide in most environments. Experimental data from the anaerobic phototroph Chlorobaculum tepidum indicates that microorganisms may enhance sulfide oxidation by three or more orders of magnitude. [ 13 ] However, the general contribution of microorganisms to total sulfur oxidation in marine sediments is still unknown. The SOM of Alphaproteobacteria , Gammaproteobacteria and Campylobacterota account for average cell abundances of 10 8 cells/m 3 in organic-rich marine sediments. [ 20 ] Considering that these organisms have a very narrow range of habitats, as explained below, a major fraction of sulfur oxidation in many marine sediments may be accounted for by these groups. [ 21 ]
Given that the maximal concentrations of oxygen, nitrate and sulfide are usually separated in depth profiles, many SOM cannot directly access their hydrogen or electron sources (reduced sulfur species) and energy sources (O 2 or nitrate) simultaneously. This limitation has led SOM to develop different morphological adaptations. [ 21 ] The large sulfur bacteria (LSB) of the family Beggiatoaceae (Gammaproteobacteria) have been used as model organisms for benthic sulfur oxidation. They are known as 'gradient organisms,' species that are indicative of hypoxic (low oxygen) and sulfidic (rich in reduced sulfur species) conditions. They internally store large amounts of nitrate and elemental sulfur to overcome the spatial gap between oxygen and sulfide. Some species of Beggiatoaceae are filamentous and can thus glide between oxic/suboxic and sulfidic environments, while the non-motile species rely on nutrient suspensions, fluxes, or attach themselves to larger particles. [ 21 ] Some aquatic, non-motile LSB are the only known free-living bacteria that utilize two distinct carbon fixation pathways: the Calvin-Benson cycle (used by plants and other photosynthetic organisms) and the reverse tricarboxylic acid cycle . [ 22 ]
Another evolutionary strategy of SOM is to form mutualistic relationships with motile eukaryotic organisms. The symbiotic SOM provides carbon and, in some cases, bioavailable nitrogen to the host, and receives enhanced access to resources and shelter in return. This lifestyle has evolved independently in sediment-dwelling ciliates , oligochaetes , nematodes , flatworms and bivalves . [ 23 ] Recently, a new mechanism for sulfur oxidation was discovered in filamentous bacteria. This mechanism, called electrogenic sulfur oxidation (e-SOx), involves the formation of multicellular bridges that connect the oxidation of sulfide in anoxic sediment layers with the reduction of oxygen or nitrate in oxic surface sediments, generating electric currents over centimeter-long distances. The so-called cable bacteria are widespread in shallow marine sediments, [ 24 ] and are believed to conduct electrons through structures inside a common periplasm of the multicellular filament. [ 25 ] This process may influence the cycling of elements at aquatic sediment surfaces, for instance, by altering iron speciation. [ 26 ] The LSB and cable bacteria are hypothesized to be restricted to undisturbed sediments with stable hydrodynamic conditions, [ 27 ] while symbiotic SOM and their hosts have mainly been identified in permeable coastal sediments. [ 21 ]
The oxidation of reduced sulfur compounds is performed exclusively by bacteria and archaea . Archaea involved in this process are aerobic and belong to the order Sulfolobales , [ 28 ] [ 29 ] characterized by acidophiles ( extremophiles that require low pHs to grow) and thermophiles (extremophiles that require high temperatures to grow). The most studied have been the genera Sulfolobus , an aerobic archaeon , and Acidianus , a facultative anaerobe (i.e. an organism that can obtain energy either by aerobic or anaerobic respiration).
Sulfur oxidizing bacteria (SOB) are aerobic, anaerobic or facultative, with most of them being either obligate autotrophs (using only carbon dioxide as a carbon source) or facultative autotrophs that can utilize either carbon dioxide or organic compounds as a source of carbon ( mixotrophs ). [ 30 ] The most abundant and studied SOB are in the family Thiobacilliaceae, found in terrestrial environments, and in the family Beggiatoaceae, found in aquatic environments. [ 30 ] Aerobic sulfur oxidizing bacteria are mainly mesophilic , growing optimally at moderate ranges of temperature and pH, although some are thermophilic and/or acidophilic. Outside of these families, other SOB described belong to the genera Acidithiobacillus , [ 31 ] Aquaspirillum , [ 32 ] Aquifex , [ 33 ] Bacillus , [ 34 ] Methylobacterium , [ 35 ] Paracoccus , Pseudomonas , [ 32 ] Starkeya , [ 36 ] Thermithiobacillus , [ 31 ] and Xanthobacter . [ 32 ] On the other hand, the cable bacteria belong to the family Desulfobulbaceae of the Deltaproteobacteria and are currently represented by two candidate genera, " Candidatus Electronema" and " Candidatus Electrothrix ." [ 37 ]
Anaerobic SOB (AnSOB) are mainly neutrophilic/mesophilic photosynthetic autotrophs , obtaining energy from sunlight but using reduced sulfur compounds instead of water as hydrogen or electron donors for photosynthesis . AnSOB include some purple sulfur bacteria (Chromatiaceae) [ 38 ] such as Allochromatium , [ 39 ] and green sulfur bacteria (Chlorobiaceae), as well as the purple non-sulfur bacteria (Rhodospirillaceae) [ 40 ] and some Cyanobacteria . [ 30 ] The AnSOB Cyanobacteria are only able to oxidize sulfide to elemental sulfur and have been identified as Oscillatoria , Lyngbya , Aphanotece, Microcoleus , and Phormidium. [ 41 ] Some AnSOB, such as the facultative anaerobes Thiobacillus spp. and Thermothrix sp., are chemolithoautotrophs , meaning that they obtain energy from the oxidation of reduced sulfur species, which is then used to fix CO 2 . Others, such as some filamentous gliding green bacteria (Chloroflexaceae), are mixotrophs. Of all the SOB, the only group that directly oxidizes sulfide to sulfate in an abundance of oxygen without accumulating elemental sulfur is the Thiobacilli . The other groups accumulate elemental sulfur, which they may oxidize to sulfate when sulfide is limited or depleted. [ 30 ]
SOB have prospective use in environmental and industrial settings for detoxifying hydrogen sulfide, soil bioremediation, and wastewater treatment . In highly basic and ionic environments, Thiobacillus thiooxidans has been observed to increase the pH of soil from pH 1.5 to a neutral pH 7.0. [ 42 ] The use of SOB in the detoxification of hydrogen sulfide can circumvent detrimental effects from the conventional oxidation methods of hydrogen peroxide (H 2 O 2 ), chlorine gas (Cl 2 ), and hypochlorite (NaClO) usage. [ 43 ] SOB of the genus Beggiatoa oxidize sulfur compounds in microaerophilic up-flow sludge beds during wastewater treatment, [ 43 ] and can be combined with nitrogen-reducing bacteria to effectively remove chemical build-ups in industrial settings. [ 44 ]
The chemolithotrophic subset of SOB are gram-negative, rod-shaped bacteria, which occur in a wide range of environments, from anoxic to oxic conditions, 4 to 90 °C, and pH 1 to 9. [ 45 ] Chemolithotrophic SOB play a key role in agricultural ecosystems by oxidizing reduced sulfur fertilizers into available forms, such as sulfate, for plants. SOB are often present in agricultural ecosystems at low densities, creating the opportunity for inoculation to increase nutrient availability. Presence of Thiobacillus thiooxidans has been shown to increase phosphorus availability in addition to the oxidation of sulfur. [ 46 ] Utilization of SOB in treating alkaline and low available-sulfur soils, such as those in Iran, could directly increase crop yields in many ecosystems around the world. [ 47 ]
Certain SOB have the potential to serve as biotic pesticides and anti-infectious agents for the control of crops. [ 48 ] The benefits of utilization have been demonstrated through the outcomes of sulfur-oxidation, including balancing sodium content as well as increasing sulfur and phosphorus availability in the soil. Increased levels of reduced sulfur compounds in acidic soil permit the growth of Streptomyces scabies and S. ipomea , both pathogens of potato plants. Presence of SOB such as Thiobacillus has decreased the growth of these bacteria, as well as root pathogens such as Rhizoctonia solani . An additional impact of SOB on crop protection includes a collateral effect of increased sulfur content in plants, resulting in resistance to Rhizoctonia .
SOB such as Hallothiobacillus and Thiobacillus have been shown to play a role in regulating the pH of mining impoundment waters in an oscillating cycle over the course of several years. [ 49 ] In the presence of oxygen, Halothiobacillus drives the ecosystem into a low pH, down to 4.3, and significantly decreases thiosulfate (S 2 O 3 2- ) levels through the sulfur oxidation (Sox) pathway. In the absence of oxygen, Thiobacillus dominates, leading to increased thiosulfate without a shift in pH. The increase in thiosulfate results from an incomplete Sox pathway coupled with the oxidation of sulfide to sulfite in the reverse dissimilatory sulfite reduction (rDsr) pathway. [ 49 ] These opposing pathways result in adverse events for downstream environments by blocking the discharge of sulfur compounds.
There are two described pathways for the microbial oxidation of sulfide: one mediated by sulfide:quinone oxidoreductase (SQR) and one mediated by flavocytochrome c sulfide dehydrogenase (FCC).
Similarly, two pathways for the oxidation of sulfite (SO 3 2- ) have been identified: direct oxidation by a sulfite:acceptor oxidoreductase, and indirect oxidation via adenosine-5′-phosphosulfate (APS) as an intermediate.
On the other hand, at least three pathways exist for the oxidation of thiosulfate (S 2 O 3 2- ): the Sox multienzyme pathway, the tetrathionate intermediate (S4I) pathway, and a branched thiosulfate oxidation pathway.
In any of these pathways, oxygen is the preferred electron acceptor , but in oxygen-limited environments, nitrate , oxidized forms of iron and even organic matter are used instead. [ 61 ]
Cyanobacteria normally perform oxygenic photosynthesis by utilizing water as an electron donor. However, in the presence of sulfide, oxygenic photosynthesis is inhibited, and some cyanobacteria can perform anoxygenic photosynthesis by the oxidation of sulfide to thiosulfate by using Photosystem I with sulfite as a possible intermediate sulfur compound. [ 62 ] [ 63 ]
Sulfide oxidation can proceed under aerobic or anaerobic conditions. Aerobic sulfide-oxidizing bacteria usually oxidize sulfide to sulfate and are obligate or facultative chemolithoautotrophs. The latter can grow as heterotrophs , obtaining carbon from organic sources, or as autotrophs, using sulfide as the electron donor (energy source) for CO 2 fixation. [ 30 ] The oxidation of sulfide can proceed aerobically by two different mechanisms: substrate-level phosphorylation , which is dependent on adenosine monophosphate (AMP), and oxidative phosphorylation independent of AMP, [ 64 ] which has been detected in several Thiobacilli ( T. denitrificans , T. thioparus, T. novellus and T. neapolitanus ), as well as in Acidithiobacillus ferrooxidans . [ 65 ] The archaeon Acidianus ambivalens appears to possess both an ADP-dependent and an ADP-independent pathway for the oxidation of sulfide. [ 66 ] Similarly, both mechanisms operate in the chemoautotroph Thiobacillus denitrificans , [ 67 ] which can oxidize sulfide to sulfate anaerobically by utilizing nitrate—which is reduced to dinitrogen (N 2 )—as a terminal electron acceptor. [ 68 ] Two other anaerobic strains that can perform a similar process were identified as similar to Thiomicrospira denitrificans and Arcobacter . [ 69 ]
The heterotrophic SOB include species of Beggiatoa that can grow mixotrophically, using sulfide to obtain energy (autotrophic metabolism) or to eliminate metabolically formed hydrogen peroxide in the absence of catalase (heterotrophic metabolism). [ 70 ] Other organisms, such as the bacterium Sphaerotilus natans [ 71 ] and the fungus Alternaria , [ 72 ] are able to oxidize sulfide to elemental sulfur by means of the rDsr pathway. [ 73 ]
Some Bacteria and Archaea can aerobically oxidize elemental sulfur to sulfuric acid . [ 30 ] Acidithiobacillus ferrooxidans and Thiobacillus thioparus can oxidize sulfur to sulfite by means of an oxygenase enzyme, although it is hypothesized that an oxidase could also serve as an energy-saving mechanism. [ 74 ] In the anaerobic oxidation of elemental sulfur, it is hypothesized that the Sox pathway plays a significant role, although the complexity of this pathway is not yet thoroughly understood. [ 56 ] Thiobacillus denitrificans uses oxidized forms of nitrogen as terminal electron acceptors instead of oxygen. [ 75 ]
Most of the chemosynthetic autotrophic bacteria that can oxidize elemental sulfur to sulfate are also able to oxidize thiosulfate to sulfate as a source of reducing power for carbon dioxide assimilation. However, the mechanisms that these bacteria utilize may vary, since some species, such as the photosynthetic purple bacteria, transiently accumulate extracellular elemental sulfur during the oxidation of tetrathionate, while other species, such as the green sulfur bacteria, do not. [ 30 ] A direct oxidation reaction ( T. versutus [ 76 ] ), as well as others that involve sulfite ( T. denitrificans ) and tetrathionate ( A. ferrooxidans , A. thiooxidans, and Acidiphilum acidophilum [ 77 ] ) as intermediate compounds, have been proposed. Some mixotrophic bacteria only oxidize thiosulfate to tetrathionate. [ 30 ]
The mechanism of bacterial oxidation of tetrathionate is still unclear and may involve sulfur disproportionation , during which both sulfide and sulfate are produced from reduced sulfur species and hydrolysis reactions. [ 30 ]
The fractionation of sulfur and oxygen isotopes during microbial sulfide oxidation (MSO) has been studied to assess its potential as a proxy to differentiate it from the abiotic oxidation of sulfur. [ 78 ] The light isotopes of the elements that are most commonly found in organic molecules, such as 12 C, 16 O, 1 H, 14 N and 32 S, form bonds that are broken slightly more easily than bonds between the corresponding heavy isotopes, 13 C, 18 O, 2 H, 15 N and 34 S. Because there is a lower energetic cost associated with the use of light isotopes, enzymatic processes usually discriminate against the heavy isotopes, and, as a consequence, biological fractionations of isotopes are expected between the reactants and the products. A normal kinetic isotope effect is that in which the products are depleted significantly in the heavy isotope relative to the reactants (low heavy isotope to light isotope ratio), and although this is not always the case, the study of isotope fractionations between enzymatic processes may enable tracing of the source of the product.
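The per mil (‰) fractionation values discussed below follow standard delta notation. A minimal sketch of the arithmetic, using the conventional VCDT reference ratio for sulfur; the example numbers are illustrative, not taken from the studies cited:

```python
# Isotope ratios are reported in delta notation, in per mil (‰),
# relative to a standard (VCDT for sulfur):
#   δ34S = (R_sample / R_standard - 1) * 1000, where R = 34S/32S.
VCDT_R = 0.0441626  # 34S/32S of the Vienna Canyon Diablo Troilite standard

def delta34S(r_sample, r_standard=VCDT_R):
    """Delta value in per mil for a measured 34S/32S ratio."""
    return (r_sample / r_standard - 1) * 1000

def fractionation_epsilon(delta_product, delta_reactant):
    """Approximate fractionation ε ≈ δ_product − δ_reactant (in ‰).
    Negative values mean the product is depleted in the heavy isotope,
    as in a normal kinetic isotope effect."""
    return delta_product - delta_reactant

# Illustrative numbers: sulfate 5‰ lighter than the sulfide it came from.
print(fractionation_epsilon(-3.0, 2.0))  # -5.0
```

A sample with exactly the standard ratio has δ34S = 0‰ by construction, which is why depletions and enrichments are read directly as negative and positive delta differences.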
The formation of sulfate in aerobic conditions entails the incorporation of four oxygen atoms from water, and when coupled with dissimilatory nitrate reduction (DNR)—the preferential reduction pathway under anoxic conditions—this process can involve an additional contribution of oxygen atoms from nitrate. The δ 18 O value of the newly formed sulfate thus depends on the δ 18 O value of the water, the isotopic fractionation associated with the incorporation of oxygen atoms from water into sulfate, and a potential exchange of oxygen atoms between sulfur and nitrogen intermediates and water. [ 79 ] MSO has been found to produce small fractionations in 18 O compared to water (~5‰). Given the very small fractionation of 18 O that usually accompanies MSO, the relatively higher depletions in 18 O of the sulfate produced by MSO coupled to DNR (−1.8 to −8.5‰) suggest a kinetic isotope effect in the incorporation of oxygen from water into sulfate and the role of nitrate as a potential alternative source of light oxygen. [ 79 ] The fractionations of oxygen produced by sulfur disproportionation from elemental sulfur have been found to be higher, with reported values from 8 to 18.4‰, which suggests a kinetic isotope effect in the pathways involved in the oxidation of elemental sulfur to sulfate, although more studies are necessary to determine the specific steps and conditions that favor this fractionation. The table below summarizes the reported fractionations of oxygen isotopes from MSO in different organisms and conditions.
[Table: reported 18 O fractionations from MSO in different organisms and conditions. Most organism names were lost during extraction; surviving entries include anaerobic cultures with no temperature provided, a chemolithotroph with −8.5 to −2.1‰ (21 °C), and chemolithotrophic "cable bacteria" enrichment cultures with 12.7 to 17.9‰ (28 °C).]
Aerobic MSO generates depletions in the 34 S of sulfate that have been found to be as small as −1.5‰ and as large as −18‰. For most microorganisms and oxidation conditions, only small fractionations accompany either the aerobic or anaerobic oxidation of sulfide, elemental sulfur, thiosulfate and sulfite to elemental sulfur or sulfate. The phototrophic oxidation of sulfide to thiosulfate under anoxic conditions also generates negligible fractionations. Although the change in sulfur isotopes is usually small during MSO, MSO oxidizes reduced forms of sulfur which are usually depleted in 34 S compared to seawater sulfate. Therefore, large-scale MSO can also significantly affect the sulfur isotopes of a reservoir. It has been proposed that the observed global average S-isotope fractionation is around −50‰, instead of the theoretically predicted value of −70‰, because of MSO. [ 85 ]
In the chemolithotrophs Thiobacillus denitrificans and Sulfurimonas denitrificans , MSO coupled with DNR has the effect of inducing the SQR and Sox pathways, respectively. In both cases, a small fractionation in the 34 S of the sulfate, lower than −4.3‰, has been measured. Sulfate depletion in 34 S from MSO could be used to trace sulfide oxidation processes in the environment, although a distinction between the SQR and Sox pathways is not currently possible. [ 79 ] The depletion produced by MSO coupled to DNR is similar to the depletion of up to −5‰ estimated for the 34 S in the sulfide produced from rDsr. [ 86 ] [ 87 ] In contrast, disproportionation under anaerobic conditions generates sulfate enriched in 34 S up to 9‰ and ~34‰ from sulfide and elemental sulfur, respectively. The isotope effect of disproportionation is, however, limited by the rates of sulfate reduction and MSO. [ 88 ] Similar to the fractionation of oxygen isotopes, the larger fractionations in sulfate from the disproportionation of elemental sulfur point to a key step or pathway critical for inducing this large kinetic isotope effect. The table below summarizes the reported fractionations of sulfur isotopes from MSO in different organisms and conditions.
[Table: reported sulfur isotope fractionations (product/reactant) from MSO in different organisms and conditions. Most organism names were lost during extraction; surviving entries include Calothrix sp. (Cyanobacteria), Desulfocapsa thiozymogenes and other chemolithotrophic "cable bacteria", Desulfobulbus propionicus (a chemoorganotroph), and marine enrichments and sediments, with reported fractionations ranging from −2.9‰ to 33.9‰ at 28–35 °C.] | https://en.wikipedia.org/wiki/Microbial_oxidation_of_sulfur
Microbial pathogenesis is a field of microbiology that started at least as early as 1988, with the identification of the triune Falkow's criteria , also known as molecular Koch's postulates . [ 1 ] [ 2 ] In 1996, Fredricks and Relman proposed a seven-point list of "Molecular Guidelines for Establishing Microbial Disease Causation," because of "the discovery of nucleic acids " by Watson and Crick "as the source of genetic information and as the basis for precise characterization of an organism ." The subsequent development of the "ability to detect and manipulate these nucleic acid molecules in microorganisms has created a powerful means for identifying previously unknown microbial pathogens and for studying the host-parasite relationship ." [ 2 ]
In 1996, Fredricks and Relman suggested the following postulates for the novel field of microbial pathogenesis. [ 2 ] [ 3 ]
This microbiology -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Microbial_pathogenesis |
Microbial phylogenetics is the study of the manner in which various groups of microorganisms are genetically related. This helps to trace their evolution . [ 1 ] [ 2 ] To study these relationships biologists rely on comparative genomics , as physiology and comparative anatomy are not possible methods. [ 3 ]
Microbial phylogenetics emerged as a field of study in the 1960s, when scientists started to create genealogical trees based on differences in the order of amino acids of proteins and nucleotides of genes instead of using comparative anatomy and physiology. [ 4 ] [ 5 ]
One of the most important figures in the early stage of this field is Carl Woese , whose research focused on Bacteria , looking at RNA instead of proteins. More specifically, he decided to compare the small subunit ribosomal RNA (16S rRNA) oligonucleotides. Matching oligonucleotides in different bacteria could be compared to one another to determine how closely the organisms were related. In 1977, after collecting and comparing 16S rRNA fragments for almost 200 species of bacteria, Woese and his team concluded that Archaebacteria were not part of Bacteria but completely independent organisms. [ 3 ] [ 6 ]
In the 1980s microbial phylogenetics entered its golden age, as the techniques for sequencing RNA and DNA improved greatly. [ 7 ] [ 8 ] For example, comparison of the nucleotide sequences of whole genes was facilitated by the development of DNA cloning, making it possible to create many copies of sequences from minute samples. The invention of the polymerase chain reaction (PCR) had an enormous impact on microbial phylogenetics. [ 9 ] [ 10 ] All these new techniques led to the formal proposal of the three domains of life: Bacteria , Archaea (Woese himself proposed this name to replace the old name Archaebacteria), and Eukarya, arguably one of the key milestones in the history of taxonomy. [ 11 ]
One of the intrinsic problems of studying microbial organisms was the dependence of such studies on pure cultures in a laboratory. Biologists tried to overcome this limitation by sequencing rRNA genes obtained from DNA isolated directly from the environment. [ 12 ] [ 13 ] This technique made it possible to fully appreciate that bacteria not only have the greatest diversity but also constitute the greatest biomass on Earth. [ 14 ]
In the late 1990s sequencing of genomes from various microbial organisms started, and by 2005, 260 complete genomes had been sequenced, resulting in the classification of 33 eukaryotes, 206 eubacteria, and 21 archaeons. [ 15 ]
In the early 2000s, scientists started creating phylogenetic trees based not on rRNA , but on other genes with different functions (for example, the gene for the enzyme RNA polymerase [ 16 ] ). The resulting genealogies differed greatly from the ones based on rRNA. These gene histories were so different from one another that the only hypothesis that could explain the divergences was a major influence of horizontal gene transfer (HGT), a mechanism which permits a bacterium to acquire one or more genes from a completely unrelated organism. [ 17 ] HGT explains why similarities and differences in some genes have to be carefully studied before being used as a measure of genealogical relationship for microbial organisms. [ 18 ]
Studies aimed at understanding how widespread HGT is suggested that the ease with which genes are transferred among bacteria made it impossible to apply ‘the biological species concept’ to them. [ 19 ] [ 20 ]
Since Darwin , every phylogeny for every organism has been represented in the form of a tree. Nonetheless, due to the great role that HGT plays for microbes, some evolutionary microbiologists have suggested abandoning this classical view in favor of a representation of genealogies more closely resembling a web, also known as a network. However, there are some issues with this network representation, such as the inability to precisely establish the donor organism for an HGT event and the difficulty of determining the correct path across organisms when multiple HGT events have happened. Therefore, there is still no consensus among biologists on which representation is a better fit for the microbial world. [ 21 ]
Most microbial taxa have never been cultivated or experimentally characterized. Taxonomy and phylogeny are essential tools for organizing the diversity of life. Collecting gene sequences, aligning those sequences based on homologies, and then using models of mutation to infer evolutionary history are common methods to estimate microbial phylogenies. [ 22 ] Small subunit rRNA (SSU rRNA) has revolutionized microbial classification since the 1970s and has since become the most sequenced gene. [ 23 ] Phylogenetic inferences depend on the genes chosen; for example, the 16S rRNA gene is commonly selected to investigate Bacteria and Archaea, while studies of microbial eukaryotes most commonly use the 18S rRNA gene. [ 24 ]
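The workflow just described, aligning sequences and then applying a model of mutation, can be illustrated with the simplest such model, the Jukes–Cantor correction. The sequences below are invented toy fragments, not real 16S data:

```python
import math

def p_distance(seq_a, seq_b):
    """Fraction of differing sites between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

def jukes_cantor(p):
    """Jukes-Cantor corrected evolutionary distance:
    d = -3/4 * ln(1 - 4p/3), valid for p < 0.75."""
    return -0.75 * math.log(1 - 4 * p / 3)

# Toy aligned fragments (invented for illustration): 2 mismatches / 20 sites.
a = "ACGTACGTACGTACGTACGT"
b = "ACGTACGAACGTACGTACGA"
p = p_distance(a, b)            # 0.1
print(round(jukes_cantor(p), 4))  # 0.1073
```

The correction inflates the raw mismatch fraction to account for multiple substitutions at the same site; real pipelines use richer substitution models, but the structure of the inference is the same.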
Phylogenetic comparative methods ( PCMs ) are commonly utilized to compare multiple traits across organisms. Within the scope of microbiome studies, PCMs are not commonly used; however, recent studies have been successful in identifying genes associated with colonization of the human gut. [ 22 ] This challenge was addressed by measuring the statistical association between a species harboring a gene and the probability that the species is present in the gut microbiome. The analyses showcase the combination of shotgun metagenomics paired with phylogenetically aware models. [ 25 ]
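The kind of gene-presence versus habitat-presence association described above can be sketched naively as a 2×2 contingency analysis. The species data here are invented, and a real PCM would additionally model the phylogeny, since related species are not independent observations:

```python
# Toy presence/absence data for 8 hypothetical species:
has_gene = [1, 1, 1, 0, 0, 1, 0, 0]
in_gut   = [1, 1, 0, 0, 0, 1, 1, 0]

# Naive 2x2 contingency counts (a phylogenetically aware model would
# correct for shared ancestry among the species).
a = sum(g and h for g, h in zip(has_gene, in_gut))              # gene & gut
b = sum(g and not h for g, h in zip(has_gene, in_gut))          # gene, no gut
c = sum(h and not g for g, h in zip(has_gene, in_gut))          # gut, no gene
d = sum((not g) and (not h) for g, h in zip(has_gene, in_gut))  # neither

# Haldane-corrected odds ratio as a simple association measure.
odds_ratio = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
print(round(odds_ratio, 2))
```

An odds ratio well above 1 suggests the gene co-occurs with gut colonization; the phylogenetic correction matters because a clade of close relatives can inflate such a signal spuriously.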
This method is commonly used for estimation of genetic and metabolic profiles of extant communities using a set of reference genomes, commonly performed with PICRUSt (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) in microbiome studies. [ 22 ] PICRUSt is a computational approach capable of predicting the functional composition of a metagenome from marker-gene data and a database of reference genomes. To predict which gene families are present, PICRUSt uses an extended ancestral-state reconstruction algorithm and then combines the gene families to estimate a composite metagenome. [ 26 ]
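After the ancestral-state reconstruction step, the final combination step is essentially a copy-number-weighted sum over taxa. A minimal sketch of that arithmetic with hypothetical taxa, gene families, and counts (all names and numbers are made up, and real PICRUSt runs operate on full reference tables):

```python
# Hypothetical inputs: 16S read counts per taxon in one sample, each
# taxon's 16S copy number, and inferred per-genome gene-family copies.
taxon_16s_counts = {"TaxonA": 120, "TaxonB": 30}
taxon_16s_copies = {"TaxonA": 4, "TaxonB": 1}
gene_copies = {
    "TaxonA": {"K00001": 2, "K00002": 0},
    "TaxonB": {"K00001": 1, "K00002": 3},
}

def predict_metagenome(counts, marker_copies, genes):
    """Normalize marker abundances by marker copy number, then sum each
    gene family's contribution across taxa (PICRUSt-style combination)."""
    meta = {}
    for taxon, n in counts.items():
        genomes = n / marker_copies[taxon]  # estimated genome equivalents
        for fam, copies in genes[taxon].items():
            meta[fam] = meta.get(fam, 0.0) + genomes * copies
    return meta

print(predict_metagenome(taxon_16s_counts, taxon_16s_copies, gene_copies))
```

The copy-number normalization matters: TaxonA contributes 120 reads but only 30 genome equivalents, so its gene families are not overcounted fourfold.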
Phylogenetic variables are constructed using features of the phylogeny to summarize and contrast data about the species in the phylogenetic tree. Microbiome datasets can be simplified using phylogenetic variables by reducing the dimensions of the data to a few variables carrying biological information. [ 22 ] Recent methods such as PhILR and phylofactorization address the challenges of phylogenetic variable analysis. The PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges, incorporating microbial evolutionary models with the isometric log-ratio transform. [ 27 ] Phylofactorization is a dimensionality-reducing tool used to identify edges in the phylogeny from which putative functional ecological traits may have arisen. [ 28 ]
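The isometric log-ratio transform underlying PhILR assigns one "balance" coordinate to each internal node of the tree, contrasting the geometric means of the relative abundances on the two sides of the split. A library-free sketch of a single balance (the 4-taxon composition and the split are hypothetical):

```python
import math

def geometric_mean(xs):
    """Geometric mean of strictly positive values."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def balance(left, right):
    """ILR balance at a tree split with r parts on the left and s on the
    right: b = sqrt(r*s/(r+s)) * ln(gmean(left)/gmean(right))."""
    r, s = len(left), len(right)
    return math.sqrt(r * s / (r + s)) * math.log(
        geometric_mean(left) / geometric_mean(right))

# Relative abundances for 4 taxa (must be positive; zeros need a
# pseudocount in practice), split by a hypothetical internal node:
composition = [0.4, 0.3, 0.2, 0.1]
left, right = composition[:2], composition[2:]
print(round(balance(left, right), 4))  # 0.8959
```

A positive balance means the left clade is, on geometric average, more abundant than the right; computing one such coordinate per internal node yields the full PhILR coordinate system.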
Inference in phylogenetics requires the assumption of common ancestry, or homology, but when this assumption is violated the signal can be disrupted by noise. [ 23 ] Microbial traits can be unrelated to ancestry due to horizontal gene transfer, causing the taxonomic composition to reveal little about the function of a system. [ 29 ] | https://en.wikipedia.org/wiki/Microbial_phylogenetics