id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
8,255,875 | https://en.wikipedia.org/wiki/Tetrakis%28triphenylphosphine%29platinum%280%29 | Tetrakis(triphenylphosphine)platinum(0) is the chemical compound with the formula Pt(P(C6H5)3)4, often abbreviated Pt(PPh3)4. The bright yellow compound is used as a precursor to other platinum complexes.
Structure and behavior
The molecule is tetrahedral, with Td point group symmetry, as expected for a four-coordinate complex of a metal with the d10 configuration. Even though this complex follows the 18-electron rule, it dissociates triphenylphosphine in solution to give the 16 e− derivative containing only three PPh3 ligands:
Pt(PPh3)4 → Pt(PPh3)3 + PPh3
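A quick electron count makes the 18 e−/16 e− bookkeeping explicit (a sketch using the standard neutral-atom counting convention; the arithmetic below is supplied here, not taken from the article):

```latex
% Neutral-atom electron counting: Pt(0) is a group-10 metal (10 valence electrons);
% each PPh3 ligand is a 2-electron L-type donor.
\mathrm{Pt(PPh_3)_4}:\ 10 + 4\times 2 = 18\ \mathrm{e^-}
\qquad
\mathrm{Pt(PPh_3)_3}:\ 10 + 3\times 2 = 16\ \mathrm{e^-}
```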
Synthesis and reactions
The complex is typically prepared in a one-pot reaction from potassium tetrachloroplatinate(II). Reduction of this platinum(II) species with alkaline ethanol in the presence of excess triphenylphosphine affords the product as a precipitate. The reaction occurs in two distinct steps. In the first step, PtCl2(PPh3)2 is generated. In the second step, this platinum(II) complex is reduced. The overall synthesis can be summarized as:
K2[PtCl4] + 2KOH + 4PPh3 + C2H5OH → Pt(PPh3)4 + 4KCl + CH3CHO + 2H2O
Pt(PPh3)4 reacts with oxidants to give platinum(II) derivatives:
Pt(PPh3)4 + Cl2 → cis-PtCl2(PPh3)2 + 2 PPh3
Mineral acids give the corresponding hydride complexes:
Pt(PPh3)4 + HCl → trans-PtCl(H)(PPh3)2 + 2 PPh3
The reaction with oxygen affords a dioxygen complex:
Pt(PPh3)4 + O2 → Pt(η2-O2)(PPh3)2 + 2 PPh3
This complex is a precursor to the ethylene complex; reduction with sodium borohydride in the presence of ethylene releases the dioxygen ligand:
Pt(η2-O2)(PPh3)2 + C2H4 + NaBH4 → Pt(η2-C2H4)(PPh3)2 + "NaBH2(OH)2"
References
Platinum compounds
Catalysts
Homogeneous catalysis
Triphenylphosphine complexes
Coordination complexes | Tetrakis(triphenylphosphine)platinum(0) | [
"Chemistry"
] | 506 | [
"Catalysis",
"Catalysts",
"Coordination complexes",
"Coordination chemistry",
"Homogeneous catalysis",
"Chemical kinetics"
] |
8,261,048 | https://en.wikipedia.org/wiki/Antitermination | In molecular biology, antitermination is the mechanism by which a prokaryotic cell prevents premature termination during the transcription of RNA. It occurs when the RNA polymerase ignores the termination signal and continues elongating its transcript until a second signal is reached. Antitermination provides a mechanism whereby one or more genes at the end of an operon can be switched either on or off, depending on the polymerase either recognizing or not recognizing the termination signal.
Antitermination is used by some phages to regulate progression from one stage of gene expression to the next. The lambda gene N codes for an antitermination protein (pN) that is necessary to allow RNA polymerase to read through the terminators located at the ends of the immediate early genes. Another antitermination protein, pQ, is required later in phage infection. pN and pQ act on RNA polymerase as it passes specific sites. These sites are located at different relative positions in their respective transcription units.
Antitermination may be a regulated event
Antitermination was discovered in bacteriophage infections. A common feature in the control of phage infection is that very few of the phage genes can be transcribed by the bacterial host RNA polymerase. Among these genes, however, are regulators whose products allow the next set of phage genes to be expressed. One of these types of regulator is an antitermination protein. In the absence of the antitermination protein, RNA polymerase terminates at the terminator. When the antitermination protein is present, RNA polymerase continues past the terminator.
The best characterized example of antitermination is provided by lambda phage, in which the phenomenon was discovered. It is used at two stages of phage expression. The antitermination protein produced at each stage is specific for the particular transcription units that are expressed at that stage.
The host RNA polymerase initially transcribes two genes, which are called the immediate early genes (N and cro). The transition to the next stage of expression is controlled by preventing termination at the ends of the immediate early genes, with the result that the delayed early genes are expressed. The antitermination protein pN acts specifically on the immediate early transcription units. Later during infection, another antitermination protein pQ acts specifically on the late transcription unit, to allow its transcription to continue past a termination sequence.
The different specificities of pN and pQ establish an important general principle: RNA polymerase interacts with transcription units in such a way that an ancillary factor can sponsor antitermination specifically for some transcripts. Termination can be controlled with the same sort of precision as initiation.
The antitermination activity of pN is highly specific, but the antitermination event is not determined by the terminators tL1 and tR1; the recognition site needed for antitermination lies upstream in the transcription unit, that is, at a different place from the terminator site at which the action eventually is accomplished.
The recognition sites required for pN action are called nut (for N utilization). The sites responsible for determining leftward and rightward antitermination are described as nutL and nutR, respectively.
When pN recognizes the nut site, it forms a persistent antitermination complex in cooperation with a number of E. coli host proteins. These include three host Nus proteins: NusA, NusB, and NusG. NusA is an interesting protein. By itself in E. coli, it is part of the transcription termination system. However, when co-opted by N, it participates in antitermination. The complex must act on RNA polymerase to ensure that the enzyme can no longer respond to the terminator. The variable locations of the nut sites indicate that this event is linked neither to initiation nor to termination, but can occur as RNA polymerase elongates the RNA chain past the nut site. Phages that are related to lambda have different N genes and different antitermination specificities. The region on the phage genome in which the nut sites lie has a different sequence in each of these phages, and each phage must therefore have characteristic nut sites recognized specifically by its own pN. Each of these pN products must have the same general ability to interact with the transcription apparatus in an antitermination capacity, but each product also has a different specificity for the sequence of DNA that activates the mechanism.
Processive antitermination
Antitermination in lambda is induced by two quite distinct mechanisms. The first is the result of interaction between lambda N protein and its targets in the early phage transcripts, and the second is the result of an interaction between the lambda Q protein and its target in the late phage promoter. We describe the N mechanism first. Lambda N, a small basic protein of the arginine-rich motif (ARM) family of RNA binding proteins, binds to a 15-nucleotide (nt) stem-loop called BOXB. (We will capitalize the names of sites in RNA and italicize the names of the corresponding DNA sequences; e.g., BOXB and boxB.) boxB is found twice in the lambda chromosome, once in each of the two early operons. It is close to the start point of the PL operon transcript and just downstream of the first translated gene of the PR operon. Neither the distance between the transcription start site and boxB, nor the nature of the promoter (at least in the case of sigma-70-dependent promoters), nor the nature of the terminator is relevant to N action. Although the boxB sequence is not well conserved in other bacteriophages of the lambda family, most of these phages encode proteins that are analogous to lambda N and have sequences capable of forming BOXB-like structures in their PL and PR operons. In some cases, it has been shown that these structures are recognized by the cognate N analogs. It is believed that this accounts for the phage specificity of N-mediated antitermination.
Processive antitermination requires the complete antitermination complex. The assembly of NusB, S10, and NusG onto the core complex involves nt 2 to 7 of lambda BOXA (CGCUCUUACACA), as well as the carboxyl-terminal region of N, which interacts with RNAP. The role of NusG in the N antitermination reaction is not clear. NusG binds to termination factor Rho and to RNAP. It stimulates the rate of transcription elongation and is required for the activity of certain Rho-dependent terminators. NusG is a component of the complete antitermination complex and enhances N antitermination in vitro. However, alteration of lambda BOXA to a variant called BOXA consensus (CGCUCUUUAACA) allows NusB and S10 to assemble in the absence of NusG. Furthermore, depletion of NusG has no effect on lambda N antitermination in vivo, and unlike nusA, nusB, and nusE, no point mutations in nusG that block N activity have been isolated. A NusG homolog, RfaH, enhances elongation of several transcripts in E. coli and S. typhimurium. The possibility that RfaH and NusG are redundant for N antitermination has not yet been tested, although for several other functions, the two proteins are not interchangeable.
Processive antitermination can be mediated by RNA as well as proteins. Coliphage HK022, alone among the known lambdoid phages, does not encode an analog to lambda N. Instead, it promotes antitermination of early phage transcription through the direct action of transcribed sequences called put (for polymerase utilization) sites. There are two closely related put sites, one located in the PL operon and the other located in the PR operon, roughly corresponding to the positions of the nut sequences in lambda and in other lambda relatives. put sites act in cis to promote readthrough of downstream terminators in the absence of all HK022 proteins. The put transcripts are predicted to form two stem-loops separated by a single unpaired nucleotide. This prediction is supported by mutational studies and the pattern of sensitivity of the two RNAs to cleavage with single- and double-strand-specific endoribonucleases. RNA structure is critical to antitermination because mutations that prevent the formation of base pairs in the stems reduce function, and these mutations can be suppressed by additional mutations that restore base pairing. Like lambda N and Q, the PUT sequences suppress polymerase pausing and promote processive antitermination in a purified in vitro transcription system. In contrast to lambda N, no phage or auxiliary bacterial factors are required. The only mutations known to block PUT-mediated antitermination change highly conserved amino acids located in a cysteine-rich amino-proximal domain of the RNAP beta' subunit. Strains carrying these mutations are unable to support lytic growth of HK022 but are normal in all other respects tested, including lytic growth of lambda and other lambda relatives. The phage-restricted phenotypes conferred by these mutations suggest that they alter a domain of RNAP-beta’ that interacts specifically with nascent PUT RNA in the transcription elongation complex, but this idea has not been directly tested. The stability of the putative PUT-RNAP interaction and the nature of the PUT-induced modification to the elongation complex are unknown.
Processive antitermination was first discovered in a bacteriophage, but examples have since been found in bacterial operons. The E. coli rrn operons are regulated by an antitermination mechanism that is dependent on sites that are closely related to lambda boxA and located promoter proximal to the 16S and 23S structural genes in each operon. The sequences of the rrn BOXA sites are more similar to the bacteriophage consensus than is that of lambda, and they bind NusB-S10 more efficiently. Although stem-loop structures analogous to BOXB are found promoter proximal to the BOXA sites, they are not essential for antitermination. An rrn BOXA sequence confers full antitermination activity against Rho-dependent but not against intrinsic terminators. BOXA also increases the rate of transcription elongation by RNAP. Point mutations in BOXA induce premature transcription termination. rrn antitermination requires NusB in vivo, as shown by a NusB depletion experiment. NusA stimulates the elongation rate of rrn RNA chains carrying BOXA. A role for NusA is further suggested by the observation that the nusA10 (Cs) mutation inhibits both antitermination and the rate of transcription elongation in an rrn operon. The role of other Nus factors in rrn regulation in vivo is not clear. In vitro, an antitermination complex that includes NusA, NusB, S10, and NusG forms at the BOXA sequence of rrnG, but these components are not sufficient for antitermination by themselves. An additional factor or factors that can be supplied by a cellular extract are required, but their identities are unknown.
References
Krebs, J. E., Goldstein, E. S., Lewin, B., & Kilpatrick, S. T. (2010). Antitermination may be a regulated event. In Lewin's essential genes (2nd ed., pp. 287–291). Sudbury, Massachusetts: Jones and Bartlett Publishers
Weisberg, R. A., & Gottesman, M. E. (1999). Processive Antitermination. Journal of Bacteriology, 181 (2), 359-367
External links
American Society for Microbiology Journal of Bacteriology
Lewin's essential genes on Google Books
Gene expression | Antitermination | [
"Chemistry",
"Biology"
] | 2,520 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
8,263,115 | https://en.wikipedia.org/wiki/Penning%20ionization | Penning ionization is a form of chemi-ionization, an ionization process involving reactions between neutral atoms or molecules.
The Penning effect is put to practical use in applications such as gas-discharge neon lamps and fluorescent lamps, where the lamp is filled with a Penning mixture to improve the electrical characteristics of the lamps.
History
The process is named after the Dutch physicist Frans Michel Penning who first reported it in 1927. Penning started to work at the Philips Natuurkundig Laboratorium at Eindhoven to continue the investigation of electric discharge on rare gases. Later, he started measurements on the liberation of electrons from metal surfaces by positive ions and metastable atoms, and especially on the effects related to ionization by metastable atoms.
Reaction
Penning ionization refers to the interaction between an electronically excited gas-phase atom G* and a target molecule M. The collision results in the ionization of the molecule, yielding a radical cation M+•, an electron e−, and the neutral gas species G in its ground state. Penning ionization occurs via the formation of a high-energy collision complex, which evolves toward a cationic species by ejecting a high-energy electron:
G* + M → M+• + e− + G
Penning ionization occurs when the target molecule has an ionization potential lower than the excitation energy of the excited-state atom or molecule.
Variants
Associative Penning ionization can also occur when the total electron excitation energy of the colliding particles is sufficient; the bonding energy of the two associated particles then also contributes to the process:
G* + M → MG+• + e−
Surface Penning ionization (Auger Deexcitation) refers to the interaction of the excited-state gas with a surface S, resulting in the release of an electron:
G* + S → G + S + e−
The positive charge symbol S+ that would appear to be required for charge conservation is omitted, because S is a macroscopic surface and the loss of one electron has a negligible effect.
Applications
Electron spectroscopy
Penning ionization is the basis of Penning ionization electron spectroscopy (PIES) and has been used in gas chromatography detectors, employing metastable He or Ne atoms generated in a glow discharge. Electrons ejected in collisions between the target (gas or solid) and the metastable atoms are energy-analyzed by scanning a retarding field in the flight tube of the analyzer in the presence of a weak magnetic field. The electron produced by the reaction has a kinetic energy E determined by:
E = E* − IE
where E* is the excitation energy of the metastable atom and IE is the ionization energy of the target species.
The Penning ionization electron energy does not depend on the conditions of the experiment, since both E* and IE are atomic or molecular constants: the excitation energy of the metastable He (or Ne) atom and the ionization energy of the species. Penning ionization electron spectroscopy has also been applied to organic solids. It enables the study of the local electron distribution of individual molecular orbitals that are exposed at the outermost surface layers.
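As a hedged numerical illustration of the relation above (the values are standard literature constants, not figures from this article): the He 2³S metastable carries about 19.8 eV of excitation energy, and the ionization energy of argon is about 15.8 eV, so the Penning electrons form a sharp peak near

```latex
E = E^{*}(\mathrm{He}\ 2^{3}S) - \mathrm{IE}(\mathrm{Ar})
  \approx 19.8\ \mathrm{eV} - 15.8\ \mathrm{eV}
  = 4.0\ \mathrm{eV}.
```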
Mass spectrometry
Multiple mass spectrometric techniques, including glow discharge mass spectrometry and direct analysis in real time mass spectrometry rely on Penning ionization.
Glow discharge mass spectrometry is used for the direct determination of trace elements in solid samples. It operates with two ionization mechanisms: direct electron impact ionization and Penning ionization. Processes inherent to the glow discharge, namely cathodic sputtering coupled with Penning ionization, yield an ion population from which semi-quantitative results can be directly obtained.
See also
Chemical ionization
References
Ion source
Electron spectroscopy | Penning ionization | [
"Physics",
"Chemistry"
] | 791 | [
"Spectrum (physical sciences)",
"Electron spectroscopy",
"Ion source",
"Mass spectrometry",
"Spectroscopy"
] |
4,796,040 | https://en.wikipedia.org/wiki/Chemostat | A chemostat (from chemical environment is static) is a bioreactor to which fresh medium is continuously added, while culture liquid containing leftover nutrients, metabolic end products, and microorganisms is continuously removed at the same rate to keep the culture volume constant. By changing the rate at which medium is added to the bioreactor, the specific growth rate of the microorganism can be easily controlled within limits.
Operation
Steady state
One of the most important features of chemostats is that microorganisms can be grown in a physiological steady state under constant environmental conditions. In this steady state, growth occurs at a constant specific growth rate and all culture parameters remain constant (culture volume, dissolved oxygen concentration, nutrient and product concentrations, pH, cell density, etc.). In addition, environmental conditions can be controlled by the experimenter. Microorganisms growing in chemostats usually reach a steady state because of a negative feedback between growth rate and nutrient consumption: if a low number of cells are present in the bioreactor, the cells can grow at growth rates higher than the dilution rate as they consume little nutrient so growth is less limited by the addition of limiting nutrient with the inflowing fresh medium. The limiting nutrient is a nutrient essential for growth, present in the medium at a limiting concentration (all other nutrients are usually supplied in surplus). However, the higher the number of cells becomes, the more nutrient is consumed, lowering the concentration of the limiting nutrient. In turn, this will reduce the specific growth rate of the cells, which will lead to a decline in the number of cells as they keep being removed from the system with the outflow. This results in a steady state. Due to self-regulation, the steady state is stable. This enables the experimenter to control the specific growth rate of the microorganisms by changing the speed of the pump feeding fresh medium into the vessel.
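The negative feedback described above can be made concrete with a minimal simulation of the textbook chemostat mass balances under Monod kinetics. This is an illustrative sketch: the model form is standard, but all parameter values and variable names below are assumptions, not taken from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) parameters
mu_max = 1.0   # maximal specific growth rate, 1/h
K_s = 0.1      # half-saturation constant, g/L
Y = 0.5        # biomass yield, g biomass per g substrate
D = 0.4        # dilution rate, 1/h (kept below mu_max to avoid washout)
S_in = 5.0     # limiting-substrate concentration in the feed, g/L

def chemostat(t, y):
    """Mass balances for biomass x and limiting substrate S."""
    x, S = y
    mu = mu_max * S / (K_s + S)         # Monod growth kinetics
    dxdt = (mu - D) * x                 # growth minus washout
    dSdt = D * (S_in - S) - mu * x / Y  # inflow/outflow minus consumption
    return [dxdt, dSdt]

sol = solve_ivp(chemostat, (0.0, 60.0), [0.05, S_in], max_step=0.1)
x_ss, S_ss = sol.y[:, -1]
print(f"simulated steady state: x = {x_ss:.3f} g/L, S = {S_ss:.4f} g/L")
print(f"analytic residual substrate S* = {K_s * D / (mu_max - D):.4f} g/L")
```

Whatever the starting inoculum, the trajectory settles to μ = D, with the residual substrate set only by D, KS, and μmax; this is the self-regulation described above.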
Well-mixed
Another important feature of chemostats and other continuous culture systems is that they are well-mixed so that environmental conditions are homogenous or uniform and microorganisms are randomly dispersed and encounter each other randomly. Therefore, competition and other interactions in the chemostat are global, in contrast to biofilms.
Dilution rate
The rate of nutrient exchange is expressed as the dilution rate D. At steady state, the specific growth rate μ of the micro-organism is equal to the dilution rate D. The dilution rate is defined as the flow of medium per unit of time, F, over the volume V of culture in the bioreactor:
D = F/V
Maximal growth rate and critical dilution rate
Specific growth rate μ is inversely related to the time it takes the biomass to double, called doubling time td, by:
μ = ln 2/td
Therefore, the doubling time td becomes a function of dilution rate D in steady state:
td = ln 2/D
Each microorganism growing on a particular substrate has a maximal specific growth rate μmax (the rate of growth observed if growth is limited by internal constraints rather than external nutrients). If a dilution rate is chosen that is higher than μmax, the cells cannot grow at a rate as fast as the rate with which they are being removed so the culture will not be able to sustain itself in the bioreactor, and will wash out.
However, since the concentration of the limiting nutrient in the chemostat cannot exceed the concentration in the feed, the specific growth rate that the cells can reach in the chemostat is usually slightly lower than the maximal specific growth rate, because specific growth rate usually increases with nutrient concentration as described by the kinetics of the Monod equation. The highest specific growth rate (μmax) cells can attain is equal to the critical dilution rate (Dc):
Dc = μmax S/(KS + S)
where S is the substrate or nutrient concentration in the chemostat and KS is the half-saturation constant (this equation assumes Monod kinetics).
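A brief worked example with assumed round numbers (chosen for illustration, not from the article):

```latex
\mu_{\max} = 1.0\ \mathrm{h^{-1}},\quad K_S = 0.1\ \mathrm{g/L},\quad S = 5\ \mathrm{g/L}
\;\Rightarrow\;
D_c = \frac{\mu_{\max}\,S}{K_S + S} = \frac{1.0 \times 5}{5.1} \approx 0.98\ \mathrm{h^{-1}};
\qquad
\text{at } D = 0.4\ \mathrm{h^{-1}},\; t_d = \frac{\ln 2}{D} \approx 1.7\ \mathrm{h}.
```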
Applications
Research
Chemostats in research are used for investigations in cell biology, as a source for large volumes of uniform cells or protein. The chemostat is often used to gather steady state data about an organism in order to generate a mathematical model relating to its metabolic processes. Chemostats are also used as microcosms in ecology and evolutionary biology. In the one case, mutation/selection is a nuisance, in the other case, it is the desired process under study. Chemostats can also be used to enrich for specific types of bacterial mutants in culture such as auxotrophs or those that are resistant to antibiotics or bacteriophages for further scientific study. Variations in the dilution rate permit the study of the metabolic strategies pursued by the organisms at different growth rates.
Competition for single and multiple resources, the evolution of resource acquisition and utilization pathways, cross-feeding/symbiosis, antagonism, predation, and competition among predators have all been studied in ecology and evolutionary biology using chemostats.
Industry
Chemostats are frequently used in the industrial manufacturing of ethanol. In this case, several chemostats are used in series, each maintained at decreasing sugar concentrations. The chemostat also serves as an experimental model of continuous cell cultures in the biotechnological industry.
Technical concerns
Foaming results in overflow, so the volume of liquid is not exactly constant.
Some very fragile cells are ruptured during agitation and aeration.
Cells may grow on the walls or adhere to other surfaces, which may be overcome by treating the glass walls of the vessel with a silane to render them hydrophobic. However, cells will be selected for attachment to the walls since those that do will not be removed from the system. Those bacteria that stick firmly to the walls forming a biofilm are difficult to study under chemostat conditions.
Mixing may not truly be uniform, upsetting the "static" property of the chemostat.
Dripping the media into the chamber actually results in small pulses of nutrients and thus oscillations in concentrations, again upsetting the "static" property of the chemostat.
Bacteria travel upstream quite easily. They will reach the reservoir of sterile medium quickly unless the liquid path is interrupted by an air break in which the medium falls in drops through air.
Continuous efforts to remedy each defect lead to variations on the basic chemostat quite regularly. Examples in the literature are numerous.
Antifoaming agents are used to suppress foaming.
Agitation and aeration can be done gently.
Many approaches have been taken to reduce wall growth.
Various applications use paddles, bubbling, or other mechanisms for mixing.
Dripping can be made less drastic with smaller droplets and larger vessel volumes.
Many improvements target the threat of contamination.
Experimental design considerations
Parameter choice and setup
The steady state concentration of the limiting substrate in the chemostat is independent of the influx concentration. The influx concentration will affect the cell concentration and thus the steady state OD.
Even though the limiting substrate concentration in the chemostat is usually very low, and is maintained by discrete highly concentrated influx pulses, in practice the temporal variation in the concentration within the chemostat is small (a few percent or less) and can thus be viewed as quasi-steady state.
The time it takes for the cell density (OD) to converge to a steady-state value (overshoot/undershoot) will often be long (multiple chemostat turnovers), especially when the initial inoculum is large. But, the time can be minimized with proper parameter choice.
Steady state growth
A chemostat might appear to be in steady state, but mutant strain takeovers can occur continuously, even though they are not detectable by monitoring macro scale parameters like OD or product concentrations.
The limiting substrate is usually at such low concentrations that it is undetectable. As a result, the concentration of the limiting substrate can vary greatly over time (percentage-wise) as different strains take over the population, even if the resulting changes in OD are too small to detect.
A “pulsed” chemostat (with very large influx pulses) has a substantially lower selective capacity than a standard quasi-continuous chemostat, for a mutant strain with increased fitness in limiting conditions.
By abruptly lowering the influx limiting substrate concentration it is possible to temporarily subject the cells to relatively harsher conditions, until the chemostat stabilizes back to the steady state (on the time order of the dilution rate D).
Mutation
Some types of mutant strains will appear rapidly:
If there is a SNP that can increase fitness, it should appear in the population after only a few chemostat doublings, for characteristically large chemostats (e.g. 10^11 E. coli cells).
A strain that requires two specific SNPs where only their combination gives a fitness advantage (whereas each one separately is neutral), is likely to appear only if the target size (the number of different SNP locations that give rise to an advantageous mutation) for each SNP is very large.
Other types of mutant strains (e.g. two SNPs with a small target size, more SNPs or in smaller chemostats) are highly unlikely to appear.
These other mutations are expected only through successive sweeps of mutants with a fitness advantage. One can only expect multiple mutants to arise if each mutation is independently beneficial, and not in cases where the mutations are individually neutral but together advantageous. Successive takeovers are the only reliable way for evolution to proceed in a chemostat.
The seemingly extreme scenario where we require every possible single SNP to co-exist at least once in the chemostat is actually quite likely. A large chemostat is very likely to reach this state.
For a large chemostat, the expected time until an advantageous mutation occurs is on the order of the chemostat turnover time (see the worked estimate after this list). Note, this is usually substantially shorter than the time for an advantageous strain to take over the chemostat population. This is not necessarily so in a small chemostat.
The above points are expected to be the same across different asexually reproductive species (E. coli, S. cerevisiae, etc.).
Furthermore, the time until mutation appearance is independent of genome size, but dependent on the per-base-pair mutation rate.
For characteristically large chemostats, a hyper-mutating strain does not give enough of an advantage to warrant use. Also, it does not have enough of a selective advantage to be expected to always appear through random mutation and take over the chemostat.
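The scale argument in this list can be made concrete with a back-of-the-envelope mutation-supply estimate (the mutation rate is a typical literature value, assumed here for illustration):

```latex
N \approx 10^{11}\ \text{cells},\qquad
u \approx 10^{-10}\ \text{mutations per bp per replication}
\;\Rightarrow\;
N u \approx 10\ \text{new mutants per genomic position per doubling},
```

so in a characteristically large chemostat every single-base variant is expected within a few doublings, whereas a 10^7-cell culture would wait on the order of 10^3 doublings for a mutation at any particular position.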
Single takeover
The takeover time is predictable given the relevant strain parameters.
Different dilution rates selectively favor different mutant strains to take over the chemostat population, if such a strain exists. For example:
A fast dilution rate creates a selection pressure for a mutant strain with a raised maximal growth rate;
A mid-range dilution rate creates a selection pressure for a mutant strain with a higher affinity to the limiting substrate;
A slow dilution rate creates a selection pressure for a mutant strain which can grow in media with no limiting substrate (presumably by consuming a different substrate present in the media);
The time for takeover by a superior mutant will be quite constant across a range of operation parameters. For characteristic operation values, the takeover time is on the order of days to weeks.
Successive takeovers
When the conditions are right (a large enough population, and multiple targets in the genome for simple advantageous mutations), multiple strains are expected to successively take over the population, and to do so in a relatively timed and paced manner. The timing depends on the type of mutations.
In a takeover succession, even if the selective improvement of each of the strains stays constant (e.g. each new strain is better than the previous strain by a constant factor) – the takeover rate does not stay constant, but rather diminishes from strain to strain.
There are cases where successive takeovers occur so rapidly that it is very difficult to differentiate between strains, even when examining allele frequency. Thus, a lineage of multiple takeovers of consecutive strains might appear as the takeover of a single strain with a cohort of mutations.
Variations
Fermentation setups closely related to the chemostats are the turbidostat, the auxostat and the retentostat. In retentostats, culture liquid is also removed from the bioreactor, but a filter retains the biomass. In this case, the biomass concentration increases until the nutrient requirement for biomass maintenance has become equal to the amount of limiting nutrient that can be consumed.
See also
Bacterial growth
Biochemical engineering
Changestat
Continuous stirred-tank reactor (CSTR)
E. coli long-term evolution experiment
Fed-batch
References
External links
http://www.pererikstrandberg.se/examensarbete/chemostat.pdf
https://web.archive.org/web/20060504172359/http://www.rpi.edu/dept/chem-eng/Biotech-Environ/Contin/chemosta.htm
A final thesis including mathematical models of the chemostat and other bioreactors
A page about one laboratory chemostat design
Comprehensive chemostat manual (Dunham lab). Procedures and principles are general.
Bioreactors | Chemostat | [
"Chemistry",
"Engineering",
"Biology"
] | 2,703 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Biochemical engineering",
"Microbiology equipment"
] |
4,800,139 | https://en.wikipedia.org/wiki/Ladle%20%28metallurgy%29 | In metallurgy, a ladle is a bucket-shaped container or vessel used to transport and pour out molten metals. Ladles are often used in foundries and range in size from small hand-carried vessels that resemble a kitchen ladle and hold to large steelmill ladles that hold up to . Many non-ferrous foundries also use ceramic crucibles for transporting and pouring molten metal and will also refer to these as ladles.
Types
The basic term is often prefixed to define the actual purpose of the ladle. The basic ladle design can therefore include many variations that improve the usage of the ladle for specific tasks. For example:
Casting ladle: a ladle used to pour molten metal into moulds to produce the casting.
Transfer ladle: a ladle used to transfer a large amount of molten metal from one process to another. Typically a transfer ladle will be used to transfer molten metal from a primary melting furnace to either a holding furnace or an auto-pour unit.
Treatment ladle: a ladle used for a process to take place within the ladle to change some aspect of the molten metal. A typical example being to convert cast iron to ductile iron by the addition of various elements into the ladle.
Unless the ladle is to be used with alloys that have a very low melting point, the ladle is also fitted with a refractory lining. The refractory lining stops the steel vessel from suffering damage when the ladle is used to transport metals with high melting temperatures; if the molten metal came into direct contact with the ladle shell, it would rapidly melt through it. Refractory lining materials come in many forms, and the right choice depends very much on each foundry's working practices. Traditionally, ladles were lined with pre-cast firebricks; however, refractory concretes have tended to supersede these in many countries.
Foundry ladles are normally rated by their working capacity rather than by their physical size. Hand-held ladles are typically known as handshank ladles and are fitted with a long handle to keep the heat of the metal away from the person holding it. Their capacity is limited to what a man can safely handle. Larger ladles are usually referred to as geared crane ladles. Their capacity is usually determined by the ladle function. Small hand-held ladles might also be crucibles that are fitted with carrying devices. However, in most foundries, the foundry ladle refers to a steel vessel that has a lifting bail fitted so that the vessel can be carried by an overhead crane or monorail system and is also fitted with a mechanical means for rotating the vessel, usually in the form of a gearbox. The gearbox can be either manually operated or powered. (See the paragraph below for further details.)
For the transportation of very large volumes of molten metal, such as in steel mills, the ladle can run on wheels or on a purpose-built ladle transfer car, or it can be slung from an overhead crane and tilted using a second overhead lifting device.
The most common shape for a ladle is a vertical cone, but other shapes are possible. A tapered cone as the shell adds strength and rigidity, and the taper also helps when it comes time to remove the refractory lining. However, straight-sided shells are also fabricated, as are other shapes.
The most common of these other shapes is known as a drum ladle and is shaped as a horizontal cylinder suspended between two bogies. Large versions used in steel mills, often having very large capacities, are referred to as torpedo ladles. Torpedo ladles are commonly used to transport liquid iron from a blast furnace to another part of the steel mill. Some versions are even adapted so that they can be carried on special bogies that can be transported by either road or rail.
Pour designs
Ladles can be "lip pour" design, "teapot spout" design, "lip-axis design" or "bottom pour" design:
For lip pour design the ladle is tilted and the molten metal pours out of the ladle like water from a pitcher.
The teapot spout design, like a teapot, takes liquid from the base of the ladle and pours it out via a lip-pour spout. Any impurities in the molten metal will form on the top of the metal so by taking the metal from the base of the ladle, the impurities are not poured into the mould. The same idea is behind the bottom pour process.
Lip-axis ladles have the pivot point of the vessel as close to the tip of the pouring spout as can be practicable. Therefore as the ladle is rotated the actual pouring point has very little movement. Lip-axis pouring is often used on molten metal pouring systems where there is a need to automate the process as much as possible and the operator controls the pouring operation at a remote distance.
For bottom pour ladles, a stopper rod is inserted into a tapping hole in the bottom of the ladle. To pour metal the stopper is raised vertically to allow the metal to flow out the bottom of the ladle. To stop pouring the stopper rod is inserted back into the drain hole. Large ladles in the steelmaking industry may use slide gates below the taphole.
Ladles can be either open-topped or covered. Covered ladles have a (sometimes removable) dome-shaped lid to contain radiant heat; they lose heat slower than open-topped ladles. Small ladles do not commonly have covers, although a ceramic blanket may be used instead (where available).
Medium and large ladles which are suspended from a crane have a bail which holds the ladle on shafts, called trunnions. To tilt the ladle a gearbox is used and this is typically a worm gear. The gear mechanism may be hand operated with a large wheel or may be operated by an electric motor or pneumatic motor. Powered rotation allows the ladle operator to be moved to a safe distance and control the rotation of the ladle via a pendant or radio remote control. Powered rotation also allows the ladle to have a number of rotation speeds which may be beneficial to the overall casting process. Powered rotation obviously also reduces the effort required by the ladle operator and allows high volumes of molten metal to be transferred and poured for long periods without operator fatigue. Where the ladle is fitted with a manually operated gearbox, the type of gearbox most commonly used is the worm and wheel design because in most practical circumstances, and when correctly maintained it can be considered as "self-locking" and does not need an internal friction brake to regulate the tilting speed of the ladle. Other types of gear system can also be used but they have to be fitted with an additional braking system that can hold the ladle if the operator takes his hand off the hand-wheel.
Lip-axis ladles may also use hydraulic rams to tilt the ladle. The largest ladles are un-geared and are typically poured using a special, two-winch crane, where the main winch carries the ladle while the second winch engages a lug at the bottom of the ladle. Raising the second winch then rotates the ladle on its trunnions.
Ladles are often designed for special purposes such as adding alloys to the molten metal. Ladles may also have porous plugs inserted into the base, so inert gases can be bubbled through the ladle to enhance alloying or metallic treatment practices.
See also
Crucible
Ladle transfer car
References
External links
About ladles and perlite used in a foundry
Casting (manufacturing)
Metallurgical processes
Steelmaking | Ladle (metallurgy) | [
"Chemistry",
"Materials_science"
] | 1,587 | [
"Metallurgical processes",
"Steelmaking",
"Metallurgy"
] |
4,802,556 | https://en.wikipedia.org/wiki/Rheoscopic%20fluid | In fluid mechanics (specifically rheology), rheoscopic fluids are fluids whose internal currents are visible as they flow. Such fluids are effective in visualizing dynamic currents, such as convection and laminar flow. They consist of microscopic crystalline platelets, such as mica, metallic flakes, or fish scales, suspended in a fluid such as water or glycol stearate.
When the fluid is put in motion, the suspended particles orient themselves with the local fluid shear. With appropriate illumination, the particle-filled fluid will reflect differing intensities of light.
A Kalliroscope is an art device/technique based on rheoscopic fluids (using crystalline guanine as the indicator particles) invented by artist Paul Matisse.
See also
Reynolds number
References
External links
University of Chicago Materials Research Centre Demonstration
Instructables: Making Rheoscopic fluid
Paul Matisse, rheoscopist
artistic techniques
fluid dynamics
educational toys | Rheoscopic fluid | [
"Chemistry",
"Engineering"
] | 195 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
4,803,621 | https://en.wikipedia.org/wiki/Vesicular%20monoamine%20transporter%202 | The solute carrier family 18 member 2 (SLC18A2) also known as vesicular monoamine transporter 2 (VMAT2) is a protein that in humans is encoded by the SLC18A2 gene. SLC18A2 is an integral membrane protein that transports monoamines—particularly neurotransmitters such as dopamine, norepinephrine, serotonin, and histamine—from cellular cytosol into synaptic vesicles. In nigrostriatal pathway and mesolimbic pathway dopamine-releasing neurons, SLC18A2 function is also necessary for the vesicular release of the neurotransmitter GABA.
Binding sites and ligands
SLC18A2 is believed to possess at least two distinct binding sites, which are characterized by tetrabenazine (TBZ) and reserpine binding to the transporter. Amphetamine (TBZ site) and methamphetamine (reserpine site) bind at distinct sites on SLC18A2 to inhibit its function. SLC18A2 inhibitors like tetrabenazine and reserpine reduce the concentration of monoamine neurotransmitters in the synaptic cleft by inhibiting uptake through SLC18A2; the inhibition of SLC18A2 uptake by these drugs prevents the storage of neurotransmitters in synaptic vesicles and reduces the quantity of neurotransmitters that are released through exocytosis. Although many substituted amphetamines induce the release of neurotransmitters from vesicles through SLC18A2 while inhibiting uptake through SLC18A2, they may facilitate the release of monoamine neurotransmitters into the synaptic cleft by simultaneously reversing the direction of transport through the primary plasma membrane transport proteins for monoamines (i.e., the dopamine transporter, norepinephrine transporter, and serotonin transporter) in monoamine neurons. Other SLC18A2 inhibitors such as GZ-793A inhibit the reinforcing effects of methamphetamine, but without producing stimulant or reinforcing effects themselves.
Researchers have found that inhibiting the dopamine transporter (but not SLC18A2) blocks the effects of amphetamine and cocaine, while, in another experiment, disabling SLC18A2 (but not the dopamine transporter) prevented any notable response in test animals after amphetamine administration but not after cocaine administration. This suggests that amphetamine may be an atypical substrate with little to no ability to prevent dopamine reuptake by binding to the dopamine transporter. Instead, it uses the transporter to enter the neuron, where it interacts with SLC18A2 to induce efflux of dopamine from the vesicles into the cytoplasm; dopamine transporters with amphetamine substrates attached then move this newly liberated dopamine into the synaptic cleft.
Although most amphetamines and other monoamine releasing agents (MRA) act on VMAT2, several MRAs, including phentermine, phenmetrazine, and benzylpiperazine (BZP), are inactive at VMAT2. Others, including cathinones like mephedrone, methcathinone, and methylone, also show only weak VMAT2 activity (e.g., ~10-fold weaker than the corresponding amphetamines). MRAs acting on VMAT2 additionally continue to induce monoamine release in in-vitro systems in which VMAT2 is absent or inhibited.
List of VMAT2 Inhibitors
Lobelane
Quinlobelane
UKCP-110
CT-005404
GZ-11608
4-Benzyl-1-(3,4-dimethoxyphenethyl)piperidine [15565-25-0]
PC118857804
Valbenazine
JPC-141 (PC155541952)
arylpiperidinylquinazolines (APQs)
Inhibition
SLC18A2 is essential for enabling the release of neurotransmitters from the axon terminals of monoamine neurons into the synaptic cleft. If SLC18A2 function is inhibited or compromised, monoamine neurotransmitters such as dopamine cannot be released into the synapse via typical release mechanisms (i.e., exocytosis resulting from action potentials).
Cocaine users display a marked reduction in SLC18A2 immunoreactivity. Those with cocaine-induced mood disorders displayed a significant loss of SLC18A2 immunoreactivity; this might reflect damage to dopamine axon terminals in the striatum. These neuronal changes could play a role in causing disordered mood and motivational processes in more severely addicted users.
Induction
To date, no agent has been shown to directly interact with SLC18A2 in a way that promotes its activity. A VMAT2 positive allosteric modulator remains an elusive target in addiction and Parkinson's disease research. However, it has been observed that certain tricyclic and tetracyclic antidepressants (as well as a high-mesembrine Sceletium tortuosum extract) can upregulate the activity of VMAT2 in vitro, though whether this is due to a direct interaction is unknown.
In popular culture
Geneticist Dean Hamer has suggested that a particular allele of the SLC18A2 gene correlates with spirituality using data from a smoking survey, which included questions intended to measure "self-transcendence". Hamer performed the spirituality study on the side, independently of the National Cancer Institute smoking study. His findings were published in the mass-market book The God Gene: How Faith Is Hard-Wired into Our Genes. Hamer himself notes that SLC18A2 plays at most a minor role in influencing spirituality. Furthermore, Hamer's claim that the SLC18A2 gene contributes to spirituality is controversial. Hamer's study has not been published in a peer-reviewed journal and a reanalysis of the correlation demonstrates that it is not statistically significant.
References
Further reading
External links
Amphetamine
Biogenic amines
Molecular neuroscience
Neurotransmitter transporters
Receptors
Signal transduction
Solute carrier family
Articles containing video clips | Vesicular monoamine transporter 2 | [
"Chemistry",
"Biology"
] | 1,384 | [
"Biomolecules by chemical classification",
"Biogenic amines",
"Signal transduction",
"Receptors",
"Molecular neuroscience",
"Molecular biology",
"Biochemistry",
"Neurochemistry"
] |
6,274,233 | https://en.wikipedia.org/wiki/Encircled%20energy | In optics, encircled energy is a measure of concentration of energy in an image, or projected laser at a given range. For example, if a single star is brought to its sharpest focus by a lens giving the smallest image possible with that given lens (called a point spread function or PSF), calculation of the encircled energy of the resulting image gives the distribution of energy in that PSF.
Encircled energy is calculated by first determining the total energy of the PSF over the full image plane, then determining the centroid of the PSF. Circles of increasing radius are then created at that centroid and the PSF energy within each circle is calculated and divided by the total energy. As the circle increases in radius, more of the PSF energy is enclosed, until the circle is sufficiently large to completely contain all the PSF energy. The encircled energy curve thus ranges from zero to one.
A typical criterion for encircled energy (EE) is the radius of the PSF at which either 50% or 80% of the energy is encircled. This is a linear dimension, typically in micrometers. When divided by the lens or mirror focal length, this gives the angular size of the PSF, typically expressed in arc-seconds when specifying astronomical optical system performance.
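A minimal numpy sketch of the procedure described above; the function, the Gaussian test PSF, and all names are illustrative assumptions rather than a standard library routine:

```python
import numpy as np

def encircled_energy(psf, radii):
    """Fraction of total PSF energy inside circles of the given radii
    (in pixels), centred on the PSF centroid."""
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum() / total      # centroid row
    cx = (xs * psf).sum() / total      # centroid column
    r = np.hypot(ys - cy, xs - cx)     # pixel distances to the centroid
    return np.array([psf[r <= R].sum() / total for R in radii])

# Illustrative test: a Gaussian PSF with sigma = 3 px on a 101 x 101 grid
ys, xs = np.indices((101, 101))
psf = np.exp(-((ys - 50.0)**2 + (xs - 50.0)**2) / (2 * 3.0**2))

radii = np.arange(0.0, 20.0, 0.5)
ee = encircled_energy(psf, radii)
# Radii at which 50% and 80% of the energy is enclosed (EE50 / EE80)
print("EE50 radius:", np.interp(0.5, ee, radii), "px")
print("EE80 radius:", np.interp(0.8, ee, radii), "px")
```

For this test PSF the printed radii come out near 3.5 px (EE50) and 5.4 px (EE80); dividing such radii, in physical units, by the focal length gives the angular criterion described above.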
Encircled energy is also used to quantify the spreading of a laser beam at a given distance. All laser beams spread due to the necessarily limited aperture of the optical system projecting the beam. As with star-image PSFs, the linear spreading of the beam expressed as encircled energy is divided by the projection distance to give the angular spreading.
An alternative to encircled energy is ensquared energy, typically used when quantifying image sharpness for digital imaging cameras using pixels.
See also
Point spread function
Airy disc
References
Smith, Warren J., Modern Optical Engineering, 3rd ed., pp. 383–385. New York: McGraw-Hill, Inc., 2000.
Geometrical optics
Physical optics
Engineering concepts
Optical quantities | Encircled energy | [
"Physics",
"Mathematics",
"Engineering"
] | 405 | [
"Optical quantities",
"Quantity",
"Physical quantities",
"nan"
] |
12,901,064 | https://en.wikipedia.org/wiki/Quarry%20tile | Quarry tile is a building material, usually 1/2 to 3/4 inch (13 to 19 mm) thick, made by either the extrusion process or more commonly by press forming and firing natural clay or shales. Quarry tile is manufactured from clay in a manner similar to bricks. It is shaped from clay, and fired at a high temperature.
Sizes and shapes
The most traditional size in the US is nominally 6 in × 6 in × in thick. Other common sizes include 4 in × 8 in and 8 in × 8 in.
In the UK, traditional surface dimensions generally vary from 6 in × 6 in, to 12 in × 12 in. Such tiles, given the generally local and non-standardised production, commonly vary between those dimensions, but rarely stray outside of them.
Modern quarry tiles are generally thinner than their historic counterparts, sometimes as thin as 8 mm; by comparison, older tiles were rarely thinner than in and could be as thick as in thick.
Additionally, modern tiles can be found in different shapes, such as rectangular.
Finishes
Traditional quarry tiles were unglazed and either red, grey, or black/very dark blue; however, modern "decorator" tiles come in a variety of tints and finishes. Industrial quarry tile is available with abrasive frit embedded in the surface to provide a non-slip finish in wet areas such as commercial kitchens and laboratories.
Uses
Quarry tile is extensively used for floors where a very durable material is required. It can be used either indoors or outdoors, although freeze-resistant grades of tile should be used outdoors in climates where freeze-thaw action occurs. Quarry tile is used less often as a wall finish and is occasionally used for countertops, although the wide grout joints can make cleaning of countertops difficult. Most commercial kitchens require a quarry tile to be used because of its slip resistant and non-porous properties.
Installation
For floors, quarry tile is usually set in a thick bed of cementitious mortar. For wall applications, it can be set in either a thick bed of cementitious mortar or a thin bed of mastic. For both floors and walls, the joints between tiles are usually grouted with cementitious grout. Grout joints are traditionally about inch in width. Matching trim shapes such as coves, bases, shoes, and bullnoses are available to turn corners and terminate runs of the tile.
For traditional/historic applications, tiles were generally laid in lime mortar, doubling as grout, and with very fine grout joints (sometimes butted without joints, similarly to mosaic tiles).
Due to the typically square shape, quarry tiles were historically, and still today, restricted to either square or diamond patterns.
See also
Ceramic tile
Ceramic tile cutter
References
Tiling
Building materials | Quarry tile | [
"Physics",
"Engineering"
] | 561 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
12,901,396 | https://en.wikipedia.org/wiki/Bullnose | Bullnose is a term used in building construction for rounded convex trim, particularly in masonry and ceramic tile. It is also used in relation to road safety and (formerly) railroad engineering design.
Uses
Bullnose trim is used to provide a smooth, rounded edge for countertops, staircase steps, building corners, verandas, or other construction. Masonry units such as bricks, concrete masonry units or structural glazed facing tiles may be ordered from manufacturers with square or bullnosed corners.
When referring to bullnose, it is sometimes modified by adding the word quarter or half. In the illustration, one piece of quarter-bullnose tile is juxtaposed with a plain piece of tile, to create a finished look — note that the top trim strip shows a quarter-bullnose on two of its sides.
However, when referring to countertops (such as a granite countertop in a kitchen) which extend beyond the edge of the underlying cabinetry, either a quarter-bullnose or half-bullnose edge may be used. A half-bullnose can be constructed by bonding two sections with quarter-bullnose, effectively creating a 180-degree curve, in order to create a more finished appearance. This would effectively double the thickness of that portion which extends beyond the cabinetry.
Non-architectural contexts
A bullnose is used in highway construction in North America and other countries to buffer and protect the end of the crash barrier or Jersey barrier at entrance and exit ramps.
In the 19th century, the roofs of railroad passenger cars often had a raised centre section to improve ventilation and internal lighting. They were called lantern or clerestory roofs. The design soon evolved to incorporate a bullnose at each end.
Name
The term bullnose originates from the rounded nose of a bull.
See also
Morris Oxford bullnose
References
Building materials | Bullnose | [
"Physics",
"Engineering"
] | 380 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Architecture stubs",
"Matter",
"Architecture"
] |
12,901,731 | https://en.wikipedia.org/wiki/Quantum%20nondemolition%20measurement | Quantum nondemolition (QND) measurement is a special type of measurement of a quantum system in which the uncertainty of the measured observable does not increase from its measured value during the subsequent normal evolution of the system. This necessarily requires that the measurement process preserves the physical integrity of the measured system, and moreover places requirements on the relationship between the measured observable and the self-Hamiltonian of the system. In a sense, QND measurements are the "most classical" and least disturbing type of measurement in quantum mechanics.
Most devices capable of detecting a single particle and measuring its position strongly modify the particle's state in the measurement process, e.g. photons are destroyed when striking a screen. Less dramatically, the measurement may simply perturb the particle in an unpredictable way; a second measurement, no matter how quickly after the first, is then not guaranteed to find the particle in the same location. Even for ideal, "first-kind" projective measurements in which the particle is in the measured eigenstate immediately after the measurement, the subsequent free evolution of the particle will cause uncertainty in position to quickly grow.
In contrast, a momentum (rather than position) measurement of a free particle can be QND because the momentum distribution is preserved by the particle's self-Hamiltonian p2/2m. Because the Hamiltonian of the free particle commutes with the momentum operator, a momentum eigenstate is also an energy eigenstate, so once momentum is measured its uncertainty does not increase due to free evolution.
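In symbols, this is a one-line Heisenberg-picture check (standard quantum mechanics, supplied here rather than quoted from the article):

```latex
H = \frac{p^{2}}{2m},\qquad [\,p, H\,] = 0
\;\Rightarrow\;
\frac{dp}{dt} = \frac{i}{\hbar}\,[\,H, p\,] = 0,
```

so p(t) = p(0) for all times, and the momentum uncertainty left by the measurement never grows under free evolution.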
Note that the term "nondemolition" does not imply that the wave function fails to collapse.
QND measurements are extremely difficult to carry out experimentally. Much of the investigation into QND measurements was motivated by the desire to avoid the standard quantum limit in the experimental detection of gravitational waves. The general theory of QND measurements was laid out by Braginsky, Vorontsov, and Thorne following much theoretical work by Braginsky, Caves, Drever, Hollenhorst, Khalili, Sandberg, Thorne, Unruh, Vorontsov, and Zimmermann.
Technical definition
Let A be an observable for some system S with self-Hamiltonian HS. The system is measured by an apparatus R which is coupled to S through an interaction Hamiltonian for only brief moments. Otherwise, S evolves freely according to HS. A precise measurement of A is one which brings the global state of S and R into the approximate form
|ψ⟩ ≈ Σi ci |Ai⟩ ⊗ |Ri⟩
where the |Ai⟩ are the eigenvectors of A corresponding to the possible outcomes of the measurement, and the |Ri⟩ are the corresponding states of the apparatus which record them.
Allow time-dependence to denote the Heisenberg picture observables:
A(t) = exp(iHSt/ħ) A exp(−iHSt/ħ)
A sequence of measurements of A are said to be QND measurements if and only if
[A(tn), A(tm)] = 0
for any tn and tm when measurements are made. If this property holds for any choice of tn and tm, then A is said to be a continuous QND variable. If this only holds for certain discrete times, then A is said to be a stroboscopic QND variable.
For example, in the case of a free particle, the energy and momentum are conserved and indeed continuous QND observables, but the position is not.
On the other hand, for the harmonic oscillator the position and momentum satisfy periodic-in-time commutation relations, which imply that x and p are not continuous QND observables. However, if one makes the measurements at times separated by an integer number of half-periods (τ = kπ/ω), then the commutators vanish. This means that x and p are stroboscopic QND observables.
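A sketch of the algebra behind the stroboscopic statement (standard oscillator manipulations, not spelled out in the article): the Heisenberg equations give

```latex
x(t) = x(0)\cos\omega t + \frac{p(0)}{m\omega}\,\sin\omega t
\;\Rightarrow\;
[\,x(t), x(0)\,] = \frac{\sin\omega t}{m\omega}\,[\,p(0), x(0)\,]
                 = -\frac{i\hbar}{m\omega}\,\sin\omega t,
```

which vanishes exactly at ωt = kπ, i.e. at times separated by half-periods τ = kπ/ω, recovering the condition stated above.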
Discussion
An observable which is conserved under free evolution, that is, one for which

dA(t)/dt = (i/ħ)[H_S, A(t)] = 0,

is automatically a QND variable. A sequence of ideal projective measurements of A will automatically be QND measurements.
To implement QND measurements on atomic systems, the measurement strength (rate) competes with atomic decay caused by measurement backaction. The ratio between measurement strength and optical decay is usually characterized by the optical depth or the cooperativity. By using nanophotonic waveguides as a quantum interface, it is possible to enhance atom–light coupling with a relatively weak field, and hence to achieve an enhanced, precise quantum measurement with little disruption to the quantum system.
Criticism
It has been argued that the usage of the term QND does not add anything to the usual notion of a strong quantum measurement and can moreover be confusing because of the two different meanings of the word demolition in a quantum system (losing the quantum state vs. losing the particle).
References
See also
Interaction-free measurement
External links
Physicsworld article
Measuring quantum information without destroying it
Counting photons without destroying them
Quantum measurement | Quantum nondemolition measurement | [
"Physics"
] | 955 | [
"Quantum measurement",
"Quantum mechanics"
] |
12,903,181 | https://en.wikipedia.org/wiki/Riser%20clamp | A riser clamp is a type of hardware used by mechanical building trades for pipe support in vertical runs of piping (risers) at each floor level. The devices are placed around the pipe, and integral fasteners are then tightened to clamp them onto the pipe. The friction between the pipe and riser clamp transfers the weight of the pipe through the riser clamp to the building structure. Riser clamps are generally located at floor penetrations, particularly for continuous floor slabs such as concrete. They may also be located at some other interval as dictated by local building codes, or at intermediate intervals to support plumbing which has been altered or repaired. Heavier piping types, such as cast iron, require more frequent support. Ordinarily, riser clamps are made of carbon steel and individually sized to fit certain pipe sizes.
There are at least two types of riser clamp: the two-bolt pipe clamp and the yoke clamp.
References
Piping
Plumbing | Riser clamp | [
"Chemistry",
"Engineering"
] | 201 | [
"Building engineering",
"Chemical engineering",
"Plumbing",
"Construction",
"Mechanical engineering",
"Piping"
] |
23,876,318 | https://en.wikipedia.org/wiki/C15H13NO4 | The molecular formula C15H13NO4 (molar mass: 271.24 g/mol, exact mass: 271.0845 u) may refer to:
Acetaminosalol
Darienine
Molecular formulas | C15H13NO4 | [
"Physics",
"Chemistry"
] | 61 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,876,569 | https://en.wikipedia.org/wiki/Allantoic%20acid | Allantoic acid is an organic compound with the chemical formula C4H8N4O4. It is a crystalline acid obtained by hydrolysis of allantoin.
In nature, allantoic acid is produced from allantoin by the enzyme allantoinase (encoded by the gene AllB (Uniprot: P77671) in Escherichia coli and other bacteria).
References
Ureas
Acetic acids | Allantoic acid | [
"Chemistry"
] | 94 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs",
"Ureas"
] |
23,877,012 | https://en.wikipedia.org/wiki/Coefficient%20of%20moment | The coefficients used for moment are similar to the coefficients of lift, drag, and thrust, and are likewise dimensionless. However, they must include a characteristic length in addition to the reference area: the span is used for the rolling or yawing moment, and the chord is used for the pitching moment.
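Written out, the conventional definitions read as follows (standard aerodynamics notation supplied here for illustration: q is the dynamic pressure, S the reference area, b the span, c̄ the chord; none of these symbols appear in the article itself):

```latex
C_m = \frac{M}{q\,S\,\bar{c}}, \qquad
C_l = \frac{L}{q\,S\,b}, \qquad
C_n = \frac{N}{q\,S\,b}, \qquad
q = \tfrac{1}{2}\rho V^2 ,
```

where M, L, and N are the pitching, rolling, and yawing moments, respectively.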
Aerodynamics | Coefficient of moment | [
"Chemistry",
"Engineering"
] | 62 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
23,879,650 | https://en.wikipedia.org/wiki/Software%20license%20manager | A software license manager is a software management tool used by independent software vendors or by end-user organizations to control where and how software products are able to run. License managers protect software vendors from losses due to software piracy and enable end-user organizations to comply with software license agreements. License managers enable software vendors to offer a wide range of usage-centric software licensing models, such as product activation, trial licenses, subscription licenses, feature-based licenses, and floating licensing from the same software package they provide to all users.
A license manager is different from a software asset management tool, which end-user organizations employ to manage the software they have licensed from many software vendors. However, some software asset management tools include license manager functions. These are used to reconcile software licenses and installed software, and generally include device discovery, software inventory, license compliance, and reporting functions.
An additional benefit of these software management tools is that they reduce the difficulty, cost, and time required for reporting, and can increase operational transparency in order to prevent litigation costs associated with software misuse, as set forth by the Sarbanes-Oxley Act.
License management solutions provided by non-vendor companies can be more valuable to end-user organizations, since most vendors do not provide enough license-usage information. A vendor's license manager typically provides limited information, while non-vendor license management solutions are developed for end users in order to make the best possible use of the licenses they already hold.
Most license managers can cover different software licensing models, such as license dongles or license USB keys, floating licenses, network licenses, concurrent licenses, etc.
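As a minimal sketch of the floating-license idea described above — a fixed pool of concurrent seats handed out and returned — consider the following toy logic (all names here, such as FloatingLicensePool, are hypothetical; real license managers add authentication, heartbeats, and persistence):

```python
import threading

class FloatingLicensePool:
    """Toy concurrent-seat pool illustrating floating licensing."""

    def __init__(self, total_seats: int):
        self._seats = threading.Semaphore(total_seats)
        self._held = {}                  # user -> count of seats held
        self._lock = threading.Lock()

    def checkout(self, user: str) -> bool:
        """Grant a seat if one is free; deny otherwise."""
        if not self._seats.acquire(blocking=False):
            return False                 # all seats currently in use
        with self._lock:
            self._held[user] = self._held.get(user, 0) + 1
        return True

    def checkin(self, user: str) -> None:
        """Return a seat to the pool."""
        with self._lock:
            if self._held.get(user, 0) == 0:
                raise ValueError(f"{user} holds no seat")
            self._held[user] -= 1
        self._seats.release()

pool = FloatingLicensePool(total_seats=2)
assert pool.checkout("alice") and pool.checkout("bob")
assert not pool.checkout("carol")        # pool exhausted
pool.checkin("alice")
assert pool.checkout("carol")            # a freed seat is reused
```

Real products differ mainly in how checkouts are authenticated, metered, and recovered after client crashes.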
References
External links
Floating License Management
License4J Floating License Server User Guide
Cloud Software Licensing
OpenLM Software License Management
System administration
Manager | Software license manager | [
"Technology"
] | 344 | [
"Information systems",
"System administration"
] |
23,881,008 | https://en.wikipedia.org/wiki/Project%20Kaisei | Project Kaisei (from 海星, kaisei, "ocean planet" in Japanese) is a scientific and commercial mission to study and clean up the Great Pacific Garbage Patch, a large body of floating plastic and marine debris trapped in the Pacific Ocean by the currents of the North Pacific Gyre. Discovered by NOAA, and publicized by Captain Charles Moore, the patch is estimated to contain 20 times the density of floating debris compared to the global average. The project aims to study the types, extent, and nature of the debris with a view to identifying the scope of the problem and its effects on the ocean biome as well as ways of capturing, detoxifying, and recycling the material. It was organized by the Ocean Voyages Institute, a California-based 501c3 non-profit organisation dealing with marine preservation. The project is based in San Francisco and Hong Kong.
History
Project Kaisei was started in late 2008 by Mary Crowley, owner of Ocean Voyages, Inc., a for-profit yacht brokerage, together with Doug Woodring and George Orbelian, all from the San Francisco Bay Area and all with many years of experience in ocean stewardship and activities. All three are ocean lovers: Crowley is a long-time sailor; Orbelian is a surfer, an expert on surfboard design, and the author of Essential Surfing, and carries on the work of Project Kaisei by sitting on the boards of the Walter Munk Foundation For The Oceans and the Buckminster Fuller Institute, as well as through connections to the Gump Research Station for coral reefs in Moorea, Tahiti. Woodring has a background in business, finance, innovative technology and media, and maintains his passion for open-water swimming and paddle racing. Each had different contacts, networks and abilities to contribute to the group. With Woodring living in Hong Kong, the group set up two points of operation on either side of the Pacific (San Francisco and Hong Kong) to bring global attention and relevant stakeholders together to stem the flow of plastic and marine debris into the ocean. Woodring carries on the work with the strategic planning developed for Project Kaisei by James Gollub: the Ocean Recovery Alliance and the Plastics Disclosure Project.
Project goals
The project launched on 19 March 2009, with plans for an initial phase of scientific study of plastic marine debris in the North Pacific Gyre and various feasibility studies of the effects to life, size, location, depths, approaches to potential recovery and recycling technologies. The goal is to bring about a global collaboration of science, technology, and solutions, to help remove the waste and restore the health of the ocean biome. New catch methods for the debris are being studied, which would have low energy input and low marine life loss. Technologies for remediation or recycling are being evaluated, to potentially create secondary products from the waste, which in turn could help subsidize a larger scale cleanup. The project has completed two expeditions, one in the summer of 2009, and one in 2010. New data on the issue has been collected, and more research and planning need to be done in order to understand the metrics, effectiveness and costs associated with a larger scale cleanup effort. Planning is now taking place for future research and expeditions which would allow the testing of new capture technologies and equipment, as well as the demonstration of remediation or recycling technologies.
Initial voyage
In August 2009, the initial study and feasibility voyage phase of Project Kaisei began, conducted by two vessels, the 53-metre (174-foot) diesel-powered research vessel R/V New Horizon, and the project flagship, the 46 m (150 ft) tall ship Kaisei. The New Horizon, owned by the Scripps Institution of Oceanography, left San Diego on 2 August 2009 on the Scripps Environmental Accumulation of Plastic Expedition (SEAPLEX), set to last until 21 August. The SEAPLEX expedition is funded by the University of California, San Diego, the National Science Foundation with supplemental funding from Project Kaisei. Two days later the Kaisei departed San Francisco on 4 August, and was expected to undertake a 30-day voyage. The Kaisei was to investigate the size and concentration of the debris field, and explore retrieval methods, while the New Horizon would join her and study the effect of the debris field on marine life. Both vessels carried Apple iPhones outfitted with Voyage Tracker apps built by Ojingo Labs that allowed researchers to share videos and photos from the expedition in real time, an innovation that brought the world along with the researchers and resulted in Google Earth Hero recognition of the project.
Intensive sampling
On reaching the patch, 1,900 kilometres (1,000 nautical miles) from the Californian coast, New Horizon began intensive sampling on 9 August. The crew took samples every few hours around the clock, using nets of various sizes and collecting samples at various depths. New Horizon returned on Friday, 21 August 2009. SEAPLEX reported their initial findings on Thursday, 27 August 2009, declaring that the patch stretched across at least 3,100 km (1,700 nmi). Plastic was found in every one of the 100 consecutive surface samples gathered. Miriam Goldstein, chief scientist of the SEAPLEX expedition, described the findings as "shocking". Speaking about the patch, Goldstein added, "There's no island, there's no eighth continent, it doesn't look like a garbage dump. It looks like beautiful ocean. But then when you put the nets in the water, you see all the little pieces".
Return
Kaisei returned to San Francisco on the morning of Monday 31 August. OVI founder and Project Kaisei co-founder Mary Crowley stated immediately following the Kaisei expeditions that the pollution was "what we expected to see, or a little worse." Andrea Neal, principal investigator on the Kaisei speaking on Tuesday 1 September stated that "Marine debris is the new man-made epidemic. It's that serious". Kaisei and New Horizon together had conducted tests over 6,500 km (3,500 nmi) of the ocean.
Initial findings from the voyages confirmed that the vast majority of the debris is small. The tiny portions of the debris field was said to be pervasive, and was found both at the surface and at numerous depths. It was also described as a "nearly inconceivable amount of tiny, confettilike pieces of broken plastic", increasing in density the further they sampled into the patch. Findings suggested that the presence of small debris, of a similar size to the existent marine life, could prove an obstacle to cleanup efforts. The research efforts also uncovered evidence of marine life consuming the microplastics.
Larger debris found consisted of mainly plastic bottles, but also included shoe soles, plastic buckets, patio chairs, Styrofoam pieces, old toys and fishing vessel buoys. A significant collection of floating debris became entangled in fishing nets creating dense patches of pollution. Various types of marine life were found on, around, and within the tangled bundles of debris. Some of the garbage collected was put on display at the Bay Model Visitor Center in Sausalito, California.
Goal
The initial feasibility mission aimed to collect 40 tons of debris, using special nets designed not to catch fish, in two passes through the field. The project would later test methods of recycling the collected garbage into new plastic, or commercial products such as diesel fuel or clothing. If the initial mission proved the collection and processing technologies to be viable, it was expected that the Kaisei would lead a full scale commercial cleanup voyage with other vessels, becoming operational within 18 months.
Fundraising and recognition
Ocean Voyages Institute raised $500,000 for the Project Kaisei initial voyages. The SEAPLEX expedition cost $387,000, funded with $190,000 from UC Ship Funds, $140,000 from Project Kaisei and $57,000 from the National Science Foundation. Project Kaisei is also partnered with the California Department of Toxic Substances Control.
The group has since been recognized by the United Nations Environment Programme (UNEP) in 2009 as a Climate Hero, by Google as a Google Earth Hero for its work with a video blogging voyage tracking system, and it was recently part of the Clinton Global Initiative in September 2010.
See also
Earth Day
Great Canadian Shoreline Cleanup
Junk raft
Kamilo Beach
Marine conservation
Marine debris
National Cleanup Day
Ocean Conservancy
Plastic recycling
Plastiki
SUPER HI-CAT
The Ocean Cleanup
World Cleanup Day
References
External links
Project Kaisei
Ocean Voyages Institute
SEAPLEX – Scripps Environmental Accumulation of Plastic Expedition
Research Vessel New Horizon
Biological oceanography
Ocean pollution
Oceanography
Pacific Ocean
Scripps Institution of Oceanography
Pacific expeditions | Project Kaisei | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,760 | [
"Ocean pollution",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Water pollution"
] |
23,882,970 | https://en.wikipedia.org/wiki/Adenosinergic | Adenosinergic means "working on adenosine".
An adenosinergic agent (or drug) is a chemical which functions to directly modulate the adenosine system in the body or brain. Examples include adenosine receptor agonists, adenosine receptor antagonists (such as caffeine), and adenosine reuptake inhibitors.
See also
Adrenergic
Cannabinoidergic
Cholinergic
Dopaminergic
GABAergic
Glycinergic
Histaminergic
Melatonergic
Monoaminergic
Opioidergic
Serotonergic
References
Neurochemistry
Neurotransmitters | Adenosinergic | [
"Chemistry",
"Biology"
] | 137 | [
"Biochemistry",
"Neurochemistry",
"Neurotransmitters"
] |
23,884,289 | https://en.wikipedia.org/wiki/%CE%93-Hydroxyvaleric%20acid | γ-Hydroxyvaleric acid (GHV), also known as 4-methyl-GHB, is a designer drug related to γ-hydroxybutyric acid (GHB). It is sometimes seen on the grey market as a legal alternative to GHB, but with lower potency and higher toxicity, properties which have tended to limit its recreational use.
γ-Valerolactone (GVL) acts as a prodrug to GHV, analogously to how γ-butyrolactone (GBL) is a prodrug to GHB.
See also
1,4-Butanediol (1-4-BD)
Aceburic acid
Valerenic acid
Valeric acid
References
GABAB receptor agonists
Gamma hydroxy acids
GHB receptor agonists
Designer drugs
Euphoriants
Hypnotics | Γ-Hydroxyvaleric acid | [
"Biology"
] | 185 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
23,884,660 | https://en.wikipedia.org/wiki/Strut%20channel | Strut channel, often referred to colloquially by one of several manufacturer trade names, is a standardized formed structural system used in the construction and electrical industries for light structural support, often for supporting wiring, plumbing, or mechanical components such as air conditioning or ventilation systems.
A strut is usually formed from a metal sheet, folded over into an open channel shape with inwards-curving lips to provide additional stiffness and as a location to mount interconnecting components. Increasingly, struts are being constructed from fiberglass, a highly corrosion-resistant material that is known for its lightweight strength and rigidity. Struts usually have holes of some sort in the base, to facilitate interconnection or fastening strut to underlying building structures.
The main advantage of strut channels in construction is that there are many options available for rapidly and easily connecting lengths together and other items to the strut channel, using various specialized strut-specific fasteners and bolts. They can be assembled very rapidly with minimal tools and only moderately trained labor, which reduces costs significantly in many applications. A strut channel installation also can often be modified or added-to relatively easily if needed. The only alternative to strut channels for most applications is custom fabrication using steel bar stock and other commodity components, requiring welding or extensive drilling and bolting, which has none of the above advantages.
The basic typical strut channel forms a box measuring about 41 mm (1 5/8 in) square. There are several additional sizes and combined shapes manufactured.
Shapes
Basic strut channel comes in an open box section about 41 mm (1 5/8 in) square. A half-height version is also available, used mostly where mounted directly to a wall, as it has significantly less stiffness and ability to carry loads across an open space or brace. A deep channel version is also manufactured.
The material used to form the channel is typically sheet metal with a thickness of about 1.9 mm or 2.7 mm (14 or 12 gauge; 0.0747 inch or 0.1046 inch, respectively).
Several variations are available with different hole patterns for mounting to walls and supports. Solid channel has no holes predrilled, and must either be drilled on site or mounted in another fashion. Punched channel has round holes, large enough for an M16 or 5/8 inch threaded steel rod or bolts, punched in the top of the channel at regular 48 mm (1 7/8 inch) centers. Half-slot channel has short, rounded end rectangular slots punched out on 50 mm (2 inch) centers. Slot channel has longer slots on 100 mm (4 inch) centers. In metric system based products, the eyelets are about 11×13 mm.
In addition, shapes are manufactured with two lengths of channel welded together back to back, or three or four welded together in various patterns, to form stronger structural elements.
Materials
Strut is normally made of sheet steel, with a zinc coating (galvanized), paint, epoxy, powder coat, or other finish.
Strut channel is also manufactured from stainless steel for use where rusting might become a problem (e.g., outdoors, facilities with corrosive materials), from aluminium alloy when weight is an issue or from fiberglass for very corrosive environments.
Standards
The Metal Framing Manufacturers Association (MFMA) defines a standard for strut channel construction that allows multiple manufacturers' channels to be compatible. The current version of the standard, as of 2020, is MFMA-4. Well-known manufacturers of strut channel, including Unistrut U.S., Cooper Industries/Eaton Corporation, and Thomas & Betts Corp./ABB Group, are members of the MFMA and defined the standard.
Interconnection
The inwards-facing lips on the open side of strut channel are routinely used to mount special nuts, braces, connecting angles, and other types of interconnection mechanism or device to join lengths of strut channel together or connect pipes, wire, other structures, threaded rod, bolts, or walls into the strut channel structural system.
Usage
Strut channel is used to mount, brace, support, and connect lightweight structural loads in building construction. These include pipes, electrical and data wire, mechanical systems such as ventilation, air conditioning, and other mechanical systems. Objects can be attached to the strut channel with a bolt, threaded into a channel nut, that may have a spring to ease installation. Circular objects such as pipes or cables may be attached with straps that have a shaped end to be retained by the channel. Strut channel is also used for other applications that require a strong framework, such as workbenches, shelving systems, equipment racks, etc. Specially made sockets are available to tighten nuts, bolts, etc. inside the channel, as normal sockets are unable to fit through the opening.
References
Structural engineering
Building materials | Strut channel | [
"Physics",
"Engineering"
] | 986 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Civil engineering",
"Matter",
"Building materials"
] |
12,905,248 | https://en.wikipedia.org/wiki/Peridynamics | Peridynamics is a non-local formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures. Originally, bond-based peridynamics was introduced, wherein internal interaction forces between a material point and all the other points with which it can interact are modeled as a central force field. This type of force field can be imagined as a mesh of bonds connecting each point of the body with every other interacting point within a certain distance, which depends on a material property called the peridynamic horizon. Later, to overcome the bond-based framework's limitations on the material Poisson's ratio (fixed at 1/3 for plane stress and 1/4 for plane strain in two-dimensional configurations; 1/4 for three-dimensional ones), state-based peridynamics was formulated. Its characteristic feature is that the force exchanged between a point and another one is influenced by the deformation state of all the other bonds within its interaction zone.
The characteristic feature of peridynamics, which makes it different from classical local mechanics, is the presence of finite-range bonds between any two points of the material body: it is a feature that brings such formulations close to discrete meso-scale theories of matter.
Etymology
The term peridynamic, as an adjective, was proposed in the year 2000 and comes from the prefix peri-, which means all around, near, or surrounding; and the root dyna, which means force or power. The term peridynamics, as a noun, is a shortened form of the phrase peridynamic model of solid mechanics.
Purpose
A fracture is a mathematical singularity to which the classical equations of continuum mechanics cannot be applied directly. The peridynamic theory has been proposed with the purpose of mathematically modeling the formation and dynamics of fractures in elastic materials. It is founded on integral equations, in contrast with classical continuum mechanics, which is based on partial differential equations. Since partial derivatives do not exist on crack surfaces and other geometric singularities, the classical equations of continuum mechanics cannot be applied directly when such features are present in a deformation. The integral equations of the peridynamic theory hold true also on singularities and can be applied directly, because they do not require partial derivatives. The ability to apply the same equations directly at all points in a mathematical model of a deforming structure helps the peridynamic approach avoid the need for the special techniques of fracture mechanics like XFEM. For example, in peridynamics, there is no need for a separate crack growth law based on a stress intensity factor.
Definition and basic terminology
In the context of peridynamic theory, physical bodies are treated as constituted by a continuous mesh of points which can exchange long-range mutual interaction forces, within a maximum and well-established distance δ: the peridynamic horizon radius. This perspective is much closer to molecular dynamics than to macroscopic continuum descriptions and, as a consequence, is not based on the concept of the stress tensor (which is a local concept) but drifts toward the notion of the pairwise force that a material point exchanges within its peridynamic horizon. With a Lagrangian point of view, suited for small displacements, the peridynamic horizon is considered fixed in the reference configuration and, then, deforms with the body. Consider a material body represented by Ω ⊂ ℝⁿ, where n can be either 1, 2 or 3. The body has a positive density ρ. Its reference configuration at the initial time is denoted by Ω₀. It is important to note that the reference configuration can either be the stress-free configuration or a specific configuration of the body chosen as a reference. In the context of peridynamics, every point x in Ω interacts with all the points x′ within a certain neighborhood defined by d(x, x′) ≤ δ, where δ > 0 and d represents a suitable distance function on Ω. This neighborhood is often referred to as Bδ(x) in the literature. It is commonly known as the horizon or the family of x.

The kinematics of x is described in terms of its displacement from the reference position, denoted as u(x, t). Consequently, the position of x at a specific time t is determined by x + u(x, t). Furthermore, for each pair of interacting points, the change in the length of the bond relative to the initial configuration is tracked over time through the relative strain s(t, x, x′), which can be expressed as:

s(t, x, x′) = (|u(x′, t) − u(x, t) + x′ − x| − |x′ − x|) / |x′ − x|,

where |·| denotes the Euclidean norm and x ≠ x′.

The interaction between any x and x′ is referred to as a bond. These pairwise bonds have varying lengths over time in response to the force per unit volume squared, denoted as

f(u(x′, t) − u(x, t), x′ − x).

This force is commonly known as the pairwise force function or peridynamic kernel, and it encompasses all the constitutive (material-dependent) properties. It describes how the internal forces depend on the deformation. It is worth noting that the dependence of f on t has been omitted here for the sake of simplicity of notation. Additionally, an external forcing term, b(x, t), is introduced, which results in the following equation of motion, representing the fundamental equation of peridynamics:

ρ(x) ∂²u(x, t)/∂t² = ∫_{Bδ(x)} f(u(x′, t) − u(x, t), x′ − x) dx′ + b(x, t),

where the integral term is the sum of all of the internal per-unit-volume forces acting on x.

The vector-valued function f is the force density that x′ exerts on x. This force density depends on the relative displacement and relative position vectors between x′ and x. The dimension of f is force per unit volume squared.
Bond-based peridynamics
In this formulation of peridynamics, the kernel is determined by the nature of internal forces and physical constraints that govern the interaction between only two material points. For the sake of brevity, the following quantities are defined: ξ = x′ − x and η = u(x′, t) − u(x, t), so that the kernel can be written as f(η, ξ).
Actio et reactio principle
For any x and any x′ belonging to the neighborhood Bδ(x), the following relationship holds: f(−η, −ξ) = −f(η, ξ). This expression reflects the principle of action and reaction, commonly known as Newton's Third Law. It guarantees the conservation of linear momentum in a system composed of mutually interacting particles.
Angular momentum conservation
For any x and any x′ belonging to the neighborhood Bδ(x), the following condition holds: (η + ξ) × f(η, ξ) = 0. This condition arises from considering the relative deformed ray-vector connecting x and x′ as η + ξ. The condition is satisfied if and only if the pairwise force density vector has the same direction as the relative deformed ray-vector. In other words, f(η, ξ) = g(η, ξ)(η + ξ) for all η and ξ, where g is a scalar-valued function.
Hyperelastic material
A hyperelastic material is a material with a constitutive relation such that the work done by the bond force is path-independent:

∮ f(η, ξ) · dη = 0 for every closed path and every fixed ξ,

or, equivalently, by Stokes' theorem,

∇η × f(η, ξ) = 0 for all η, ξ,

and, thus,

f(η, ξ) = ∇η Φ(η, ξ).

In the equation above, Φ is the scalar-valued potential function associated with the bond. Due to the necessity of satisfying angular momentum conservation, the condition below on the scalar-valued function g follows:

∇η Φ(η, ξ) = g(η, ξ)(η + ξ),

where g is a scalar-valued function. Integrating both sides of the equation, the following condition on Φ is obtained:

Φ(η, ξ) = Φ̂(|η + ξ|, ξ),

for a scalar-valued function Φ̂. The elastic nature of f is evident: the interaction force depends only on the initial relative position between points x and x′ and on the modulus of their relative position, |η + ξ|, in the deformed configuration at time t. Applying the isotropy hypothesis, the dependence on the vector ξ can be substituted with a dependence on its modulus |ξ|:

f(η, ξ) = h(|η + ξ|, |ξ|)(η + ξ).

Bond forces can thus be considered as modeling a spring net that connects each point x pairwise with the points x′ of its family.
Linear elastic material
If |η| ≪ 1, the peridynamic kernel can be linearised around η = 0:

f(η, ξ) ≈ f(0, ξ) + C(ξ) η;

then, a second-order micro-modulus tensor can be defined as

C(ξ) = ∂f/∂η (0, ξ),

which, for the kernels above, is a combination of the dyadic product ξ ⊗ ξ and the identity tensor I. Following application of linear momentum balance, elasticity and the isotropy condition, the micro-modulus tensor can be expressed in this form:

C(ξ) = λ(|ξ|) (ξ ⊗ ξ)/|ξ|².

Therefore, for a linearised hyperelastic material, its peridynamic kernel holds the following structure:

f(η, ξ) ≈ f(0, ξ) + λ(|ξ|) [(ξ ⊗ ξ)/|ξ|²] η.
Expressions for the peridynamic kernel
The peridynamic kernel is a versatile function that characterizes the constitutive behavior of materials within the framework of peridynamic theory. One commonly employed formulation of the kernel is used to describe a class of materials known as prototype micro-elastic brittle (PMB) materials. In the case of isotropic PMB materials, the pairwise force is assumed to be linearly proportional to the finite stretch experienced by the material, defined as

s(t, η, ξ) = (|η + ξ| − |ξ|) / |ξ|,

so that

f(η, ξ) = c s(t, η, ξ) μ(t, ξ) (η + ξ)/|η + ξ|,

where the scalar function μ is defined as follows:

μ(t, ξ) = 1 if s(t′, ξ) < s₀ for all 0 ≤ t′ ≤ t, and μ(t, ξ) = 0 otherwise,

with s₀ the critical bond stretch. The constant c is referred to as the micro-modulus constant, and the function μ serves to indicate whether, at a given time t, the bond stretch associated with the pair (x, x′) has surpassed the critical value s₀. If the critical value is exceeded, the bond is considered broken, and a pairwise force of zero is assigned for all subsequent times.

After a comparison between the strain energy density values obtained under isotropic extension employing, respectively, the peridynamic and the classical continuum theory frameworks, the physically coherent value of the micro-modulus can be found:

c = 18k/(πδ⁴),

where k is the material bulk modulus.
Following the same approach, the micro-modulus constant c can be extended to a micro-modulus function c(|ξ|, δ). This function provides a more detailed description of how the intensity of pairwise forces is distributed over the peridynamic horizon Bδ(x). Intuitively, the intensity of forces decreases as the distance between x and x′ increases, but the specific manner in which this decrease occurs can vary.
The micro-modulus function is expressed as

c(|ξ|, δ) = c₀ g(|ξ|, δ),

where the constant c₀ is obtained by comparing the peridynamic strain density with that of the classical mechanical theories; g is a function defined on Bδ(x) with normalization and limit properties (given the restrictions of momentum conservation and isotropy) such that, as the horizon shrinks, it concentrates at the origin in the manner of the Dirac delta function.
Cylindrical micro-modulus
The simplest expression for the micro-modulus function is

c(|ξ|, δ) = c₀ 1_{Bδ(x)}(ξ),

where 1_{Bδ(x)} is the indicator function of the subset Bδ(x), defined as

1_{Bδ(x)}(ξ) = 1 if ξ ∈ Bδ(x), and 0 otherwise.
Triangular micro-modulus
It is characterized by c being a linear, decreasing function of |ξ|:

c(|ξ|, δ) = c₀ (1 − |ξ|/δ) 1_{Bδ(x)}(ξ).
Normal micro-modulus
If one wants to reflect the fact that most common discrete physical systems are characterized by a Maxwell–Boltzmann distribution, in order to include this behavior in peridynamics, the following Gaussian expression for c can be utilized:

c(|ξ|, δ) = c₀ e^(−(|ξ|/δ)²) 1_{Bδ(x)}(ξ).
Quartic micro-modulus
In the literature one can also find the following quartic expression for the function:

c(|ξ|, δ) = c₀ (1 − (|ξ|/δ)²)² 1_{Bδ(x)}(ξ).
Overall, depending on the specific material property to be modeled, there exists a wide range of expressions for the micro-modulus and, in general, for the peridynamic kernel. The above list is, thus, not exhaustive.
Damage
Damage is incorporated in the pairwise force function by allowing bonds to break when their elongation exceeds some prescribed value. After a bond breaks, it no longer sustains any force, and the endpoints are effectively disconnected from each other. When a bond breaks, the force it was carrying is redistributed to other bonds that have not yet broken. This increased load makes it more likely that these other bonds will break. The process of bond breakage and load redistribution, leading to further breakage, is how cracks grow in the peridynamic model.
Analytically, the bond breaking is specified inside the expression of the peridynamic kernel by the function μ(t, ξ).
If the graph of the pairwise force versus the bond stretch is plotted, the action of the bond-breaking function in fracture formation is clear. However, not only abrupt fracture can be modeled in the peridynamic framework, and more general expressions for μ can be employed.
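To make the PMB-style bond-breaking rule above concrete, here is a minimal one-dimensional, bond-based time-stepping sketch (a toy: the micro-modulus value, grid, loading, and 1D force scaling are illustrative assumptions, not the calibrated 3D formula c = 18k/(πδ⁴) given earlier):

```python
import numpy as np

# Toy 1D bond-based peridynamic bar with PMB-style brittle bonds.
n, dx = 101, 1.0e-3                 # material points, spacing [m]
delta = 3.015 * dx                  # horizon radius (a common choice: ~3 dx)
c = 5.0e17                          # micro-modulus constant (illustrative)
s0 = 1.0e-4                         # critical stretch for bond breaking
rho = 7800.0                        # density [kg/m^3]
dt = 2.0e-8                         # explicit time step [s]

x = np.arange(n) * dx
u = np.zeros(n)
v = np.zeros(n)

# All bonds (i, j) with x_j - x_i <= delta; True while intact.
bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
         if x[j] - x[i] <= delta]
intact = np.ones(len(bonds), dtype=bool)

def pd_force(u):
    """Sum the pairwise bond forces f = c * s * mu over each point's family."""
    f = np.zeros(n)
    for k, (i, j) in enumerate(bonds):
        if not intact[k]:
            continue                      # a broken bond carries no force
        xi = x[j] - x[i]                  # reference bond length (positive here)
        eta = u[j] - u[i]                 # relative displacement
        s = eta / xi                      # 1D stretch (assumes no interpenetration)
        if s > s0:
            intact[k] = False             # irreversible break under tension
            continue
        fb = c * s * dx                   # force density times neighbor volume
        f[i] += fb                        # action ...
        f[j] -= fb                        # ... and reaction
    return f

for step in range(500):                   # explicit time stepping
    a = pd_force(u) / rho
    v += a * dt
    u += v * dt
    u[0] = 0.0                            # clamped left end
    u[-1] = 1.0e-8 * step                 # slowly pulled right end

print(f"{(~intact).sum()} of {len(bonds)} bonds broken")
```

Once a bond's stretch exceeds s0 its force is redistributed to the surviving bonds on the next step, which is exactly the load-shedding mechanism by which cracks grow in the model.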
State-based peridynamics
The theory described above assumes that each peridynamic bond responds independently of all the others. This is an oversimplification for most materials and leads to restrictions on the types of materials that can be modeled. In particular, this assumption implies that any isotropic linear elastic solid is restricted to a Poisson ratio of 1/4.
To address this lack of generality, the idea of peridynamic states was introduced. This allows the force density in each bond to depend on the stretches in all the bonds connected to its endpoints, in addition to its own stretch. For example, the force in a bond could depend on the net volume changes at the endpoints. The effect of this volume change, relative to the effect of the bond stretch, determines the Poisson ratio. With peridynamic states, any material that can be modeled within the standard theory of continuum mechanics can be modeled as a peridynamic material, while retaining the advantages of the peridynamic theory for fracture.
Mathematically, the equation for the internal and external force terms

ρ(x) ∂²u(x, t)/∂t² = ∫_{Bδ(x)} f(u(x′, t) − u(x, t), x′ − x) dx′ + b(x, t)

used in the bond-based formulations is substituted by

ρ(x) ∂²u(x, t)/∂t² = ∫_{Bδ(x)} {T[x, t]⟨x′ − x⟩ − T[x′, t]⟨x − x′⟩} dx′ + b(x, t),

where T is the force vector state field.
A general m-order state is a mathematical object similar to a tensor, with the exception that it is
in general non-linear;
in general non-continuous;
is not finite dimensional.
Vector states are states of order equal to 2. For a so-called simple material, T is defined as

T[x, t] = T̂(Y[x, t]),

where T̂ is a Riemann-integrable function on Bδ(x), and Y is called the deformation vector state field, defined by the following relation:

Y[x, t]⟨ξ⟩ = y(x + ξ, t) − y(x, t);

thus Y⟨ξ⟩ is the image of the bond ξ under the deformation,

such that

Y[x, t]⟨ξ⟩ ≠ 0 for all ξ ≠ 0,

which means that two distinct particles never occupy the same point as the deformation progresses.
It can be proved that the balance of linear momentum follows from the definition of T, while, if the constitutive relation is such that

∫_{Bδ(x)} Y[x, t]⟨ξ⟩ × T[x, t]⟨ξ⟩ dVξ = 0,

the force vector state field satisfies the balance of angular momentum.
Applications
The growing interest in peridynamics comes from its capability to fill the gap between atomistic theories of matter and classical local continuum mechanics. It is applied effectively to micro-scale phenomena, such as crack formation and propagation, wave dispersion, and intra-granular fracture. These phenomena can be described by appropriate adjustment of the peridynamic horizon radius, which is directly linked to the extent of non-local interactions between points within the material.
In addition to the aforementioned research fields, peridynamics' non-local approach to discontinuities has found applications in various other areas. In geomechanics, it has been employed to study water-induced soil cracks, geomaterial failure, rock fragmentation, and so on. In biology, peridynamics has been used to model long-range interactions in living tissues, cellular rupture, cracking of bio-membranes, and more. Furthermore, peridynamics has been extended to thermal diffusion theory, enabling the modeling of heat conduction in materials with discontinuities, defects, inhomogeneities, and cracks. It has also been applied to study advection-diffusion phenomena in multi-phase fluids and to construct models for transient advection-diffusion problems. With its versatility, peridynamics has been used in various multi-physics analyses, including micro-structural analysis, fatigue and heat conduction in composite materials, galvanic corrosion in metals, electricity-induced cracks in dielectric materials, and more.
See also
Continuum mechanics
Fracture mechanics
Movable cellular automaton
Molecular dynamics
Non-local operator
Singularity
References
Further reading
External links
Implementation of finite element and finite difference approximation of Nonlocal models
Peridigm, an open-source computational peridynamics code
PeriDoX open-source repository for peridynamics and its documentation
PeriLab open-source repository for peridynamics written in Julia
Sandia Laboratory-Peridynamics
Website on peridynamics
Continuum mechanics
Fracture mechanics | Peridynamics | [
"Physics",
"Materials_science",
"Engineering"
] | 3,016 | [
"Structural engineering",
"Fracture mechanics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Materials degradation"
] |
12,905,296 | https://en.wikipedia.org/wiki/Composite%20construction | Composite construction is a generic term to describe any building construction involving multiple dissimilar materials. Composite construction is often used in building aircraft, watercraft, and building construction. There are several reasons to use composite materials including increased strength, aesthetics, and environmental sustainability.
Structural engineering
In structural engineering, composite construction exists when two different materials are bound together so strongly that they act together as a single unit from a structural point of view. When this occurs, it is called composite action. One common example involves steel beams supporting concrete floor slabs. If the beam is not connected firmly to the slab, then the slab transfers all of its weight to the beam and the slab contributes nothing to the load-carrying capability of the beam. However, if the slab is connected positively to the beam with studs, then a portion of the slab can be assumed to act compositely with the beam. In effect, this composite creates a larger and stronger beam than would be provided by the steel beam alone. The structural engineer may calculate a transformed section as one step in analyzing the load-carrying capability of the composite beam.
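A minimal sketch of the transformed-section step mentioned above, assuming elastic behavior and full composite action (the dimensions and moduli are made-up example values, not design data):

```python
# Transformed-section properties of a steel beam plus concrete slab acting compositely.
E_steel, E_conc = 200e9, 30e9          # Young's moduli [Pa] (example values)
n = E_steel / E_conc                   # modular ratio

# Concrete slab: effective width b, thickness t, sitting on a 0.5 m deep steel section.
b, t = 1.5, 0.10                       # slab width and thickness [m]
A_s, I_s, y_s = 9.5e-3, 2.0e-4, 0.25   # steel area [m^2], inertia [m^4], centroid height [m]

# Transform the concrete to equivalent steel by dividing its width by n.
b_tr = b / n
A_c = b_tr * t
y_c = 0.50 + t / 2                     # slab centroid height above the beam soffit

# Composite centroid and moment of inertia (parallel-axis theorem).
A = A_s + A_c
y_bar = (A_s * y_s + A_c * y_c) / A
I = (I_s + A_s * (y_bar - y_s) ** 2
     + b_tr * t ** 3 / 12 + A_c * (y_bar - y_c) ** 2)
print(f"modular ratio n = {n:.1f}, composite I = {I:.3e} m^4 (steel alone: {I_s:.3e})")
```

With these example numbers the composite moment of inertia comes out roughly four times that of the bare steel beam, which is the efficiency gain composite action is used for.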
Ships
In 19th-century shipbuilding, composite construction was the use of an iron hull framework which was covered in timber planking to provide the water-tight skin of the hull. If properly insulated fastenings were used on the timber, the underwater hull could be covered with copper sheathing without the problem of galvanic corrosion. Copper sheathing prevented fouling and teredo worm, but could not be used on iron hulls. The iron framework of composite ships was less bulky and lighter than timber, so allowing more cargo in a hull of the same external shape. The weight saving was particularly significant. The strength and stiffness allowed sailing vessels to be driven hard as the accumulated straining of the hull did not produce the leaks that would develop in the older wooden built ships.
Composite hulls were used for the majority of the clippers built from the mid-1860s. Early experiments with the system started with a patent issued in 1839, under which the steamer Assam was built. Other patents followed, with differing methods of electrically insulating the iron frames and fastenings from the copper sheathing.
Surviving examples are , a steam and sail-powered warship, and the clipper .
House building
A flitch beam is a simple form of composite construction sometimes used in North American light frame construction. This occurs when a steel plate is sandwiched between two wood joists and bolted together. A flitch beam can typically support heavier loads over a longer span than an all-wood beam of the same cross section.
Deck construction
Composite wood decking
The traditional decking material is pressure-treated wood. The current material many contractors choose to use is composite decking. This material is typically made from wood–plastic composite or fiberglass reinforced plastic (FRP). Such materials do not warp, crack, or split and are as versatile as traditional pressure treated wood. Composite decking is made through several different processes, and there are a multitude of sizes, shapes, and strengths available. Depending on the type of composite selected the decking materials can be used for a number of other construction projects including fences and sheds.
Composite steel deck
In a composite steel deck, the dissimilar materials in question are steel and concrete. A composite steel deck combines the tensile strength of steel with the compressive strength of concrete to improve design efficiency and reduce the material necessary to cover a given area. Additionally, composite steel decks supported by composite steel joists can span greater distances between supporting elements and have reduced live load deflection in comparison to previous construction methods.
Cement-polymer composites
Cement-polymer composites are being developed and tested as a replacement for traditional cement. The traditional cement used as stucco rapidly deteriorates. The deterioration causes the material to easily crack due to thermo-processes becoming permeable to water and no longer structurally sound. The United States Environmental Protection Agency in conjunction with Materials and Electrochemical Research Corporation tested a cement-polymer composite material consisting of crumb rubber made from recycled rubber tires and cement. It was found that 20% crumb rubber can be added to the cement mixture without affecting the appearance of the cement. This new material was tested for strength and durability using American Society for Testing and Materials (ASTM International) standards.
See also
Composite material
Construction
Framing (construction)
Reinforced concrete
Structural analysis
References
Building materials
Civil engineering
Construction
Prestressed concrete construction | Composite construction | [
"Physics",
"Technology",
"Engineering"
] | 898 | [
"Building engineering",
"Architecture",
"Structural system",
"Prestressed concrete construction",
"Materials",
"Construction",
"Civil engineering",
"Matter",
"Building materials"
] |
125,769 | https://en.wikipedia.org/wiki/Binding%20energy | In physics and chemistry, binding energy is the smallest amount of energy required to remove a particle from a system of particles or to disassemble a system of particles into individual parts. In the former meaning the term is predominantly used in condensed matter physics, atomic physics, and chemistry, whereas in nuclear physics the term separation energy is used. A bound system is typically at a lower energy level than its unbound constituents. According to relativity theory, a decrease ΔE in the total energy of a system is accompanied by a decrease in the total mass, where Δm = ΔE/c².
Types
There are several types of binding energy, each operating over a different distance and energy scale. The smaller the size of a bound system, the higher its associated binding energy.
Mass–energy relation
A bound system is typically at a lower energy level than its unbound constituents because its mass must be less than the total mass of its unbound constituents. For systems with low binding energies, this "lost" mass after binding may be fractionally small, whereas for systems with high binding energies, the missing mass may be an easily measurable fraction. This missing mass may be lost during the process of binding as energy in the form of heat or light, with the removed energy corresponding to the removed mass through Einstein's equation E = mc². In the process of binding, the constituents of the system might enter higher energy states of the nucleus/atom/molecule while retaining their mass, and because of this, it is necessary that they are removed from the system before its mass can decrease. Once the system cools to normal temperatures and returns to ground states regarding energy levels, it will contain less mass than when it first combined and was at high energy. This loss of heat represents the "mass deficit", and the heat itself retains the mass that was lost (from the point of view of the initial system). This mass will appear in any other system that absorbs the heat and gains thermal energy.
For example, if two objects are attracting each other in space through their gravitational field, the attraction force accelerates the objects, increasing their velocity, which converts their potential energy (gravity) into kinetic energy. When the particles either pass through each other without interaction or elastically repel during the collision, the gained kinetic energy (related to speed) begins to revert into potential energy, driving the collided particles apart. The decelerating particles will return to the initial distance and beyond into infinity, or stop and repeat the collision (oscillation takes place). This shows that the system, which loses no energy, does not combine (bind) into a solid object, parts of which oscillate at short distances. Therefore, to bind the particles, the kinetic energy gained due to the attraction must be dissipated by resistive force. Complex objects in collision ordinarily undergo inelastic collision, transforming some kinetic energy into internal energy (heat content, which is atomic movement), which is further radiated in the form of photonsthe light and heat. Once the energy to escape the gravity is dissipated in the collision, the parts will oscillate at a closer, possibly atomic, distance, thus looking like one solid object. This lost energy, necessary to overcome the potential barrier to separate the objects, is the binding energy. If this binding energy were retained in the system as heat, its mass would not decrease, whereas binding energy lost from the system as heat radiation would itself have mass. It directly represents the "mass deficit" of the cold, bound system.
Closely analogous considerations apply in chemical and nuclear reactions. Exothermic chemical reactions in closed systems do not change mass, but do become less massive once the heat of reaction is removed, though this mass change is too small to measure with standard equipment. In nuclear reactions, the fraction of mass that may be removed as light or heat, i.e. binding energy, is often a much larger fraction of the system mass. It may thus be measured directly as a mass difference between rest masses of reactants and (cooled) products. This is because nuclear forces are comparatively stronger than the Coulombic forces associated with the interactions between electrons and protons that generate heat in chemistry.
Mass change
Mass change (decrease) in bound systems, particularly atomic nuclei, has also been termed mass defect, mass deficit, or mass packing fraction.
The difference between the unbound system calculated mass and experimentally measured mass of nucleus (mass change) is denoted as Δm. It can be calculated as follows:
Mass change = (unbound system calculated mass) − (measured mass of system)
e.g. (sum of masses of protons and neutrons) − (measured mass of nucleus)
After a nuclear reaction occurs that results in an excited nucleus, the energy that must be radiated or otherwise removed as binding energy in order to decay to the unexcited state may be in one of several forms. This may be electromagnetic waves, such as gamma radiation; the kinetic energy of an ejected particle, such as an electron, in internal conversion decay; or partly as the rest mass of one or more emitted particles, such as the particles of beta decay. No mass deficit can appear, in theory, until this radiation or this energy has been emitted and is no longer part of the system.
When nucleons bind together to form a nucleus, they must lose a small amount of mass, i.e. there is a change in mass to stay bound. This mass change must be released as various types of photon or other particle energy as above, according to the relation E = mc². Thus, after the binding energy has been removed, binding energy = mass change × c². This energy is a measure of the forces that hold the nucleons together. It represents energy that must be resupplied from the environment for the nucleus to be broken up into individual nucleons.
For example, an atom of deuterium has a mass defect of 0.0023884 Da, and its binding energy is nearly equal to 2.23 MeV. This means that energy of 2.23 MeV is required to disintegrate an atom of deuterium.
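The deuterium figure can be checked directly from the mass defect (a back-of-the-envelope check; the conversion factor is from standard tables, not from the article):

```python
# E = Δm · c², using the conversion 1 Da = 931.494 MeV/c².
mass_defect_Da = 0.0023884          # deuterium mass defect from the text
MeV_per_Da = 931.494                # standard Da-to-MeV conversion factor
E_bind = mass_defect_Da * MeV_per_Da
print(f"binding energy ≈ {E_bind:.3f} MeV")   # ≈ 2.225 MeV, matching ~2.23 MeV
```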
The energy given off during either nuclear fusion or nuclear fission is the difference of the binding energies of the "fuel", i.e. the initial nuclide(s), from that of the fission or fusion products. In practice, this energy may also be calculated from the substantial mass differences between the fuel and products, which uses previous measurements of the atomic masses of known nuclides, which always have the same mass for each species. This mass difference appears once evolved heat and radiation have been removed, which is required for measuring the (rest) masses of the (non-excited) nuclides involved in such calculations.
See also
Semi-empirical mass formula
Separation energy (binding energy of one nucleon)
Virial mass
Prout's hypothesis, an early model of the atom that did not account for mass defect
References
External links
Nuclear Binding Energy
Mass and Nuclide Stability
Experimental atomic mass data compiled Nov. 2003
Energy (physics)
Mass spectrometry
Nuclear physics
Forms of energy | Binding energy | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,446 | [
"Physical quantities",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantity",
"Mass",
"Forms of energy",
"Energy (physics)",
"Mass spectrometry",
"Nuclear physics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
2,612,812 | https://en.wikipedia.org/wiki/CNDO/2 | Complete Neglect of Differential Overlap (CNDO) is one of the first semiempirical methods in quantum chemistry. It uses the core approximation, in which only the outer valence electrons are explicitly included, and the approximation of zero differential overlap.
CNDO/2 is the main version of CNDO. The method was first introduced by John Pople and collaborators.
Background
An earlier method was the extended Hückel method, which explicitly ignores electron-electron repulsion terms. It was a method for calculating the electronic energy and the molecular orbitals. CNDO/1 and CNDO/2 were developed from this method by explicitly including the electron-electron repulsion terms, but neglecting many of them, approximating some of them, and fitting others to experimental data from spectroscopy.
Methodology
Quantum mechanics provides equations based on the Hartree–Fock method and the Roothaan equations that CNDO uses to model atoms and their locations. These equations are solved iteratively to the point where the results do not vary significantly between two iterations. CNDO does not involve knowledge about chemical bonds but instead uses knowledge about quantum wavefunctions.
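The iterate-until-self-consistent step can be sketched generically as follows (this is only the bare shape of a self-consistent-field loop, not an actual CNDO/2 implementation; the toy Fock builder and all numbers are arbitrary stand-ins):

```python
import numpy as np

def scf(h_core, build_fock, n_occ, tol=1e-8, max_iter=200):
    """Generic self-consistent-field loop: diagonalize, rebuild, repeat."""
    dim = h_core.shape[0]
    density = np.zeros((dim, dim))
    energy_old = 0.0
    for _ in range(max_iter):
        fock = build_fock(h_core, density)
        eps, coeffs = np.linalg.eigh(fock)        # orbital energies / MOs
        occ = coeffs[:, :n_occ]                   # fill the lowest orbitals
        density = 2.0 * occ @ occ.T               # closed-shell density matrix
        energy = np.sum(density * (h_core + fock)) / 2.0
        if abs(energy - energy_old) < tol:        # unchanged between iterations
            return energy, eps, coeffs
        energy_old = energy
    raise RuntimeError("SCF did not converge")

# Toy two-level example (numbers are arbitrary, for demonstration only).
h = np.array([[-1.0, -0.5], [-0.5, -0.8]])
fock_builder = lambda h, P: h + 0.1 * P           # stand-in "repulsion" term
E, eps, C = scf(h, fock_builder, n_occ=1)
print(f"converged electronic energy: {E:.6f}")
```

In a real CNDO/2 calculation the Fock builder would assemble the semiempirical one- and two-electron terms from the parameterized integrals, but the convergence logic is this same loop.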
CNDO can be used for both closed shell molecules, where the electrons are fully paired in molecular orbitals and open shell molecules, which are radicals with unpaired electrons. It is also used in solid state and nanostructures calculations.
CNDO is considered to yield good results for partial atomic charges and molecular dipole moment. Total energy and binding energy are calculated. Eigenvalues for calculating the highest occupied molecular orbital and lowest unoccupied molecular orbital are reported from the closed shell approach.
See also
Molecular geometry
INDO
NDDO
References
External links
CNDO/2 Calculation (Pirika)
Molecular Modeling for Educators
Density Functional Modelling of Point Defects in Semiconductors
Semiempirical quantum chemistry methods | CNDO/2 | [
"Chemistry"
] | 373 | [
"Computational chemistry",
"Quantum chemistry",
"Semiempirical quantum chemistry methods"
] |
2,614,426 | https://en.wikipedia.org/wiki/Hypophosphorous%20acid | Hypophosphorous acid (HPA), or phosphinic acid, is a phosphorus oxyacid and a powerful reducing agent with molecular formula H3PO2. It is a colorless low-melting compound, which is soluble in water, dioxane
and alcohols. The formula for this acid is generally written H3PO2, but a more descriptive presentation is HOP(O)H2, which highlights its monoprotic character. Salts derived from this acid are called hypophosphites.
HOP(O)H2 exists in equilibrium with the minor tautomer HP(OH)2. Sometimes the minor tautomer is called hypophosphorous acid and the major tautomer is called phosphinic acid.
Preparation and availability
Hypophosphorous acid was first prepared in 1816 by the French chemist Pierre Louis Dulong (1785–1838).
The acid is prepared industrially via a two step process: Firstly, elemental white phosphorus reacts with alkali and alkaline earth hydroxides to give an aqueous solution of hypophosphites:
P4 + 4 OH− + 4 H2O → 4 + 2 H2
Any phosphites produced in this step can be selectively precipitated out by treatment with calcium salts. The purified material is then treated with a strong, non-oxidizing acid (often sulfuric acid) to give the free hypophosphorous acid:
+ H+ → H3PO2
HPA is usually supplied as a 50% aqueous solution. Anhydrous acid cannot be obtained by simple evaporation of the water, as the acid readily oxidises to phosphorous acid and phosphoric acid and also disproportionates to phosphorous acid and phosphine. Pure anhydrous hypophosphorous acid can be formed by the continuous extraction of aqueous solutions with diethyl ether.
Properties
The molecule displays P(═O)H to P–OH tautomerism similar to that of phosphorous acid; the P(═O) form is strongly favoured.
In aqueous solution, heating at low temperatures (up to about 90 °C) prompts hypophosphorous acid to react with water to form phosphorous acid and hydrogen gas.
H3PO2 + H2O → H3PO3 + H2
Heating above 110 °C causes hypophosphorous acid to undergo disproportionation to give phosphorous acid and phosphine.
3 H3PO2 → 2 H3PO3 + PH3
Reactions
Inorganic
Hypophosphorous acid can reduce chromium(III) oxide to chromium(II) oxide:
H3PO2 + 2 Cr2O3 → 4 CrO + H3PO4
Inorganic derivatives
Most metal-hypophosphite complexes are unstable, owing to the tendency of hypophosphites to reduce metal cations back into the bulk metal. Some examples have been characterised, including the important nickel salt [Ni(H2O)6](H2PO2)2.
DEA List I chemical status
Because hypophosphorous acid can reduce elemental iodine to form hydroiodic acid, which is a reagent effective for reducing ephedrine or pseudoephedrine to methamphetamine, the United States Drug Enforcement Administration designated hypophosphorous acid (and its salts) as a List I precursor chemical effective November 16, 2001. Accordingly, handlers of hypophosphorous acid or its salts in the United States are subject to stringent regulatory controls including registration, recordkeeping, reporting, and import/export requirements pursuant to the Controlled Substances Act and 21 CFR §§ 1309 and 1310.
Organic
In organic chemistry, H3PO2 can be used for the reduction of arenediazonium salts, converting ArN2+ to Ar–H. When diazotized in a concentrated solution of hypophosphorous acid, an amine substituent can be removed from arenes.
Owing to its ability to function as a mild reducing agent and oxygen scavenger it is sometimes used as an additive in Fischer esterification reactions, where it prevents the formation of colored impurities.
It is used to prepare phosphinic acid derivatives.
Applications
Hypophosphorous acid (and its salts) are used to reduce metal salts back into bulk metals. It is effective for various transition metals ions (i.e. those of: Co, Cu, Ag, Mn, Pt) but is most commonly used to reduce nickel. This forms the basis of electroless nickel plating (Ni–P), which is the single largest industrial application of hypophosphites. For this application it is principally used as a salt (sodium hypophosphite).
Sources
ChemicalLand21 Listing
References
Oxoacids
Phosphorus oxoacids
Reagents for organic chemistry
Reducing agents | Hypophosphorous acid | [
"Chemistry"
] | 1,052 | [
"Redox",
"Reducing agents",
"Reagents for organic chemistry"
] |
2,615,082 | https://en.wikipedia.org/wiki/Trost%20ligand |
The Trost ligand is a diphosphine used in the palladium-catalyzed Trost asymmetric allylic alkylation. Other C2-symmetric ligands derived from trans-1,2-diaminocyclohexane (DACH) have been developed, such as the (R,R)-DACH-naphthyl ligand derived from 2-diphenylphosphino-1-naphthalenecarboxylic acid. Related bidentate phosphine-containing ligands derived from other chiral diamines and 2-diphenylphosphinobenzoic acid have also been developed for applications in asymmetric synthesis.
External links
Sigma-Aldrich: Trost Ligands
Trost Research Group
Reagents for organic chemistry
Catalysts
Tertiary phosphines
Benzamides | Trost ligand | [
"Chemistry"
] | 371 | [
"Catalysis",
"Catalysts",
"Chemical process stubs",
"Organic compounds",
"Reagents for organic chemistry",
"Chemical reaction stubs",
"Chemical kinetics",
"Organic compound stubs",
"Organic chemistry stubs"
] |
2,616,258 | https://en.wikipedia.org/wiki/Pole%20figure | A pole figure is a graphical representation of the orientation of objects in space. For example, pole figures in the form of stereographic projections are used to represent the orientation distribution of crystallographic lattice planes in crystallography and texture analysis in materials science.
Definition
Consider an object with a basis attached to it. The orientation of the object in space can be determined by three rotations to transform the reference basis of space to the basis attached to the object; these are the Euler angles.
If we consider a plane of the object, the orientation of the plane can be given by its normal line. If we draw a sphere whose center lies on the plane, then
the intersection of the sphere and the plane is a circle, called the "trace";
the intersection of the normal line and the sphere is the pole.
A single pole is not enough to fully determine the orientation of an object: the pole stays the same if we apply a rotation around the normal line. The orientation of the object is fully determined by the use of poles of two planes that are not parallel.
Stereographic projection
The upper sphere is projected on a plane using the stereographic projection.
Consider the (x,y) plane of the reference basis; its trace on the sphere is the equator of the sphere. We draw a line joining the South pole with the pole of interest P.
It is possible to choose any projection plane parallel to the equator (except the South pole): the figures will be proportional (property of similar triangles). It is usual to place the projection plane at the North pole.
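For concreteness, a minimal Python sketch of the projection (the function name is ours, not from any standard library); a pole (x, y, z) on the upper hemisphere, projected from the south pole onto the equatorial plane, maps to (x, y)/(1 + z):

import numpy as np

def stereographic_projection(pole):
    """Project a unit vector (pole) on the upper hemisphere onto the
    equatorial plane, projecting from the south pole (0, 0, -1).
    A pole (x, y, z) with z >= 0 maps to (X, Y) = (x, y) / (1 + z).
    Projecting instead onto a plane at the north pole only rescales
    the figure by a factor of 2 (the similar-triangles property)."""
    x, y, z = np.asarray(pole, dtype=float)
    if z < 0:
        raise ValueError("pole must lie on the upper hemisphere (z >= 0)")
    return np.array([x, y]) / (1.0 + z)

# Example: the pole of a plane whose normal is tilted 45 degrees from z.
pole = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
print(stereographic_projection(pole))  # ~[0.414, 0.], inside the unit circle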
Definition
The pole figure is the stereographic projection of the poles used to represent the orientation of an object in space.
Geometry in the pole figure
A Wulff net is used to read a pole figure.
The stereographic projection of a trace is an arc. The Wulff net is arcs corresponding to planes that share a common axis in the (x,y) plane.
If the pole and the trace of a plane are represented on the same diagram, then
we turn the Wulff net so the trace corresponds to an arc of the net;
the pole is situated on an arc, and the angular distance between this arc and the trace is 90°.
Consider an axis Δ and the planes belonging to the zone of this axis, i.e., planes that all contain Δ; the intersection of all these planes is Δ. If we call P the plane that is perpendicular to Δ, then the normals to the planes all lie in P. Thus, the poles of the planes belonging to the same zone lie on the trace of the plane P perpendicular to the zone axis, as the small check below illustrates.
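In crystallographic terms this zone condition is the Weiss zone law: a plane (hkl) belongs to the zone [uvw] when hu + kv + lw = 0. A small Python check (the helper name is ours):

import numpy as np

def in_zone(plane_hkl, zone_uvw):
    """Weiss zone law: a plane (hkl) belongs to the zone [uvw] when
    h*u + k*v + l*w == 0. For cubic crystals the plane normal has the
    same indices as the plane, so these poles lie on one great circle."""
    return np.dot(plane_hkl, zone_uvw) == 0

# Example: three planes in the [111] zone
for hkl in [(1, -1, 0), (1, 0, -1), (2, -1, -1)]:
    print(hkl, in_zone(hkl, (1, 1, 1)))  # all True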
Application
Planes of a crystal
The structure of a crystal is often represented by the pole figure of its crystallographic planes.
A plane is chosen as the equator, usually the (001) or (011) plane; its pole is the center of the figure. Then, the poles of the other planes are placed on the figure, with the Miller indices for each pole. The poles that belong to a zone are sometimes linked with the related trace.
Texture
"Texture" in the context of Materials Science means "crystallographic preferred orientation". If a polycrystalline material (i.e. a material composed of many different crystals or grains, like most metals, ceramics or minerals) has "texture" then that means that the crystal axes are not randomly (or, more correctly, uniformly) distributed.
To draw a pole figure, one chooses a particular crystal direction (e.g. the normal to the (100) plane) and then plots that direction, called a pole, for every crystal relative to a set of directions in the material. In a rolled metal, for example, the directions in the material are the rolling direction, transverse direction and rolling plane normal.
If a large number of crystals are involved, then it is typical to make a contour plot, rather than plotting individual poles.
The full determination of the texture requires the plot of two pole figures corresponding to planes that are not parallel and that do not have the same diffraction angle (thus different interplanar distances).
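To make the construction concrete, here is a short Python sketch (helper names are ours; no plotting library is implied) that computes pole-figure coordinates of a chosen crystal direction for a set of grain orientations:

import numpy as np

def pole_figure_xy(rotations, crystal_dir=(1.0, 0.0, 0.0)):
    """For each grain orientation (a 3x3 rotation matrix mapping crystal
    axes to sample axes), rotate the chosen crystal direction into sample
    coordinates and stereographically project it. Poles on the lower
    hemisphere are flipped to the upper one, as is conventional."""
    d = np.asarray(crystal_dir, dtype=float)
    d /= np.linalg.norm(d)
    points = []
    for R in rotations:
        p = R @ d
        if p[2] < 0:          # use the antipodal pole instead
            p = -p
        points.append(p[:2] / (1.0 + p[2]))
    return np.array(points)

# Example: a few grains rotated about the sample z axis
angles = np.deg2rad([0.0, 30.0, 60.0])
rots = [np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]]) for a in angles]
print(pole_figure_xy(rots))

With many grains, the resulting point cloud is what would be contoured to produce the pole figure described above.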
Diffraction pattern
Consider the diffraction pattern obtained with a single crystal, on a plane that is perpendicular to the beam, e.g. X-ray diffraction with the Laue method, or electron diffraction in a transmission electron microscope. The diffraction figure shows spots.
The position of the spots is determined by the Bragg's law. It gives the orientation of the plane.
If the parameters of the optics are known (especially the distance between the crystal and the photographic film), it is possible to build the stereographic diagram from the diffraction diagram, i.e. to transform the diffraction pattern into a pole figure.
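As a sketch of that transformation (assuming a transmission geometry with the incident beam along z, the film perpendicular to the beam at distance L behind the crystal, and elastic scattering), the pole of the reflecting plane can be recovered from a spot position because the scattering vector k_out − k_in is parallel to the plane normal; the function name is ours:

import numpy as np

def spot_to_pole(x, y, L):
    """Convert a diffraction spot at film coordinates (x, y) into the
    unit normal of the reflecting plane. k_in points along +z and
    k_out points from the crystal toward the spot; their difference
    is parallel to the plane normal (Bragg condition)."""
    k_in = np.array([0.0, 0.0, 1.0])
    k_out = np.array([x, y, L])
    k_out /= np.linalg.norm(k_out)
    n = k_out - k_in
    return n / np.linalg.norm(n)

print(spot_to_pole(10.0, 0.0, 50.0))  # pole nearly in-plane, tilted toward -z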
References
Kocks, U. F., C. Tomé, and H.-R. Wenk, eds. (1998). Texture and Anisotropy. Cambridge University Press, Cambridge, UK.
Val Randle and Olaf Engler (2000). Macrotexture, Microtexture & Orientation Mapping. Gordon & Breach, Amsterdam.
Adam Morawiec (2003). Orientations and Rotations. Springer.
Piotr Ozga, Pole Figures: Registration and Plot Conventions, http://www.labosoft.com.pl/pf_convention.pdf
External links
Wulff net with a step of 2° (PDF file, 1p, 272KB)
http://www.texture.de
http://mimp.materials.cmu.edu
MTEX — MATLAB toolbox for Texture Analysis
Step-by-step animated construction of a Pole Figure from aluMATTER
StereoPol — plotting & indexing
Materials science | Pole figure | [
"Physics",
"Materials_science",
"Engineering"
] | 1,182 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,616,345 | https://en.wikipedia.org/wiki/Drillship | A drillship is a merchant vessel designed for use in exploratory offshore drilling of new oil and gas wells or for scientific drilling purposes. In recent years the vessels have been used in deepwater and ultra-deepwater applications, equipped with the latest and most advanced dynamic positioning systems.
History
The first drillship was the CUSS I, designed by Robert F. Bauer of Global Marine in 1955. The CUSS I had drilled in 400-foot-deep waters by 1957.
Robert F. Bauer became the first president of Global Marine in 1958.
In 1961 Global Marine started a new drillship era. They ordered several self-propelled drillships, each rated to drill 20,000-foot wells on centerline in water depths of 600 feet. The first was named CUSS (Glomar) II, a 5,500-deadweight-ton vessel, costing around $4.5 million. Built by a Gulf Coast shipyard, the vessel was almost twice the size of the CUSS I, and became the world's first drillship built as a new construction when it set sail in 1962.
In 1962, The Offshore Company elected to build a new type of drillship, larger than that of the Glomar class. This new drillship would feature a first-ever anchor mooring array based on a unique turret system. The vessel was named Discoverer I. The Discoverer I had no main propulsion engines, meaning it needed to be towed out to the drill site.
Application
A drillship can be used as a platform to carry out well maintenance or completion work such as casing and tubing installation, subsea tree installations, and well capping. Drillships are often built to the design specifications set by the oil production company or investors.
The fleet has grown steadily since the first drillship, CUSS I, through to modern vessels such as the Deepwater Asgard. In 2013 the worldwide fleet of drillships topped 80 ships, more than double its size in 2009. Drillships are growing not only in size but also in capability, with new technology assisting operations from academic research to ice-breaker-class drilling vessels. U.S. President Barack Obama's decision in late March 2010 to expand U.S. domestic exploratory drilling seemed likely to increase further developments of drillship technology.
Design
Drillships are just one way to perform various types of drilling. This function can also be performed by semi-submersibles, jackups, barges, or platform rigs.
Drillships have the functional ability of semi-submersible drilling rigs and also have a few unique features that separate them from all others, the first being the ship-shaped design. A drillship has greater mobility and can move quickly under its own propulsion from drill site to drill site, in contrast to semi-submersibles and jackup barges and platforms. Drillships have the ability to save time sailing between oilfields worldwide. A drillship takes 20 days to move from the Gulf of Mexico to offshore Angola, whereas a semi-submersible drilling unit must be towed and takes 70 days. Drillship construction costs are much higher than those of a semi-submersible. Although mobility comes at a high price, drillship owners can charge higher day rates and get the benefit of lower idle times between assignments.
The table below depicts the industry's way of classifying drill sites into different vintages, depending on their age and water depth.
The drilling operations are very detailed and in-depth. A simple way to understand what a drillship does to drill, is that a marine riser is lowered from the drillship to the seabed with a blowout preventer (BOP) at the bottom that connects to the wellhead. The BOP is used to quickly disconnect the riser from the wellhead in times of emergency or in any needed situation. Underneath the derrick is a moonpool, an opening through the hull covered by the rig floor. Some of the modern drillships have larger derricks that allow dual activity operations, for example, simultaneous drilling and casing handling.
Types
There are different types of offshore drilling units, such as the oil platform, jackup rig, submersible drilling rig, semi-submersible platform and, of course, drillships. All drillships have what is called a "moon pool". The moon pool is an opening in the base of the hull; depending on the vessel's mission, drilling equipment, small submersible craft and divers may pass through it. Since the drillship is also a vessel, it can easily relocate to any desired location. Due to their mobility, drillships are not as stable as semi-submersible platforms. To maintain position, a drillship may deploy its anchors or rely on its onboard computer-controlled dynamic positioning system.
Chikyū
One of the world's best-known drillships is Japan's ocean-going drilling vessel Chikyū, which is actually a research vessel. The Chikyū has the remarkable ability to drill to a depth below the seabed two to four times greater than that of any other drillship.
Dhirubhai Deepwater KG1
In 2011 the Transocean drillship Dhirubhai Deepwater KG1 set the world water-depth record at 10,194 feet of water (3,107 meters) while working for Reliance – LWD and directional drilling done by Sperry Drilling in India.
Drillship operators
Diamond Offshore Drilling
International Ocean Discovery Program
Noble Corporation
Odfjell Drilling
Seadrill
Transocean
Türkiye Petrolleri Anonim Ortaklığı (TPAO)
Valaris Limited
Foresea
References
Ship
Offshore engineering | Drillship | [
"Engineering"
] | 1,164 | [
"Construction",
"Offshore engineering"
] |
2,616,544 | https://en.wikipedia.org/wiki/Websterite | Websterite is an ultramafic igneous rock, a type of pyroxenite that has less than 5% olivine and roughly equal proportions of orthopyroxene and clinopyroxene.
Websterite is named after the town Webster in North Carolina.
References
Plutonic rocks
Ultramafic rocks | Websterite | [
"Chemistry"
] | 70 | [
"Ultramafic rocks",
"Igneous rocks by composition"
] |
2,616,612 | https://en.wikipedia.org/wiki/Supersymmetric%20gauge%20theory | In theoretical physics, there are many theories with supersymmetry (SUSY) which also have internal gauge symmetries. Supersymmetric gauge theory generalizes this notion.
Gauge theory
A gauge theory is a field theory with gauge symmetry. Roughly, there are two types of symmetries, global and local. A global symmetry is a symmetry applied uniformly (in some sense) to each point of a manifold. A local symmetry is a symmetry which is position dependent. Gauge symmetry is an example of a local symmetry, with the symmetry described by a Lie group (which mathematically describe continuous symmetries), which in the context of gauge theory is called the gauge group of the theory.
Quantum chromodynamics and quantum electrodynamics are famous examples of gauge theories.
Supersymmetry
In particle physics, there exist particles with two kinds of particle statistics: bosons and fermions. Bosons carry integer spin values and are characterized by the ability of any number of identical bosons to occupy a single point in space. They are thus identified with forces. Fermions carry half-integer spin values, and by the Pauli exclusion principle, identical fermions cannot occupy a single position in spacetime. Fermion fields are interpreted as matter. Thus, supersymmetry is considered a strong candidate for the unification of radiation (boson-mediated forces) and matter.
This unification is given by an operator (or typically many operators), known as a supercharge or supersymmetry generator, which acts schematically as

Q|boson⟩ = |fermion⟩,  Q|fermion⟩ = |boson⟩
For instance, the supersymmetry generator can take a photon as an argument and transform it into a photino and vice versa. This happens through translation in the fermionic (parameter) directions of superspace. This superspace is a Z2-graded vector space H = H0 ⊕ H1, where H0 is the bosonic Hilbert space and H1 is the fermionic Hilbert space.
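For reference, the defining anticommutator of the minimal four-dimensional supersymmetry algebra (a standard result, stated here for orientation rather than taken from this article) can be written in LaTeX as:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Supercharges square to a translation: the anticommutator of a
% supersymmetry generator with its conjugate gives the 4-momentum.
\begin{equation}
  \{ Q_\alpha, \bar{Q}_{\dot{\beta}} \}
    = 2\,\sigma^{\mu}_{\alpha\dot{\beta}} P_{\mu}, \qquad
  \{ Q_\alpha, Q_\beta \} = \{ \bar{Q}_{\dot{\alpha}}, \bar{Q}_{\dot{\beta}} \} = 0 .
\end{equation}
\end{document}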
SUSY gauge theory
The motivation for a supersymmetric version of gauge theory can be the fact that gauge invariance is consistent with supersymmetry.
The first examples were discovered by Bruno Zumino and Sergio Ferrara, and independently by Abdus Salam and James Strathdee in 1974.
Both the half-integer spin fermions and the integer spin bosons can become gauge particles. The gauge vector fields and its spinorial superpartner can be made to both reside in the same representation of the internal symmetry group.
Suppose we have a gauge transformation A → A + ∂λ, where A is a vector field and λ is the gauge function. The main difficulty in the construction of a SUSY gauge theory is to extend the above transformation in a way that is consistent with SUSY transformations.
The Wess–Zumino gauge (a prescription for supersymmetric gauge fixing) provides a successful solution to this problem. Once such a suitable gauge is obtained, the dynamics of the SUSY gauge theory work as follows: we seek a Lagrangian that is invariant under the super-gauge transformations (these transformations are an important tool needed to develop the supersymmetric version of a gauge theory). Then we can integrate the Lagrangian using the Berezin integration rules and thus obtain the action, which in turn leads to the equations of motion and hence provides a complete analysis of the dynamics of the theory.
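The Berezin integration rules mentioned above are simple enough to state in full; for a single Grassmann variable θ they read:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Berezin integration over a single anticommuting variable:
% integration acts like differentiation, picking out the
% coefficient of theta in the integrand.
\begin{equation}
  \int d\theta \; 1 = 0, \qquad \int d\theta \; \theta = 1 .
\end{equation}
\end{document}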
SUSY in 4D (with 4 real generators)
In four dimensions, the minimal supersymmetry may be written using a superspace. This superspace involves four extra fermionic coordinates θ^α and θ̄_α̇ (α, α̇ = 1, 2), transforming as a two-component spinor and its conjugate.
Every superfield, i.e. a field that depends on all coordinates of the superspace, may be expanded with respect to the new fermionic coordinates. There exists a special kind of superfield, the so-called chiral superfield, that depends only on the variables θ but not their conjugates (more precisely, D̄_α̇ f = 0). However, a vector superfield depends on all coordinates. It describes a gauge field and its superpartner, namely a Weyl fermion that obeys a Dirac equation.
In components (in one common convention),

V = C + iθχ − iθ̄χ̄ + (i/2)θθ(M + iN) − (i/2)θ̄θ̄(M − iN) − θσ^μθ̄ A_μ + iθθθ̄λ̄ − iθ̄θ̄θλ + (1/2)θθθ̄θ̄ D

Here V is the vector superfield (prepotential) and is real (V = V̄). The fields on the right-hand side are component fields.
The gauge transformations act as

V → V + Λ + Λ̄

where Λ is any chiral superfield.
It is easy to check that the chiral superfield

W_α = −(1/4) D̄ D̄ D_α V

is gauge invariant. So is its complex conjugate W̄_α̇.
A non-supersymmetric covariant gauge which is often used is the Wess–Zumino gauge. Here, C, χ, M, and N are all set to zero. The residual gauge symmetries are gauge transformations of the traditional bosonic type.
A chiral superfield Φ with a charge of q transforms as

Φ → e^{−qΛ} Φ

Therefore Φ̄ e^{qV} Φ is gauge invariant. Here e^{qV} is called a bridge since it "bridges" a field which transforms under Λ only with a field which transforms under Λ̄ only.
More generally, if we have a real gauge group G that we wish to supersymmetrize, we first have to complexify it to G_c; e^{qV} then acts as a compensator for the complex gauge transformations, in effect absorbing them and leaving only the real parts. This is what is being done in the Wess–Zumino gauge.
Differential superforms
Let us rephrase everything to look more like a conventional Yang–Mills gauge theory. We have a gauge symmetry acting upon full superspace with a 1-superform gauge connection A. In the analytic basis for the tangent space, the covariant derivative is given by D_M = d_M + iA_M. Integrability conditions for chiral superfields with the chiral constraint

D̄_α̇ X = 0

leave us with

{D̄_α̇, D̄_β̇} = 0

A similar constraint for antichiral superfields leaves us with {D_α, D_β} = 0. This means that we can either gauge fix A_α̇ = 0 or A_α = 0 but not both simultaneously. Call the two different gauge fixing schemes I and II respectively. In gauge I, A_α̇ = 0, and in gauge II, A_α = 0. Now, the trick is to use two different gauges simultaneously: gauge I for chiral superfields and gauge II for antichiral superfields. In order to bridge between the two different gauges, we need a gauge transformation. Call it e^V (by convention). If we were using one gauge for all fields, X̄X would be gauge invariant. However, we need to convert gauge I to gauge II, transforming X to e^V X. So, the gauge invariant quantity is X̄ e^V X.
In gauge I, we still have the residual gauge Λ, where D̄_α̇ Λ = 0, and in gauge II, we have the residual gauge Λ̄, satisfying D_α Λ̄ = 0. Under the residual gauges, the bridge transforms as

e^V → e^{Λ̄} e^V e^{Λ}
Without any additional constraints, the bridge e^V would not give all the information about the gauge field. However, with the additional conventional constraint that the remaining field strength components vanish, there is only one gauge field compatible with the bridge modulo gauge transformations. Now, the bridge gives exactly the same information content as the gauge field.
Theories with 8 or more SUSY generators ()
In theories with higher supersymmetry (and perhaps higher dimension), a vector superfield typically describes not only a gauge field and a Weyl fermion but also at least one complex scalar field.
Examples
Pure supersymmetric gauge theories
N = 1 Super Yang–Mills
N = 2 Super Yang–Mills
N = 4 Super Yang–Mills
Supersymmetric gauge theories with matter
Super QCD
MSSM (Minimal supersymmetric Standard Model)
NMSSM (Next-to-minimal supersymmetric Standard Model)
See also
superpotential
D-term
F-term
current superfield
Supersymmetric quantum mechanics
References
Stephen P. Martin. A Supersymmetry Primer.
Prakash, Nirmala. Mathematical Perspective on Theoretical Physics: A Journey from Black Holes to Superstrings, World Scientific (2003).
Supersymmetric quantum field theory
Gauge theories | Supersymmetric gauge theory | [
"Physics"
] | 1,561 | [
"Supersymmetric quantum field theory",
"Supersymmetry",
"Symmetry"
] |
2,616,675 | https://en.wikipedia.org/wiki/Flavor-changing%20neutral%20current | In particle physics, flavor-changing neutral currents or flavour-changing neutral currents (FCNCs) are hypothetical interactions that change the flavor of a fermion without altering its electric charge.
Details
If they occur in nature (as reflected by Lagrangian interaction terms), these processes may induce phenomena that have not yet been observed in experiment. Flavor-changing neutral currents may occur in the Standard Model beyond the tree level, but they are highly suppressed by the GIM mechanism. Several collaborations have searched for FCNC. The Tevatron CDF experiment observed evidence of FCNC in the decay of the strange B-meson to phi mesons in 2005.
FCNCs are generically predicted by theories that attempt to go beyond the Standard Model, such as the models of supersymmetry or technicolor. Their suppression is necessary for an agreement with observations, making FCNCs important constraints on model-building.
Example
Consider a toy model in which an undiscovered boson S may couple both to the electron and the tau (τ) via an interaction term of the form g S ēτ + h.c.
Since the electron and the tau have equal charges, the electric charge of S clearly must vanish to respect the conservation of electric charge. A Feynman diagram with S as the intermediate particle is able to convert a tau into an electron (plus some neutral decay products of the S).
The MEG experiment at the Paul Scherrer Institute near Zürich will search for a similar process, in which an antimuon decays to a photon and an antielectron (a positron). In the Standard Model, such a process proceeds only by emission and re-absorption of a charged W boson, which changes the antimuon into a neutrino on emission and then a positron on re-absorption, and finally emits a photon that carries away any difference in energy, spin, and momentum.
In most cases of interest, the boson involved is not a new boson S but the conventional Z boson itself. This can occur if the coupling to weak neutral currents is (slightly) non-universal. The dominant universal coupling to the Z boson does not change flavor, but sub-dominant non-universal contributions can.
FCNCs involving the Z boson for the down-type quarks at zero momentum transfer are usually parameterized by an effective action term of the schematic form

U_ij (d̄_i γ^μ d_j) Z_μ
This particular example of FCNC is the most studied because fairly strong constraints come from the decays of B mesons measured at Belle and BaBar. The off-diagonal entries of U parameterize the FCNCs, and current constraints restrict them to be less than one part in a thousand for |U_bs|. The contribution coming from the one-loop Standard Model corrections is actually dominant, but the experiments are precise enough to measure slight deviations from the Standard Model prediction.
Experiments tend to focus on flavor-changing neutral currents as opposed to charged currents, because the weak neutral current (Z boson) does not change flavor in the Standard Model proper at the tree level, whereas the weak charged currents (W± bosons) do. New physics in charged-current events would be swamped by more numerous W boson interactions; new physics in the neutral current would not be masked by a large effect due to ordinary Standard Model physics.
See also
Neutral particle oscillation
Penguin diagram
Two-Higgs-doublet model
References
Standard Model
Physics beyond the Standard Model
Hypothetical processes | Flavor-changing neutral current | [
"Physics"
] | 685 | [
"Standard Model",
"Hypotheses in physics",
"Theoretical physics",
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
2,616,711 | https://en.wikipedia.org/wiki/Nielsen%E2%80%93Olesen%20string | In theoretical physics, Nielsen–Olesen string is a one-dimensional object or equivalently a classical solution of certain equations of motion. The solution does not depend on the direction along the string; the dependence on the other two, transverse dimensions is identical as in the case of a Nielsen–Olesen vortex.
References
Quantum field theory | Nielsen–Olesen string | [
"Physics"
] | 68 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
2,616,724 | https://en.wikipedia.org/wiki/Nielsen%E2%80%93Olesen%20vortex | In theoretical physics, a Nielsen–Olesen vortex is a point-like object localized in two spatial dimensions or, equivalently, a classical solution of field theory with the same property. This particular solution occurs if the configuration space of scalar fields contains non-contractible circles. A circle surrounding the vortex at infinity may be "wrapped" once on the other circle in the configuration space. A configuration with this non-trivial topological property is called the Nielsen–Olesen vortex, after Holger Bech Nielsen and Poul Olesen (1973). The solution is formally identical to the solution of Quantum vortex in superconductor.
See also
Nielsen–Olesen string
Abrikosov vortex
Montonen–Olive duality
S-duality
References
Quantum field theory
Vortices | Nielsen–Olesen vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 160 | [
"Quantum field theory",
"Vortices",
"Quantum mechanics",
"Dynamical systems",
"Quantum physics stubs",
"Fluid dynamics"
] |
2,616,772 | https://en.wikipedia.org/wiki/Ultraviolet%20completion | In theoretical physics, ultraviolet completion, or UV completion, of a quantum field theory is the passing from a lower energy quantum field theory to a more general quantum field theory above a threshold value known as the cutoff. In particular, the more general high energy theory must be well-defined at arbitrarily high energies.
The word "ultraviolet" in this so-called "ultraviolet regime" is only figurative, and refers to energies much higher than ultraviolet light per se. Rather, by analogy to the relationship between ultraviolet and visible light, it refers to energies higher than (and wavelengths shorter than) those "visible" to laboratory experiment.
The ultraviolet theory must be renormalizable; it can have no Landau poles; and most typically, it enjoys asymptotic freedom in the case that it is a quantum field theory (or at least has a nontrivial fixed point). However, it may also be a background of string theory whose ultraviolet behavior is at least as good as that of renormalizable quantum field theories. Besides these two known examples (QFT and string theory), it could be a completely different theory than string theory that behaves well at very high energies.
There is an analogous phrase "infrared completion", which applies to length scales longer than those "visible" to normal experiment, particularly cosmological distances.
See also
Ultraviolet divergence
Fermi's interaction
Quantum mechanics
String theory
References
Quantum field theory
Renormalization group | Ultraviolet completion | [
"Physics"
] | 298 | [
"Quantum field theory",
"Physical phenomena",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics",
"Quantum physics stubs"
] |
2,616,805 | https://en.wikipedia.org/wiki/Theta%20structure | A theta structure is an intermediate structure formed during the replication of a circular DNA molecule. Two replication forks can proceed independently around the DNA ring and when viewed from above the structure resembles the Greek letter "theta" (θ). Originally discovered by John Cairns, it led to the understanding that (in this case) bidirectional DNA replication could take place. Proof of the bidirectional nature came from providing replicating cells with a pulse of tritiated thymidine, quenching rapidly and then autoradiographing. Results showed that the radioactive thymidine was incorporated into both forks of the theta structure, not just one, indicating synthesis at both forks in opposite directions around the loop.
References
DNA replication | Theta structure | [
"Chemistry",
"Biology"
] | 148 | [
"Genetics techniques",
"Molecular biology stubs",
"DNA replication",
"Molecular genetics",
"Molecular biology"
] |
2,616,884 | https://en.wikipedia.org/wiki/Robert%20Ulanowicz | Robert Edward Ulanowicz ( ) is an American theoretical ecologist and philosopher of Polish descent who in his search for a unified theory of ecology has formulated a paradigm he calls Process Ecology. He was born September 17, 1943, in Baltimore, Maryland.
He served as professor of theoretical ecology at the University of Maryland Center for Environmental Science's Chesapeake Biological Laboratory in Solomons, Maryland, until his retirement in 2008. Ulanowicz received both his BS and PhD in chemical engineering from Johns Hopkins University in 1964 and 1968, respectively.
Ulanowicz currently resides in Gainesville, Florida, where he holds a courtesy professorship in the Department of Biology at the University of Florida. Since relocating to Florida, Ulanowicz has served as a scientific advisor to the Howard T. Odum Florida Springs Institute, an organization dedicated to the preservation and welfare of Florida's natural springs.
Overview
Ulanowicz uses techniques from information theory and thermodynamics to study the organization of flows of energy and nutrients within ecosystems. Although his ideas have been primarily applied in ecology, many of his concepts are abstract and have been applied to other areas in which flow networks arise, such as psychology and economics.
Though Ulanowicz began his career modeling ecological systems using differential equations, he soon reached the limits of this approach. Realizing that any ecosystem is a complex system, he decided to move away from what he saw as the inappropriate use of the reductionist approach, and instead began to work towards the development of theoretical measures of the ecosystem as a whole, such as ascendency. Gradually, he came to appreciate that ecosystem behavior is not simply a matter of "mechanics with noise," but rather an intricate interplay between opposing tendencies: autocatalytic-like self-organization and entropic decay. This natural conversation could be followed quantitatively using information-theoretic measures applied to networks of trophic processes.
Following Gregory Bateson, Ulanowicz points out how ecology differs significantly from physics in that constraints that are absent play important roles in ecosystem dynamics. He also argues how the homogeneous laws of physics only constrain the behavior of very heterogeneous ecosystems but are incapable by themselves of determining outcomes. He goes so far as to suggest that an entirely new metaphysics, which he calls Process Ecology, is required to understand complex living systems.
One pertinent discovery by Ulanowicz was that ecosystems do not progress to maximum efficiency. Ecosystems that channel too much activity along the most efficient pathways do so at the expense of redundant, less-efficient processes that can function to take over vital activities in the event that the more efficient processes are disrupted. Ecosystems that persist are those that achieve a balance between the mutually exclusive attributes of efficiency and reliability. This result from nature poses a significant challenge to mainstream economics, wherein market efficiency is held to be the sine qua non.
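His signature measure, ascendency, is straightforward to compute from a matrix of flows. A minimal Python sketch of the standard formula (our function name; natural logarithm chosen here, though base and normalization conventions vary in the literature):

import numpy as np

def ascendency(T):
    """Ascendency of a flow network.
    T[i, j] is the flow from compartment i to compartment j.
    A = sum_ij T_ij * log( T_ij * T.. / (T_i. * T._j) ),
    where T.. is the total system throughput and T_i., T._j are
    row and column sums. Zero flows contribute nothing."""
    T = np.asarray(T, dtype=float)
    total = T.sum()
    row = T.sum(axis=1)
    col = T.sum(axis=0)
    A = 0.0
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            if T[i, j] > 0.0:
                A += T[i, j] * np.log(T[i, j] * total / (row[i] * col[j]))
    return A

# Example: a tiny two-compartment flow network
print(ascendency(np.array([[0.0, 10.0],
                           [5.0, 0.0]])))  # positive: flows are organized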
Publications
Ulanowicz has authored or co-authored over two hundred articles in theoretical ecology and related areas of philosophy, especially those dealing with autocatalysis and causality. He has authored three books to date.
A Third Window: Natural Life Beyond Newton and Darwin, Templeton Foundation Press (2009) - a description of Ulanowicz's new metaphysics
Ecology: The Ascendant Perspective, Columbia University Press (1997) - causality in living systems, written for a more general audience
Growth and Development: Ecosystems Phenomenology, Springer (1986) - a more technical exposition of Ulanowicz's early ideas
Palms
While living in Maryland, Ulanowicz took up a hobby of cultivating and casually breeding cold-hardy palm trees; he drew attention for a Windmill palm on Solomons Island that grew taller than the one-story building it was planted outside.
Awards
Ulanowicz was named the recipient of the 2007 Ilya Prigogine Medal for outstanding research in ecological systems. He participated in the Stock Exchange of Visions project in 2007.
Ulanowicz was a featured speaker at the 2009 III STOQ International Conference, entitled "Biological Evolution: Facts and Theories," which discussed the impact of the publication of On the Origin of Species by Charles Darwin.
See also
List of American philosophers
Ascendency
References
External links
Robert E. Ulanowicz's Home Page
Stock Exchange Of Visions: Visions of Robert E. Ulanowicz (Video Interviews)
Alachua County Clerk of Courts
1943 births
American ecologists
21st-century American philosophers
Mathematical ecologists
Thermodynamicists
Johns Hopkins University alumni
University of Maryland, College Park faculty
Living people
Systems ecologists
People from Calvert County, Maryland
American people of Polish descent | Robert Ulanowicz | [
"Physics",
"Chemistry"
] | 935 | [
"Thermodynamics",
"Thermodynamicists"
] |
18,637,421 | https://en.wikipedia.org/wiki/Tensor%20derivative%20%28continuum%20mechanics%29 | The derivatives of scalars, vectors, and second-order tensors with respect to second-order tensors are of considerable use in continuum mechanics. These derivatives are used in the theories of nonlinear elasticity and plasticity, particularly in the design of algorithms for numerical simulations.
The directional derivative provides a systematic way of finding these derivatives.
Derivatives with respect to vectors and second-order tensors
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
Derivatives of scalar valued functions of vectors
Let f(v) be a real-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the vector ∂f/∂v defined through its dot product with any vector u as

(∂f/∂v) · u = Df(v)[u] = [d/dα f(v + αu)]_{α=0}

for all vectors u. The above dot product yields a scalar, and if u is a unit vector it gives the directional derivative of f at v in the direction u; a numerical illustration is given after the list of properties below.
Properties:
If f(v) = f₁(v) + f₂(v) then (∂f/∂v) · u = (∂f₁/∂v + ∂f₂/∂v) · u
If f(v) = f₁(v) f₂(v) then (∂f/∂v) · u = ((∂f₁/∂v) · u) f₂(v) + f₁(v) ((∂f₂/∂v) · u)
If f(v) = f₁(f₂(v)) then (∂f/∂v) · u = (∂f₁/∂f₂) · ((∂f₂/∂v) · u)
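These definitions are easy to verify numerically. A small Python sketch (our helper names) comparing the finite-difference directional derivative of f(v) = v·v with its analytic derivative 2v:

import numpy as np

def directional_derivative(f, v, u, eps=1e-6):
    """Central-difference approximation of d/da f(v + a*u) at a = 0."""
    return (f(v + eps * u) - f(v - eps * u)) / (2.0 * eps)

f = lambda v: np.dot(v, v)   # f(v) = |v|^2, with df/dv = 2v
v = np.array([1.0, 2.0, 3.0])
u = np.array([0.0, 1.0, 0.0])

print(directional_derivative(f, v, u))  # ~4.0
print(np.dot(2.0 * v, u))               # exactly 4.0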
Derivatives of vector valued functions of vectors
Let f(v) be a vector-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the second-order tensor ∂f/∂v defined through its dot product with any vector u as

(∂f/∂v) · u = Df(v)[u] = [d/dα f(v + αu)]_{α=0}

for all vectors u. The above dot product yields a vector, and if u is a unit vector it gives the directional derivative of f at v in the direction u.
Properties:
If f(v) = f₁(v) + f₂(v) then (∂f/∂v) · u = (∂f₁/∂v + ∂f₂/∂v) · u
If f(v) = f₁(v) × f₂(v) then (∂f/∂v) · u = ((∂f₁/∂v) · u) × f₂(v) + f₁(v) × ((∂f₂/∂v) · u)
If f(v) = f₁(f₂(v)) then (∂f/∂v) · u = (∂f₁/∂f₂) · ((∂f₂/∂v) · u)
Derivatives of scalar valued functions of second-order tensors
Let f(S) be a real-valued function of the second-order tensor S. Then the derivative of f(S) with respect to S (or at S) in the direction T is the second-order tensor defined as

(∂f/∂S) : T = Df(S)[T] = [d/dα f(S + αT)]_{α=0}

for all second-order tensors T.
Properties:
If f(S) = f₁(S) + f₂(S) then (∂f/∂S) : T = (∂f₁/∂S + ∂f₂/∂S) : T
If f(S) = f₁(S) f₂(S) then (∂f/∂S) : T = ((∂f₁/∂S) : T) f₂(S) + f₁(S) ((∂f₂/∂S) : T)
If f(S) = f₁(f₂(S)) then (∂f/∂S) : T = (∂f₁/∂f₂) : ((∂f₂/∂S) : T)
Derivatives of tensor valued functions of second-order tensors
Let F(S) be a second-order tensor-valued function of the second-order tensor S. Then the derivative of F(S) with respect to S (or at S) in the direction T is the fourth-order tensor defined as

(∂F/∂S) : T = DF(S)[T] = [d/dα F(S + αT)]_{α=0}

for all second-order tensors T.
Properties:
If F(S) = F₁(S) + F₂(S) then (∂F/∂S) : T = (∂F₁/∂S + ∂F₂/∂S) : T
If F(S) = F₁(S) · F₂(S) then (∂F/∂S) : T = ((∂F₁/∂S) : T) · F₂(S) + F₁(S) · ((∂F₂/∂S) : T)
If F(S) = F₁(F₂(S)) then (∂F/∂S) : T = (∂F₁/∂F₂) : ((∂F₂/∂S) : T)
If f(S) = f₁(F₂(S)) then (∂f/∂S) : T = (∂f₁/∂F₂) : ((∂F₂/∂S) : T)
Gradient of a tensor field
The gradient, ∇T, of a tensor field T(x) in the direction of an arbitrary constant vector c is defined as:

∇T · c = [d/dα T(x + αc)]_{α=0}

The gradient of a tensor field of order n is a tensor field of order n + 1.
Cartesian coordinates
If e₁, e₂, e₃ are the basis vectors in a Cartesian coordinate system, with coordinates of points denoted by (x₁, x₂, x₃), then the gradient of the tensor field T is given by

∇T = (∂T/∂x_i) ⊗ e_i

Since the basis vectors do not vary in a Cartesian coordinate system, we have the following relations for the gradients of a scalar field φ, a vector field v, and a second-order tensor field S:

∇φ = (∂φ/∂x_i) e_i ;  ∇v = (∂v_j/∂x_i) e_j ⊗ e_i ;  ∇S = (∂S_jk/∂x_i) e_j ⊗ e_k ⊗ e_i
Curvilinear coordinates
If g¹, g², g³ are the contravariant basis vectors in a curvilinear coordinate system, with coordinates of points denoted by (ξ¹, ξ², ξ³), then the gradient of the tensor field T is given by (see the references for a proof)

∇T = (∂T/∂ξ^i) ⊗ g^i
From this definition we have the following relations for the gradients of a scalar field φ, a vector field v, and a second-order tensor field S.
where the Christoffel symbol Γ^k_ij is defined using

∂g_i/∂ξ^j = Γ^k_ij g_k
Cylindrical polar coordinates
In cylindrical coordinates, the gradient is given by
Divergence of a tensor field
The divergence of a tensor field T(x) is defined using the recursive relation

(∇ · T) · c = ∇ · (c · T) ;  ∇ · v = tr(∇v)

where c is an arbitrary constant vector and v is a vector field. If T is a tensor field of order n > 1, then the divergence of the field is a tensor of order n − 1.
Cartesian coordinates
In a Cartesian coordinate system we have the following relations for a vector field v and a second-order tensor field S:

∇ · v = ∂v_i/∂x_i = v_i,i ;  ∇ · S = (∂S_ji/∂x_i) e_j = S_ji,i e_j

where tensor index notation for partial derivatives is used in the rightmost expressions. Note that

∇ · S ≠ ∇ · Sᵀ in general.
For a symmetric second-order tensor, the divergence is also often written as

∇ · S = (∂S_ij/∂x_j) e_i

The above expression is sometimes used as the definition of ∇ · S in Cartesian component form (often also written as div S). Note that such a definition is not consistent with the rest of this article (see the section on curvilinear coordinates).
The difference stems from whether the differentiation is performed with respect to the rows or columns of , and is conventional. This is demonstrated by an example. In a Cartesian coordinate system the second order tensor (matrix) is the gradient of a vector function .
The last equation is equivalent to the alternative definition / interpretation
Curvilinear coordinates
In curvilinear coordinates, the divergences of a vector field v and a second-order tensor field are
More generally,
Cylindrical polar coordinates
In cylindrical polar coordinates
Curl of a tensor field
The curl of an order-n > 1 tensor field T(x) is also defined using the recursive relation

(∇ × T) · c = ∇ × (c · T) ;  (∇ × v) · c = ∇ · (v × c)

where c is an arbitrary constant vector and v is a vector field.
Curl of a first-order tensor (vector) field
Consider a vector field v and an arbitrary constant vector c. In index notation, the cross product is given by

v × c = ε_ijk v_j c_k e_i

where ε_ijk is the permutation symbol, otherwise known as the Levi-Civita symbol. Then,

∇ · (v × c) = ε_ijk (∂v_j/∂x_i) c_k = (∇ × v) · c

Therefore,

(∇ × v)_k = ε_ijk ∂v_j/∂x_i
Curl of a second-order tensor field
For a second-order tensor S,

c · S = c_m S_mj e_j

Hence, using the definition of the curl of a first-order tensor field,

[∇ × (c · S)]_k = ε_ijk ∂(c_m S_mj)/∂x_i = ε_ijk (∂S_mj/∂x_i) c_m = [(∇ × S) · c]_k

Therefore, we have

(∇ × S)_km = ε_ijk ∂S_mj/∂x_i
Identities involving the curl of a tensor field
The most commonly used identity involving the curl of a tensor field, T, is

∇ × (∇T) = 0

This identity holds for tensor fields of all orders. For the important case of a second-order tensor, S, this identity implies that

∇ × S = 0 whenever S = ∇v for some vector field v
Derivative of the determinant of a second-order tensor
The derivative of the determinant of a second-order tensor A is given by

∂/∂A det(A) = det(A) (A⁻¹)ᵀ

In an orthonormal basis, the components of A can be written as a matrix A. In that case, the right-hand side corresponds to the cofactors of the matrix.
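A quick numerical sanity check of this formula in Python (our helper names; central differences in the direction of an arbitrary tensor T, using Jacobi's formula det(A) tr(A⁻¹T) for the analytic value):

import numpy as np

def det_directional_derivative(A, T, eps=1e-6):
    """Central-difference approximation of d/da det(A + a*T) at a = 0."""
    return (np.linalg.det(A + eps * T) - np.linalg.det(A - eps * T)) / (2.0 * eps)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

numeric = det_directional_derivative(A, T)
# d(det A)[T] = det(A) * (A^{-T} : T) = det(A) * trace(A^{-1} T)
analytic = np.linalg.det(A) * np.trace(np.linalg.inv(A) @ T)
print(numeric, analytic)  # should agree to ~6 decimal places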
Derivatives of the invariants of a second-order tensor
The principal invariants of a second-order tensor A are

I₁ = tr A
I₂ = (1/2) [(tr A)² − tr(A²)]
I₃ = det A

The derivatives of these three invariants with respect to A are

∂I₁/∂A = 1
∂I₂/∂A = I₁ 1 − Aᵀ
∂I₃/∂A = det(A) (A⁻¹)ᵀ = I₂ 1 − Aᵀ (I₁ 1 − Aᵀ)
Derivative of the second-order identity tensor
Let 1 be the second-order identity tensor. Then the derivative of this tensor with respect to a second-order tensor A is given by

(∂1/∂A) : T = 0

This is because 1 is independent of A.
Derivative of a second-order tensor with respect to itself
Let A be a second-order tensor. Then

(∂A/∂A) : T = [d/dα (A + αT)]_{α=0} = T = I : T

Therefore,

∂A/∂A = I

Here I is the fourth-order identity tensor. In index notation with respect to an orthonormal basis

I_ijkl = δ_ik δ_jl

This result implies that

(∂Aᵀ/∂A) : T = Iᵀ : T = Tᵀ

where

(Iᵀ)_ijkl = δ_il δ_jk

Therefore, if the tensor A is symmetric, then the derivative is also symmetric and we get

∂A/∂A = Iˢ = (1/2) (I + Iᵀ)

where the symmetric fourth-order identity tensor is

(Iˢ)_ijkl = (1/2) (δ_ik δ_jl + δ_il δ_jk)
Derivative of the inverse of a second-order tensor
Let A and T be two second-order tensors; then

∂/∂A (A⁻¹) : T = −A⁻¹ · T · A⁻¹

In index notation with respect to an orthonormal basis

(∂A⁻¹_ij/∂A_kl) T_kl = −A⁻¹_ik T_kl A⁻¹_lj  ⟹  ∂A⁻¹_ij/∂A_kl = −A⁻¹_ik A⁻¹_lj

We also have

∂/∂A (A⁻ᵀ) : T = −A⁻ᵀ · Tᵀ · A⁻ᵀ

In index notation

∂A⁻¹_ji/∂A_kl = −A⁻¹_jk A⁻¹_li

If the tensor A is symmetric then

∂A⁻¹_ij/∂A_kl = −(1/2) (A⁻¹_ik A⁻¹_lj + A⁻¹_il A⁻¹_kj)
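This too is easy to check numerically; a short Python sketch along the same lines as the determinant check above:

import numpy as np

def inv_directional_derivative(A, T, eps=1e-6):
    """Central-difference approximation of d/da inv(A + a*T) at a = 0."""
    return (np.linalg.inv(A + eps * T) - np.linalg.inv(A - eps * T)) / (2.0 * eps)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # keep A well-conditioned
T = rng.standard_normal((3, 3))

numeric = inv_directional_derivative(A, T)
analytic = -np.linalg.inv(A) @ T @ np.linalg.inv(A)
print(np.max(np.abs(numeric - analytic)))  # small, ~1e-9 or below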
Integration by parts
Another important operation related to tensor derivatives in continuum mechanics is integration by parts. The formula for integration by parts can be written, schematically, as

∫_Ω F ⊗ ∇G dΩ = ∫_Γ n ⊗ (F ⊗ G) dΓ − ∫_Ω G ⊗ ∇F dΩ

where F and G are differentiable tensor fields of arbitrary order, n is the unit outward normal to the domain over which the tensor fields are defined, ⊗ represents a generalized tensor product operator, and ∇ is a generalized gradient operator. When F is equal to the identity tensor, we get the divergence theorem

∫_Ω ∇G dΩ = ∫_Γ n ⊗ G dΓ

We can express the formula for integration by parts in Cartesian index notation as

∫_Ω F_ijk... G_lmn...,p dΩ = ∫_Γ n_p F_ijk... G_lmn... dΓ − ∫_Ω G_lmn... F_ijk...,p dΩ

For the special case where the tensor product operation is a contraction of one index and the gradient operation is a divergence, and both F and G are second-order tensors, we have

∫_Ω F · (∇ · G) dΩ = ∫_Γ n · (G · Fᵀ) dΓ − ∫_Ω (∇F) : G dΩ

In index notation,

∫_Ω F_ij G_pj,p dΩ = ∫_Γ n_p F_ij G_pj dΓ − ∫_Ω G_pj F_ij,p dΩ
See also
Covariant derivative
Ricci calculus
References
Solid mechanics
Mechanics | Tensor derivative (continuum mechanics) | [
"Physics",
"Engineering"
] | 1,501 | [
"Solid mechanics",
"Mechanics",
"Mechanical engineering"
] |
18,638,809 | https://en.wikipedia.org/wiki/Degenerate%20semiconductor | A degenerate semiconductor is a semiconductor with such a high level of doping that the material starts to act more like a metal than a semiconductor. Unlike non-degenerate semiconductors, these kinds of semiconductor do not obey the law of mass action, which relates intrinsic carrier concentration with temperature and bandgap.
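For contrast, the mass-action law that non-degenerate semiconductors obey is a standard textbook relation, reproduced here for reference in LaTeX:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Law of mass action for a non-degenerate semiconductor: the product
% of electron (n) and hole (p) concentrations equals the squared
% intrinsic carrier concentration, which depends on temperature T
% and band gap E_g but not on the doping level.
\begin{equation}
  n\,p = n_i^{2}, \qquad
  n_i \propto T^{3/2} \exp\!\left(-\frac{E_g}{2 k_B T}\right).
\end{equation}
\end{document}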
At moderate doping levels, the dopant atoms create individual doping levels that can often be considered as localized states that can donate electrons or holes by thermal promotion (or an optical transition) to the conduction or valence bands respectively. At high enough impurity concentrations, the individual impurity atoms may become close enough neighbors that their doping levels merge into an impurity band and the behavior of such a system ceases to show the typical traits of a semiconductor, e.g. its increase in conductivity with temperature. On the other hand, a degenerate semiconductor still has far fewer charge carriers than a true metal so that its behavior is in many ways intermediary between semiconductor and metal.
Many copper chalcogenides are degenerate p-type semiconductors with relatively large numbers of holes in their valence band. An example is the system LaCuOS1−xSex with Mg doping. It is a wide gap p-type degenerate semiconductor. The hole concentration does not change with temperature, a typical trait of degenerate semiconductors.
Another well known example is indium tin oxide. Because its plasma frequency is in the IR-range, it is a fairly good metallic conductor, but transparent in the visible range of the spectrum.
References
Semiconductor material types
Condensed matter physics | Degenerate semiconductor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 327 | [
"Semiconductor materials",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Semiconductor material types",
"Matter"
] |
10,658,132 | https://en.wikipedia.org/wiki/Abundance%20of%20elements%20in%20Earth%27s%20crust | The abundance of elements in Earth's crust is shown in tabulated form with the estimated crustal abundance for each chemical element shown as mg/kg, or parts per million (ppm) by mass (10,000 ppm = 1%).
Reservoirs
The Earth's crust is one "reservoir" for measurements of abundance. A reservoir is any large body to be studied as unit, like the ocean, atmosphere, mantle or crust. Different reservoirs may have different relative amounts of each element due to different chemical or mechanical processes involved in the creation of the reservoir.
Difficulties in measurement
Estimates of elemental abundance are difficult because (a) the composition of the upper and lower crust are quite different, and (b) the composition of the continental crust can vary drastically by locality. The composition of the Earth changed after its formation due to loss of volatile compounds, melting and recrystallization, selective loss of some elements to the deep interior, and erosion by water.
The lanthanides are especially difficult to measure accurately.
Graphs of abundance vs atomic number
Graphs of abundance against atomic number can reveal patterns relating abundance to stellar nucleosynthesis and geochemistry.
The alternation of abundance between even and odd atomic number is known as the Oddo–Harkins rule. The rarest elements in the crust are not the heaviest, but are rather the siderophile elements (iron-loving) in the Goldschmidt classification of elements. These have been depleted by being relocated deeper into the Earth's core; their abundance in meteoroids is higher. Tellurium and selenium are concentrated as sulfides in the core and have also been depleted by preaccretional sorting in the nebula that caused them to form volatile hydrogen selenide and hydrogen telluride.
List of abundance by element
This table gives the estimated abundance in parts per million by mass of elements in the continental crust; values of the less abundant elements may vary with location by several orders of magnitude.
Colour indicates each element's Goldschmidt classification:
See also
References
Further reading
External links
BookRags, Periodic Table.
World Book Encyclopedia, Exploring Earth.
HyperPhysics, Georgia State University, Abundance of Elements in Earth's Crust.
Eric Scerri, The Periodic Table, Its Story and Its Significance, Oxford University Press, 2007
Structure of the Earth
Properties of chemical elements
Lists of chemical elements
Earth's crust | Abundance of elements in Earth's crust | [
"Chemistry"
] | 490 | [
"Lists of chemical elements",
"Properties of chemical elements"
] |
10,663,351 | https://en.wikipedia.org/wiki/Oligonucleotide%20synthesis | Oligonucleotide synthesis is the chemical synthesis of relatively short fragments of nucleic acids with defined chemical structure (sequence). The technique is extremely useful in current laboratory practice because it provides a rapid and inexpensive access to custom-made oligonucleotides of the desired sequence. Whereas enzymes synthesize DNA and RNA only in a 5' to 3' direction, chemical oligonucleotide synthesis does not have this limitation, although it is most often carried out in the opposite, 3' to 5' direction. Currently, the process is implemented as solid-phase synthesis using phosphoramidite method and phosphoramidite building blocks derived from protected 2'-deoxynucleosides (dA, dC, dG, and T), ribonucleosides (A, C, G, and U), or chemically modified nucleosides, e.g. LNA or BNA.
To obtain the desired oligonucleotide, the building blocks are sequentially coupled to the growing oligonucleotide chain in the order required by the sequence of the product (see Synthetic cycle below). The process has been fully automated since the late 1970s. Upon the completion of the chain assembly, the product is released from the solid phase to solution, deprotected, and collected. The occurrence of side reactions sets practical limits for the length of synthetic oligonucleotides (up to about 200 nucleotide residues) because the number of errors accumulates with the length of the oligonucleotide being synthesized. Products are often isolated by high-performance liquid chromatography (HPLC) to obtain the desired oligonucleotides in high purity. Typically, synthetic oligonucleotides are single-stranded DNA or RNA molecules around 15–25 bases in length.
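The practical length limit follows from simple probability: if each coupling proceeds with stepwise yield p, a chain of n residues survives all n − 1 couplings with probability p^(n−1). A short Python illustration (illustrative numbers, not a process specification):

def full_length_yield(n_bases, coupling_yield=0.99):
    """Fraction of chains expected to reach full length when each of
    the (n_bases - 1) couplings succeeds with the given stepwise yield."""
    return coupling_yield ** (n_bases - 1)

for n in (20, 100, 200):
    print(n, round(full_length_yield(n), 3))
# 20 -> ~0.826, 100 -> ~0.370, 200 -> ~0.135: even at 99% stepwise
# yield, only a small fraction of 200-mers is full-length product,
# which is why long syntheses require careful purification.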
Oligonucleotides find a variety of applications in molecular biology and medicine. They are most commonly used as antisense oligonucleotides, small interfering RNA, primers for DNA sequencing and amplification, probes for detecting complementary DNA or RNA via molecular hybridization, tools for the targeted introduction of mutations and restriction sites, and for the synthesis of artificial genes. An emerging application of oligonucleotide synthesis is the re-creation of viruses from sequence alone, either harmless, such as Phi X 174, or dangerous, such as the 1918 influenza virus or SARS-CoV-2.
History
The evolution of oligonucleotide synthesis saw four major methods of the formation of internucleosidic linkages and has been reviewed in the literature in great detail.
Early work and contemporary H-phosphonate synthesis
In the early 1950s, Alexander Todd's group pioneered H-phosphonate and phosphate triester methods of oligonucleotide synthesis. The reaction of compounds 1 and 2 to form H-phosphonate diester 3 is an H-phosphonate coupling in solution while that of compounds 4 and 5 to give 6 is a phosphotriester coupling (see phosphotriester synthesis below).
Thirty years later, this work inspired, independently, two research groups to adopt the H-phosphonate chemistry to the solid-phase synthesis using nucleoside H-phosphonate monoesters 7 as building blocks and pivaloyl chloride, 2,4,6-triisopropylbenzenesulfonyl chloride (TPS-Cl), and other compounds as activators. The practical implementation of H-phosphonate method resulted in a very short and simple synthetic cycle consisting of only two steps, detritylation and coupling (Scheme 2). Oxidation of internucleosidic H-phosphonate diester linkages in 8 to phosphodiester linkages in 9 with a solution of iodine in aqueous pyridine is carried out at the end of the chain assembly rather than as a step in the synthetic cycle. If desired, the oxidation may be carried out under anhydrous conditions. Alternatively, 8 can be converted to phosphorothioate 10 or phosphoroselenoate 11 (X = Se), or oxidized by CCl4 in the presence of primary or secondary amines to phosphoramidate analogs 12. The method is very convenient in that various types of phosphate modifications (phosphate/phosphorothioate/phosphoramidate) may be introduced to the same oligonucleotide for modulation of its properties.
Most often, H-phosphonate building blocks are protected at the 5'-hydroxy group and at the amino group of nucleic bases A, C, and G in the same manner as phosphoramidite building blocks (see below). However, protection at the amino group is not mandatory.
Phosphodiester synthesis
In the 1950s, Har Gobind Khorana and co-workers developed a phosphodiester method where 3'-O-acetylnucleoside-5'-O-phosphate 2 (Scheme 3) was activated with N,N-dicyclohexylcarbodiimide (DCC) or 4-toluenesulfonyl chloride (Ts-Cl). The activated species were reacted with a 5'-O-protected nucleoside 1 to give a protected dinucleoside monophosphate 3. Upon the removal of 3'-O-acetyl group using base-catalyzed hydrolysis, further chain elongation was carried out. Following this methodology, sets of tri- and tetradeoxyribonucleotides were synthesized and were enzymatically converted to longer oligonucleotides, which allowed elucidation of the genetic code. The major limitation of the phosphodiester method consisted in the formation of pyrophosphate oligomers and oligonucleotides branched at the internucleosidic phosphate. The method seems to be a step back from the more selective chemistry described earlier; however, at that time, most phosphate-protecting groups available now had not yet been introduced. The lack of the convenient protection strategy necessitated taking a retreat to a slower and less selective chemistry to achieve the ultimate goal of the study.
Phosphotriester synthesis
In the 1960s, groups led by R. Letsinger and C. Reese developed a phosphotriester approach. The defining difference from the phosphodiester approach was the protection of the phosphate moiety in the building block 1 (Scheme 4) and in the product 3 with 2-cyanoethyl group. This precluded the formation of oligonucleotides branched at the internucleosidic phosphate. The higher selectivity of the method allowed the use of more efficient coupling agents and catalysts, which dramatically reduced the length of the synthesis. The method, initially developed for the solution-phase synthesis, was also implemented on low-cross-linked "popcorn" polystyrene, and later on controlled pore glass (CPG, see "Solid support material" below), which initiated a massive research effort in solid-phase synthesis of oligonucleotides and eventually led to the automation of the oligonucleotide chain assembly.
Phosphite triester synthesis
In the 1970s, substantially more reactive P(III) derivatives of nucleosides, 3'-O-chlorophosphites, were successfully used for the formation of internucleosidic linkages. This led to the discovery of the phosphite triester methodology. The group led by M. Caruthers took the advantage of less aggressive and more selective 1H-tetrazolidophosphites and implemented the method on solid phase. Very shortly after, the workers from the same group further improved the method by using more stable nucleoside phosphoramidites as building blocks. The use of 2-cyanoethyl phosphite-protecting group in place of a less user-friendly methyl group led to the nucleoside phosphoramidites currently used in oligonucleotide synthesis (see Phosphoramidite building blocks below). Many later improvements to the manufacturing of building blocks, oligonucleotide synthesizers, and synthetic protocols made the phosphoramidite chemistry a very reliable and expedient method of choice for the preparation of synthetic oligonucleotides.
Synthesis by the phosphoramidite method
Building blocks
Nucleoside phosphoramidites
As mentioned above, the naturally occurring nucleotides (nucleoside-3'- or 5'-phosphates) and their phosphodiester analogs are insufficiently reactive to afford an expeditious synthetic preparation of oligonucleotides in high yields. The selectivity and the rate of the formation of internucleosidic linkages is dramatically improved by using 3'-O-(N,N-diisopropyl phosphoramidite) derivatives of nucleosides (nucleoside phosphoramidites) that serve as building blocks in phosphite triester methodology. To prevent undesired side reactions, all other functional groups present in nucleosides have to be rendered unreactive (protected) by attaching protecting groups. Upon the completion of the oligonucleotide chain assembly, all the protecting groups are removed to yield the desired oligonucleotides. Below, the protecting groups currently used in commercially available and most common nucleoside phosphoramidite building blocks are briefly reviewed:
The 5'-hydroxyl group is protected by an acid-labile DMT (4,4'-dimethoxytrityl) group.
Thymine and uracil, nucleic bases of thymidine and uridine, respectively, do not have exocyclic amino groups and hence do not require any protection.
Although the nucleic base of guanosine and 2'-deoxyguanosine does have an exocyclic amino group, its basicity is low to an extent that it does not react with phosphoramidites under the conditions of the coupling reaction. However, a phosphoramidite derived from the N2-unprotected 5'-O-DMT-2'-deoxyguanosine is poorly soluble in acetonitrile, the solvent commonly used in oligonucleotide synthesis. In contrast, the N2-protected versions of the same compound dissolve in acetonitrile well and hence are widely used. Nucleic bases adenine and cytosine bear the exocyclic amino groups reactive with the activated phosphoramidites under the conditions of the coupling reaction. By the use of additional steps in the synthetic cycle or alternative coupling agents and solvent systems, the oligonucleotide chain assembly may be carried out using dA and dC phosphoramidites with unprotected amino groups. However, these approaches currently remain in the research stage. In routine oligonucleotide synthesis, exocyclic amino groups in nucleosides are kept permanently protected over the entire length of the oligonucleotide chain assembly.
The protection of the exocyclic amino groups has to be orthogonal to that of the 5'-hydroxy group because the latter is removed at the end of each synthetic cycle. The simplest to implement, and hence the most widely used, strategy is to install a base-labile protection group on the exocyclic amino groups. Most often, two protection schemes are used.
In the first, the standard and more robust scheme (Figure), Bz (benzoyl) protection is used for A, dA, C, and dC, while G and dG are protected with the isobutyryl group. More recently, the Ac (acetyl) group has been used to protect C and dC, as shown in the figure.
In the second, mild protection scheme, A and dA are protected with isobutyryl or phenoxyacetyl groups (PAC). C and dC bear acetyl protection, and G and dG are protected with 4-isopropylphenoxyacetyl (iPr-PAC) or dimethylformamidino (dmf) groups. Mild protecting groups are removed more readily than the standard protecting groups. However, the phosphoramidites bearing these groups are less stable when stored in solution.
The phosphite group is protected by a base-labile 2-cyanoethyl protecting group. Once a phosphoramidite has been coupled to the solid support-bound oligonucleotide and the phosphite moieties have been converted to the P(V) species, the presence of the phosphate protection is not mandatory for the successful conducting of further coupling reactions.
In RNA synthesis, the 2'-hydroxy group is protected with the TBDMS (t-butyldimethylsilyl) group or with the TOM (tri-iso-propylsilyloxymethyl) group, both being removable by treatment with fluoride ion.
The phosphite moiety also bears a diisopropylamino (iPr2N) group reactive under acidic conditions. Upon activation, the diisopropylamino group leaves to be substituted by the 5'-hydroxy group of the support-bound oligonucleotide (see "Step 2: Coupling" below).
Non-nucleoside phosphoramidites
Non-nucleoside phosphoramidites are the phosphoramidite reagents designed to introduce various functionalities at the termini of synthetic oligonucleotides or between nucleotide residues in the middle of the sequence. In order to be introduced inside the sequence, a non-nucleosidic modifier has to possess at least two hydroxy groups, one of which is often protected with the DMT group while the other bears the reactive phosphoramidite moiety.
Non-nucleosidic phosphoramidites are used to introduce desired groups that are not available in natural nucleosides or that can be introduced more readily using simpler chemical designs. A very short selection of commercial phosphoramidite reagents is shown in the Scheme to demonstrate the available structural and functional diversity. These reagents serve for the attachment of 5'-terminal phosphate (1), NH2 (2), SH (3), aldehydo (4), and carboxylic (5) groups, C≡C triple bonds (6), non-radioactive labels and quenchers (exemplified by 6-FAM amidite 7 for the attachment of fluorescein and dabcyl amidite 8, respectively), hydrophilic and hydrophobic modifiers (exemplified by hexaethyleneglycol amidite 9 and cholesterol amidite 10, respectively), and biotin amidite 11.
Synthesis cycle
Oligonucleotide synthesis is carried out by a stepwise addition of nucleotide residues to the 5'-terminus of the growing chain until the desired sequence is assembled. Each addition is referred to as a synthesis cycle (Scheme 5) and consists of four chemical reactions:
Step 1: De-blocking (detritylation)
The DMT group is removed with a solution of an acid, such as 2% trichloroacetic acid (TCA) or 3% dichloroacetic acid (DCA), in an inert solvent (dichloromethane or toluene). The orange-colored DMT cation formed is washed out; the step results in the solid support-bound oligonucleotide precursor bearing a free 5'-terminal hydroxyl group.
It is worth remembering that conducting detritylation for an extended time or with stronger-than-recommended acid solutions leads to depurination of the solid support-bound oligonucleotide and thus reduces the yield of the desired full-length product.
Step 2: Coupling
A 0.02–0.2 M solution of nucleoside phosphoramidite (or a mixture of several phosphoramidites) in acetonitrile is activated by a 0.2–0.7 M solution of an acidic azole catalyst, 1H-tetrazole, 5-ethylthio-1H-tetrazole, 2-benzylthiotetrazole, 4,5-dicyanoimidazole, or a number of similar compounds. More extensive information on the use of various coupling agents in oligonucleotide synthesis can be found in a recent review. The mixing is usually very brief and occurs in the fluid lines of oligonucleotide synthesizers (see below) while the components are being delivered to the reactors containing the solid support. The activated phosphoramidite in 1.5–20-fold excess over the support-bound material is then brought in contact with the starting solid support (first coupling) or a support-bound oligonucleotide precursor (following couplings) whose 5'-hydroxy group reacts with the activated phosphoramidite moiety of the incoming nucleoside phosphoramidite to form a phosphite triester linkage. The coupling of 2'-deoxynucleoside phosphoramidites is very rapid and requires, on small scale, about 20 s for its completion. In contrast, sterically hindered 2'-O-protected ribonucleoside phosphoramidites require 5–15 min to be coupled in high yields. The reaction is also highly sensitive to the presence of water, particularly when dilute solutions of phosphoramidites are used, and is commonly carried out in anhydrous acetonitrile. Generally, the larger the scale of the synthesis, the lower the excess and the higher the concentration of phosphoramidite used. In contrast, the concentration of the activator is primarily determined by its solubility in acetonitrile and is independent of the scale of the synthesis. Upon completion of the coupling, any unbound reagents and by-products are removed by washing.
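The trade-off between scale, excess, and concentration mentioned above is simple arithmetic. The sketch below, with hypothetical numbers (the function and its parameters are illustrative, not taken from any vendor's protocol), estimates the volume of amidite solution one coupling consumes:

```python
def amidite_volume_ul(scale_umol, fold_excess, conc_mol_per_l):
    """Estimate the volume (uL) of phosphoramidite solution for one coupling.

    scale_umol      -- synthesis scale: umol of support-bound 5'-OH groups
    fold_excess     -- molar excess of amidite over support-bound material
    conc_mol_per_l  -- amidite concentration in mol/L
    """
    # umol divided by (mol/L) gives microliters: 1e-6 mol / (mol/L) = 1e-6 L.
    return scale_umol * fold_excess / conc_mol_per_l

# A 1 umol-scale coupling with a 10-fold excess of 0.1 M amidite uses ~100 uL;
# at much larger scales the same excess becomes impractical, which is why
# large scale syntheses use lower excesses and higher concentrations.
print(amidite_volume_ul(1.0, 10.0, 0.1))  # -> 100.0
```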
Step 3: Capping
The capping step is performed by treating the solid support-bound material with a mixture of acetic anhydride and 1-methylimidazole (or, less often, DMAP) as catalysts and, in the phosphoramidite method, serves two purposes.
After the completion of the coupling reaction, a small percentage of the solid support-bound 5'-OH groups (0.1 to 1%) remains unreacted and needs to be permanently blocked from further chain elongation to prevent the formation of oligonucleotides with an internal base deletion commonly referred to as (n-1) shortmers. The unreacted 5'-hydroxy groups are, to a large extent, acetylated by the capping mixture.
It has also been reported that phosphoramidites activated with 1H-tetrazole react, to a small extent, with the O6 position of guanosine. Upon oxidation with I2/water, this side product, possibly via O6-N7 migration, undergoes depurination. The apurinic sites thus formed are readily cleaved in the course of the final deprotection of the oligonucleotide under the basic conditions (see below) to give two shorter oligonucleotides, thus reducing the yield of the full-length product. The O6 modifications are rapidly removed by treatment with the capping reagent as long as the capping step is performed prior to oxidation with I2/water.
The synthesis of oligonucleotide phosphorothioates (OPS, see below) does not involve the oxidation with I2/water and hence does not suffer from the side reaction described above. On the other hand, if the capping step is performed prior to sulfurization, the solid support may contain residual acetic anhydride and 1-methylimidazole left over from the capping step. The capping mixture interferes with the sulfur transfer reaction, which results in the extensive formation of phosphate triester internucleosidic linkages in place of the desired PS triesters. Therefore, for the synthesis of OPS, it is advisable to conduct the sulfurization step prior to the capping step.
Step 4: Oxidation
The newly formed tricoordinated phosphite triester linkage is not natural and is of limited stability under the conditions of oligonucleotide synthesis. The treatment of the support-bound material with iodine and water in the presence of a weak base (pyridine, lutidine, or collidine) oxidizes the phosphite triester into a tetracoordinated phosphate triester, a protected precursor of the naturally occurring phosphate diester internucleosidic linkage. Oxidation may be carried out under anhydrous conditions using tert-butyl hydroperoxide or, more efficiently, (1S)-(+)-(10-camphorsulfonyl)-oxaziridine (CSO). The step of oxidation may be substituted with a sulfurization step to obtain oligonucleotide phosphorothioates (see Oligonucleotide phosphorothioates and their synthesis below). In the latter case, the sulfurization step is best carried out prior to capping.
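Since the four steps above repeat once per added residue (with the phosphorothioate reordering noted above), the chain assembly is naturally expressed as a loop. The following is a schematic control-flow sketch only, not chemistry or any particular instrument's firmware:

```python
def assemble(sequence_5_to_3, thioate=False):
    """Return the ordered list of chemical steps for one chain assembly.

    The chain grows 3' -> 5', so residues are coupled in reverse order
    of the written 5' -> 3' sequence.
    """
    steps = []
    for base in reversed(sequence_5_to_3):
        steps.append("detritylate")               # Step 1: acid removes 5'-O-DMT
        steps.append(f"couple {base}-amidite")    # Step 2: phosphite triester forms
        if thioate:
            steps.append("sulfurize")             # sulfur transfer replaces oxidation,
            steps.append("cap")                   # and capping follows sulfurization
        else:
            steps.append("cap")                   # Step 3: acetylate unreacted 5'-OH
            steps.append("oxidize")               # Step 4: I2/water -> P(V) triester
    return steps

# Example: a tetramer, 5'-ACGT-3', with an all-phosphorothioate backbone.
print(assemble("ACGT", thioate=True))
```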
Solid supports
In solid-phase synthesis, an oligonucleotide being assembled is covalently bound, via its 3'-terminal hydroxy group, to a solid support material and remains attached to it over the entire course of the chain assembly. The solid support is contained in columns whose dimensions depend on the scale of synthesis and may vary between 0.05 mL and several liters. The overwhelming majority of oligonucleotides are synthesized on small scale ranging from 10 nmol to 1 μmol. More recently, high-throughput oligonucleotide synthesis where the solid support is contained in the wells of multi-well plates (most often, 96 or 384 wells per plate) became a method of choice for parallel synthesis of oligonucleotides on small scale. At the end of the chain assembly, the oligonucleotide is released from the solid support and is eluted from the column or the well.
Solid support material
In contrast to organic solid-phase synthesis and peptide synthesis, the synthesis of oligonucleotides proceeds best on non-swellable or low-swellable solid supports. The two most often used solid-phase materials are controlled pore glass (CPG) and macroporous polystyrene (MPPS).
CPG is commonly defined by its pore size. In oligonucleotide chemistry, pore sizes of 500, 1000, 1500, 2000, and 3000 Å are used to allow the preparation of about 50, 80, 100, 150, and 200-mer oligonucleotides, respectively. To make native CPG suitable for further processing, the surface of the material is treated with (3-aminopropyl)triethoxysilane to give aminopropyl CPG. The aminopropyl arm may be further extended to result in long chain aminoalkyl (LCAA) CPG. The amino group is then used as an anchoring point for linkers suitable for oligonucleotide synthesis (see below).
MPPS suitable for oligonucleotide synthesis is a low-swellable, highly cross-linked polystyrene obtained by polymerization of divinylbenzene (min 60%), styrene, and 4-chloromethylstyrene in the presence of a porogen (pore-forming agent). The macroporous chloromethyl MPPS obtained is converted to aminomethyl MPPS.
Linker chemistry
To make the solid support material suitable for oligonucleotide synthesis, non-nucleosidic linkers or nucleoside succinates are covalently attached to the reactive amino groups in aminopropyl CPG, LCAA CPG, or aminomethyl MPPS. The remaining unreacted amino groups are capped with acetic anhydride. Typically, three conceptually different groups of solid supports are used.
Universal supports. In a more recent, more convenient, and more widely used method, the synthesis starts with a universal support, where a non-nucleosidic linker is attached to the solid support material (compounds 1 and 2). A phosphoramidite corresponding to the 3'-terminal nucleoside residue is coupled to the universal solid support in the first synthetic cycle of oligonucleotide chain assembly using the standard protocols. The chain assembly is then continued until completion, after which the solid support-bound oligonucleotide is deprotected. The characteristic feature of universal solid supports is that the release of the oligonucleotides occurs by the hydrolytic cleavage of a P-O bond that attaches the 3'-O of the 3'-terminal nucleotide residue to the universal linker, as shown in Scheme 6. The critical advantage of this approach is that the same solid support is used irrespective of the sequence of the oligonucleotide to be synthesized. For the complete removal of the linker and the 3'-terminal phosphate from the assembled oligonucleotide, the solid support 1 and several similar solid supports require gaseous ammonia, aqueous ammonium hydroxide, aqueous methylamine, or their mixture, and are commercially available. The solid support 2 requires a solution of ammonia in anhydrous methanol and is also commercially available.
Nucleosidic solid supports. In a historically first and still popular approach, the 3'-hydroxy group of the 3'-terminal nucleoside residue is attached to the solid support via, most often, a 3'-O-succinyl arm, as in compound 3. The oligonucleotide chain assembly starts with the coupling of a phosphoramidite building block corresponding to the nucleotide residue second from the 3'-terminus. The 3'-terminal hydroxy group in oligonucleotides synthesized on nucleosidic solid supports is deprotected under somewhat milder conditions than those applicable to universal solid supports. However, the fact that a nucleosidic solid support has to be selected in a sequence-specific manner reduces the throughput of the entire synthetic process and increases the likelihood of human error.
Special solid supports are used for the attachment of desired functional or reporter groups at the 3'-terminus of synthetic oligonucleotides. For example, the commercial solid support 4 allows the preparation of oligonucleotides bearing a 3'-terminal 3-aminopropyl linker. Similarly to non-nucleosidic phosphoramidites, many other special solid supports designed for the attachment of reactive functional groups, non-radioactive reporter groups, and terminal modifiers (e.g. cholesterol or other hydrophobic tethers) and suited for various applications are commercially available. More detailed information on various solid supports for oligonucleotide synthesis can be found in a recent review.
Oligonucleotide phosphorothioates and their synthesis
Oligonucleotide phosphorothioates (OPS) are modified oligonucleotides where one of the oxygen atoms in the phosphate moiety is replaced by sulfur. Only the phosphorothioates having sulfur at a non-bridging position as shown in figure are widely used and are available commercially. The replacement of the non-bridging oxygen with sulfur creates a new center of chirality at phosphorus. In a simple case of a dinucleotide, this results in the formation of a diastereomeric pair of Sp- and Rp-dinucleoside monophosphorothioates whose structures are shown in Figure. In an n-mer oligonucleotide where all (n – 1) internucleosidic linkages are phosphorothioate linkages, the number of diastereomers m is calculated as m = 2^(n – 1). Being non-natural analogs of nucleic acids, OPS are substantially more stable towards hydrolysis by nucleases, the class of enzymes that destroy nucleic acids by breaking the bridging P-O bond of the phosphodiester moiety. This property determines the use of OPS as antisense oligonucleotides in in vitro and in vivo applications where the extensive exposure to nucleases is inevitable. Similarly, to improve the stability of siRNA, at least one phosphorothioate linkage is often introduced at the 3'-terminus of both sense and antisense strands. In chirally pure OPS, all-Sp diastereomers are more stable to enzymatic degradation than their all-Rp analogs. However, the preparation of chirally pure OPS remains a synthetic challenge. In laboratory practice, mixtures of diastereomers of OPS are commonly used.
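A quick sanity check on this count, one stereocenter per phosphorothioate linkage (the snippet is purely illustrative):

```python
# A fully phosphorothioated n-mer has (n - 1) stereocenters at phosphorus,
# hence m = 2 ** (n - 1) diastereomers.
for n in (2, 10, 20):
    print(n, 2 ** (n - 1))   # 2 -> 2 (the Sp/Rp pair), 10 -> 512, 20 -> 524288
```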
Synthesis of OPS is very similar to that of natural oligonucleotides. The difference is that the oxidation step is replaced by a sulfur transfer reaction (sulfurization) and that the capping step is performed after the sulfurization. Of the many reported reagents capable of efficient sulfur transfer, only three are commercially available:
3-(Dimethylaminomethylidene)amino-3H-1,2,4-dithiazole-3-thione, DDTT (3) provides rapid kinetics of sulfurization and high stability in solution. The reagent is available from several sources.
3H-1,2-benzodithiol-3-one 1,1-dioxide (4), also known as the Beaucage reagent, displays better solubility in acetonitrile and short reaction times. However, the reagent is of limited stability in solution and is less efficient in sulfurizing RNA linkages.
N,N,N′,N′-Tetraethylthiuram disulfide (TETD) is soluble in acetonitrile and is commercially available. However, the sulfurization reaction of an internucleosidic DNA linkage with TETD requires 15 min, which is more than 10 times as slow as that with compounds 3 and 4.
Automation
In the past, oligonucleotide synthesis was carried out manually in solution or on solid phase. The solid phase synthesis was implemented using, as containers for the solid phase, miniature glass columns similar in their shape to low-pressure chromatography columns or syringes equipped with porous filters.
Currently, solid-phase oligonucleotide synthesis is carried out automatically using computer-controlled instruments (oligonucleotide synthesizers) and is technically implemented in column, multi-well plate, and array formats. The column format is best suited for research and large scale applications where a high-throughput is not required. Multi-well plate format is designed specifically for high-throughput synthesis on small scale to satisfy the growing demand of industry and academia for synthetic oligonucleotides.
History of mid to large scale oligonucleotide synthesis
Large scale oligonucleotide synthesizers were often developed by augmenting the capabilities of a preexisting instrument platform. One of the first mid-scale synthesizers appeared in the late 1980s, manufactured by the Biosearch company in Novato, CA (the 8800). This platform was originally designed as a peptide synthesizer and made use of a fluidized bed reactor essential for accommodating the swelling characteristics of the polystyrene supports used in the Merrifield methodology. Oligonucleotide synthesis, in contrast, involved the use of CPG (controlled pore glass), which is a rigid support and is more suited for column reactors as described above. The scale of the 8800 was limited by the flow rate required to fluidize the support. Some novel reactor designs, as well as higher than normal pressures, enabled the 8800 to achieve scales that would prepare 1 mmol of oligonucleotide. In the mid-1990s several companies developed platforms based on semi-preparative and preparative liquid chromatographs. These systems were well suited to a column reactor approach. In most cases all that was required was to increase the number of fluids that could be delivered to the column: oligonucleotide synthesis requires a minimum of 10, while liquid chromatographs usually accommodate 4. This was an easy design task, and some semi-automatic strategies worked without any modifications to the preexisting LC equipment. PerSeptive Biosystems as well as Pharmacia (GE) were two of several companies that developed synthesizers out of liquid chromatographs. Genomic Technologies, Inc. was one of the few companies to develop a large scale oligonucleotide synthesizer that was, from the ground up, an oligonucleotide synthesizer. The initial platform, called the VLSS (very large scale synthesizer), utilized large Pharmacia liquid chromatograph columns as reactors and could synthesize up to 75 mmol of material. Many oligonucleotide synthesis factories designed and manufactured their own custom platforms, and little is known about them because the designs are proprietary. The VLSS design continued to be refined and lives on in the QMaster synthesizer, a scaled-down platform providing milligram to gram amounts of synthetic oligonucleotide.
The current practices of synthesis of chemically modified oligonucleotides on large scale have been recently reviewed.
Synthesis of oligonucleotide microarrays
One may visualize an oligonucleotide microarray as a miniature multi-well plate where the physical dividers between the wells (plastic walls) are intentionally removed. With respect to the chemistry, synthesis of oligonucleotide microarrays differs from conventional oligonucleotide synthesis in two respects:
Oligonucleotides remain permanently attached to the solid phase, which requires the use of linkers that are stable under the conditions of the final deprotection procedure.
The absence of physical dividers between the sites occupied by individual oligonucleotides, the very limited space on the surface of the microarray (one oligonucleotide sequence occupies a 25×25 μm square), and the requirement of high fidelity of oligonucleotide synthesis dictate the use of site-selective 5'-deprotection techniques. In one approach, the removal of the 5'-O-DMT group is effected by electrochemical generation of the acid at the required site(s). Another approach uses the 5'-O-(α-methyl-6-nitropiperonyloxycarbonyl) (MeNPOC) protecting group, which can be removed by irradiation with UV light of 365 nm wavelength.
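To see why site-selective deprotection is what makes array synthesis tractable, consider the scheduling problem: in each round one base is coupled across the whole array, and only the sites whose next required residue matches that base are deprotected beforehand. The sketch below models just this scheduling logic (the sequence direction, site names, and per-round base order are illustrative assumptions, not any instrument's actual protocol):

```python
def schedule(sites, bases="ACGT"):
    """Yield (base, sites_to_deprotect) steps for site-selective synthesis.

    `sites` maps a site name to its target sequence written 3' -> 5',
    i.e. in the order residues are added.
    """
    pos = {name: 0 for name in sites}       # next residue index per site
    while any(pos[n] < len(sites[n]) for n in sites):
        for base in bases:                  # one array-wide coupling per base
            hit = [n for n in sites
                   if pos[n] < len(sites[n]) and sites[n][pos[n]] == base]
            if hit:
                yield base, hit             # deprotect only these sites, then couple
                for n in hit:
                    pos[n] += 1

# Two sites needing different trimers share couplings where their bases align.
for base, where in schedule({"site1": "ACG", "site2": "AAG"}):
    print(base, where)
```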
Post-synthetic processing
After the completion of the chain assembly, the solid support-bound oligonucleotide is fully protected:
The 5'-terminal hydroxy group is protected with the DMT group;
The internucleosidic phosphate or phosphorothioate moieties are protected with 2-cyanoethyl groups;
The exocyclic amino groups in all nucleic bases except for T and U are protected with acyl protecting groups.
To furnish a functional oligonucleotide, all the protecting groups have to be removed. The N-acyl base protection and the 2-cyanoethyl phosphate protection may be, and often are, removed simultaneously by treatment with inorganic bases or amines. However, the applicability of this method is limited by the fact that the cleavage of the 2-cyanoethyl phosphate protection gives rise to acrylonitrile as a side product. Under the strongly basic conditions required for the removal of the N-acyl protection, acrylonitrile is capable of alkylating nucleic bases, primarily at the N3-position of thymine and uracil residues, to give the respective N3-(2-cyanoethyl) adducts via the Michael reaction. The formation of these side products may be avoided by treating the solid support-bound oligonucleotides with solutions of bases in an organic solvent, for instance, with 50% triethylamine in acetonitrile or 10% diethylamine in acetonitrile. This treatment is strongly recommended for medium- and large-scale preparations and is optional for syntheses on small scale, where the concentration of acrylonitrile generated in the deprotection mixture is low.
Regardless of whether the phosphate protecting groups were removed first, the solid support-bound oligonucleotides are deprotected using one of two general approaches.
(1) Most often, the 5'-DMT group is removed at the end of the oligonucleotide chain assembly. The oligonucleotides are then released from the solid phase and deprotected (base and phosphate) by treatment with aqueous ammonium hydroxide, aqueous methylamine, their mixtures, gaseous ammonia or methylamine or, less commonly, solutions of other primary amines or alkalies at ambient or elevated temperature. This removes all remaining protecting groups from 2'-deoxyoligonucleotides, resulting in a reaction mixture containing the desired product. If the oligonucleotide contains any 2'-O-protected ribonucleotide residues, the deprotection protocol includes a second step where the 2'-O-protecting silyl groups are removed by treatment with fluoride ion using various methods. The fully deprotected product is used as is, or the desired oligonucleotide can be purified by a number of methods. Most commonly, the crude product is desalted using ethanol precipitation, size exclusion chromatography, or reverse-phase HPLC. To eliminate unwanted truncation products, the oligonucleotides can be purified via polyacrylamide gel electrophoresis or anion-exchange HPLC followed by desalting.
(2) The second approach is only used when the intended method of purification is reverse-phase HPLC. In this case, the 5'-terminal DMT group that serves as a hydrophobic handle for purification is kept on at the end of the synthesis. The oligonucleotide is deprotected under basic conditions as described above and, upon evaporation, is purified by reverse-phase HPLC. The collected material is then detritylated under aqueous acidic conditions. On small scale (less than 0.01–0.02 mmol), the treatment with 80% aqueous acetic acid for 15–30 min at room temperature is often used followed by evaporation of the reaction mixture to dryness in vacuo. Finally, the product is desalted as described above.
For some applications, additional reporter groups may be attached to an oligonucleotide using a variety of post-synthetic procedures.
Characterization
As with any other organic compound, it is prudent to characterize synthetic oligonucleotides upon their preparation. In more complex cases (research and large scale syntheses), oligonucleotides are characterized after their deprotection and after purification. Although the ultimate approach to characterization is sequencing, a relatively inexpensive and routine procedure, cost considerations preclude its use in the routine manufacturing of oligonucleotides. In day-to-day practice, it is sufficient to obtain the molecular mass of an oligonucleotide by recording its mass spectrum. Two methods are currently widely used for characterization of oligonucleotides: electrospray mass spectrometry (ESI MS) and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF). To obtain informative spectra, it is very important to exchange all metal ions that might be present in the sample for ammonium or trialkylammonium [e.g. triethylammonium, (C2H5)3NH+] ions prior to submitting a sample to analysis by either of the methods.
In ESI MS spectra, a given oligonucleotide generates a set of ions that correspond to different ionization states of the compound. Thus, the oligonucleotide with molecular mass M generates ions with masses (M – nH)/n, where M is the molecular mass of the oligonucleotide in the form of a free acid (all negative charges of internucleosidic phosphodiester groups are neutralized with H+), n is the ionization state, and H is the atomic mass of hydrogen (1 Da). Most useful for characterization are the ions with n ranging from 2 to 5. Software supplied with the more recently manufactured instruments is capable of performing a deconvolution procedure, that is, it finds peaks of ions that belong to the same set and derives the molecular mass of the oligonucleotide.
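The arithmetic behind deconvolution is straightforward and is sketched below with a made-up peak list: each member of a charge-state series independently predicts the neutral (free-acid) mass, and agreement between the predictions both identifies the series and yields M.

```python
H = 1.00794  # atomic mass of hydrogen, Da

def neutral_mass(mz, n):
    """Neutral free-acid mass M implied by an (M - nH)/n ion at the given m/z."""
    return n * mz + n * H

# Hypothetical charge-state series (m/z, n) of a single oligonucleotide:
peaks = [(1512.2, 2), (1007.8, 3), (755.6, 4)]
estimates = [neutral_mass(mz, n) for mz, n in peaks]
print(estimates)                          # all cluster near ~3026.4 Da
print(sum(estimates) / len(estimates))    # averaged mass estimate
```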
To obtain more detailed information on the impurity profile of oligonucleotides, liquid chromatography-mass spectrometry (LC-MS or HPLC-MS) or capillary electrophoresis mass spectrometry (CEMS) are used.
See also
Nucleic acids
Nucleic acid analogues
Peptide nucleic acid
Bridged Nucleic Acids
References
Further reading
Kool, Eric T. (ed.), Comprehensive Natural Products Chemistry, Volume 7: DNA and Aspects of Molecular Biology. Elsevier, Amsterdam, Netherlands (1999), 733 pp.
Beaucage, S. L. "Oligodeoxyribonucleotides synthesis. Phosphoramidite approach." Methods in Molecular Biology (Totowa, NJ, United States) (1993), 20 (Protocols for Oligonucleotides and Analogs), 33–61.
Genetics techniques
Laboratory techniques
Molecular biology techniques | Oligonucleotide synthesis | [
"Chemistry",
"Engineering",
"Biology"
] | 8,997 | [
"Genetics techniques",
"Genetic engineering",
"Molecular biology techniques",
"nan",
"Molecular biology"
] |
22,373,821 | https://en.wikipedia.org/wiki/Reciprocal%20frame | A reciprocal frame is a class of self-supporting structure made of three or more beams and which requires no center support to create roofs, bridges or similar structures.
Construction
Reciprocal roofs tend to be constructed in one of two ways. If built using dimensioned timber, each rafter is usually jointed into the previous one. More commonly, these roofs are constructed with roundwood poles where each rafter is laid upon the previous one. In both of these approaches, the roof is assembled by installing a temporary central support that holds the first rafter at the correct height. The first rafter is fitted between the wall and the temporary central support and then further rafters are added, each resting on the last. The final rafter fits on top of the previous rafter and under the very first one. The rafters are then tied before the temporary support is removed. The structure is most effective at lower pitches where there is minimal spreading force exerted at the ringbeam, most being transferred directly downward. Unless some extra elements are added to create redundancy, the structure is only as strong as the weakest element, as the failure of a single element may lead to the failure of the whole structure.
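As a rough illustration of the plan geometry described above, the sketch below lays out a regular reciprocal frame and computes where each rafter's center line crosses its neighbour's, which is approximately where one member bears on the other in a built roof. This is a simplified 2-D model with made-up dimensions: it ignores member depth, roof pitch, and all structural checks, and the default parameters are chosen so that each crossing falls within both members.

```python
import math

def reciprocal_plan(n=6, wall_radius=3.0, inner_radius=1.0, offset_deg=120.0):
    """Plan-view layout of a regular reciprocal frame (simplified 2-D model).

    Outer rafter ends sit on the wall circle; inner (free) ends sit on a
    smaller circle, rotated by `offset_deg` so neighbouring rafters cross.
    Returns (rafters, supports): endpoint pairs and neighbour crossing points.
    """
    phi = math.radians(offset_deg)
    rafters = []
    for k in range(n):
        a = 2 * math.pi * k / n
        outer = (wall_radius * math.cos(a), wall_radius * math.sin(a))
        inner = (inner_radius * math.cos(a + phi), inner_radius * math.sin(a + phi))
        rafters.append((outer, inner))

    def crossing(seg_a, seg_b):
        # Intersection of the two center lines, in plan view.
        (x1, y1), (x2, y2) = seg_a
        (x3, y3), (x4, y4) = seg_b
        d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
        t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    supports = [crossing(rafters[k], rafters[(k + 1) % n]) for k in range(n)]
    return rafters, supports

rafters, supports = reciprocal_plan()
for (outer, _), pt in zip(rafters, supports):
    print(f"rafter from ({outer[0]:+.2f}, {outer[1]:+.2f}) "
          f"crosses next at ({pt[0]:+.2f}, {pt[1]:+.2f})")
```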
History
The reciprocal frame, also known as a Mandala roof, has been used since the twelfth century in Chinese and Japanese architecture, although little or no trace of these ancient methods remains. More recently they were used by architects Kazuhiro Ishii (the Spinning House) and Yasufumi Kijima, and engineer Yoishi Kan (Kijima Stonemason Museum).
Villard de Honnecourt produced sketches showing similar designs in the 13th century and similar structures were also used in the chapter house of Lincoln Cathedral. Josep Maria Jujol used this structure in both the Casa Bofarull and Casa Negre.
Gallery
References
External links
Design forward
Green Building elements
Reciprocal Frame Architecture by Olga Popovic Larsen
The variety of reciprocal frame (RF) morphologies developed for a medium span assembly building: A case study
Structural system
Timber framing | Reciprocal frame | [
"Technology",
"Engineering"
] | 413 | [
"Structural system",
"Structural engineering",
"Timber framing",
"Building engineering"
] |
22,379,034 | https://en.wikipedia.org/wiki/Synthetic%20Communications | Synthetic Communications is a peer-reviewed scientific journal covering the synthesis of organic compounds. It was established in 1971 and is published by Taylor & Francis.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
EBSCO databases
Embase
ProQuest collections
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.1.
References
External links
Biochemistry journals
Academic journals established in 1971
English-language journals
Taylor & Francis academic journals
Semi-monthly journals | Synthetic Communications | [
"Chemistry"
] | 106 | [
"Biochemistry journals",
"Biochemistry literature"
] |
22,379,115 | https://en.wikipedia.org/wiki/Bioorganic%20%26%20Medicinal%20Chemistry%20Letters | Bioorganic & Medicinal Chemistry Letters is a scientific journal focusing on the results of research on the molecular structure of biological organisms and the interaction of biological targets with chemical agents. It is published by Elsevier, which also publishes Bioorganic & Medicinal Chemistry for longer works.
Biochemistry journals
Elsevier academic journals
English-language journals
Medicinal chemistry journals | Bioorganic & Medicinal Chemistry Letters | [
"Chemistry"
] | 71 | [
"Biochemistry journals",
"Biochemistry journal stubs",
"Medicinal chemistry journals",
"Medicinal chemistry stubs",
"Biochemistry stubs",
"Biochemistry literature",
"Medicinal chemistry"
] |
22,379,335 | https://en.wikipedia.org/wiki/Organic%20Preparations%20and%20Procedures%20International | Organic Preparations and Procedures International is a bimonthly scientific journal focusing on organic chemists engaged in synthesis. Topics include original preparative chemistry in association with the synthesis of organic and organometallic compounds.
Organic chemistry journals
English-language journals | Organic Preparations and Procedures International | [
"Chemistry"
] | 51 | [
"Organic chemistry journals"
] |
1,881,339 | https://en.wikipedia.org/wiki/Exponential%20sheaf%20sequence | In mathematics, the exponential sheaf sequence is a fundamental short exact sequence of sheaves used in complex geometry.
Let M be a complex manifold, and write OM for the sheaf of holomorphic functions on M. Let OM* be the subsheaf consisting of the non-vanishing holomorphic functions. These are both sheaves of abelian groups. The exponential function gives a sheaf homomorphism

\[ \exp \colon \mathcal{O}_M \to \mathcal{O}_M^{*}, \]
because for a holomorphic function f, exp(f) is a non-vanishing holomorphic function, and exp(f + g) = exp(f)exp(g). Its kernel is the sheaf 2πiZ of locally constant functions on M taking the values 2πin, with n an integer. The exponential sheaf sequence is therefore

\[ 0 \to 2\pi i\,\mathbb{Z} \to \mathcal{O}_M \xrightarrow{\exp} \mathcal{O}_M^{*} \to 0. \]
The exponential mapping here is not always a surjective map on sections; this can be seen for example when M is a punctured disk in the complex plane. The exponential map is surjective on the stalks: given a germ g of a holomorphic function at a point P such that g(P) ≠ 0, one can take the logarithm of g in a neighborhood of P. The long exact sequence of sheaf cohomology shows that we have an exact sequence

\[ 0 \to H^0(2\pi i\,\mathbb{Z}|_U) \to H^0(\mathcal{O}_M|_U) \to H^0(\mathcal{O}_M^{*}|_U) \to H^1(2\pi i\,\mathbb{Z}|_U) \to \cdots \]
for any open set U of M. Here H0 means simply the sections over U, and the sheaf cohomology H1(2πiZ|U) is the singular cohomology of U.
One can think of H1(2πiZ|U) as associating an integer to each loop in U. For each section of OM*, the connecting homomorphism to H1(2πiZ|U) gives the winding number for each loop. This homomorphism is therefore a generalized winding number and measures the failure of U to be contractible. In other words, there is a potential topological obstruction to taking a global logarithm of a non-vanishing holomorphic function, something that is always locally possible.
A further consequence of the sequence is the exactness of

\[ H^1(\mathcal{O}_M) \to H^1(\mathcal{O}_M^{*}) \to H^2(2\pi i\,\mathbb{Z}). \]
Here H1(OM*) can be identified with the Picard group of holomorphic line bundles on M. The connecting homomorphism sends a line bundle to its first Chern class.
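For concreteness, writing Pic(M) for H1(OM*) and identifying H2(M, 2πiZ) with H2(M, Z) (a rescaling by 2πi), the relevant stretch of the long exact sequence reads, in a standard formulation:

\[ \cdots \to H^1(M, \mathcal{O}_M) \to \operatorname{Pic}(M) \xrightarrow{\,c_1\,} H^2(M, \mathbb{Z}) \to H^2(M, \mathcal{O}_M) \to \cdots \]

In particular, when H2(M, OM) vanishes, exactness makes c1 surjective, so every integral degree-2 cohomology class is the first Chern class of some holomorphic line bundle on M.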
References
, see especially p. 37 and p. 139
Complex manifolds
Sheaf theory | Exponential sheaf sequence | [
"Mathematics"
] | 492 | [
"Topology",
"Sheaf theory",
"Mathematical structures",
"Category theory"
] |
1,881,464 | https://en.wikipedia.org/wiki/Dental%20composite | Dental composite resins (better referred to as "resin-based composites" or simply "filled resins") are dental cements made of synthetic resins. Synthetic resins evolved as restorative materials since they were insoluble, of good tooth-like appearance, insensitive to dehydration, easy to manipulate and inexpensive. Composite resins are most commonly composed of Bis-GMA and other dimethacrylate monomers (TEGMA, UDMA, HDDMA), a filler material such as silica and in most applications, a photoinitiator. Dimethylglyoxime is also commonly added to achieve certain physical properties such as flow-ability. Further tailoring of physical properties is achieved by formulating unique concentrations of each constituent.
Many studies have compared the lesser longevity of resin-based composite restorations to the longevity of silver-mercury amalgam restorations. Depending on the skill of the dentist, patient characteristics and the type and location of damage, composite restorations can have similar longevity to amalgam restorations. (See Longevity and clinical performance.) In comparison to amalgam, the appearance of resin-based composite restorations is far superior.
Resin-based composites are on the World Health Organization's List of Essential Medicines.
History of use
Traditionally, resin-based composites set by a chemical polymerization reaction between two pastes, one containing an activator (not a tertiary amine, as these cause discolouration) and the other containing an initiator (benzoyl peroxide). To overcome the disadvantages of this method, such as a short working time, light-curing resin composites were introduced in the 1970s. The first light-curing units used ultraviolet light to set the material; however, this method had a limited curing depth and posed a high risk to patients and clinicians. UV light-curing units were therefore later replaced by visible-light-curing systems employing camphorquinone as the photoinitiator.
The Traditional Period
In the late 1960s, composite resins were introduced as an alternative to silicates and unfilled resins, which were frequently used by clinicians at the time. Composite resins displayed superior qualities, in that they had better mechanical properties than silicates and unfilled resins. Composite resins were also seen to be beneficial in that the resin would be presented in paste form and, with convenient pressure or bulk insertion technique, would facilitate clinical handling. The faults with composite resins at this time were poor appearance, poor marginal adaptation, difficulties with polishing, difficulty with adhesion to the tooth surface, and, occasionally, loss of anatomical form.
The Microfilled Period
In 1978, various microfilled systems were introduced into the European market. These composite resins were appealing, in that they were capable of having an extremely smooth surface when finished. These microfilled composite resins also showed a better clinical colour stability and higher resistance to wear than conventional composites, which favoured their tooth tissue-like appearance as well as clinical effectiveness. However, further research showed a progressive weakness in the material over time, leading to micro-cracks and step-like material loss around the composite margin. In 1981, microfilled composites were improved remarkably with regard to marginal retention and adaptation. It was decided, after further research, that this type of composite could be used for most restorations provided the acid etch technique was used and a bonding agent was applied.
The Hybrid Period
Hybrid materials combining resin and glass-ionomer chemistry, more commonly known as resin-modified glass ionomer cements (RMGICs), were introduced in the 1980s. The material consists of a powder containing a radio-opaque fluoroaluminosilicate glass and a photoactive liquid contained in a dark bottle or capsule. The material was introduced because resin composites on their own were not suitable for Class II cavities; RMGICs can be used instead. This mixture of resin and glass ionomer allows the material to be set by light activation (resin), allowing a longer working time. It also has the benefit of the glass ionomer component releasing fluoride, and it has superior adhesive properties. RMGICs are now recommended over traditional GICs for basing cavities. There is a great difference between the early and new hybrid composites.
Initially, resin-based composite restorations in dentistry were very prone to leakage and breakage due to weak compressive strength. In the 1990s and 2000s, such composites were greatly improved and have a compression strength sufficient for use in posterior teeth.
Method and clinical application
Today's composite resins have low polymerization shrinkage and low coefficients of thermal expansion, which allows them to be placed in bulk while maintaining good adaptation to cavity walls. The placement of composite requires meticulous attention to procedure or it may fail prematurely. The tooth must be kept perfectly dry during placement or the resin will likely fail to adhere to the tooth. Composites are placed while still in a soft, dough-like state, but when exposed to light of a certain blue wavelength (typically 470 nm), they polymerize and harden into the solid filling (for more information, see Light activated resin). It is challenging to harden all of the composite, since the light often does not penetrate more than 2–3 mm into the composite. If too thick an amount of composite is placed in the tooth, the composite will remain partially soft, and this soft unpolymerized composite could ultimately lead to leaching of free monomers with potential toxicity and/or leakage of the bonded joint, leading to recurring dental pathology. The dentist should place composite in a deep filling in numerous increments, curing each 2–3 mm section fully before adding the next. In addition, the clinician must be careful to adjust the bite of the composite filling, which can be tricky to do. If the filling is too high, even by a subtle amount, that could lead to chewing sensitivity on the tooth. A properly placed composite is comfortable, of good appearance, strong and durable, and could last 10 years or more.
The most desirable finish surface for a composite resin can be provided by aluminum oxide disks. Classically, Class III composite preparations were required to have retention points placed entirely in dentin. A syringe was used for placing composite resin because the possibility of trapping air in a restoration was minimized. Modern techniques vary, but conventional wisdom states that because there have been great increases in bonding strength due to the use of dentin primers in the late 1990s, physical retention is not needed except for the most extreme of cases. Primers allow the dentin's collagen fibers to be "sandwiched" into the resin, resulting in a superior physical and chemical bond of the filling to the tooth. Indeed, composite usage was highly controversial in the dental field until primer technology was standardized in the mid to late 1990s. The enamel margin of a composite resin preparation should be beveled in order to improve the appearance and expose the ends of the enamel rods for acid attack. The correct technique of enamel etching prior to placement of a composite resin restoration includes etching with 30%-50% phosphoric acid and rinsing thoroughly with water and drying with air only. In preparing a cavity for restoration with composite resin combined with an acid etch technique, all enamel cavosurface angles should be obtuse angles. Contraindications for composite include varnish and zinc oxide-eugenol. Composite resins for Class II restorations were not indicated because of excessive occlusal wear in the 1980s and early 1990s. Modern bonding techniques and the increasing unpopularity of amalgam filling material have made composites more attractive for Class II restorations. Opinions vary, but composite is regarded as having adequate longevity and wear characteristics to be used for permanent Class II restorations. Whether composite materials last as long or have similar leakage and sensitivity properties when compared to Class II amalgam restorations was described as a matter of debate in 2008.
Composition
As with other composite materials, a dental composite typically consists of a resin-based oligomer matrix, such as bisphenol A-glycidyl methacrylate (BISGMA), urethane dimethacrylate (UDMA) or semi-crystalline polyceram (PEX), and an inorganic filler such as silicon dioxide (silica). Without a filler, the resin wears easily, exhibits high shrinkage and is exothermic. Compositions vary widely, with proprietary mixes of resins forming the matrix, as well as engineered filler glasses and glass ceramics. The filler gives the composite greater strength, wear resistance, decreased polymerisation shrinkage, improved translucency, fluorescence and colour, and a reduced exothermic reaction on polymerisation. It also, however, causes the resin composite to become more brittle, with an increased elastic modulus. Glass fillers are found in multiple different compositions, allowing an improvement of the optical and mechanical properties of the material. Ceramic fillers include zirconia-silica and zirconium oxide.
Matrices such as BisHPPP and BBP, contained in the universal adhesive BisGMA, have been demonstrated to increase the cariogenicity of bacteria, leading to the occurrence of secondary caries at the composite-dentin interface. BisHPPP and BBP cause an increase of glycosyltransferase in S. mutans bacteria, which results in increased production of sticky glucans that allow S. mutans to adhere to the tooth. This results in cariogenic biofilms at the interface of composite and tooth. The cariogenic activity of bacteria increases with the concentration of the matrix materials. BisHPPP has furthermore been shown to regulate bacterial genes, making bacteria more cariogenic, thus compromising the longevity of composite restorations. Researchers are highlighting the need for new composite materials to be developed which eliminate the cariogenic products contained in composite resins and universal adhesives.
A coupling agent such as silane is used to enhance the bond between these two components. An initiator package (such as: camphorquinone (CQ), phenylpropanedione (PPD) or lucirin (TPO)) begins the polymerization reaction of the resins when blue light is applied. Various additives can control the rate of reaction.
Filler types and particle size
Resin filler can be made of glasses or ceramics. Glass fillers are usually made of crystalline silica, silicone dioxide, lithium/barium-aluminium glass, and borosilicate glass containing zinc/strontium/lithium. Ceramic fillers are made of zirconia-silica, or zirconium oxide.
Fillers can be further subdivided based on their particle size and shapes such as:
Macrofilled filler
Macrofilled fillers have a particle size ranging from 5 to 10 μm. They have good mechanical strength but poor wear resistance. The final restoration is difficult to polish adequately, leaving rough surfaces, and therefore this type of resin is plaque retentive.
Microfilled filler
Microfilled fillers are made of colloidal silica with a particle size of 0.4 μm. Resin with this type of filler is easier to polish compared to macrofilled. However, its mechanical properties are compromised as filler load is lower than in conventional (only 40-45% by weight). Therefore, it is contraindicated for load-bearing situations, and has poor wear resistance.
Hybrid filler
Hybrid filler contains particles of various sizes with filler load of 75-85% by weight. It was designed to get the benefits of both macrofilled and microfilled fillers. Resins with hybrid filler have reduced thermal expansion and higher mechanical strength. However, it has higher polymerisation shrinkage due to a larger volume of diluent monomer which controls viscosity of resin.
Nanofilled filler
Nanofilled composite has a filler particle size of 20–70 nm. The nanoparticles form nanocluster units and act as a single unit. They have high mechanical strength similar to hybrid material, high wear resistance, and are easily polished. However, nanofilled resins are difficult to adapt to the cavity margins due to the high volume of filler.
Bulk filler
Bulk filler is composed of non-agglomerated silica and zirconia particles. It has nanohybrid particles and a filler load of 77% by weight. It is designed to decrease clinical steps, with the possibility of light curing through 4–5 mm incremental depth, and to reduce stress within the remaining tooth tissue. Unfortunately, it is not as strong in compression and has decreased wear resistance compared to conventional material.
Recently, nanohybrid fillers have seen wide interest.
Advantages
Advantages of composites:
Appearance: The main advantage of a direct dental composite over traditional materials such as amalgam is improved tooth tissue-mimicry. Composites can be in a wide range of tooth colors allowing near invisible restoration of teeth. Composite fillings can be closely matched to the color of existing teeth. Aesthetics are especially critical in anterior teeth region - see Aesthetic anterior composite restorations.
Bonding to tooth structure: Composite fillings micro-mechanically bond to tooth structure. This strengthens the tooth's structure and restores its original physical integrity. The discovery of acid etching (producing enamel irregularities ranging from 5-30 micrometers in depth) of teeth to allow a micro-mechanical bond to the tooth allows good adhesion of the restoration to the tooth. Very high bond strengths to tooth structure, both enamel and dentin, can be achieved with dentin bonding agents.
Tooth-sparing preparation: The fact that composite fillings are glued (bonded) to the tooth means that unlike amalgam fillings, there is no need for the dentist to create retentive features destroying healthy tooth. Unlike amalgam, which just fills a hole and relies on the geometry of the hole to retain the filling, composite materials are bonded to the tooth. In order to achieve the necessary geometry to retain an amalgam filling, the dentist may need to drill out a significant amount of healthy tooth material. In the case of a composite restoration, the geometry of the hole (or "box") is less important because a composite filling bonds to the tooth. Therefore less healthy tooth needs to be removed for a composite restoration.
Less-costly and more conservative alternative to dental crowns: In some situations, a composite restoration may be offered as a less-expensive (though possibly less durable) alternative to a dental crown, which can be a very expensive treatment. Installation of a dental crown usually requires removal of significant healthy tooth material so the crown can fit over or into the natural tooth. Composite restoration conserves more of the natural tooth.
Alternative to tooth removal: As a composite restoration bonds to the tooth and can restore the original physical integrity of a damaged or decayed tooth, in some cases composite restoration can preserve a tooth that might not be salvageable with amalgam restoration. For example, depending on the location and extent of decay, it might not be possible to create a void (a "box") of the geometry necessary to retain an amalgam filling.
Versatility: Composite fillings can be used to repair chipped, broken or worn teeth which would not be repairable using amalgam fillings.
Repairability: In many cases of minor damage to a composite filling, the damage can be easily repaired by adding additional composite. An amalgam filling might require complete replacement.
Longer working time: The light-curing composite allows the on-demand setting and longer working time to some degree for the operator compared to amalgam restoration.
Reduced quantity of mercury released to the environment: Composites avoid mercury environmental contamination associated with dentistry. When amalgam fillings are drilled for height adjustment, repair or replacement, some mercury-containing amalgam is inevitably washed down drains. (See Dental amalgam controversy - Environmental impact) When amalgam fillings are prepared by dentists, improperly disposed excess material may enter landfills or be incinerated. Cremation of bodies containing amalgam fillings releases mercury into the environment. (See Dental amalgam controversy - Cremation)
Reduced mercury exposure for dentists: Preparing new amalgam fillings and drilling into existing amalgam fillings exposes dentists to mercury vapor. Use of composite fillings avoids this risk, unless the procedure also involves removing an existing amalgam filling. A review article found studies indicating that dental work involving mercury may be an occupational hazard with respect to reproductive processes, glioblastoma (brain cancer), renal function changes, allergies and immunotoxicological effects. (See Dental amalgam controversy - Health effects for dentists)
Lack of corrosion: Although corrosion is no longer a major problem with amalgam fillings, resin composites do not corrode at all. (Low-copper amalgams, prevalent before 1963, were more subject to corrosion than modern high-copper amalgams.)
Disadvantages
Composite shrinkage and secondary caries: In the past, composite resins suffered significant shrinkage during curing, which led to an inferior bonding interface. Shrinkage permits microleakage, which, if not caught early, can cause secondary caries (subsequent decay), the most significant dental disadvantage of composite restoration. In a study of 1,748 restorations, the risk of secondary caries in the composite group was 3.5 times the risk in the amalgam group. Good dental hygiene and regular checkups can mitigate this disadvantage. Most microhybrid and nanohybrid composites have a polymerization shrinkage that ranges from 2% to 3.5%. Composite shrinkage can be reduced by altering the molecular and bulk composition of the resin. In the field of dental restorative materials, reduction of composite shrinkage has been achieved with some success. Among the newest materials, silorane resin exhibits lower polymerization shrinkage compared to the dimethacrylates.
Durability: In some situations, composite fillings may not last as long as amalgam fillings under the pressure of chewing, particularly if used for large cavities. (See Longevity and clinical performance, below.)
Chipping: Composite materials can chip off the tooth.
Skill and training required: Successful outcomes in direct composite fillings is related to the skills of the practitioner and technique of placement. For example, a rubber dam is rated as being important for achieving longevity and low fracture rates similar to amalgam in the more demanding proximal Class II cavities.
Need to keep working area in mouth completely dry: The prepared tooth must be completely dry (free of saliva and blood) when the resin material is being applied and cured. Posterior teeth (molars) are difficult to keep dry. Keeping the prepared tooth completely dry can also be difficult for any work involving treatment of cavities at or below the gumline, though techniques have been described to facilitate this.
Time and expense: Due to the sometimes complicated application procedures and the need to keep the prepared tooth absolutely dry, composite restorations may take up to 20 minutes longer than equivalent amalgam restorations. Longer time in the dental chair may test the patience of children, making the procedure more difficult for the dentist. Due to the longer time involved, the fee charged by a dentist for a composite restoration may be higher than for an amalgam restoration.
Costs: Composite restoration cases generally have limited insurance coverage. Some dental insurance plans may provide reimbursement for composite restoration only on front teeth where amalgam restorations would be particularly objectionable on cosmetic grounds. Thus, patients may be required to pay the entire charge for composite restorations on posterior teeth. For example one dental insurer states that most of their plans will pay for resin (i.e. composite) fillings only "on the teeth where their cosmetic benefit is critical: the six front teeth (incisors and cuspids) and on the facial (cheek side) surfaces of the next two teeth (bicuspids)." Even if charges are paid by private insurance or government programs, the higher cost is incorporated in dental insurance premiums or tax rates. In the UK, dental composites are not covered by NHS for the restoration of posterior teeth. Patients, therefore, may require to pay the entire charge of the treatment or have to pay according to the private charge rate.
Direct dental composites
Direct dental composites are placed by the dentist in a clinical setting. Polymerization is typically accomplished with a hand-held curing light that emits specific wavelengths keyed to the initiator and catalyst packages involved. When using a curing light, the light should be held as close to the resin surface as possible, and a shield should be placed between the light tip and the operator's eyes. Curing time should be increased for darker resin shades. Light-cured resins provide a denser restoration than self-cured resins because no mixing is required that might introduce air-bubble porosity.
Direct dental composites can be used for:
Filling cavity preparations
Filling gaps (diastemas) between teeth using a shell-like veneer or
Minor reshaping of teeth
Partial crowns on single teeth
Setting mechanisms of resin composite
Types of setting mechanisms:
Chemical cure (self-cure / dark cure)
Light cure
Dual cure (setting both chemically and by light)
Chemically cured resin composite is a two-paste system (base and catalyst) which starts to set when the base and the catalyst are mixed together.
Light-cured resin composites contain a photo-initiator (e.g. camphorquinone) and an accelerator. The activator present in light-activated composite is diethyl-amino-ethyl-methacrylate (an amine) or a diketone. They interact when exposed to light at wavelengths of 400–500 nm, i.e., the blue region of the visible light spectrum. The composite sets when it is exposed to light energy at a set wavelength of light. Light-cured resin composites are also sensitive to ambient light, and therefore polymerisation can begin before use of the curing light.
Dual cured resin composite contains both photo-initiators and chemical accelerators, allowing the material to set even where there is insufficient light exposure for light curing.
Chemical polymerisation inhibitors (e.g. monomethyl ether of hydroquinone) are added to the resin composite to prevent polymerisation of the material during storage, increasing its shelf life.
Classification of resin composites according to handling characteristics
This classification divides resin composite into three broad categories based on their handling characteristics:
Universal: advocated for general use, oldest subtype of resin composite
Flowable: fluid consistency, used for very small restorations
Packable: stiffer, more viscous material used solely for posterior parts of the mouth
Manufacturers manipulate the handling characteristics by altering the constituents of the material. Generally, the stiffer materials (packable) exhibit a higher filler content whilst fluid materials (flowable) exhibit lower filler loading.
Universal:
This is the traditional presentation of resin composites and performs well in many situations. However, its use is limited in specialised practice where more complex aesthetic treatments are undertaken. Indications include: the restoration of Class I, II, III and IV cavities where aesthetics is not paramount, and the repair of non-carious tooth surface loss (NCTSL) lesions. Contraindications include: restoration of ultraconservative cavities, areas where aesthetics is critical, and cases where insufficient enamel is available for etching.
Flowable:
Flowable composites represent a relatively newer subset of resin-based composite material, dating back to the mid-1990s. Compared to universal composite, flowables have a reduced filler content (37–53%), giving easier handling and lower viscosity but also lower compressive strength, lower wear resistance and greater polymerisation shrinkage. Due to the poorer mechanical properties, flowable composites should be used with caution in high stress-bearing areas. However, due to their favourable wetting properties, they can adapt intimately to enamel and dentine surfaces. Indications include: restoration of small Class I cavities, preventive resin restorations (PRR), fissure sealants, cavity liners, repair of deficient amalgam margins, and Class V (abfraction) lesions caused by NCTSL. Contraindications include: high stress-bearing areas, restoration of large multi-surface cavities, and cases where effective moisture control is unattainable.
Packable:
Packable composites were developed to be used in posterior situations. Unlike flowable composite, they exhibit a higher viscosity, thereby necessitating greater force upon application to 'pack' the material into the prepared cavity. Their handling characteristics are more similar to dental amalgam, in that greater force is required to condense the material into the cavity. Therefore, they can be thought of as 'tooth-coloured amalgam'. The increased viscosity is achieved by a higher filler content (>60% by volume), thereby making the material stiffer and more resistant to fracture, two properties that are ideal for materials to be used in the posterior region of the mouth. The disadvantage of the associated increased filler content is the potential risk of introducing voids along the cavity walls and between each layer of material. In order to seal any marginal deficiencies, the use of a single layer of flowable composite at the base of a cavity has been advocated when undertaking Class II posterior composite restorations with packable composite.
Indirect dental composites
Indirect composite is cured outside the mouth, in a processing unit that is capable of delivering higher intensities and levels of energy than handheld lights can. Indirect composites can have higher filler levels, are cured for longer times and curing shrinkage can be handled in a better way.
As a result, they are less prone to shrinkage stress and marginal gaps and have higher levels and depths of cure than direct composites. For example, an entire crown can be cured in a single process cycle in an extra-oral curing unit, compared to a millimeter layer of a filling.
As a result, full crowns and even bridges (replacing multiple teeth) can be fabricated with these systems.
Indirect dental composites can be used for:
Filling cavities in teeth, as fillings, inlays and/or onlays
Filling gaps (diastemas) between teeth using a shell-like veneer or
Reshaping of teeth
Full or partial crowns on single teeth
Bridges spanning 2-3 teeth
A stronger, tougher and more durable product is expected in principle. But in the case of inlays, not all clinical long-term-studies detect this advantage in clinical practice (see below).
Longevity and clinical performance
Direct composite vs amalgam
Clinical survival of composite restorations placed in posterior teeth is in the range of amalgam restorations, with some studies seeing a slightly lower or slightly higher survival time compared to amalgam restorations.
Improvements in composite technology and application technique make composites a very good alternative to amalgam, while use in large restorations and in cusp capping situations is still debated.
According to a 2012 review article by Demarco et al. covering 34 relevant clinical studies, "90% of the studies indicated that annual failure rates between 1% and 3% can be achieved with Class I and II posterior [rear tooth] composite restorations depending on the definition of failure, and on several factors such as tooth type and location, operator [dentist], and socioeconomic, demographic, and behavioral elements." This compares to a 3% mean annual failure rate reported in a 2004 review article by Manhart et al. for amalgam restorations in posterior stress-bearing cavities.
The Demarco review found that the main reasons cited for failure of posterior composite restorations are secondary caries (i.e. cavities which develop subsequent to the restoration), fracture, and patient behavior, notably bruxism (grinding/clenching). Causes of failure for amalgam restorations reported in the Manhart et al. review also include secondary caries, fracture (of the amalgam and/or the tooth), as well as cervical overhang and marginal ditching. The Demarco et al. review of composite restoration studies noted that patient factors affect longevity of restorations: Compared to patients with generally good dental health, patients with poorer dental health (possibly due to poor dental hygiene, diet, genetics, frequency of dental checkups, etc.) experience higher rates of failure of composite restorations due to subsequent decay. Socioeconomic factors also play a role: "People who had always lived in the poorest stratus [sic] of the population had more restoration failures than those who lived in the richest layer."
The definition of failure applied in clinical studies may affect the reported statistics. Demarco et al. note: "Failed restorations or restorations presenting small defects are routinely treated by replacement by most clinicians. Because of this, for many years, the replacement of defective restorations has been reported as the most common treatment in general dental practice..." Demarco et al. observe that when both repaired and replaced restorations were classified as failures in one study, the Annual Failure Rate was 1.9%. However, when repaired restorations were reclassified as successes instead of failures, the AFR decreased to 0.7%. Reclassifying repairable minor defects as successes rather than failures is justifiable: "When a restoration is replaced, a significant amount of sound tooth structure is removed and the preparation [i.e. hole] is enlarged". Applying the narrower definition of failure would improve the reported longevity of composite restorations: Composite restorations can often be easily repaired or extended without drilling out and replacing the entire filling. Resin composites will adhere to the tooth and to undamaged prior composite material. In contrast, amalgam fillings are held in place by the shape of the void being filled rather than by adhesion. This means that it is often necessary to drill out and replace an entire amalgam restoration rather than add to the remaining amalgam.
Direct vs indirect composites
It might be expected that the costlier indirect technique leads to a higher clinical performance; however, this is not seen in all studies. A study conducted over the course of 11 years reports similar failure rates for direct composite fillings and indirect composite inlays. Another study concludes that although there is a lower failure rate for composite inlays, it is insignificant and in any case too small to justify the additional effort of the indirect technique.
In the case of ceramic inlays, too, a significantly higher survival rate compared to direct composite fillings cannot be detected.
In general, a clear superiority of tooth coloured inlays over composite direct fillings could not be established by the review literature (as of 2013).
See also
Dental bonding
Dental sealants
References
External links
Composite materials
Dental materials
World Health Organization essential medicines | Dental composite | [
"Physics"
] | 6,438 | [
"Materials",
"Composite materials",
"Dental materials",
"Matter"
] |
1,881,728 | https://en.wikipedia.org/wiki/Grating%20light%20valve | The grating light valve (GLV) is a "micro projection" technology that operates using a dynamically adjustable diffraction grating. It competes with other light valve technologies such as Digital Light Processing (DLP) and liquid crystal on silicon (LCoS) for implementation in video projector devices such as rear-projection televisions. The use of microelectromechanical systems (MEMS) in optical applications, which is known as optical MEMS or micro-opto-electro-mechanical structures (MOEMS), has enabled the possibility to combine the mechanical, electrical, and optical components in tiny-scale.
Silicon Light Machines (SLM), in Sunnyvale CA, markets and licenses GLV technology with the capitalised trademarks "Grated Light Valve" and GLV, previously Grating Light Valve. The valve diffracts laser light using an array of tiny movable ribbons mounted on a silicon base. The GLV uses six ribbons as each pixel's diffraction gratings. Electronic signals alter the alignment of the gratings, and this displacement controls the intensity of the diffracted light in a very smooth gradation.
Brief history
The light valve was initially developed at Stanford University, in California, by electrical engineering professor David M. Bloom, along with William C. Banyai, Raj Apte, Francisco Sandejas, and Olav Solgaard, professor in the Stanford Department of Electrical Engineering. In 1994, the start-up company Silicon Light Machines was founded by Bloom to develop and commercialize the technology. Cypress Semiconductor acquired Silicon Light Machines in 2000 and sold the company to Dainippon Screen. Before the acquisition by Dainippon Screen, several marketing articles were published in EETimes, EETimes China, EETimes Taiwan, Electronica Olgi, and Fibre Systems Europe, highlighting Cypress Semiconductor's new MEMS manufacturing capabilities. The company is now wholly owned by Dainippon Screen Manufacturing Co., Ltd.
In July 2000, Sony announced the signing of a technology licensing agreement with SLM for the implementation of GLV technology in laser projectors for large venues, but by 2004 Sony announced the SRX-R110 front projector using its LCoS-based technology SXRD.
SLM then partnered with Evans & Sutherland (E&S). Using GLV technology, E&S developed the E&S Laser Projector, designed for use in domes and planetariums. The E&S Laser Projector was incorporated into the Digistar 3 dome projection system.
Technology
The GLV device is built on a silicon wafer and consists of parallel rows of highly reflective micro-ribbons – ribbons of sizes of a few μm with a top layer of aluminium – suspended above an air gap that is configured such that alternate ribbons (active ribbons are interlaced with static ribbons) can be dynamically actuated. Individual electrical connections to each active ribbon electrode provide for independent actuation.
The ribbons and the substrate are electrically conductive so that the deflection of the ribbon can be controlled in an analog manner: When the voltage of the active ribbons is set to ground potential, all ribbons are undeflected, and the device acts as a mirror, so the incident light returns along the same path. When a voltage is applied between the ribbon and base conductor, an electrical field is generated and deflects the active ribbon downward toward the substrate. This deflection can be as large as one-quarter of a wavelength, hence creating diffraction effects on incident light, which is reflected at an angle different from that of the incident light. The wavelength to diffract is determined by the spatial frequency of the ribbons. As this spatial frequency is determined by the photolithographic mask used to form the GLV device in the CMOS fabrication process, the departure angles can be very accurately controlled, which is useful for optical switching applications.
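The departure angle follows from the standard grating equation. The sketch below is only a rough illustration of that relation, not SLM's actual device geometry; the laser wavelength and grating period are assumed values chosen for the example.

```python
import math

def diffraction_angle_deg(wavelength_nm, period_nm, order=1):
    """Grating equation at normal incidence: sin(theta) = m * lambda / d."""
    s = order * wavelength_nm / period_nm
    if abs(s) > 1:
        raise ValueError("no propagating diffraction order for these values")
    return math.degrees(math.asin(s))

# Assumed, illustrative numbers: a 532 nm laser and a 4000 nm grating
# period (one active plus one static ribbon per period).
print(diffraction_angle_deg(532, 4000))  # about 7.6 degrees
```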
Switching from undeflected to maximum ribbon deflection can occur in 20 nanoseconds, which is a million times faster than conventional LCD display devices and about 1000 times faster than TI's DMD technology. This high speed is achieved thanks to the small size, small mass, and small excursion (of a few hundred nanometers) of the ribbons. In addition, there is no physical contact between moving elements, which gives the GLV a lifetime of as long as 15 years of continuous operation (over 210 billion switching cycles).
Applications
The GLV technology has been applied to various products, from laser-based HDTV sets to computer-to-plate offset printing presses to DWDM components used for wavelength management. Applications of the GLV device in maskless photolithography have also been extensively investigated.
Displays
To build a display system using the GLV device, different approaches can be followed. These range from a simple approach using a single GLV device with white light as a source, giving a monochrome system, to an intermediate approach using a single white source with one GLV device, to a more complex solution using three GLV devices, one for each of the RGB primary sources, which once diffracted require additional optical filters to direct the light onto the screen.
In addition, the light can be diffracted by the GLV device into an eyepiece for a virtual retinal display, or into an optical system for image projection onto a screen (projector and rear-projector).
See also
DLP
Liquid crystal on silicon
References
External links
Silicon Light Machines
Dainippon Screen Manufacturing Co., Ltd.
Sony
Evans & Sutherland
MEKO-European Display Data and Market Research
HDTVExpert
Defence Research and Development Canada
Display technology
Optoelectronics | Grating light valve | [
"Engineering"
] | 1,190 | [
"Electronic engineering",
"Display technology"
] |
1,882,219 | https://en.wikipedia.org/wiki/List%20of%20algebraic%20coding%20theory%20topics | This is a list of algebraic coding theory topics.
Algebraic coding theory | List of algebraic coding theory topics | [
"Mathematics"
] | 15 | [
"Discrete mathematics",
"Coding theory"
] |
1,882,322 | https://en.wikipedia.org/wiki/IARC%20group%201 | IARC group 1 Carcinogens are substances, chemical mixtures, and exposure circumstances which have been classified as carcinogenic to humans by the International Agency for Research on Cancer (IARC). This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent (chemical mixture) may be placed in this category when evidence of carcinogenicity in humans is less than sufficient, but when there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.
This list focuses on the hazard linked to the agents. This means that while carcinogens are capable of causing cancer, it does not take their risk into account, which is the probability of causing a cancer, given the level of exposure to this carcinogen.
The list is up to date as of January 2024.
Agents
Infectious conditions
Viruses
Human immunodeficiency virus type 1 (infection with)
Human T-cell lymphotropic virus type I
Human papillomavirus types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59
Hepatitis B virus (chronic infection with)
Hepatitis C virus (chronic infection with)
Kaposi's sarcoma-associated herpesvirus
Epstein–Barr virus
Bacterium
Helicobacter pylori (infection with)
Worms
Clonorchis sinensis (infection with)
Opisthorchis viverrini (infection with)
Schistosoma haematobium (infection with)
Chemical substances
Acetaldehyde associated with consumption of alcoholic beverages
Acrylonitrile
Aflatoxins
4-Aminobiphenyl
Aristolochic acids, and plants containing them
Arsenic and inorganic arsenic compounds
Asbestos (all forms, including actinolite, amosite, anthophyllite, chrysotile, crocidolite, tremolite)
Azathioprine
Benzene
Benzidine, and dyes metabolized to
Benzo[a]pyrene
Beryllium and beryllium compounds
1,3-Butadiene
1,4-Butanediol dimethanesulfonate (Busulphan, Myleran)
Cadmium and cadmium compounds
Chlornapazine (N,N-Bis(2-chloroethyl)-2-naphthylamine)
Chlorambucil
Bis(chloromethyl)ether
Chloromethyl methyl ether
Chromium(VI) (Hexavalent chromium) compounds
Ciclosporin
Cyclophosphamide
1,2-Dichloropropane
Diethylstilboestrol
Erionite
Ethanol in alcoholic beverages
Ethylene oxide
Etoposide alone, and in combination with cisplatin and bleomycin
Fluoro-edenite fibrous amphibole
Formaldehyde
Gallium arsenide
Lindane
Melphalan
Methoxsalen (8-Methoxypsoralen) plus ultraviolet A radiation
4,4'-Methylenebis(2-chloroaniline) (MOCA)
MOPP and other combined chemotherapy including alkylating agents
Mustard gas (Sulfur mustard)
2-Naphthylamine
Nickel compounds
4-(N-Nitrosomethylamino)-1-(3-pyridyl)-1-butanone (NNK)
N-Nitrosonornicotine (NNN)
2,3,4,7,8-Pentachlorodibenzofuran
3,4,5,3’,4’-Pentachlorobiphenyl (PCB-126)
Pentachlorophenol
Perfluorooctanoic acid (PFOA), evaluated 2023
Polychlorinated biphenyls
Semustine [1-(2-Chloroethyl)-3-(4-methylcyclohexyl)-1-nitrosourea, Methyl-CCNU]
Silica dust, crystalline, in the form of quartz or cristobalite
Tamoxifen
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD)
Thiotepa (1,1',1"-Phosphinothioylidynetrisaziridine)
Treosulfan
Trichloroethylene
o-Toluidine
Vinyl chloride
Radiations and physical agents thereof
Ionizing radiation (all types)
Neutron radiation
Phosphorus-32, as phosphate
Plutonium
Radioiodines, including iodine-131
Nuclear fission products, including strontium-90
Radionuclides, α-particle-emitting, internally deposited
Radionuclides, β-particle-emitting, internally deposited
Radium-224 and its decay products
Radium-226 and its decay products
Radium-228 and its decay products
Radon-222 and its decay products
Solar radiation
Thorium-232 and its decay products
Ultraviolet radiation (wavelengths 100-400 nm, encompassing UVA, UVB, and UVC)
X-ray and gamma radiation
Complex mixtures/agents
Aflatoxins (naturally occurring mixtures of)
Outdoor air pollution
Outdoor air pollution, particulate matter in
Alcoholic beverages
Areca nut, also known as betel nut
Betel quid with or without tobacco
Coal-tar pitch
Coal-tars (see Coal-tar distillation)
Engine exhaust, diesel
Estrogen-progestogen menopausal therapy (combined)
Estrogen-progestogen oral contraceptives (combined)
Estrogen therapy, postmenopausal. NB: There is "evidence suggesting lack of carcinogenicity" for estrogen-only menopausal therapy in humans and colorectal cancer; an inverse association has been observed between estrogen-only menopausal therapy and cancer of the colorectum.
Leather dust
Mineral oils, untreated or mildly treated
Phenacetin, analgesic mixtures containing
Plants containing aristolochic acid
Polychlorinated biphenyls, dioxin-like, with a Toxicity Equivalency Factor (TEF) according to WHO (PCBs 77, 81, 105, 114, 118, 123, 126, 156, 157, 167, 169, 189)
Processed meat (consumption of)
Salted fish, Chinese-style
Shale-oils
Soot, as found in occupational exposure of chimney sweeps
Wood dust
Exposure circumstances
Acheson process, occupational exposure associated with
Acid mists, strong inorganic
Aluminium production
Auramine production
Boot and shoe manufacture and repair (see Leather dust, Benzene)
Chimney sweeping (see Soot)
Coal gasification
Coal, indoor emissions from household combustion of
Coal-tar distillation
Coke production
Firefighter (occupational exposure as a)
Furniture and cabinet making (see Wood dust)
Haematite mining (underground)
Iron and steel founding (occupational exposure during)
Isopropyl alcohol manufacture using strong acids
Magenta production
Opium consumption
Painter (occupational exposure as a)
Paving and roofing with coal-tar pitch (see Coal-tar pitch)
Rubber manufacturing industry
Tobacco, smokeless
Tobacco smoke, second-hand
Tobacco smoking
Ultraviolet-emitting tanning devices
Welding fumes and UV radiation
See also
IARC group 2A
IARC group 2B
IARC group 3
Notes
References
External links
Description of the list of classifications , IARC
List of Classifications (latest version)
List of Classifications by cancer sites with sufficient or limited evidence in humans, Volumes 1 to 124 (Last update: 8 July 2019)
Agents Classified by the IARC Monographs, Volumes 1–123 (Last update: 25 March 2019) | IARC group 1 | [
"Chemistry",
"Environmental_science"
] | 1,603 | [
"Carcinogens",
"Toxicology"
] |
1,882,377 | https://en.wikipedia.org/wiki/IARC%20group%202A | IARC group 2A agents are substances and exposure circumstances that have been classified as probable carcinogens by the International Agency for Research on Cancer (IARC). This designation is applied when there is limited evidence of carcinogenicity in humans, as well as sufficient evidence of carcinogenicity in experimental animals. In some cases, an agent may be classified in this group when there is inadequate evidence of carcinogenicity in humans along with sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent may be classified in this group solely on the basis of limited evidence of carcinogenicity in humans.
This list focuses on the hazard linked to the agents. This means that the carcinogenic agents are capable of causing cancer, but this does not take their risk into account, which is the probability of causing a cancer given the level of exposure to this carcinogenic agent. The list is up to date as of January 2024.
Agents
Substances
Acrolein
Acrylamide
Androgenic (anabolic) steroids
Aniline
Aniline hydrochloride
ortho-Anisidine hydrochloride
Azacitidine
BCNU (Bischloroethyl nitrosourea)
2-Bromopropane
Captafol
Chloral
Chloral hydrate
Chloramphenicol
α-Chlorinated toluenes (benzal chloride, benzotrichloride, benzyl chloride) and benzoyl chloride (combined exposures)
CCNU (1-(2-Chloroethyl)-3-cyclohexyl-1-nitrosourea)
4-Chloro-o-toluidine
Chlorozotocin
Cisplatin
Cobalt(II) salts, soluble
Cyclopenta[c,d]pyrene
Diazinon
Dibenz[a,j]acridine
Dibenz[a,h]anthracene
Dibenzo[a,l]pyrene
Dichloromethane (methylene chloride)
4,4'-Dichlorodiphenyltrichloroethane (DDT)
Diethyl sulfate
Dieldrin, and aldrin metabolized to dieldrin
Dimethylcarbamoyl chloride
Dimethylformamide
1,2-Dimethylhydrazine
Dimethyl sulfate
Doxorubicin
Epichlorohydrin
Ethylene dibromide
Ethyl carbamate (urethane)
N-Ethyl-N-nitrosourea
Glycidol
Glycidyl methacrylate
Glyphosate
Hydrazine
Indium phosphide
2-Amino-3-methylimidazo[4,5-f]quinoline (IQ)
Lead compounds, inorganic
Malathion
5-Methoxypsoralen
Methyleugenol
Methyl methanesulfonate
Mercaptobenzothiazole
MNNG (N-Methyl-N′-nitro-N-nitrosoguanidine)
N-Methyl-N-nitrosourea
Nitrate or nitrite (ingested) under conditions that result in endogenous nitrosation
ortho-Nitroanisole
Nitrogen mustard
1-Nitropyrene
N-Nitrosodiethylamine (DEN)
N-Nitrosodimethylamine (NDMA)
Nitrotoluene
6-Nitrochrysene
Phenacetin
Pioglitazone
Polybrominated biphenyls
Procarbazine hydrochloride
1,3-Propane sultone
Silicon carbide whiskers
Styrene (industrial exposure)
Styrene-7,8-oxide
Teniposide
Tetrabromobisphenol A
3,3',4,4'-Tetrachloroazobenzene
Tetrachloroethylene
Tetrafluoroethylene
1,1,1-Trichloroethane
1,2,3-Trichloropropane
Tris(2,3-dibromopropyl) phosphate
Trivalent antimony
Vinyl bromide
Vinyl fluoride
Pathogens
Malaria (caused by infection with Plasmodium falciparum in holoendemic areas)
Human papillomavirus type 68
Merkel cell polyomavirus (MCV)
Mixtures
Bitumens, occupational exposure to oxidized bitumens and their emissions during roofing
Creosotes (from coal tars)
High-temperature frying, emissions from
Household combustion of biomass fuel (primarily wood), indoor emissions from
Non-arsenical insecticides (occupational exposures in spraying and application of)
Red meat (consumption of)
Mate, hot (see Very hot beverages)
Very hot beverages at above 65 °C (drinking)
Exposure circumstances
Art glass, glass containers and pressed ware (manufacture of)
Carbon electrode manufacture
Cobalt metal with tungsten carbide
Cobalt metal without tungsten carbide or other metal alloys
Hairdresser or barber (occupational exposure as a)
Petroleum refining (occupational exposures in)
Night shift work
See also
IARC group 1
IARC group 2B
IARC group 3
References
External links
Description of the list of classifications , IARC
List of Classifications (latest version)
List of Classifications by cancer sites with sufficient or limited evidence in humans, Volumes 1 to 124 (Last update: 8 July 2019)
| IARC group 2A | [
"Chemistry",
"Environmental_science"
] | 1,167 | [
"Carcinogens",
"Toxicology"
] |
1,883,477 | https://en.wikipedia.org/wiki/Astronomical%20constant | An astronomical constant is any of several physical constants used in astronomy. Formal sets of constants, along with recommended values, have been defined by the International Astronomical Union (IAU) several times: in 1964 and in 1976 (with an update in 1994). In 2009 the IAU adopted a new current set, and recognizing that new observations and techniques continuously provide better values for these constants, they decided to not fix these values, but have the Working Group on Numerical Standards continuously maintain a set of Current Best Estimates. The set of constants is widely reproduced in publications such as the Astronomical Almanac of the United States Naval Observatory and HM Nautical Almanac Office.
Besides the IAU list of units and constants, the International Earth Rotation and Reference Systems Service also defines constants relevant to the orientation and rotation of the Earth in its technical notes.
The IAU system of constants defines a system of astronomical units for length, mass and time (in fact, several such systems), and also includes constants such as the speed of light and the constant of gravitation which allow transformations between astronomical units and SI units. Slightly different values for the constants are obtained depending on the frame of reference used. Values quoted in barycentric dynamical time (TDB) or equivalent time scales such as the Teph of the Jet Propulsion Laboratory ephemerides represent the mean values that would be measured by an observer on the Earth's surface (strictly, on the surface of the geoid) over a long period of time. The IAU also recommends values in SI units, which are the values which would be measured (in proper length and proper time) by an observer at the barycentre of the Solar System: these are obtained by the following transformations:
Astronomical system of units
The astronomical unit of time is a time interval of one day (D) of 86400 seconds. The astronomical unit of mass is the mass of the Sun (S). The astronomical unit of length is that length (A) for which the Gaussian gravitational constant (k) takes the value 0.01720209895 when the units of measurement are the astronomical units of length, mass and time.
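As a consistency check on these definitions, Kepler's third law written in astronomical units, n²a³ = k²M, implies that a body of negligible mass orbiting one solar mass at one astronomical unit has a period of 2π/k days (the Gaussian year). A minimal sketch:

```python
import math

k = 0.01720209895  # Gaussian gravitational constant, in astronomical units

# Kepler's third law in astronomical units: n**2 * a**3 = k**2 * M,
# with n the mean motion in radians per day. For a = 1 A and M = 1 S:
n = k
period_days = 2 * math.pi / n
print(period_days)  # ~365.2569 days, the Gaussian year
```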
Table of astronomical constants
Notes
* The theories of precession and nutation have advanced since 1976, and these also affect the definition of the ecliptic. The values here are appropriate for the older theories, but additional constants are required for current models.
† The definitions of these derived constants have been taken from the references cited, but the values have been recalculated to take account of the more precise values of the primary constants cited in the table.
References
"2009 Selected Astronomical Constants" in .
"2015 Selected Astronomical Constants
External links
Tables of constants — from Nick Strobel's Astronomy Notes
Astronomical constants index — James Q. Jacobs
Physical constants
Constants
Constant | Astronomical constant | [
"Physics",
"Astronomy",
"Mathematics"
] | 585 | [
"Units of measurement",
"Physical quantities",
"Quantity",
"Units of measurement in astronomy",
"Astrophysics",
"Physical constants",
"Astronomical sub-disciplines"
] |
1,883,888 | https://en.wikipedia.org/wiki/Live%20steam | Live steam is steam under pressure, obtained by heating water in a boiler. The steam may be used to operate stationary or moving equipment.
A live steam machine or device is one powered by steam, but the term is usually reserved for those that are replicas, scale models, toys, or otherwise used for heritage, museum, entertainment, or recreational purposes, to distinguish them from similar devices powered by electricity, internal combustion, or some other more convenient method but designed to look as if they are steam-powered. Revenue-earning steam-powered machines such as mainline and narrow gauge steam locomotives, full-sized steamships, and the worldwide electric power-generating industry steam turbines are not normally referred to as "live steam".
Steamrollers and traction engines are popular, in 1:4 or 1:3 scale, as are model stationary steam engines, ranging from pocket-size to 1:2 scale.
Railroads or railways
Ridable, large-scale live steam railroading on a backyard railroad is a popular aspect of the live steam hobby, but it is time-consuming to build a locomotive from scratch and it can be costly to purchase one already built. Garden railways, in smaller scales (that cannot pull a "live" person nor be ridden on), offer the benefits of real steam engines (and at lower cost and in less space), but do not provide the same experience as operating one's own locomotive in the larger scales and riding on (or behind) it, while doing so.
One of the most famous live steam railroads was Walt Disney's Carolwood Pacific Railroad around his California home; it later inspired Walt Disney to surround his planned Disneyland amusement park with a working, narrow gauge railroad.
The live steam hobby is especially popular in the UK, US, New Zealand, Australia, and Japan. All over the world, there are hundreds of clubs and associations as well as many thousands of private backyard railroads. The world's largest live steam layout, with over of trackage is Train Mountain Railroad in Chiloquin, Oregon. Other notable layouts are operated by the Los Angeles Live Steamers Railroad Museum and the Riverside Live Steamers.
Scale
A live steam locomotive is often an exact, hand-crafted scale model. Live steam railroad scales are generally referred to by the number of inches of scale per foot. For example, a 1:8 scale locomotive will often be referred to as a 1½" scale locomotive. Common modelling scales are Gauge 1 (1:32 scale), 1/2" (1:24 scale), 3/4" (1:16), 1" (1:12), 1½" (1:8), 2½" (~1:5) and 3" (1:4).
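The inches-per-foot designation is simple arithmetic: the ratio figure is 12 divided by the stated number of inches. A minimal sketch of the conversion (the loop values are just the designations listed above):

```python
def scale_ratio(inches_per_foot):
    """Convert an 'inches of scale per foot' designation to a 1:n ratio."""
    return 12 / inches_per_foot

for label in (0.5, 0.75, 1.0, 1.5, 2.5, 3.0):
    print(f'{label}" scale = 1:{scale_ratio(label):g}')
# 1.5" gives 1:8 and 3" gives 1:4, matching the designations above.
```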
Track gauge
Track gauge refers to the distance between the rails. The ridable track gauges range from to , the most popular being , , , and (see Rail transport modelling scales). Some live steam club layouts use dual or even triple gauge tracks. Gauges from and up are called "Miniature Railways" (in the US these are known as "Grand Scale Railroads"), and are used mostly in amusement park rides and commercial settings.
Often the gauge has little to do with the scale of a locomotive since larger equipment can be built in a narrow gauge railway configuration. For instance, scales of 1.5, 1.6, 2.5, and 3 inches per foot (corresponding to scales of 1:8 to 1:4) have been used on a track gauge.
The generally accepted smallest gauge for a live steam locomotive is O scale. Producing smaller-scale models remains problematic, as the laws of physics do not themselves scale: creating a small-scale boiler that produces useful quantities of steam requires careful engineering. Hornby Railways has produced commercial live steam-powered locomotives in OO scale by utilising an electrically heated boiler mounted in the tender, with cylinders in the locomotive, and control provided by electrical signals fed through the track from a remote control unit. They are less mechanically realistic than models in larger scales; the visible valve gear is a dummy, as on the electric-motor-powered models, and steam admission to the cylinders is controlled by a rotary-valve servo inside the boiler casing, which is also a dummy. Nevertheless, the locomotive is driven by steam that is created on board the locomotive and is hence a genuine steam locomotive.
It is technically possible to build even smaller operating steam engines. Hand-made examples, as small as Z scale (1:220), with a gauge of only , have been produced. These are fired with a butane flame from a burner in the engine's tender. AA Sherwood of Australia, an engineering lecturer, produced some miniature scale model live steam engines in the late 1960s and early 1970s. His smallest live steam engines were 1:240 scale which is smaller than the 1:220 of Z Scale. The smallest scale Sherwood worked in was 1:480, though that was not live steam.
Technology
A wide variety of boiler designs are available, ranging from simple externally fired pot boilers to sophisticated multi-flue internally fired boilers and even superheater boilers usually found only on larger, more complex models.
For basic locomotive models, a simple valve gear can be used, with the reversing (if any) performed by a valve, or by using a "slip" eccentric.
More complex locomotive models can use valve gear similar to real steam machine with the reversing done mechanically, most frequently the Walschaerts type.
Fuels
There are several common fuels used to boil water in live steam models:
Hexamine fuel tablets – which produce relatively little heat but are cheap and relatively safe. They are often used on "toy" live steam locomotives and engines, such as the newer models in the range produced by Mamod.
Methylated spirit, (methanol/ethanol mixture) – which burns hotter than solid fuel, but, as with any flammable liquid, requires more careful handling. Cheap and easy to obtain, this fuel was used with early Mamod models.
Butane gas – clean burning and safe, but relatively expensive to engineer the burners.
Electricity – delivered via the track and used to boil the water with an immersion heater. In 2003, Hornby launched a range of 00 gauge models that run from a 10 to 17 Volt power supply, making this method safer than previous, higher voltage versions.
Coal – which is the prototypical fuel for most full-sized steam locomotives, and the preferred fuel for ridable trains. It can also be used in boilers down to at least 16mm:1 foot scale.
Oil – also a popular fuel for large, ridable trains.
Propane gas – an alternative to coal or oil in large-scale models. Propane has also been used successfully as fuel in smaller 1:48 scale live steam locomotives produced most notably by Roland Neff.
Road vehicles
Live steam road vehicles are popular with model engineers because they are not restricted to running on tracks or water and can be easily transported to rallies and exhibitions. They include traction engines & steam rollers, showman's engines, wagons, cars, road-making & agricultural machines, often seen with ancillary equipment like threshing machines.
Boats and ships
Most types of boats and ships that were powered by steam in real life can be found as live steam ship models. These include, amongst others, speed boats, launches, tugboats, ocean liners, warships, paddle steamers and freight carriers. A specialized type is the tethered hydroplane. When steam-powered, these often have flash boilers.
Stationary engines
Stationary engines tend to be less popular with modelers than mobile engines; probably because they are less easily transportable. They are more popular with toy makers. They can be anything from small farm engines to winding engines and mill engines.
Toys
In the late 19th and early-mid 20th centuries, live steam toys were extremely popular, with some large manufacturers like Bing selling hundreds of different models in large quantities. There were very many smaller manufacturers all over the world. Some of these, like Mamod, Wilesco and Jensen, are still in business making live steam toys, although they are now mainly marketed as collectables and novelties rather than toys. Toys tend to be less accurate representations of real life equipment than are models and many are somewhat generic in nature. The range includes all those seen as models and some of a purely novelty kind.
Festivals
A live steam festival (often called a "Steam Fair" in the UK and a live steam "meet" in the US) is a gathering of people interested in steam engine technology. Locomotives, trains, traction engines, steam wagons, steam rollers, showman's engines and tractors, steam boats and cars, and stationary steam engines may be on display, both full-sized and in miniature. Rides may also be offered.
Publications
There are several magazines devoted to the live steam hobby:
"Model Engineer" is an English publication that is published twice a month and was founded in 1898. Most locomotive articles have an "English" flavour popular in the UK, Australia, New Zealand and South Africa. The magazine is aimed at constructors but also covers non-steam and non-rail model engineering interests.
"Live Steam & Outdoor Railroading" is a U.S. magazine, founded in 1966 and devoted to the live steam hobby, as well as to other uses of miniature and full-size steam. Originally, it was a mimeographed newsletter, but soon expanded into magazine format. In 2005, the name was changed from "Live Steam". It is currently published bi-monthly, with a press run of slightly over 10,000 (Dec. 2004).
The now-defunct publication "The Home Railway Journal" (launched in 2006) was specifically aimed at enthusiasts with ride-on railways (although not just steam-powered) in North America. It was published quarterly in Sacramento, California, US.
See also
Carpet railway – the 'Birmingham Dribbler' locomotives were a very early form of live steam toy
List of steam fairs
Live Steam & Outdoor Railroading (magazine)
Model engineering
Model steam engine
Railway modelling
Rail transport modelling scales
Saturated steam
Superheated steam
Ridable miniature railway
Backyard railroad
References
Rail transport modelling
Amusement rides based on rail transport
Steam power | Live steam | [
"Physics"
] | 2,108 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
1,884,226 | https://en.wikipedia.org/wiki/Spectrofluorometer | A spectrofluorometer is an instrument which takes advantage of fluorescent properties of some compounds in order to provide information regarding their concentration and chemical environment in a sample. A certain excitation wavelength is selected, and the emission is observed either at a single wavelength, or a scan is performed to record the intensity versus wavelength, also called an emission spectrum. The instrument is used in fluorescence spectroscopy.
Operation
Generally, spectrofluorometers use high intensity light sources to bombard a sample with as many photons as possible. This allows for the maximum number of molecules to be in an excited state at any one point in time. The light is either passed through a filter, selecting a fixed wavelength, or a monochromator, which allows a wavelength of interest to be selected for use as the exciting light. The emission is collected perpendicular to the excitation light. The emission is also either passed through a filter or a monochromator before being detected by a photomultiplier tube, photodiode, or charge-coupled device detector. The signal can either be processed as digital or analog output.
Systems vary greatly and a number of considerations affect the choice. The first is the signal-to-noise ratio. There are many ways to assess the signal to noise of a given system, but the accepted standard is to use the Raman signal of water. Sensitivity, or detection limit (that is, how little light can be measured), is another specification to be considered. The standard test uses fluorescein in NaOH; typical values for a high-end instrument are in the femtomolar range.
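A sketch of how a water Raman signal-to-noise figure might be computed from recorded counts. The counts below are invented for illustration, and the exact protocol (excitation wavelength, choice of baseline region) varies by manufacturer.

```python
import statistics

def raman_snr(peak_counts, baseline_counts):
    """Height of the water Raman peak above the mean baseline,
    divided by the RMS noise of the baseline region."""
    base = statistics.mean(baseline_counts)
    noise = statistics.pstdev(baseline_counts)
    return (peak_counts - base) / noise

# Invented example counts, for illustration only:
print(raman_snr(12000, [480, 510, 495, 502, 488, 505]))
```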
Auxiliary components
These systems come with many options, including:
Polarizers
Peltier temperature controllers
Cryostats
Cold Finger Dewars
Pulsed lasers for lifetime measurements
LEDs for lifetimes
Filter holders
Adjustable optics (very important)
Solid sample holders
Slide holders
Integrating spheres
Near-infrared detectors
Bilateral slits
Manual slits
Computer controlled slits
Fast switching monochromators
Filter wheels
References
Laboratory equipment
Spectrometers | Spectrofluorometer | [
"Physics",
"Chemistry"
] | 408 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
1,885,649 | https://en.wikipedia.org/wiki/RNA%20polymerase%20II | RNA polymerase II (RNAP II and Pol II) is a multiprotein complex that transcribes DNA into precursors of messenger RNA (mRNA) and most small nuclear RNA (snRNA) and microRNA. It is one of the three RNAP enzymes found in the nucleus of eukaryotic cells. A 550 kDa complex of 12 subunits, RNAP II is the most studied type of RNA polymerase. A wide range of transcription factors are required for it to bind to upstream gene promoters and begin transcription.
Discovery
Early studies suggested a minimum of two RNAPs: one which synthesized rRNA in the nucleolus, and one which synthesized other RNA in the nucleoplasm, part of the nucleus but outside the nucleolus. In 1969, biochemists Robert G. Roeder and William Rutter discovered that there are a total of three distinct nuclear RNA polymerases, including an additional RNAP responsible for transcription of some kinds of RNA in the nucleoplasm. The finding was obtained by the use of ion-exchange chromatography via DEAE-coated Sephadex beads. The technique separated the enzymes in the order of their elutions, I, II, III, by increasing the concentration of ammonium sulfate. The enzymes were named according to the order of the elutions: RNAP I, RNAP II, RNAP III. This discovery demonstrated that there was an additional enzyme present in the nucleoplasm, which allowed for the differentiation between RNAP II and RNAP III.
RNA polymerase II (RNAP2) undergoes regulated transcriptional pausing during early elongation. Various studies have shown that disruption of transcription elongation is implicated in cancer, neurodegeneration, HIV latency, etc.
Subunits
The eukaryotic core RNA polymerase II was first purified using transcription assays. The purified enzyme has typically 10–12 subunits (12 in humans and yeast) and is incapable of specific promoter recognition. Many subunit-subunit interactions are known.
DNA-directed RNA polymerase II subunit RPB1 – an enzyme that in humans is encoded by the POLR2A gene and in yeast is encoded by RPO21. RPB1 is the largest subunit of RNA polymerase II. It contains a carboxy terminal domain (CTD) composed of up to 52 heptapeptide repeats (YSPTSPS) that are essential for polymerase activity. The CTD was first discovered in the laboratory of C.J. Ingles at the University of Toronto and by JL Corden at Johns Hopkins University. In combination with several other polymerase subunits, the RPB1 subunit forms the DNA binding domain of the polymerase, a groove in which the DNA template is transcribed into RNA. It strongly interacts with RPB8.
RPB2 (POLR2B) – the second-largest subunit that in combination with at least two other polymerase subunits forms a structure within the polymerase that maintains contact in the active site of the enzyme between the DNA template and the newly synthesized RNA.
RPB3 (POLR2C) – the third-largest subunit. Exists as a heterodimer with another polymerase subunit, POLR2J forming a core subassembly. RPB3 strongly interacts with RPB1-5, 7, 10–12.
RNA polymerase II subunit B4 (RPB4) – encoded by the POLR2D gene is the fourth-largest subunit and may have a stress protective role.
RPB5 – In humans is encoded by the POLR2E gene. Two molecules of this subunit are present in each RNA polymerase II. RPB5 strongly interacts with RPB1, RPB3, and RPB6.
RPB6 (POLR2F) – forms a structure with at least two other subunits that stabilizes the transcribing polymerase on the DNA template.
RPB7 – encoded by POLR2G and may play a role in regulating polymerase function. RPB7 interacts strongly with RPB1 and RPB5.
RPB8 (POLR2H) – interacts with subunits RPB1-3, 5, and 7.
RPB9 – The groove in which the DNA template is transcribed into RNA is composed of RPB9 (POLR2I) and RPB1.
RPB10 – the product of gene POLR2L. It interacts with RPB1-3 and 5, and strongly with RPB3.
RPB11 – the RPB11 subunit is itself composed of three subunits in humans: POLR2J (RPB11-a), POLR2J2 (RPB11-b), and POLR2J3 (RPB11-c).
RPB12 – Also interacts with RPB3 is RPB12 (POLR2K).
Assembly
RPB3 is involved in RNA polymerase II assembly. A subcomplex of RPB2 and RPB3 appears soon after subunit synthesis. This complex subsequently interacts with RPB1. RPB3, RPB5, and RPB7 interact with themselves to form homodimers, and RPB3 and RPB5 together are able to contact all of the other RPB subunits, except RPB9. Only RPB1 strongly binds to RPB5. The RPB1 subunit also contacts RPB7, RPB10, and more weakly but most efficiently with RPB8. Once RPB1 enters the complex, other subunits such as RPB5 and RPB7 can enter, where RPB5 binds to RPB6 and RPB8 and RPB3 brings in RPB10, RPB11, and RPB12. RPB4 and RPB9 may enter once most of the complex is assembled. RPB4 forms a complex with RPB7.
Kinetics
Enzymes can catalyze up to several million reactions per second. Enzyme rates depend on solution conditions and substrate concentration. Like other enzymes, POLR2 has a saturation curve and a maximum velocity (Vmax). It has a Km (substrate concentration required for one-half Vmax) and a kcat (the number of substrate molecules handled by one active site per second). The specificity constant is given by kcat/Km. The theoretical maximum for the specificity constant is the diffusion limit of about 10⁸ to 10⁹ M⁻¹s⁻¹, where every collision of the enzyme with its substrate results in catalysis. In yeast, mutation in the Trigger-Loop domain of the largest subunit can change the kinetics of the enzyme.
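These quantities follow from the standard Michaelis–Menten relations. The sketch below uses invented constants, not measured values for RNA polymerase II, purely to show how the numbers combine:

```python
def mm_velocity(s, vmax, km):
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def specificity_constant(kcat, km):
    """kcat / Km; the diffusion limit is roughly 1e8 to 1e9 per molar per second."""
    return kcat / km

# Invented, illustrative constants (not measurements for RNA polymerase II):
vmax, km, kcat = 10.0, 2.0e-6, 25.0
print(mm_velocity(2.0e-6, vmax, km))    # half of Vmax when [S] equals Km
print(specificity_constant(kcat, km))   # 1.25e7, well below the diffusion limit
```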
Bacterial RNA polymerase, a relative of RNA polymerase II, switches between inactivated and activated states by translocating back and forth along the DNA. Concentrations of [NTP]eq = 10 μM GTP, 10 μM UTP, 5 μM ATP and 2.5 μM CTP produce a mean elongation rate (turnover number) of ~1 bp (NTP)⁻¹ for bacterial RNAP.
RNA polymerase II undergoes extensive co-transcriptional pausing during transcription elongation. This pausing is especially pronounced at nucleosomes, and arises in part through the polymerase entering a transcriptionally incompetent backtracked state. The duration of these pauses ranges from seconds to minutes or longer, and exit from long-lived pauses can be promoted by elongation factors such as TFIIS. In turn, the transcription rate influences whether the histones of transcribed nucleosomes are evicted from chromatin, or reinserted behind the transcribing polymerase.
Alpha-Amanitin
RNA polymerase II is inhibited by α-Amanitin and other amatoxins. α-Amanitin is a highly poisonous substance found in many mushrooms. The mushroom poison has different effects on each of the RNA Polymerases: I, II, III. RNAP I is completely unresponsive to the substance and will function normally while RNAP III has a moderate sensitivity. RNAP II, however, is completely inhibited by the toxin. Alpha-Amanitin inhibits RNAP II by strong interactions in the enzyme's "funnel", "cleft", and the key "bridge α-helix" regions of the RPB-1 subunit.
Holoenzyme
RNA polymerase II holoenzyme is a form of eukaryotic RNA polymerase II that is recruited to the promoters of protein-coding genes in living cells. It consists of RNA polymerase II, a subset of general transcription factors, and regulatory proteins known as SRB proteins.
Part of the assembly of the holoenzyme is referred to as the preinitiation complex, because its assembly takes place on the gene promoter before the initiation of transcription. The mediator complex acts as a bridge between RNA polymerase II and the transcription factors.
Control by chromatin structure
This is an outline of an example mechanism in yeast cells by which chromatin structure and histone post-translational modification help regulate and record the transcription of genes by RNA polymerase II.
This pathway gives examples of regulation at these points of transcription:
Pre-initiation (promotion by Bre1, histone modification)
Initiation (promotion by TFIIH, Pol II modification and promotion by COMPASS, histone modification)
Elongation (promotion by Set2, Histone Modification)
This refers to various stages of the process as regulatory steps. It has not been proven that they are used for regulation, but it is very likely they are.
RNA Pol II elongation promoters can be summarised in 3 classes.
Drug/sequence-dependent arrest-affected factors (Various interfering proteins)
Chromatin structure-oriented factors (Histone posttranscriptional modifiers, e.g., Histone Methyltransferases)
RNA Pol II catalysis-improving factors (Various interfering proteins and Pol II cofactors; see RNA polymerase II).
Transcription mechanisms
Chromatin structure oriented factors (HMTs, histone methyltransferases): COMPASS (COMplex of Proteins ASsociated with Set1) – methylates lysine 4 of histone H3; is responsible for repression/silencing of transcription. A normal part of cell growth and transcription regulation within RNAP II.
Set2 – methylates lysine 36 of histone H3; Set2 is involved in regulating transcription elongation through its direct contact with the CTD. (A related example: Dot1 methylates lysine 79 of histone H3.)
Bre1 – ubiquitinates (adds ubiquitin to) lysine 123 of histone H2B. Associated with pre-initiation and allowing RNA Pol II binding.
C-terminal Domain
The C-terminus of RPB1 is appended to form the C-terminal domain (CTD). The carboxy-terminal domain of RNA polymerase II typically consists of up to 52 repeats of the sequence Tyr-Ser-Pro-Thr-Ser-Pro-Ser. The domain stretches from the core of the RNAPII enzyme to the exit channel; this placement is effective due to its induction of "RNA processing reactions, through direct or indirect interactions with components of the RNA processing machinery". The CTD domain does not exist in RNA Polymerase I or RNA Polymerase III. The RNA Polymerase CTD was discovered first in the laboratory of C. J. Ingles at the University of Toronto and also in the laboratory of J Corden at Johns Hopkins University during the processes of sequencing the DNA encoding the RPB1 subunit of RNA polymerase from yeast and mice respectively. Other proteins often bind the C-terminal domain of RNA polymerase in order to activate polymerase activity. It is the protein domain that is involved in the initiation of transcription, the capping of the RNA transcript, and attachment to the spliceosome for RNA splicing.
Phosphorylation of the CTD
RNA Polymerase II exists in two forms, unphosphorylated and phosphorylated, termed IIA and IIO respectively. The transition between the two forms facilitates different functions for transcription. The phosphorylation of the CTD is catalyzed by one of the six general transcription factors, TFIIH. TFIIH serves two purposes: one is to unwind the DNA at the transcription start site and the other is to phosphorylate the CTD. The polymerase IIA form joins the preinitiation complex; this is suggested because IIA binds with higher affinity to the TBP (TATA-box binding protein), the subunit of the general transcription factor TFIID, than the polymerase IIO form does. The polymerase IIO form facilitates the elongation of the RNA chain. Elongation is initiated by the phosphorylation of serine at position 5 (Ser5), via TFIIH. The newly phosphorylated Ser5 recruits enzymes to cap the 5' end of the newly synthesized RNA and the "3' processing factors to poly(A) sites". Once the second serine, Ser2, is phosphorylated, elongation is activated. In order to terminate elongation, dephosphorylation must occur. Once the domain is completely dephosphorylated, the RNAP II enzyme is "recycled" and catalyzes the same process with another initiation site.
Transcription coupled recombinational repair
Oxidative DNA damage may block RNA polymerase II transcription and cause strand breaks. An RNA templated transcription-associated recombination process has been described that can protect against DNA damage. During the G1/G0 stages of the cell cycle, cells exhibit assembly of homologous recombination factors at double-strand breaks within actively transcribed regions. It appears that transcription is coupled to repair of DNA double-strand breaks by RNA templated homologous recombination. This repair process efficiently and accurately rejoins double-strand breaks in genes being actively transcribed by RNA polymerase II.
See also
Eukaryotic transcription
Post-transcriptional modification
RNA polymerase I
RNA polymerase II holoenzyme
RNA polymerase III
Transcription (genetics)
References
External links
More information at Berkeley National Lab (Wayback Machine copy)
EC 2.7.7
Proteins
Gene expression | RNA polymerase II | [
"Chemistry",
"Biology"
] | 3,083 | [
"Biomolecules by chemical classification",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
19,728,890 | https://en.wikipedia.org/wiki/Field-theoretic%20simulation | A field-theoretic simulation is a numerical strategy to calculate structure and physical properties of a many-particle system within the framework of a statistical field theory, like e.g. a polymer field theory. A convenient possibility is to use Monte Carlo (MC) algorithms, to sample the full partition function integral expressed in field-theoretic representation. The
procedure is then called the auxiliary field Monte Carlo method. However, it is well known that MC sampling in conjunction with the basic field-theoretic representation of the partition function integral, directly obtained via the Hubbard-Stratonovich transformation, is impracticable, due to the so-called numerical sign problem (Baeurle 2002, Fredrickson 2002). The
difficulty is related to the complex and oscillatory nature of the resulting distribution function, which causes a bad statistical convergence of the ensemble averages of the desired structural and thermodynamic quantities. In such cases special analytical and numerical techniques are required to accelerate the statistical convergence of the field-theoretic simulation (Baeurle 2003, Baeurle 2003a, Baeurle 2004).
Shifted-contour Monte Carlo technique
Mean field representation
To make the field-theoretic methodology amenable for computation, Baeurle proposed to shift the contour of integration of the partition function integral through the homogeneous mean field (MF) solution using Cauchy's integral theorem, which provides its so-called mean-field representation. This strategy was previously successfully employed in field-theoretic electronic structure calculations (Rom 1997, Baer 1998). Baeurle could demonstrate that this technique provides a significant acceleration of the statistical convergence of the ensemble averages in the MC sampling procedure (Baeurle 2002).
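A toy one-dimensional analogue illustrates why the contour shift helps. The Gaussian average of exp(i b x) equals exp(-b²/2), but naive sampling estimates an exponentially small mean from unit-modulus samples (the sign problem), whereas shifting the integration contour through the saddle point x = ib makes the integrand real and positive and removes the phase fluctuations entirely. This is only a schematic illustration of the idea, not the actual polymer field theory:

```python
import cmath, math, random

random.seed(0)
b = 4.0                    # oscillation strength; exact answer is exp(-b*b/2)
N = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]

# Naive sampling: each sample exp(i*b*x) has modulus 1, while the mean
# is exp(-b**2/2) ~ 3.4e-4, so the relative statistical error is huge.
naive = sum(cmath.exp(1j * b * x) for x in xs) / N

# Shifting the contour x -> x + i*b through the saddle point turns the
# integrand into exp(-x**2/2) * exp(-b**2/2): every sample contributes
# the same positive weight, and the phase oscillations disappear.
shifted = sum(math.exp(-b * b / 2) for _ in xs) / N

print(abs(naive), shifted, math.exp(-b * b / 2))
```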
Gaussian equivalent representation
In subsequent works Baeurle et al. (Baeurle 2002, Baeurle 2002a) applied the concept of tadpole renormalization, which originates from quantum field theory and leads to the Gaussian equivalent representation of the partition function integral, in conjunction with advanced MC techniques in the grand canonical ensemble. They could convincingly demonstrate that this strategy provides an additional boost in the statistical convergence of the desired ensemble averages (Baeurle 2002).
Alternative techniques
Other promising field-theoretic simulation techniques have been developed recently, but they either still lack the proof of correct statistical convergence, like e.g. the Complex Langevin method (Ganesan 2001), and/or still need to prove their effectiveness on systems, where multiple saddle points are important (Moreira 2003).
References
External links
Theory and Computation of Advanced Materials and Sensors Group
Statistical mechanics
Computational physics | Field-theoretic simulation | [
"Physics"
] | 541 | [
"Statistical mechanics",
"Computational physics"
] |
19,739,736 | https://en.wikipedia.org/wiki/Proebsting%27s%20paradox | In probability theory, Proebsting's paradox is an argument that appears to show that the Kelly criterion can lead to ruin. Although it can be resolved mathematically, it raises some interesting issues about the practical application of Kelly, especially in investing. It was named and first discussed by Edward O. Thorp in 2008. The paradox was named for Todd Proebsting, its creator.
Statement of the paradox
If a bet is equally likely to win or lose, and pays b times the stake for a win, the Kelly bet is:
f* = (b − 1) / (2b)
times wealth. For example, if a 50/50 bet pays 2 to 1, Kelly says to bet 25% of wealth. If a 50/50 bet pays 5 to 1, Kelly says to bet 40% of wealth.
Now suppose a gambler is offered 2 to 1 payout and bets 25%. What should he do if the payout on new bets changes to 5 to 1? He should choose f* to maximize:
0.5 ln(1.5 + 5f*) + 0.5 ln(0.75 − f*)
because if he wins he will have 1.5 (the 0.5 from winning the 25% bet at 2 to 1 odds) plus 5f*; and if he loses he must pay 0.25 from the first bet, and f* from the second. Taking the derivative with respect to f* and setting it to zero gives:
2.5 / (1.5 + 5f*) = 0.5 / (0.75 − f*)
which can be rewritten:
5(0.75 − f*) = 1.5 + 5f*, that is, 10f* = 2.25
So f* = 0.225.
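A minimal numerical check of this optimization, written only to confirm the algebra (a crude grid search over the feasible stakes):

```python
from math import log

def growth(f):
    """Expected log-wealth for a follow-up bet f at 5 to 1, given the
    earlier 25% bet at 2 to 1 on the same 50/50 proposition."""
    return 0.5 * log(1.5 + 5 * f) + 0.5 * log(0.75 - f)

best = max((i / 10000 for i in range(7500)), key=growth)
print(best)  # 0.225, so the total amount staked is 0.25 + 0.225 = 0.475
```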
The paradox is that the total bet, 0.25 + 0.225 = 0.475, is larger than the 0.4 Kelly bet if the 5 to 1 odds are offered from the beginning. It is counterintuitive that you bet more when some of the bet is at unfavorable odds. Todd Proebsting emailed Ed Thorp asking about this.
Ed Thorp realized the idea could be extended to give the Kelly bettor a nonzero probability of being ruined. He showed that if a gambler is offered 2 to 1 odds, then 4 to 1, then 8 to 1 and so on (2^n to 1 for n = 1 to infinity) Kelly says to bet:
(1/4)(3/4)^(n−1) of his original wealth
each time. The sum of all these bets is 1. So a Kelly gambler has a 50% chance of losing his entire wealth.
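The construction can be reproduced numerically: at each stage the new Kelly increment maximizes expected log-wealth given the exposure already accumulated, and the increments come out as the geometric series (1/4)(3/4)^(n−1), whose running total approaches 1. A sketch using a simple grid search:

```python
from math import log

def next_bet(win_exposure, loss_exposure, b, grid=200_000):
    """Kelly increment at payout b, given the win/loss exposure of the
    bets already placed on the same 50/50 proposition."""
    def g(f):
        return 0.5 * log(1 + win_exposure + b * f) + 0.5 * log(1 - loss_exposure - f)
    top = 1 - loss_exposure
    return max((top * i / grid for i in range(grid)), key=g)

win = loss = total = 0.0
for n in range(1, 11):            # payouts 2, 4, 8, ..., 2**10 to 1
    f = next_bet(win, loss, 2 ** n)
    win += 2 ** n * f
    loss += f
    total += f
    print(n, round(f, 4), round(total, 4))
# Increments follow (1/4)*(3/4)**(n-1); the running total approaches 1.
```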
In general, if a bettor makes the Kelly bet f1 = (b1 − 1)/(2b1) on a 50/50 proposition with a payout of b1, and then is offered b2, he will bet a total of:
f2 + f1(b2 − b1) / (2b2)
where f2 = (b2 − 1)/(2b2) is the Kelly bet for a payout of b2.
The first term is what the bettor would bet if offered b2 initially. The second term is positive if f2 > f1, meaning that if the payout improves, the Kelly bettor will bet more than he would if just offered the second payout, while if the payout gets worse he will bet less than he would if offered only the second payout.
Practical application
Many bets have the feature that payoffs and probabilities can change before the outcome is determined. In sports betting for example, the line may change several times before the event is held, and news may come out (such as an injury or weather forecast) that changes the probability of an outcome. In investing, a stock originally bought at $20 per share might be available now at $10 or $30 or any other price. Some sports bettors try to make income from anticipating line changes rather than predicting event outcomes. Some traders concentrate on possible short-term price movements of a security rather than its long-term fundamental prospects.
A classic investing example is a trader who has exposure limits, say he is not allowed to have more than $1 million at risk in any one stock. That doesn't mean he cannot lose more than $1 million. If he buys $1 million of the stock at $20 and it goes to $10, he can buy another $500,000. If it then goes to $5, he can buy another $500,000. If it goes to zero, he can lose an infinite amount of money, despite never having more than $1 million at risk.
Resolution
One easy way to dismiss the paradox is to note that Kelly assumes that probabilities do not change. A Kelly bettor who knows the odds might change could factor this into a more complex Kelly bet. For example, suppose a Kelly bettor is given a one-time opportunity to bet a 50/50 proposition at odds of 2 to 1. He knows there is a 50% chance that a second one-time opportunity will be offered at 5 to 1. Now he should maximize:
0.25 ln(1 + 2f1 + 5f2) + 0.25 ln(1 - f1 - f2) + 0.25 ln(1 + 2f1) + 0.25 ln(1 - f1)
with respect to both f1 and f2. The answer turns out to be bet zero at 2 to 1, and wait for the chance of betting at 5 to 1, in which case you bet 40% of wealth. If the probability of being offered 5 to 1 odds is less than 50%, some amount between zero and 25% will be bet at 2 to 1. If the probability of being offered 5 to 1 odds is more than 50%, the Kelly bettor will actually make a negative bet at 2 to 1 odds (that is, bet on the 50/50 outcome with payout of 1/2 if he wins and paying 1 if he loses). In either case, his bet at 5 to 1 odds, if the opportunity is offered, is 40% minus 0.7 times his 2 to 1 bet.
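A crude numerical check of this solution (plain Python; the grid bounds and resolution are arbitrary choices, and negative f1 encodes the reversed bet described above):

import math

def expected_log(f1, f2):
    # 50%: the 5 to 1 offer arrives and both bets ride on the same flip;
    # 50%: it does not, and only the 2 to 1 bet rides.
    with_offer = 0.5 * math.log(1 + 2 * f1 + 5 * f2) + 0.5 * math.log(1 - f1 - f2)
    without = 0.5 * math.log(1 + 2 * f1) + 0.5 * math.log(1 - f1)
    return 0.5 * with_offer + 0.5 * without

grid = [i / 200 for i in range(-80, 121)]       # -0.40 .. 0.60 in steps of 0.005
best = max(((f1, f2) for f1 in grid for f2 in grid
            if f2 >= 0 and f1 + f2 < 1),
           key=lambda p: expected_log(*p))
print(best)   # (0.0, 0.4): bet nothing at 2 to 1 and wait for the 5 to 1 offer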
What the paradox says, essentially, is that if a Kelly bettor has incorrect beliefs about what future bets may be offered, he can make suboptimal choices, and even go broke. The Kelly criterion is supposed to do better than any essentially different strategy in the long run and have zero chance of ruin, as long as the bettor knows the probabilities and payouts.
More light on the issues was shed by an independent consideration of the problem by Aaron Brown, also communicated to Ed Thorp by email. In this formulation, the assumption is the bettor first sells back the initial bet, then makes a new bet at the second payout. In this case his total bet is:
f2 - b1 f1 (f2 - f1) (b2 - 1)/(b2 + 1)
which looks very similar to the formula above for the Proebsting formulation, except that the sign is reversed on the second term and it is multiplied by an additional term.
For example, in the original case of a 2 to 1 payout followed by a 5 to 1 payout, the bettor in this formulation first bets 25% of wealth at 2 to 1. When the 5 to 1 payout is offered, the bettor can sell back the original bet for a loss of 0.125. His 2 to 1 bet pays 0.5 if he wins and costs 0.25 if he loses. At the new 5 to 1 payout, he could get a bet that pays 0.625 if he wins and costs 0.125 if he loses; this is 0.125 better than his original bet in both states. Therefore his original bet now has a value of -0.125. Given his new wealth level of 0.875, his 40% bet (the Kelly amount for the 5 to 1 payout) is 0.35.
The two formulations are equivalent. In the original formulation, the bettor has 0.25 bet at 2 to 1 and 0.225 bet at 5 to 1. If he wins, he gets 2.625 and if he loses he has 0.525. In the second formulation, the bettor has 0.875 and 0.35 bet at 5 to 1. If he wins, he gets 2.625 and if he loses he has 0.525.
The second formulation makes clear that the change in behavior results from the mark-to-market loss the investor experiences when the new payout is offered. This is a natural way to think in finance, less natural to a gambler. In this interpretation, the infinite series of doubling payouts does not ruin the Kelly bettor by enticing him to overbet, it extracts all his wealth through changes beyond his control.
References
Gambling mathematics
Information theory
Statistical paradoxes
Paradoxes in utility theory | Proebsting's paradox | [
"Mathematics",
"Technology",
"Engineering"
] | 1,624 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Statistical paradoxes",
"Information theory",
"Mathematical paradoxes",
"Mathematical problems"
] |
12,910,136 | https://en.wikipedia.org/wiki/Bombesin-like%20receptor%203 | The bombesin receptor subtype 3 also known as BRS-3 or BB3 is a protein which in humans is encoded by the BRS3 gene.
Function
Mammalian bombesin-like peptides are widely distributed in the central nervous system as well as in the gastrointestinal tract, where they modulate smooth-muscle contraction, exocrine and endocrine processes, metabolism, and behavior. They bind to G protein-coupled receptors on the cell surface to elicit their effects. Bombesin-like peptide receptors include gastrin-releasing peptide receptor, neuromedin B receptor, and bombesin-like receptor-3 (BRS3; this article).
BB3 is a G protein-coupled receptor. It binds the known naturally occurring bombesin-related peptides only with low affinity and, as it has no known natural high-affinity ligand, is therefore classified as an orphan receptor.
References
Further reading
External links
G protein-coupled receptors | Bombesin-like receptor 3 | [
"Chemistry"
] | 204 | [
"G protein-coupled receptors",
"Signal transduction"
] |
12,910,645 | https://en.wikipedia.org/wiki/Neuromedin%20B%20receptor | The neuromedin B receptor (NMBR), now known as BB1 is a G protein-coupled receptor whose endogenous ligand is neuromedin B. In humans, this protein is encoded by the NMBR gene.
Neuromedin B receptor binds neuromedin B, a potent mitogen and growth factor for normal and neoplastic lung and for gastrointestinal epithelial tissue.
References
Further reading
External links
G protein-coupled receptors | Neuromedin B receptor | [
"Chemistry"
] | 99 | [
"G protein-coupled receptors",
"Signal transduction"
] |
12,910,855 | https://en.wikipedia.org/wiki/Gastrin-releasing%20peptide%20receptor | The gastrin-releasing peptide receptor (GRPR), now properly known as BB2 is a G protein-coupled receptor whose endogenous ligand is gastrin releasing peptide. In humans it is highly expressed in the pancreas and is also expressed in the stomach, adrenal cortex and brain.
Gastrin-releasing peptide (GRP) regulates numerous functions of the gastrointestinal and central nervous systems, including release of gastrointestinal hormones, smooth muscle cell contraction, and epithelial cell proliferation, and is a potent mitogen for neoplastic tissues. The effects of GRP are mediated through the gastrin-releasing peptide receptor. This receptor is a glycosylated, 7-transmembrane G-protein coupled receptor that activates the phospholipase C signaling pathway. The receptor is aberrantly expressed in numerous cancers such as those of the lung, colon, and prostate. An individual with autism and multiple exostoses was found to have a balanced translocation between chromosome 8 and chromosome X, with the breakpoint on X located within the gastrin-releasing peptide receptor gene.
The transcription factor CREB is a regulator of human GRP-R expression in colon cancer.
Activation of MOR1D–GRPR heteromers in the spinal cord mediates opioid-induced itch, a common and troublesome side effect of opioids.
References
Further reading
External links
IUPHAR GPCR Database - BB2 receptor
G protein-coupled receptors | Gastrin-releasing peptide receptor | [
"Chemistry"
] | 302 | [
"G protein-coupled receptors",
"Signal transduction"
] |
12,912,917 | https://en.wikipedia.org/wiki/Bullough%E2%80%93Dodd%20model | The Bullough–Dodd model is an integrable model in 1+1-dimensional quantum field theory introduced by Robin Bullough and Roger Dodd. Its
Lagrangian density is
L = (1/2)(∂φ)^2 - (m^2/(6g^2))(2e^(gφ) + e^(-2gφ))
where m is a mass parameter, g is the coupling constant and φ is a real scalar field.
The Bullough–Dodd model belongs to the class of affine Toda field theories.
The spectrum of the model consists of a single massive particle.
See also
List of integrable models
References
Quantum field theory
Exactly solvable models
Integrable systems | Bullough–Dodd model | [
"Physics"
] | 107 | [
"Quantum field theory",
"Integrable systems",
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
23,705,618 | https://en.wikipedia.org/wiki/Invariant%20set%20postulate | The invariant set postulate concerns the possible relationship between fractal geometry and quantum mechanics and in particular the hypothesis that the former can assist in resolving some of the challenges posed by the latter. It is underpinned by nonlinear dynamical systems theory and black hole thermodynamics.
Author
The proposer of the postulate is climate scientist and physicist Tim Palmer. Palmer completed a PhD at the University of Oxford under Dennis Sciama, the same supervisor as Stephen Hawking, and then worked with Hawking himself at the University of Cambridge on supergravity theory. He later switched to meteorology and established a reputation as a pioneer of ensemble forecasting. He now works at the European Centre for Medium-Range Weather Forecasts in Reading, England.
Overview
Palmer argues that the postulate may help to resolve some of the paradoxes of quantum mechanics that have been discussed since the Bohr–Einstein debates of the 1920s and 30s and which remain unresolved. The idea backs Einstein's view that quantum theory is incomplete, but also agrees with Bohr's contention that quantum systems are not independent of the observer.
The key idea involved is that there exists a state space for the Universe, and that the state of the entire Universe can be expressed as a point in this state space. This state space can then be divided into "real" and "unreal" sets (parts), where, for example, the states where the Nazis lost WW2 are in the "real" set, and the states where the Nazis won WW2 are in the "unreal" set of points. The partition of state space into these two sets is unchanging, making the sets invariant.
If the Universe is a complex system affected by chaos then its invariant set (the set of states that the dynamics never leaves) is likely to be a fractal. According to Palmer this could resolve problems posed by the Kochen–Specker theorem, which appears to indicate that physics may have to abandon the idea of any kind of objective reality, and the apparent paradox of action at a distance. In a paper submitted to the Proceedings of the Royal Society he indicates how the idea can account for quantum uncertainty and problems of "contextuality". For example, exploring the quantum problem of wave-particle duality, one of the central mysteries of quantum theory, the author claims that "in terms of the Invariant Set Postulate, the paradox is easily resolved, in principle at least". The paper and related talks given at the Perimeter Institute and University of Oxford also explore the role of gravity in quantum physics.
Critical reception
New Scientist quotes Bob Coeke of Oxford University as stating "What makes this really interesting is that it gets away from the usual debates over multiple universes and hidden variables and so on. It suggests there might be an underlying physical geometry that physics has just missed, which is radical and very positive". He added that "Palmer manages to explain some quantum phenomena, but he hasn't yet derived the whole rigid structure of the theory. This is really necessary."
Robert Spekkens has said: "I think his approach is really interesting and novel. Other physicists have shown how you can find a way out of the Kochen–Specker theorem, but this work actually provides a mechanism to explain the theorem."
According to Todd Brun, it is a tall order to turn Palmer's ideas into a serious rival to quantum mechanics, that is, a genuinely predictive theory. This goal has not yet been achieved.
See also
Fractal cosmology
References
Fractals
Quantum mechanics
Fringe physics | Invariant set postulate | [
"Physics",
"Mathematics"
] | 728 | [
"Functions and mappings",
"Mathematical analysis",
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Fractals",
"Mathematical relations"
] |
23,706,623 | https://en.wikipedia.org/wiki/Operand%20isolation | In electronic low power digital synchronous circuit design, operand isolation is a technique for minimizing the energy overhead associated with redundant operations by selectively blocking the propagation of switching activity through the circuit.
This technique isolates sections of the circuit (operation) from "seeing" changes on their inputs (operands) unless they are expected to respond to them.
This is usually done using latches at the inputs of the circuit. The latches become transparent only when the result of the operation is going to be used. One can also use multiplexers or simple AND gates instead of latches.
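The effect can be modeled behaviorally. The following toy sketch (Python; the 16-bit operand width, cycle count and 10% result-use rate are invented) counts input transitions of a functional unit with and without latch-based isolation, holding the operands on cycles whose result is not used:

import random

def toggles(prev, new):
    # number of bit positions that switch between consecutive operand values
    return bin(prev ^ new).count("1")

random.seed(0)
ops = [(random.getrandbits(16), random.getrandbits(16)) for _ in range(10_000)]
needed = [random.random() < 0.1 for _ in ops]   # result is used on 10% of cycles

def switching_activity(isolate):
    a_prev = b_prev = 0
    total = 0
    for (a, b), used in zip(ops, needed):
        if isolate and not used:
            a, b = a_prev, b_prev       # opaque latches: operands are held
        total += toggles(a_prev, a) + toggles(b_prev, b)
        a_prev, b_prev = a, b
    return total

print("input transitions without isolation:", switching_activity(False))
print("input transitions with isolation:   ", switching_activity(True))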
Overhead
There is some area overhead associated with this technique since the circuit designer needs to add extra circuitry, i.e. latches, at the inputs. Also, if the latches are being added in a pipeline stage, they might change the critical path, and hence increase the propagation delay and cycle time. In cases where the overhead is not acceptable, one can think of clock gating as an alternative method of low power design.
See also
Glitch removal
Clock gating
Distributive law - a similar idea in mathematics
Reduction (mathematics) - a similar idea in mathematics
Reducing a fraction - a similar idea in mathematics
Reduced Karnaugh map (RKM) - a similar technique in logic optimization
Infrequent variables - a similar technique in logic optimization
References
Electrical circuits | Operand isolation | [
"Engineering"
] | 283 | [
"Electrical engineering",
"Electronic engineering",
"Electrical circuits"
] |
23,706,953 | https://en.wikipedia.org/wiki/Finite%20ring | In mathematics, more specifically abstract algebra, a finite ring is a ring that has a finite number of elements.
Every finite field is an example of a finite ring, and the additive part of every finite ring is an example of an abelian finite group, but the concept of finite rings in their own right has a more recent history.
Although rings have more structure than groups do, the theory of finite rings is simpler than that of finite groups. For instance, the classification of finite simple groups was one of the major breakthroughs of 20th century mathematics, its proof spanning thousands of journal pages. On the other hand, it has been known since 1907 that any finite simple ring is isomorphic to the ring Mn(Fq) – the n-by-n matrices over a finite field of order q (as a consequence of Wedderburn's theorems, described below).
The number of rings with m elements, for m a natural number, is listed in the On-Line Encyclopedia of Integer Sequences.
Finite field
The theory of finite fields is perhaps the most important aspect of finite ring theory due to its intimate connections with algebraic geometry, Galois theory and number theory. An important, but fairly old aspect of the theory is the classification of finite fields:
The order or number of elements of a finite field equals p^n, where p is a prime number called the characteristic of the field, and n is a positive integer.
For every prime number p and positive integer n, there exists a finite field with p^n elements.
Any two finite fields with the same order are isomorphic.
Despite the classification, finite fields are still an active area of research, including recent results on the Kakeya conjecture and open problems regarding the size of smallest primitive roots (in number theory).
A finite field F may be used to build an n-dimensional vector space over F. The matrix ring A of n × n matrices with elements from F is used in Galois geometry, with the projective linear group serving as the multiplicative group of A.
Wedderburn's theorems
Wedderburn's little theorem asserts that any finite division ring is necessarily commutative:
If every nonzero element r of a finite ring R has a multiplicative inverse, then R is commutative (and therefore a finite field).
Nathan Jacobson later discovered yet another condition which guarantees commutativity of a ring: if for every element r of R there exists an integer n > 1 such that r^n = r, then R is commutative. More general conditions that imply commutativity of a ring are also known.
Yet another theorem by Wedderburn has, as its consequence, a result demonstrating that the theory of finite simple rings is relatively straightforward in nature. More specifically, any finite simple ring is isomorphic to the ring Mn(Fq), the n-by-n matrices over a finite field of order q. This follows from two theorems of Joseph Wedderburn established in 1905 and 1907 (one of which is Wedderburn's little theorem).
Enumeration
(Warning: the enumerations in this section include rings that do not necessarily have a multiplicative identity, sometimes called rngs.) In 1964 David Singmaster proposed the following problem in the American Mathematical Monthly: "(1) What is the order of the smallest non-trivial ring with identity which is not a field? Find two such rings with this minimal order. Are there more? (2) How many rings of order four are there?"
One can find the solution by D.M. Bloom in a two-page proof that there are eleven rings of order 4, four of which have a multiplicative identity. Indeed, four-element rings introduce the complexity of the subject. There are three rings over the cyclic group C4 and eight rings over the Klein four-group. There is an interesting display of the discriminatory tools (nilpotents, zero-divisors, idempotents, and left- and right-identities) in Gregory Dresden's lecture notes.
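These counts can be verified by brute force, because a distributive multiplication on a finite abelian group is determined by the products of its generators. A minimal sketch in Python (standard library only):

from itertools import permutations, product

# Rings on the cyclic group C4: the product is fixed by c = 1*1 via
# i*j = i*j*c (mod 4), and every c gives an associative product.
# Aut(C4) = {x -> x, x -> 3x} identifies c with 3c (mod 4).
c4 = {frozenset({c, (3 * c) % 4}) for c in range(4)}
print("rings on C4:", len(c4))                              # 3

# Rings on the Klein four-group V = Z2 x Z2: the product is determined by
# the four basis products a*a, a*b, b*a, b*b and extended bilinearly, so
# distributivity holds by construction; only associativity must be checked.
V = [(0, 0), (0, 1), (1, 0), (1, 1)]
pairs = list(product(V, repeat=2))

def make_mul(tab):
    aa, ab, ba, bb = tab
    def mul(x, y):
        s = (0, 0)
        for coeff, t in ((x[0] * y[0], aa), (x[0] * y[1], ab),
                         (x[1] * y[0], ba), (x[1] * y[1], bb)):
            if coeff:
                s = ((s[0] + t[0]) % 2, (s[1] + t[1]) % 2)
        return s
    return mul

tables = []
for tab in product(V, repeat=4):
    mul = make_mul(tab)
    if all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in V for y in V for z in V):
        tables.append(tuple(mul(x, y) for x, y in pairs))

# Aut(V) is S3, permuting the three nonzero elements (zero stays fixed).
autos = [dict([((0, 0), (0, 0))] + list(zip(V[1:], p)))
         for p in permutations(V[1:])]

def relabel(table, phi):
    inv = {v: k for k, v in phi.items()}
    lut = dict(zip(pairs, table))
    return tuple(phi[lut[(inv[x], inv[y])]] for x, y in pairs)

v4 = {min(relabel(t, phi) for phi in autos) for t in tables}
print("rings on V4:", len(v4))                              # 8
print("rings of order 4:", len(c4) + len(v4))               # 11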
The occurrence of non-commutativity in finite rings was described in two theorems: if the order m of a finite ring with 1 has a cube-free factorization, then the ring is commutative; and if a non-commutative finite ring with 1 has order equal to the cube of a prime, then the ring is isomorphic to the upper triangular 2 × 2 matrix ring over the Galois field of that prime.
The study of rings whose order is the cube of a prime was developed further in later works, and Flor and Wessenbauer (1975) made improvements on the cube-of-a-prime case. Definitive work on the isomorphism classes came with the proof that for p > 2 the number of classes is 3p + 50.
There are earlier references in the topic of finite rings, such as Robert Ballieu and Scorza.
These are a few of the facts that are known about the number of finite rings (not necessarily with unity) of a given order (suppose p and q represent distinct prime numbers):
There are two finite rings of order p.
There are four finite rings of order pq.
There are eleven finite rings of order p^2.
There are twenty-two finite rings of order p^2·q.
There are fifty-two finite rings of order eight.
There are 3p + 50 finite rings of order p^3, p > 2.
The number of rings with n elements is (with a(0) = 1)
1, 1, 2, 2, 11, 2, 4, 2, 52, 11, 4, 2, 22, 2, 4, 4, 390, 2, 22, 2, 22, 4, 4, 2, 104, 11, 4, 59, 22, 2, 8, 2, >18590, 4, 4, 4, 121, 2, 4, 4, 104, 2, 8, 2, 22, 22, 4, 2, 780, 11, 22, ...
See also
Galois ring, finite commutative rings that generalize Z/pnZ and finite fields
Notes
References
a research report of the work of 13 students and Prof. Sieler at a Washington & Lee University class in Abstract algebra (Math 322).
External links
Classification of finite commutative rings
Algebraic combinatorics
Ring theory | Finite ring | [
"Mathematics"
] | 1,292 | [
"Fields of abstract algebra",
"Algebraic combinatorics",
"Ring theory",
"Combinatorics"
] |
23,708,391 | https://en.wikipedia.org/wiki/Formal%20manifold | In geometry and topology, a formal manifold can mean one of a number of related concepts:
In the sense of Dennis Sullivan, a formal manifold is one whose real homotopy type is a formal consequence of its real cohomology ring; algebro-topologically this means in particular that all Massey products vanish.
A stronger notion is a geometrically formal manifold, a manifold on which all wedge products of harmonic forms are harmonic.
References
Manifolds | Formal manifold | [
"Mathematics"
] | 92 | [
"Space (mathematics)",
"Topology stubs",
"Topological spaces",
"Topology",
"Manifolds"
] |
23,714,114 | https://en.wikipedia.org/wiki/C6H9NO | The molecular formula C6H9NO may refer to:
2-Acetyl-1-pyrroline
Carbapenam
N-Vinylpyrrolidone
Molecular formulas | C6H9NO | [
"Physics",
"Chemistry"
] | 52 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,714,121 | https://en.wikipedia.org/wiki/C12H18N2O2 | The molecular formula C12H18N2O2 may refer to:
Doxpicomine
Isophorone diisocyanate
Mexacarbate
Miotine
Molecular formulas | C12H18N2O2 | [
"Physics",
"Chemistry"
] | 56 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,714,218 | https://en.wikipedia.org/wiki/C15H21N3O | The molecular formula C15H21N3O (molar mass: 259.35 g/mol, exact mass: 259.1685 u) may refer to:
Primaquine
GSK-789,472
Molecular formulas | C15H21N3O | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,714,226 | https://en.wikipedia.org/wiki/C13H20N2O2 | The molecular formula C13H20N2O2 (molar mass: 236.31 g/mol, exact mass: 236.1525 u) may refer to:
Dropropizine, a cough suppressant
Levodropropizine
Metabutethamine
Procaine | C13H20N2O2 | [
"Chemistry"
] | 78 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
23,714,384 | https://en.wikipedia.org/wiki/Tube%20sound | Tube sound (or valve sound) is the characteristic sound associated with a vacuum tube amplifier (valve amplifier in British English), a vacuum tube-based audio amplifier. At first the concept of a tube sound did not exist, because practically all electronic amplification of audio signals was done with vacuum tubes and no comparable alternative was in use. After the introduction of solid state amplifiers, tube sound emerged as the logical complement of transistor sound, which had negative connotations due to the crossover distortion of early transistor amplifiers. As solid state designs matured, however, their sound came to be regarded as neutral compared to that of tube amplifiers, so "tube sound" now usually denotes a euphonic form of distortion. The audible significance of tube amplification of audio signals is a subject of continuing debate among audio enthusiasts.
Many electric guitar, electric bass, and keyboard players in several genres also prefer the sound of tube instrument amplifiers or preamplifiers. Tube amplifiers are also preferred by some listeners for stereo systems.
History
Before the commercial introduction of transistors in the 1950s, electronic amplifiers used vacuum tubes (known in the United Kingdom as "valves"). By the 1960s, solid state (transistorized) amplification had become more common because of its smaller size, lighter weight, lower heat production, and improved reliability. Tube amplifiers have retained a loyal following amongst some audiophiles and musicians. Some tube designs command very high prices, and tube amplifiers have been going through a revival since Chinese and Russian markets have opened to global trade—tube production never went out of vogue in these countries. Many transistor-based audio power amplifiers use MOSFET (metal–oxide–semiconductor field-effect transistor) devices in their power sections, because their distortion curve is more tube-like.
Musical instrument amplification
Some musicians prefer the distortion characteristics of tubes over transistors for electric guitar, bass, and other instrument amplifiers. In this case, generating deliberate (and, for electric guitars, often considerable) audible distortion or overdrive is usually the goal. The term can also be used to describe the sound created by specially designed transistor amplifiers or digital modeling devices that try to closely emulate the characteristics of the tube sound.
The tube sound is often subjectively described as having a "warmth" and "richness", but the source of this is by no means agreed on. Possible explanations mention non-linear clipping, or the higher levels of second-order harmonic distortion in single-ended designs, resulting from the tube interacting with the inductance of the output transformer.
Harmonic content and distortion
Triodes (and MOSFETs) produce a monotonically decaying harmonic distortion spectrum. Even-order and odd-order harmonics are, respectively, the even and odd integer multiples of the input frequency.
Psychoacoustic analysis tells us that high-order harmonics are more offensive than low-order ones. For this reason, distortion measurements should weight audible high-order harmonics more heavily than low. The importance of high-order harmonics suggests that distortion should be regarded in terms of the complete series, or of the composite waveform that this series represents. It has been shown that weighting the harmonics by the square of their order correlates well with subjective listening tests. Weighting the distortion waveform in proportion to the square of the frequency gives a measure of the reciprocal of the waveform's radius of curvature, and is therefore related to the sharpness of any corners on it. Building on this observation, more sophisticated methods of weighting distortion harmonics have been developed. Since they focus on the origins of the distortion, they are mostly useful for the engineers who design audio amplifiers, but they may be difficult to apply for reviewers who can only measure the output.
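As a toy numerical illustration (Python with NumPy; the two spectra and the square-of-order weighting are invented for illustration and do not reproduce any particular published metric):

import numpy as np

def thd(h):
    # plain THD: RMS of harmonics 2..N relative to the fundamental h[0]
    h = np.asarray(h, float)
    return np.sqrt(np.sum(h[1:] ** 2)) / h[0]

def weighted_thd(h):
    # toy weighted metric: the n-th harmonic's amplitude is scaled by n**2
    h = np.asarray(h, float)
    n = np.arange(1, len(h) + 1)
    return np.sqrt(np.sum((n[1:] ** 2 * h[1:]) ** 2)) / h[0]

low_order = [1.0, 0.0316]                      # ~3.2% second harmonic only
high_order = [1.0, 0, 0, 0, 0, 0, 0.0316]      # ~3.2% seventh harmonic only
print(thd(low_order), thd(high_order))                    # identical: 0.0316
print(weighted_thd(low_order), weighted_thd(high_order))  # 0.13 vs 1.55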
A major issue is that objective measurements (for example, of scientifically quantifiable variables such as current, voltage, power, THD, and decibel levels) fail to address subjective preferences. This is a considerable problem especially when designing or reviewing instrument amplifiers, because their design goals differ widely from those of hi-fi amplifiers. Hi-fi design largely concentrates on improving the performance of objectively measurable variables, whereas instrument amplifier design largely concentrates on subjective issues, such as the "pleasantness" of a certain type of tone. Distortion and frequency response are good examples: hi-fi design tries to minimize distortion, focuses on eliminating "offensive" harmonics, and aims for an ideally flat response; musical instrument amplifier design deliberately introduces distortion and pronounced non-linearities in frequency response. The "offensiveness" of particular harmonics then becomes a highly subjective topic, as do preferences for particular frequency responses, flat or otherwise.
Push–pull amplifiers use two nominally identical gain devices in tandem. One consequence of this is that all even-order harmonic products cancel, leaving only odd-order distortion, because a push–pull amplifier has a symmetric (odd-symmetry) transfer characteristic. Most power amplifiers are of the push–pull type in order to avoid the inefficiency of class-A operation.
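The cancellation is easy to demonstrate numerically. In this sketch (Python with NumPy; the polynomial transfer curve and the signal parameters are invented), one mildly curved gain stage is used first single-ended and then in an idealized push–pull pair, and the second and third harmonics are read off an FFT:

import numpy as np

fs, f0, N = 48_000, 1_000, 48_000
t = np.arange(N) / fs
u = 0.8 * np.sin(2 * np.pi * f0 * t)

def stage(v):
    # one gain device with mild 2nd- and 3rd-order curvature
    return v + 0.10 * v ** 2 + 0.02 * v ** 3

single_ended = stage(u)
push_pull = stage(u) - stage(-u)   # two identical devices driven in antiphase

def level(y, k):
    # level of the k-th harmonic relative to the fundamental, in dB
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    def bin_of(m): return round(m * f0 * len(y) / fs)
    return 20 * np.log10(spec[bin_of(k)] / spec[bin_of(1)])

for k in (2, 3):
    print(f"H{k}: single-ended {level(single_ended, k):7.1f} dB,"
          f" push-pull {level(push_pull, k):7.1f} dB")
# H2 survives single-ended but drops to the numerical noise floor in push-pull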
A single-ended amplifier will generally produce even as well as odd harmonics. A particularly famous study of "tube sound" compared a selection of single-ended tube microphone preamplifiers to a selection of push–pull transistorized microphone preamplifiers. The difference in the harmonic patterns of these two topologies has since often been attributed, incorrectly, to a difference between tube and solid-state devices (or even to the amplifier class). Push–pull tube amplifiers can be run in class A (rarely), AB, or B. A class-B amplifier may also have crossover distortion, which is typically of high order and thus sonically very undesirable.
The distortion content of class-A circuits (single-ended or push–pull) typically decreases monotonically with signal level, approaching zero during quiet passages of music. For this reason class-A amplifiers are especially desired for classical and acoustic music, since the distortion relative to the signal decreases as the music gets quieter. Class-A amplifiers measure best at low power; class-AB and class-B amplifiers measure best just below maximum rated power.
Loudspeakers present a reactive load to an amplifier (capacitance, inductance and resistance). This impedance may vary in value with signal frequency and amplitude. This variable loading affects the amplifier's performance both because the amplifier has nonzero output impedance (it cannot keep its output voltage perfectly constant when the speaker load varies) and because the phase of the speaker load can change the stability margin of the amplifier. The influence of the speaker impedance is different between tube amplifiers and transistor amplifiers. The reason is that tube amplifiers normally use output transformers, and cannot use much negative feedback due to phase problems in transformer circuits. Notable exceptions are various "OTL" (output-transformerless) tube amplifiers, pioneered by Julius Futterman in the 1950s, or somewhat rarer tube amplifiers that replace the impedance matching transformer with additional (often, though not necessarily, transistorized) circuitry in order to eliminate parasitics and musically unrelated magnetic distortions. In addition to that, many solid-state amplifiers, designed specifically to amplify electric instruments such as guitars or bass guitars, employ current feedback circuitry. This circuitry increases the amplifier's output impedance, resulting in response similar to that of tube amplifiers.
The design of speaker crossover networks and other electro-mechanical properties may result in a speaker with a very uneven impedance curve, for a nominal 8 Ω speaker, being as low as 6 Ω at some places and as high as 30–50 Ω elsewhere in the curve. An amplifier with little or no negative feedback will always perform poorly when faced with a speaker where little attention was paid to the impedance curve.
Design comparison
There has been considerable debate over the characteristics of tubes versus bipolar junction transistors. Triodes and MOSFETs have certain similarities in their transfer characteristics. Later forms of the tube, the tetrode and pentode, have quite different characteristics that are in some ways similar to the bipolar transistor. Yet MOSFET amplifier circuits typically do not reproduce tube sound any more than typical bipolar designs. The reason is circuit differences between a typical tube design and a typical MOSFET design.
Input impedance
A characteristic feature of most tube amplifier designs is the high input impedance (typically 100 kΩ or more) in modern designs and as much as 1 MΩ in classic designs. The input impedance of the amplifier is a load for the source device. Even for some modern music reproduction devices the recommended load impedance is over 50 kΩ. This implies that the input of an average tube amplifier is a problem-free load for music signal sources. By contrast, some transistor amplifiers for home use have lower input impedances, as low as 15 kΩ. Since it is possible to use high output impedance devices due to the high input impedance, other factors may need to be accounted for, such as cable capacitance and microphonics.
Output impedance
Loudspeakers usually load audio amplifiers. In audio history, nearly all loudspeakers have been electrodynamic loudspeakers. There exists also a minority of electrostatic loudspeakers and some other more exotic loudspeakers. Electrodynamic loudspeakers transform electric current to force and force to acceleration of the diaphragm which causes sound pressure. Due to the principle of an electrodynamic speaker, most loudspeaker drivers ought to be driven by an electric current signal. The current signal drives the electrodynamic speaker more accurately, causing less distortion than a voltage signal.
In an ideal current or transconductance amplifier the output impedance approaches infinity. Practically all commercial audio amplifiers are voltage amplifiers, whose output impedances have been intentionally developed to approach zero. Due to the nature of vacuum tubes and audio transformers, the output impedance of an average tube amplifier is usually considerably higher than that of modern audio amplifiers built entirely without vacuum tubes or audio transformers. Most tube amplifiers, with their higher output impedance, are therefore less ideal voltage amplifiers than solid-state voltage amplifiers with their smaller output impedance.
Soft clipping
Soft clipping is a very important aspect of the tube sound, especially for guitar amplifiers. (A hi-fi amplifier should normally never be driven into clipping.) The harmonics added to the signal have lower energy with soft clipping than with hard clipping. However, soft clipping is not exclusive to tubes: it can be simulated in transistor circuits, below the point at which real hard clipping would occur. (See the "Intentional distortion" section.)
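A quick numerical comparison of the two behaviors (Python with NumPy; tanh stands in here for a generic soft limiter and is not a model of any particular tube stage):

import numpy as np

fs, f0, N = 48_000, 1_000, 48_000
t = np.arange(N) / fs
x = 1.5 * np.sin(2 * np.pi * f0 * t)     # drive well beyond the clip level

hard = np.clip(x, -1.0, 1.0)             # abrupt limiting: sharp corners
soft = np.tanh(x)                        # gradual limiting: rounded corners

def odd_harmonics_db(y, orders=(3, 5, 7, 9)):
    spec = np.abs(np.fft.rfft(y))
    fund = spec[round(f0 * N / fs)]
    return [round(20 * np.log10(spec[round(k * f0 * N / fs)] / fund), 1)
            for k in orders]

print("hard:", odd_harmonics_db(hard))   # high orders decay slowly
print("soft:", odd_harmonics_db(soft))   # high orders fall away much faster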
Large amounts of global negative feedback are not available in tube circuits, due to phase shift in the output transformer, and lack of sufficient gain without large numbers of tubes. With lower feedback, distortion is higher and predominantly of low order. The onset of clipping is also gradual. Large amounts of feedback, allowed by transformerless circuits with many active devices, leads to numerically lower distortion but with more high harmonics, and harder transition to clipping. As input increases, the feedback uses the extra gain to ensure that the output follows it accurately until the amplifier has no more gain to give and the output saturates.
However, phase shift is largely an issue only with global feedback loops. Design architectures with local feedback can be used to compensate the lack of global negative feedback magnitude. Design "selectivism" is again a trend to observe: designers of sound producing devices may find the lack of feedback and resulting higher distortion beneficial, designers of sound reproducing devices with low distortion have often employed local feedback loops.
Soft clipping is also not a product of lack of feedback alone: Tubes have different characteristic curves. Factors such as bias affect the load line and clipping characteristics. Fixed and cathode-biased amplifiers behave and clip differently under overdrive. The type of phase inverter circuitry can also affect greatly on softness (or lack of it) of clipping: long-tailed pair circuit, for example, has softer transition to clipping than a cathodyne. The coupling of the phase inverter and power tubes is also important, since certain types of coupling arrangements (e.g. transformer coupling) can drive power tubes to class AB2, while some other types can't.
In the recording industry, and especially with microphone amplifiers, it has been shown that amplifiers are often overloaded by signal transients. Russell O. Hamm, an engineer working for Walter Sear at Sear Sound Studios, wrote in 1973 that there is a major difference between the harmonic distortion components of signals driven to greater than 10% distortion by three amplification methods: tubes, transistors, and operational amplifiers.
Mastering engineer R. Steven Mintz wrote a rebuttal to Hamm's paper, saying that the circuit design was of paramount importance, more than tubes vs. solid state components.
Hamm's paper was also countered by Dwight O. Monteith Jr and Richard R. Flowers in their article "Transistors Sound Better Than Tubes", which presented transistor mic preamplifier design that actually reacted to transient overloading similarly as the limited selection of tube preamplifiers tested by Hamm. Monteith and Flowers said: "In conclusion, the high voltage transistor preamplifier presented here supports the viewpoint of Mintz: 'In the field analysis, the characteristics of a typical system using transistors depends on the design, as is the case in tube circuits. A particular 'sound' may be incurred or avoided at the designer's pleasure no matter what active devices he uses.'"
In other words, soft clipping is not exclusive to vacuum tubes, nor even an inherent property of them. In practice the clipping characteristics are largely dictated by the circuit as a whole, and so they can range from very soft to very hard; this applies to both vacuum tube and solid-state circuitry. For example, solid-state circuits such as operational transconductance amplifiers operated open loop, or MOSFET cascades of CMOS inverters, are frequently used in commercial applications to generate softer clipping than generic triode gain stages provide. In fact, generic triode gain stages can be observed to clip rather "hard" if their output is scrutinized with an oscilloscope.
Bandwidth
Early tube amplifiers often had limited response bandwidth, in part due to the characteristics of the inexpensive passive components then available. In power amplifiers most limitations come from the output transformer; low frequencies are limited by primary inductance and high frequencies by leakage inductance and capacitance. Another limitation is in the combination of high output impedance, decoupling capacitor and grid resistor, which acts as a high-pass filter. If interconnections are made from long cables (for example guitar to amp input), a high source impedance with high cable capacitance will act as a low-pass filter.
Modern premium components make it easy to produce amplifiers that are essentially flat over the audio band, with less than 3 dB attenuation at 6 Hz and 70 kHz, well outside the audible range.
Negative feedback
Typical (non-OTL) tube power amplifiers could not use as much negative feedback (NFB) as transistor amplifiers due to the large phase shifts caused by the output transformers and their lower stage gains. While the absence of NFB greatly increases harmonic distortion, it avoids instability, as well as slew rate and bandwidth limitations imposed by dominant-pole compensation in transistor amplifiers. However, the effects of using low feedback principally apply only to circuits where significant phase shifts are an issue (e.g. power amplifiers). In preamplifier stages, high amounts of negative feedback can easily be employed. Such designs are commonly found from many tube-based applications aiming to higher fidelity.
On the other hand, the dominant pole compensation in transistor amplifiers is precisely controlled: exactly as much of it can be applied as needed to strike a good compromise for the given application.
The effect of dominant pole compensation is that gain is reduced at higher frequencies. There is increasingly less NFB at high frequencies due to the reduced loop gain.
In audio amplifiers, the bandwidth limitations introduced by compensation are still far beyond the audio frequency range, and the slew rate limitations can be configured such that full amplitude 20 kHz signal can be reproduced without the signal encountering slew rate distortion, which is not even necessary for reproducing actual audio material.
Power supplies
Early tube amplifiers had power supplies based on rectifier tubes. These supplies were unregulated, a practice which continues to this day in transistor amplifier designs. The typical anode supply was a rectifier, perhaps half-wave, a choke (inductor) and a filter capacitor. When the tube amplifier was operated at high volume, the high impedance of the rectifier tubes caused the power supply voltage to dip as the amplifier drew more current (assuming class AB), reducing power output and modulating the signal; as the amplifier load or output increases, this voltage drop increases the distortion of the output signal. The dipping effect is known as "sag," and some electric guitarists find it desirable when compared with hard clipping.
With added resistance in series with the high-voltage supply, silicon rectifiers can emulate the voltage sag of a tube rectifier. The resistance can be switched in when required.
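Sag can be modeled as a first-order RC effect. In this toy simulation (Python with NumPy; the supply voltage, source resistance, capacitance and load currents are invented round numbers), the supply node is integrated with an explicit Euler step while the load current jumps during a loud passage:

import numpy as np

fs = 10_000
t = np.arange(0, 2.0, 1 / fs)
V0, Rs, C = 400.0, 150.0, 47e-6      # no-load B+, effective source R, filter cap

# class-AB stage idling at 40 mA, drawing 250 mA during a loud passage
i_load = np.where((t > 0.5) & (t < 1.2), 0.250, 0.040)

V = np.empty_like(t)
V[0] = V0 - i_load[0] * Rs
for k in range(1, len(t)):           # Euler step of the filter-capacitor node
    dV = ((V0 - V[k - 1]) / Rs - i_load[k]) / C
    V[k] = V[k - 1] + dV / fs

print(f"B+ at idle:        {V[int(0.45 * fs)]:.0f} V")
print(f"B+ during passage: {V[int(1.15 * fs)]:.0f} V (sag)")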
Electric guitar amplifiers often use a class-AB1 amplifier. In a class-A stage the average current drawn from the supply is constant with signal level; consequently it does not cause supply-line sag until the clipping point is reached. Other audible effects due to using a tube rectifier with this amplifier class are unlikely.
Unlike their solid-state equivalents, tube rectifiers require time to warm up before they can supply B+/HT voltages. This delay can protect the amplifying tubes fed by the rectifier from cathode damage, which could otherwise result from applying the B+/HT voltage before the tubes' built-in heaters have brought the cathodes to their correct operating temperature.
Class A
The benefit of all class-A amplifiers is the absence of crossover distortion. This crossover distortion was found especially annoying after the first silicon-transistor class-B and class-AB transistor amplifiers arrived on the consumer market. Earlier germanium-based designs with the much lower turn-on voltage of this technology and the non-linear response curves of the devices had not shown large amounts of cross-over distortion. Although crossover distortion is very fatiguing to the ear and perceptible in listening tests, it is also almost invisible (until looked for) in the traditional Total harmonic distortion (THD) measurements of that epoch. It should be pointed out that this reference is somewhat ironic given its publication date of 1952. As such, it most certainly refers to "ear fatigue" distortion commonly found in existing tube-type designs; the world's first prototype transistorized hi-fi amplifier did not appear until 1955.
Push–pull amplifiers
A class-A push–pull amplifier produces low distortion for any given level of applied feedback, and also cancels the flux in the transformer cores, so this topology is often seen by hi-fi audio enthusiasts and do-it-yourself builders as the ultimate engineering approach to the tube hi-fi amplifier for use with normal speakers. Output power as high as 15 watts can be achieved even with classic tubes such as the 2A3, or 18 watts from the type 45. Classic pentodes such as the EL34 and KT88 can output as much as 60 and 100 watts respectively. Special types such as the V1505 can be used in designs rated at up to 1100 watts. See "An Approach to Audio Frequency Amplifier Design", a collection of reference designs originally published by G.E.C.
Single-ended triode (SET) amplifiers
SET amplifiers measure poorly into a resistive load: they have low output power, are inefficient, have poor damping factors, and show high harmonic distortion. But they perform somewhat better in dynamic and impulse response.
The triode, despite being the oldest signal amplification device, also can (depending on the device in question) have a more linear no-feedback transfer characteristic than more advanced devices such as beam tetrodes and pentodes.
All amplifiers, regardless of class, components, or topology, have some measure of distortion. In a SET the mainly harmonic distortion forms a simple, monotonically decaying series of harmonics, dominated by modest levels of second harmonic. The result is like adding the same tone one octave higher in the case of second-order harmonics, and one octave plus a fifth higher for third-order harmonics. The added harmonic tone is lower in amplitude, at about 1–5% or less in a no-feedback amplifier at full power, and it decreases rapidly at lower output levels. Hypothetically, a single-ended power amplifier's second harmonic distortion might cancel similar harmonic distortion in a single-driver loudspeaker, if their harmonic distortions were equal and the amplifier were connected to the speaker so that the distortions neutralized each other.
SETs usually produce only about 2 watts (W) for a 2A3 tube amp, to 8 W for a 300B, up to the practical maximum of 40 W for an 805 tube amp. The resulting sound pressure level depends on the sensitivity of the loudspeaker and the size and acoustics of the room as well as amplifier power output. Their low power also makes them ideal for use as preamps. SET amps consume a minimum of 8 times their stated stereo power. For example, a 10 W stereo SET uses a minimum of 80 W, and typically 100 W.
Single-ended pentode and tetrode amplifiers
The special feature of tetrodes and pentodes is the possibility of obtaining ultra-linear or distributed-load operation with an appropriate output transformer. In practice, in addition to loading the plate terminal, distributed loading (of which the ultra-linear circuit is a specific form) distributes the load to the cathode and screen terminals of the tube as well. An ultra-linear connection and distributed loading are both, in essence, negative feedback methods, which reduce harmonic distortion along with providing the other characteristics associated with negative feedback. Ultra-linear topology has mostly been associated with amplifier circuits based on research by D. Hafler and H. Keroes of Dynaco fame. Distributed loading (in general and in various forms) has been employed by the likes of McIntosh and Audio Research.
Class AB
The majority of modern commercial Hi-fi amplifier designs have until recently used class-AB topology (with more or less pure low-level class-A capability depending on the standing bias current used), in order to deliver greater power and efficiency, typically 12–25 watts and higher. Contemporary designs normally include at least some negative feedback. However, class-D topology (which is vastly more efficient than class B) is more and more frequently applied where traditional design would use class AB because of its advantages in both weight and efficiency.
Class-AB push–pull topology is nearly universally used in tube amps for electric guitar applications that produce power of more than about 10 watts.
Intentional distortion
Tube sound from transistor amplifiers
Some individual characteristics of the tube sound, such as the waveshaping on overdrive, are straightforward to produce in a transistor circuit or digital filter. For more complete simulations, engineers have been successful in developing transistor amplifiers that produce a sound quality very similar to the tube sound. Usually this involves using a circuit topology similar to that used in tube amplifiers.
More recently, a researcher has introduced the asymmetric cycle harmonic injection (ACHI) method to emulate tube sound with transistors.
Using modern passive components, and modern sources, whether digital or analogue, and wide band loudspeakers, it is possible to have tube amplifiers with the characteristic wide bandwidth of modern transistor amplifiers, including using push–pull circuits, class AB, and feedback. Some enthusiasts, such as Nelson Pass, have built amplifiers using transistors and MOSFETs that operate in class A, including single ended, and these often have the "tube sound."
Hybrid amplifiers
Tubes are added to solid-state amplifiers to impart characteristics that many people find audibly pleasant, such as Musical Fidelity's use of Nuvistors (tiny triode tubes) to control large bipolar transistors in their NuVista 300 power amp. In America, Moscode and Studio Electric use this method, but use MOSFET transistors for power, rather than bipolar. Pathos, an Italian company, has developed an entire line of hybrid amplifiers.
To demonstrate one aspect of this effect, one may use a light bulb in the feedback loop of an infinite gain multiple feedback (IGMF) circuit. The slow response of the light bulb's resistance (which varies according to temperature) can thus be used to moderate the sound and attain a tube-like "soft limiting" of the output, though other aspects of the "tube sound" would not be duplicated in this exercise.
Tube sound reproduction without tubes
It is possible to reproduce the warm, rich sound of vacuum tubes using solid-state systems, including systems that incorporate fast processors and synthesizers. One advantage of this approach is the increased reliability of a solid-state system compared with a vacuum tube system. Techniques include:
Tube emulation circuits: special electronic circuits based on transistors and other analog components can mimic the nonlinear characteristics of vacuum tubes.
Digital signal processing (DSP): digital signal processing allows accurate reproduction of harmonic distortion; fast processors and advanced algorithms can simulate the characteristic sound of vacuum tubes in real time (see the sketch after this list).
Synthesizers: digital synthesizers can generate warm tones through internal processing and adjustable parameters that mimic the properties of vacuum tubes.
Mathematical models: models have been developed to simulate the behavior of tubes and their effect on sound, including models of harmonic distortion and of real-time response.
Hybrid analog–digital designs: combining analog and digital circuits can provide the best of both worlds, with the analog circuit supplying the characteristic distortion and responsiveness and the digital circuit allowing advanced signal processing.
Using these methods and technologies, it is possible to build audio systems that provide the warm, rich sound characteristic of tubes while maintaining the accuracy and reliability of solid-state systems.
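As a minimal example of the DSP approach (Python with NumPy; the bias and drive values are arbitrary), a biased tanh curve is one common way to obtain an asymmetric transfer characteristic, and hence the even-order harmonics, in software:

import numpy as np

def tube_ish(x, drive=2.0, bias=0.2):
    # biased tanh: the asymmetry about zero generates even-order harmonics
    y = np.tanh(drive * (x + bias)) - np.tanh(drive * bias)
    return y / (np.tanh(drive * (1 + bias)) - np.tanh(drive * bias))

fs, f0, N = 48_000, 1_000, 48_000          # quick check on a pure tone
t = np.arange(N) / fs
y = tube_ish(0.7 * np.sin(2 * np.pi * f0 * t))
spec = np.abs(np.fft.rfft(y))
def level(k):                               # k-th harmonic re fundamental, in dB
    return 20 * np.log10(spec[round(k * f0 * N / fs)] / spec[round(f0 * N / fs)])
print([round(level(k), 1) for k in (2, 3, 4)])   # strong 2nd harmonic, as intended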
See also
Audio system measurements
Boutique amplifier
British Valve Association
Virtual Valve Amplifier
Notes
References
Barbour, Eric. The Cool Sound of Tubes in IEEE Spectrum Online.
Hamm, Russell O. (September 14, 1972). "Tubes vs. Transistors: Is There An Audible Difference?". Presented at the 43rd convention of the Audio Engineering Society, New York.
Reisch, George. Scientists vs Audiophiles 1999 in Stereophile, March, 1999.
Tube Data Archive - Massive collection (many gigabytes) of scanned original tube data sheets and technical information.
Valve amplifiers
Vacuum tubes
Audio amplifiers
Audio engineering | Tube sound | [
"Physics",
"Engineering"
] | 5,679 | [
"Vacuum tubes",
"Vacuum",
"Electrical engineering",
"Audio engineering",
"Matter"
] |
8,270,295 | https://en.wikipedia.org/wiki/Tetrathionate | The tetrathionate anion, S4O62−, is a sulfur oxyanion derived from the compound tetrathionic acid, H2S4O6. Two of the sulfur atoms present in the ion are in oxidation state 0 and two are in oxidation state +5. Alternatively, the compound can be viewed as the adduct resulting from the binding of S22− to SO3. Tetrathionate is one of the polythionates, a family of anions with the formula [Sn(SO3)2]2−. Its IUPAC name is 2-(dithioperoxy)disulfate, and the name of its corresponding acid is 2-(dithioperoxy)disulfuric acid. The Chemical Abstracts Service identifies tetrathionate by the CAS Number 15536-54-6.
Formation
Tetrathionate is a product of the oxidation of thiosulfate, S2O32−, by iodine, I2:
2 S2O32− + I2 → S4O62− + 2 I−
The use of bromine instead of iodine is dubious as excess bromine will oxidize the thiosulfate to sulfate.
Structure
Tetrathionate's structure can be visualized by following three edges of a rectangular cuboid. This is the configuration of S4O62− in BaS4O6·2H2O and Na2S4O6·2H2O. Dihedral S–S–S–S angles approaching 90° are common in polysulfides.
Compounds
Compounds containing the tetrathionate anion include sodium tetrathionate, Na2S4O6, potassium tetrathionate, K2S4O6, and barium tetrathionate dihydrate, BaS4O6·2H2O.
Properties
As other species of sulfur at intermediate oxidation state, such as thiosulfate, tetrathionate can be responsible for the pitting corrosion of carbon steel and stainless steel.
Tetrathionate has also been found to serve as a terminal electron acceptor for Salmonella enterica serotype Typhimurium: thiosulfate present in the small intestine of mammals is oxidized to tetrathionate by reactive oxygen species released by the immune system (mainly superoxide produced by NADPH oxidase). This aids the growth of the bacterium, which is thus helped by the inflammatory response.
See also
Corrosion
Dithionite
Polysulfides
Thiosulfate
References
Corrosion
Sulfur oxyanions | Tetrathionate | [
"Chemistry",
"Materials_science"
] | 519 | [
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |
6,278,350 | https://en.wikipedia.org/wiki/Grimm%27s%20conjecture | In number theory, Grimm's conjecture (named after Carl Albert Grimm, 1 April 1926 – 2 January 2018) states that to each element of a set of consecutive composite numbers one can assign a distinct prime that divides it. It was first published in American Mathematical Monthly, 76(1969) 1126-1128.
Formal statement
If n + 1, n + 2, ..., n + k are all composite numbers, then there are k distinct primes pi such that pi divides n + i for 1 ≤ i ≤ k.
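Since the conjecture asks for a system of distinct prime divisors, it can be checked mechanically as a bipartite matching problem between the composites and their prime factors. A small sketch (Python, assuming SymPy is available for isprime and primefactors):

from sympy import isprime, primefactors   # assumption: SymPy is available

def grimm_ok(n, k):
    # try to match each of n+1, ..., n+k to a distinct prime divisor
    divisors = [primefactors(n + i) for i in range(1, k + 1)]
    match = {}                            # prime -> index of the number using it

    def augment(i, seen):                 # augmenting-path bipartite matching
        for p in divisors[i]:
            if p not in seen:
                seen.add(p)
                if p not in match or augment(match[p], seen):
                    match[p] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(k))

m = 2
while m < 10_000:                         # every maximal run of composites
    if not isprime(m):
        start = m
        while not isprime(m):
            m += 1
        assert grimm_ok(start - 1, m - start)
    m += 1
print("no counterexample below 10000")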
Weaker version
A weaker, though still unproven, version of this conjecture states: if there is no prime in the interval [n + 1, n + k], then the product (n + 1)(n + 2)···(n + k) has at least k distinct prime divisors.
See also
Prime gap
References
Guy, R. K. "Grimm's Conjecture." §B32 in Unsolved Problems in Number Theory, 3rd ed., Springer Science+Business Media, pp. 133–134, 2004.
External links
Prime Puzzles #430
Conjectures about prime numbers
Unsolved problems in number theory | Grimm's conjecture | [
"Mathematics"
] | 214 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Number theory",
"Unsolved problems in number theory"
] |
6,282,950 | https://en.wikipedia.org/wiki/Vacuum%20variable%20capacitor | A vacuum variable capacitor is a variable capacitor which uses a high vacuum as the dielectric instead of air or other insulating material. This allows for a higher voltage rating than an air dielectric using a smaller total volume. However, many dielectrics have higher breakdown field strengths than vacuum: 60-170 MV/m for teflon, 470-670 MV/m for fused silica and 2000 MV/m for diamond, compared with 20-40 MV/m for vacuum. There are several different designs in vacuum variables. The most common form is inter-meshed concentric cylinders, which are contained within a glass or ceramic vacuum envelope, similar to an electron tube. A metal bellows is used to maintain a vacuum seal while allowing positional control for the moving parts of the capacitor.
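The attainable capacitance follows from the coaxial-cylinder formula C = 2πε0L/ln(b/a) applied to each gap between intermeshed cylinders. A small sketch (Python; the radii, gap count and overlap range are invented and do not describe any actual product):

import math

EPS0 = 8.854e-12  # F/m

def coaxial_capacitance(r_inner, r_outer, overlap):
    # capacitance of one coaxial cylinder pair with the given axial overlap (m)
    return 2 * math.pi * EPS0 * overlap / math.log(r_outer / r_inner)

# hypothetical geometry: 10 nested cylinder gaps, 12 mm innermost radius,
# 1 mm radial spacing, overlap tuned from 2 mm to 40 mm by the bellows drive
def total_capacitance(overlap, gaps=10, r=12e-3, gap=1e-3):
    return sum(coaxial_capacitance(r + i * gap, r + (i + 1) * gap, overlap)
               for i in range(gaps))

for x in (2e-3, 40e-3):
    print(f"overlap {x * 1e3:4.0f} mm -> {total_capacitance(x) * 1e12:6.1f} pF")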
Invention
Jo Emmett Jennings created the first practical vacuum variable capacitor after he founded his Jennings Radio Manufacturing Company in 1940. Commercial products have been available since 1942.
Applications
Vacuum variable capacitors are commonly used in high-voltage applications: 5000 volts (5 kV) and above. They are used in equipment such as high-powered broadcast transmitters, amateur radio RF amplifiers and large antenna tuners. Industrially they are used in plasma generating equipment, for dielectric heating, and in semiconductor manufacturing. The main applications today are RF plasma systems operating at 2 to 160 MHz, where the vacuum capacitor serves as the variable impedance element in an automatic matching network used in the fabrication of chips and flat panel displays.
Other variations of vacuum capacitors include fixed-value capacitors, which are designed very much like the variable versions with the exception that the adjustment mechanism is omitted.
Comparison
When compared to other variable capacitors, vacuum variables tend to be more precise and more stable. This is due to the vacuum itself. Because of the sealed chamber, the dielectric constant remains the same over a wider range of operating conditions. With air variable capacitors, the air moving around the plates may change the value slightly; often it is not much but in some applications it is enough to cause undesirable effects.
Vacuum variable capacitors are generally more expensive than air variable capacitors. This is primarily due to their design and the materials used. Although most use copper and glass, some may use other materials such as ceramics and metals such as gold and silver. Vacuum variables also vary in adjustment mechanisms.
References
External links
Webarchive: Meidensha products
MEIDEN AMERICA: Product overview vacuum capacitors
Introduction to capacitors
Capacitors
Inventions by Nikola Tesla
"Physics"
] | 551 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
6,290,771 | https://en.wikipedia.org/wiki/Whitehead%27s%20point-free%20geometry | In mathematics, point-free geometry is a geometry whose primitive ontological notion is region rather than point. Two axiomatic systems are set out below, one grounded in mereology, the other in mereotopology and known as connection theory.
Point-free geometry was first formulated by Alfred North Whitehead, not as a theory of geometry or of spacetime, but of "events" and of an "extension relation" between events. Whitehead's purposes were as much philosophical as scientific and mathematical.
Formalizations
Whitehead did not set out his theories in a manner that would satisfy present-day canons of formality. The two formal first-order theories described in this entry were devised by others in order to clarify and refine Whitehead's theories. The domain of discourse for both theories consists of "regions." All unquantified variables in this entry should be taken as tacitly universally quantified; hence all axioms should be taken as universal closures. No axiom requires more than three quantified variables; hence a translation of first-order theories into relation algebra is possible. Each set of axioms has but four existential quantifiers.
Inclusion-based point-free geometry (mereology)
The fundamental primitive binary relation is inclusion, denoted by the infix operator "≤", which corresponds to the binary Parthood relation that is a standard feature in mereological theories. The intuitive meaning of x ≤ y is "x is part of y." Assuming that equality, denoted by the infix operator "=", is part of the background logic, the binary relation Proper Part, denoted by the infix operator "<", is defined as: x < y ↔ (x ≤ y ∧ x ≠ y).
The axioms are:
Inclusion partially orders the domain.
G1. x ≤ x. (reflexive)
G2. (x ≤ z ∧ z ≤ y) → x ≤ y. (transitive) WP4.
G3. (x ≤ y ∧ y ≤ x) → x = y. (antisymmetric)
Given any two regions, there exists a region that includes both of them. WP6.
G4. ∃z[x ≤ z ∧ y ≤ z].
Proper Part densely orders the domain. WP5.
G5. x < y → ∃z[x < z < y].
Both atomic regions and a universal region do not exist. Hence the domain has neither an upper nor a lower bound. WP2.
G6. ∃y∃z[y < x ∧ x < z].
Proper Parts Principle. If all the proper parts of x are proper parts of y, then x is included in y. WP3.
G7. ∀z[z < x → z < y] → x ≤ y.
A model of G1–G7 is an inclusion space.
Definition. Given some inclusion space S, an abstractive class is a class G of regions such that G is totally ordered by inclusion. Moreover, there does not exist a region included in all of the regions included in G.
Intuitively, an abstractive class defines a geometrical entity whose dimensionality is less than that of the inclusion space. For example, if the inclusion space is the Euclidean plane, then the corresponding abstractive classes are points and lines.
Inclusion-based point-free geometry (henceforth "point-free geometry") is essentially an axiomatization of Simons's system W. In turn, W formalizes a theory of Whitehead whose axioms are not made explicit. Point-free geometry is W with this defect repaired. Simons did not repair this defect, instead proposing in a footnote that the reader do so as an exercise. The primitive relation of W is Proper Part, a strict partial order. The theory of Whitehead (1919) has a single primitive binary relation K defined as xKy ↔ y < x. Hence K is the converse of Proper Part. Simons's WP1 asserts that Proper Part is irreflexive and so corresponds to G1. G3 establishes that inclusion, unlike Proper Part, is antisymmetric.
Point-free geometry is closely related to a dense linear order D, whose axioms are G1-3, G5, and the totality axiom x ≤ y ∨ y ≤ x. Hence inclusion-based point-free geometry would be a proper extension of D (namely D ∪ {G4, G6, G7}), were it not that the D relation "≤" is a total order.
Connection theory (mereotopology)
A different approach was proposed in Whitehead (1929), one inspired by De Laguna (1922). Whitehead took as primitive the topological notion of "contact" between two regions, resulting in a primitive "connection relation" between events. Connection theory C is a first-order theory that distills the first 12 of Whitehead's 31 assumptions into 6 axioms, C1-C6. C is a proper fragment of the theories proposed by Clarke, who noted their mereological character. Theories that, like C, feature both inclusion and topological primitives, are called mereotopologies.
C has one primitive relation, binary "connection," denoted by the prefixed predicate letter C. That x is included in y can now be defined as x ≤ y ↔ ∀z[Czx→Czy]. Unlike the case with inclusion spaces, connection theory enables defining "non-tangential" inclusion, a total order that enables the construction of abstractive classes. Gerla and Miranda (2008) argue that only thus can mereotopology unambiguously define a point.
C is reflexive. C.1.
C1. Cxx.
C is symmetric. C.2.
C2. Cxy → Cyx.
C is extensional. C.11.
C3. ∀z[Czx ↔ Czy] → x = y.
All regions have proper parts, so that C is an atomless theory. P.9.
C4. ∃y[y < x].
Given any two regions, there is a region connected to both of them.
C5. ∃z[Czx ∧ Czy].
All regions have at least two unconnected parts. C.14.
C6. ∃y∃z[(y ≤ x) ∧ (z ≤ x) ∧ ¬Cyz].
A model of C is a connection space.
Following the verbal description of each axiom is the identifier of the corresponding axiom in Casati and Varzi (1999). Their system SMT (strong mereotopology) consists of C1-C3, and is essentially due to Clarke (1981). Any mereotopology can be made atomless by invoking C4, without risking paradox or triviality. Hence C extends the atomless variant of SMT by means of the axioms C5 and C6, suggested by chapter 2 of part 4 of Process and Reality.
Biacino and Gerla (1991) showed that every model of Clarke's theory is a Boolean algebra, and models of such algebras cannot distinguish connection from overlap. It is doubtful whether either fact is faithful to Whitehead's intent.
See also
Mereology
Mereotopology
Pointless topology
Notes
References
Bibliography
Biacino L., and Gerla G., 1991, "Connection Structures," Notre Dame Journal of Formal Logic 32: 242-47.
Casati, R., and Varzi, A. C., 1999. Parts and places: the structures of spatial representation. MIT Press.
Clarke, Bowman, 1981, "A calculus of individuals based on 'connection'," Notre Dame Journal of Formal Logic 22: 204-18.
------, 1985, "Individuals and Points," Notre Dame Journal of Formal Logic 26: 61-75.
De Laguna, T., 1922, "Point, line and surface as sets of solids," The Journal of Philosophy 19: 449-61.
Gerla, G., 1995, "Pointless Geometries" in Buekenhout, F., Kantor, W. eds., Handbook of incidence geometry: buildings and foundations. North-Holland: 1015-31.
--------, and Miranda A., 2008, "Inclusion and Connection in Whitehead's Point-free Geometry," in Michel Weber and Will Desmond, (eds.), Handbook of Whiteheadian Process Thought, Frankfurt / Lancaster, ontos verlag, Process Thought X1 & X2.
Gruszczynski R., and Pietruszczak A., 2008, "Full development of Tarski's geometry of solids," Bulletin of Symbolic Logic 14:481-540. The paper contains presentation of point-free system of geometry originating from Whitehead's ideas and based on Lesniewski's mereology. It also briefly discusses the relation between point-free and point-based systems of geometry. Basic properties of mereological structures are given as well.
Grzegorczyk, A., 1960, "Axiomatizability of geometry without points," Synthese 12: 228-235.
Kneebone, G., 1963. Mathematical Logic and the Foundation of Mathematics. Dover reprint, 2001.
Lucas, J. R., 2000. Conceptual Roots of Mathematics. Routledge. Chpt. 10, on "prototopology," discusses Whitehead's systems and is strongly influenced by the unpublished writings of David Bostock.
Roeper, P., 1997, "Region-Based Topology," Journal of Philosophical Logic 26: 251-309.
Simons, P., 1987. Parts: A Study in Ontology. Oxford Univ. Press.
Whitehead, A.N., 1916, "La Theorie Relationiste de l'Espace," Revue de Metaphysique et de Morale 23: 423-454. Translated as Hurley, P.J., 1979, "The relational theory of space," Philosophy Research Archives 5: 712-741.
--------, 1919. An Enquiry Concerning the Principles of Natural Knowledge. Cambridge Univ. Press. 2nd ed., 1925.
--------, 1920. The Concept of Nature. Cambridge Univ. Press. 2004 paperback, Prometheus Books. Being the 1919 Tarner Lectures delivered at Trinity College.
--------, 1979 (1929). Process and Reality. Free Press.
Alfred North Whitehead
History of mathematics
Mathematical axioms
Mereology
Ontology
Topology | Whitehead's point-free geometry | [
"Physics",
"Mathematics"
] | 2,031 | [
"Mathematical logic",
"Mathematical axioms",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
477,808 | https://en.wikipedia.org/wiki/Portico | A portico is a porch leading to the entrance of a building, or extended as a colonnade, with a roof structure over a walkway, supported by columns or enclosed by walls. This idea was widely used in ancient Greece and has influenced many cultures, including most Western cultures.
Porticos are sometimes topped with pediments.
Palladio was a pioneer of using temple-fronts for secular buildings. In the UK, the temple-front applied to The Vyne, Hampshire, was the first portico applied to an English country house.
A pronaos is the inner area of the portico of a Greek or Roman temple, situated between the portico's colonnade or walls and the entrance to the cella, or shrine. Roman temples commonly had an open pronaos, usually with only columns and no walls, and the pronaos could be as long as the cella. The word pronaos (πρόναος) is Greek for "before a temple". In Latin, a pronaos is also referred to as an anticum or prodomus. The pronaos of a Greek or Roman temple is typically topped with a pediment.
Types
The different variants of porticos are named by the number of columns they have. The "style" suffix comes from the Greek στῦλος (stylos), "column".
Tetrastyle
The tetrastyle has four columns; it was commonly employed by the Greeks and the Etruscans for small structures such as public buildings and amphiprostyles.
The Romans favoured the four columned portico for their pseudoperipteral temples like the Temple of Portunus, and for amphiprostyle temples such as the Temple of Venus and Roma, and for the prostyle entrance porticos of large public buildings like the Basilica of Maxentius and Constantine. Roman provincial capitals also manifested tetrastyle construction, such as the Capitoline Temple in Volubilis.
The North Portico of the White House is perhaps the most notable four-columned portico in the United States.
Hexastyle
Hexastyle buildings had six columns and were the standard façade in canonical Greek Doric architecture from the archaic period (600–550 BCE) up to the Age of Pericles (450–430 BCE).
Greek hexastyle
Some well-known examples of classical Doric hexastyle Greek temples:
The group at Paestum comprising the Temple of Hera (c. 550 BCE), the Temple of Apollo (c. 450 BCE), the first Temple of Athena ("Basilica") (c. 500 BCE) and the second Temple of Hera (460–440 BCE)
The Temple of Aphaea at Aegina c. 495 BCE
Temple E at Selinus (465–450 BCE) dedicated to Hera
The Temple of Zeus at Olympia, now a ruin
Temple F or the so-called "Temple of Concordia" at Agrigentum (c. 430 BCE), one of the best-preserved classical Greek temples, retaining almost all of its peristyle and entablature
The "unfinished temple" at Segesta (c. 430 BCE)
The Temple of Hephaestus below the Acropolis at Athens, long known as the "Theseum" (449–444 BCE), also one of the most intact Greek temples surviving from antiquity
The Temple of Poseidon on Cape Sunium (c. 449 BCE)
Hexastyle was also applied to Ionic temples, such as the prostyle porch of the sanctuary of Athena on the Erechtheum, at the Acropolis of Athens.
Roman hexastyle
With the colonization by the Greeks of Southern Italy, hexastyle was adopted by the Etruscans and subsequently acquired by the ancient Romans. Roman taste favoured narrow pseudoperipteral and amphiprostyle buildings with tall columns, raised on podiums for the added pomp and grandeur conferred by considerable height. The Maison Carrée at Nîmes, France, is the best-preserved Roman hexastyle temple surviving from antiquity.
Octastyle
Octastyle buildings had eight columns; they were considerably rarer than the hexastyle ones in the classical Greek architectural canon. The best-known octastyle buildings surviving from antiquity are the Parthenon in Athens, built during the Age of Pericles (450–430 BCE), and the Pantheon in Rome (125 CE). The destroyed Temple of Divus Augustus in Rome, the centre of the Augustan cult, is shown on Roman coins of the 2nd century CE as having been built in octastyle.
Decastyle
The decastyle has ten columns; as in the temple of Apollo Didymaeus at Miletus, and the portico of University College London.
The only known Roman decastyle portico is on the Temple of Venus and Roma, built by Hadrian in about 130 CE.
Gallery
See also
Citations
General and cited references
External links
Ancient Roman architectural elements
Architectural elements
Columns and entablature | Portico | [
"Technology",
"Engineering"
] | 1,043 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
477,989 | https://en.wikipedia.org/wiki/Sequence%20assembly | In bioinformatics, sequence assembly refers to aligning and merging fragments from a longer DNA sequence in order to reconstruct the original sequence. This is needed as DNA sequencing technology might not be able to 'read' whole genomes in one go, but rather reads small pieces of between 20 and 30,000 bases, depending on the technology used. Typically, the short fragments (reads) result from shotgun sequencing genomic DNA, or gene transcripts (ESTs).
The problem of sequence assembly can be compared to taking many copies of a book, passing each of them through a shredder with a different cutter, and piecing the text of the book back together just by looking at the shredded pieces. Besides the obvious difficulty of this task, there are some extra practical issues: the original may have many repeated paragraphs, and some shreds may be modified during shredding to have typos. Excerpts from another book may also be added in, and some shreds may be completely unrecognizable.
Types
There are three approaches to assembling sequencing data:
De-novo: assembling sequencing reads to create full-length (sometimes novel) sequences, without using a template (see de novo sequence assemblers, de novo transcriptome assembly)
Mapping/Aligning: assembling reads by aligning reads against a template (AKA reference). The assembled consensus may not be identical to the template.
Reference-guided: grouping of reads by similarity to the most similar region within the reference (step-wise mapping). Reads within each group are then shortened to mimic short-read quality. A typical method for doing so is the k-mer approach. Reference-guided assembly is most useful with long reads.
Reference-guided assembly combines the other two types. It is applied to long reads to mimic the advantages of short reads (i.e., call quality). The logic behind it is to group the reads by smaller windows within the reference. Reads in each group are then reduced in size using the k-mer approach to select the highest-quality and most probable contiguous sequence (contig). The contigs are then joined together to create a scaffold, and the final consensus is produced by closing any gaps in the scaffold.
Assemblies
Genome
The first sequence assemblers began to appear in the late 1980s and early 1990s as variants of simpler sequence alignment programs to piece together vast quantities of fragments generated by automated sequencing instruments called DNA sequencers. As the sequenced organisms grew in size and complexity (from small viruses over plasmids to bacteria and finally eukaryotes), the assembly programs used in these genome projects needed increasingly sophisticated strategies to handle:
terabytes of sequencing data which need processing on computing clusters;
identical and nearly identical sequences (known as repeats) which can, in the worst case, increase the time and space complexity of algorithms quadratically;
DNA read errors in the fragments from the sequencing instruments, which can confound assembly.
Faced with the challenge of assembling the first larger eukaryotic genomes—the fruit fly Drosophila melanogaster in 2000 and the human genome just a year later—scientists developed assemblers like Celera Assembler and Arachne able to handle genomes of 130 million (e.g., the fruit fly D. melanogaster) to 3 billion (e.g., the human genome) base pairs. Subsequent to these efforts, several other groups, mostly at the major genome sequencing centers, built large-scale assemblers, and an open source effort known as AMOS was launched to bring together all the innovations in genome assembly technology under the open source framework.
EST
Expressed sequence tag or EST assembly was an early strategy, dating from the mid-1990s to the mid-2000s, to assemble individual genes rather than whole genomes. The problem differs from genome assembly in several ways. The input sequences for EST assembly are fragments of the transcribed mRNA of a cell and represent only a subset of the whole genome. A number of algorithmic problems differ between genome and EST assembly. For instance, genomes often have large amounts of repetitive sequences, concentrated in the intergenic regions. Transcribed genes contain many fewer repeats, making assembly somewhat easier. On the other hand, some genes are expressed (transcribed) in very high numbers (e.g., housekeeping genes), which means that unlike whole-genome shotgun sequencing, the reads are not uniformly sampled across the genome.
EST assembly is made much more complicated by features like (cis-) alternative splicing, trans-splicing, single-nucleotide polymorphism, and post-transcriptional modification. Beginning in 2008 when RNA-Seq was invented, EST sequencing was replaced by this far more efficient technology, described under de novo transcriptome assembly.
De-novo vs mapping assembly
In terms of complexity and time requirements, de-novo assemblies are orders of magnitude slower and more memory intensive than mapping assemblies. This is mostly due to the fact that the assembly algorithm needs to compare every read with every other read (an operation with a naive time complexity of O(n²)). Current de-novo genome assemblers may use different types of graph-based algorithms, such as the:
Overlap/Layout/Consensus (OLC) approach, which was typical of the Sanger-data assemblers and relies on an overlap graph;
de Bruijn Graph (DBG) approach, which is most widely applied to the short reads from the Solexa and SOLiD platforms. It relies on k-mer graphs, which perform well with vast quantities of short reads (a toy k-mer graph construction is sketched after this list);
Greedy graph-based approach, which may also use one of the OLC or DBG approaches. With greedy graph-based algorithms, the contigs, series of reads aligned together, grow by greedy extension, always taking on the read that is found by following the highest-scoring overlap.
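As a toy illustration of the DBG approach above, the following Python sketch builds a k-mer graph from error-free reads and spells out a contig by walking it. Real assemblers find Eulerian paths and handle errors, branches, and repeats; the reads and function names here are invented:

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])  # edge: prefix -> suffix
    return graph

def walk(graph, start):
    """Greedily follow unused edges to spell a contig (simplified; no
    branch resolution or error correction)."""
    contig, node = start, start
    while graph[node]:
        node = graph[node].pop()
        contig += node[-1]
    return contig

reads = ["ACGTC", "CGTCA", "GTCAG"]   # toy error-free reads
g = de_bruijn_graph(reads, 4)
print(walk(g, "ACG"))                 # -> "ACGTCAG"
```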
Referring to the comparison drawn to shredded books in the introduction: while for mapping assemblies one would have a very similar book as a template (perhaps with the names of the main characters and a few locations changed), de-novo assemblies present a more daunting challenge in that one would not know beforehand whether this would become a science book, a novel, a catalogue, or even several books. Also, every shred would be compared with every other shred.
Handling repeats in de-novo assembly requires the construction of a graph representing neighboring repeats. Such information can be derived from reading a long fragment covering the repeats in full or only its two ends. On the other hand, in a mapping assembly, parts with multiple or no matches are usually left for another assembling technique to look into.
Technological advances
The complexity of sequence assembly is driven by two major factors: the number of fragments and their lengths. While more and longer fragments allow better identification of sequence overlaps, they also pose problems as the underlying algorithms show quadratic or even exponential complexity behaviour to both number of fragments and their length. And while shorter sequences are faster to align, they also complicate the layout phase of an assembly as shorter reads are more difficult to use with repeats or near identical repeats.
In the earliest days of DNA sequencing, scientists could only gain a few sequences of short length (some dozen bases) after weeks of work in laboratories. Hence, these sequences could be aligned in a few minutes by hand.
In 1975, the dideoxy termination method (AKA Sanger sequencing) was invented and until shortly after 2000, the technology was improved up to a point where fully automated machines could churn out sequences in a highly parallelised mode 24 hours a day. Large genome centers around the world housed complete farms of these sequencing machines, which in turn led to the necessity of assemblers to be optimised for sequences from whole-genome shotgun sequencing projects where the reads
are about 800–900 bases long
contain sequencing artifacts like sequencing and cloning vectors
have error rates between 0.5 and 10%
With the Sanger technology, bacterial projects with 20,000 to 200,000 reads could easily be assembled on one computer. Larger projects, like the human genome with approximately 35 million reads, needed large computing farms and distributed computing.
By 2004 / 2005, pyrosequencing had been brought to commercial viability by 454 Life Sciences. This new sequencing method generated reads much shorter than those of Sanger sequencing: initially about 100 bases, now 400-500 bases. Its much higher throughput and lower cost (compared to Sanger sequencing) pushed the adoption of this technology by genome centers, which in turn pushed development of sequence assemblers that could efficiently handle the read sets. The sheer amount of data coupled with technology-specific error patterns in the reads delayed development of assemblers; at the beginning in 2004 only the Newbler assembler from 454 was available. Released in mid-2007, the hybrid version of the MIRA assembler by Chevreux et al. was the first freely available assembler that could assemble 454 reads as well as mixtures of 454 reads and Sanger reads. Assembling sequences from different sequencing technologies was subsequently coined hybrid assembly.
From 2006, the Illumina (previously Solexa) technology has been available and can generate about 100 million reads per run on a single sequencing machine. Compare this to the 35 million reads of the human genome project which needed several years to be produced on hundreds of sequencing machines. Illumina was initially limited to a length of only 36 bases, making it less suitable for de novo assembly (such as de novo transcriptome assembly), but newer iterations of the technology achieve read lengths above 100 bases from both ends of a 300–400 bp clone. Announced at the end of 2007, the SHARCGS assembler by Dohm et al. was the first published assembler that was used for an assembly with Solexa reads. It was quickly followed by a number of others.
Later, new technologies like SOLiD from Applied Biosystems, Ion Torrent and SMRT were released and new technologies (e.g. Nanopore sequencing) continue to emerge. Despite the higher error rates of these technologies they are important for assembly because their longer read length helps to address the repeat problem. It is impossible to assemble through a perfect repeat that is longer than the maximum read length; however, as reads become longer the chance of a perfect repeat that large becomes small. This gives longer sequencing reads an advantage in assembling repeats even if they have low accuracy (~85%).
Quality control
Most sequence assemblers have some algorithms built in for quality control, such as Phred. However, such measures do not assess assembly completeness in terms of gene content. Some tools evaluate the quality of an assembly after the fact.
For instance, BUSCO (Benchmarking Universal Single-Copy Orthologs) is a measure of gene completeness in a genome, gene set, or transcriptome, using the fact that many genes are present only as single-copy genes in most genomes. The initial BUSCO sets represented 3023 genes for vertebrates, 2675 for arthropods, 843 for metazoans, 1438 for fungi and 429 for eukaryotes. This table shows an example for human and fruit fly genomes:
Assembly algorithms
Different organisms have distinct regions of higher complexity within their genome, hence different computational approaches are needed. Some of the commonly used algorithms are:
Graph Assembly is based on graph theory in computer science. The de Bruijn graph is an example of this approach and utilizes k-mers to assemble a contig from reads.
Greedy Graph Assembly: this approach scores each read added to the assembly and selects the highest possible score from the overlapping region.
Given a set of sequence fragments, the objective is to find a longer sequence that contains all the fragments (see figure under Types of Sequence Assembly; a code sketch of the greedy procedure follows the steps below):
Calculate pairwise alignments of all fragments.
Choose two fragments with the largest overlap.
Merge chosen fragments.
Repeat steps 2 and 3 until only one fragment is left.
The result might not be an optimal solution to the problem.
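A minimal Python sketch of the four steps above, using exact suffix-prefix overlaps in place of full pairwise alignments (toy reads; all names are ours):

```python
def overlap(a, b):
    """Length of the longest suffix of a that equals a prefix of b."""
    for size in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        score, i, j = max(
            (overlap(a, b), i, j)
            for i, a in enumerate(frags)
            for j, b in enumerate(frags) if i != j
        )
        merged = frags[i] + frags[j][score:]
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)]
        frags.append(merged)
    return frags[0]

print(greedy_assemble(["AGTATT", "TTGCAC", "CACGGA"]))  # AGTATTGCACGGA
```

As the article notes, this greedy choice is locally optimal only; repeats or ambiguous overlaps can lead it to a suboptimal final sequence.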
Bioinformatics pipeline
In general, there are three steps in assembling sequencing reads into a scaffold:
Pre-assembly: This step is essential to ensure the integrity of downstream analysis such as variant calling or final scaffold sequence. This step consists of two chronological workflows:
Quality check: Depending on the type of sequencing technology, different errors might arise that would lead to a false base call. For example, a homopolymer run of 12 adenines ("NAAAAAAAAAAAAN") might be wrongly called as a run of 11 adenines ("NAAAAAAAAAAAN"). Sequencing a highly repetitive segment of the target DNA/RNA might result in a call that is one base shorter or one base longer. Read quality is typically measured by Phred, an encoded score of the quality of each nucleotide within a read's sequence (the Phred conversion is sketched after this list). Some sequencing technologies, such as PacBio, do not have a scoring method for their sequenced reads. A common tool used in this step is FastQC.
Filtering of reads: Reads that failed to pass the quality check should be removed from the FASTQ file to get the best assembly contigs.
Assembly: During this step, read alignment is used with different criteria to map each read to its possible location. The predicted position of a read is based on either how much of its sequence aligns with other reads or with a reference. Different alignment algorithms are used for reads from different sequencing technologies. Some of the commonly used approaches in assembly are the de Bruijn graph and overlap approaches. Read length, coverage, quality, and the sequencing technique used play a major role in choosing the best alignment algorithm in the case of next-generation sequencing. On the other hand, algorithms for aligning third-generation sequencing reads require advanced approaches to account for the high error rate associated with them.
Post-assembly: This step is focusing on extracting valuable information from the assembled sequence. Comparative genomics and population analysis are examples of post-assembly analysis.
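The Phred scale used in the quality-check step above is a log transform of the per-base error probability, Q = -10 log10(P). A short Python illustration (the Q20/Q30 interpretations in the comments are common conventions, not fixed rules):

```python
import math

def phred_score(error_probability):
    """Phred: Q = -10 * log10(P); Q20 means 1% error, Q30 means 0.1%."""
    return -10 * math.log10(error_probability)

def error_probability(q):
    """Inverse transform: P = 10^(-Q/10)."""
    return 10 ** (-q / 10)

print(round(phred_score(0.001), 1))  # 30.0
print(error_probability(20))         # 0.01
```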
Programs
For a lists of de-novo assemblers, see De novo sequence assemblers. For a list of mapping aligners, see List of sequence alignment software § Short-read sequence alignment.
Some of the common tools used in different assembly steps are listed in the following table:
See also
De novo sequence assemblers
Sequence alignment
De novo transcriptome assembly
Set cover problem
List of sequenced animal genomes
Plant genome assembly
References
Bioinformatics
DNA sequencing methods | Sequence assembly | [
"Engineering",
"Biology"
] | 2,966 | [
"Genetics techniques",
"Biological engineering",
"Bioinformatics",
"DNA sequencing methods",
"DNA sequencing"
] |
478,004 | https://en.wikipedia.org/wiki/Furosemide | Furosemide, sold under the brand name Lasix among others, is a loop diuretic medication used to treat edema due to heart failure, liver scarring, or kidney disease. Furosemide may also be used for the treatment of high blood pressure. It can be taken intravenously or orally. When given intravenously, furosemide typically takes effect within five minutes; when taken orally, it typically takes effect within an hour.
Common side effects include orthostatic hypotension (decrease in blood pressure while standing, and associated lightheadedness), tinnitus (ringing in the ears), and photosensitivity (sensitivity to light). Potentially serious side effects include electrolyte abnormalities, low blood pressure, and hearing loss. It is recommended that serum electrolytes (especially potassium), serum CO2, creatinine, and BUN levels, as well as liver and kidney function, be monitored in patients taking furosemide. It is also recommended to be alert for the occurrence of any potential blood dyscrasias.
Furosemide works by decreasing the reabsorption of sodium by the kidneys. Common side effects of furosemide injection include hypokalemia (low potassium level), hypotension (low blood pressure), and dizziness.
Furosemide was patented in 1959 and approved for medical use in 1964. It is on the World Health Organization's List of Essential Medicines. In the United States, it is available as a generic medication. In 2022, it was the 24th most commonly prescribed medication in the United States, with more than 23million prescriptions. In 2020/21 it was the twentieth most prescribed medication in England. It is on the World Anti-Doping Agency's banned drug list due to concerns that it may mask other drugs. It has also been used in race horses for the treatment and prevention of exercise-induced pulmonary hemorrhage.
Medical uses
Furosemide is primarily used for the treatment of edema, but also in some cases of hypertension (where there is also kidney or heart impairment). It is often viewed as a first-line agent in most people with edema caused by congestive heart failure because of its anti-vasoconstrictor and diuretic effects. Compared with furosemide, however, torasemide (aka "torsemide") has been demonstrated to show improvements to heart failure symptoms, possibly lowering the rates of rehospitalization associated with heart failure, with no difference in risk of death. Torsemide may also be safer than furosemide. Providing self-administered subcutaneous furosemide has been found to reduce hospital admissions in people with heart failure, resulting in significant savings in healthcare costs.
Furosemide is also used for liver cirrhosis, kidney impairment, nephrotic syndrome, in adjunct therapy for swelling of the brain or lungs where rapid diuresis is required (IV injection), and in the management of severe hypercalcemia in combination with adequate rehydration.
Kidney disease
In chronic kidney diseases with hypoalbuminemia, furosemide is used along with albumin to increase diuresis. It is also used along with albumin in nephrotic syndrome to reduce edema.
Other information
Furosemide is mainly excreted by tubular secretion in the kidney. In kidney impairment, clearance is reduced, increasing the risk of adverse effects. Lower initial doses are recommended in older patients (to minimize side effects) and high doses may be needed in kidney failure. It can also cause kidney damage; this is mainly by loss of excessive fluid (i.e., dehydration), and is usually reversible.
Furosemide acts within 1 hour of oral administration (after IV injection, the peak effect is within 30 minutes). Diuresis is usually complete within 6–8 hours of oral administration, but there is significant variation between individuals.
Adverse effects
Furosemide also can lead to gout caused by hyperuricemia. Hyperglycemia is also a common side effect.
The tendency, as for all loop diuretics, to cause low serum potassium concentration (hypokalemia) has given rise to combination products, either with potassium or with the potassium-sparing diuretic amiloride (Co-amilofruse). Other electrolyte abnormalities that can result from furosemide use include hyponatremia, hypochloremia, hypomagnesemia, and hypocalcemia.
In the treatment of heart failure, many studies have shown that the long-term use of furosemide can cause varying degrees of thiamine deficiency, so thiamine supplementation is also suggested.
Furosemide is a known ototoxic agent, generally causing transient hearing loss, though the loss can be permanent. Reported cases of furosemide-induced hearing loss appeared to be associated with rapid intravenous administration, high dosages, concomitant renal disease, and coadministration with other ototoxic medication. However, a recently reported longitudinal study showed that participants treated with loop diuretics over 10 years were 40% more likely to develop hearing loss and 33% more likely to experience progressive hearing loss compared to participants who did not use loop diuretics. This suggests the long-term consequences of loop diuretics on hearing could be more significant than previously thought and further research is required in this area.
Other precautions include nephrotoxicity, sulfonamide (sulfa) allergy, and increased free thyroid hormone effects with large doses.
Interactions
Furosemide has potential interactions with these medications:
Aspirin and other salicylates
Other diuretics (e.g. ethacrynic acid, hydrochlorothiazide)
Synergistic effects with other antihypertensives (e.g. doxazosin)
Sucralfate
Potentially hazardous interactions with other drugs:
Analgesics: increased risk of kidney damage (nephrotoxicity) with nonsteroidal anti-inflammatory drugs; antagonism of diuretic effect with NSAIDs
Antiarrhythmics: a risk of cardiac toxicity exists with antiarrhythmics if hypokalemia occurs; the effects of lidocaine and mexiletine are antagonized.
Antibacterials: increased risk of ototoxicity with aminoglycosides, polymyxins and vancomycin; avoid concomitant use with lymecycline
Antidepressants: increased risk of hypokalemia with reboxetine; enhanced hypotensive effect with MAOIs; increased risk of postural hypotension with tricyclic antidepressants
Antiepileptics: increased risk of hyponatremia with carbamazepine
Antifungals: increased risk of hypokalemia with amphotericin
Antihypertensives: enhanced hypotensive effect; increased risk of first dose hypotensive effect with alpha-blockers; increased risk of ventricular arrhythmias with sotalol if hypokalemia occurs
Antipsychotics: increased risk of ventricular arrhythmias with amisulpride, sertindole, or pimozide (avoid with pimozide) if hypokalemia occurs; enhanced hypotensive effect with phenothiazines
Atomoxetine: hypokalemia increases risk of ventricular arrhythmias
Cardiac glycosides: increased toxicity if hypokalemia occurs
Cyclosporine: variable reports of increased nephrotoxicity, ototoxicity and hepatotoxicity
Lithium: risk of toxicity.
Mechanism of action
Furosemide, like other loop diuretics, acts by inhibiting the luminal Na–K–Cl cotransporter in the thick ascending limb of the loop of Henle, by binding to the Na-K-2Cl transporter, thus causing more sodium, chloride, and potassium to be excreted in the urine.
The action on the distal tubules is independent of any inhibitory effect on carbonic anhydrase or aldosterone; it also abolishes the corticomedullary osmotic gradient and blocks negative, as well as positive, free water clearance. Because of the large NaCl absorptive capacity of the loop of Henle, diuresis is not limited by the development of acidosis, as it is with the carbonic anhydrase inhibitors.
Additionally, furosemide is a noncompetitive subtype-specific blocker of GABA-A receptors. Furosemide has been reported to reversibly antagonize GABA-evoked currents of α6β2γ2 receptors at μM concentrations, but not α1β2γ2 receptors. During development, the α6β2γ2 receptor increases in expression in cerebellar granule neurons, corresponding to increased sensitivity to furosemide.
Pharmacokinetics
Molecular weight: 330.7 daltons
Bioavailability: 47–70%
Bioavailability with end-stage renal disease: 43–46%
Protein binding: 91–99%
Volume of distribution: 0.07–0.2 L/kg (may be higher in patients with cirrhosis or nephrotic syndrome)
Excretion:
Excreted in urine: 60–90% of total dose
Excreted unchanged in urine: 53.1–58.8% of total dose
Excreted in feces: 7–9% of total dose
Excreted in bile: 6–9% of total dose
Approximately 10% is metabolized by the liver in healthy individuals, but this percentage may be greater in individuals with severe kidney failure.
Renal clearance: 2.0 mL/min/kg
Elimination half-life: 2 hrs (prolonged in congestive heart failure, mean 3.4 hrs; prolonged in severe kidney failure, 4–6 hrs, and in anephric patients, 1.5–9 hrs)
Time to peak concentration:
Intravenous administration: 0.3 hrs
Oral solution: 0.83 hrs
Oral tablet: 1.45 hrs
The pharmacokinetics of furosemide are not significantly altered by food.
No direct relationship has been found between furosemide concentration in the plasma and furosemide efficacy. Efficacy depends upon the concentration of furosemide in urine.
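Given the 2-hour elimination half-life quoted above, first-order kinetics gives the fraction of a dose remaining at any time. A short Python illustration (a generic pharmacokinetic identity applied to the quoted value, not dosing guidance):

```python
def remaining_fraction(t_hours, half_life_hours=2.0):
    """First-order elimination: fraction remaining = 0.5 ** (t / t_half)."""
    return 0.5 ** (t_hours / half_life_hours)

for t in (2, 4, 6):
    print(f"{t} h: {remaining_fraction(t):.3f}")  # 0.500, 0.250, 0.125
```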
Names
Furosemide is the INN and BAN. The previous BAN was frusemide.
Brand names under which furosemide is marketed include Aisemide, Apo-Furosemide, Beronald, Desdemin, Discoid, Diural, Diurapid, Dryptal, Durafurid, Edemid, Errolon, Eutensin, Farsiretic, Flusapex, Frudix, Frusemide, Frusetic, Frusid, Fulsix, Fuluvamide, Furantril, Furesis, Furix, Furo-Puren, Furon, Furosedon, Fusid, Frusone, Hydro-rapid, Impugan, Katlex, Lasilix, Lasix, Lodix, Lowpston, Macasirool, Mirfat, Nicorol, Odemase, Oedemex, Profemin, Rosemide, Rusyde, Salix, Seguril, Teva-Furosemide, Trofurit, Uremide, and Urex.
Veterinary uses
The diuretic effects are put to use most commonly in horses to prevent bleeding during a race. In the United States of America, under the racing rules of most states, horses that bleed from the nostrils (exercise-induced pulmonary hemorrhage) three times are permanently barred from racing. Sometime in the early 1970s, furosemide's ability to prevent, or at least greatly reduce, the incidence of bleeding by horses during races was discovered accidentally. Clinical trials followed, and by the decade's end, racing commissions in some states in the USA began legalizing its use on race horses. In 1995, New York became the last state in the United States to approve such use, after years of refusing to consider doing so. Some states allow its use for all racehorses; some allow it only for confirmed "bleeders". Its use for this purpose is still prohibited in many other countries.
Furosemide is also used in horses for pulmonary edema, congestive heart failure (in combination with other drugs), and allergic reactions. Although it increases circulation to the kidneys, it does not help kidney function and is not recommended for kidney disease.
It is also used to treat congestive heart failure (pulmonary edema, pleural effusion, and/or ascites) in cats and dogs.
Horses
Furosemide is injected either intramuscularly or intravenously, usually 0.5-1.0 mg/kg twice/day, although less before a horse is raced. As with many diuretics, it can cause dehydration and electrolyte imbalance, including loss of potassium, calcium, sodium, and magnesium. Excessive use of furosemide will most likely lead to a metabolic alkalosis due to hypochloremia and hypokalemia. The drug should, therefore, not be used in horses that are dehydrated or experiencing kidney failure. It should be used with caution in horses with liver problems or electrolyte abnormalities. Overdose may lead to dehydration, change in drinking patterns and urination, seizures, gastrointestinal problems, kidney damage, lethargy, collapse, and coma.
Furosemide should be used with caution when combined with corticosteroids (as this increases the risk of electrolyte imbalance), aminoglycoside antibiotics (increases the risk of kidney or ear damage), and trimethoprim sulfa (causes decreased platelet count). It may also cause interactions with anesthetics, so its use should be relayed to the veterinarian if the animal is going into surgery, and it decreases the kidneys' ability to excrete aspirin, so dosages will need to be adjusted if combined with that drug.
Furosemide may increase the risk of digoxin toxicity due to hypokalemia.
It is recommended that furosemide not be used during pregnancy or in a lactating mare, as it is passed through the placenta and milk in studies with other species. It should not be used in horses with pituitary pars intermedia dysfunction (Equine Cushing's Disease).
Furosemide is detectable in urine 36–72 hours following injection. Its use is restricted by most equestrian organizations.
US major racetracks ban the use of furosemide on race days.
References
Further reading
Aventis Pharma (1998). Lasix Approved Product Information. Lane Cove: Aventis Pharma Pty Ltd.
External links
Lasix and horse bleeding
Anthranilic acids
Carbonic anhydrase inhibitors
Chemical substances for emergency medicine
Chloroarenes
Equine medications
2-Furyl compounds
GABAA receptor negative allosteric modulators
Glycine receptor antagonists
Loop diuretics
Nephrotoxins
NMDA receptor antagonists
Wikipedia medicine articles ready to translate
Sanofi
Sulfonamides
World Anti-Doping Agency prohibited substances
World Health Organization essential medicines | Furosemide | [
"Chemistry"
] | 3,265 | [
"Chemicals in medicine",
"Chemical substances for emergency medicine"
] |
478,128 | https://en.wikipedia.org/wiki/Protein%20isoform | A protein isoform, or "protein variant", is a member of a set of highly similar proteins that originate from a single gene and are the result of genetic differences. While many perform the same or similar biological roles, some isoforms have unique functions. A set of protein isoforms may be formed from alternative splicings, variable promoter usage, or other post-transcriptional modifications of a single gene; post-translational modifications are generally not considered. (For that, see Proteoforms.) Through RNA splicing mechanisms, mRNA has the ability to select different protein-coding segments (exons) of a gene, or even different parts of exons from RNA to form different mRNA sequences. Each unique sequence produces a specific form of a protein.
The discovery of isoforms could explain the discrepancy between the small number of protein coding regions of genes revealed by the human genome project and the large diversity of proteins seen in an organism: different proteins encoded by the same gene could increase the diversity of the proteome. Isoforms at the RNA level are readily characterized by cDNA transcript studies. Many human genes possess confirmed alternative splicing isoforms. It has been estimated that ~100,000 expressed sequence tags (ESTs) can be identified in humans. Isoforms at the protein level can manifest in the deletion of whole domains or shorter loops, usually located on the surface of the protein.
Definition
One single gene has the ability to produce multiple proteins that differ both in structure and composition; this process is regulated by the alternative splicing of mRNA, though it is not clear to what extent such a process affects the diversity of the human proteome, as the abundance of mRNA transcript isoforms does not necessarily correlate with the abundance of protein isoforms. Three-dimensional protein structure comparisons can be used to help determine which, if any, isoforms represent functional protein products, and the structure of most isoforms in the human proteome has been predicted by AlphaFold and publicly released at isoform.io. The specificity of translated isoforms is derived by the protein's structure/function, as well as the cell type and developmental stage during which they are produced. Determining specificity becomes more complicated when a protein has multiple subunits and each subunit has multiple isoforms.
For example, the 5' AMP-activated protein kinase (AMPK), an enzyme, which performs different roles in human cells, has 3 subunits:
α, catalytic domain, has two isoforms: α1 and α2 which are encoded from PRKAA1 and PRKAA2
β, regulatory domain, has two isoforms: β1 and β2 which are encoded from PRKAB1 and PRKAB2
γ, regulatory domain, has three isoforms: γ1, γ2, and γ3 which are encoded from PRKAG1, PRKAG2, and PRKAG3
In human skeletal muscle, the preferred form is α2β2γ1, but in the human liver, the most abundant form is α1β2γ1 (all possible subunit combinations are enumerated in the sketch below).
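To make the combinatorics concrete, the two α, two β, and three γ isoforms above allow 2 × 2 × 3 = 12 distinct heterotrimer compositions. A tiny Python enumeration (labels only; no claim is made about which complexes actually assemble in a given tissue):

```python
from itertools import product

alpha = ["α1", "α2"]
beta = ["β1", "β2"]
gamma = ["γ1", "γ2", "γ3"]

# Every possible alpha-beta-gamma combination of the listed isoforms.
complexes = ["".join(c) for c in product(alpha, beta, gamma)]
print(len(complexes))   # 12
print(complexes[:3])    # ['α1β1γ1', 'α1β1γ2', 'α1β1γ3']
```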
Mechanism
The primary mechanisms that produce protein isoforms are alternative splicing and variable promoter usage, though modifications due to genetic changes, such as mutations and polymorphisms are sometimes also considered distinct isoforms.
Alternative splicing is the main post-transcriptional modification process that produces mRNA transcript isoforms, and is a major molecular mechanism that may contribute to protein diversity. The spliceosome, a large ribonucleoprotein, is the molecular machine inside the nucleus responsible for RNA cleavage and ligation, removing non-protein coding segments (introns).
Because splicing is a process that occurs between transcription and translation, its primary effects have mainly been studied through genomics techniques—for example, microarray analyses and RNA sequencing have been used to identify alternatively spliced transcripts and measure their abundances. Transcript abundance is often used as a proxy for the abundance of protein isoforms, though proteomics experiments using gel electrophoresis and mass spectrometry have demonstrated that the correlation between transcript and protein counts is often low, and that one protein isoform is usually dominant. One 2015 study states that the cause of this discrepancy likely occurs after translation, though the mechanism is essentially unknown. Consequently, although alternative splicing has been implicated as an important link between variation and disease, there is no conclusive evidence that it acts primarily by producing novel protein isoforms.
Alternative splicing generally describes a tightly regulated process in which alternative transcripts are intentionally generated by the splicing machinery. However, such transcripts are also produced by splicing errors in a process called "noisy splicing," and are also potentially translated into protein isoforms. Although ~95% of multi-exonic genes are thought to be alternatively spliced, one study on noisy splicing observed that most of the different low-abundance transcripts are noise, and predicts that most alternative transcript and protein isoforms present in a cell are not functionally relevant.
Other transcriptional and post-transcriptional regulatory steps can also produce different protein isoforms. Variable promoter usage occurs when the transcriptional machinery of a cell (RNA polymerase, transcription factors, and other enzymes) begin transcription at different promoters—the region of DNA near a gene that serves as an initial binding site—resulting in slightly modified transcripts and protein isoforms.
Characteristics
Generally, one protein isoform is labeled as the canonical sequence based on criteria such as its prevalence and similarity to orthologous—or functionally analogous—sequences in other species. Isoforms are assumed to have similar functional properties, as most have similar sequences, and share some to most exons with the canonical sequence. However, some isoforms show much greater divergence (for example, through trans-splicing), and can share few to no exons with the canonical sequence. In addition, they can have different biological effects—for example, in an extreme case, the function of one isoform can promote cell survival, while another promotes cell death—or can have similar basic functions but differ in their sub-cellular localization. A 2016 study, however, functionally characterized all the isoforms of 1,492 genes and determined that most isoforms behave as "functional alloforms." The authors came to the conclusion that isoforms behave like distinct proteins after observing that the functions of most isoforms did not overlap. Because the study was conducted on cells in vitro, it is not known if the isoforms in the expressed human proteome share these characteristics. Additionally, because the function of each isoform must generally be determined separately, most identified and predicted isoforms still have unknown functions.
Related concepts
Glycoform
A glycoform is an isoform of a protein that differs only with respect to the number or type of attached glycan. Glycoproteins often consist of a number of different glycoforms, with alterations in the attached saccharide or oligosaccharide. These modifications may result from differences in biosynthesis during the process of glycosylation, or due to the action of glycosidases or glycosyltransferases. Glycoforms may be detected through detailed chemical analysis of separated glycoforms, but more conveniently detected through differential reaction with lectins, as in lectin affinity chromatography and lectin affinity electrophoresis. Typical examples of glycoproteins consisting of glycoforms are blood proteins such as orosomucoid, antitrypsin, and haptoglobin. An unusual glycoform variation is seen in neuronal cell adhesion molecule, NCAM, involving polysialic acids, PSA.
Examples
G-actin: despite its conserved nature, it has a varying number of isoforms (at least six in mammals).
Creatine kinase, the presence of which in the blood can be used as an aid in the diagnosis of myocardial infarction, exists in 3 isoforms.
Hyaluronan synthase, the enzyme responsible for the production of hyaluronan, has three isoforms in mammalian cells.
UDP-glucuronosyltransferase, an enzyme superfamily responsible for the detoxification pathway of many drugs, environmental pollutants, and toxic endogenous compounds has 16 known isoforms encoded in the human genome.
G6PDA: normal ratio of active isoforms in cells of any tissue is 1:1 shared with G6PDG. This is precisely the normal isoform ratio in hyperplasia. Only one of these isoforms is found during neoplasia.
Monoamine oxidase, a family of enzymes that catalyze the oxidation of monoamines, exists in two isoforms, MAO-A and MAO-B.
See also
Gene isoform
References
External links
MeSH entry protein isoforms
Definitions Isoform
Protein structure | Protein isoform | [
"Chemistry"
] | 1,855 | [
"Protein structure",
"Structural biology"
] |
478,195 | https://en.wikipedia.org/wiki/Normal%20mode | A normal mode of a dynamical system is a pattern of motion in which all parts of the system move sinusoidally with the same frequency and with a fixed phase relation. The free motion described by the normal modes takes place at fixed frequencies. These fixed frequencies of the normal modes of a system are known as its natural frequencies or resonant frequencies. A physical object, such as a building, bridge, or molecule, has a set of normal modes and their natural frequencies that depend on its structure, materials and boundary conditions.
The most general motion of a linear system is a superposition of its normal modes. The modes are normal in the sense that they can move independently, that is to say that an excitation of one mode will never cause motion of a different mode. In mathematical terms, normal modes are orthogonal to each other.
General definitions
Mode
In the wave theory of physics and engineering, a mode in a dynamical system is a standing wave state of excitation, in which all the components of the system will be affected sinusoidally at a fixed frequency associated with that mode.
Because no real system can perfectly fit under the standing wave framework, the mode concept is taken as a general characterization of specific states of oscillation, thus treating the dynamic system in a linear fashion, in which linear superposition of states can be performed.
Typical examples include:
In a mechanical dynamical system, a vibrating rope is the most clear example of a mode, in which the rope is the medium, the stress on the rope is the excitation, and the displacement of the rope with respect to its static state is the modal variable.
In an acoustic dynamical system, a single sound pitch is a mode, in which the air is the medium, the sound pressure in the air is the excitation, and the displacement of the air molecules is the modal variable.
In a structural dynamical system, a tall building oscillating about its most flexural axis is a mode, in which all the material of the building -under the proper numerical simplifications- is the medium, the seismic/wind/environmental loads are the excitations and the displacements are the modal variable.
In an electrical dynamical system, a resonant cavity made of thin metal walls, enclosing a hollow space, for a particle accelerator is a pure standing wave system, and thus an example of a mode, in which the hollow space of the cavity is the medium, the RF source (a Klystron or another RF source) is the excitation and the electromagnetic field is the modal variable.
When relating to music, normal modes of vibrating instruments (strings, air pipes, drums, etc.) are called "overtones".
The concept of normal modes also finds application in other dynamical systems, such as optics, quantum mechanics, atmospheric dynamics and molecular dynamics.
Most dynamical systems can be excited in several modes, possibly simultaneously. Each mode is characterized by one or several frequencies, according to the modal variable field. For example, a vibrating rope in 2D space is defined by a single frequency (1D axial displacement), but a vibrating rope in 3D space is defined by two frequencies (2D axial displacement).
For a given amplitude on the modal variable, each mode will store a specific amount of energy because of the sinusoidal excitation.
The normal or dominant mode of a system with multiple modes will be the mode storing the minimum amount of energy for a given amplitude of the modal variable, or, equivalently, for a given stored amount of energy, the dominant mode will be the mode imposing the maximum amplitude of the modal variable.
Mode numbers
A mode of vibration is characterized by a modal frequency and a mode shape. It is numbered according to the number of half waves in the vibration. For example, if a vibrating beam with both ends pinned displayed a mode shape of half of a sine wave (one peak on the vibrating beam) it would be vibrating in mode 1. If it had a full sine wave (one peak and one trough) it would be vibrating in mode 2.
In a system with two or more dimensions, such as the pictured disk, each dimension is given a mode number. Using polar coordinates, we have a radial coordinate and an angular coordinate. If one measured from the center outward along the radial coordinate one would encounter a full wave, so the mode number in the radial direction is 2. The other direction is trickier, because only half of the disk is considered due to the anti-symmetric (also called skew-symmetry) nature of a disk's vibration in the angular direction. Thus, measuring 180° along the angular direction you would encounter a half wave, so the mode number in the angular direction is 1. So the mode number of the system is 2–1 or 1–2, depending on which coordinate is considered the "first" and which is considered the "second" coordinate (so it is important to always indicate which mode number matches with each coordinate direction).
In linear systems each mode is entirely independent of all other modes. In general all modes have different frequencies (with lower modes having lower frequencies) and different mode shapes.
Nodes
In a one-dimensional system at a given mode the vibration will have nodes, or places where the displacement is always zero. These nodes correspond to points in the mode shape where the mode shape is zero. Since the vibration of a system is given by the mode shape multiplied by a time function, the displacement of the node points remain zero at all times.
When expanded to a two dimensional system, these nodes become lines where the displacement is always zero. If you watch the animation above you will see two circles (one about halfway between the edge and center, and the other on the edge itself) and a straight line bisecting the disk, where the displacement is close to zero. In an idealized system these lines equal zero exactly, as shown to the right.
In mechanical systems
In the analysis of conservative systems with small displacements from equilibrium, important in acoustics, molecular spectra, and electrical circuits, the system can be transformed to new coordinates called normal coordinates. Each normal coordinate corresponds to a single vibrational frequency of the system and the corresponding motion of the system is called the normal mode of vibration.
Coupled oscillators
Consider two equal bodies (not affected by gravity), each of mass $m$, attached to three springs, each with spring constant $k$. They are attached in the following manner, forming a system that is physically symmetric:

where the edge points are fixed and cannot move. Let $x_1(t)$ denote the horizontal displacement of the left mass, and $x_2(t)$ denote the displacement of the right mass.

Denoting acceleration (the second derivative of $x(t)$ with respect to time) as $\ddot{x}$, the equations of motion are:

$$m\ddot{x}_1 = -kx_1 + k(x_2 - x_1) = -2kx_1 + kx_2$$
$$m\ddot{x}_2 = -kx_2 + k(x_1 - x_2) = -2kx_2 + kx_1$$

Since we expect oscillatory motion of a normal mode (where $\omega$ is the same for both masses), we try:

$$x_1(t) = A_1 e^{i\omega t}, \qquad x_2(t) = A_2 e^{i\omega t}$$

Substituting these into the equations of motion gives us:

$$-\omega^2 m A_1 e^{i\omega t} = -2kA_1 e^{i\omega t} + kA_2 e^{i\omega t}$$
$$-\omega^2 m A_2 e^{i\omega t} = kA_1 e^{i\omega t} - 2kA_2 e^{i\omega t}$$

Omitting the exponential factor (because it is common to all terms) and simplifying yields:

$$(\omega^2 m - 2k)A_1 + kA_2 = 0$$
$$kA_1 + (\omega^2 m - 2k)A_2 = 0$$

And in matrix representation:

$$\begin{pmatrix} \omega^2 m - 2k & k \\ k & \omega^2 m - 2k \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = 0$$

If the matrix on the left is invertible, the unique solution is the trivial solution $(A_1, A_2) = (0, 0)$. The non-trivial solutions are to be found for those values of $\omega$ whereby the matrix on the left is singular; i.e. is not invertible. It follows that the determinant of the matrix must be equal to 0, so:

$$(\omega^2 m - 2k)^2 - k^2 = 0$$

Solving for $\omega$, the two positive solutions are:

$$\omega_1 = \sqrt{\frac{k}{m}}, \qquad \omega_2 = \sqrt{\frac{3k}{m}}$$

Substituting $\omega_1$ into the matrix and solving for $(A_1, A_2)$ yields $(1, 1)$. Substituting $\omega_2$ results in $(1, -1)$. (These vectors are eigenvectors, and the frequencies are eigenvalues.)

The first normal mode is:

$$\vec{\eta}_1 = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} \cos(\omega_1 t + \varphi_1)$$

Which corresponds to both masses moving in the same direction at the same time. This mode is called antisymmetric.

The second normal mode is:

$$\vec{\eta}_2 = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} \cos(\omega_2 t + \varphi_2)$$

This corresponds to the masses moving in the opposite directions, while the center of mass remains stationary. This mode is called symmetric.

The general solution is a superposition of the normal modes where $c_1$, $c_2$, $\varphi_1$, and $\varphi_2$ are determined by the initial conditions of the problem.
The process demonstrated here can be generalized and formulated using the formalism of Lagrangian mechanics or Hamiltonian mechanics.
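The two-mass eigenvalue problem above is also a convenient numerical exercise. The sketch below (an illustration added here, not part of the original article) diagonalizes the stiffness matrix with numpy and recovers the frequencies $\sqrt{k/m}$ and $\sqrt{3k/m}$ and the mode shapes proportional to (1, 1) and (1, -1).

```python
import numpy as np

m, k = 1.0, 1.0   # illustrative mass and spring constant

# Equations of motion in matrix form: m * x'' = -K @ x
K = np.array([[2.0 * k, -k],
              [-k, 2.0 * k]])

# Normal modes solve (K / m) v = omega^2 v; eigh handles symmetric matrices
omega_sq, modes = np.linalg.eigh(K / m)
print(np.sqrt(omega_sq))   # [1.0, 1.732...] = sqrt(k/m), sqrt(3k/m)
print(modes.T)             # rows proportional to (1, 1) and (1, -1)
```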
Standing waves
A standing wave is a continuous form of normal mode. In a standing wave, all the space elements (i.e. coordinates) are oscillating in the same frequency and in phase (reaching the equilibrium point together), but each has a different amplitude.
The general form of a standing wave is:

$$\Psi(t, x) = f(x)\,\big(A\cos(\omega t) + B\sin(\omega t)\big)$$

where $f(x)$ represents the dependence of amplitude on location and the cosine/sine are the oscillations in time.
Physically, standing waves are formed by the interference (superposition) of waves and their reflections (although one may also say the opposite; that a moving wave is a superposition of standing waves). The geometric shape of the medium determines what would be the interference pattern, thus determines the form of the standing wave. This space-dependence is called a normal mode.
Usually, for problems with continuous dependence on $(x, y, z)$ there is no single or finite number of normal modes, but there are infinitely many normal modes. If the problem is bounded (i.e. it is defined on a finite section of space) there are countably many normal modes (usually numbered $n = 1, 2, 3, \ldots$). If the problem is not bounded, there is a continuous spectrum of normal modes.
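The factorization into a spatial profile times a time oscillation can be checked directly against the superposition picture (a numerical sketch added here, not from the article): two counter-propagating sine waves of equal amplitude sum to a standing wave with $f(x) = 2\sin(kx)$.

```python
import numpy as np

k_wave, omega = 2 * np.pi, 2 * np.pi   # illustrative wavenumber and frequency
x = np.linspace(0.0, 1.0, 201)

for t in (0.0, 0.13, 0.25):
    travelling = np.sin(k_wave * x - omega * t) + np.sin(k_wave * x + omega * t)
    standing = 2 * np.sin(k_wave * x) * np.cos(omega * t)   # f(x) * time factor
    assert np.allclose(travelling, standing)   # identity holds at every instant
```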
Elastic solids
In any solid at any temperature, the primary particles (e.g. atoms or molecules) are not stationary, but rather vibrate about mean positions. In insulators the capacity of the solid to store thermal energy is due almost entirely to these vibrations. Many physical properties of the solid (e.g. modulus of elasticity) can be predicted given knowledge of the frequencies with which the particles vibrate. The simplest assumption (by Einstein) is that all the particles oscillate about their mean positions with the same natural frequency $\nu_0$. This is equivalent to the assumption that all atoms vibrate independently with a frequency $\nu_0$. Einstein also assumed that the allowed energy states of these oscillations are harmonics, or integral multiples of $h\nu_0$. The spectrum of waveforms can be described mathematically using a Fourier series of sinusoidal density fluctuations (or thermal phonons).
Debye subsequently recognized that each oscillator is intimately coupled to its neighboring oscillators at all times. Thus, by replacing Einstein's identical uncoupled oscillators with the same number of coupled oscillators, Debye correlated the elastic vibrations of a one-dimensional solid with the number of mathematically special modes of vibration of a stretched string (see figure). The pure tone of lowest pitch or frequency is referred to as the fundamental and the multiples of that frequency are called its harmonic overtones. He assigned to one of the oscillators the frequency of the fundamental vibration of the whole block of solid. He assigned to the remaining oscillators the frequencies of the harmonics of that fundamental, with the highest of all these frequencies being limited by the motion of the smallest primary unit.
The normal modes of vibration of a crystal are in general superpositions of many overtones, each with an appropriate amplitude and phase. Longer wavelength (low frequency) phonons are exactly those acoustical vibrations which are considered in the theory of sound. Both longitudinal and transverse waves can be propagated through a solid, while, in general, only longitudinal waves are supported by fluids.
In the longitudinal mode, the displacement of particles from their positions of equilibrium coincides with the propagation direction of the wave. Mechanical longitudinal waves have also been referred to as compression waves. For transverse modes, individual particles move perpendicular to the propagation of the wave.
According to quantum theory, the mean energy of a normal vibrational mode of a crystalline solid with characteristic frequency $\nu$ is:

$$E(\nu) = \frac{1}{2}h\nu + \frac{h\nu}{e^{h\nu/kT} - 1}$$

The term $\frac{1}{2}h\nu$ represents the "zero-point energy", or the energy which an oscillator will have at absolute zero. $E(\nu)$ tends to the classic value $kT$ at high temperatures.

By knowing the thermodynamic formula,

$$S = -\frac{\partial F}{\partial T}$$

the entropy per normal mode is:

$$S(\nu) = \frac{E(\nu) - \frac{1}{2}h\nu}{T} - k\log\left(1 - e^{-h\nu/kT}\right)$$

The free energy is:

$$F(\nu) = \frac{1}{2}h\nu + kT\log\left(1 - e^{-h\nu/kT}\right)$$

which, for $kT \gg h\nu$, tends to:

$$F(\nu) \approx kT\log\left(\frac{h\nu}{kT}\right)$$

In order to calculate the internal energy and the specific heat, we must know the number of normal vibrational modes with a frequency between the values $\nu$ and $\nu + d\nu$. Allow this number to be $f(\nu)\,d\nu$. Since the total number of normal modes is $3N$, the function $f(\nu)$ is given by:

$$\int f(\nu)\,d\nu = 3N$$

The integration is performed over all frequencies of the crystal. Then the internal energy $U$ will be given by:

$$U = \int f(\nu)\,E(\nu)\,d\nu$$
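The high-temperature limit quoted above is easy to verify numerically (a sketch added here, not part of the original article; the 1 THz mode frequency is an illustrative choice):

```python
import numpy as np

h = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K

def mean_energy(nu, T):
    """Zero-point term plus the Planck occupation term."""
    x = h * nu / (kB * T)
    return 0.5 * h * nu + h * nu / np.expm1(x)

nu = 1.0e12           # a 1 THz vibrational mode (illustrative)
for T in (10.0, 100.0, 1000.0):
    print(T, mean_energy(nu, T) / (kB * T))   # ratio tends to 1 as kT >> h*nu
```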
In quantum mechanics
Bound states in quantum mechanics are analogous to modes. The waves in quantum systems are oscillations in probability amplitude rather than material displacement. The frequency of oscillation, $\nu$, relates to the mode energy by $E = h\nu$, where $h$ is the Planck constant. Thus a system like an atom consists of a linear combination of modes of definite energy. These energies are characteristic of the particular atom. The (complex) square of the probability amplitude at a point in space gives the probability of measuring an electron at that location. The spatial distribution of this probability is characteristic of the atom.
In seismology
Normal modes are generated in the Earth from long wavelength seismic waves from large earthquakes interfering to form standing waves.
For an elastic, isotropic, homogeneous sphere, spheroidal, toroidal and radial (or breathing) modes arise. Spheroidal modes only involve P and SV waves (like Rayleigh waves) and depend on overtone number $n$ and angular order $l$ but have degeneracy of azimuthal order $m$. Increasing $l$ concentrates the fundamental branch closer to the surface and at large $l$ this tends to Rayleigh waves. Toroidal modes only involve SH waves (like Love waves) and do not exist in the fluid outer core. Radial modes are just a subset of spheroidal modes with $l = 0$. The degeneracy does not exist on Earth as it is broken by rotation, ellipticity and 3D heterogeneous velocity and density structure.
It may be assumed that each mode can be isolated, the self-coupling approximation, or that many modes close in frequency resonate, the cross-coupling approximation. Self-coupling will solely change the phase velocity and not the number of waves around a great circle, resulting in a stretching or shrinking of standing wave pattern. Modal cross-coupling occurs due to the rotation of the Earth, from aspherical elastic structure, or due to Earth's ellipticity and leads to a mixing of fundamental spheroidal and toroidal modes.
See also
Antiresonance
Critical speed
Harmonic oscillator
Harmonic series (music)
Infrared spectroscopy
Leaky mode
Mechanical resonance
Modal analysis
Mode (electromagnetism)
Quasinormal mode
Sturm–Liouville theory
Torsional vibration
Vibrations of a circular membrane
References
Further reading
External links
Harvard lecture notes on normal modes
Ordinary differential equations
Classical mechanics
Quantum mechanics
Spectroscopy
Singular value decomposition
Articles containing video clips | Normal mode | [
"Physics",
"Chemistry"
] | 3,026 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Theoretical physics",
"Quantum mechanics",
"Classical mechanics",
"Mechanics",
"Spectroscopy"
] |
478,346 | https://en.wikipedia.org/wiki/Algebraically%20compact%20module | In mathematics, algebraically compact modules, also called pure-injective modules, are modules that have a certain "nice" property which allows the solution of infinite systems of equations in the module by finitary means. The solutions to these systems allow the extension of certain kinds of module homomorphisms. These algebraically compact modules are analogous to injective modules, where one can extend all module homomorphisms. All injective modules are algebraically compact, and the analogy between the two is made quite precise by a category embedding.
Definitions
Let $R$ be a ring, and $M$ a left $R$-module. Consider a system of infinitely many linear equations

$$\sum_{j \in J} r_{ij} x_j = m_i \qquad (i \in I),$$

where both sets $I$ and $J$ may be infinite, $m_i \in M$, the coefficients $r_{ij}$ are taken from $R$, and for each $i$ the number of nonzero $r_{ij}$ is finite.

The goal is to decide whether such a system has a solution, that is whether there exist elements $x_j$ of $M$ such that all the equations of the system are simultaneously satisfied. (It is not required that only finitely many of the $x_j$ are non-zero.)
The module M is algebraically compact if, for all such systems, if every subsystem formed by a finite number of the equations has a solution, then the whole system has a solution. (The solutions to the various subsystems may be different.)
On the other hand, a module homomorphism $M \to K$ is a pure embedding if the induced homomorphism between the tensor products $C \otimes M \to C \otimes K$ is injective for every right $R$-module $C$. The module $M$ is pure-injective if any pure embedding $j : M \to K$ splits (that is, there exists $f : K \to M$ with $f \circ j = \mathrm{id}_M$).
It turns out that a module is algebraically compact if and only if it is pure-injective.
Examples
All modules with finitely many elements are algebraically compact.
Every vector space is algebraically compact (since it is pure-injective). More generally, every injective module is algebraically compact, for the same reason.
If R is an associative algebra with 1 over some field k, then every R-module with finite k-dimension is algebraically compact. This, together with the fact that all finite modules are algebraically compact, gives rise to the intuition that algebraically compact modules are those (possibly "large") modules which share the nice properties of "small" modules.
The Prüfer groups are algebraically compact abelian groups (i.e. Z-modules). The ring of p-adic integers for each prime p is algebraically compact as both a module over itself and a module over Z. The rational numbers are algebraically compact as a Z-module. Together with the indecomposable finite modules over Z, this is a complete list of indecomposable algebraically compact modules.
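For contrast, a standard illustration of how algebraic compactness can fail (an example added here, not part of the original text): the $\mathbb{Z}$-module $\mathbb{Z}$ itself is not algebraically compact. Fix digits $a_n \in \{0, 1\}$, say $a_n = 1$ exactly when $n$ is a perfect square, and consider the system

```latex
\begin{align*}
x_0 - 2x_1 &= a_0,\\
x_1 - 2x_2 &= a_1,\\
x_2 - 2x_3 &= a_2,\\
&\;\;\vdots
\end{align*}
% Every finite subsystem is solvable in Z: set the last unknown to 0 and
% back-substitute. A solution of the whole system would force
%   x_0 = a_0 + 2 a_1 + 4 a_2 + ... ,
% the 2-adic integer with digit sequence (a_n). The 2-adic digits of a
% rational integer are eventually constant, while the chosen sequence is
% not, so the full system has no solution in Z (though it does in the
% algebraically compact module of 2-adic integers).
```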
Many algebraically compact modules can be produced using the injective cogenerator Q/Z of abelian groups. If H is a right module over the ring R, one forms the (algebraic) character module H* consisting of all group homomorphisms from H to Q/Z. This is then a left R-module, and the *-operation yields a faithful contravariant functor from right R-modules to left R-modules.
Every module of the form H* is algebraically compact. Furthermore, there are pure injective homomorphisms H → H**, natural in H. One can often simplify a problem by first applying the *-functor, since algebraically compact modules are easier to deal with.
Facts
The following condition is equivalent to M being algebraically compact:
For every index set $I$, the addition map $M^{(I)} \to M$ can be extended to a module homomorphism $M^I \to M$ (here $M^{(I)}$ denotes the direct sum of copies of $M$, one for each element of $I$; $M^I$ denotes the product of copies of $M$, one for each element of $I$).
Every indecomposable algebraically compact module has a local endomorphism ring.
Algebraically compact modules share many other properties with injective objects because of the following: there exists an embedding of R-Mod into a Grothendieck category G under which the algebraically compact R-modules precisely correspond to the injective objects in G.
Every R-module is elementarily equivalent to an algebraically compact R-module and to a direct sum of indecomposable algebraically compact R-modules.
References
C.U. Jensen and H. Lenzing: Model Theoretic Algebra, Gordon and Breach, 1989
Module theory
Model theory | Algebraically compact module | [
"Mathematics"
] | 922 | [
"Fields of abstract algebra",
"Mathematical logic",
"Module theory",
"Model theory"
] |
479,129 | https://en.wikipedia.org/wiki/Serial%20Storage%20Architecture | Serial Storage Architecture (SSA) was a serial transport protocol used to attach disk drives to server computers. It was developed by IBM employee Ian Judd in 1990 to provide data redundancy for critical applications. SSA was deployed in server RAID environments, where it was capable of providing for up to 80 MB/s of data throughput, with sustained data rates as high as 60 MB/s in non-RAID mode and 35 MB/s in RAID mode.
SSA was promoted as an open standard by the SSA Industry Association, unlike its predecessor, the first generation Serial Disk Subsystem. A number of vendors including IBM, Pathlight Technology and Vicom Systems produced products based on SSA. It was also adopted as an American National Standards Institute (ANSI) X3T10.1 standard. SSA devices are logically SCSI devices and conform to all of the SCSI command protocols.
History
SSA was invented by Ian Judd of IBM in 1990. IBM produced a number of successful products based upon this standard before it was overtaken by the more widely adopted Fibre Channel protocol.
Link characteristics
All the components in a typical SSA subsystem are connected by bi-directional cabling. Data sent from the adaptor can travel in either direction around the loop to its destination. SSA detects interruptions in the loop and automatically reconfigures the system to help maintain connection while a link is restored.
Up to 192 hot swappable hard disk drives can be supported per system. Drives can be designated for use by an array in the event of hardware failure. Up to 32 separate RAID arrays can be supported per adaptor, and arrays can be mirrored across servers to provide cost-effective protection for critical applications. Arrays are connected by thin and inexpensive copper cables situated up to 25 metres apart, allowing subsystems to be located in secure, convenient locations, far from the server itself.
The copper cables used in SSA configurations are round bundles of two or four twisted pairs, up to 25 metres long and terminated with 9-pin micro-D connectors. Impedances are 75 ohm single-ended, and 150 ohm differential. For longer-distance connections, it is possible to use fiber-optic cables up to 10 km (6 mi) in length. Signals are differential TTL. The transmission capacity is 20 megabytes per second in each direction per channel, with up to two channels per cable. The transport layer protocol is non-return-to-zero, with 8B/10B encoding (10 bits per character). Higher protocol layers were based on the SCSI-3 standard.
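The 80 MB/s figure quoted earlier follows from the per-link numbers in this section; the sketch below (illustrative, not from the article; the raw line rate is an assumption chosen to match a 20 MB/s payload) also shows how 8B/10B encoding relates line rate to payload rate.

```python
# SSA link arithmetic (illustrative sketch)
per_direction_mb_s = 20        # MB/s each way per channel, from the text
directions = 2                 # links are bi-directional
channels = 2                   # up to two channels per cable

print(per_direction_mb_s * directions * channels)   # 80 MB/s peak throughput

# 8B/10B sends each 8-bit byte as a 10-bit character, so the payload rate
# is 8/10 of the raw line rate.
line_rate_mbit_s = 200         # assumed raw rate for one 20 MB/s direction
payload_mb_s = line_rate_mbit_s * (8 / 10) / 8
print(payload_mb_s)            # 20.0
```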
Products
IBM 7133 Disk expansion enclosures
IBM 2105 Versatile Storage Server (VSS)
IBM 2105 Enterprise Storage Server (ESS)
IBM 7190 SBUS SSA Adapter
Pathlight Technology Streamline PCI Host Bus Adapter, SSA Data Pump, storage area network gateway
See also
List of device bandwidths
References
Serial buses
SCSI
IBM storage devices
American National Standards Institute standards
Computer storage buses | Serial Storage Architecture | [
"Technology"
] | 619 | [
"American National Standards Institute standards",
"Computer standards"
] |
479,392 | https://en.wikipedia.org/wiki/Myosin | Myosins () are a family of motor proteins (though most often protein complexes) best known for their roles in muscle contraction and in a wide range of other motility processes in eukaryotes. They are ATP-dependent and responsible for actin-based motility.
The first myosin to be discovered (M2) was isolated in 1864 by Wilhelm Kühne. Kühne had extracted a viscous protein from skeletal muscle that he held responsible for keeping the tension state in muscle. He called this protein myosin. The term has been extended to include a group of similar ATPases found in the cells of both striated muscle tissue and smooth muscle tissue.
Following the discovery in 1973 of enzymes with myosin-like function in Acanthamoeba castellanii, a global range of divergent myosin genes have been discovered throughout the realm of eukaryotes.
Although myosin was originally thought to be restricted to muscle cells (hence myo-(s) + -in), there is no single "myosin"; rather it is a very large superfamily of genes whose protein products share the basic properties of actin binding, ATP hydrolysis (ATPase enzyme activity), and force transduction. Virtually all eukaryotic cells contain myosin isoforms. Some isoforms have specialized functions in certain cell types (such as muscle), while other isoforms are ubiquitous. The structure and function of myosin is globally conserved across species, to the extent that rabbit muscle myosin II will bind to actin from an amoeba.
Structure and functions
Domains
Most myosin molecules are composed of a head, neck, and tail domain.
The head domain binds the filamentous actin, and uses ATP hydrolysis to generate force and to "walk" along the filament towards the barbed (+) end (with the exception of myosin VI, which moves towards the pointed (-) end).
The neck domain acts as a linker and as a lever arm for transducing force generated by the catalytic motor domain. The neck domain can also serve as a binding site for myosin light chains, which are distinct proteins that form part of a macromolecular complex and generally have regulatory functions.
The tail domain generally mediates interaction with cargo molecules and/or other myosin subunits. In some cases, the tail domain may play a role in regulating motor activity.
Power stroke
Multiple myosin II molecules generate force in skeletal muscle through a power stroke mechanism fuelled by the energy released from ATP hydrolysis. The power stroke occurs at the release of phosphate from the myosin molecule after the ATP hydrolysis while myosin is tightly bound to actin. The effect of this release is a conformational change in the molecule that pulls against the actin. The release of the ADP molecule leads to the so-called rigor state of myosin. The binding of a new ATP molecule will release myosin from actin. ATP hydrolysis within the myosin will cause it to bind to actin again to repeat the cycle. The combined effect of the myriad power strokes causes the muscle to contract.
Nomenclature, evolution, and the family tree
The wide variety of myosin genes found throughout the eukaryotic phyla were named according to different schemes as they were discovered. The nomenclature can therefore be somewhat confusing when attempting to compare the functions of myosin proteins within and between organisms.
Skeletal muscle myosin, the most conspicuous of the myosin superfamily due to its abundance in muscle fibers, was the first to be discovered. This protein makes up part of the sarcomere and forms macromolecular filaments composed of multiple myosin subunits. Similar filament-forming myosin proteins were found in cardiac muscle, smooth muscle, and nonmuscle cells. However, beginning in the 1970s, researchers began to discover new myosin genes in simple eukaryotes encoding proteins that acted as monomers and were therefore entitled Class I myosins. These new myosins were collectively termed "unconventional myosins" and have been found in many tissues other than muscle. These new superfamily members have been grouped according to phylogenetic relationships derived from a comparison of the amino acid sequences of their head domains, with each class being assigned a Roman numeral (see phylogenetic tree). The unconventional myosins also have divergent tail domains, suggesting unique functions. The now diverse array of myosins likely evolved from an ancestral precursor (see picture).
Analysis of the amino acid sequences of different myosins shows great variability among the tail domains, but strong conservation of head domain sequences. Presumably this is so the myosins may interact, via their tails, with a large number of different cargoes, while the goal in each case – to move along actin filaments – remains the same and therefore requires the same machinery in the motor. For example, the human genome contains over 40 different myosin genes.
These differences in shape also determine the speed at which myosins can move along actin filaments. The hydrolysis of ATP and the subsequent release of the phosphate group causes the "power stroke", in which the "lever arm" or "neck" region of the heavy chain is dragged forward. Since the power stroke always moves the lever arm by the same angle, the length of the lever arm determines the displacement of the cargo relative to the actin filament. A longer lever arm will cause the cargo to traverse a greater distance even though the lever arm undergoes the same angular displacement – just as a person with longer legs can move farther with each individual step. The velocity of a myosin motor depends upon the rate at which it passes through a complete kinetic cycle of ATP binding to the release of ADP.
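The lever-arm argument above amounts to a simple kinematic model: the step size is the chord swept by the lever, and velocity is step size times cycling rate. The sketch below (added here for illustration; the lever lengths, stroke angle, and cycling rate are placeholder values, not measurements from the article) shows how doubling the lever arm doubles the speed.

```python
import math

def step_size_nm(lever_nm, stroke_deg):
    """Chord swept by a rigid lever arm rotating through a fixed angle."""
    return 2.0 * lever_nm * math.sin(math.radians(stroke_deg) / 2.0)

def velocity_nm_per_s(lever_nm, stroke_deg, cycles_per_s):
    return step_size_nm(lever_nm, stroke_deg) * cycles_per_s

# Same angular stroke and cycling rate, twice the lever length -> twice the speed
print(velocity_nm_per_s(lever_nm=10.0, stroke_deg=70.0, cycles_per_s=20.0))
print(velocity_nm_per_s(lever_nm=20.0, stroke_deg=70.0, cycles_per_s=20.0))
```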
Myosin classes
Myosin I
Myosin I, a ubiquitous cellular protein, acts as a monomer and functions in vesicle transport. It has a step size of 10 nm and has been implicated as being responsible for the adaptation response of the stereocilia in the inner ear.
Myosin II
Myosin II (also known as conventional myosin) is the myosin type responsible for producing muscle contraction in muscle cells in most animal cell types. It is also found in non-muscle cells in contractile bundles called stress fibers.
Myosin II contains two heavy chains, each about 2000 amino acids in length, which constitute the head and tail domains. Each of these heavy chains contains the N-terminal head domain, while the C-terminal tails take on a coiled-coil morphology, holding the two heavy chains together (imagine two snakes wrapped around each other, as in a caduceus). Thus, myosin II has two heads. The intermediate neck domain is the region creating the angle between the head and tail. In smooth muscle, a single gene (MYH11) codes for the heavy chains of myosin II, but splice variants of this gene result in four distinct isoforms.
It also contains 4 myosin light chains (MLC), resulting in 2 per head, weighing 20 (MLC20) and 17 (MLC17) kDa. These bind the heavy chains in the "neck" region between the head and tail.
The MLC20 is also known as the regulatory light chain and actively participates in muscle contraction.
The MLC17 is also known as the essential light chain. Its exact function is unclear, but is believed to contribute to the structural stability of the myosin head along with MLC20. Two variants of MLC17 (MLC17a/b) exist as a result of alternative splicing at the MLC17 gene.
In muscle cells, the long coiled-coil tails of the individual myosin molecules can auto-inhibit active function in the 10S conformation or, upon phosphorylation, change to the 6S conformation and join, forming the thick filaments of the sarcomere. The force-producing head domains stick out from the side of the thick filament, ready to walk along the adjacent actin-based thin filaments in response to the proper chemical signals and may be in either auto-inhibited or active conformation. The balance/transition between active and inactive states is subject to extensive chemical regulation.
Myosin III
Myosin III is a poorly understood member of the myosin family. It has been studied in vivo in the eyes of Drosophila, where it is thought to play a role in phototransduction. A human homologue gene for myosin III, MYO3A, has been uncovered through the Human Genome Project and is expressed in the retina and cochlea.
Myosin IV
Myosin IV has a single IQ motif and a tail that lacks any coiled-coil forming sequence. It has homology similar to the tail domains of Myosin VII and XV.
Myosin V
Myosin V is an unconventional myosin motor, which is processive as a dimer and has a step size of 36 nm. It translocates (walks) along actin filaments traveling towards the barbed end (+ end) of the filaments. Myosin V is involved in the transport of cargo (e.g. RNA, vesicles, organelles, mitochondria) from the center of the cell to the periphery, but has furthermore been shown to act like a dynamic tether, retaining vesicles and organelles in the actin-rich periphery of cells. A recent single-molecule in vitro reconstitution study on assembling actin filaments suggests that Myosin V travels farther on newly assembling (ADP-Pi rich) F-actin, while processive runlengths are shorter on older (ADP-rich) F-actin.
The Myosin V motor head can be subdivided into the following functional regions:
Nucleotide-binding site - These elements together coordinate di-valent metal cations (usually magnesium) and catalyze hydrolysis:
Switch I - This contains a highly conserved SSR motif. Isomerizes in the presence of ATP.
Switch II - This is the Kinase-GTPase version of the Walker B motif DxxG. Isomerizes in the presence of ATP.
P-loop - This contains the Walker A motif GxxxxGK(S,T). This is the primary ATP binding site.
Transducer - The seven β-strands that underpin the motor head's structure.
U50 and L50 - The Upper (U50) and Lower (L50) domains are each around 50kDa. Their spatial separation forms a cleft critical for binding to actin and some regulatory compounds.
SH1 helix and Relay - These elements together provide an essential mechanism for coupling the enzymatic state of the motor domain to the powerstroke-producing region (converter domain, lever arm, and light chains).
Converter - This converts a change of conformation in the motor head to an angular displacement of the lever arm (in most cases reinforced with light chains).
Myosin VI
Myosin VI is an unconventional myosin motor, which is primarily processive as a dimer, but also acts as a nonprocessive monomer. It walks along actin filaments, travelling towards the pointed end (- end) of the filaments. Myosin VI is thought to transport endocytic vesicles into the cell.
Myosin VII
Myosin VII is an unconventional myosin with two FERM domains in the tail region. It has an extended lever arm consisting of five calmodulin binding IQ motifs followed by a single alpha helix (SAH). Myosin VII is required for phagocytosis in Dictyostelium discoideum, spermatogenesis in C. elegans and stereocilia formation in mice and zebrafish.
Myosin VIII
Myosin VIII is a plant-specific myosin linked to cell division; specifically, it is involved in regulating the flow of cytoplasm between cells and in the localization of vesicles to the phragmoplast.
Myosin IX
Myosin IX is a group of single-headed motor proteins. It was first shown to be minus-end directed, but a later study showed that it is plus-end directed. The movement mechanism for this myosin is poorly understood.
Myosin X
Myosin X is an unconventional myosin motor, which is functional as a dimer. The dimerization of myosin X is thought to be antiparallel. This behavior has not been observed in other myosins. In mammalian cells, the motor is found to localize to filopodia. Myosin X walks towards the barbed ends of filaments. Some research suggests it preferentially walks on bundles of actin, rather than single filaments. It is the first myosin motor found to exhibit this behavior.
Myosin XI
Myosin XI directs the movement of organelles such as plastids and mitochondria in plant cells. It is responsible for the light-directed movement of chloroplasts according to light intensity and the formation of stromules interconnecting different plastids. Myosin XI also plays a key role in polar root tip growth and is necessary for proper root hair elongation. A specific Myosin XI found in Nicotiana tabacum was discovered to be the fastest known processive molecular motor, moving at 7μm/s in 35 nm steps along the actin filament.
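As a quick consistency check of the figures quoted above (a sketch added here, not from the article): 7 μm/s covered in 35 nm steps corresponds to roughly 200 mechanochemical cycles per second.

```python
velocity_nm_per_s = 7000.0   # 7 um/s, as quoted for the tobacco myosin XI
step_nm = 35.0
print(velocity_nm_per_s / step_nm)   # ~200 steps per second
```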
Myosin XII
Myosin XIII
Myosin XIV
This myosin group has been found in the Apicomplexa phylum. The myosins localize to plasma membranes of the intracellular parasites and may then be involved in the cell invasion process.
This myosin is also found in the ciliated protozoan Tetrahymena thermophila. Known functions include: transporting phagosomes to the nucleus and perturbing the developmentally regulated elimination of the macronucleus during conjugation.
Myosin XV
Myosin XV is necessary for the development of the actin core structure of the non-motile stereocilia located in the inner ear. It is thought to be functional as a monomer.
Myosin XVI
Myosin XVII
Myosin XVIII
MYO18A is a gene on chromosome 17q11.2 that encodes actin-based motor molecules with ATPase activity, which may be involved in maintaining stromal cell scaffolding required for maintaining intercellular contact.
Myosin XIX
Unconventional myosin XIX (Myo19) is a mitochondrial associated myosin motor.
Genes in humans
Note that not all of these genes are active.
Class I: MYO1A, MYO1B, MYO1C, MYO1D, MYO1E, MYO1F, MYO1G, MYO1H
Class II: MYH1, MYH2, MYH3, MYH4, MYH6, MYH7, MYH7B, MYH8, MYH9, MYH10, MYH11, MYH13, MYH14, MYH15, MYH16
Class III: MYO3A, MYO3B
Class V: MYO5A, MYO5B, MYO5C
Class VI: MYO6
Class VII: MYO7A, MYO7B
Class IX: MYO9A, MYO9B
Class X: MYO10
Class XV: MYO15A, MYO15B
Class XVI: MYO16
Class XVIII: MYO18A, MYO18B
Class XIX: MYO19
Myosin light chains are distinct and have their own properties. They are not considered "myosins" but are components of the macromolecular complexes that make up the functional myosin enzymes.
Light chain: MYL1, MYL2, MYL3, MYL4, MYL5, MYL6, MYL6B, MYL7, MYL9, MYLIP, MYLK, MYLK2, MYLL1
Paramyosin
Paramyosin is a large, 93-115kDa muscle protein that has been described in a number of diverse invertebrate phyla. Invertebrate thick filaments are thought to be composed of an inner paramyosin core surrounded by myosin. The myosin interacts with actin, resulting in fibre contraction. Paramyosin is found in many different invertebrate species, for example, Brachiopoda, Sipunculidea, Nematoda, Annelida, Mollusca, Arachnida, and Insecta. Paramyosin is responsible for the "catch" mechanism that enables sustained contraction of muscles with very little energy expenditure, such that a clam can remain closed for extended periods.
Paramyosins can be found in seafood. A recent computational study showed that following human intestinal digestion, paramyosins of common octopus, Humboldt squid, Japanese abalone, Japanese scallop, Mediterranean mussel, Pacific oyster, sea cucumber, and Whiteleg shrimp could release short peptides that inhibit the enzymatic activities of angiotensin converting enzyme and dipeptidyl peptidase.
References
Further reading
Molecular Biology of the Cell. Alberts, Johnson, Lewis, Raff, Roberts, and Walter. 4th Edition. 949–952.
Additional images
External links
MBInfo – Myosin Isoforms
MBInfo – The Myosin Powerstroke
Myosin Video A video of a moving myosin motor protein.
The Myosin Homepage
http://cellimages.ascb.org/cdm4/item_viewer.php?CISOROOT=/p4041coll12&CISOPTR=101&CISOBOX=1&REC=2 Animation of a moving myosin motor protein
3D macromolecular structures of myosin from the EM Data Bank(EMDB)
Motor proteins
Cytoskeleton proteins
Protein families
Skeletal muscle | Myosin | [
"Chemistry",
"Biology"
] | 3,822 | [
"Molecular machines",
"Protein families",
"Motor proteins",
"Protein classification"
] |
479,719 | https://en.wikipedia.org/wiki/Applied%20Materials | Applied Materials, Inc. is an American corporation that supplies equipment, services and software for the manufacture of semiconductor (integrated circuit) chips for electronics, flat panel displays for computers, smartphones, televisions, and solar products. The company also supplies equipment to produce coatings for flexible electronics, packaging and other applications. The company is headquartered in Santa Clara, California, and is the second largest supplier of semiconductor equipment in the world based on revenue behind Dutch company ASML.
History
Founded in 1967 by Michael A. McNeilly and others, Applied Materials went public in 1972. In subsequent years, the company diversified, until James C. Morgan became CEO in 1976 and returned the company's focus to its core business of semiconductor manufacturing equipment. By 1978, sales increased by 17%.
In 1984, Applied Materials became the first U.S. semiconductor equipment manufacturer to open its own technology center in Japan, and the first semiconductor equipment company to operate a service center in China. In 1987, Applied introduced a chemical vapor deposition (CVD) machine called the Precision 5000, which differed from existing machines by incorporating diverse processes into a single machine that had multiple process chambers.
In 1992, the corporation settled a lawsuit with three former employees for an estimated $600,000. The suit complained that the employees were driven out of the company after complaining about the courses Applied Scholastics had been hired to teach there.
In 1993, the Applied Materials' Precision 5000 was inducted into the Smithsonian Institution's permanent collection of Information Age technology.
In November 1996, Applied Materials acquired two Israeli companies for an aggregate amount of $285 million: Opal Technologies and Orbot Instruments for $175 million and $110 million in cash, respectively. Orbot produces systems for inspecting patterned silicon wafers for yield enhancement during the semiconductor manufacturing process, as well as systems for inspecting masks used during the patterning process. Opal develops and manufactures high-speed metrology systems used by semiconductor manufacturers to verify critical dimensions during the production of integrated circuits.
In 2000, Etec Systems, Inc. was purchased. On June 27, 2001, Applied Materials acquired Israeli company Oramir Semiconductor Equipment Ltd., a supplier of laser cleaning technologies for semiconductor wafers, in a purchase business combination for $21 million in cash.
In January 2008, Applied Materials purchased Baccini, an Italian company and designer of tools used in manufacturing solar cells.
In 2009, Applied Materials opened its Solar Technology Center, the world's largest commercial solar energy research and development facility, in Xi'an, China.
Applied Materials acquired Semitool Inc. in December 2009, and announced its acquisition of Varian Semiconductor in May 2011. Applied Materials then announced a planned merger with Tokyo Electron on September 24, 2013. If it had been approved by government regulators, the proposed combined company, to be called Eteris, would have been the world's largest supplier of semiconductor processing equipment, with a total market value of $29 billion. However, on April 27, 2015, Applied Materials announced that its merger with Tokyo Electron had been scrapped due to antitrust concerns and fears of dominating the semiconductor equipment industry.
In 2015, Applied Materials left the solar wafer sawing and the solar ion implantation businesses.
Applied Materials was named among FORTUNE World's Most Admired Companies in 2018.
In 2019, Applied Materials announced its intention to buy semiconductor equipment manufacturer (and former Hitachi group member) Kokusai Electric Corporation from private equity firm KKR for $2.2 billion, but terminated the deal in March 2021 citing delays in getting approval from China's regulator.
In November 2023, Applied Materials was reported to be under criminal investigation by the United States Department of Justice for routing equipment to Semiconductor Manufacturing International Corporation via South Korea in violation of US sanctions.
Finances
For the fiscal year 2021, Applied Materials reported earnings of US$5.888 billion, with an annual revenue of US$23.063 billion, a 34% increase over the previous fiscal year. Applied Materials' market capitalization was valued at over US$36.6 billion in November 2018.
Organization
Applied is organized into three major business sectors: Semiconductor Products, Applied Global Services, and Display and Adjacent Markets. Applied Materials also operates a venture investing arm called Applied Ventures.
Semiconductor Products
The company develops and manufactures equipment used in the wafer fabrication steps of creating a semiconductor device, including atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), rapid thermal processing (RTP), chemical mechanical polishing (CMP), etch, ion implantation and wafer inspection. The company acquired Semitool for this group in late 2009. In 2019, Applied Materials agreed to buy semiconductor equipment manufacturer Kokusai for $2.2 billion, a deal that was later terminated in 2021.
Applied Global Services
The Applied Global Services (AGS) group offers equipment installation support and warranty extended support, as well as maintenance support. AGS also offers new and refurbished equipment, as well as upgrades and enhancements for installed base equipment. This sector also includes automation software for manufacturing environments.
Display and Adjacent Markets
This sector combined an existing business unit with the display business of Applied Films Corporation, acquired in mid-2006.
The manufacturing process for TFT LCDs (thin film transistor liquid crystal displays), commonly employed in computer monitors and televisions, is similar to that employed for integrated circuits. In cleanroom environments both TFT-LCD and integrated circuit production use photolithography, chemical and physical vapor deposition, and testing.
Energy and Environmental Solutions (former sector)
In 2006, the company acquired Applied Films, a glass coating and web coating business. Also in 2006, Applied announced it was entering the solar manufacturing equipment business. The solar, glass and web businesses were organized into the company's Energy and Environmental Solutions (EES) sector.
In 2007, Applied Materials announced the Applied SunFab thin film photovoltaic module production line, with single or tandem junction capability. SunFab applies silicon thin film layers to glass substrate that then produce electricity when exposed to sunlight. In 2009, the company's SunFab line was certified by the International Electrotechnical Commission (IEC). In 2010, Applied announced that it was abandoning the thin film market and closing down their SunFab division. Also in 2007, the company acquired privately held, Switzerland-based HCT Shaping Systems SA, a specialist in wafer sawing tools for both solar and semiconductor wafer manufacture, paying approximately $475 million.
In 2008, Applied acquired privately held, Italy-based Baccini SpA for $330M, a company that worked on the metallization steps of solar cell manufacturing. The company was listed at the top of VLSI Research's list of suppliers of photovoltaic manufacturing equipment for 2008, with sales of $797M.
Since July 2016 the Energy and Environmental Solutions sector is no longer reported separately. Remaining solar business activities have been included in "Corporate and Others".
Locations
Applied moved into its Bowers Avenue headquarters in Santa Clara, California, in 1974 and operates in Europe, Japan, Canada, the United States, Israel, China, Italy, India, Korea, Southeast Asia, Singapore and Taiwan.
Management
Chairman of the Board of Directors: Thomas J. Iannotti
President and chief executive officer: Gary E. Dickerson
Chief Financial Officer: Brice Hill
Chief Technology Officer: Omkaram Nalamasu
See also
Lam Research
References
External links
1967 establishments in California
Companies based in Santa Clara, California
Companies listed on the Nasdaq
American companies established in 1967
Computer companies established in 1967
Computer companies of the United States
Computer hardware companies
Electronics companies established in 1967
Solar energy companies of the United States
Equipment semiconductor companies
Semiconductor companies of the United States
Superfund sites in California
Technology companies based in the San Francisco Bay Area
Thin-film cell manufacturers
1970s initial public offerings | Applied Materials | [
"Technology",
"Engineering"
] | 1,591 | [
"Equipment semiconductor companies",
"Computer hardware companies",
"Semiconductor fabrication equipment",
"Computers"
] |
480,130 | https://en.wikipedia.org/wiki/Index%20of%20wave%20articles | This is a list of wave topics.
0–9
21 cm line
A
Abbe prism
Absorption spectroscopy
Absorption spectrum
Absorption wavemeter
Acoustic wave
Acoustic wave equation
Acoustics
Acousto-optic effect
Acousto-optic modulator
Acousto-optics
Airy disc
Airy wave theory
Alfvén wave
Alpha waves
Amphidromic point
Amplitude
Amplitude modulation
Animal echolocation
Antarctic Circumpolar Wave
Antiphase
Aquamarine Power
Arrayed waveguide grating
Artificial wave
Atmospheric diffraction
Atmospheric wave
Atmospheric waveguide
Atom laser
Atomic clock
Atomic mirror
Audience wave
Autowave
Averaged Lagrangian
B
Babinet's principle
Backward wave oscillator
Bandwidth-limited pulse
beat
Berry phase
Bessel beam
Beta wave
Black hole
Blazar
Bloch's theorem
Blueshift
Boussinesq approximation (water waves)
Bow wave
Bragg diffraction
Bragg's law
Breaking wave
Bremsstrahlung, Electromagnetic radiation
Brillouin scattering
Bullet bow shockwave
Burgers' equation
Business cycle
C
Capillary wave
Carrier wave
Cherenkov radiation
Chirp
Ernst Chladni
Circular polarization
Clapotis
Closed waveguide
Cnoidal wave
Coherence (physics)
Coherence length
Coherence time
Cold wave
Collimated light
Collimator
Compton effect
Comparison of analog and digital recording
Computation of radiowave attenuation in the atmosphere
Continuous phase modulation
Continuous wave
Convective heat transfer
Coriolis frequency
Coronal mass ejection
Cosmic microwave background radiation
Coulomb wave function
Cutoff frequency
Cutoff wavelength
Cymatics
D
Damped wave
Decollimation
Delta wave
Dielectric waveguide
Diffraction
Direction finding
Dispersion (optics)
Dispersion (water waves)
Dispersion relation
Dominant wavelength
Doppler effect
Doppler radar
Douglas Sea Scale
Draupner wave
Droplet-shaped wave
Duhamel's principle
E
E-skip
Earthquake
Echo (phenomenon)
Echo sounding
Echolocation (animal)
Echolocation (human)
Eddy (fluid dynamics)
Edge wave
Eikonal equation
Ekman layer
Ekman spiral
Ekman transport
El Niño–Southern Oscillation
Electroencephalography
Electromagnetic electron wave
Electromagnetic radiation
Electromagnetic wave
Electromagnetic wave cut-off
Electron
Elliott wave
Elliptical polarization
Emission spectrum
Envelope (waves)
Equatorial Rossby wave
Equatorial waves
Essential bandwidth
Evanescent wave
Extratropical cyclone
Extremely low frequency
F
F wave
Fabry–Pérot interferometer
Faraday wave
Fetch (geography)
Fourier series
Fraunhofer diffraction
Fraunhofer distance
Freak wave
Frequency
Frequency modulation
Fresnel diffraction
Fresnel equations
Fresnel integral
Fresnel lens
Fresnel number
Fresnel rhomb
Fresnel zone
Fresnel–Arago laws
Fundamental frequency
G
Gamma ray
Gamma ray burst
Gamma wave
Gaussian beam
Geometric optics, Geometrical optics
Geostrophic current
Gravitational radiation
Gravity wave
Groundwave
Group delay
Group velocity
H
Harmonic
Heat wave
Holography
Human echolocation
Hundred-year wave
Hurricane
Huygens' principle
Hydraulic jump
Hydrography
Hydropower
Hyperbolic partial differential equation
I
In phase
Inertial wave
Infragravity wave
Infrared gas analyzer
Inhomogeneous electromagnetic wave equation
Interference (wave propagation)
Interferometry
Internal wave
Inverse scattering transform
Ion acoustic wave
Irradiance
K
Kelvin wave
Kinematic wave
Knife-edge effect
Kondratiev wave
L
Lamb waves
Landau damping
Lee wave
Linear elasticity
Linear polarization
List of waves named after people
Long wavelength limit
Longitudinal mode
Longitudinal wave
Longwave
Love wave
M
Mach wave
Mach–Zehnder interferometer
Maelstrom (disambiguation)
Magnetometer
Magnetosonic wave
Matter wave
Maxwell's equations
Mayer waves
Mechanical wave
Medical ultrasonography
Mediumwave
Megatsunami
Microbarom
Microwave
Microwave auditory effect
Microwave oven
Microwave plasma
Microwaving
Mie scattering
Millimeter cloud radar
Modulation
Monochromatic electromagnetic plane wave
Monochromator
Moonlight
Morning Glory cloud
Mu wave
Multipath propagation
N
Neural oscillation
Neutron
Nondispersive infrared sensor
Nonlinear Schrödinger equation
Nonlinear wave
Nonlinear X-wave
Normal mode
O
Ocean surface wave
One-Way Wave Equation
Optical fiber
Optical waveguide
Oscillon
Out of phase
Outgoing longwave radiation
Overtone
Oyster wave energy converter
P
P-wave
Parabolic reflector
Periodic function
Periodic travelling wave
Phase (waves)
Phase difference
Phase modulation
Phase velocity
Phonon
Photon
Pitch shifter (audio processor)
Planck constant
Planck's law
Plane wave
Polarization (waves)
Ponto-geniculo-occipital waves, PGO waves
Power standing wave ratio
pp-wave spacetime
Pressure wave
Prism
Proton
Pulsar
Pulsar wind nebula
Pulse wave velocity
Pulse-density modulation
Q
QT interval
quadrature
Quadrature amplitude modulation
Quantum optics
Quantum tunneling
Quantum Zeno effect
R
Radar
Radar astronomy
Radar cross section
Radar gun
Radio propagation
Radio waves
Radiosity (heat transfer)
Rayleigh scattering
Rayleigh wave
Rayleigh–Jeans law
Redshift
Reflection coefficient
Reflection seismology
Refraction
Relativistic Doppler effect
Resonance
Resonator
Ring laser gyroscope
Ring modulation
Ring wave guide
Rip current
ripple
Ripple tank
Rogue wave (oceanography)
Rossby wave
Rossby-gravity waves
Rydberg constant
Rydberg formula
S
S-wave
Sampling (signal processing)
Sawtooth wave
Schrödinger equation
Sea state
Seiche
Seismic wave
Seismograph
Seismology
Sellmeier equation
Shallow water equations
Shive wave machine
Shock wave
Shortwave radio
Signal velocity
Significant wave height
Sine wave
Single-sideband modulation
Sinusoidal plane-wave solutions of the electromagnetic wave equation
Skywave
Slow-wave potential
Slow-wave sleep
Sneaker wave
Solitary wave
Soliton
Sonar
Sonic anemometers
Sound wave
Spark-gap transmitter
Spectroscopy
Speed of gravity
Speed of light
Speed of sound
Spike-and-wave
Spin wave
Square wave
Standing wave
Standing wave ratio
Stefan–Boltzmann law
Stokes drift
Stokes wave
Subharmonic
Super low frequency
Superharmonic
Superposition principle
Supersonic Wave Filter
Surface acoustic wave
Surface wave
Surface wave inversion
Surface-wave magnitude
Surface-wave-sustained discharge
Surfing
Sverdrup wave
Swell (ocean)
Synthetic-aperture radar
T
T wave
Terrestrial gamma-ray flash
Terrestrial stationary waves
Theta wave
Tidal bore
Tidal power
Tidal resonance
Tide
Tired light theory
Transverse mode
Transverse wave
Traveling plane wave
Traveling wave antenna
Traveling wave reactor
Traveling-wave tube
Triangle wave
Trigonometric function
Trojan wave packet
Tropical wave
Tsunami
Turbidity current
U
Ultra low frequency
Ultrasound
Ultraviolet catastrophe
Undertow (wave action)
Underwater wave
Undular bore
V
Velocity factor
Vestigial-sideband modulation
Vibrating string
Voltage standing wave ratio
Vortex
Vorticity
W
Wake
Wave (audience)
Wave base
Wave disk engine
Wave drag
Wave equation
Wave farm
Wave field synthesis
Wave function
Wave function collapse
Wave height
Wave impedance
Wave loading
Wave motor
Wave packet
Wave period
Wave plate
Wave pool
Wave pounding
Wave power
Wave propagation
Wave shoaling
Wave surface
Wave tank
Wave turbulence
Wave vector
Wave velocity
Wave–current interaction
Wave-cut platform
Wave-making resistance
Waveform
Waveform monitor
Wave-formed ripple
Wavefront
Wavefunction
Wavefunction collapse
Waveguide
Waveguide (acoustics)
Waveguide (electromagnetism)
Waveguide (optics)
Waveguide flange
Wavelength
Wavelength selective switching
Wavelength-division multiplexing
Wavelet
Wavelet transform
Wavenumber
Zonal wavenumber
Wavenumber-frequency diagram
Wave–particle duality
Waverider
Waves and shallow water
Waves in plasmas
Whitham's method
Wien approximation
Wien's displacement law
Wien's law
Wind wave
Windsurfing
X
X-band radar
X-ray
X-wave
Z
Zero-dispersion slope
Zero-dispersion wavelength
Zigzag
Zodiacal light
Zone plate
Wave topics
Wave topics | Index of wave articles | [
"Physics"
] | 1,550 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
480,178 | https://en.wikipedia.org/wiki/Infinitesimal%20rotation%20matrix | An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation.
While a rotation matrix is an orthogonal matrix $R^{\mathsf T} = R^{-1}$ representing an element of $SO(n)$ (the special orthogonal group), the differential of a rotation is a skew-symmetric matrix $A^{\mathsf T} = -A$ in the tangent space $\mathfrak{so}(n)$ (the special orthogonal Lie algebra), which is not itself a rotation matrix.
An infinitesimal rotation matrix has the form

$$I + d\theta\,A,$$

where $I$ is the identity matrix, $d\theta$ is vanishingly small, and $A \in \mathfrak{so}(n)$.

For example, if $A = L_x$, representing an infinitesimal three-dimensional rotation about the $x$-axis, a basis element of $\mathfrak{so}(3)$:

$$L_x = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad dL_x = I + d\theta\,L_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}.$$
The computation rules for infinitesimal rotation matrices are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. It turns out that the order in which infinitesimal rotations are applied is irrelevant.
Discussion
An infinitesimal rotation matrix is a skew-symmetric matrix (up to the identity part) where:

As any rotation matrix has a single real eigenvalue, which is equal to +1, the corresponding eigenvector defines the rotation axis.

Its module defines an infinitesimal angular displacement.

The shape of the matrix is as follows:

$$A = \begin{bmatrix} 1 & -d\varphi_z(t) & d\varphi_y(t) \\ d\varphi_z(t) & 1 & -d\varphi_x(t) \\ -d\varphi_y(t) & d\varphi_x(t) & 1 \end{bmatrix}$$
Associated quantities
Associated to an infinitesimal rotation matrix $A(t)$ is an infinitesimal rotation tensor $d\Phi(t) = A(t) - I$:

$$d\Phi(t) = \begin{bmatrix} 0 & -d\varphi_z(t) & d\varphi_y(t) \\ d\varphi_z(t) & 0 & -d\varphi_x(t) \\ -d\varphi_y(t) & d\varphi_x(t) & 0 \end{bmatrix}$$

Dividing it by the time difference $dt$ yields the angular velocity tensor:

$$\Omega = \frac{d\Phi(t)}{dt} = \begin{bmatrix} 0 & -\omega_z(t) & \omega_y(t) \\ \omega_z(t) & 0 & -\omega_x(t) \\ -\omega_y(t) & \omega_x(t) & 0 \end{bmatrix}$$
Order of rotations
These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. To understand what this means, consider

$$dA_{\mathbf{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -d\theta \\ 0 & d\theta & 1 \end{bmatrix}$$

First, test the orthogonality condition, $Q^{\mathsf T} Q = I$. The product is

$$dA_{\mathbf{x}}^{\mathsf T}\,dA_{\mathbf{x}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 + d\theta^2 & 0 \\ 0 & 0 & 1 + d\theta^2 \end{bmatrix}$$

differing from an identity matrix by second-order infinitesimals, discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.

Next, examine the square of the matrix,

$$dA_{\mathbf{x}}^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 - d\theta^2 & -2\,d\theta \\ 0 & 2\,d\theta & 1 - d\theta^2 \end{bmatrix}$$

Again discarding second-order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation,

$$dA_{\mathbf{y}} = \begin{bmatrix} 1 & 0 & d\varphi \\ 0 & 1 & 0 \\ -d\varphi & 0 & 1 \end{bmatrix}$$

Compare the products $dA_{\mathbf{x}}\,dA_{\mathbf{y}}$ to $dA_{\mathbf{y}}\,dA_{\mathbf{x}}$,

$$dA_{\mathbf{x}}\,dA_{\mathbf{y}} = \begin{bmatrix} 1 & 0 & d\varphi \\ d\theta\,d\varphi & 1 & -d\theta \\ -d\varphi & d\theta & 1 \end{bmatrix}, \qquad dA_{\mathbf{y}}\,dA_{\mathbf{x}} = \begin{bmatrix} 1 & d\theta\,d\varphi & d\varphi \\ 0 & 1 & -d\theta \\ -d\varphi & d\theta & 1 \end{bmatrix}$$

Since $d\theta\,d\varphi$ is second-order, we discard it: thus, to first order, multiplication of infinitesimal rotation matrices is commutative. In fact,

$$dA_{\mathbf{x}}\,dA_{\mathbf{y}} = dA_{\mathbf{y}}\,dA_{\mathbf{x}},$$

again to first order. In other words, the order in which infinitesimal rotations are applied is irrelevant.
This useful fact makes, for example, derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first-order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the Baker–Campbell–Hausdorff formula above with that of infinitesimal rotation matrices, where all the commutator terms will be second-order infinitesimals, one finds a bona fide vector space. Technically, this dismissal of any second-order terms amounts to Group contraction.
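The first-order commutativity derived above can be checked numerically (a sketch added here, not part of the original article): the commutator of two finite rotations shrinks quadratically as the angles shrink.

```python
import numpy as np

def rot_x(t):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t), np.cos(t)]])

def rot_y(t):
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

for eps in (1e-1, 1e-2, 1e-3):
    comm = rot_x(eps) @ rot_y(eps) - rot_y(eps) @ rot_x(eps)
    print(eps, np.abs(comm).max())   # scales like eps**2: second order
```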
Generators of rotations
Suppose we specify an axis of rotation by a unit vector $[x, y, z]$, and suppose we have an infinitely small rotation of angle $\Delta\theta$ about that vector. Expanding the rotation matrix as an infinite addition, and taking the first-order approach, the rotation matrix $\Delta R$ is represented as:

$$\Delta R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix}\Delta\theta = I + A\,\Delta\theta.$$

A finite rotation through angle $\theta$ about this axis may be seen as a succession of small rotations about the same axis. Approximating $\Delta\theta$ as $\theta/N$, where $N$ is a large number, a rotation of $\theta$ about the axis may be represented as:

$$R = \left(I + \frac{A\theta}{N}\right)^N \approx e^{A\theta}.$$
It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product Aθ is the "generator" of the particular rotation, being the vector associated with the matrix A. This shows that the rotation matrix and the axis-angle format are related by the exponential function.
One can derive a simple expression for the generator $G$. One starts with an arbitrary plane defined by a pair of perpendicular unit vectors $a$ and $b$. In this plane one can choose an arbitrary vector $x$ with perpendicular $y$. One then solves for $y$ in terms of $x$ and substituting into an expression for a rotation in a plane yields the rotation matrix $R$, which includes the generator $G = ba^{\mathsf T} - ab^{\mathsf T}$.

To include vectors outside the plane in the rotation one needs to modify the above expression for $R$ by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function, $R = e^{G\theta}$.
Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
Exponential map
Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series for $e^A$. For any skew-symmetric matrix $A$, $\exp(A)$ is always a rotation matrix.

An important practical example is the $3 \times 3$ case. In rotation group SO(3), it is shown that one can identify every $A \in \mathfrak{so}(3)$ with an Euler vector $\omega = \theta u$, where $u = (x, y, z)$ is a unit magnitude vector.

By the properties of the identification, $u$ is in the null space of $A$. Thus, $u$ is left invariant by $\exp(A)$ and is hence a rotation axis.

Using Rodrigues' rotation formula on matrix form with $\theta = \tfrac{\theta}{2} + \tfrac{\theta}{2}$, together with standard double angle formulae one obtains,

$$\exp(A) = I + \sin\theta\,\tilde{u} + (1 - \cos\theta)\,\tilde{u}^2 = I + 2\cos\tfrac{\theta}{2}\sin\tfrac{\theta}{2}\,\tilde{u} + 2\sin^2\tfrac{\theta}{2}\,\tilde{u}^2,$$

where $\tilde{u}$ is the skew-symmetric matrix with $\tilde{u}v = u \times v$.

This is the matrix for a rotation around axis $u$ by the angle $\theta$ in half-angle form. For full detail, see exponential map SO(3).

Notice that for infinitesimal angles second-order terms can be ignored and there remains $\exp(A) = I + \theta\,\tilde{u}$.
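The closed form above is easy to exercise numerically. The sketch below (added here for illustration, not part of the original article) builds exp(A) via Rodrigues' formula and checks it against a direct matrix exponential and the invariance of the axis.

```python
import numpy as np
from scipy.linalg import expm

def skew(u):
    """Cross-product (skew-symmetric) matrix of a 3-vector u."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def rodrigues(u, theta):
    """Rotation by theta about unit axis u: I + sin(t) K + (1 - cos(t)) K^2."""
    K = skew(u)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

u = np.array([1.0, 2.0, 2.0]) / 3.0   # unit axis
theta = 0.7
R = rodrigues(u, theta)

assert np.allclose(R, expm(theta * skew(u)))   # matches the exponential map
assert np.allclose(R @ u, u)                   # the axis is left invariant
```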
Relationship to skew-symmetric matrices
Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group $O(n)$ at the identity matrix; formally, the special orthogonal Lie algebra $\mathfrak{so}(n)$. In this sense, then, skew-symmetric matrices can be thought of as infinitesimal rotations.

Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra $\mathfrak{o}(n)$ of the Lie group $O(n)$. The Lie bracket on this space is given by the commutator:

$$[A, B] = AB - BA.$$

It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric:

$$[A, B]^{\mathsf T} = B^{\mathsf T} A^{\mathsf T} - A^{\mathsf T} B^{\mathsf T} = BA - AB = -[A, B].$$
The matrix exponential of a skew-symmetric matrix $A$ is then an orthogonal matrix $R$:

$$R = \exp(A) = \sum_{n=0}^{\infty} \frac{A^n}{n!}.$$

The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group $O(n)$, this connected component is the special orthogonal group $SO(n)$, consisting of all orthogonal matrices with determinant 1. So $R = \exp(A)$ will have determinant +1. Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix. In the particular important case of dimension $2$, the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, if a special orthogonal matrix $R$ has the form

$$R = \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$$

with $a^2 + b^2 = 1$. Therefore, putting $a = \cos\theta$ and $b = \sin\theta$, it can be written

$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \exp\left(\theta \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\right),$$

which corresponds exactly to the polar form $\cos\theta + i\sin\theta = e^{i\theta}$ of a complex number of unit modulus.
The exponential representation of an orthogonal matrix of order $n$ can also be obtained starting from the fact that in dimension $n$ any special orthogonal matrix $R$ can be written as $R = QSQ^{\mathsf T}$, where $Q$ is orthogonal and $S$ is a block diagonal matrix with blocks of order 2, plus one of order 1 if $n$ is odd; since each single block of order 2 is also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix $S$ writes as exponential of a skew-symmetric block matrix $\Sigma$ of the form above, $S = \exp(\Sigma)$, so that $R = Q\exp(\Sigma)Q^{\mathsf T} = \exp(Q\Sigma Q^{\mathsf T})$, exponential of the skew-symmetric matrix $Q\Sigma Q^{\mathsf T}$. Conversely, the surjectivity of the exponential map, together with the above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal matrices.
See also
Generators of rotations
Infinitesimal rotations
Infinitesimal rotation tensor
Infinitesimal transformation
Rotation group SO(3)#Infinitesimal rotations
Notes
References
Sources
Rotation
Mathematics of infinitesimals | Infinitesimal rotation matrix | [
"Physics",
"Mathematics"
] | 1,576 | [
"Physical phenomena",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Mathematics of infinitesimals"
] |
480,465 | https://en.wikipedia.org/wiki/Spectrophotometry | Spectrophotometry is a branch of electromagnetic spectroscopy concerned with the quantitative measurement of the reflection or transmission properties of a material as a function of wavelength. Spectrophotometry uses photometers, known as spectrophotometers, that can measure the intensity of a light beam at different wavelengths. Although spectrophotometry is most commonly applied to ultraviolet, visible, and infrared radiation, modern spectrophotometers can interrogate wide swaths of the electromagnetic spectrum, including x-ray, ultraviolet, visible, infrared, or microwave wavelengths.
Overview
Spectrophotometry is a tool that hinges on the quantitative analysis of molecules depending on how much light is absorbed by colored compounds. Important features of spectrophotometers are spectral bandwidth (the range of colors it can transmit through the test sample), the percentage of sample transmission, the logarithmic range of sample absorption, and sometimes a percentage of reflectance measurement.
A spectrophotometer is commonly used for the measurement of transmittance or reflectance of solutions, transparent or opaque solids such as polished glass, or gases. Many biochemicals are colored, that is, they absorb visible light, and can therefore be measured by colorimetric procedures; even colorless biochemicals can often be converted through chromogenic (color-forming) reactions into colored compounds suitable for colorimetric analysis. However, spectrophotometers can also be designed to measure diffusivity over any of the listed light ranges, which usually cover around 200–2500 nm, using different controls and calibrations. Within these ranges of light, the instrument must be calibrated using standards that vary in type depending on the wavelength of the photometric determination.
An example of an experiment in which spectrophotometry is used is the determination of the equilibrium constant of a solution. A certain chemical reaction within a solution may occur in a forward and reverse direction, where reactants form products and products break down into reactants. At some point, this chemical reaction will reach a point of balance called an equilibrium point. To determine the respective concentrations of reactants and products at this point, the light transmittance of the solution can be tested using spectrophotometry. The amount of light that passes through the solution is indicative of the concentration of certain chemicals that do not allow light to pass through.
The absorption of light is due to the interaction of light with the electronic and vibrational modes of molecules. Each type of molecule has an individual set of energy levels associated with the makeup of its chemical bonds and nuclei; it thus absorbs light of specific wavelengths, or energies, resulting in spectral properties unique to that molecule.
The use of spectrophotometers spans various scientific fields, such as physics, materials science, chemistry, biochemistry, chemical engineering, and molecular biology. They are widely used in many industries including semiconductors, laser and optical manufacturing, printing and forensic examination, as well as in laboratories for the study of chemical substances. Spectrophotometry is often used in measurements of enzyme activities, determinations of protein concentrations, determinations of enzymatic kinetic constants, and measurements of ligand binding reactions. Ultimately, a spectrophotometer is able to determine, depending on the control or calibration, what substances are present in a target and exactly how much through calculations of observed wavelengths.
In astronomy, the term spectrophotometry refers to the measurement of the spectrum of a celestial object in which the flux scale of the spectrum is calibrated as a function of wavelength, usually by comparison with an observation of a spectrophotometric standard star, and corrected for the absorption of light by the Earth's atmosphere.
History
Invented by Arnold O. Beckman in 1940, the spectrophotometer was created with the aid of his colleagues at his company National Technical Laboratories, founded in 1935, which would become Beckman Instrument Company and ultimately Beckman Coulter. It was conceived as a solution to earlier spectrophotometers, which could not measure in the ultraviolet correctly. Beckman started with Model A, in which a glass prism was used to disperse the light; because glass absorbs ultraviolet, this did not give satisfactory results, so in Model B there was a shift from a glass to a quartz prism, which allowed better absorbance measurements. From there, Model C was born with an adjustment to the wavelength resolution; only three units of it were produced. The last and most popular model became Model D, better recognized now as the DU spectrophotometer, which contained the instrument case, a hydrogen lamp with an ultraviolet continuum, and a better monochromator. It was produced from 1941 to 1976; the price in 1941 was US$723 (far-UV accessories were an option at additional cost). In the words of Nobel chemistry laureate Bruce Merrifield, it was "probably the most important instrument ever developed towards the advancement of bioscience."
Once the DU was discontinued in 1976, Hewlett-Packard created the first commercially available diode-array spectrophotometer in 1979, known as the HP 8450A. Diode-array spectrophotometers differed from the original spectrophotometer created by Beckman in that the HP 8450A was the first single-beam, microprocessor-controlled spectrophotometer to scan multiple wavelengths at a time in seconds. It irradiates the sample with polychromatic light, which the sample absorbs depending on its properties; the transmitted light is then dispersed by a grating onto the photodiode array, which detects the wavelength region of the spectrum. Since then, the creation and implementation of spectrophotometry devices has increased immensely, and the spectrophotometer has become one of the most innovative instruments of our time.
Design
There are two major classes of devices: single-beam and double-beam. A double-beam spectrophotometer compares the light intensity between two light paths, one path containing a reference sample and the other the test sample. A single-beam spectrophotometer measures the relative light intensity of the beam before and after a test sample is inserted. Although comparison measurements from double-beam instruments are easier and more stable, single-beam instruments can have a larger dynamic range and are optically simpler and more compact. Additionally, some specialized instruments, such as spectrophotometers built onto microscopes or telescopes, are single-beam instruments due to practicality.
Historically, spectrophotometers use a monochromator containing a diffraction grating to produce the analytical spectrum. The grating can either be movable or fixed. If a single detector, such as a photomultiplier tube or photodiode is used, the grating can be scanned stepwise (scanning spectrophotometer) so that the detector can measure the light intensity at each wavelength (which will correspond to each "step"). Arrays of detectors (array spectrophotometer), such as charge-coupled devices (CCD) or photodiode arrays (PDA) can also be used. In such systems, the grating is fixed and the intensity of each wavelength of light is measured by a different detector in the array. Additionally, most modern mid-infrared spectrophotometers use a Fourier transform technique to acquire the spectral information. This technique is called Fourier transform infrared spectroscopy.
When making transmission measurements, the spectrophotometer quantitatively compares the fraction of light that passes through a reference solution and a test solution, then electronically compares the intensities of the two signals and computes the percentage of transmission of the sample compared to the reference standard. For reflectance measurements, the spectrophotometer quantitatively compares the fraction of light that reflects from the reference and test samples. Light from the source lamp is passed through a monochromator, which diffracts the light into a "rainbow" of wavelengths through a rotating prism and outputs narrow bandwidths of this diffracted spectrum through a mechanical slit on the output side of the monochromator. These bandwidths are transmitted through the test sample. Then the photon flux density (usually in watts per square meter) of the transmitted or reflected light is measured with a photodiode, CCD or other light sensor. The transmittance or reflectance value for each wavelength of the test sample is then compared with the transmission or reflectance values from the reference sample. Most instruments will apply a logarithmic function to the linear transmittance ratio to calculate the absorbance of the sample, a value which is proportional to the concentration of the chemical being measured.
In short, the sequence of events in a scanning spectrophotometer is as follows:
The light source is shone into a monochromator, diffracted into a rainbow, and split into two beams. It is then scanned through the sample and the reference solutions.
Fractions of the incident wavelengths are transmitted through, or reflected from, the sample and the reference.
The resultant light strikes the photodetector device, which compares the relative intensity of the two beams.
Electronic circuits convert the relative currents into linear transmission percentages or absorbance or concentration values.
In an array spectrophotometer, the sequence is as follows:
The light source is shone into the sample and focused into a slit
The transmitted light is refracted into a rainbow with the reflection grating
The resulting light strikes the photodetector device which compares the intensity of the beam
Electronic circuits convert the relative currents into linear transmission percentages and/or absorbance/concentration values
Many older spectrophotometers must be calibrated by a procedure known as "zeroing", to balance the null current output of the two beams at the detector. The transmission of a reference substance is set as a baseline (datum) value, so the transmission of all other substances is recorded relative to the initial "zeroed" substance. The spectrophotometer then converts the transmission ratio into an absorbance, from which the concentration of specific components of the test sample relative to the initial substance can be determined.
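As a rough numerical sketch of this conversion (the detector readings below are invented for illustration and not tied to any particular instrument), absorbance follows from the ratio of sample to zeroed-reference intensity:

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Convert a transmittance ratio into absorbance: A = -log10(I / I0)."""
    transmittance = sample_intensity / blank_intensity
    return -math.log10(transmittance)

# Illustrative detector readings (arbitrary units)
I0 = 980.0   # light through the "zeroed" reference (blank)
I = 245.0    # light through the test sample

A = absorbance(I, I0)
print(f"Transmittance = {I / I0:.3f}, Absorbance = {A:.3f}")  # 0.250, 0.602
```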
Types of spectrophotometers
Common types of spectrophotometers include:

UV-Vis spectrophotometer: measures light absorption in the UV and visible ranges (200–800 nm). Used for quantification of many inorganic and organic compounds.

Infrared spectrophotometer: measures infrared light absorption, allowing identification of chemical bonds and functional groups.

Atomic absorption spectrophotometer (AAS): uses absorption of light by vaporized analyte atoms to determine concentrations of metals and metalloids.

Fluorescence spectrophotometer: measures the intensity of fluorescent light emitted from samples after excitation. Allows highly sensitive analysis of samples with native or induced fluorescence.

Colorimeter: a simple spectrophotometer used to measure light absorption for colorimetric assays and tests.
Applications in biochemistry
Spectrophotometry is an important technique used in many biochemical experiments that involve DNA, RNA, and protein isolation, enzyme kinetics, and biochemical analyses. Since samples in these applications are not readily available in large quantities, they are especially suited to being analyzed with this non-destructive technique. In addition, precious sample can be conserved by using a micro-volume platform, where as little as 1 μL of sample is required for complete analyses.
A brief explanation of the procedure of spectrophotometry includes comparing the absorbency of a blank sample that does not contain a colored compound to a sample that contains a colored compound. This coloring can be accomplished by either a dye such as Coomassie Brilliant Blue G-250 dye measured at 595 nm or by an enzymatic reaction as seen between β-galactosidase and ONPG (turns sample yellow) measured at 420 nm. The spectrophotometer is used to measure colored compounds in the visible region of light (between 350 nm and 800 nm), thus it can be used to find more information about the substance being studied.
In biochemical experiments, a chemical and/or physical property is chosen and the procedure that is used is specific to that property to derive more information about the sample, such as the quantity, purity, enzyme activity, etc.
Spectrophotometry can be used for a number of techniques such as determining the optimal wavelength for absorbance of samples, determining the optimal pH for absorbance of samples, determining concentrations of unknown samples, and determining the pKa of various samples. Spectrophotometry is also a helpful process for protein purification and can be used as a method to create optical assays of a compound. Spectrophotometric data can be used in conjunction with the Beer–Lambert equation, A = εℓc, to determine various relationships between transmittance and concentration, and between absorbance and concentration. Because a spectrophotometer measures the absorbance of a compound through its color, a dye-binding substance can be added so that it can undergo a color change and be measured. It is possible to determine the concentrations in a two-component mixture using the absorption spectra of standard solutions of each component. To do this, it is necessary to know the extinction coefficient of the mixture at two wavelengths and the extinction coefficients of solutions that contain known weights of the two components. In addition to the traditional Beer–Lambert model, cuvette-based label-free spectroscopy can be used, which adds an optical filter in the pathway of the light, enabling the spectrophotometer to quantify the concentration, size and refractive index of samples. Spectrophotometers have been developed and improved over decades and are widely used among chemists. Additionally, spectrophotometers are specialized to measure either UV or visible light wavelength absorbance values. The spectrophotometer is considered a highly accurate, sensitive and therefore precise instrument, especially in determining color change. This method is also convenient for use in laboratory experiments because it is an inexpensive and relatively simple process.
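The two-component analysis described above reduces to solving a small linear system. The sketch below (Python/NumPy; all extinction coefficients and absorbance readings are made-up illustrative numbers, not measured values) assumes Beer–Lambert additivity, A(λ) = ε₁(λ)ℓc₁ + ε₂(λ)ℓc₂, at two wavelengths:

```python
import numpy as np

# Hypothetical molar extinction coefficients (L mol^-1 cm^-1) of the two
# pure components at two analysis wavelengths, from standard solutions.
#            component 1   component 2
E = np.array([[15000.0,      2000.0],    # at wavelength 1
              [ 3000.0,     11000.0]])   # at wavelength 2

path_length = 1.0                        # cuvette path length (cm)
A = np.array([0.62, 0.41])               # measured absorbances of the mixture

# Beer-Lambert additivity: A = (E * path_length) @ c  ->  solve for c
c = np.linalg.solve(E * path_length, A)
print(f"c1 = {c[0]:.2e} M, c2 = {c[1]:.2e} M")
```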
UV-visible spectrophotometry
Most spectrophotometers are used in the UV and visible regions of the spectrum, and some of these instruments also operate into the near-infrared region as well. The concentration of a protein can be estimated by measuring the OD at 280 nm due to the presence of tryptophan, tyrosine and phenylalanine. This method is not very accurate since the composition of proteins varies greatly and proteins with none of these amino acids do not have maximum absorption at 280 nm. Nucleic acid contamination can also interfere. This method requires a spectrophotometer capable of measuring in the UV region with quartz cuvettes.
Ultraviolet-visible (UV-vis) spectroscopy involves energy levels that excite electronic transitions. Absorption of UV-vis light excites molecules that are in ground-states to their excited-states.
Visible region (400–700 nm) spectrophotometry is used extensively in colorimetry science; it operates best over the absorbance range of 0.2–0.8 O.D. Ink manufacturers, printing companies, textile vendors, and many others need the data provided through colorimetry. They take readings about every 5–20 nanometers along the visible region and produce a spectral reflectance curve or a data stream for alternative presentations. These curves can be used to test a new batch of colorant against specifications, e.g., ISO printing standards.
Traditional visible-region spectrophotometers cannot detect whether a colorant or the base material has fluorescence. This can make it difficult to manage color issues if, for example, one or more of the printing inks is fluorescent. Where a colorant contains fluorescence, a bi-spectral fluorescent spectrophotometer is used. There are two major setups for visual spectrum spectrophotometers, d/8 (spherical) and 0/45; the names refer to the geometry of the light source, observer, and interior of the measurement chamber. Scientists use this instrument to measure the amount of compounds in a sample. If the compound is more concentrated, more light is absorbed by the sample; within small ranges, the Beer–Lambert law holds and the absorbance between samples varies linearly with concentration. In the case of printing measurements, two alternative settings are commonly used, with or without a UV filter, to better control the effect of UV brighteners within the paper stock.
Samples are usually prepared in cuvettes; depending on the region of interest, they may be constructed of glass, plastic (visible spectrum region of interest), or quartz (Far UV spectrum region of interest). Some applications require small volume measurements which can be performed with micro-volume platforms.
Applications
Estimating dissolved organic carbon concentration
Specific ultraviolet absorbance for metric of aromaticity
Bial's test for concentration of pentoses
Experimental application
As described in the applications section, spectrophotometry can be used in both qualitative and quantitative analysis of DNA, RNA, and proteins. In qualitative analysis, spectrophotometers are used to record spectra of compounds by scanning broad wavelength regions to determine the absorbance properties (the intensity of the color) of the compound at each wavelength. One experiment that demonstrates the various uses of visible spectrophotometry is the separation of β-galactosidase from a mixture of various proteins. Largely, spectrophotometry is best used to help quantify the amount of purification a sample has undergone relative to total protein concentration. By running an affinity chromatography, β-galactosidase can be isolated and tested by reacting collected samples with ortho-nitrophenyl-β-galactoside (ONPG) and determining whether the sample turns yellow. Following this, testing the sample at 420 nm for specific interaction with ONPG and at 595 nm with a Bradford assay allows the amount of purification to be assessed quantitatively. In addition, spectrophotometry can be used in tandem with other techniques, such as SDS-PAGE electrophoresis, to purify and isolate various protein samples.
IR spectrophotometry
Spectrophotometers designed for the infrared region are quite different because of the technical requirements of measurement in that region. One major factor is the type of photosensors that are available for different spectral regions, but infrared measurement is also challenging because virtually everything emits IR as thermal radiation, especially at wavelengths beyond about 5 μm.
Another complication is that quite a few materials such as glass and plastic absorb infrared, making it incompatible as an optical medium. Ideal optical materials are salts, which do not absorb strongly. Samples for IR spectrophotometry may be smeared between two discs of potassium bromide or ground with potassium bromide and pressed into a pellet. Where aqueous solutions are to be measured, insoluble silver chloride is used to construct the cell.
Spectroradiometers
Spectroradiometers, which operate almost like the visible region spectrophotometers, are designed to measure the spectral density of illuminants. Applications may include evaluation and categorization of lighting for sales by the manufacturer, or for the customers to confirm the lamp they decided to purchase is within their specifications. Components:
The light source shines onto or through the sample.
The sample transmits or reflects light.
The detector detects how much light was reflected from or transmitted through the sample.
The detector then converts how much light the sample transmitted or reflected into a number.
See also
Atomic absorption spectrophotometry
Atomic emission spectroscopy
Inductively coupled plasma atomic emission spectroscopy
Inductively coupled plasma mass spectrometry
LBOZ
Microspectrophotometry
Slope spectroscopy
Spectroradiometry
References
External links
Spectrophotometry Handbook
Spectroscopy
Photochemistry
Radiometry | Spectrophotometry | [
"Physics",
"Chemistry",
"Engineering"
] | 4,096 | [
"Telecommunications engineering",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"nan",
"Spectroscopy",
"Radiometry"
] |
481,053 | https://en.wikipedia.org/wiki/Hund%27s%20rule%20of%20maximum%20multiplicity | Hund's rule of maximum multiplicity is a rule based on observation of atomic spectra, which is used to predict the ground state of an atom or molecule with one or more open electronic shells. The rule states that for a given electron configuration, the lowest energy term is the one with the greatest value of spin multiplicity. This implies that if two or more orbitals of equal energy are available, electrons will occupy them singly before filling them in pairs. The rule, discovered by Friedrich Hund in 1925, is of important use in atomic chemistry, spectroscopy, and quantum chemistry, and is often abbreviated to Hund's rule, ignoring Hund's other two rules.
Atoms
The multiplicity of a state is defined as 2S + 1, where S is the total electronic spin. A high multiplicity state is therefore the same as a high-spin state. The lowest-energy state with maximum multiplicity usually has unpaired electrons all with parallel spin. Since the spin of each electron is 1/2, the total spin is one-half the number of unpaired electrons, and the multiplicity is the number of unpaired electrons + 1. For example, the nitrogen atom ground state has three unpaired electrons of parallel spin, so that the total spin is 3/2 and the multiplicity is 4.
The lower energy and increased stability of the atom arise because the high-spin state has unpaired electrons of parallel spin, which must reside in different spatial orbitals according to the Pauli exclusion principle. An early but incorrect explanation of the lower energy of high multiplicity states was that the different occupied spatial orbitals create a larger average distance between electrons, reducing electron-electron repulsion energy. However, quantum-mechanical calculations with accurate wave functions since the 1970s have shown that the actual physical reason for the increased stability is a decrease in the screening of electron-nuclear attractions, so that the unpaired electrons can approach the nucleus more closely and the electron-nuclear attraction is increased.
As a result of Hund's rule, constraints are placed on the way atomic orbitals are filled in the ground state using the Aufbau principle. Before any two electrons occupy an orbital in a subshell, other orbitals in the same subshell must first each contain one electron. Also, the electrons filling a subshell will have parallel spin before the shell starts filling up with the opposite spin electrons (after the first orbital gains a second electron). As a result, when filling up atomic orbitals, the maximum number of unpaired electrons (and hence maximum total spin state) is assured.
For example, in the oxygen atom, the 2p4 subshell arranges its electrons as [↑↓] [↑] [↑] rather than [↑↓] [↑] [↓] or [↑↓] [↑↓][ ]. The manganese (Mn) atom has a 3d5 electron configuration with five unpaired electrons all of parallel spin, corresponding to a 6S ground state. The superscript 6 is the value of the multiplicity, corresponding to five unpaired electrons with parallel spin in accordance with Hund's rule.
An atom can have a ground state with two incompletely filled subshells which are close in energy. The lightest example is the chromium (Cr) atom with a 3d54s electron configuration. Here there are six unpaired electrons all of parallel spin for a 7S ground state.
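The filling procedure implied by Hund's rule is mechanical enough to express in a few lines of code. The sketch below (Python; a deliberately simplified illustration that treats a single degenerate subshell and ignores exceptions such as the chromium-type configurations mentioned above) occupies orbitals singly with parallel spins before pairing, then reports the multiplicity 2S + 1:

```python
def hund_fill(n_orbitals, n_electrons):
    """Fill a degenerate subshell per Hund's rule.

    Returns per-orbital electron counts and the multiplicity 2S + 1.
    """
    assert n_electrons <= 2 * n_orbitals, "subshell overfilled"
    occ = [0] * n_orbitals
    for i in range(n_electrons):           # first pass: singly occupy;
        occ[i % n_orbitals] += 1           # second pass: pair up
    unpaired = sum(1 for e in occ if e == 1)
    total_spin = unpaired / 2              # S = (number of unpaired e-) / 2
    multiplicity = int(2 * total_spin + 1)
    return occ, multiplicity

# Oxygen 2p^4: [2, 1, 1] -> two unpaired electrons, triplet (2S+1 = 3)
print(hund_fill(3, 4))   # ([2, 1, 1], 3)
# Nitrogen 2p^3: three unpaired electrons, quartet (2S+1 = 4)
print(hund_fill(3, 3))   # ([1, 1, 1], 4)
# Manganese 3d^5: five unpaired electrons, sextet (2S+1 = 6)
print(hund_fill(5, 5))   # ([1, 1, 1, 1, 1], 6)
```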
Molecules
Although most stable molecules have closed electron shells, a few have unpaired electrons for which Hund's rule is applicable. The most important example is the dioxygen molecule, O2, which has two degenerate pi antibonding molecular orbitals (π*) occupied by only two electrons. In accordance with Hund's rule, the ground state is triplet oxygen with two unpaired electrons in singly occupied orbitals. The singlet oxygen state with one doubly occupied and one empty π* is an excited state with different chemical properties and greater reactivity than the ground state.
Exception
In 2004, researchers reported the synthesis of 5-dehydro-m-xylylene (DMX), the first organic molecule known to violate Hund's rule.
See also
Hund's rules (includes this plus 2 other rules)
High spin metal complexes
References
External links
A glossary entry hosted on the web site of the Chemistry Department of Purdue University
Quantum chemistry
Rules | Hund's rule of maximum multiplicity | [
"Physics",
"Chemistry"
] | 922 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
481,100 | https://en.wikipedia.org/wiki/Analyticity%20of%20holomorphic%20functions | In complex analysis, a complex-valued function of a complex variable :
is said to be holomorphic at a point if it is differentiable at every point within some open disk centered at , and
is said to be analytic at if in some open disk centered at it can be expanded as a convergent power series (this implies that the radius of convergence is positive).
One of the most important theorems of complex analysis is that holomorphic functions are analytic and vice versa. Among the corollaries of this theorem are
the identity theorem that two holomorphic functions that agree at every point of an infinite set S with an accumulation point inside the intersection of their domains also agree everywhere in every connected open subset of their domains that contains the set S, and
the fact that, since power series are infinitely differentiable, so are holomorphic functions (this is in contrast to the case of real differentiable functions), and
the fact that the radius of convergence is always the distance from the center to the nearest non-removable singularity; if there are no singularities (i.e., if is an entire function), then the radius of convergence is infinite. Strictly speaking, this is not a corollary of the theorem but rather a by-product of the proof.
no bump function on the complex plane can be entire. In particular, on any connected open subset of the complex plane, there can be no bump function defined on that set which is holomorphic on the set. This has important ramifications for the study of complex manifolds, as it precludes the use of partitions of unity. In contrast the partition of unity is a tool which can be used on any real manifold.
Proof
The argument, first given by Cauchy, hinges on Cauchy's integral formula and the power series expansion of the expression

\frac{1}{w - z} .

Let D be an open disk centered at a and suppose f is differentiable everywhere within an open neighborhood containing the closure of D. Let C be the positively oriented (i.e., counterclockwise) circle which is the boundary of D and let z be a point in D. Starting with Cauchy's integral formula, we have

f(z) = \frac{1}{2\pi i}\oint_C \frac{f(w)}{w-z}\,\mathrm dw = \frac{1}{2\pi i}\oint_C \frac{f(w)}{(w-a)\left(1 - \frac{z-a}{w-a}\right)}\,\mathrm dw = \frac{1}{2\pi i}\oint_C \sum_{n=0}^{\infty} \left(\frac{z-a}{w-a}\right)^{\!n} \frac{f(w)}{w-a}\,\mathrm dw .

Interchange of the integral and infinite sum is justified by observing that |f(w)| is bounded on C by some positive number M, while for all w in C

\left|\frac{z-a}{w-a}\right| \le r < 1

for some positive r as well. We therefore have

\left|\left(\frac{z-a}{w-a}\right)^{\!n} \frac{f(w)}{w-a}\right| \le \frac{M r^n}{\rho}

on C, where ρ is the radius of C, and as the Weierstrass M-test shows the series converges uniformly over C, the sum and the integral may be interchanged.

As the factor (z − a)^n does not depend on the variable of integration w, it may be factored out to yield

f(z) = \sum_{n=0}^{\infty} \left(\frac{1}{2\pi i}\oint_C \frac{f(w)}{(w-a)^{n+1}}\,\mathrm dw\right)(z-a)^n ,

which has the desired form of a power series in z:

f(z) = \sum_{n=0}^{\infty} c_n (z-a)^n

with coefficients

c_n = \frac{1}{2\pi i}\oint_C \frac{f(w)}{(w-a)^{n+1}}\,\mathrm dw .
Remarks
Since power series can be differentiated term-wise, applying the above argument in the reverse direction and the power series expression for f gives

f^{(n)}(a) = \frac{n!}{2\pi i}\oint_C \frac{f(w)}{(w-a)^{n+1}}\,\mathrm dw .

This is a Cauchy integral formula for derivatives. Therefore the power series obtained above is the Taylor series of f.
The argument works if z is any point that is closer to the center a than is any singularity of f. Therefore, the radius of convergence of the Taylor series cannot be smaller than the distance from a to the nearest singularity (nor can it be larger, since power series have no singularities in the interiors of their circles of convergence).
A special case of the identity theorem follows from the preceding remark. If two holomorphic functions agree on a (possibly quite small) open neighborhood of a, then they coincide on the open disk B_d(a), where d is the distance from a to the nearest singularity.
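The coefficient formula above can be checked numerically by discretizing the contour integral with the trapezoidal rule on a circle, which converges very rapidly for analytic integrands. The following sketch (Python/NumPy; the choice of function, center, and radius is an arbitrary illustration) recovers the Taylor coefficients of exp about a = 0, which should equal 1/n!:

```python
import numpy as np
from math import factorial

def taylor_coeffs(f, a=0.0, radius=1.0, n_coeffs=8, n_samples=256):
    """Approximate c_n = (1/(2*pi*i)) * contour integral of f(w)/(w-a)^(n+1) dw
    over the circle |w - a| = radius, using the trapezoidal rule."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    w = a + radius * np.exp(1j * t)          # sample points on the contour C
    dw_dt = 1j * radius * np.exp(1j * t)     # derivative of the parametrization
    fw = f(w)
    # Trapezoid rule for a periodic integrand: mean of samples times 2*pi
    return np.array([
        np.mean(fw * dw_dt / (w - a) ** (n + 1)) * (2.0 * np.pi) / (2j * np.pi)
        for n in range(n_coeffs)
    ])

c = taylor_coeffs(np.exp)
expected = np.array([1.0 / factorial(n) for n in range(8)])
print(np.allclose(c, expected))   # True: the c_n match 1/n! for exp about 0
```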
External links
holomorphic functions
Theorems in complex analysis
Article proofs | Analyticity of holomorphic functions | [
"Mathematics"
] | 722 | [
"Theorems in mathematical analysis",
"Article proofs",
"Theorems in complex analysis"
] |
23,892,026 | https://en.wikipedia.org/wiki/Sharq%20El%20Owainat | Sharq El Owainat, or East Oweinat is a 110,000 acre desert land reclamation project that started in 1991, in the New Valley Governorate, Egypt. It is in a remote location in the Western Desert in the extreme south-west of the country, east of Oweinat Mountain, delimiting Egypt's south western border with Libya and Sudan. The project is operated by the Egyptian Military's National Company for Reclamation and Agriculture in East Oweinat, and in 2021 a further 1.4 million acres were added to its concession.
Water management
The Sharq El Owainat project depends on “fossil water” from the Nubian Sandstone Aquifer, which recharges slowly and is considered a non-renewable resource. The water is pumped from underground and delivered to sprinklers that rotate around a central pivot point, creating green crop circles.
Operators
The initial phase of the project resulted in 27,000 feddans of barren desert land being converted to fertile land. There are about 400 water wells in the area, with a further 250 under construction. There is also a nursery that includes 26 greenhouses.
The National Company for Reclamation and Agriculture in East Oweinat has undertaken a large part of the land cultivation, in addition to selling vast plots to other government agencies. The Awkaf Agency owns 48,000 acres of which it has cultivated 20,000.
In addition to Egyptian government companies, a number of private and foreign companies operate in Oweinat. For example, the United Arab Emirates' Jenaan owns 50,000 acres and Al Dahra 23,500 acres. Jenaan's agreement also included signing an agreement with the national airline of Egypt, EgyptAir Express (subsidiary of EgyptAir), to operate a weekly flight from Cairo International Airport to Sharq El Owainat Airport in order to serve the movement of workers and investors to encourage agricultural investment in the region. The flights began 1 November 2009 for an initial 1-year period.
Their cultivation works are seen by some researchers as part of a UAE policy towards consolidating a pivotal role as a food re-export hub, intensifying the industrialisation and commodification of agriculture in the region.
See also
Sharq El Owainat Airport
New Valley Governorate
Toshka
New Valley Project
References
New Valley Governorate
Geography of Egypt
Agriculture in Egypt
Interbasin transfer | Sharq El Owainat | [
"Environmental_science"
] | 489 | [
"Hydrology",
"Interbasin transfer"
] |
23,892,932 | https://en.wikipedia.org/wiki/GoldSim | GoldSim is dynamic, probabilistic simulation software developed by GoldSim Technology Group.
This general-purpose simulator is a hybrid of several simulation approaches, combining an extension of system dynamics with some aspects of discrete event simulation, and embedding the dynamic simulation engine within a Monte Carlo simulation framework.
While it is a general-purpose simulator, GoldSim has been most extensively used for environmental and engineering risk analysis, with applications in the areas of water resource management, mining, radioactive waste management, geological carbon sequestration, aerospace mission risk analysis and energy.
History
In 1990, Golder Associates, an international engineering consulting firm, was asked by the United States Department of Energy (DOE) to develop probabilistic simulation software that could be used to help with decision support and management within the Office of Civilian Radioactive Waste Management. The results of this effort were two DOS-based programs (RIP and STRIP), which were used to support radioactive waste management projects within the DOE.
In 1996, in an effort funded by Golder Associates, the US DOE, the Japan Nuclear Cycle Development Institute (currently the Japan Atomic Energy Agency) and the Spanish National Radioactive Waste Company (ENRESA), the capabilities of RIP and STRIP were incorporated into a general purpose Windows-based simulator called GoldSim. Subsequent funding was also provided by NASA.
Initially only offered to the original funding organizations, GoldSim was released to the public in 2002. In 2004, GoldSim Technology Group LLC was spun off from Golder Associates and is now a wholly independent company.
Notable applications include providing the simulation framework for: 1) the Yucca Mountain Repository Performance Assessment model developed by Sandia National Laboratories; 2) a comprehensive system-level computational model for performance assessment of geological sequestration of CO2 developed by Los Alamos National Laboratory; 3) a flood operations model to help better understand and fine tune operations of a large dam used for water supply and flood control in Queensland, Australia; and 4) models for simulating risks associated with future crewed space missions by NASA Ames Research Center.
Modeling Environment
GoldSim provides a visual and hierarchical modeling environment, which allows users to construct models by adding “elements” (model objects) that represent data, equations, processes or events, and linking them together into graphical representations that resemble influence diagrams. Influence arrows are automatically drawn as elements are referenced by other elements. Complex systems can be translated into hierarchical GoldSim models by creating layer of “containers” (or sub-models). Visual representations and hierarchical structures help users to build very large, complex models that can still be explained to interested stakeholders (e.g., government regulators, elected officials, and the public).
Though it is primarily a continuous simulator, GoldSim has a number of features typically associated with discrete simulators. By combining these two simulation methods, systems that are best represented using both continuous and discrete dynamics can often be more accurately simulated. Examples include tracking the quantity of water in a reservoir that is subject to both continuous inflows and outflows, as well as sudden storm events; and tracking the quantity of fuel in a space vehicle as it is subjected to random perturbations (e.g., component failures, extreme environmental conditions).
Because the software was originally developed for complex environmental applications, in which many inputs are uncertain and/or stochastic, in addition to being a dynamic simulator, GoldSim is a Monte Carlo simulator, such that inputs can be defined as distributions and the entire system simulated a large number of times to provide probabilistic outputs. As such, the software incorporates a number of computational features to facilitate probabilistic simulation of complex systems, including tools for generating and correlating stochastic time series, advanced sampling capabilities (including latin hypercube sampling, nested Monte Carlo analysis, and importance sampling), and support for distributed processing.
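GoldSim itself is proprietary, but the core pattern described above, a dynamic system model wrapped in a Monte Carlo loop, can be sketched in a few lines. The following illustration (Python/NumPy; the reservoir capacity, demand, and inflow distributions are invented for the example and do not come from GoldSim) combines continuous inflow/outflow dynamics with discrete random storm events and reports probabilistic outputs:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N_REAL, N_DAYS = 1000, 365        # Monte Carlo realizations x time steps
CAPACITY, DEMAND = 500.0, 2.0     # storage cap and daily draw (volume units)

# Uncertain inputs: lognormal daily inflow plus rare discrete storm events
inflow = rng.lognormal(mean=0.5, sigma=0.6, size=(N_REAL, N_DAYS))
storms = rng.random((N_REAL, N_DAYS)) < 0.01          # ~1% chance per day
inflow += storms * rng.uniform(20.0, 80.0, size=(N_REAL, N_DAYS))

storage = np.full(N_REAL, 250.0)                      # initial condition
trajectory = np.empty((N_REAL, N_DAYS))
for day in range(N_DAYS):                             # continuous + discrete dynamics
    storage = np.clip(storage + inflow[:, day] - DEMAND, 0.0, CAPACITY)
    trajectory[:, day] = storage

# Probabilistic outputs: 5th/50th/95th percentile storage at year end
p5, p50, p95 = np.percentile(trajectory[:, -1], [5, 50, 95])
print(f"End-of-year storage: p5={p5:.0f}, median={p50:.0f}, p95={p95:.0f}")
print(f"P(storage ever empty) = {np.mean((trajectory == 0).any(axis=1)):.3f}")
```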
See also
Computer Simulation
Monte Carlo Simulation
List of computer simulation software
References
External links
Simulation software
Risk management software
Scientific simulation software
Mathematical software
Environmental science software
Numerical software
Simulation programming languages
Probabilistic software
Science software for Windows | GoldSim | [
"Mathematics",
"Environmental_science"
] | 835 | [
"Probabilistic software",
"Environmental science software",
"Numerical software",
"Mathematical software"
] |
23,895,065 | https://en.wikipedia.org/wiki/Ehrenfest%20equations | Ehrenfest equations (named after Paul Ehrenfest) are equations which describe changes in specific heat capacity and derivatives of specific volume in second-order phase transitions. The Clausius–Clapeyron relation does not make sense for second-order phase transitions, as both specific entropy and specific volume do not change in second-order phase transitions.
Quantitative consideration
Ehrenfest equations are the consequence of continuity of specific entropy s and specific volume v, which are first derivatives of specific Gibbs free energy, in second-order phase transitions. If one considers specific entropy s as a function of temperature T and pressure P, then its differential is:

ds = \left(\frac{\partial s}{\partial T}\right)_P dT + \left(\frac{\partial s}{\partial P}\right)_T dP .

As \left(\frac{\partial s}{\partial P}\right)_T = -\left(\frac{\partial v}{\partial T}\right)_P (a Maxwell relation), the differential of specific entropy also is:

ds_i = \frac{c_{iP}}{T}\,dT - \left(\frac{\partial v_i}{\partial T}\right)_P dP ,

where i = 1 and i = 2 are the two phases which transit one into the other. Due to continuity of specific entropy, the following holds in second-order phase transitions: ds_1 = ds_2. So,

\frac{c_{2P} - c_{1P}}{T}\,dT = \left[\left(\frac{\partial v_2}{\partial T}\right)_P - \left(\frac{\partial v_1}{\partial T}\right)_P\right] dP .

Therefore, the first Ehrenfest equation is:

\Delta c_P = T\,\Delta\!\left(\frac{\partial v}{\partial T}\right)_P \cdot \frac{dP}{dT} .

The second Ehrenfest equation is obtained in a like manner, but with specific entropy considered as a function of temperature and specific volume (using the Maxwell relation \left(\frac{\partial s}{\partial v}\right)_T = \left(\frac{\partial P}{\partial T}\right)_v):

\Delta c_v = -T\,\Delta\!\left(\frac{\partial P}{\partial T}\right)_v \cdot \frac{dv}{dT} .

The third Ehrenfest equation is obtained in a like manner, but with specific entropy considered as a function of P and v:

\Delta\!\left[c_v \left(\frac{\partial T}{\partial P}\right)_v\right] dP = -\,\Delta\!\left[c_P \left(\frac{\partial T}{\partial v}\right)_P\right] dv .

Continuity of specific volume as a function of T and P gives the fourth Ehrenfest equation:

\Delta\!\left(\frac{\partial v}{\partial T}\right)_P = -\,\Delta\!\left(\frac{\partial v}{\partial P}\right)_T \cdot \frac{dP}{dT} .
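As a numerical illustration of the first equation (a minimal sketch; the transition temperature and the property jumps below are hypothetical round numbers, not data for any real material), the slope of the transition line follows directly from the measured jumps:

```python
# First Ehrenfest equation rearranged: dP/dT = dc_P / (T * d(dv/dT)_P)
T = 300.0            # transition temperature (K), hypothetical
delta_cP = 8.0       # jump in specific heat capacity (J kg^-1 K^-1), hypothetical
delta_dvdT = 2.0e-9  # jump in (dv/dT)_P (m^3 kg^-1 K^-1), hypothetical

dP_dT = delta_cP / (T * delta_dvdT)   # slope of the transition line (Pa/K)
print(f"dP/dT = {dP_dT:.3e} Pa/K")    # ~1.3e7 Pa/K for these made-up jumps
```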
Limitations
Derivatives of Gibbs free energy are not always finite. Transitions between different magnetic states of metals can't be described by Ehrenfest equations.
See also
Paul Ehrenfest
Clausius–Clapeyron relation
Phase transition
References
Thermodynamic equations | Ehrenfest equations | [
"Physics",
"Chemistry"
] | 301 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
4,807,288 | https://en.wikipedia.org/wiki/List%20of%20backmasked%20messages | The following is an incomplete list of backmasked messages in music.
See also
Backmasking
Phonetic reversal
Hidden message
Subliminal message
References
External links
Backmask Online — clips and analysis of various alleged and actual backmasked messages
Jeff Milner's Backmasking Page — a Flash player which allows backwards playback of various alleged and actual messages with and without lyrics; the focus of the Wall Street Journal article
Audio engineering
Urban legends
Music-related lists
Perception
Popular music | List of backmasked messages | [
"Engineering"
] | 96 | [
"Electrical engineering",
"Audio engineering"
] |
4,807,473 | https://en.wikipedia.org/wiki/Homestake%20experiment | The Homestake experiment (sometimes referred to as the Davis experiment or Solar Neutrino Experiment and in original literature called Brookhaven Solar Neutrino Experiment or Brookhaven 37Cl (Chlorine) Experiment) was an experiment headed by astrophysicists Raymond Davis, Jr. and John N. Bahcall in the late 1960s. Its purpose was to collect and count neutrinos emitted by nuclear fusion taking place in the Sun. Bahcall performed the theoretical calculations and Davis designed the experiment. After Bahcall calculated the rate at which the detector should capture neutrinos, Davis's experiment turned up only one third of this figure. The experiment was the first to successfully detect and count solar neutrinos, and the discrepancy in results created the solar neutrino problem. The experiment operated continuously from 1970 until 1994. The University of Pennsylvania took it over in 1984. The discrepancy between the predicted and measured rates of neutrino detection was later found to be due to neutrino "flavour" oscillations.
Methodology
The experiment took place in the Homestake Gold Mine in Lead, South Dakota. Davis placed a 380 cubic meter (100,000 gallon) tank of perchloroethylene, a common dry-cleaning fluid, 1,478 meters (4,850 feet) underground. A big target deep underground was needed to prevent interference from cosmic rays, taking into account the very small probability of a successful neutrino capture and, therefore, the very low event rate even with the huge mass of the target. Perchloroethylene was chosen because it is rich in chlorine. Upon interaction with an electron neutrino, a 37Cl atom transforms into a radioactive isotope 37Ar, which can then be extracted and counted. The reaction of the neutrino capture is

νe + 37Cl → 37Ar + e−.
The reaction threshold is 0.814 MeV, i.e. the neutrino should have at least this energy to be captured by the 37Cl nucleus.
Because 37Ar has a half-life of 35 days, every few weeks, Davis bubbled helium through the tank to collect the argon that had formed. A small (few cubic cm) gas counter was filled by the collected few tens of atoms of 37Ar (together with the stable argon) to detect its decays. In such a way, Davis was able to determine how many neutrinos had been captured.
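The few-week extraction cycle reflects the balance between a roughly constant production rate and the 35-day decay: the 37Ar inventory saturates after a couple of half-lives, so waiting much longer gains little. A minimal sketch of this produce-and-decay balance (Python; the production rate is an assumed round number of the order Davis observed, used purely for illustration):

```python
import math

HALF_LIFE = 35.0                   # days, for argon-37
TAU = HALF_LIFE / math.log(2)      # mean lifetime, ~50.5 days
p = 0.5                            # assumed production rate (atoms/day)

def atoms_accumulated(t_days, rate=p):
    """Atoms present after t days of constant production with decay:
    N(t) = rate * tau * (1 - exp(-t / tau)); saturates at rate * tau."""
    return rate * TAU * (1.0 - math.exp(-t_days / TAU))

for t in (35, 70, 1000):
    print(f"t = {t:4d} d: {atoms_accumulated(t):5.1f} atoms "
          f"(saturation = {p * TAU:.1f})")
```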
Conclusions
Davis' figures were consistently very close to one-third of Bahcall's calculations. The first response from the scientific community was that either Bahcall or Davis had made a mistake. Bahcall's calculations were checked repeatedly, with no errors found. Davis scrutinized his own experiment and insisted there was nothing wrong with it. The Homestake experiment was followed by other experiments with the same purpose, such as Kamiokande in Japan, SAGE in the former Soviet Union, GALLEX in Italy, Super Kamiokande, also in Japan, and SNO (Sudbury Neutrino Observatory) in Ontario, Canada. SNO was the first detector able to detect neutrino oscillation, solving the solar neutrino problem. The results of the experiment, published in 2001, revealed that of the three "flavours" between which neutrinos are able to oscillate, Davis's detector was sensitive to only one. After it had been proven that his experiment was sound, Davis shared the 2002 Nobel Prize in Physics for contributions to neutrino physics with Masatoshi Koshiba of Japan, who worked on the Kamiokande and the Super Kamiokande (the prize was also shared with Riccardo Giacconi for his contributions to x-ray astronomy).
See also
Cowan–Reines neutrino experiment (a previous experiment by Reines and Cowan which discovered the antineutrino)
Sanford Underground Research Facility
References
Physics experiments
Neutrino observatories | Homestake experiment | [
"Physics"
] | 813 | [
"Experimental physics",
"Physics experiments"
] |
4,810,180 | https://en.wikipedia.org/wiki/Picarin | Picarin (Tsurupica) is a plastic used to make optics such as lenses for terahertz radiation.
Optical properties
Picarin is useful for this purpose because it is highly transparent in both the THz and visible spectral ranges. The refractive index of Picarin is almost the same for THz (n=1.52) and visible light (n=1.52). It is very strong mechanically, and withstands optical polishing.
Unpolished Picarin lenses offer the best performance in the THz spectral range. Unfortunately, unpolished surfaces scatter visible and near-IR light.
References
Optical materials | Picarin | [
"Physics"
] | 132 | [
"Materials stubs",
"Materials",
"Optical materials",
"Matter"
] |
4,810,280 | https://en.wikipedia.org/wiki/Glycoconjugate | In molecular biology and biochemistry, glycoconjugates are the classification family for carbohydrates – referred to as glycans – which are covalently linked with chemical species such as proteins, peptides, lipids, and other compounds. Glycoconjugates are formed in processes termed glycosylation.
Glycoconjugates are very important compounds in biology and consist of many different categories such as glycoproteins, glycopeptides, peptidoglycans, glycolipids, glycosides, and lipopolysaccharides. They are involved in cell–cell interactions, including cell–cell recognition; in cell–matrix interactions; and in detoxification processes.
Generally, the carbohydrate part(s) play an integral role in the function of a glycoconjugate; prominent examples of this are neural cell adhesion molecule (NCAM) and blood proteins where fine details in the carbohydrate structure determine cell binding (or not) or lifetime in circulation.
Although the important molecular species DNA, RNA, ATP, cAMP, cGMP, NADH, NADPH, and coenzyme A all contain a carbohydrate part, generally they are not considered as glycoconjugates.
Glycoconjugates of carbohydrates covalently linked to antigens and protein scaffolds can achieve a long-term immunological response in the body. Immunization with glycoconjugates has successfully induced long-term immune memory against carbohydrate antigens. Glycoconjugate vaccines, introduced in the 1990s, have yielded effective results against influenza and meningococcus.
In 2021 glycoRNAs were observed for the first time.
References
Carbohydrates
Glycobiology | Glycoconjugate | [
"Chemistry",
"Biology"
] | 395 | [
"Pharmacology",
"Carbohydrates",
"Biomolecules by chemical classification",
"Medicinal chemistry stubs",
"Organic compounds",
"Carbohydrate chemistry",
"Biochemistry",
"Glycobiology",
"Pharmacology stubs"
] |
4,811,381 | https://en.wikipedia.org/wiki/Sulfite%20oxidase | Sulfite oxidase () is an enzyme in the mitochondria of all eukaryotes, with exception of the yeasts. It oxidizes sulfite to sulfate and, via cytochrome c, transfers the electrons produced to the electron transport chain, allowing generation of ATP in oxidative phosphorylation. This is the last step in the metabolism of sulfur-containing compounds and the sulfate is excreted.
Sulfite oxidase is a metallo-enzyme that utilizes a molybdopterin cofactor and a heme group (in the case of animals). It is one of the cytochromes b5 and belongs to the enzyme super-family of molybdenum oxotransferases that also includes DMSO reductase, xanthine oxidase, and nitrite reductase.
In mammals, the expression levels of sulfite oxidase is high in the liver, kidney, and heart, and very low in spleen, brain, skeletal muscle, and blood.
Structure
As a homodimer, sulfite oxidase contains two identical subunits with an N-terminal domain and a C-terminal domain. These two domains are connected by ten amino acids forming a loop. The N-terminal domain has a heme cofactor with three adjacent antiparallel beta sheets and five alpha helices. The C-terminal domain hosts a molybdopterin cofactor that is surrounded by thirteen beta sheets and three alpha helices. The molybdopterin cofactor has a Mo(VI) center, which is bonded to a sulfur from cysteine, an ene-dithiolate from pyranopterin, and two terminal oxygens. It is at this molybdenum center that the catalytic oxidation of sulfite takes place.
The pyranopterin ligand coordinates the molybdenum centre via its enedithiolate. The molybdenum centre has a square pyramidal geometry and is distinguished from the xanthine oxidase family by the orientation of its oxo group, which faces downwards rather than up.
Active site and mechanism
The active site of sulfite oxidase contains the molybdopterin cofactor and supports molybdenum in its highest oxidation state, +6 (MoVI). In the enzyme's oxidized state, molybdenum is coordinated by a cysteine thiolate, the dithiolene group of molybdopterin, and two terminal oxygen atoms (oxos). Upon reacting with sulfite, one oxygen atom is transferred to sulfite to produce sulfate, and the molybdenum center is reduced by two electrons to MoIV. Water then displaces sulfate, and the removal of two protons (H+) and two electrons (e−) returns the active site to its original state. A key feature of this oxygen atom transfer enzyme is that the oxygen atom being transferred arises from water, not from dioxygen (O2).
Electrons are passed one at a time from the molybdenum to the heme group which reacts with cytochrome c to reoxidize the enzyme. The electrons from this reaction enter the electron transport chain (ETC).
This electron-transfer reaction is generally the rate-limiting step. Upon reaction of the enzyme with sulfite, the enzyme is reduced by two electrons. The negative potential seen upon re-reduction of the enzyme shows that the oxidized state is favoured.

Among the Mo enzyme classes, sulfite oxidase is the most easily oxidized, although under low-pH conditions the oxidative reaction becomes partially rate-limiting.
Deficiency
Sulfite oxidase is required to metabolize the sulfur-containing amino acids cysteine and methionine in foods. Lack of functional sulfite oxidase causes a disease known as sulfite oxidase deficiency. This rare but fatal disease causes neurological disorders, mental retardation, physical deformities, the degradation of the brain, and death. Reasons for the lack of functional sulfite oxidase include a genetic defect that leads to the absence of a molybdopterin cofactor and point mutations in the enzyme. A G473D mutation impairs dimerization and catalysis in human sulfite oxidase.
See also
Sulfur metabolism
Bioinorganic chemistry
References
Further reading
Kisker, C. “Sulfite oxidase”, Messerschimdt, A.; Huber, R.; Poulos, T.; Wieghardt, K.; eds. Handbook of Metalloproteins, vol 2; John Wiley and Sons, Ltd: New York, 2002
External links
Research Activity of Sarkar Group
PDBe-KB provides an overview of all the structure information available in the PDB for Human Sulfite oxidase, mitochondrial
EC 1.8.3
Metalloproteins
Molybdenum compounds | Sulfite oxidase | [
"Chemistry"
] | 1,062 | [
"Metalloproteins",
"Bioinorganic chemistry"
] |
4,812,392 | https://en.wikipedia.org/wiki/Tube%20cleaning | Tube cleaning describes the activity of, or device for, the cleaning and maintenance of fouled tubes.
The need for cleaning arises because the medium that is transported through the tubes may cause deposits and finally even obstructions. In system engineering and in industry, particular demands are placed upon surface roughness or heat transfer. In the food and pharmaceutical industries as well as in medical technology, the requirements are germproofness, and that the tubes are free from foreign matter, for example after the installation of the tube or after a change of product. Another trouble source may be corrosion due to deposits which may also cause tube failure.
Cleaning methods
Depending on application, conveying medium and tube material, the following methods of tube cleaning are available:
Lost tubes
In the medical field, "lost" tubes are tubes which have to be replaced after single use. This is not genuine tube cleaning in the proper sense and is very often applied in the medical sector, for instance with cannulas of syringes, infusion needles or medical appliances, such as kidney machines at dialysis. The reasons for the single use are primarily the elimination of infection risks but also the fact that cleaning would be very expensive and, particularly with cheap mass products, out of all proportion in terms of cost. Single use is therefore common practice with tubes of up to 20 mm diameter. For the same reasons as in the medical sector, single use may also be applicable in the food and pharmaceutical process technology, however in these sectors the tube diameters may exceed 20 mm.
In other fields (e.g., in heat exchangers), tubing may also sometimes need to be replaced (or removed, plugged, etc.), but this typically occurs only after a prolonged use, when the tube develops serious flaws (e.g., due to corrosion).
Chemical process
Chemical tube cleaning is understood to be the use of cleaning liquids or chemicals for removing layers and deposits. A typical example is the deliming of a coffee maker where scale is removed by means of acetic acid or citric acid. Depending on the field of application and tube material, special cleaning liquids may be used which also require a multi-stage treatment:
chemical activation
cleaning
rinsing
This method of cleaning calls for a shutdown of the relevant system, which causes undesired standstill periods. To safeguard a continuous production operation it may be necessary to install several systems. Another disadvantage: in the field of large-scale technology (reactor, heat exchanger, condenser, etc.), huge quantities of cleaning liquids would be required, which would cause major disposal problems. A further problem occurs in the food industry through the possible toxicity of the cleaning liquid. Only strict observance of rinsing instructions and exact control of the admissible residue tolerances can remedy this, which in turn requires expensive detection methods. Generally, the process of chemical tube cleaning is applicable for any diameter; however, practical limits of use ensue from the volume of a pipeline.
Mechanical process
A mechanical tube cleaning system is a cleaning body that is moved through the tube in order to remove deposits from the tube wall. In the simplest case it is a brush that is moved in the tube by means of a rod or a flexible spring (device). In large-scale technology and the industrial sector, however, several processes have developed which necessitate a more detailed definition.
Off-line process
An off-line process is characterized by the fact that the system to be cleaned has to be taken out of operation in order to inject the cleaning body(ies) and to execute the cleaning procedure. An additional distinction must be made between active and passive cleaning bodies.
Passive cleaning bodies may be a matter of brushes or special constructions like scrapers or so-called "pigs", for instance, which are conveyed through the tubes by means of pressurized air, water, or other media. In most cases, cleaning is implemented through the oversize of the cleaning bodies compared to the tube inner diameter. The types range from brushes with bristles of plastic or steel to scrapers (with smaller tube diameters) and more expensive designs with spraying nozzles for pipelines. This method is applied for tube and pipe diameters from around 5 mm to several metres. Also belonging to this field is the cleaning of obstructed soil pipes of domestic sewage systems that is done by means of a rotating, flexible shaft.
The active cleaning bodies are more or less remote-controlled robots that move through the tubes and fulfill their cleaning task, pulling along with them not only cables for power supply and communication but also hoses for the cleaning liquid. Measuring devices or cameras are also carried along to monitor the function. To date, such devices have required minimum diameters of around 300 mm; however, a further reduction in size is being worked on. The reasonable maximum diameter for this kind of device is 2 m, because above this diameter an inspection of the pipe would certainly be less expensive. For such large diameters a robot application is imaginable only if health-hazardous chemicals are in use.
On-line process
In the on-line process, the cleaning body moves through the tubes with the conveying medium and cleans them by means of its oversize compared to the tube diameter. In the range of diameters of up to 50 mm these cleaning bodies consist of sponge rubber, in larger diameters up to the size of oil pipelines it is a matter of scrapers or so-called pigs. Sponge rubber balls are applied mainly for cooling water, like sea, river, or cooling tower water. For the chemical or pharmaceutical industry, specially adapted cleaning bodies are imaginable but the conveying media flows are so weak that off-line processes are employed in most cases. Given the fact that the cleaning bodies are not allowed to remain in the conveying medium they have to be collected after passing through the tubes. In the case of sponge rubber balls this is done through special strainer sections; for scrapers or pigs an outward transfer station is provided. According to the Taprogge process, the sponge rubber balls are re-injected upstream of the system to be cleaned by a corresponding ball recirculating unit whereas the scraper or pig is mostly taken out by hand and re-injected into another collector. Sponge rubber balls therefore safeguard a continuous cleaning while the scraper or pig system is discontinuous.
Thermal process
In thermal tube cleaning, the layer or deposit is dried by heating, becomes brittle, flakes off, and is then discharged either by the conveying medium or by a rinsing liquid. Depending on the required temperature, heating can be provided by parallel heating tubes or by induction. This is an off-line process. It is occasionally also used to sterilize tubes in the pharmaceutical or food industry. No general diameter range can be given, because the method is applicable only to certain processes; technically, the heating is limited only by the materials and the required amount of heat.
Special types
Special types of tube cleaning are those methods, some of them still experimental, that do not fall under the process types described above, for example:
induction of water hammer, so that the layer or deposit is shed through brief elongation of the material
use of vibration generators, either attached to the tubes as vibration exciters or acting via piezoelectric crystals in the conveying medium, to turn the conveying medium into a cleaning medium by reducing its surface tension
magnetic fields to prevent tube calcification
nanotechnological treatment of tube surfaces to prevent layers and deposits
See also
Fouling
Fouling Mitigation
Biofouling
References
Fouling
Piping
Tubing (material) | Tube cleaning | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,578 | [
"Building engineering",
"Chemical engineering",
"Mechanical engineering",
"Piping",
"Materials degradation",
"Fouling"
] |
4,812,837 | https://en.wikipedia.org/wiki/Omega%20oxidation | Omega oxidation (ω-oxidation) is a process of fatty acid metabolism in some species of animals. It is an alternative pathway to beta oxidation that, instead of involving the β carbon, involves the oxidation of the ω carbon (the carbon most distant from the carboxyl group of the fatty acid). The process is normally a minor catabolic pathway for medium-chain fatty acids (10-12 carbon atoms), but becomes more important when β oxidation is defective.
In vertebrates, the enzymes for ω oxidation are located in the smooth ER of liver and kidney cells, instead of in the mitochondria as with β oxidation. The steps of the process are as follows:
Hydroxylation: a mixed-function oxidase (a cytochrome P450 enzyme) introduces a hydroxyl group at the ω carbon, consuming NADPH and O2.
Oxidation to an aldehyde: alcohol dehydrogenase oxidizes the new hydroxyl group to an aldehyde, reducing NAD+ to NADH.
Oxidation to a carboxylic acid: aldehyde dehydrogenase oxidizes the aldehyde to a carboxyl group, again reducing NAD+, yielding a fatty acid with a carboxyl group at each end.
After these three steps, either end of the fatty acid can be attached to coenzyme A. The molecule can then enter the mitochondrion and undergo β oxidation. The final products after successive oxidations include succinic acid, which can enter the citric acid cycle, and adipic acid.
The first step in ω-oxidation, i.e. the addition of a hydroxyl residue to the ω carbon of short-, intermediate-, or long-chain saturated or unsaturated fatty acids, can serve to produce or inactivate signaling molecules. In humans, a subset of microsome-bound cytochrome P450 (CYP450) ω-hydroxylases (termed cytochrome P450 omega hydroxylases) metabolizes arachidonic acid (also known as eicosatetraenoic acid) to 20-hydroxyeicosatetraenoic acid (20-HETE). 20-HETE possesses a range of activities in animal and cellular model systems: it constricts blood vessels, alters the kidney's reabsorption of salt and water, and promotes the growth of cancer cells; genetic studies in humans suggest that 20-HETE contributes to hypertension, myocardial infarction, and brain stroke (see 20-Hydroxyeicosatetraenoic acid). Within the CYP450 superfamily, members of the CYP4A and CYP4F subfamilies (viz. CYP4A11, CYP4F2, and CYP4F3) are considered the predominant cytochrome P450 enzymes responsible in most tissues for forming 20-HETE; CYP2U1 and CYP4Z1 contribute to 20-HETE production in a more limited range of tissues. The cytochrome ω-oxidases, including members of the CYP4A and CYP4F subfamilies as well as CYP2U1, also ω-hydroxylate, and thereby reduce the activity of, various arachidonic acid metabolites, including LTB4, 5-HETE, 5-oxo-eicosatetraenoic acid, 12-HETE, and several prostaglandins, that are involved in regulating inflammatory, vascular, and other responses in animals and humans. This hydroxylation-induced inactivation may underlie the proposed roles of these cytochromes in dampening inflammatory responses, and the reported associations of certain CYP4F2 and CYP4F3 single-nucleotide variants with human Crohn's disease and celiac disease, respectively.
See also
Beta oxidation
Alpha oxidation
References
Nelson, D. L. & Cox, M. M. (2005). Lehninger Principles of Biochemistry, 4th Edition. New York: W. H. Freeman and Company, pp. 648–649.
External links
http://www.biocarta.com/pathfiles/omegaoxidationPathway.asp
Biochemical reactions
Lipid metabolism
Organic redox reactions | Omega oxidation | [
"Chemistry",
"Biology"
] | 781 | [
"Lipid biochemistry",
"Organic redox reactions",
"Organic reactions",
"Biochemical reactions",
"Biochemistry",
"Lipid metabolism",
"Metabolism"
] |
4,813,617 | https://en.wikipedia.org/wiki/Alternating%20permutation | In combinatorial mathematics, an alternating permutation (or zigzag permutation) of the set {1, 2, 3, ..., n} is a permutation (arrangement) of those numbers so that each entry is alternately greater or less than the preceding entry. For example, the five alternating permutations of {1, 2, 3, 4} are:
1, 3, 2, 4 because 1 < 3 > 2 < 4,
1, 4, 2, 3 because 1 < 4 > 2 < 3,
2, 3, 1, 4 because 2 < 3 > 1 < 4,
2, 4, 1, 3 because 2 < 4 > 1 < 3, and
3, 4, 1, 2 because 3 < 4 > 1 < 2.
This type of permutation was first studied by Désiré André in the 19th century.
Different authors use the term alternating permutation slightly differently: some require that the second entry in an alternating permutation should be larger than the first (as in the examples above), others require that the alternation should be reversed (so that the second entry is smaller than the first, then the third larger than the second, and so on), while others call both types by the name alternating permutation.
The determination of the number An of alternating permutations of the set {1, ..., n} is called André's problem. The numbers An are known as Euler numbers, zigzag numbers, or up/down numbers. When n is even the number An is known as a secant number, while if n is odd it is known as a tangent number. These latter names come from the study of the generating function for the sequence.
Definitions
A permutation c1, c2, ..., cn is said to be alternating if its entries alternately rise and descend. Thus, each entry other than the first and the last should be either larger or smaller than both of its neighbors. Some authors use the term alternating to refer only to the "up-down" permutations for which c1 < c2 > c3 < c4 > ⋯, calling the "down-up" permutations that satisfy c1 > c2 < c3 > c4 < ⋯ by the name reverse alternating. Other authors reverse this convention, or use the word "alternating" to refer to both up-down and down-up permutations.
There is a simple one-to-one correspondence between the down-up and up-down permutations: replacing each entry x with n + 1 − x reverses the relative order of the entries.
By convention, in any naming scheme the unique permutations of length 0 (the permutation of the empty set) and 1 (the permutation consisting of a single entry 1) are taken to be alternating.
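The definitions can be checked by brute force for small n. The following sketch (illustrative, not part of the original article; names are arbitrary) enumerates the up-down permutations of {1, ..., 4} and applies the x ↦ n + 1 − x correspondence described above:

```python
# Brute-force check of the definitions (illustrative, not from the article).
from itertools import permutations

def is_up_down(p):
    # Entries alternately rise and descend, starting with a rise.
    return all((p[i] < p[i + 1]) == (i % 2 == 0) for i in range(len(p) - 1))

n = 4
up_down = [p for p in permutations(range(1, n + 1)) if is_up_down(p)]
print(len(up_down), up_down)  # 5 permutations, matching the examples above

# The map x -> n + 1 - x turns each up-down permutation into a down-up one.
down_up = [tuple(n + 1 - x for x in p) for p in up_down]
print(down_up)
```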
André's theorem
The determination of the number An of alternating permutations of the set {1, ..., n} is called André's problem. The numbers An are variously known as Euler numbers, zigzag numbers, up/down numbers, or by some combinations of these names. The name Euler numbers in particular is sometimes used for a closely related sequence. The first few values of An are 1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521, ...
These numbers satisfy a simple recurrence, similar to that of the Catalan numbers: by splitting the set of alternating permutations (both down-up and up-down) of the set { 1, 2, 3, ..., n, n + 1 } according to the position k of the largest entry n + 1, one can show that

$$2A_{n+1} = \sum_{k=0}^{n} \binom{n}{k} A_k A_{n-k}$$

for all n ≥ 1. André used this recurrence to give a differential equation satisfied by the exponential generating function

$$A(x) = \sum_{n=0}^{\infty} A_n \frac{x^n}{n!}$$

for the sequence An. In fact, the recurrence gives:

$$2 \sum_{n \ge 1} A_{n+1} \frac{x^n}{n!} = \sum_{n \ge 1} \sum_{k=0}^{n} \binom{n}{k} A_k A_{n-k} \frac{x^n}{n!},$$

where we substitute $A'(x) = \sum_{n \ge 0} A_{n+1} \frac{x^n}{n!}$ and $A(x)^2 = \sum_{n \ge 0} \sum_{k=0}^{n} \binom{n}{k} A_k A_{n-k} \frac{x^n}{n!}$. This gives the integral equation

$$2\left(A(x) - x - 1\right) = \int_0^x \left(A(t)^2 - 1\right)\,dt,$$

which after differentiation becomes $2A'(x) - 2 = A(x)^2 - 1$.

This differential equation can be solved by separation of variables (using the initial condition $A(0) = A_0 = 1$), and simplified using a tangent half-angle formula, giving the final result

$$A(x) = \tan\left(\frac{x}{2} + \frac{\pi}{4}\right) = \sec x + \tan x,$$

the sum of the secant and tangent functions. This result is known as André's theorem. A geometric interpretation of this result can be given using a generalization of a theorem by Johann Bernoulli.
It follows from André's theorem that the radius of convergence of the series A(x) is π/2. This allows one to compute the asymptotic expansion

$$A_n \sim 2 \left(\frac{2}{\pi}\right)^{n+1} n! \quad \text{as } n \to \infty.$$
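The recurrence and the asymptotic formula can be verified numerically. A minimal sketch, assuming only the recurrence and initial values stated above (the code is illustrative, not from the article):

```python
# Compute A_n from 2*A_{n+1} = sum_k C(n,k)*A_k*A_{n-k} (valid for n >= 1),
# starting from A_0 = A_1 = 1 (illustrative, not from the article).
from math import comb, factorial, pi

A = [1, 1]
for n in range(1, 12):
    A.append(sum(comb(n, k) * A[k] * A[n - k] for k in range(n + 1)) // 2)

print(A)  # [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521, ...]

# Asymptotics: A_n ~ 2 * (2/pi)**(n+1) * n!  -- compare at n = 10.
n = 10
print(A[n], 2 * (2 / pi) ** (n + 1) * factorial(n))  # 50521 vs. about 50540
```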
Related sequences
The odd-indexed zigzag numbers (i.e., the tangent numbers) are closely related to Bernoulli numbers. The relation is given by the formula

$$A_{2n-1} = \frac{(-1)^{n-1}\, 2^{2n} \left(2^{2n} - 1\right) B_{2n}}{2n}$$

for n > 0.
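A quick check of this relation with sympy's bernoulli function (illustrative; the expected outputs are the tangent numbers 1, 2, 16, 272, 7936):

```python
# Verify A_{2n-1} = (-1)**(n-1) * 2**(2n) * (2**(2n) - 1) * B_{2n} / (2n)
# for small n (illustrative, not from the article).
from sympy import bernoulli, Integer

for n in range(1, 6):
    value = ((-1) ** (n - 1) * Integer(2) ** (2 * n)
             * (2 ** (2 * n) - 1) * bernoulli(2 * n) / (2 * n))
    print(2 * n - 1, value)  # 1 -> 1, 3 -> 2, 5 -> 16, 7 -> 272, 9 -> 7936
```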
If Zn denotes the number of permutations of {1, ..., n} that are either up-down or down-up (or both, for n < 2) then it follows from the pairing given above that Zn = 2An for n ≥ 2. The first few values of Zn are 1, 1, 2, 4, 10, 32, 122, 544, 2770, 15872, 101042, ...
The Euler zigzag numbers are related to Entringer numbers, from which the zigzag numbers may be computed. The Entringer numbers can be defined recursively as follows:

$$E(0,0) = 1, \qquad E(n,0) = 0 \quad (n \ge 1), \qquad E(n,k) = E(n,k-1) + E(n-1,n-k).$$
The nth zigzag number is equal to the Entringer number E(n, n).
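The recurrence translates directly into code. A minimal sketch (illustrative, not from the article) that recovers the zigzag numbers as E(n, n):

```python
# Entringer numbers: E(0,0)=1, E(n,0)=0 for n>=1, E(n,k)=E(n,k-1)+E(n-1,n-k).
# The n-th zigzag number is E(n, n) (illustrative, not from the article).
from functools import lru_cache

@lru_cache(maxsize=None)
def entringer(n, k):
    if n == 0 and k == 0:
        return 1
    if k == 0:
        return 0
    return entringer(n, k - 1) + entringer(n - 1, n - k)

print([entringer(n, n) for n in range(11)])
# [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521]
```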
The numbers A2n with even indices are called secant numbers or zig numbers: since the secant function is even and tangent is odd, it follows from André's theorem above that they are the numerators in the Maclaurin series of sec x. The first few values are 1, 1, 5, 61, 1385, 50521, ...
Secant numbers are related to the signed Euler numbers (Taylor coefficients of hyperbolic secant) by the formula E2n = (−1)nA2n. (En = 0 when n is odd.)
Correspondingly, the numbers A2n+1 with odd indices are called tangent numbers or zag numbers. The first few values are 1, 2, 16, 272, 7936, ...
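Per André's theorem, both subsequences can be read off the Maclaurin series of sec x + tan x. A minimal sympy sketch (illustrative, not from the article):

```python
# Extract zigzag numbers A_n as n! times the series coefficients of
# sec(x) + tan(x): even indices give secant numbers, odd indices give
# tangent numbers (illustrative, not from the article).
from sympy import symbols, sec, tan, series, factorial

x = symbols('x')
expansion = series(sec(x) + tan(x), x, 0, 10).removeO()
coeffs = [expansion.coeff(x, n) * factorial(n) for n in range(10)]
print(coeffs)  # [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936]
```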
Explicit formula in terms of Stirling numbers of the second kind
The relationships of the Euler zigzag numbers with the Euler numbers and the Bernoulli numbers can be used to prove an explicit formula for An in terms of the rising factorial x^(n) = x(x + 1)⋯(x + n − 1) and the Stirling numbers of the second kind S(n, k).
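One concrete identity of this kind: the odd-indexed (tangent) numbers satisfy A_{2m−1} = (−1)^m Σ_{k=1}^{2m−1} (−1)^k k! S(2m−1, k) 2^{2m−1−k}, which follows by evaluating the Fubini polynomial Σ_k k! S(n, k) x^k at x = −1/2. A minimal sympy check (a related identity offered as an illustration; not necessarily the exact formula originally intended here):

```python
# Check A_{2m-1} = (-1)**m * sum_k (-1)**k * k! * S(2m-1, k) * 2**(2m-1-k),
# a Stirling-number identity for the tangent numbers (illustrative).
from sympy import factorial
from sympy.functions.combinatorial.numbers import stirling  # kind=2 by default

for m in range(1, 5):
    n = 2 * m - 1
    value = (-1) ** m * sum(
        (-1) ** k * factorial(k) * stirling(n, k) * 2 ** (n - k)
        for k in range(1, n + 1)
    )
    print(n, value)  # 1 -> 1, 3 -> 2, 5 -> 16, 7 -> 272
```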
See also
Longest alternating subsequence
Boustrophedon transform
Fence (mathematics), a partially ordered set that has alternating permutations as its linear extensions
Citations
References
External links
Ross Tang, "An Explicit Formula for the Euler zigzag numbers (Up/down numbers) from power series" A simple explicit formula for An.
"A Survey of Alternating Permutations", a preprint by Richard P. Stanley
Permutations
Enumerative combinatorics | Alternating permutation | [
"Mathematics"
] | 1,413 | [
"Functions and mappings",
"Permutations",
"Mathematical objects",
"Enumerative combinatorics",
"Combinatorics",
"Mathematical relations"
] |
3,562,876 | https://en.wikipedia.org/wiki/Diffraction%20spike | Diffraction spikes are lines radiating from bright light sources, causing what is known as the starburst effect or sunstars in photographs and in vision. They are artifacts caused by light diffracting around the support vanes of the secondary mirror in reflecting telescopes, or edges of non-circular camera apertures, and around eyelashes and eyelids in the eye.
While similar in appearance, this is a different effect to "vertical smear" or "blooming" that appears when bright light sources are captured by a charge-coupled device (CCD) image sensor.
Causes
Support vanes
In the vast majority of reflecting telescope designs, the secondary mirror has to be positioned at the central axis of the telescope and so has to be held by struts within the telescope tube. No matter how fine these support rods are, they diffract the incoming light from a subject star, and this appears as diffraction spikes, which are the Fourier transform of the support struts. The spikes represent a loss of light that could otherwise have been used to image the star.
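This Fourier-transform relationship can be illustrated numerically. The following sketch (illustrative; the aperture size, vane width, and grid resolution are arbitrary assumptions, not values from the article) computes the Fraunhofer diffraction pattern of a circular aperture crossed by two thin perpendicular vanes:

```python
# Illustrative sketch: the far-field (Fraunhofer) pattern of an aperture is
# proportional to the squared magnitude of its 2-D Fourier transform, so
# thin straight vanes produce spikes perpendicular to themselves.
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

aperture = (x**2 + y**2 < (N // 4) ** 2).astype(float)  # circular primary
aperture[np.abs(x) < 2] = 0.0  # vertical spider vane (assumed width)
aperture[np.abs(y) < 2] = 0.0  # horizontal spider vane (assumed width)

psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
psf /= psf.max()

# The sample along a spike axis is far brighter than the diagonal sample
# at the same radius from the image center.
r = 40
print(psf[N // 2, N // 2 + r], psf[N // 2 + r, N // 2 + r])
```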
Although diffraction spikes can obscure parts of a photograph and are undesired in professional contexts, some amateur astronomers like the visual effect they give to bright stars – the "Star of Bethlehem" appearance – and even modify their refractors to exhibit the same effect, or to assist with focusing when using a CCD.
A small number of reflecting telescopes designs avoid diffraction spikes by placing the secondary mirror off-axis. Early off-axis designs such as the Herschelian and the Schiefspiegler telescopes have serious limitations such as astigmatism and long focal ratios, which make them useless for research. The brachymedial design by Ludwig Schupmann, which uses a combination of mirrors and lenses, is able to correct chromatic aberration perfectly over a small area and designs based on the Schupmann brachymedial are currently used for research of double stars.
There are also a small number of off-axis unobstructed all-reflecting anastigmats which give optically perfect images.
Refracting telescopes and their photographic images do not have the same problem as their lenses are not supported with spider vanes.
Non-circular aperture
Iris diaphragms with moving blades are used in most modern camera lenses to restrict the light received by the film or sensor. While manufacturers attempt to make the aperture circular for a pleasing bokeh, when stopped down to high f-numbers (small apertures), its shape tends towards a polygon with the same number of sides as blades. Diffraction spreads out light waves passing through the aperture perpendicular to the roughly-straight edge, each edge yielding two spikes 180° apart. As the blades are uniformly distributed around the circle, on a diaphragm with an even number of blades, the diffraction spikes from blades on opposite sides overlap. Consequently, a diaphragm with n blades yields n spikes if n is even, and 2n spikes if n is odd.
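The even/odd rule can be demonstrated in a few lines of code. A minimal sketch (illustrative; the function name is arbitrary) that counts distinct spike directions for an n-bladed diaphragm:

```python
# Illustrative sketch: each straight blade edge contributes one spike
# direction modulo 180 degrees (two spikes, 180 degrees apart). The edges
# of a regular n-sided diaphragm differ in orientation by 360/n degrees,
# so for even n the directions coincide in pairs.
def spike_count(n_blades: int) -> int:
    directions = {round(k * 360.0 / n_blades, 6) % 180.0 for k in range(n_blades)}
    return 2 * len(directions)

for n in (5, 6, 7, 8, 9):
    print(n, "blades ->", spike_count(n), "spikes")
# 5 -> 10, 6 -> 6, 7 -> 14, 8 -> 8, 9 -> 18
```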
Segmented mirrors
Images from telescopes with segmented mirrors also exhibit diffraction spikes due to diffraction from the mirrors' edges. As before, two spikes are perpendicular to each edge orientation, resulting in six spikes (plus two fainter ones due to the spider supporting the secondary mirror) in photographs taken by the James Webb Space Telescope.
Dirty optics
An improperly cleaned lens or cover glass, or one with a fingerprint may have parallel lines which diffract light similarly to support vanes. They can be distinguished from spikes due to non-circular aperture as they form a prominent smear in a single direction, and from CCD bloom by their oblique angle.
In vision
In normal vision, diffraction through eyelashes, and through the edges of the eyelids if one is squinting, produces many diffraction spikes. If it is windy, the motion of the eyelashes causes spikes that move around and scintillate. After a blink, the eyelashes may come back in a different position and cause the diffraction spikes to jump around. This is classified as an entoptic phenomenon.
Diffraction spikes in normal human vision can also be caused by fibers in the eye's lens, sometimes called suture lines.
Other uses
Special effects
A cross screen filter, also known as a star filter, creates a star pattern using a very fine diffraction grating embedded in the filter, or sometimes by the use of prisms in the filter. The number of stars varies by the construction of the filter, as does the number of points each star has.
A similar effect is achieved by photographing bright lights through a window screen with vertical and horizontal wires. The angles of the bars of the cross depend on the orientation of the screen relative to the camera.
Bahtinov mask
In amateur astrophotography, a Bahtinov mask can be used to focus small astronomical telescopes accurately. Light from a bright point such as an isolated bright star reaching different quadrants of the primary mirror or lens is first passed through grilles at three different orientations. Half of the mask generates a narrow "X" shape from four diffraction spikes (blue and green in the illustration); the other half generates a straight line from two spikes (red). Changing the focus causes the shapes to move with respect to each other. When the line passes exactly through the middle of the "X", the telescope is in focus and the mask can be removed.
References
External links
Diffraction spikes explained by Astronomy Picture of the Day.
Astrophotography
Science of photography
Diffraction | Diffraction spike | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,143 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
3,564,560 | https://en.wikipedia.org/wiki/Seyferth%E2%80%93Gilbert%20homologation | The Seyferth–Gilbert homologation is a chemical reaction of an aryl ketone 1 (or aldehyde) with dimethyl (diazomethyl)phosphonate 2 and potassium tert-butoxide to give substituted alkynes 3. Dimethyl (diazomethyl)phosphonate 2 is often called the Seyferth–Gilbert reagent.
This reaction is called a homologation because the product has exactly one more carbon than the starting material.
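The carbon bookkeeping can be illustrated for a representative substrate, benzaldehyde → phenylacetylene, a classic application of this homologation. A minimal RDKit sketch (illustrative; the SMILES strings and helper name are assumptions, not from the article):

```python
# Illustrative sketch: benzaldehyde (an aryl aldehyde) is converted to
# phenylacetylene by the Seyferth-Gilbert/Bestmann procedure; the product
# carries exactly one more carbon atom than the substrate.
from rdkit import Chem

def carbon_count(smiles: str) -> int:
    mol = Chem.MolFromSmiles(smiles)
    return sum(1 for atom in mol.GetAtoms() if atom.GetSymbol() == "C")

aldehyde = "O=Cc1ccccc1"  # benzaldehyde, C7H6O
alkyne = "C#Cc1ccccc1"    # phenylacetylene, C8H6

assert carbon_count(alkyne) == carbon_count(aldehyde) + 1
print(carbon_count(aldehyde), "->", carbon_count(alkyne))  # 7 -> 8
```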
Reaction mechanism
Deprotonation of the Seyferth–Gilbert reagent A gives an anion B, which reacts with the ketone to form the oxaphosphetane D. Elimination of dimethyl phosphate E gives the vinyl diazo intermediates Fa and Fb. Loss of nitrogen gas then gives a vinyl carbene G, which forms the desired alkyne H via a 1,2-migration.
Bestmann modification
The dimethyl (diazomethyl)phosphonate carbanion can be generated in situ from dimethyl-1-diazo-2-oxopropylphosphonate (also called the Ohira–Bestmann reagent) by reaction with methanol and potassium carbonate as the base, with cleavage of the acetyl group as methyl acetate. Reaction of Bestmann's reagent with aldehydes gives terminal alkynes, often in very high yield and in fewer steps than the Corey–Fuchs reaction.
The use of the milder potassium carbonate makes this procedure much more compatible with a wide variety of functional groups.
Improved in situ generation of the Ohira-Bestmann reagent
Recently, a safer and more scalable approach has been developed for the synthesis of alkynes from aldehydes. This protocol takes advantage of a stable sulfonyl azide, rather than tosyl azide, for the in situ generation of the Ohira–Bestmann reagent.
Other modifications
Another modification for less reactive aldehydes replaces potassium carbonate with caesium carbonate in MeOH, resulting in a drastic increase in yield.
See also
Corey–Fuchs reaction
Horner–Wadsworth–Emmons reaction
Wittig reaction
References
Carbon-carbon bond forming reactions
Rearrangement reactions
Name reactions | Seyferth–Gilbert homologation | [
"Chemistry"
] | 484 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Rearrangement reactions",
"Organic reactions"
] |