Dataset columns (name: type, min to max):
id: int64 (39 to 79M)
url: string (length 32 to 168)
text: string (length 7 to 145k)
source: string (length 2 to 105)
categories: list (length 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (length 0 to 27)
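The column summary above implies a simple per-record schema. The sketch below shows one way to represent a record in Python; it assumes the dataset is stored as JSON Lines with exactly these field names, and the file name, loading format, and example values (taken from the first record shown below) are illustrative assumptions rather than part of the original dataset description.

```python
import json
from dataclasses import dataclass

@dataclass
class Record:
    """One row of the dataset, with fields taken from the column summary above."""
    id: int                 # int64, roughly 39 to 79M in this dump
    url: str                # source Wikipedia URL
    text: str               # full flattened article text
    source: str             # article title
    categories: list        # 1-6 top-level labels, e.g. ["Chemistry"]
    token_count: int        # 3 to ~32.2k tokens
    subcategories: list     # 0-27 finer-grained labels

def load_records(path="dataset.jsonl"):
    """Assumed JSON Lines layout: one JSON object per line with the fields above."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield Record(**json.loads(line))

# Example built from the first record shown below (Bromatometry); the text is truncated.
example = Record(
    id=22_064_595,
    url="https://en.wikipedia.org/wiki/Bromatometry",
    text="Bromatometry is a titration process ...",
    source="Bromatometry",
    categories=["Chemistry"],
    token_count=55,
    subcategories=["Instrumental analysis", "Titration", "Chemical tests", "Analytical chemistry stubs"],
)
```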
22,064,595
https://en.wikipedia.org/wiki/Bromatometry
Bromatometry is a titration process in which the bromination of a chemical indicator is observed. Potassium bromate alone can be used for the analysis of organoarsenicals. Notes See also Iodometry Chemical tests Titration Bromine
Bromatometry
[ "Chemistry" ]
55
[ "Instrumental analysis", "Titration", "Chemical tests", "Analytical chemistry stubs" ]
22,066,537
https://en.wikipedia.org/wiki/Sperm%20guidance
Sperm guidance is the process by which sperm cells (spermatozoa) are directed to the oocyte (egg) for the purpose of fertilization. In the case of marine invertebrates the guidance is done by chemotaxis. In the case of mammals, it appears to be done by chemotaxis, thermotaxis and rheotaxis. Background Since the discovery of sperm attraction to the female gametes in ferns over a century ago, sperm guidance in the form of sperm chemotaxis has been established in a large variety of species. Although sperm chemotaxis is prevalent throughout the Metazoa kingdom, from marine species with external fertilization such as sea urchins and corals, to humans, much of the current information on sperm chemotaxis is derived from studies of marine invertebrates, primarily sea urchin and starfish. In fact, until not long ago, the dogma was that, in mammals, guidance of spermatozoa to the oocyte was unnecessary. This was due to the common belief that, following ejaculation into the female genital tract, large numbers of spermatozoa 'race' towards the oocyte and compete to fertilize it. This belief was overturned when it became clear that only a few of the ejaculated spermatozoa — in humans, only ~1 in every million spermatozoa — succeed in entering the oviducts (fallopian tubes), and when more recent studies showed that mammalian spermatozoa employ at least three different mechanisms, each of which can potentially serve as a guidance mechanism: chemotaxis, thermotaxis and rheotaxis. Sperm guidance in non-mammalian species Sperm guidance in non-mammalian species is performed by chemotaxis. The oocyte secretes a chemoattractant, which, as it diffuses away, forms a concentration gradient: a high concentration close to the egg, and a gradually lower concentration as the distance from the oocyte increases. Spermatozoa can sense this chemoattractant and orient their swimming direction up the concentration gradient towards the oocyte. Sperm chemotaxis has been demonstrated in a large number of non-mammalian species, from marine invertebrates to frogs. Chemoattractants The sperm chemoattractants in non-mammalian species vary to a large extent. Some examples are shown in Table 1. So far, most sperm chemoattractants that have been identified in non-mammalian species are peptides or low-molecular-weight proteins (1–20 kDa), which are heat stable and sensitive to proteases. Exceptions to this rule are the sperm chemoattractants of corals, ascidians, plants such as ferns, and algae (Table 1). Table 1. Some sperm chemoattractants in non-mammalian species (taken from reference). Species specificity The variety of chemoattractants raises the question of species specificity with respect to the chemoattractant identity. There is no single rule for chemoattractant-related specificity. Thus, in some groups of marine invertebrates (e.g., hydromedusae and certain ophiuroids), the specificity is very high; in others (e.g., starfish), the specificity is at the family level and, within the family, there is no specificity. In mollusks, there appears to be no specificity at all. Likewise, in plants, a unique simple compound [e.g., fucoserratene — a linear, unsaturated alkene (1,3-trans 5-cis-octatriene)] might be a chemoattractant for various species. Behavioral mechanism Here, too, there is no single rule. In some species (for example, in hydroids like Campanularia or tunicates like Ciona), the swimming direction of the spermatozoa changes abruptly towards the chemoattractant source.
In others (for example, in sea urchin, hydromedusa, fern, or fish such as Japanese bitterlings), the approach to the chemoattractant source is indirect and the movement is by repetitive loops of small radii. In some species (for example, herring or the ascidian Ciona) activation of motility precedes chemotaxis. In chemotaxis, cells may either sense a temporal gradient of the chemoattractant, comparing the occupancy of its receptors at different time points (as do bacteria), or they may detect a spatial gradient, comparing the occupancy of receptors at different locations along the cell (as do leukocytes). In the best-studied species, sea urchin, the spermatozoa sense a temporal gradient and respond to it with a transient increase in flagellar asymmetry. The outcome is a turn in the swimming path, followed by a period of straight swimming, leading to the observed epicycloid-like movements directed towards the chemoattractant source. Molecular mechanism The molecular mechanism of sperm chemotaxis is still not fully known. The current knowledge is mainly based on studies in the sea urchin Arbacia punctulata, where binding of the chemoattractant resact (Table 1) to its receptor, a guanylyl cyclase, activates cGMP synthesis (Figure 1). The resulting rise of cGMP possibly activates K+-selective ion channels. The consequential hyperpolarization activates hyperpolarization-activated and cyclic nucleotide-gated (HCN) channels. The depolarizing inward current through HCN channels possibly activates voltage-activated Ca2+ channels, resulting in elevation of intracellular Ca2+. This rise leads to flagellar asymmetry and, consequently, a turn of the sperm cell. Figure 1. A model of the signal-transduction pathway during sperm chemotaxis of the sea urchin Arbacia punctulata. Binding of a chemoattractant (ligand) to the receptor — a membrane-bound guanylyl cyclase (GC) — activates the synthesis of cGMP from GTP. Cyclic GMP possibly opens cyclic nucleotide-gated (CNG) K+-selective channels, thereby causing hyperpolarization of the membrane. The cGMP signal is terminated by the hydrolysis of cGMP through phosphodiesterase (PDE) activity and inactivation of GC. On hyperpolarization, hyperpolarization-activated and cyclic nucleotide-gated (HCN) channels allow the influx of Na+ that leads to depolarization and thereby causes a rapid Ca2+ entry through voltage-activated Ca2+ channels (Cav), Ca2+ ions interact by unknown mechanisms with the axoneme of the flagellum and cause an increase of the asymmetry of flagellar beat and eventually a turn or bend in the swimming trajectory. Ca2+ is removed from the flagellum by a Na+/Ca2+ exchange mechanism. [Taken from ref.] Sperm guidance in mammals Three different guidance mechanisms have been proposed to occur in the mammalian oviduct: thermotaxis, rheotaxis, and chemotaxis. Indeed, due to obvious restrictions, all these mechanisms were demonstrated in vitro only. However, the discoveries of proper stimuli in the female – an ovulation-dependent temperature gradient in the oviduct, post-coitus oviductal fluid flow in female mice, and sperm chemoattractants secreted from the oocyte and its surrounding cumulus cells, respectively – strongly suggest the mutual occurrence of these mechanisms in vivo. I. Chemotaxis Following the findings that human spermatozoa accumulate in follicular fluid and that there is a remarkable correlation between this in vitro accumulation and oocyte fertilization, chemotaxis was substantiated as the cause of this accumulation. 
Sperm chemotaxis was later also demonstrated in mice and rabbits. In addition, sperm accumulation in follicular fluid (but without substantiating that it truly reflects chemotaxis) was demonstrated in horses and pigs. A key feature of sperm chemotaxis in humans is that this process is restricted to capacitated cells — the only cells that possess the ability to penetrate the oocyte and fertilize it. This raised the possibility that, in mammals, chemotaxis is not solely a guidance mechanism but is also a mechanism of sperm selection. Importantly, the fraction of capacitated (and, hence, chemotactically responsive) spermatozoa is low (~10% in humans), the life span of the capacitated/chemotactic state is short (1–4 hours in humans), a spermatozoon can be in this state only once in its lifetime, and sperm individuals become capacitated/chemotactic at different time points, resulting in continuous replacement of capacitated/chemotactic cells within the sperm population, i.e., prolonged availability of capacitated cells. These sperm features raised the possibility that prolonging the time period during which capacitated spermatozoa can be found in the female genital tract is a mechanism, evolved in humans, to compensate for the lack of coordination between insemination and ovulation. Chemotaxis is a short-range guidance mechanism. As such, it can guide spermatozoa for short distances only, estimated to be on the order of millimeters. Chemoattractants In humans, there are at least two different origins of sperm chemoattractants. One is the cumulus cells that surround the oocyte, and the other is the mature oocyte itself. The chemoattractant secreted from the cumulus cells is the steroid progesterone, shown to be effective in the picomolar range. The chemoattractant secreted from the oocyte is even more potent. It is a hydrophobic non-peptide molecule which, when secreted from the oocyte, is in complex with a carrier protein. Additional compounds have been shown to act as chemoattractants for mammalian spermatozoa. They include the chemokine CCL20, atrial natriuretic peptide (ANP), specific odorants, natriuretic peptide type C (NPPC), and allurin, to mention a few. It is reasonable to assume that not all of them are physiologically relevant. Species specificity Species specificity was not detected in experiments that compared the chemotactic responsiveness of human and rabbit spermatozoa to follicular fluids or egg-conditioned media obtained from human, bovine, and rabbit. The subsequent finding that cumulus cells of both human and rabbit (and, probably, of other mammals as well) secrete the chemoattractant progesterone is sufficient to account for the lack of specificity in the chemotactic response of mammalian spermatozoa. Behavioral mechanism Mammalian spermatozoa, like sea-urchin spermatozoa, appear to sense the chemoattractant gradient temporally (comparing receptor occupancy over time) rather than spatially (comparing receptor occupancy over space). This is because the establishment of a temporal gradient in the absence of a spatial gradient, achieved by mixing human spermatozoa with a chemoattractant or by photorelease of a chemoattractant from its caged compound, results in delayed transient changes in swimming behavior that involve increased frequency of turns and hyperactivation events.
On the basis of these observations and the finding that the level of hyperactivation events is reduced when chemotactically responsive spermatozoa swim in a spatial chemoattractant gradient it was proposed that turns and hyperactivation events are suppressed when capacitated spermatozoa swim up a chemoattractant gradient, and vice versa when they swim down a gradient. In other words, human spermatozoa approach chemoattractants by modulating the frequency of turns and hyperactivation events, similarly to Escherichia coli bacteria. Molecular mechanism As in non-mammalian species, the end signal in chemotaxis for changing the direction of swimming is Ca2+. The discovery of progesterone as a chemoattractant led to the identification of its receptor on the sperm surface – CatSper, a Ca2+ channel present exclusively in the tail of mammalian spermatozoa. (Note, though, that progesterone only stimulates human CatSper but not mouse CatSper. Consistently, sperm chemotaxis to progesterone was not found in mice.) However, the molecular steps subsequent to CatSper activation by progesterone are obscure, though the involvement of trans-membrane adenylyl cyclase, cAMP and protein kinase A as well as soluble guanylyl cyclase, cGMP, inositol trisphosphate receptor and store-operated Ca2+ channel was proposed. II. Thermotaxis The realization that sperm chemotaxis can guide spermatozoa for short distances only, triggered a search for potential long-range guidance mechanisms. The findings that, at least in rabbits and pigs, a temperature difference exists within the oviduct, and that this temperature difference is established at ovulation in rabbits due to a temperature drop in the oviduct near the junction with the uterus, creating a temperature gradient between the sperm storage site and the fertilization site in the oviduct, led to a study of whether mammalian spermatozoa can respond to a temperature gradient by thermotaxis. Establishing sperm thermotaxis as an active process Mammalian sperm thermotaxis was, hitherto, demonstrated in three species: humans, rabbits, and mice. This was done by two methods. One involved a Zigmond chamber, modified to make the temperature in each well separately controllable and measurable. A linear temperature gradient was established between the wells and the swimming of spermatozoa in this gradient was analyzed. A small fraction of the spermatozoa (at the order of ~10%), shown to be the capacitated cells, biased their swimming direction according to the gradient, moving towards the warmer temperature. The other method involved two- or three-compartment separation tube placed within a thermoseparation device that maintains a linear temperature gradient. Sperm accumulation at the warmer end of the separation tube was much higher than the accumulation at the same temperature but in the absence of a temperature gradient. This gradient-dependent sperm accumulation was observed over a wide temperature range (29-41 °C). Since temperature affects almost every process, much attention has been devoted to the question of whether the measurements, mentioned just above, truly demonstrate thermotaxis or whether they reflect another temperature-dependent process. The most pronounced effect of temperature in liquid is convection, which raised the concern that the apparent thermotactic response could have been a reflection of a passive drift in the liquid current or a rheotactic response to the current (rather than to the temperature gradient per se). 
Another concern was that the temperature could have changed the local pH of the buffer solution in which the spermatozoa are suspended. This could generate a pH gradient along the temperature gradient, and the spermatozoa might have responded to the formed pH gradient by chemotaxis. However, careful experimental examinations of all these possibilities with proper controls demonstrated that the measured responses to temperature are true thermotactic responses and that they are not a reflection of any other temperature-sensitive process, including rheotaxis and chemotaxis. Behavioral mechanism of mammalian sperm thermotaxis The behavioral mechanism of sperm thermotaxis has been so far only investigated in human spermatozoa. Like the behavioral mechanisms of bacterial chemotaxis and human sperm chemotaxis, the behavioral mechanism of human sperm thermotaxis appears to be stochastic rather than deterministic. Capacitated human spermatozoa swim in rather straight lines interrupted by turns and brief episodes of hyperactivation. Each such episode results in swimming in a new direction. When the spermatozoa sense a decrease in temperature, the frequency of turns and hyperactivation events increases due to increased flagellar-wave amplitude that results in enhanced side-to-side head displacement. With time, this response undergoes partial adaptation. The opposite happens in response to an increase in temperature. This suggests that when capacitated spermatozoa swim up a temperature gradient, turns are repressed and the spermatozoa continue swimming in the gradient direction. When they happen to swim down the gradient, they turn again and again until their swimming direction is again up the gradient. Temperature sensing The response of spermatozoa to temporal temperature changes even when the temperature is kept constant spatially suggests that, as in the case of human sperm chemotaxis, sperm thermotaxis involves temporal gradient sensing. In other words, spermatozoa apparently compare the temperature (or a temperature-dependent function) between consecutive time points. This, however, does not exclude the occurrence of spatial temperature sensing in addition to temporal sensing. Human spermatozoa can respond thermotactically within a wide temperature range (at least 29–41 °C). Within this range they preferentially accumulate in warmer temperatures rather than at a single specific, preferred temperature. Amazingly, they can sense and thermotactically respond to temperature gradients as low as <0.014 °C/mm. This means that when human spermatozoa swim a distance that equals their body length (~46 μm) they respond to a temperature difference of <0.0006 °C! Molecular mechanism The molecular mechanism underlying thermotaxis, in general, and thermosensing with such extreme sensitivity, in particular, is obscure. It is known that, unlike other recognized thermosensors in mammals, the thermosensors for sperm thermotaxis do not seem to be temperature-sensitive ion channels. They are rather opsins, known to be G-protein-coupled receptors that act as photosensors in vision. The opsins are present in spermatozoa at specific sites, which depend on the species and the opsin type. They are involved in sperm thermotaxis via at least two signaling pathways: a phospholipase C signaling pathway and a cyclic-nucleotide pathway. 
The former was shown by pharmacological means in human spermatozoa to involve the enzyme phospholipase C, an inositol trisphosphate receptor located on internal calcium stores, the calcium channel TRPC3, and intracellular calcium. The cyclic-nucleotide pathway was, hitherto, shown to involve phosphodiesterase. Blocking both pathways fully inhibits sperm thermotaxis. III. Rheotaxis When human and mouse spermatozoa are exposed to a fluid flow, roughly one half of them (i.e., both capacitated and noncapacitated spermatozoa) reorient and swim against the current. The flow, which is prolactin-triggered oviductal fluid secretion, is generated in female mice within 4 h of sexual stimulation and coitus. Thus, rheotaxis orients spermatozoa towards the fertilization site. It was proposed that capacitated spermatozoa might detach from the oviductal surface faster than non-capacitated spermatozoa, enabling them to swim into the main current. To understand the mechanism of sperm turning in rheotaxis, quantitative analysis of human sperm flagellar behavior during rheotaxis turning was carried out. The results revealed, both at the single cell and population levels, that there is no significant difference in flagellar beating between rheotaxis turning spermatozoa and free-swimming spermatozoa. This finding taken together with the constant internal Ca2+ signal, measured during rheotaxis turning, demonstrated that, in contrast to the active process of chemotaxis and thermotaxis, human sperm rheotaxis is a passive process and no flow sensing is involved. All mechanisms combined Like in any other highly essential system in biology, mammalian sperm guidance is expected to involve redundancy. Indeed, at least three guidance mechanisms are likely to act in the female genital tract, two active mechanisms — chemotaxis and thermotaxis, and a passive mechanism — rheotaxis. When one of these mechanisms is not functional for any reason, guidance is not expected to be lost and the cells should still be able to navigate to the oocyte. This resembles guidance of migrating birds, where the birds' navigation is unaffected when one of the guidance mechanisms is not functional. It has been suggested that capacitated spermatozoa, released from the sperm storage site at the isthmus, may be first actively guided by thermotaxis from the cooler sperm storage site towards the warmer fertilization site (Figure 2). Two passive processes, rheotaxis and contractions of the oviduct may assist the spermatozoa to reach there. At this location, the spermatozoa may be chemotactically guided to the oocyte-cumulus complex by the gradient of progesterone, secreted from the cumulus cells. In addition, progesterone may inwardly guide spermatozoa, already present within the periphery of the cumulus oophorus. Spermatozoa that are already deep within the cumulus oophorus may sense the more potent chemoattractant that is secreted from the oocyte and chemotactically guide themselves to the oocyte according to the gradient of this chemoattractant. It should be borne in mind, however, that this is only a model. Figure 2. A simplified scheme describing the suggested sequence of active sperm guidance mechanisms in mammals. In addition, two passive processes, sperm rheotaxis and contractions of the oviduct, may assist sperm movement towards the fertilization site. A number of observations point to the possibility that chemotaxis and thermotaxis also occur at lower parts of the female genital tract. 
For example, small, gradual estrus cycle-correlated temperature increase was measured in cows from the vagina towards the uterine horns, and a gradient of natriuretic peptide precursor A, shown to be a chemoattractant for mouse spermatozoa, was found, in decreasing concentration order, in the ampulla, isthmus, and uterotubal junction. The physiological functions, if any, of these chemical and temperature gradients are yet to be resolved. Potential clinical applications Sperm guidance by either chemotaxis or thermotaxis can potentially be used to obtain sperm populations that are enriched with capacitated spermatozoa for in vitro fertilization procedures. Indeed, sperm populations selected by thermotaxis were recently shown to have much higher DNA integrity and lower chromatin compaction than unselected spermatozoa and, in mice, to give rise to more and better embryos through intracytoplasmic sperm injection (ICSI), doubling the number of successful pregnancies. Chemotaxis and thermotaxis can also be exploited possibly as a diagnostic tool to assess sperm quality. In addition, these processes can potentially be used, in the long run, as a means of contraception by interfering with the normal process of fertilization. References Semen Cell biology
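The behavioral picture described in the chemotaxis sections above — a cell that samples the chemoattractant concentration over time and suppresses turning while the sampled signal rises — can be illustrated with a toy run-and-turn simulation. This is a minimal sketch, not the model used in the cited studies; the concentration field, swimming speed, and turn probabilities are arbitrary illustrative choices.

```python
import math
import random

def chemoattractant(x, y, scale=50.0):
    """Toy concentration field decaying with distance from an egg at the origin (arbitrary units)."""
    return math.exp(-math.hypot(x, y) / scale)

def simulate_temporal_chemotaxis(steps=5000, speed=1.0, up_turn_p=0.01, down_turn_p=0.3, seed=1):
    """Temporal gradient sensing: compare the concentration between consecutive
    time points and turn far more often when the sampled signal is falling."""
    random.seed(seed)
    x, y = 150.0, 0.0                        # start well away from the source
    heading = random.uniform(0.0, 2.0 * math.pi)
    previous = chemoattractant(x, y)
    for _ in range(steps):
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        current = chemoattractant(x, y)
        turn_p = up_turn_p if current > previous else down_turn_p
        if random.random() < turn_p:         # transient flagellar asymmetry -> a turn
            heading += random.uniform(-math.pi / 2.0, math.pi / 2.0)
        previous = current
    return math.hypot(x, y)                  # final distance from the source

if __name__ == "__main__":
    print("final distance from the egg:", round(simulate_temporal_chemotaxis(), 1))
```

With the biased turn probabilities the simulated cell drifts up the gradient toward the origin, whereas equal probabilities give an ordinary random walk; the same asymmetric turning rule is the run-and-tumble logic the article compares to Escherichia coli.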
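The thermotaxis sensitivity quoted in the temperature-sensing passage above follows from simple arithmetic on the stated gradient and body length:

```latex
\Delta T \;\approx\;
\underbrace{0.014\ \tfrac{^{\circ}\mathrm{C}}{\mathrm{mm}}}_{\text{detectable gradient}}
\times
\underbrace{0.046\ \mathrm{mm}}_{\text{body length}\ \approx\ 46\ \mu\mathrm{m}}
\;\approx\; 6.4\times 10^{-4}\ ^{\circ}\mathrm{C}
```

which is consistent with the quoted per-body-length temperature difference of <0.0006 °C.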
Sperm guidance
[ "Biology" ]
5,060
[ "Cell biology" ]
41,882,948
https://en.wikipedia.org/wiki/Alexander%20Edgar%20Douglas
Alexander Edgar Douglas (12 April 1916, in Melfort, Saskatchewan – 26 July 1981, in Ottawa) was a Canadian physicist, known for his work in molecular spectroscopy. He was president of the Canadian Association of Physicists in 1975–1976. Biography Born on a farm in Saskatchewan, Douglas received his BA and MA degrees from the University of Saskatchewan. Gerhard Herzberg was his MA thesis advisor. During World War II, Douglas interrupted his studies to do military-related research in the Physics Division of the National Research Council (NRC). After the war, he earned his PhD in physics at Pennsylvania State University under David H. Rank. In 1949 Douglas became head of the Spectroscopy Section of the NRC's Physics Division, which was directed by Gerhard Herzberg. From 1969 to 1973 Douglas was the director of the Physics Division of the NRC. He returned to his previous job as head of the Spectroscopy Section in 1973 and remained in that position until his retirement from the NRC in 1980. A. E. Douglas was the first to observe the spectra of B2, Si2, CH+, SiH+, NF, PF, BN, CN+ and many other diatomic or triatomic molecules. He first identified the 4050 group of lines observed in comets as being due to the C3 molecule. Using a method that he developed, Douglas made the first studies of the Zeeman effect in polyatomic molecules. According to Gerhard Herzberg: One of Douglas' most important contributions was his recognition of the reason for "anomalous lifetimes," that is, the failure of a simple relationship between absorption coefficient and lifetime to account for lifetimes in such compounds as NO2, SO2, C6H6. This phenomenon, referred to in the most recent literature as the Douglas effect, is closely connected with internal conversion in larger molecules. In astrophysical applications of molecular spectroscopy, Douglas is known for his identification of interstellar CH+ and of cometary C3 and for the reproduction in the laboratory of the Meinel bands of N2+ and other spectra. Honours and awards 1956 – Elected Fellow of the Royal Society of Canada 1970 – Medal for Achievement in Physics from the Canadian Association of Physicists 1979 – Elected Fellow of the Royal Society of London 1980 – International meeting on molecular spectroscopy sponsored in June in honour of A. E. Douglas by the NRC 1981 – Henry Marshall Tory Medal References 1916 births 1981 deaths Canadian physicists University of Saskatchewan alumni Eberly College of Science alumni Fellows of the Royal Society Fellows of the Royal Society of Canada People from Melfort, Saskatchewan Spectroscopists Presidents of the Canadian Association of Physicists
Alexander Edgar Douglas
[ "Physics", "Chemistry" ]
540
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
41,883,177
https://en.wikipedia.org/wiki/List%20of%20biophysically%20important%20macromolecular%20crystal%20structures
Crystal structures of protein and nucleic acid molecules and their complexes are central to the practice of most parts of biophysics, and have shaped much of what we understand scientifically at the atomic-detail level of biology. Their importance is underlined by the United Nations declaring 2014 the International Year of Crystallography, as the 100th anniversary of Max von Laue's 1914 Nobel Prize for discovering the diffraction of X-rays by crystals. This chronological list of biophysically notable protein and nucleic acid structures is loosely based on a review in the Biophysical Journal. The list includes all the first dozen distinct structures, those that broke new ground in subject or method, and those that became model systems for work in future biophysical areas of research. Myoglobin 1958 Myoglobin was the very first crystal structure of a protein molecule. Myoglobin cradles an iron-containing heme group that reversibly binds oxygen for use in powering muscle fibers, and those first crystals were of myoglobin from the sperm whale, whose muscles need copious oxygen storage for deep dives. The myoglobin 3-dimensional structure is made up of 8 alpha-helices, and the crystal structure showed that their conformation was right-handed and very closely matched the geometry proposed by Linus Pauling, with 3.6 residues per turn and backbone hydrogen bonds from the peptide CO of one residue (i) to the peptide NH of residue i+4. Myoglobin is a model system for many types of biophysical studies, especially involving the binding process of small ligands such as oxygen and carbon monoxide. Hemoglobin 1960 The hemoglobin crystal structure showed a tetramer of two related chain types and was solved at much lower resolution than the monomeric myoglobin, but it clearly had the same basic 8-helix architecture (now called the "globin fold"). Further hemoglobin crystal structures at higher resolution (PDB 1MHB, 1DHB) soon showed the coupled change of both local and quaternary conformation between the oxy and deoxy states of hemoglobin, which explains the cooperativity of oxygen binding in the blood and the allosteric effect of factors such as pH and DPG. For decades hemoglobin was the primary teaching example for the concept of allostery, as well as being an intensive focus of research and discussion on allostery. In 1909, hemoglobin crystals from >100 species were surveyed in a book relating taxonomy to molecular properties. That book was cited by Perutz in the 1938 report of horse hemoglobin crystals that began his long saga to solve the crystal structure. Hemoglobin crystals are pleochroic dark red in two directions and pale red in the third because of the orientation of the hemes, and the bright Soret band of the heme porphyrin groups is used in spectroscopic analysis of hemoglobin ligand binding. Hen-egg-white lysozyme 1965 Hen-egg-white lysozyme (PDB file 1lyz) was the first crystal structure of an enzyme (it cleaves small carbohydrates into simple sugars), used for early studies of enzyme mechanism. It contained beta sheet (antiparallel) as well as helices, and was also the first macromolecular structure to have its atomic coordinates refined (in real space). The starting material for preparation can be bought at the grocery store, and hen-egg lysozyme crystallizes very readily in many different space groups; it is the favorite test case for new crystallographic experiments and instruments.
Recent examples are nanocrystals of lysozyme for free-electron laser data collection and microcrystals for micro electron diffraction. Ribonuclease 1967 Ribonuclease A (PDB file 2RSA) is an RNA-cleaving enzyme stabilized by 4 disulfide bonds. It was used in Anfinsen's seminal research on protein folding which led to the concept that a protein's 3-dimensional structure was determined by its amino-acid sequence. Ribonuclease S, the cleaved, two-component form studied by Fred Richards, was also enzymatically active, had a nearly identical crystal structure (PDB file 1RNS), and was shown to be catalytically active even in the crystal, helping dispel doubts about the relevance of protein crystal structures to biological function. Serine proteases 1967 The serine proteases are a historically very important group of enzyme structures, because collectively they illuminated catalytic mechanism (in their case, by the Ser-His-Asp "catalytic triad"), the basis of differing substrate specificities, and the activation mechanism by which a controlled enzymatic cleavage buries the new chain end to properly rearrange the active site. The early crystal structures included chymotrypsin (PDB file 2CHA), chymotrypsinogen (PDB file 1CHG), trypsin (PDB file 1PTN), and elastase (PDB file 1EST). They also were the first protein structures that showed two near-identical domains, presumably related by gene duplication. One reason for their wide use as textbook and classroom examples was the insertion-code numbering system, which made Ser195 and His57 consistent and memorable despite the protein-specific sequence differences. Papain 1968 Papain Carboxypeptidase 1969 Carboxypeptidase A is a zinc metalloprotease. Its crystal structure (PDB file 1CPA) showed the first parallel beta structure: a large, twisted, central sheet of 8 strands with the active-site Zn located at the C-terminal end of the middle strands and the sheet flanked on both sides with alpha helices. It is an exopeptidase that cleaves peptides or proteins from the carboxy-terminal end rather than internal to the sequence. Later a small protein inhibitor of carboxypeptidase was solved (PDB file 4CPA) that mechanically stops the catalysis by presenting its C-terminal end just sticking out from between a ring of disulfide bonds with tight structure behind it, preventing the enzyme from sucking in the chain past the first residue. Subtilisin 1969 Subtilisin (PDB file 1sbt ) was a second type of serine protease with a near-identical active site to the trypsin family of enzymes, but with a completely different overall fold. This gave the first view of convergent evolution at the atomic level. Later, an intensive mutational study on subtilisin documented the effects of all 19 other amino acids at each individual position. Lactate dehydrogenase 1970 Lactate dehydrogenase Trypsin inhibitor 1970 Basic pancreatic trypsin inhibitor, or BPTI (PDB file 2pti), is a small, very stable protein that has been a highly productive model system for study of super-tight binding, disulfide bond (SS) formation, protein folding, molecular stability by amino-acid mutations or hydrogen-deuterium exchange, and fast local dynamics by NMR. Biologically, BPTI binds and inhibits trypsin while stored in the pancreas, allowing activation of protein digestion only after trypsin is released into the stomach. Rubredoxin 1970 Rubredoxin (PDB file 2rxn) was the first redox structure solved, a minimalist protein with the iron bound by 4 Cys sidechains from 2 loops at the top of β hairpins. 
It diffracted to 1.2 Å, enabling the first reciprocal-space refinement of a protein (4,5rxn). (Note that 4rxn was done without geometry restraints.) Archaeal rubredoxins account for many of the highest-resolution small structures in the PDB. Insulin 1971 Insulin (PDB file 1INS) is a hormone central to the metabolism of sugar and fat storage, and important in human diseases such as obesity and diabetes. It is biophysically notable for its Zn binding, its equilibrium between monomer, dimer, and hexamer states, its ability to form crystals in vivo, and its synthesis as a longer "pro" form which is then cleaved to fold up as the active 2-chain, SS-linked monomer. Insulin was a success of NASA's crystal-growth program on the Space Shuttle, producing bulk preparations of very uniform tiny crystals for controlled dosage. Staphylococcal nuclease 1971 Staphylococcal nuclease Cytochrome C 1971 Cytochrome C T4 phage lysozyme 1974 T4 phage lysozyme Immunoglobulins 1974 Immunoglobulins Superoxide dismutase 1975 Cu,Zn Superoxide dismutase Transfer RNA 1976 Transfer RNA Triose phosphate isomerase 1976 Triose phosphate isomerase Pepsin-like aspartic proteases 1976 Rhizopuspepsin 1976 Endothiapepsin 1976 Penicillopepsin Later structures (1978 onwards) 1978 Icosahedral virus 1981 Dickerson B-form DNA dodecamer 1981 Crambin 1985 Calmodulin 1985 DNA polymerase 1985 Photosynthetic reaction center: Pairs of bacteriochlorophylls (green) inside the membrane capture energy from sunlight, which then travels by many steps to become available at the heme groups (red) in the cytochrome-C module at the top. This was the first crystal structure solved for a membrane protein, a milestone recognized by a Nobel Prize to Hartmut Michel, Johann Deisenhofer, and Robert Huber. 1986 Repressor/DNA interactions 1987 Major histocompatibility complex 1987 Ubiquitin 1987 ROP protein 1989 HIV-1 protease 1990 Bacteriorhodopsin 1991 GCN4 coiled coil 1991 HIV-1 reverse transcriptase 1993 Beta helix of Pectate lyase 1994 Collagen 1994 Barnase/barstar complex 1994 F1 ATPase 1995 Heterotrimeric G proteins 1996 Green fluorescent protein 1996 CDK/cyclin complex 1996 Kinesin motor protein 1997 GroEL/ES chaperone 1997 Nucleosome 1998 Group I self-splicing intron 1998 DNA topoisomerases perform the biologically important and necessary job of untangling DNA strands or helices that get entwined with each other or twisted too tightly during normal cellular processes such as the transcription of genetic information. 1998 Tubulin alpha/beta dimer 1998 Potassium channel 1998 Holliday junction 2000 Ribosomes are a central part of biology and biophysics, and they first became accessible structurally in 2000. 2000 AAA+ ATPase 2002 Ankyrin repeats 2003 TOP7 protein design 2004 Cyanobacterial Circadian clock proteins 2004 Riboswitch 2006 Human exosome 2007 G-protein-coupled receptor 2009 The vault particle is an intriguing new discovery of a large hollow particle common in cells, with several different suggestions for its possible biological function. The crystal structures (PDB files 2zuo, 2zv4, 2zv5 and 4hl8) show that each half of the vault is made up of 39 copies of a long 12-domain protein that swirl together to form the enclosure. Disorder at the very top and bottom ends suggests openings for possible access to the interior of the vault. References Structural biology Macromolecular crystal structures
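The α-helix geometry cited in the myoglobin entry above (3.6 residues per turn, backbone hydrogen bonds between residue i and residue i+4) lends itself to a tiny worked example. This is an illustrative sketch only; the helix length used below is arbitrary, and the 1.5 Å rise per residue is a standard textbook value assumed here rather than stated in the list itself.

```python
RESIDUES_PER_TURN = 3.6   # Pauling geometry cited in the myoglobin entry above
RISE_PER_RESIDUE = 1.5    # angstroms; standard textbook value (assumption, not from the list)

def helix_hbond_pairs(n_residues):
    """Backbone hydrogen-bond pairs (CO of residue i to NH of residue i+4), 1-indexed."""
    return [(i, i + 4) for i in range(1, n_residues - 3)]

def helix_summary(n_residues):
    """Turns, approximate length, and H-bond count for an ideal alpha helix."""
    return {
        "turns": round(n_residues / RESIDUES_PER_TURN, 2),
        "length_angstrom": round(n_residues * RISE_PER_RESIDUE, 1),
        "hbonds": len(helix_hbond_pairs(n_residues)),
    }

# Example: a 24-residue helix (length chosen only for illustration).
print(helix_summary(24))
print(helix_hbond_pairs(24)[:3], "...")
```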
List of biophysically important macromolecular crystal structures
[ "Physics", "Chemistry", "Biology" ]
2,425
[ "Applied and interdisciplinary physics", "Biochemistry", "Molecular-biology-related lists", "Biophysics", "nan", "Molecular biology", "Structural biology" ]
41,886,790
https://en.wikipedia.org/wiki/Carboxycyclophosphamide
Carboxycyclophosphamide is an inactive metabolite of the cytotoxic antineoplastic drug cyclophosphamide. In the metabolic pathway of cyclophosphamide inactivation, cyclophosphamide is first metabolized to 4-hydroxycyclophosphamide, which then partially tautomerizes into aldophosphamide. Aldophosphamide, in turn, is oxidized into carboxycyclophosphamide by the enzyme ALDH (aldehyde dehydrogenase). References Human drug metabolites Nitrogen mustards Phosphorodiamidates Chloroethyl compounds
Carboxycyclophosphamide
[ "Chemistry" ]
141
[ "Human drug metabolites", "Organic compounds", "Chemicals in medicine", "Organic compound stubs", "Organic chemistry stubs" ]
41,889,305
https://en.wikipedia.org/wiki/Biotechcellence
Biotechcellence is a national-level technical symposium that was established through the co-operative efforts of the Department of Biotechnology (DBT) and the Association of Biotechnologists of Anna University in India. The symposium aims to highlight advancements in biotechnology that have taken place over the years in the medical, industrial, and agricultural fields. Centre for Biotechnology The Centre for Biotechnology (CBT) was established in 1987 at Anna University, supported by the Department of Biotechnology, Delhi and the University Grants Commission, Delhi. Its objectives were: To provide educational and training facilities in different areas of Biotechnology To carry out fundamental research in the frontier areas of Biotechnology To promote research and consultancy activities in the development of various areas of Biotechnology The Centre for Biotechnology was one of the first departments to offer Industrial Biotechnology (IBT) as a professional course, and later began courses in Pharmaceutical Technology and Food Technology as added specializations. History Biotechcellence was started in 1994 by the Association of Biotechnologists, Anna University. Biotechcellence has hosted many notable people of both scientific and industrial backgrounds, including James Watson, Jules Hoffmann, and Dr. Madhan Babu of Cambridge University. Biotechcellence 2017 Biotechcellence 2017, the 23rd edition of Biotechcellence, was hosted from March 9–11, 2017 at Anna University. It consisted of the symposium, events, and workshops. The following events were held as a part of Biotechcellence 2017: Oral Presentation Poster Presentation Bacteriography Pick Your Brains Cerebrus 5+ Online Events The workshops held as a part of Biotechcellence 2017 were: Stem Cell Technology Food Adulteration Analysis Bio-Informatics References Further reading Hindu article on Biotechcellence, 2006 Hindu article on Biotechcellence, 2008 Anna University Biotechnology organizations Academic conferences Biotechnology in India 1994 establishments in Tamil Nadu Organizations established in 1994
Biotechcellence
[ "Engineering", "Biology" ]
375
[ "Biotechnology in India", "Biotechnology organizations", "Biotechnology by country" ]
41,890,491
https://en.wikipedia.org/wiki/Nadezhda%20%28cockroach%29
Nadezhda (Надежда, Hope) was a cockroach that was sent into space during the Foton-M 3 bio-satellite flight between September 14 and 26, 2007 by Russian scientists. Scientists monitoring the mission from Voronezh announced that Nadezhda had successfully produced 33 offspring on Earth. While it was reported that Nadezhda's offspring were the first earthlings to be conceived in microgravity, Japanese rice fish successfully conceived and produced offspring in microgravity as part of the IML-2 experiment aboard STS-65 in 1994. Nadezhda and the rest of the insects traveled inside a special sealed container, and a video camera filmed the whole process. What was considered unnatural for the newborn cockroaches was that their carapace had darkened in colour much earlier than in cockroaches raised in natural conditions, which develop that darker tone later in their life cycle. In all other respects, however, the condition and capacities of the cockroaches remained normal. Later it was reported that Nadezhda's grandchildren, born to one of the space-born insects, had given birth on Earth to normal cockroaches, with a life cycle and development similar to that of any other cockroach. See also Animals in space References Animals in space Individual cockroaches
Nadezhda (cockroach)
[ "Chemistry", "Biology" ]
272
[ "Animal testing", "Space-flown life", "Animals in space" ]
45,143,673
https://en.wikipedia.org/wiki/Paul%20Parks
Paul Parks (May 7, 1923 – August 1, 2009) was an American civil engineer. Parks became the first African American Secretary of Education for Massachusetts, and was appointed by Governor Michael Dukakis to serve from 1975 until 1979. Mayor Raymond Flynn appointed Parks to the Boston School Committee, where he was also the first African American. Parks fought as a combat engineer for the U.S. Military and took part in the Normandy landings on Omaha Beach. Following his service in World War II, Parks was renowned for his work and dedication to desegregating Boston public schools through his role in the execution of the Boston Model City program, a program designed to use federal funding to develop selected areas in Boston and achieve economic stability. Parks was also a member of the Massachusetts State Advisory Committee to the U.S. Commission on Civil Rights, in which he was involved in the development of METCO, a program dedicated to resolving segregation in Boston public schools through desegregated busing and increased enrollment of black students in predominantly white schools. Early life and education Parks’s father, Cleab, was a disabled World War I veteran of Seminole descent. His mother, Hazel, was a social worker. Parks grew up in Indianapolis, which was characterized by its segregated education system at the time. He attended Crispus Attucks High School, an all-black institution in Indianapolis. Parks was awarded a $4,000 scholarship for winning an oratory contest in high school, and this monetary prize contributed to his college education when he enrolled at Purdue University in 1941. He was a member of the Omega Psi Phi fraternity. Before completing his Bachelor of Science in civil engineering, his education was interrupted in 1942 when he was drafted to fight in World War II as a combat engineer. Afforded by the benefits of the Serviceman's Readjustment Act, he resumed formal education at Purdue to complete his civil engineering degree, and he later earned a doctorate in engineering from Northeastern University after moving to Boston. Career Military service In 1942, Parks was drafted into the United States Army and was subsequently sent to Europe as a combat engineer in 1943 during World War II, where he served until 1945. Parks was a member of the 365th Engineer Regiment that sailed out of New York City on September 30, 1943 en route to Europe. As a combat engineer, his primary role was the detonation of mines. On June 6, 1944, the Allied Forces invaded the coast of Normandy on D-Day, and Parks was present on Omaha Beach during this invasion. Parks was also involved with the liberation of the Dachau concentration camp in 1945 after being detached from his original engineer unit. At Dachau, responsibilities included identifying and burying bodies. With the conclusion of his Western European military campaigns, Parks was eventually relocated to the Pacific South to assist in the liberation of the Philippines. Civil engineer Upon discharge from the military, Parks's initial work experience came in the form of planning and designing the new freeway system in Indiana as part of the Indiana Department of Transportation (1949-1951). He then moved to Boston to join Stone and Webster (1951), where he contributed to the design of dams and hydroelectric powerhouses as a hired engineer. At Fay, Spofford & Thorndike (1951-1952), Parks helped with the design of the New Jersey Garden State Parkway. 
Following these experiences, he worked on the design of missiles and contributed to nuclear engineering research at Chance Vought Aircraft (1952-1953) and Pratt and Whitney Aircraft (1953-1958), respectively. In 1957, Parks co-founded an architectural firm called Associated Architects and Engineer with fellow African-American Henry Clifford Boles. Notable commissions for the firm included the Methuen Junior High School, the Saint Stephen's Episcopal Church Parish Hall, and a major hospital in Philadelphia, amongst others. The firm was eventually dissolved in 1967. Parks's engineering work also led him to numerous international opportunities. While still with his architectural firm, Parks traveled to regions of West Africa in 1967, including Liberia, Ivory Coast, Ghana, and Nigeria, to assist in housing projects. Furthermore, he was invited by the Israeli government in 1968 to serve as a consultant to its public systems involving education, housing, health, and justice. Parks was a member of professional organizations including the American Society of Civil Engineers and the National Society of Professional Engineers. Public service Parks was well known for his involvement in desegregating public schools in Boston, Massachusetts. He was appointed as Massachusetts's Secretary of Educational Affairs, succeeding Joseph M. Cronin, who was the first to hold that role; Parks was appointed in 1974 by Governor-elect Michael Dukakis. Parks was the first African American to be selected as a member of Dukakis's cabinet. As the Secretary of Educational Affairs, Parks was also the Executive Director of the Boston Model City Program, with the overarching goal of desegregating Boston schools and busing. Parks formed a council that would frequently report back to Governor Dukakis amidst a host of issues arising from the busing programs. Two decades after his appointment as Secretary of Educational Affairs, Parks became the chairman of the Boston School Committee. As the Chairman of the NAACP Education Committee, Parks was responsible for the Boston Model City Program. Parks identified the growing economic issues within the selected areas where the program was in effect, noting that the unemployment rate there was four times that of metropolitan Boston. Parks attributed these economic setbacks to the funding cuts to the program and advocated for it to remain in effect amidst several discussions of its termination. A statistical analysis conducted by Parks and his colleagues estimated that continued cuts to funding and termination of the program would cause more than $51 million in economic damage and a loss of 5,000 jobs. The program used federal funding of approximately $20 million to provide aid to 60,000 individuals in Dorchester, Jamaica Plain, and Roxbury, the last of which was the site of a march in 1963 to protest segregation in Boston schools prior to Parks's appointment as Chairman. In 1964, as a member of the Massachusetts State Advisory Committee to the U.S. Commission on Civil Rights, Parks met with other members of the committee to review evidence of segregation in Boston public schools at that time. Parks and colleagues found that reading scores in black schools were far lower than those of all-white public schools. Furthermore, they found that school administrators claimed a separate but equal quality of education for black and white students, despite evidence to the contrary. The Kiernan Commission, spearheaded by Dr.
Owen Kiernan in 1964, gathered exceptional individuals working in the field of education and business to assess the status and quality of education in Boston public schools. They returned with evidence to back Parks and the committee's claims of unequal quality of education and found that at least 32 schools were subject to this. Despite the condemning evidence presented to the Boston School Committee, they rejected the proof and dismissed the report. In 1965, Parks and the Massachusetts State Advisory Committee to the U.S. Commission on Civil Rights founded Operation Exodus, a program that buses black students to white schools outside of traditionally black neighborhoods in Boston. Additionally, Parks worked on establishing the Metropolitan Council for Educational Opportunity (METCO), a program that further supported racial desegregation in Boston schools by diversifying the student body and urged black students to enroll in predominantly white schools. Personal life Parks was of African American, Muscogee, and Seminole ancestry. Parks married Dorothy Alexander on February 2, 1947 with whom he had three children: Paul Jr., Pamela, and Stacy. In 1972, Parks married Virginia Loftman. Parks died of cancer in 2009. References External links Northeastern interview 1923 births 2009 deaths American people of Seminole descent Muscogee people United States Army personnel of World War II Purdue University College of Engineering alumni Northeastern University alumni Omega Psi Phi Activists for African-American civil rights African-American engineers 20th-century American engineers State cabinet secretaries of Massachusetts African-American state cabinet secretaries American Society of Civil Engineers Vought United States Army soldiers 20th-century African-American people 21st-century African-American people Crispus Attucks High School alumni
Paul Parks
[ "Engineering" ]
1,664
[ "American Society of Civil Engineers", "Civil engineering organizations" ]
45,145,987
https://en.wikipedia.org/wiki/Mikhail%20Katsnelson
Mikhail Iosifovich Katsnelson (Russian: Михаил Иосифович Кацнельсон; born 10 August 1957) is a Russian-Dutch professor of theoretical physics. He works at Radboud University Nijmegen, where he specializes in theoretical solid-state physics and many-body quantum physics. He is one of the most cited scientists in the field of condensed matter physics. Early life Katsnelson was born in Magnitogorsk, Russia. From 1972 to 1977 he attended and then graduated from the Ural State University in Sverdlovsk. In 1980 he obtained his Ph.D. from the Institute of Metal Physics in the same city, where his advisor was Sergey V. Vonsovsky. In 1985 he defended his Doctor of Science thesis, Strong electron correlations in transition metals, their alloys and compounds, and from 1990 to 1998 he was a visiting professor at the Max Planck Institute. Career From 2004 to 2007 Katsnelson worked with many Russian and Dutch physicists on the adsorption of nitrogen dioxide on graphene and found that its closed-shell dimer creates only weak doping, a change reflected in the graphene's density of states. He also showed that graphene is well suited to chemical sensing and helped explain its ability to detect single molecules. On 23 September 2007 he, together with Annalisa Fasolino, showed that thermal fluctuations, combined with the flexibility of carbon's chemical bonds, give rise to intrinsic ripples in graphene on the order of 80 angstroms. In 2010 Katsnelson worked with physicists including Rashid Jalil and Rahul R. Nair and the nanotechnologist Fredrik Schedin of the University of Manchester, and showed that attaching fluorine atoms to the carbon atoms of graphene creates a new derivative, fluorographene, which is stable in air even at elevated temperatures. In 2012 he and his colleagues used a prototype device containing graphene heterojunctions combined with either thin boron nitride or molybdenum disulfide acting as a vertical transport barrier, and showed that such devices are promising for high-frequency operation and large-scale integration. Since 2014 Katsnelson has been a member of the Royal Netherlands Academy of Arts and Sciences. Awards Lenin Komsomol Prize (1988) Knight of the Order of the Netherlands Lion (2011) Spinoza Prize (2013) Hamburg Prize for Theoretical Physics (2016) References External links 1957 births Living people 21st-century Dutch physicists 20th-century Russian physicists People from Magnitogorsk Spinoza Prize winners Theoretical physicists Academic staff of Radboud University Nijmegen Recipients of the Lenin Komsomol Prize Members of the Royal Netherlands Academy of Arts and Sciences Ural State University alumni
Mikhail Katsnelson
[ "Physics" ]
549
[ "Theoretical physics", "Theoretical physicists" ]
45,154,254
https://en.wikipedia.org/wiki/Mathematical%20Foundations%20of%20Quantum%20Mechanics
Mathematical Foundations of Quantum Mechanics (German: Mathematische Grundlagen der Quantenmechanik) is a quantum mechanics book written by John von Neumann in 1932. It is an important early work in the development of the mathematical formulation of quantum mechanics. The book mainly summarizes results that von Neumann had published in earlier papers. Von Neumann formalized quantum mechanics using the concept of Hilbert spaces and linear operators. He acknowledged the previous work by Paul Dirac on the mathematical formalization of quantum mechanics, but was skeptical of Dirac's use of delta functions. He wrote the book in an attempt to be even more mathematically rigorous than Dirac. It was von Neumann's last book in German; afterwards he started publishing in English. Publication history The book was originally published in German in 1932 by Springer. It was translated into French by Alexandru Proca in 1946, and into Spanish in 1949. An English translation by Robert T. Beyer was published in 1955 by Princeton University Press. A Russian translation, edited by Nikolay Bogolyubov, was published by Nauka in 1964. A new English edition, edited by Nicholas A. Wheeler, was published in 2018 by Princeton University Press. Table of contents According to the 2018 version, the main chapters are: Introductory considerations Abstract Hilbert space The quantum statistics Deductive development of the theory General considerations The measuring process No hidden variables proof One significant passage is its mathematical argument against the idea of hidden variables. Von Neumann's claim rested on the assumption that any linear combination of Hermitian operators represents an observable, and that the expectation value of such a combined operator is the corresponding combination of the expectation values of the operators themselves. Von Neumann makes the following assumptions: for an observable R, a function f of that observable is represented by the operator f(R); the sum of observables R and S is represented by the operator R + S, independently of the mutual commutation relations; the correspondence between observables and Hermitian operators is one to one; if the observable R is a non-negative operator, then its expected value ⟨R⟩ ≥ 0. Additivity postulate: for arbitrary observables R and S, and real numbers a and b, we have ⟨aR + bS⟩ = a⟨R⟩ + b⟨S⟩ for all possible ensembles. Von Neumann then shows that one can write ⟨R⟩ = Tr(UR) for some operator U, where U and R are given by their matrix elements in some basis. The proof concludes by noting that U must be Hermitian and non-negative definite (U ≥ 0) by construction. For von Neumann, this meant that the statistical operator representation of states could be deduced from the postulates. Consequently, there are no "dispersion-free" states: it is impossible to prepare a system in such a way that all measurements have predictable results. But if hidden variables existed, then knowing the values of the hidden variables would make the results of all measurements predictable, and hence there can be no hidden variables. Von Neumann argues that if dispersion-free states were found, assumptions 1 to 3 should be modified. Von Neumann concludes: Rejection This proof was rejected as early as 1935 by Grete Hermann, who found a flaw in it. The additivity postulate above holds for quantum states, but it need not apply to measurements on dispersion-free states, specifically when considering non-commuting observables. Dispersion-free states are only required to recover additivity when averaging over the hidden parameters.
For example, for a spin-1/2 system, measurements of σx + σy can take the values ±√2 in a dispersion-free state, but independent measurements of σx and σy can each only take the values ±1 (so their sum can only be ±2 or 0). Thus there is still the possibility that a hidden-variable theory could reproduce quantum mechanics statistically. However, Hermann's critique remained relatively unknown until 1974, when it was rediscovered by Max Jammer. In 1952, David Bohm constructed the Bohmian interpretation of quantum mechanics, a hidden-variable theory that reproduces the statistical predictions of quantum mechanics, suggesting a limit to the validity of von Neumann's proof. The problem was brought back to wider attention by John Stewart Bell in 1966. Bell showed that the consequences of the additivity assumption are at odds with the results of incompatible measurements, which are not explicitly taken into account in von Neumann's considerations. Reception It was considered the most complete book written on quantum mechanics at the time of its release. It was praised for its axiomatic approach. A review by Jacob Tamarkin compared von Neumann's book to what the works of Niels Henrik Abel or Augustin-Louis Cauchy did for mathematical analysis in the 19th century, but for quantum mechanics. Freeman Dyson said that he learned quantum mechanics from the book. Dyson remarks that in the 1940s von Neumann's work was not so well cited in the English-speaking world, partly because the book was not translated into English until 1955, but also because the worlds of mathematics and physics were significantly distant at the time. Works adapted in the book See also Dirac–von Neumann axioms The Principles of Quantum Mechanics by Paul Dirac Notes References External links Full online text of the 1932 German edition (facsimile) at the University of Göttingen. 1932 non-fiction books Mathematics books Physics textbooks Quantum mechanics Hidden variables
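The spin-1/2 counterexample above can be made concrete with the Pauli matrices; this is a standard worked version of the argument rather than Hermann's original wording:

```latex
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad
(\sigma_x + \sigma_y)^2 = 2I \;\Rightarrow\; \text{eigenvalues of } \sigma_x + \sigma_y \text{ are } \pm\sqrt{2}.
```

Since σx and σy separately have eigenvalues ±1, a dispersion-free value assignment obeying the additivity postulate would force ±√2 to equal a sum of two values from {+1, −1}, i.e. ±2 or 0, which is impossible; only the quantum-mechanical averages, not the individual dispersion-free values, need to add.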
Mathematical Foundations of Quantum Mechanics
[ "Physics" ]
1,027
[ "Quantum mechanics", "Works about quantum mechanics" ]
31,061,495
https://en.wikipedia.org/wiki/Amino%20acid%20response
Amino acid response is the mechanism triggered in mammalian cells by amino acid starvation. The amino acid response pathway is triggered by a shortage of any essential amino acid and results in an increase in the activating transcription factor ATF4, which in turn affects many processes through various pathways to limit or increase the production of other proteins. Essential amino acids are crucial to maintain homeostasis within an organism. Diet plays an important role in the health of an organism, as evidence ranging from human epidemiological data to experimental data in model organisms suggests that diet-dependent pathways impact a variety of adult stem cells. Amino acid response pathway Amino acid deficiency detection At low amino acid concentrations, GCN2 is activated by the increased level of uncharged tRNA molecules. Uncharged tRNA activates GCN2 through displacement of the protein kinase moiety from a bipartite tRNA-binding domain. Activated GCN2 phosphorylates itself and eIF2α, triggering a transcriptional and translational response that restores amino acid homeostasis by affecting the utilization, acquisition, and mobilization of amino acids in the organism. Increased synthesis of ATF4 In homeostasis, eIF2 combines with guanosine triphosphate (GTP) to initiate translation of an mRNA; initiation is accompanied by hydrolysis of the GTP so that the process can start again. However, during an essential amino acid shortage, eIF2α is phosphorylated (P-eIF2α) and binds tightly to eIF2B, preventing GDP from being exchanged back for GTP, so that fewer mRNAs are translated and fewer proteins are synthesized. This response nevertheless causes translation to be increased for some mRNAs, including that of ATF4, which regulates the transcription of other genes. Proteins increased by the amino acid response Some of the proteins whose concentration is increased by the amino acid response include: Membrane transporters Transcription factors from the basic region/leucine zipper (bZIP) superfamily Growth factors Metabolic enzymes Leucine starvation Starvation induces the lysosomal retention of leucine in a manner that requires Rag GTPases and the lysosomal Ragulator protein complex. PCAF is recruited specifically to the CHOP amino acid response element to enhance the ATF4 transcriptional activity. References Transcription factors
Amino acid response
[ "Chemistry", "Biology" ]
465
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
31,062,681
https://en.wikipedia.org/wiki/Sharklet%20%28material%29
Sharklet, manufactured by Sharklet Technologies, is a bio-inspired plastic sheet product structured to impede microorganism growth, particularly bacterial growth. It is marketed for use in hospitals and other places with a relatively high potential for bacteria to spread and cause infections. The inspiration for Sharklet's texture came through analysis of the texture of shark skin, which does not attract barnacles or other biofouling, unlike ship hulls and other smooth surfaces. The texture was later found to also repel microbial activity. History Sharklet is a bio-inspired material that was invented by Anthony Brennan, a materials science and engineering professor at the University of Florida, while working to improve antifouling technology for ships and submarines at Pearl Harbor. Brennan noticed that sharks do not get fouled. He discovered that shark skin denticles are structured in a characteristic diamond-repeating micro-pattern with millions of small ribs at the micrometer scale. His mathematical model for the texture of a substance that would deter microorganisms from settling corresponds to the width-to-height ratio of shark denticle riblets. When compared to smooth surfaces, the first test resulted in an 85% reduction in green algae settlement. Stress gradient Adherence prevention and translocation restriction have been demonstrated and are believed to significantly reduce the risk of device-associated infections. Sharklet's topography creates mechanical stress on settling bacteria, a phenomenon known as mechanotransduction. The surface variations induce stress gradients within the plane, which disrupt normal cell functions, forcing the microorganism to adjust its contact area on each topographical feature to equalize the stresses. Sharklet is made, however, with the same material as other plastics. Sharklet micro-patterns can be incorporated onto the surfaces of a variety of medical devices during the manufacturing process. Sharklet micro-patterns have been tested to control the bio-adhesion of marine microorganisms, pathogenic bacteria, and eukaryotic cells. They reduce S. aureus and S. epidermidis colonization in a simulated vascular environment by around 70% when compared to smooth controls. This micro-pattern similarly reduces platelet adhesion and fibrin sheath formation by approximately 80%. An in vitro study found that it reduced the colonization of S. aureus and P. aeruginosa bacterial pathogens in a central venous catheters-relevant thermoplastic polyurethane. See also Antimicrobial polymer Bioinspiration Micropatterning Nanomaterial Shark skin References External links Technologies Inspired by Sharks Antimicrobials Products introduced in 2004 Biomimetics
Sharklet (material)
[ "Engineering", "Biology" ]
540
[ "Biological engineering", "Antimicrobials", "Bionics", "Bioinformatics", "Biomimetics", "Biocides" ]
31,070,915
https://en.wikipedia.org/wiki/Principles%20of%20motion%20sensing
Sensors able to detect three-dimensional motion have been commercially available for several decades and have been used in automobiles, aircraft and ships. However, initial size, power consumption and price had prevented their mass adoption in consumer electronics. While there are other kinds of motion detector technologies available commercially, there are four principal types of motion sensors that are important for motion processing in the consumer electronics market. Motion sensors Accelerometers Accelerometers measure linear acceleration and tilt angle. Single- and multi-axis accelerometers detect the combined magnitude and direction of linear, rotational and gravitational acceleration. They can be used to provide limited motion sensing functionality. For example, a device with an accelerometer can detect rotation from vertical to horizontal state in a fixed location. As a result, accelerometers are primarily used for simple motion sensing applications in consumer devices, such as changing the screen of a mobile device from portrait to landscape orientation. Apple iPhones and the Nintendo Wii incorporate accelerometers. Gyroscopes Gyroscopes measure the angular rate of rotational movement about one or more axes. Gyroscopes can measure complex motion accurately in multiple dimensions, tracking the position and rotation of a moving object, unlike accelerometers, which can only detect the fact that an object has moved or is moving in a particular direction. Further, unlike accelerometers and compasses, gyroscopes are not affected by errors related to external environmental factors such as gravitational and magnetic fields. Hence, gyroscopes greatly enhance the motion sensing capabilities of devices and are used for advanced motion sensing applications in consumer devices, such as full gesture and movement detection and simulation in video gaming. The Nintendo Wii MotionPlus accessory and the Nintendo 3DS incorporate gyroscopes. Compasses Magnetic sensors, commonly referred to as compasses, detect magnetic fields and measure their absolute position relative to Earth's magnetic north and nearby magnetic materials. Information from magnetic sensors can also be used to correct errors from other sensors such as accelerometers. One example of how compass sensors are used in consumer devices is reorienting a displayed map to match up with the general direction a user is facing. Barometers Pressure sensors, also known as barometers, measure relative and absolute altitude through the analysis of changing atmospheric pressure. Pressure sensors can be used in consumer devices for sports and fitness or location-based applications where information on elevation can be valuable. Additional An example of an immersive musical experience through motion and gesture is accomplished with a musical infrared harp. References Motion (physics) Sensors
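As a rough illustration of how raw sensor readings are turned into the quantities described above, the following Python sketch estimates a tilt (pitch and roll) angle from a three-axis accelerometer at rest, and an altitude from a barometer reading using one widely used form of the barometric formula that assumes a standard atmosphere. The sensor values, the sea-level reference pressure and the formula constants are illustrative assumptions, not values from any particular device.

```python
import math

# Example (made-up) accelerometer reading in units of g, device roughly level and at rest
ax, ay, az = 0.02, -0.15, 0.98

# Tilt from gravity: at rest the accelerometer measures only the gravity vector,
# so pitch and roll can be estimated from the ratios of its components.
pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
roll = math.degrees(math.atan2(ay, az))
print(f"pitch ~ {pitch:.1f} deg, roll ~ {roll:.1f} deg")

# Altitude from a barometer reading, assuming a standard atmosphere and a
# sea-level reference pressure of 1013.25 hPa.
def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25):
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

print(f"altitude ~ {pressure_to_altitude_m(899.0):.0f} m")  # roughly 1 km above sea level
```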
Principles of motion sensing
[ "Physics", "Technology", "Engineering" ]
532
[ "Physical phenomena", "Measuring instruments", "Motion (physics)", "Space", "Mechanics", "Spacetime", "Sensors" ]
32,046,774
https://en.wikipedia.org/wiki/Rehbinder%20effect
In physics, the Rehbinder effect is the reduction in the hardness and ductility of a material, particularly metals, by a surfactant film. The effect is named for the Soviet scientist Pyotr Rehbinder, who discovered it in 1928. A proposed explanation for this effect is the disruption of surface oxide films and the reduction of surface energy by surfactants. The effect is of particular importance in machining, as lubricants reduce cutting forces. References Further reading Condensed matter physics Surface science
Rehbinder effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
101
[ "Materials science stubs", "Phases of matter", "Materials science", "Surface science", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
32,050,924
https://en.wikipedia.org/wiki/WSSUS%20model
The WSSUS (Wide-Sense Stationary Uncorrelated Scattering) model provides a statistical description of the transmission behavior of wireless channels. "Wide-sense stationarity" means that the second-order moments of the channel are stationary in time, i.e. they depend only on the time difference, while "uncorrelated scattering" means that channel contributions arriving with different delays τ, originating from different scatterers, are mutually uncorrelated. Modelling of mobile channels as WSSUS has become popular among specialists. The model was introduced by Phillip A. Bello in 1963. A commonly used description of time-variant channels applies the set of Bello functions and the theory of stochastic processes. References Kurth, R. R.; Snyder, D. L.; Hoversten, E. V. (1969) "Detection and Estimation Theory", Massachusetts Institute of Technology, Research Laboratory of Electronics, Quarterly Progress Report, No. 93 (IX), 177–205 Primary documents Bello, Phillip A., "Characterization of randomly time-variant linear channels", IEEE Transactions on Communications Systems, vol. 11, iss. 4, pp. 360–393, December 1963. External links Wide Sense Stationary Uncorrelated Scattering at www.WirelessCommunication.NL Information theory Scattering Scattering, absorption and radiative transfer (optics) Signal processing Stochastic models Telecommunication theory Wireless Wireless networking
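As a rough numerical illustration of the two defining properties, the Python/NumPy sketch below builds a simple tapped-delay-line channel in which each delay tap is an independent, zero-mean complex Gaussian (Rayleigh-fading) process. Independence across taps corresponds to uncorrelated scattering in delay, and using stationary tap processes makes the time autocorrelation depend only on the lag. All sizes and the power-delay profile are arbitrary example values, not parameters from Bello's paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samples = 4, 20000                      # arbitrary example sizes
tap_power = np.array([1.0, 0.5, 0.25, 0.1])       # example power-delay profile

# Each tap h_l[n] is an i.i.d. zero-mean complex Gaussian sequence, independent
# of all other taps (uncorrelated scattering) and stationary in time (WSS).
h = (rng.standard_normal((n_taps, n_samples)) +
     1j * rng.standard_normal((n_taps, n_samples))) * np.sqrt(tap_power[:, None] / 2)

# Empirical cross-correlation between different taps is close to 0 (uncorrelated scattering)
print(abs(np.mean(h[0] * np.conj(h[1]))))

# Empirical autocorrelation of one tap depends only on the lag (wide-sense stationarity);
# for this memoryless example it is ~tap_power at lag 0 and ~0 at other lags.
for lag in (0, 1, 5):
    r = np.mean(h[0, lag:] * np.conj(h[0, :n_samples - lag]))
    print(lag, abs(r))
```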
WSSUS model
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Technology", "Engineering" ]
286
[ " absorption and radiative transfer (optics)", "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Wireless networking", "Computer networks engineering", "Wireless", "Scattering", "Information theory", "Computer science", "Condensed matter phy...
25,007,627
https://en.wikipedia.org/wiki/Serotonylation
Serotonylation is a receptor-independent signaling mechanism by which serotonin activates intracellular processes by forming long-lasting covalent bonds to proteins. It occurs through the modification of proteins by the attachment of serotonin to their glutamine residues. This happens through the enzyme transglutaminase and the creation of glutamyl-amide bonds. This process occurs following serotonin transport into the cell, rather than at the plasma membrane as with the brief interactions that serotonin has when it activates 5-HT receptors. Functions Serotonylation is the process by which serotonin effects the exocytosis of alpha-granules from platelets (also known as thrombocytes). This involves the serotonylation of small GTPases such as Rab4 and RhoA. It has been suggested that "further understanding of the specific hormonal role of 5-HT in hemostasis and thrombosis is important to possibly prevent and treat deleterious hemorrhagic and cardiovascular disorders." Serotonylation has recently been identified as playing a critical role in pulmonary hypertension. Serotonylation, also through small GTPases, is involved in the process by which serotonin controls the release of insulin from beta cells in the pancreas and so the regulation of blood glucose levels. This role helps explain why defects in transglutaminase can lead to glucose intolerance. Though small GTPases are involved, the existence of a large amount of protein-bound serotonin suggests the presence of other, as yet unidentified, serotonylation interactions. Serotonylation of proteins other than small GTPases underlies the regulation of vascular smooth muscle "tone" in blood vessels, including the aorta. This may occur through serotonylation modifying proteins integral to contractility and the cytoskeleton, such as alpha-actin, beta-actin, gamma-actin, myosin heavy chain and filamin A. History According to some, serotonin was "named for its source (sero-) and ability to modify smooth muscle tone (tonin)", an effect that may be dependent (some controversy exists) upon serotonylation. The term serotonylation was coined in 2003 by Diego J. Walther and colleagues of the Max Planck Institute for Molecular Genetics in a paper in the journal Cell. References Biochemical reactions Cell signaling Serotonin Signal transduction
Serotonylation
[ "Chemistry", "Biology" ]
506
[ "Biochemistry", "Neurochemistry", "Biochemical reactions", "Signal transduction" ]
25,008,226
https://en.wikipedia.org/wiki/Multi-particle%20collision%20dynamics
Multi-particle collision dynamics (MPC), also known as stochastic rotation dynamics (SRD), is a particle-based mesoscale simulation technique for complex fluids which fully incorporates thermal fluctuations and hydrodynamic interactions. Coupling of embedded particles to the coarse-grained solvent is achieved through molecular dynamics. Method of simulation The solvent is modelled as a set of point particles of mass m with continuous coordinates r_i and velocities v_i. The simulation consists of streaming and collision steps. During the streaming step, the coordinates of the particles are updated according to r_i(t + Δt) = r_i(t) + v_i(t) Δt, where Δt is a chosen simulation time step which is typically much larger than a molecular dynamics time step. After the streaming step, interactions between the solvent particles are modelled in the collision step. The particles are sorted into collision cells with a lateral size a. Particle velocities within each cell are updated according to the collision rule v_i(t + Δt) = u + R(α)·[v_i(t) − u], where u is the centre-of-mass velocity of the particles in the collision cell and R(α) is a rotation matrix. In two dimensions, R performs a rotation by an angle +α or −α, each with probability 1/2. In three dimensions, the rotation is performed by an angle α around a random rotation axis. The same rotation is applied to all particles within a given collision cell, but the direction (axis) of rotation is statistically independent both between all cells and for a given cell in time. If the structure of the collision grid defined by the positions of the collision cells is fixed, Galilean invariance is violated. It is restored with the introduction of a random shift of the collision grid. Explicit expressions for the diffusion coefficient and viscosity derived from Green–Kubo relations are in excellent agreement with simulations. Simulation parameters The set of parameters for the simulation of the solvent are: solvent particle mass average number of solvent particles per collision box lateral collision box size stochastic rotation angle kT (energy) time step The simulation parameters define the solvent properties, such as the mean free path, the diffusion coefficient, the shear viscosity and the thermal diffusivity, whose expressions depend on the dimensionality d of the system. A typical choice for normalisation is to set the particle mass, the collision box size and kT to unity. To reproduce fluid-like behaviour, the remaining parameters (the time step, the rotation angle and the average number of particles per cell) may then be fixed appropriately. Applications MPC has become a notable tool in the simulations of many soft-matter systems, including colloid dynamics polymer dynamics vesicles active systems liquid crystals References Computational fluid dynamics
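A minimal two-dimensional sketch of one streaming-plus-collision step is given below (Python/NumPy). It follows the update rules described above under the simplifying assumptions of a periodic box, no random grid shift and no embedded solute, so it is an illustration of the algorithm rather than a production implementation; all parameter values are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, box, a, dt, alpha = 1000, 10.0, 1.0, 0.1, np.deg2rad(130)   # example values

pos = rng.uniform(0.0, box, size=(n, 2))
vel = rng.normal(0.0, 1.0, size=(n, 2))          # corresponds to kT/m = 1 in these units

def srd_step(pos, vel):
    # Streaming: ballistic motion for one time step with periodic boundaries
    pos = (pos + vel * dt) % box

    # Sort particles into square collision cells of side a
    cells = (pos // a).astype(int)
    cell_id = cells[:, 0] * int(box / a) + cells[:, 1]

    # Collision: rotate velocities relative to the cell centre-of-mass velocity
    # by +alpha or -alpha, chosen at random per cell (2D SRD rule)
    new_vel = vel.copy()
    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        u = vel[idx].mean(axis=0)                      # centre-of-mass velocity of the cell
        s = alpha if rng.random() < 0.5 else -alpha
        c, sn = np.cos(s), np.sin(s)
        R = np.array([[c, -sn], [sn, c]])
        new_vel[idx] = u + (vel[idx] - u) @ R.T
    return pos, new_vel

pos, vel = srd_step(pos, vel)
print(vel.mean(axis=0))   # total momentum is conserved by the collision rule
```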
Multi-particle collision dynamics
[ "Physics", "Chemistry" ]
466
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
25,008,405
https://en.wikipedia.org/wiki/Primer%20dimer
A primer dimer (PD) is a potential by-product in the polymerase chain reaction (PCR), a common biotechnological method. As its name implies, a PD consists of two primer molecules that have attached (hybridized) to each other because of strings of complementary bases in the primers. As a result, the DNA polymerase amplifies the PD, leading to competition for PCR reagents, thus potentially inhibiting amplification of the DNA sequence targeted for PCR amplification. In quantitative PCR, PDs may interfere with accurate quantification. Mechanism of formation A primer dimer is formed and amplified in three steps. In the first step, two primers anneal at their respective 3' ends (step I in the figure). If this construct is stable enough, the DNA polymerase will bind and extend the primers according to the complementary sequence (step II in the figure). An important factor contributing to the stability of the construct in step I is a high GC-content at the 3' ends and length of the overlap. The third step occurs in the next cycle, when a single strand of the product of step II is used as a template to which fresh primers anneal leading to synthesis of more PD product. Detection Primer dimers may be visible after gel electrophoresis of the PCR product. PDs in ethidium bromide-stained gels are typically seen as a 30-50 base-pair (bp) band or smear of moderate to high intensity and distinguishable from the band of the target sequence, which is typically longer than 50 bp. In quantitative PCR, PDs may be detected by melting curve analysis with intercalating dyes, such as SYBR Green I, a nonspecific dye for detection of double-stranded DNA. Because they usually consist of short sequences, the PDs denature at a lower temperature than the target sequence and hence can be distinguished by their melting-curve characteristics. Preventing primer-dimer formation One approach to prevent PDs consists of physical-chemical optimization of the PCR system, i.e. changing the concentrations of primers, magnesium chloride, nucleotides, ionic strength and temperature of the reaction. This method is somewhat limited by the physical-chemical characteristics that also determine the efficiency of amplification of the target sequence in the PCR. Therefore, reducing PDs formation may also result in reduced PCR efficiency. To overcome this limitation, other methods aim to reduce the formation of PDs only, including primer design, and use of different PCR enzyme systems or reagents. Primer-design software Primer-design software uses algorithms that check for the potential of DNA secondary structure formation and annealing of primers to itself or within primer pairs. Physical parameters that are taken into account by the software are potential self-complementarity and GC content of the primers; similar melting temperatures of the primers; and absence of secondary structures, such as stem-loops, in the DNA target sequence. Hot-start PCR Because primers are designed to have low complementarity to each other, they may anneal (step I in the figure) only at low temperature, e.g. room temperature, such as during the preparation of the reaction mixture. Although DNA polymerases used in PCR are most active around 70 °C, they have some polymerizing activity also at lower temperatures, which can cause DNA synthesis from primers after annealing to each other. 
Several methods have been developed to prevent PD formation until the reaction reaches the working temperature of 60-70 °C, and these include initial inhibition of the DNA polymerase, or physical separation of reaction components until the reaction mixture reaches the higher temperatures. These methods are referred to as hot-start PCR. Wax: in this method the enzyme is spatially separated from the reaction mixture by wax that melts when the reaction reaches high temperature. Slow release of magnesium: DNA polymerase requires magnesium ions for activity, so the magnesium is chemically separated from the reaction by binding to a chemical compound, and is released into the solution only at high temperature. Non-covalent binding of inhibitor: in this method a peptide, antibody or aptamer is non-covalently bound to the enzyme at low temperature and inhibits its activity. After an incubation of 1–5 minutes at 95 °C, the inhibitor is released and the reaction starts. Cold-sensitive Taq polymerase: a modified DNA polymerase with almost no activity at low temperature. Chemical modification: in this method a small molecule is covalently bound to the side chain of an amino acid in the active site of the DNA polymerase. The small molecule is released from the enzyme by incubation of the reaction mixture for 10–15 minutes at 95 °C. Once the small molecule is released, the enzyme is activated. Structural modifications of primers Another approach to prevent or reduce PD formation is by modifying the primers so that annealing with themselves or each other does not cause extension. HANDS (Homo-Tag Assisted Non-Dimer System): a nucleotide tail, complementary to the 3' end of the primer, is added to the 5' end of the primer. Because of the close proximity of the 5' tail, it anneals to the 3' end of the primer. The result is a stem-loop primer that excludes annealing involving shorter overlaps, but permits annealing of the primer to its fully complementary sequence in the target. Chimeric primers: some DNA bases in the primer are replaced with RNA bases, creating a chimeric sequence. The melting temperature of a chimeric sequence with another chimeric sequence is lower than that of a chimeric sequence with DNA. This difference enables setting the annealing temperature such that the primer will anneal to its target sequence, but not to other chimeric primers. Blocked-cleavable primers: a method known as RNase H-dependent PCR (rhPCR) utilizes a thermostable RNase HII to remove a blocking group from the PCR primers at high temperature. This RNase HII enzyme displays almost no activity at low temperature, so removal of the block only occurs at high temperature. The enzyme also possesses inherent primer:template mismatch discrimination, resulting in additional selection against primer-dimers. Self-avoiding molecular recognition systems: also known as SAMRS, these eliminate primer dimers by introducing nucleotide analogues T*, A*, G* and C* into the primer. The SAMRS DNA can bind to natural DNA, but not to other members of the same SAMRS species. For example, T* can bind to A but not A*, and A* can bind to T but not T*. Thus, through careful design, primers built from SAMRS can avoid primer-primer interactions, allowing sensitive SNP detection as well as multiplex PCR. Preventing signal acquisition from primer dimers While the methods above are designed to reduce PD formation, another approach aims to minimize signal generated from PDs in quantitative PCR.
This approach is useful as long as there are few PDs formed and their inhibitory effect on product accumulation is minor. Four steps PCR: used when working with nonspecific dyes, such as SYBR Green I. It is based on the different length, and hence, different melting temperature of the PDs and the target sequence. In this method the signal is acquired below the melting temperature of the target sequence, but above the melting temperature of the PDs. Sequence-specific probes: TaqMan and molecular beacon probes generate signal only in the presence of their target (complementary) sequence, and this enhanced specificity precludes signal acquisition (but not possible inhibitory effects on product accumulation) from PDs. References External links Molecular biology Biotechnology Polymerase chain reaction
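As a rough sketch of the kind of check performed by the primer-design software mentioned above, the following Python snippet scores the 3'-end complementarity between two primers and reports their GC content. The primer sequences and the warning threshold are made-up examples for illustration, not values or code from any published tool.

```python
# Hypothetical primers used only for illustration
FWD = "AGGTCACGTTGACCTACAGC"
REV = "TTCAGGCCGTAGGACTGCTG"

COMP = str.maketrans("ACGT", "TGCA")

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def three_prime_overlap(p1, p2, max_len=8):
    """Longest stretch at the 3' end of p1 that can pair (antiparallel) with the
    3' end of p2 -- the geometry that seeds a primer dimer (step I in the figure)."""
    p2_rc = p2.translate(COMP)[::-1]          # reverse complement of p2
    best = 0
    for k in range(1, max_len + 1):
        if p1[-k:] == p2_rc[:k]:              # p1 3' tail pairs with p2 3' tail
            best = k
    return best

for name, p in (("forward", FWD), ("reverse", REV)):
    print(f"{name}: GC = {gc_content(p):.0%}")

overlap = three_prime_overlap(FWD, REV)
print(f"3'-end complementary overlap: {overlap} bases")
if overlap >= 4:                               # arbitrary illustrative threshold
    print("warning: this primer pair may form a primer dimer")
```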
Primer dimer
[ "Chemistry", "Biology" ]
1,641
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Biotechnology", "nan", "Molecular biology", "Biochemistry" ]
25,010,958
https://en.wikipedia.org/wiki/Dinitrotoluene
Dinitrotoluenes could refer to one of the following compounds: 2,3-Dinitrotoluene 2,4-Dinitrotoluene 2,5-Dinitrotoluene 2,6-Dinitrotoluene 3,4-Dinitrotoluene 3,5-Dinitrotoluene External links Explosive chemicals Nitrotoluenes
Dinitrotoluene
[ "Chemistry" ]
86
[ "Explosive chemicals" ]
26,412,019
https://en.wikipedia.org/wiki/Physics%20of%20magnetic%20resonance%20imaging
Magnetic resonance imaging (MRI) is a medical imaging technique mostly used in radiology and nuclear medicine in order to investigate the anatomy and physiology of the body, and to detect pathologies including tumors, inflammation, neurological conditions such as stroke, disorders of muscles and joints, and abnormalities in the heart and blood vessels among other things. Contrast agents may be injected intravenously or into a joint to enhance the image and facilitate diagnosis. Unlike CT and X-ray, MRI uses no ionizing radiation and is, therefore, a safe procedure suitable for diagnosis in children and repeated runs. Patients with specific non-ferromagnetic metal implants, cochlear implants, and cardiac pacemakers nowadays may also have an MRI in spite of effects of the strong magnetic fields. This does not apply on older devices, and details for medical professionals are provided by the device's manufacturer. Certain atomic nuclei are able to absorb and emit radio frequency energy when placed in an external magnetic field. In clinical and research MRI, hydrogen atoms are most often used to generate a detectable radio-frequency signal that is received by antennas close to the anatomy being examined. Hydrogen atoms are naturally abundant in people and other biological organisms, particularly in water and fat. For this reason, most MRI scans essentially map the location of water and fat in the body. Pulses of radio waves excite the nuclear spin energy transition, and magnetic field gradients localize the signal in space. By varying the parameters of the pulse sequence, different contrasts may be generated between tissues based on the relaxation properties of the hydrogen atoms therein. When inside the magnetic field (B0) of the scanner, the magnetic moments of the protons align to be either parallel or anti-parallel to the direction of the field. While each individual proton can only have one of two alignments, the collection of protons appear to behave as though they can have any alignment. Most protons align parallel to B0 as this is a lower energy state. A radio frequency pulse is then applied, which can excite protons from parallel to anti-parallel alignment, only the latter are relevant to the rest of the discussion. In response to the force bringing them back to their equilibrium orientation, the protons undergo a rotating motion (precession), much like a spun wheel under the effect of gravity. The protons will return to the low energy state by the process of spin-lattice relaxation. This appears as a magnetic flux, which yields a changing voltage in the receiver coils to give a signal. The frequency at which a proton or group of protons in a voxel resonates depends on the strength of the local magnetic field around the proton or group of protons, a stronger field corresponds to a larger energy difference and higher frequency photons. By applying additional magnetic fields (gradients) that vary linearly over space, specific slices to be imaged can be selected, and an image is obtained by taking the 2-D Fourier transform of the spatial frequencies of the signal (k-space). Due to the magnetic Lorentz force from B0 on the current flowing in the gradient coils, the gradient coils will try to move producing loud knocking sounds, for which patients require hearing protection. History The MRI scanner was developed from 1975 to 1977 at the University of Nottingham by Prof Raymond Andrew FRS FRSE following from his research into nuclear magnetic resonance. 
The full body scanner was created in 1978. Nuclear magnetism Subatomic particles have the quantum mechanical property of spin. Certain nuclei such as 1H (protons), 2H, 3He, 23Na or 31P, have a non-zero spin and therefore a magnetic moment. In the case of the so-called spin-1/2 nuclei, such as 1H, there are two spin states, sometimes referred to as up and down. Nuclei such as 12C have no unpaired neutrons or protons, and no net spin; however, the isotope 13C does. When these spins are placed in a strong external magnetic field they precess around an axis along the direction of the field. Protons align in two energy eigenstates (the Zeeman effect): one low-energy and one high-energy, which are separated by a very small splitting energy. Resonance and relaxation Quantum mechanics is required to accurately model the behaviour of a single proton. However, classical mechanics can be used to describe the behaviour of an ensemble of protons adequately. As with other spin-1/2 particles, whenever the spin of a single proton is measured it can only have one of two results, commonly called parallel and anti-parallel. When we discuss the state of a proton or protons we are referring to the wave function of that proton, which is a linear combination of the parallel and anti-parallel states. In the presence of the magnetic field, B0, the protons will appear to precess at the Larmor frequency determined by the particle's gyro-magnetic ratio and the strength of the field. The static fields used most commonly in MRI cause precession which corresponds to a radiofrequency (RF) photon. The net longitudinal magnetization in thermodynamic equilibrium is due to a tiny excess of protons in the lower energy state. This gives a net polarization that is parallel to the external field. Application of an RF pulse can tip this net polarization vector sideways (with, for example, a so-called 90° pulse), or even reverse it (with a so-called 180° pulse). The protons will come into phase with the RF pulse and therefore with each other. The recovery of longitudinal magnetization is called longitudinal or T1 relaxation and occurs exponentially with a time constant T1. The loss of phase coherence in the transverse plane is called transverse or T2 relaxation. T1 is thus associated with the enthalpy of the spin system, or the number of nuclei with parallel versus anti-parallel spin. T2 on the other hand is associated with the entropy of the system, or the number of nuclei in phase. When the radio frequency pulse is turned off, the transverse vector component produces an oscillating magnetic field which induces a small current in the receiver coil. This signal is called the free induction decay (FID). In an idealized nuclear magnetic resonance experiment, the FID decays approximately exponentially with a time constant T2. However, in practical MRI there are small differences in the static magnetic field at different spatial locations ("inhomogeneities") that cause the Larmor frequency to vary across the body. This creates destructive interference, which shortens the FID. The time constant for the observed decay of the FID is called the T2* relaxation time, and is always shorter than T2. At the same time, the longitudinal magnetization starts to recover exponentially with a time constant T1 which is much larger than T2 (see below). In MRI, the static magnetic field is augmented by a field gradient coil to vary across the scanned region, so that different spatial locations become associated with different precession frequencies.
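The exponential recovery and decay described above can be written out explicitly as Mz(t) = M0·(1 − e^(−t/T1)) for longitudinal recovery after a 90° pulse and Mxy(t) = Mxy(0)·e^(−t/T2) for transverse decay. The short Python sketch below evaluates these standard mono-exponential expressions for illustrative, tissue-like time constants; the numbers are round example values, not measurements from any particular tissue or scanner.

```python
import numpy as np

M0 = 1.0                   # equilibrium longitudinal magnetization (arbitrary units)
T1, T2 = 1000.0, 80.0      # example relaxation times in ms (round, tissue-like values)

t = np.linspace(0.0, 3000.0, 7)          # sample times in ms

Mz = M0 * (1.0 - np.exp(-t / T1))        # longitudinal recovery after a 90-degree pulse
Mxy = M0 * np.exp(-t / T2)               # transverse decay (ignoring T2* inhomogeneity effects)

for ti, mz, mxy in zip(t, Mz, Mxy):
    print(f"t = {ti:6.0f} ms   Mz = {mz:.3f}   Mxy = {mxy:.3f}")
```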
Only those regions where the field is such that the precession frequencies match the RF frequency will experience excitation. Usually, these field gradients are modulated to sweep across the region to be scanned, and it is the almost infinite variety of RF and gradient pulse sequences that gives MRI its versatility. Change of field gradient spreads the responding FID signal in the frequency domain, but this can be recovered and measured by a refocusing gradient (to create a so-called "gradient echo"), or by a radio frequency pulse (to create a so-called "spin-echo"), or in digital post-processing of the spread signal. The whole process can be repeated when some T1-relaxation has occurred and the thermal equilibrium of the spins has been more or less restored. The repetition time (TR) is the time between two successive excitations of the same slice. Typically, in soft tissues T1 is around one second while T2 and T are a few tens of milliseconds. However, these values can vary widely between different tissues, as well as between different external magnetic fields. This behavior is one factor giving MRI its tremendous soft tissue contrast. MRI contrast agents, such as those containing Gadolinium(III) work by altering (shortening) the relaxation parameters, especially T1. Imaging Imaging schemes A number of schemes have been devised for combining field gradients and radio frequency excitation to create an image: 2D or 3D reconstruction from projections, such as in computed tomography. Building the image point-by-point or line-by-line. Gradients in the RF field rather than the static field. Although each of these schemes is occasionally used in specialist applications, the majority of MR Images today are created either by the two-dimensional Fourier transform (2DFT) technique with slice selection, or by the three-dimensional Fourier transform (3DFT) technique. Another name for 2DFT is spin-warp. What follows here is a description of the 2DFT technique with slice selection. The 3DFT technique is rather similar except that there is no slice selection and phase-encoding is performed in two separate directions. Echo-planar imaging Another scheme which is sometimes used, especially in brain scanning or where images are needed very rapidly, is called echo-planar imaging (EPI): In this case, each RF excitation is followed by a train of gradient echoes with different spatial encoding. Multiplexed-EPI is even faster, e.g., for whole brain functional MRI (fMRI) or diffusion MRI. Image contrast and contrast enhancement Image contrast is created by differences in the strength of the NMR signal recovered from different locations within the sample. This depends upon the relative density of excited nuclei (usually water protons), on differences in relaxation times (T1, T2, and T) of those nuclei after the pulse sequence, and often on other parameters discussed under specialized MR scans. Contrast in most MR images is actually a mixture of all these effects, but careful design of the imaging pulse sequence allows one contrast mechanism to be emphasized while the others are minimized. The ability to choose different contrast mechanisms gives MRI tremendous flexibility. In the brain, T1-weighting causes the nerve connections of white matter to appear white, and the congregations of neurons of gray matter to appear gray, while cerebrospinal fluid (CSF) appears dark. 
The contrast of white matter, gray matter and cerebrospinal fluid is reversed using T2 or T imaging, whereas proton-density-weighted imaging provides little contrast in healthy subjects. Additionally, functional parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV) or blood oxygenation can affect T1, T2, and T and so can be encoded with suitable pulse sequences. In some situations it is not possible to generate enough image contrast to adequately show the anatomy or pathology of interest by adjusting the imaging parameters alone, in which case a contrast agent may be administered. This can be as simple as water, taken orally, for imaging the stomach and small bowel. However, most contrast agents used in MRI are selected for their specific magnetic properties. Most commonly, a paramagnetic contrast agent (usually a gadolinium compound) is given. Gadolinium-enhanced tissues and fluids appear extremely bright on T1-weighted images. This provides high sensitivity for detection of vascular tissues (e.g., tumors) and permits assessment of brain perfusion (e.g., in stroke). There have been concerns raised recently regarding the toxicity of gadolinium-based contrast agents and their impact on persons with impaired kidney function. (See Safety/Contrast agents below.) More recently, superparamagnetic contrast agents, e.g., iron oxide nanoparticles, have become available. These agents appear very dark on T-weighted images and may be used for liver imaging, as normal liver tissue retains the agent, but abnormal areas (e.g., scars, tumors) do not. They can also be taken orally, to improve visualization of the gastrointestinal tract, and to prevent water in the gastrointestinal tract from obscuring other organs (e.g., the pancreas). Diamagnetic agents such as barium sulfate have also been studied for potential use in the gastrointestinal tract, but are less frequently used. k-space In 1983, Ljunggren and Twieg independently introduced the k-space formalism, a technique that proved invaluable in unifying different MR imaging techniques. They showed that the demodulated MR signal S(t) generated by the interaction between an ensemble of freely precessing nuclear spins in the presence of a linear magnetic field gradient G and a receiver-coil equals the Fourier transform of the effective spin density, . Fundamentally, the signal is derived from Faraday's law of induction: where: In other words, as time progresses the signal traces out a trajectory in k-space with the velocity vector of the trajectory proportional to the vector of the applied magnetic field gradient. By the term effective spin density we mean the true spin density corrected for the effects of T1 preparation, T2 decay, dephasing due to field inhomogeneity, flow, diffusion, etc. and any other phenomena that affect that amount of transverse magnetization available to induce signal in the RF probe or its phase with respect to the receiving coil' s electromagnetic field. From the basic k-space formula, it follows immediately that we reconstruct an image by taking the inverse Fourier transform of the sampled data, viz. Using the k-space formalism, a number of seemingly complex ideas became simple. For example, it becomes very easy (for physicists, in particular) to understand the role of phase encoding (the so-called spin-warp method). In a standard spin echo or gradient echo scan, where the readout (or view) gradient is constant (e.g., G), a single line of k-space is scanned per RF excitation. 
When the phase encoding gradient is zero, the line scanned is the kx axis. When a non-zero phase-encoding pulse is added in between the RF excitation and the commencement of the readout gradient, this line moves up or down in k-space, i.e., we scan the line ky = constant. The k-space formalism also makes it very easy to compare different scanning techniques. In single-shot EPI, all of k-space is scanned in a single shot, following either a sinusoidal or zig-zag trajectory. Since alternating lines of k-space are scanned in opposite directions, this must be taken into account in the reconstruction. Multi-shot EPI and fast spin echo techniques acquire only part of k-space per excitation. In each shot, a different interleaved segment is acquired, and the shots are repeated until k-space is sufficiently well-covered. Since the data at the center of k-space represent lower spatial frequencies than the data at the edges of k-space, the TE value for the center of k-space determines the image's T2 contrast. The importance of the center of k-space in determining image contrast can be exploited in more advanced imaging techniques. One such technique is spiral acquisition—a rotating magnetic field gradient is applied, causing the trajectory in k-space to spiral out from the center to the edge. Due to T2 and T decay the signal is greatest at the start of the acquisition, hence acquiring the center of k-space first improves contrast to noise ratio (CNR) when compared to conventional zig-zag acquisitions, especially in the presence of rapid movement. Since and are conjugate variables (with respect to the Fourier transform) we can use the Nyquist theorem to show that a step in k-space determines the field of view of the image (maximum frequency that is correctly sampled) and the maximum value of k sampled determines the resolution; i.e., (These relationships apply to each axis independently.) Example of a pulse sequence In the timing diagram, the horizontal axis represents time. The vertical axis represents: (top row) amplitude of radio frequency pulses; (middle rows) amplitudes of the three orthogonal magnetic field gradient pulses; and (bottom row) receiver analog-to-digital converter (ADC). Radio frequencies are transmitted at the Larmor frequency of the nuclide to be imaged. For example, for 1H in a magnetic field of 1 T, a frequency of 42.5781 MHz would be employed. The three field gradients are labeled GX (typically corresponding to a patient's left-to-right direction and colored red in diagram), GY (typically corresponding to a patient's front-to-back direction and colored green in diagram), and GZ (typically corresponding to a patient's head-to-toe direction and colored blue in diagram). Where negative-going gradient pulses are shown, they represent reversal of the gradient direction, i.e., right-to-left, back-to-front or toe-to-head. For human scanning, gradient strengths of 1–100 mT/m are employed: Higher gradient strengths permit better resolution and faster imaging. The pulse sequence shown here would produce a transverse (axial) image. The first part of the pulse sequence, SS, achieves "slice selection". A shaped pulse (shown here with a sinc modulation) causes a 90° nutation of longitudinal nuclear magnetization within a slab, or slice, creating transverse magnetization. The second part of the pulse sequence, PE, imparts a phase shift upon the slice-selected nuclear magnetization, varying with its location in the Y direction. 
The third part of the pulse sequence, another slice selection (of the same slice) uses another shaped pulse to cause a 180° rotation of transverse nuclear magnetization within the slice. This transverse magnetisation refocuses to form a spin echo at a time TE. During the spin echo, a frequency-encoding (FE) or readout gradient is applied, making the resonant frequency of the nuclear magnetization vary with its location in the X direction. The signal is sampled nFE times by the ADC during this period, as represented by the vertical lines. Typically nFE of between 128 and 512 samples are taken. The longitudinal magnetisation is then allowed to recover somewhat and after a time TR the whole sequence is repeated nPE times, but with the phase-encoding gradient incremented (indicated by the horizontal hatching in the green gradient block). Typically nPE of between 128 and 512 repetitions are made. The negative-going lobes in GX and GZ are imposed to ensure that, at time TE (the spin echo maximum), phase only encodes spatial location in the Y direction. Typically TE is between 5 ms and 100 ms, while TR is between 100 ms and 2000 ms. After the two-dimensional matrix (typical dimension between 128 × 128 and 512 × 512) has been acquired, producing the so-called k-space data, a two-dimensional inverse Fourier transform is performed to provide the familiar MR image. Either the magnitude or phase of the Fourier transform can be taken, the former being far more common. Overview of main sequences MRI scanner Construction and operation The major components of an MRI scanner are: the main magnet, which polarizes the sample, the shim coils for correcting inhomogeneities in the main magnetic field, the gradient system which is used to localize the MR signal and the RF system, which excites the sample and detects the resulting NMR signal. The whole system is controlled by one or more computers. Magnet The magnet is the largest and most expensive component of the scanner, and the remainder of the scanner is built around it. The strength of the magnet is measured in teslas (T). Clinical magnets generally have a field strength in the range 0.1–3.0 T, with research systems available up to 9.4 T for human use and 21 T for animal systems. In the United States, field strengths up to 7 T have been approved by the FDA for clinical use. Just as important as the strength of the main magnet is its precision. The straightness of the magnetic lines within the center (or, as it is technically known, the iso-center) of the magnet needs to be near-perfect. This is known as homogeneity. Fluctuations (inhomogeneities in the field strength) within the scan region should be less than three parts per million (3 ppm). Three types of magnets have been used: Permanent magnet: Conventional magnets made from ferromagnetic materials (e.g., steel alloys containing rare-earth elements such as neodymium) can be used to provide the static magnetic field. A permanent magnet that is powerful enough to be used in an MRI will be extremely large and bulky; they can weigh over 100 tonnes. Permanent magnet MRIs are very inexpensive to maintain; this cannot be said of the other types of MRI magnets, but there are significant drawbacks to using permanent magnets. They are only capable of achieving weak field strengths compared to other MRI magnets (usually less than 0.4 T) and they are of limited precision and stability. 
Permanent magnets also present special safety issues; since their magnetic fields cannot be "turned off," ferromagnetic objects are virtually impossible to remove from them once they come into direct contact. Permanent magnets also require special care when they are being brought to their site of installation. Resistive electromagnet: A solenoid wound from copper wire is an alternative to a permanent magnet. An advantage is low initial cost, but field strength and stability are limited. The electromagnet requires considerable electrical energy during operation which can make it expensive to operate. This design is essentially obsolete. Superconducting electromagnet: When a niobium-titanium or niobium-tin alloy is cooled by liquid helium to 4 K (−269 °C, −452 °F) it becomes a superconductor, losing resistance to flow of electric current. An electromagnet constructed with superconductors can have extremely high field strengths, with very high stability. The construction of such magnets is extremely costly, and the cryogenic helium is expensive and difficult to handle. However, despite their cost, helium cooled superconducting magnets are the most common type found in MRI scanners today. Most superconducting magnets have their coils of superconductive wire immersed in liquid helium, inside a vessel called a cryostat. Despite thermal insulation, sometimes including a second cryostat containing liquid nitrogen, ambient heat causes the helium to slowly boil off. Such magnets, therefore, require regular topping-up with liquid helium. Generally a cryocooler, also known as a coldhead, is used to recondense some helium vapor back into the liquid helium bath. Several manufacturers now offer 'cryogenless' scanners, where instead of being immersed in liquid helium the magnet wire is cooled directly by a cryocooler. Alternatively, the magnet may be cooled by carefully placing liquid helium in strategic spots, dramatically reducing the amount of liquid helium used, or, high temperature superconductors may be used instead. Magnets are available in a variety of shapes. However, permanent magnets are most frequently C-shaped, and superconducting magnets most frequently cylindrical. C-shaped superconducting magnets and box-shaped permanent magnets have also been used. Magnetic field strength is an important factor in determining image quality. Higher magnetic fields increase signal-to-noise ratio, permitting higher resolution or faster scanning. However, higher field strengths require more costly magnets with higher maintenance costs, and have increased safety concerns. A field strength of 1.0–1.5 T is a good compromise between cost and performance for general medical use. However, for certain specialist uses (e.g., brain imaging) higher field strengths are desirable, with some hospitals now using 3.0 T scanners. Shims When the MR scanner is placed in the hospital or clinic, its main magnetic field is far from being homogeneous enough to be used for scanning. That is why before doing fine tuning of the field using a sample, the magnetic field of the magnet must be measured and shimmed. After a sample is placed into the scanner, the main magnetic field is distorted by susceptibility boundaries within that sample, causing signal dropout (regions showing no signal) and spatial distortions in acquired images. 
For humans or animals the effect is particularly pronounced at air-tissue boundaries such as the sinuses (due to paramagnetic oxygen in air) making, for example, the frontal lobes of the brain difficult to image. To restore field homogeneity a set of shim coils is included in the scanner. These are resistive coils, usually at room temperature, capable of producing field corrections distributed as several orders of spherical harmonics. After placing the sample in the scanner, the B0 field is 'shimmed' by adjusting currents in the shim coils. Field homogeneity is measured by examining an FID signal in the absence of field gradients. The FID from a poorly shimmed sample will show a complex decay envelope, often with many humps. Shim currents are then adjusted to produce a large amplitude exponentially decaying FID, indicating a homogeneous B0 field. The process is usually automated. Gradients Gradient coils are used to spatially encode the positions of protons by varying the magnetic field linearly across the imaging volume. The Larmor frequency will then vary as a function of position in the x, y and z-axes. Gradient coils are usually resistive electromagnets powered by sophisticated amplifiers which permit rapid and precise adjustments to their field strength and direction. Typical gradient systems are capable of producing gradients from 20 to 100 mT/m (i.e., in a 1.5 T magnet, when a maximal z-axis gradient is applied, the field strength may be 1.45 T at one end of a 1 m long bore and 1.55 T at the other). It is the magnetic gradients that determine the plane of imaging—because the orthogonal gradients can be combined freely, any plane can be selected for imaging. Scan speed is dependent on performance of the gradient system. Stronger gradients allow for faster imaging, or for higher resolution; similarly, gradient systems capable of faster switching can also permit faster scanning. However, gradient performance is limited by safety concerns over nerve stimulation. Some important characteristics of gradient amplifiers and gradient coils are slew rate and gradient strength. As mentioned earlier, a gradient coil will create an additional, linearly varying magnetic field that adds or subtracts from the main magnetic field. This additional magnetic field will have components in all 3 directions, viz. x, y and z; however, only the component along the magnetic field (usually called the z-axis, hence denoted Gz) is useful for imaging. Along any given axis, the gradient will add to the magnetic field on one side of the zero position and subtract from it on the other side. Since the additional field is a gradient, it has units of gauss per centimeter or millitesla per meter (mT/m). High performance gradient coils used in MRI are typically capable of producing a gradient magnetic field of approximate 30 mT/m or higher for a 1.5 T MRI. The slew rate of a gradient system is a measure of how quickly the gradients can be ramped on or off. Typical higher performance gradients have a slew rate of up to 100–200 T·m−1·s−1. The slew rate depends both on the gradient coil (it takes more time to ramp up or down a large coil than a small coil) and on the performance of the gradient amplifier (it takes a lot of voltage to overcome the inductance of the coil) and has significant influence on image quality. Radio frequency system The radio frequency (RF) transmission system consists of an RF synthesizer, power amplifier and transmitting coil. That coil is usually built into the body of the scanner. 
The power of the transmitter is variable, but high-end whole-body scanners may have a peak output power of up to 35 kW, and be capable of sustaining average power of 1 kW. Although these electromagnetic fields are in the RF range of tens of megahertz (often in the shortwave radio portion of the electromagnetic spectrum) at powers usually exceeding the highest powers used by amateur radio, there is very little RF interference produced by the MRI machine. The reason for this is that the MRI is not a radio transmitter. The RF frequency electromagnetic field produced in the "transmitting coil" is a magnetic near-field with very little associated changing electric field component (such as all conventional radio wave transmissions have). Thus, the high-powered electromagnetic field produced in the MRI transmitter coil does not produce much electromagnetic radiation at its RF frequency, and the power is confined to the coil space and not radiated as "radio waves." Thus, the transmitting coil is a good EM field transmitter at radio frequency, but a poor EM radiation transmitter at radio frequency. The receiver consists of the coil, pre-amplifier and signal processing system. The RF electromagnetic radiation produced by nuclear relaxation inside the subject is true EM radiation (radio waves), and these leave the subject as RF radiation, but they are of such low power as to also not cause appreciable RF interference that can be picked up by nearby radio tuners (in addition, MRI scanners are generally situated in metal mesh lined rooms which act as Faraday cages.) While it is possible to scan using the integrated coil for RF transmission and MR signal reception, if a small region is being imaged, then better image quality (i.e., higher signal-to-noise ratio) is obtained by using a close-fitting smaller coil. A variety of coils are available which fit closely around parts of the body such as the head, knee, wrist, breast, or internally, e.g., the rectum. A recent development in MRI technology has been the development of sophisticated multi-element phased array coils which are capable of acquiring multiple channels of data in parallel. This 'parallel imaging' technique uses unique acquisition schemes that allow for accelerated imaging, by replacing some of the spatial coding originating from the magnetic gradients with the spatial sensitivity of the different coil elements. However, the increased acceleration also reduces the signal-to-noise ratio and can create residual artifacts in the image reconstruction. Two frequently used parallel acquisition and reconstruction schemes are known as SENSE and GRAPPA. A detailed review of parallel imaging techniques can be found here: References Further reading Magnetic resonance imaging
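The statement in the k-space section above, that the image is recovered by an inverse Fourier transform of the sampled k-space data, can be illustrated with a toy example. The Python sketch below builds a synthetic "object", Fourier-transforms it to stand in for fully sampled Cartesian k-space, and reconstructs the magnitude image by the inverse 2D FFT; it then keeps only the centre of k-space to show the loss of resolution predicted by the Nyquist relations. This is a schematic of the mathematics only, not of a real acquisition or of any scanner's reconstruction pipeline.

```python
import numpy as np

# Synthetic "anatomy": a bright rectangle on a dark background
img = np.zeros((128, 128))
img[48:80, 40:88] = 1.0

# Stand-in for fully sampled Cartesian k-space (forward 2D FFT of the object)
kspace = np.fft.fftshift(np.fft.fft2(img))

# Reconstruction: inverse 2D FFT of k-space, magnitude image
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
print("max reconstruction error:", np.max(np.abs(recon - img)))   # ~1e-15

# Keep only the central 32x32 of k-space (low spatial frequencies):
# the field of view is unchanged, but resolution drops and edges blur.
lowres = np.zeros_like(kspace)
lowres[48:80, 48:80] = kspace[48:80, 48:80]
recon_lowres = np.abs(np.fft.ifft2(np.fft.ifftshift(lowres)))
print("edge pixel after low-pass:", recon_lowres[48, 40])   # no longer a sharp 0/1 step
```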
Physics of magnetic resonance imaging
[ "Chemistry" ]
6,392
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
26,415,811
https://en.wikipedia.org/wiki/COLD-PCR
COLD-PCR (co-amplification at lower denaturation temperature PCR) is a modified polymerase chain reaction (PCR) protocol that enriches variant alleles from a mixture of wildtype and mutation-containing DNA. The ability to preferentially amplify and identify minority alleles and low-level somatic DNA mutations in the presence of excess wildtype alleles is useful for the detection of mutations. Detection of mutations is important in the case of early cancer detection from tissue biopsies and body fluids such as blood plasma or serum, assessment of residual disease after surgery or chemotherapy, disease staging and molecular profiling for prognosis or tailoring therapy to individual patients, and monitoring of therapy outcome and cancer remission or relapse. Common PCR will amplify both the major (wildtype) and minor (mutant) alleles with the same efficiency, obscuring the ability to easily detect the presence of low-level mutations. The capacity to detect a mutation in a mixture of variant/wildtype DNA is valuable because this mixture of variant DNAs can occur when provided with a heterogeneous sample – as is often the case with cancer biopsies. Currently, traditional PCR is used in tandem with a number of different downstream assays for genotyping or the detection of somatic mutations. These can include the use of amplified DNA for RFLP analysis, MALDI-TOF (matrix-assisted laser-desorption–time-of-flight) genotyping, or direct sequencing for detection of mutations by Sanger sequencing or pyrosequencing. Replacing traditional PCR with COLD-PCR for these downstream assays will increase the reliability in detecting mutations from mixed samples, including tumors and body fluids. Method overviews The underlying principle of COLD-PCR is that single nucleotide mismatches will slightly alter the melting temperature (Tm) of the double-stranded DNA. Depending on the sequence context and position of the mismatch, Tm changes of 0.2–1.5 °C (0.36–2.7 °F) are common for sequences up to 200 bp and longer. Knowing this, the authors of the protocol took advantage of two observations: Each double-stranded DNA has a 'critical temperature' (Tc) lower than its Tm. The PCR amplification efficiency drops measurably when the denaturation temperature is set below the Tc. The Tc is dependent on DNA sequence. Two template DNA fragments differing by only one or two nucleotide mismatches will have different amplification efficiencies if the denaturation step of PCR is set to the Tc. Keeping these principles in mind, the authors developed the following general protocol: Denaturation stage. DNA is denatured at a high temperature, as in conventional PCR. Intermediate annealing stage. Set an intermediate annealing temperature that allows hybridization of mutant and wildtype allele DNA to one another. Because the mutant allele DNA forms the minority of the DNA in the mixture, it is more likely to form mismatched heteroduplex DNA with the wildtype DNA. Melting stage. These heteroduplexes will more readily melt at lower temperatures. Hence they are selectively denatured at the Tc. Primer annealing stage. The homo-duplex DNA will preferentially remain double stranded and not be available for primer annealing. Extension stage. The DNA polymerase will extend complementary to the template DNA. Since the heteroduplex DNA is used as template, a larger proportion of minor variant DNA will be amplified and be available for subsequent rounds of PCR. Two forms of COLD-PCR have been developed to date: full COLD-PCR and fast COLD-PCR.
Full Full COLD-PCR is identical to the protocol outlined above. These five stages are used for each round of amplification. Fast Fast COLD-PCR differs from Full COLD-PCR in that the denaturation and intermediate annealing stages are skipped. This is because, in some cases, the preferential amplification of the mutant DNA is so great that ensuring the formation of the mutant/wildtype heteroduplex DNA is not needed. Thus the denaturation can occur at the Tc, proceed to primer annealing, and then polymerase-mediated extension. Each round of amplification will include these three stages in that order. By utilizing the lower denaturation temperature, the reaction will discriminate toward the products with the lower Tm – i.e. the variant alleles. Fast COLD-PCR produces much faster results due to the shortened protocol, while Full COLD-PCR is essential for amplification of all possible mutations in the starting mixture of DNA. Two-round COLD-PCR is a modified version of Fast COLD-PCR. During the second round of Fast COLD-PCR nested primers are used. This improves the sensitivity of mutation detection compared to one-round Fast COLD-PCR. Uses COLD-PCR has been used to improve the reliability of a number of different assays that traditionally use conventional PCR. RFLP A restriction fragment length polymorphism results in the cleavage (or absence thereof) of DNA for a specific mutation by a selected restriction enzyme that will not cleave the wildtype DNA. In a study using a mixture of wildtype and mutation containing DNA amplified by regular PCR or COLD-PCR, COLD-PCR preceding RFLP analysis was shown to improve the mutation detection by 10-20 fold. Sanger sequencing Sanger sequencing recently was used to evaluate the enrichment of mutant DNA from a mixture of 1:20 mutant:wildtype DNA. The variant DNA containing a mutation was obtained from a breast cancer cell line known to contain p53 mutations. Comparison of Sanger sequencing chromatograms indicated that the mutant allele was enriched 13 fold when COLD-PCR was used compared to traditional PCR alone. This was determined by the size of the peaks on the chromatogram at the variant allele location. As well, COLD-PCR was used to detect p53 mutations from lung-adenocarcinoma samples. The study was able to detect 8 low level (under 20% abundance) mutations that would likely have been missed using conventional methods that don't enrich for variant sequence DNA. Pyrosequencing Similar to its use in direct Sanger sequencing, with pyrosequencing COLD-PCR was shown to be capable of detecting mutations that had a prevalence 0.5–1% from the samples used. COLD-PCR was used to detect p53 and KRAS mutations by pyrosequencing, and was shown to outperform conventional PCR in both cases. MALDI-TOF The same research group that developed COLD-PCR and used it to compare the sensitivity of regular PCR for genotyping with direct Sanger sequencing, RFLP, and pyrosequencing, also ran a similar study using MALDI-TOF as a downstream application for detecting mutations. Their results indicated that COLD-PCR could enrich mutation sequences from a mixture of DNA by 10–100 fold and that mutations with an initial prevalence of 0.1–0.5% would be detectable. Compared to the 5–10% low-level detection rate expected with traditional PCR. QPCR COLD-PCR run on a quantitative PCR machine, using TaqMan probes specific for a mutation, was shown to increase the measured difference between mutant and wildtype samples. 
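The fold-enrichment figures quoted in these studies (for example the 13-fold enrichment seen by Sanger sequencing) reduce to a one-line calculation. The sketch below is a hypothetical helper with made-up example values, not code from any of the cited studies; it simply assumes enrichment is defined as the ratio of mutant-allele fractions after and before COLD-PCR.

```python
def fold_enrichment(mutant_fraction_before: float, mutant_fraction_after: float) -> float:
    """Fold enrichment of the mutant allele: ratio of the mutant fraction
    after COLD-PCR to the fraction before (assumed definition)."""
    return mutant_fraction_after / mutant_fraction_before

# Example: a mutant allele present at roughly 1:20 mutant-to-wildtype (~5%) that reads
# as ~65% of the signal after COLD-PCR corresponds to a 13-fold enrichment.
print(fold_enrichment(0.05, 0.65))  # -> 13.0
```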
Advantages Single-step method capable of enriching both known and unknown minority alleles irrespective of mutation type and position Does not require extra costly reagents or specialized machinery Better than conventional PCR for the detection of mutations in a mixed sample Does not significantly increase experiment run time compared to conventional PCR Disadvantages Optimal Tc must be measured and determined for each amplicon, adding an extra step to conventional PCR-based procedures Requirement for precise denaturation temperature control during PCR to within ± 0.3 °C (0.54 °F) A suitable critical temperature may not be available that differentiates between mutant and wildtype DNA sequences Restricted to analyzing sequences smaller than approximately 200bp Vulnerable to polymerase-introduced errors Variable overall mutation enrichment dependent on DNA position and nucleotide substitution No guarantee that all low-level mutations will be preferentially enriched History COLD-PCR was originally described by Li et al. in a Nature Medicine paper published in 2008 from Mike Makrigiorgos's lab group at the Dana Farber Cancer Institute of Harvard Medical School. As summarized above, the technology has been used in a number of proof-of-principle experiments and medical research diagnostic experiments. Recently, the COLD-PCR technology has been licensed by Transgenomic, Inc. The licensing terms include the exclusive rights to commercialize the technology combined with Sanger sequencing. The plans are to develop commercial applications that will allow for rapid high-sensitivity detection of low-level somatic and mitochondrial DNA mutations. Alternatives Other technologies are available for the detection of minority DNA mutations, and these methods can be segregated into their ability to enrich for and detect either known or unknown mutations. See also PCR Genotyping Single-nucleotide polymorphism SNP genotyping References Molecular biology techniques Polymerase chain reaction Laboratory techniques DNA profiling techniques
COLD-PCR
[ "Chemistry", "Biology" ]
1,941
[ "Biochemistry methods", "Genetics techniques", "DNA profiling techniques", "Polymerase chain reaction", "Molecular biology techniques", "nan", "Molecular biology" ]
26,418,006
https://en.wikipedia.org/wiki/Exome%20sequencing
Exome sequencing, also known as whole exome sequencing (WES), is a genomic technique for sequencing all of the protein-coding regions of genes in a genome (known as the exome). It consists of two steps: the first step is to select only the subset of DNA that encodes proteins. These regions are known as exons—humans have about 180,000 exons, constituting about 1% of the human genome, or approximately 30 million base pairs. The second step is to sequence the exonic DNA using any high-throughput DNA sequencing technology. The goal of this approach is to identify genetic variants that alter protein sequences, and to do this at a much lower cost than whole-genome sequencing. Since these variants can be responsible for both Mendelian and common polygenic diseases, such as Alzheimer's disease, whole exome sequencing has been applied both in academic research and as a clinical diagnostic. Motivation and comparison to other approaches Exome sequencing is especially effective in the study of rare Mendelian diseases, because it is an efficient way to identify the genetic variants in all of an individual's genes. These diseases are most often caused by very rare genetic variants that are only present in a tiny number of individuals; by contrast, techniques such as SNP arrays can only detect shared genetic variants that are common to many individuals in the wider population. Furthermore, because severe disease-causing variants are much more likely (but by no means exclusively) to be in the protein coding sequence, focusing on this 1% costs far less than whole genome sequencing but still detects a high yield of relevant variants. In the past, clinical genetic tests were chosen based on the clinical presentation of the patient (i.e. focused on one gene or a small number known to be associated with a particular syndrome), or surveyed only certain types of variation (e.g. comparative genomic hybridization) but provided definitive genetic diagnoses in fewer than half of all patients. Exome sequencing is now increasingly used to complement these other tests: both to find mutations in genes already known to cause disease as well as to identify novel genes by comparing exomes from patients with similar features. Technical methodology Step 1: Target-enrichment strategies Target-enrichment methods allow one to selectively capture genomic regions of interest from a DNA sample prior to sequencing. Several target-enrichment strategies have been developed since the original description of the direct genomic selection (DGS) method in 2005. Though many techniques have been described for targeted capture, only a few of these have been extended to capture entire exomes. The first target enrichment strategy to be applied to whole exome sequencing was the array-based hybrid capture method in 2007, but in-solution capture has gained popularity in recent years. Array-based capture Microarrays contain single-stranded oligonucleotides with sequences from the human genome to tile the region of interest fixed to the surface. Genomic DNA is sheared to form double-stranded fragments. The fragments undergo end-repair to produce blunt ends and adaptors with universal priming sequences are added. These fragments are hybridized to oligos on the microarray. Unhybridized fragments are washed away and the desired fragments are eluted. The fragments are then amplified using PCR. Roche NimbleGen was first to take the original DGS technology and adapt it for next-generation sequencing. 
They developed the Sequence Capture Human Exome 2.1M Array to capture ~180,000 coding exons. This method is both time-saving and cost-effective compared to PCR-based methods. The Agilent Capture Array and the comparative genomic hybridization array are other methods that can be used for hybrid capture of target sequences. Limitations of this technique include the need for expensive hardware as well as a relatively large amount of DNA. In-solution capture To capture genomic regions of interest using in-solution capture, a pool of custom oligonucleotides (probes) is synthesized and hybridized in solution to a fragmented genomic DNA sample. The probes (labeled with beads) selectively hybridize to the genomic regions of interest, after which the beads (now including the DNA fragments of interest) can be pulled down and washed to clear excess material. The beads are then removed and the genomic fragments can be sequenced, allowing for selective DNA sequencing of genomic regions (e.g., exons) of interest. This method was developed to improve on the hybridization capture target-enrichment method. In in-solution capture (as opposed to hybrid capture) there is an excess of probes to target regions of interest over the amount of template required. The optimal target size is about 3.5 megabases and yields excellent sequence coverage of the target regions. The preferred method is dependent on several factors including: number of base pairs in the region of interest, demands for reads on target, equipment in house, etc. Step 2: Sequencing There are many next-generation sequencing (NGS) platforms available, postdating classical Sanger sequencing methodologies. These platforms include the Roche 454 sequencer, Life Technologies SOLiD systems, the Life Technologies Ion Torrent, and Illumina's Genome Analyzer II (now defunct) together with its successors, the Illumina MiSeq, HiSeq, and NovaSeq series instruments, all of which can be used for massively parallel exome sequencing. These 'short read' NGS systems are particularly well suited to analyse many relatively short stretches of DNA sequence, as found in human exons. Comparison with other technologies There are multiple technologies available that identify genetic variants. Each technology has advantages and disadvantages in terms of technical and financial factors. Two such technologies are microarrays and whole-genome sequencing. Microarray-based genotyping Microarrays use hybridization probes to test the prevalence of known DNA sequences, thus they cannot be used to identify unexpected genetic changes. In contrast, the high-throughput sequencing technologies used in exome sequencing directly provide the nucleotide sequences of DNA at the thousands of exonic loci tested. Hence, WES addresses some of the present limitations of hybridization genotyping arrays. Although exome sequencing is more expensive than hybridization-based technologies on a per-sample basis, its cost has been decreasing due to the falling cost and increased throughput of whole genome sequencing. Whole-genome sequencing Exome sequencing is only able to identify those variants found in the coding regions of genes which affect protein function. It is not able to identify the structural and non-coding variants associated with disease, which can be found using other methods such as whole genome sequencing. 
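The cost argument in this comparison comes down to simple coverage arithmetic: the number of sequenced bases needed scales with the size of the targeted region and the desired mean depth. The Python sketch below is a back-of-the-envelope illustration using round numbers (a ~30 Mb exome target, a ~3.1 Gb genome, 150 bp reads, and an assumed 70% on-target rate for capture); these figures are illustrative assumptions, not specifications of any particular platform or kit.

```python
def reads_required(target_bp: int, mean_depth: float, read_len: int = 150,
                   on_target_fraction: float = 1.0) -> int:
    """Approximate number of reads needed to reach a given mean depth over a
    target region, allowing for reads that fall outside the captured target."""
    bases_needed = target_bp * mean_depth
    return int(bases_needed / (read_len * on_target_fraction))

EXOME_BP = 30_000_000       # ~1% of the genome (assumed round number)
GENOME_BP = 3_100_000_000   # human genome size (assumed round number)

# 100x exome with 70% of capture reads on target vs. 30x whole genome
exome_reads = reads_required(EXOME_BP, 100, on_target_fraction=0.7)
wgs_reads = reads_required(GENOME_BP, 30)
print(f"exome 100x: ~{exome_reads / 1e6:.0f} M reads, WGS 30x: ~{wgs_reads / 1e6:.0f} M reads")
```

Under these toy assumptions, one whole-genome run consumes roughly twenty times the sequencing output of a deep exome, which is the intuition behind the sample-number trade-off discussed next.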
About 99% of the human genome is not covered by exome sequencing; on the other hand, for a fixed cost, exome sequencing allows portions of the genome to be sequenced in at least 20 times as many samples as whole genome sequencing. Translating identified rare variants into the clinic requires adequate sample sizes and the ability to interpret the results well enough to provide a clinical diagnosis; with current knowledge in genetics, there are already reports of exome sequencing being used to assist diagnosis. The cost of exome sequencing is typically lower than that of whole genome sequencing. Data analysis The statistical analysis of the large quantity of data generated from sequencing approaches is a challenge. Even when only the exomes of individuals are sequenced, a large quantity of data and sequence information is generated, which requires a significant amount of data analysis. Challenges associated with the analysis of these data include changes in programs used to align and assemble sequence reads. Various sequencing technologies also have different error rates and generate various read-lengths, which can pose challenges in comparing results from different sequencing platforms. False positive and false negative findings are associated with genomic resequencing approaches and are critical issues. A few strategies have been developed to improve the quality of exome data, such as: Comparing the genetic variants identified between sequencing and array-based genotyping Comparing the coding SNPs to a whole genome sequenced individual with the disorder Comparing the coding SNPs with Sanger sequencing of HapMap individuals Rare recessive disorders may not have single nucleotide polymorphisms (SNPs) in public databases such as dbSNP. More common recessive phenotypes would be more likely to have disease-causing variants reported in dbSNP. For example, the most common cystic fibrosis variant has an allele frequency of about 3% in most populations. Screening out such variants might erroneously exclude such genes from consideration. Genes for recessive disorders are usually easier to identify than those for dominant disorders because the genes are less likely to have more than one rare nonsynonymous variant. The system that screens common genetic variants relies on dbSNP, which may not have accurate information about the variation of alleles. Using lists of common variation derived from exome- or genome-wide sequenced individuals would be more reliable. A challenge in this approach is that, as the number of exomes sequenced increases, dbSNP will also increase in the number of uncommon variants it contains. It will be necessary to develop thresholds to define the common variants that are unlikely to be associated with a disease phenotype. Genetic heterogeneity and population ethnicity are also major limitations, as they may increase the number of false positive and false negative findings, which will make the identification of candidate genes more difficult. Of course, it is possible to reduce the stringency of the thresholds in the presence of heterogeneity and ethnicity; however, this will reduce the power to detect variants as well. Using a genotype-first approach to identify candidate genes might also offer a solution to overcome these limitations. Unlike common variant analysis, the analysis of rare variants in whole-exome sequencing studies evaluates variant sets rather than single variants. Functional annotations predict the effect or function of rare variants and help prioritize rare functional variants. 
Incorporating these annotations can effectively boost the power of rare-variant genetic association analysis in whole genome sequencing studies. Some methods and tools have been developed to perform functionally informed rare variant association analysis by incorporating functional annotations, empowering analysis in whole exome sequencing studies. Ethical implications New technologies in genomics have changed the way researchers approach both basic and translational research. With approaches such as exome sequencing, it is possible to significantly enhance the data generated from individual genomes, which has put forth a series of questions on how to deal with the vast amount of information. Should the individuals in these studies be allowed to have access to their sequencing information? Should this information be shared with insurance companies? These data can lead to unexpected findings and complicate clinical utility and patient benefit. This area of genomics still remains a challenge and researchers are looking into how to address these questions. Applications of exome sequencing By using exome sequencing, fixed-cost studies can sequence samples to much higher depth than could be achieved with whole genome sequencing. This additional depth makes exome sequencing well suited to several applications that need reliable variant calls. Rare variant mapping in complex disorders Current association studies have focused on common variation across the genome, as these variants are the easiest to identify with our current assays. However, disease-causing variants of large effect have been found to lie within exomes in candidate gene studies, and because of negative selection, are found in much lower allele frequencies and may remain untyped in current standard genotyping assays. Whole genome sequencing is a potential method to assay novel variants across the genome. However, in complex disorders (such as autism), a large number of genes are thought to be associated with disease risk. This heterogeneity of underlying risk means that very large sample sizes are required for gene discovery, and thus whole genome sequencing is not particularly cost-effective. This sample size issue is alleviated by the development of novel advanced analytic methods, which effectively map disease genes even though the causal mutations are rare at the variant level. In addition, variants in coding regions have been much more extensively studied and their functional implications are much easier to derive, making the practical applications of variants within the targeted exome region more immediately accessible. Exome sequencing in rare variant gene discovery remains a very active and ongoing area of research, and there is growing evidence that a significant burden of risk is observed across sets of genes. Exome sequencing has, for example, reported rare variants in the KRT82 gene in the autoimmune disorder alopecia areata. Discovery of Mendelian disorders In Mendelian disorders of large effect, findings thus far suggest one or a very small number of variants within coding genes underlie the entire condition. Because of the severity of these disorders, the few causal variants are presumed to be extremely rare or novel in the population, and would be missed by any standard genotyping assay. Exome sequencing provides high coverage variant calls across coding regions, which are needed to separate true variants from noise. 
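The filtering strategy described above (screening out variants that are common in reference databases and prioritising rare, protein-altering changes) is straightforward to express in code. The sketch below is a minimal, hypothetical filter over an already-annotated variant list; the field names, frequency threshold, and consequence categories are assumptions for illustration, not the schema of any real pipeline or database.

```python
# Minimal, hypothetical rare-variant filter for exome data.
# Each variant is assumed to be pre-annotated with a population allele
# frequency (e.g. from a reference database) and a predicted consequence.
DAMAGING = {"missense", "nonsense", "frameshift", "splice_site"}

def filter_candidates(variants, max_pop_af=0.01):
    """Keep rare, protein-altering variants as candidate disease variants."""
    return [v for v in variants
            if v["pop_af"] <= max_pop_af and v["consequence"] in DAMAGING]

variants = [  # made-up example records
    {"gene": "GENE_A", "pop_af": 0.35,   "consequence": "missense"},    # common -> dropped
    {"gene": "GENE_B", "pop_af": 0.0001, "consequence": "synonymous"},  # silent -> dropped
    {"gene": "GENE_C", "pop_af": 0.0,    "consequence": "nonsense"},    # novel, damaging -> kept
]
print(filter_candidates(variants))  # -> only the GENE_C record remains
```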
A successful model of Mendelian gene discovery involves the discovery of de novo variants using trio sequencing, where parents and proband are genotyped. Case studies A study published in September 2009 discussed a proof of concept experiment to determine if it was possible to identify causal genetic variants using exome sequencing. They sequenced four individuals with Freeman–Sheldon syndrome (FSS) (OMIM 193700), a rare autosomal dominant disorder known to be caused by a mutation in the gene MYH3. Eight HapMap individuals were also sequenced to remove common variants in order to identify the causal gene for FSS. After exclusion of common variants, the authors were able to identify MYH3, which confirms that exome sequencing can be used to identify causal variants of rare disorders. This was the first reported study that used exome sequencing as an approach to identify an unknown causal gene for a rare mendelian disorder. Subsequently, another group reported successful clinical diagnosis of a suspected Bartter syndrome patient of Turkish origin. Bartter syndrome is a renal salt-wasting disease. Exome sequencing revealed an unexpected well-conserved recessive mutation in a gene called SLC26A3 which is associated with congenital chloride diarrhea (CLD). This molecular diagnosis of CLD was confirmed by the referring clinician. This example provided proof of concept of the use of whole-exome sequencing as a clinical tool in evaluation of patients with undiagnosed genetic illnesses. This report is regarded as the first application of next generation sequencing technology for molecular diagnosis of a patient. A second report was conducted on exome sequencing of individuals with a mendelian disorder known as Miller syndrome (MIM#263750), a rare disorder of autosomal recessive inheritance. Two siblings and two unrelated individuals with Miller syndrome were studied. They looked at variants that have the potential to be pathogenic such as non-synonymous mutations, splice acceptor and donor sites and short coding insertions or deletions. Since Miller syndrome is a rare disorder, it is expected that the causal variant has not been previously identified. Previous exome sequencing studies of common single nucleotide polymorphisms (SNPs) in public SNP databases were used to further exclude candidate genes. After exclusion of these genes, the authors found mutations in DHODH that were shared among individuals with Miller syndrome. Each individual with Miller syndrome was a compound heterozygote for the DHODH mutations which were inherited as each parent of an affected individual was found to be a carrier. This was the first time exome sequencing was shown to identify a novel gene responsible for a rare mendelian disease. This exciting finding demonstrates that exome sequencing has the potential to locate causative genes in complex diseases, which previously has not been possible due to limitations in traditional methods. Targeted capture and massively parallel sequencing represents a cost-effective, reproducible and robust strategy with high sensitivity and specificity to detect variants causing protein-coding changes in individual human genomes. Clinical diagnostics Exome sequencing can be used to diagnose the genetic cause of disease in a patient. Identification of the underlying disease gene mutation(s) can have major implications for diagnostic and therapeutic approaches, can guide prediction of disease natural history, and makes it possible to test at-risk family members. 
There are many factors that make exome sequencing superior to single gene analysis including the ability to identify mutations in genes that were not tested due to an atypical clinical presentation or the ability to identify clinical cases where mutations from different genes contribute to the different phenotypes in the same patient. Having diagnosed a genetic cause of a disease, this information may guide the selection of appropriate treatment. The first time this strategy was performed successfully in the clinic was in the treatment of an infant with inflammatory bowel disease. A number of conventional diagnostics had previously been used, but the results could not explain the infant's symptoms. Analysis of exome sequencing data identified a mutation in the XIAP gene. Knowledge of this gene's function guided the infant's treatment, leading to a bone marrow transplantation which cured the child of disease. Researchers have used exome sequencing to identify the underlying mutation for a patient with Bartter Syndrome and congenital chloride diarrhea. Bilgular's group also used exome sequencing and identified the underlying mutation for a patient with severe brain malformations, stating "[These findings] highlight the use of whole exome sequencing to identify disease loci in settings in which traditional methods have proved challenging... Our results demonstrate that this technology will be particularly valuable for gene discovery in those conditions in which mapping has been confounded by locus heterogeneity and uncertainty about the boundaries of diagnostic classification, pointing to a bright future for its broad application to medicine". Researchers at University of Cape Town, South Africa used exome sequencing to discover the genetic mutation of CDH2 as the underlying cause of a genetic disorder known as arrhythmogenic right ventricle cardiomyopathy (ARVC)‚ which increases the risk of heart disease and cardiac arrest. Commercial costs Multiple companies have offered exome sequencing to consumers. Knome was the first company to offer exome sequencing services to consumers, at a cost of several thousand dollars. Later, 23andMe ran a pilot WES program that was announced in September 2011 and was discontinued in 2012. Consumers could obtain exome data at a cost of $999. The company provided raw data, and did not offer analysis. In November 2012, DNADTC, a division of Gene by Gene started offering exomes at 80X coverage and introductory price of $695. This price per DNADTC web site is currently $895. In October 2013, BGI announced a promotion for personal whole exome sequencing at 50X coverage for $499. In June 2016 Genos was able to achieve an even lower price of $399 with a CLIA-certified 75X consumer exome sequenced from saliva. A 2018 review of 36 studies found the cost for exome sequencing to range from $555USD to $5,169USD, with a diagnostic yield ranging from 3% to 79% depending on patient groups. See also DNA profiling Genetic counseling Personalized medicine Transcriptomics Whole genome sequencing References External links News on Exome Sequencing Guide for Patients and Families Molecular biology DNA sequencing Molecular biology techniques
Exome sequencing
[ "Chemistry", "Biology" ]
3,931
[ "Biochemistry", "Molecular biology techniques", "DNA sequencing", "Molecular biology" ]
26,419,872
https://en.wikipedia.org/wiki/Chromatin%20immunoprecipitation
Chromatin immunoprecipitation (ChIP) is a type of immunoprecipitation experimental technique used to investigate the interaction between proteins and DNA in the cell. It aims to determine whether specific proteins are associated with specific genomic regions, such as transcription factors on promoters or other DNA binding sites, and possibly define cistromes. ChIP also aims to determine the specific location in the genome that various histone modifications are associated with, indicating the target of the histone modifiers. ChIP is crucial for the advancements in the field of epigenomics and learning more about epigenetic phenomena. Briefly, the conventional method is as follows: DNA and associated proteins on chromatin in living cells or tissues are crosslinked (this step is omitted in Native ChIP). The DNA-protein complexes (chromatin-protein) are then sheared into ~500 bp DNA fragments by sonication or nuclease digestion. Cross-linked DNA fragments associated with the protein(s) of interest are selectively immunoprecipitated from the cell debris using an appropriate protein-specific antibody. The associated DNA fragments are purified and their sequence is determined. Enrichment of specific DNA sequences represents regions on the genome that the protein of interest is associated with in vivo. Typical ChIP There are mainly two types of ChIP, primarily differing in the starting chromatin preparation. The first uses reversibly cross-linked chromatin sheared by sonication called cross-linked ChIP (XChIP). Native ChIP (NChIP) uses native chromatin sheared by micrococcal nuclease digestion. Cross-linked ChIP (XChIP) Cross-linked ChIP is mainly suited for mapping the DNA target of transcription factors or other chromatin-associated proteins, and uses reversibly cross-linked chromatin as starting material. The agent for reversible cross-linking could be formaldehyde or UV light. Then the cross-linked chromatin is usually sheared by sonication, providing fragments of 300 - 1000 base pairs (bp) in length. Mild formaldehyde crosslinking followed by nuclease digestion has been used to shear the chromatin. Chromatin fragments of 400 - 500bp have proven to be suitable for ChIP assays as they cover two to three nucleosomes. Cell debris in the sheared lysate is then cleared by sedimentation and protein–DNA complexes are selectively immunoprecipitated using specific antibodies to the protein(s) of interest. The antibodies are commonly coupled to agarose, sepharose, or magnetic beads. Alternatively, chromatin-antibody complexes can be selectively retained and eluted by inert polymer discs. The immunoprecipitated complexes (i.e., the bead–antibody–protein–target DNA sequence complex) are then collected and washed to remove non-specifically bound chromatin, the protein–DNA cross-link is reversed and proteins are removed by digestion with proteinase K. An epitope-tagged version of the protein of interest, or in vivo biotinylation can be used instead of antibodies to the native protein of interest. The DNA associated with the complex is then purified and identified by polymerase chain reaction (PCR), microarrays (ChIP-on-chip), molecular cloning and sequencing, or direct high-throughput sequencing (ChIP-Seq). Native ChIP (NChIP) Native ChIP is mainly suited for mapping the DNA target of histone modifiers. Generally, native chromatin is used as starting chromatin. As histones wrap around DNA to form nucleosomes, they are naturally linked. 
Then the chromatin is sheared by micrococcal nuclease digestion, which cuts DNA at the length of the linker, leaving nucleosomes intact and providing DNA fragments of one nucleosome (200bp) to five nucleosomes (1000bp) in length. Thereafter, methods similar to XChIP are used for clearing the cell debris, immunoprecipitating the protein of interest, removing protein from the immunoprecipitated complex, and purifying and analyzing the complex-associated DNA. Comparison of XChIP and NChIP The major advantage of NChIP is antibody specificity. Most antibodies to modified histones are raised against unfixed, synthetic peptide antigens. The epitopes they need to recognize in the XChIP may be disrupted or destroyed by formaldehyde cross-linking, particularly as the cross-links are likely to involve lysine e-amino groups in the N-terminals, disrupting the epitopes. This is likely to explain the consistently low efficiency of XChIP protocols compared to NChIP. But XChIP and NChIP have different aims and advantages relative to each other. XChIP is for mapping target sites of transcription factors and other chromatin-associated proteins; NChIP is for mapping target sites of histone modifiers (see Table 1). Comparison of ChIP-seq and ChIP-chip Chromatin Immunoprecipitation sequencing, also known as ChIP-seq, is an experimental technique used to identify transcription factor binding events throughout an entire genome. Knowing how the proteins in the human body interact with DNA to regulate gene expression is a key component of our knowledge of human diseases and biological processes. ChIP-seq is the primary technique to complete this task, as it has proven to be extremely effective in resolving how proteins and transcription factors influence phenotypical mechanisms. Overall ChIP-seq has risen to be a very efficient method for determining these factors, but there is a rivaling method known as ChIP-on-chip. ChIP-on-chip, also known as ChIP-chip, is an experimental technique used to isolate and identify genomic sites occupied by specific DNA-binding proteins in living cells. ChIP-on-chip is a relatively newer technique, as it was introduced in 2001 by Peggy Farnham and Michael Zhang. ChIP-on-chip gets its name by combining the methods of Chromatin Immunoprecipitation and DNA microarray, thus creating ChIP-on-chip. The two methods seek similar results, as they both strive to find protein binding sites that can help identify elements in the human genome. Those elements in the human genome are important for the advancement of knowledge in human diseases and biological processes. The difference between ChIP-seq and ChIP-chip is established by the specific site of the protein binding identification. The main difference comes from the efficacy of the two techniques, ChIP-seq produces results with higher sensitivity and spatial resolution because of the wide range of genomic coverage. Even though ChIP-seq has proven to be more efficient than ChIP-chip, ChIP-seq is not always the first choice for scientists. The cost and accessibility of ChIP-seq is a major disadvantage, which has led to the more predominant use of ChIP-chip in laboratories across the world. Table 1 Advantages and disadvantages of NChIP and XChIP History and New ChIP methods In 1984 John T. Lis and David Gilmour, at the time a graduate student in the Lis lab, used UV irradiation, a zero-length protein-nucleic acid crosslinking agent, to covalently cross-link proteins bound to DNA in living bacterial cells. 
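When immunoprecipitated DNA is quantified by qPCR, one of the readouts mentioned above, enrichment at a given locus is commonly reported as "percent input". The snippet below is a small, generic calculation of that quantity from Ct values; it is a sketch of the widely used formula, with made-up example numbers, and is not taken from any specific ChIP protocol referenced here.

```python
import math

def percent_input(ct_input: float, ct_ip: float, input_fraction: float = 0.01) -> float:
    """Percent-input enrichment from qPCR Ct values.

    Assumes 100% primer efficiency (signal doubles each cycle). The input Ct is
    first adjusted for the fraction of chromatin saved as input (e.g. 1%).
    """
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Example: 1% input measured at Ct 25, ChIP sample at Ct 28 (hypothetical values)
print(f"{percent_input(25.0, 28.0):.3f}% of input recovered")
```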
Following lysis of cross-linked cells and immunoprecipitation of bacterial RNA polymerase, DNA associated with enriched RNA polymerase was hybridized to probes corresponding to different regions of known genes to determine the in vivo distribution and density of RNA polymerase at these genes. A year later they used the same methodology to study the distribution of eukaryotic RNA polymerase II on fruit fly heat shock genes. These reports are considered the pioneering studies in the field of chromatin immunoprecipitation. XChIP was further modified and developed by Alexander Varshavsky and co-workers, who examined the distribution of histone H4 on heat shock genes using formaldehyde cross-linking. This technique was extensively developed and refined thereafter. The NChIP approach was first described by Hebbes et al. in 1988, and has also been developed and refined quickly. The typical ChIP assay usually takes 4–5 days and requires at least 10⁶–10⁷ cells. Newer ChIP techniques can now be performed with as few as 100–1,000 cells and completed within one day. Bead-free ChIP: This method uses discs of inert, porous polymer functionalized with either Protein A or G in spin columns or microplates. The chromatin-antibody complex is selectively retained by the disc and eluted to obtain enriched DNA for downstream applications such as qPCR and sequencing. The porous environment is specifically designed to maximize capture efficiency and reduce non-specific binding. Due to less manual handling and optimized protocols, ChIP can be performed in 5 hours. Carrier ChIP (CChIP): This approach can use as few as 100 cells by adding Drosophila cells as carrier chromatin to reduce loss and facilitate precipitation of the target chromatin. However, it demands highly specific primers to distinguish the target cell chromatin from the foreign carrier chromatin background, and it takes two to three days. Fast ChIP (qChIP): The fast ChIP assay reduces the time by shortening two steps of a typical ChIP assay: (i) an ultrasonic bath accelerates the rate of antibody binding to target proteins, thereby reducing immunoprecipitation time; (ii) a resin-based (Chelex-100) DNA isolation procedure reduces the time of cross-link reversal and DNA isolation. However, the fast protocol is suitable only for large cell samples (in the range of 10⁶–10⁷ cells). Up to 24 sheared chromatin samples can be processed to yield PCR-ready DNA in 5 hours, allowing multiple chromatin factors to be probed simultaneously and/or genomic events to be followed over several time points. Quick and quantitative ChIP (Q2ChIP): The assay uses 100,000 cells as starting material and is suitable for up to 1,000 histone ChIPs or 100 transcription factor ChIPs. Thus many chromatin samples can be prepared in parallel and stored, and Q2ChIP can be undertaken in a day. MicroChIP (μChIP): chromatin is usually prepared from 1,000 cells and up to 8 ChIPs can be done in parallel without carriers. The assay can also start with 100 cells, but is then suitable for only one ChIP. It can also use small (1 mm³) tissue biopsies, and microChIP can be done within one day. Matrix ChIP: This is a microplate-based ChIP assay with increased throughput and a simplified procedure. All steps are done in microplate wells without sample transfers, enabling potential for automation. It enables 96 ChIP assays for histone and various DNA-bound proteins in a single day. 
Pathology-ChIP (PAT-ChIP): This technique allows ChIP from pathology formalin-fixed and paraffin-embedded tissues and thus the use of pathology archives (even those that are several years old) for epigenetic analyses and the identification of candidate epigenetic biomarkers or targets. ChIP has also been applied for genome-wide analysis by combining it with microarray technology (ChIP-on-chip) or second-generation DNA-sequencing technology (ChIP sequencing, ChIP-seq). ChIP can also be combined with paired-end tag sequencing in Chromatin Interaction Analysis using Paired End Tag sequencing (ChIA-PET), a technique developed for large-scale, de novo analysis of higher-order chromatin structures. Limitations Large-scale assays using ChIP are challenging in intact model organisms. This is because antibodies have to be generated for each transcription factor (TF), or, alternatively, transgenic model organisms expressing epitope-tagged TFs need to be produced. Researchers studying differential gene expression patterns in small organisms also face problems, as genes may be expressed at low levels, in a small number of cells, or within a narrow time window. ChIP experiments cannot discriminate between different TF isoforms (protein isoforms). See also ChIP-exo, a technique that adds exonuclease treatment to the ChIP process to obtain up to single base pair resolution of binding sites ChIP-on-chip, combines ChIP with microarray technology DamID, an alternative location mapping technique that does not require specific antibodies RIP-Chip, a similar technique to analyze RNA-protein interactions References External links EpigenomeNOE.com Chromatin Immunoprecipitation (ChIP) on Unfixed Chromatin from Cells and Tissues to Analyze Histone Modifications Chromatin Immunoprecipitation (ChIP) of Protein Complexes: Mapping of Genomic Targets of Nuclear Proteins in Cultured Cells Biochemical separation processes Protein methods Genomics techniques Molecular biology techniques Protein–protein interaction assays Immunologic tests
Chromatin immunoprecipitation
[ "Chemistry", "Biology" ]
2,700
[ "Biochemistry methods", "Genetics techniques", "Genomics techniques", "Protein–protein interaction assays", "Separation processes", "Protein methods", "Protein biochemistry", "Immunologic tests", "Biochemical separation processes", "Molecular biology techniques", "Molecular biology" ]
26,422,563
https://en.wikipedia.org/wiki/Wafer%20backgrinding
Wafer backgrinding is a semiconductor device fabrication step during which wafer thickness is reduced to allow stacking and high-density packaging of integrated circuits (IC). ICs are produced on semiconductor wafers that undergo a multitude of processing steps. The silicon wafers predominantly used today have diameters of 200 and 300 mm. They are roughly 750 μm thick to ensure a minimum of mechanical stability and to avoid warping during high-temperature processing steps. Smartcards, USB memory sticks, smartphones, handheld music players, and other ultra-compact electronic products would not be feasible in their present form without minimizing the size of their various components along all dimensions. The backside of each wafer is therefore ground prior to wafer dicing (separation of the individual microchips). Wafers thinned down to 50–75 μm are common today. Prior to grinding, wafers are commonly laminated with UV-curable back-grinding tape, which protects the wafer surface from damage during back-grinding and prevents wafer surface contamination caused by infiltration of grinding fluid and/or debris. The wafers are also washed with deionized water throughout the process, which helps prevent contamination. The process is also known as "backlap", "backfinish", "back side grinding", or "wafer thinning". After backgrinding, a finished wafer is cut into individual chips in a process called die singulation. See also Back-illuminated sensor References Semiconductor device fabrication
Wafer backgrinding
[ "Materials_science" ]
310
[ "Semiconductor device fabrication", "Microtechnology" ]
33,633,646
https://en.wikipedia.org/wiki/Geometrical%20acoustics
Geometrical acoustics or ray acoustics is a branch of acoustics that studies propagation of sound on the basis of the concept of acoustic rays, defined as lines along which the acoustic energy is transported. This concept is similar to geometrical optics, or ray optics, which studies light propagation in terms of optical rays. Geometrical acoustics is an approximate theory, valid in the limiting case of very small wavelengths, or very high frequencies. The principal task of geometrical acoustics is to determine the trajectories of sound rays. The rays have the simplest form in a homogeneous medium, where they are straight lines. If the acoustic parameters of the medium are functions of spatial coordinates, the ray trajectories become curvilinear, describing sound reflection, refraction, possible focusing, etc. The equations of geometric acoustics have essentially the same form as those of geometric optics. The same laws of reflection and refraction hold for sound rays as for light rays. Geometrical acoustics does not take into account such important wave effects as diffraction. However, it provides a very good approximation when the wavelength is very small compared to the characteristic dimensions of inhomogeneous inclusions through which the sound propagates. Mathematical description The discussion below follows Landau and Lifshitz. If the amplitude and the direction of propagation vary slowly over distances of the order of the wavelength, then an arbitrary sound wave can be approximated locally as a plane wave. In this case, the velocity potential can be written as \psi = a e^{i\varphi}. For a plane wave, the phase is \varphi = \mathbf{k}\cdot\mathbf{r} - \omega t + \alpha, where \mathbf{k} is a constant wavenumber vector, \omega is a constant frequency, \mathbf{r} is the radius vector, t is the time and \alpha is some arbitrary complex constant. The function \varphi is called the eikonal. We expect the eikonal to vary slowly with coordinates and time, consistent with the approximation; in that case, a Taylor series expansion provides \varphi = \varphi_0 + \mathbf{r}\cdot\frac{\partial\varphi}{\partial\mathbf{r}} + t\,\frac{\partial\varphi}{\partial t}. Equating the two expressions for \varphi term by term, one finds \mathbf{k} = \frac{\partial\varphi}{\partial\mathbf{r}} = \nabla\varphi, \qquad \omega = -\frac{\partial\varphi}{\partial t}. For sound waves, the relation \omega = ck holds, where c is the speed of sound and k = |\mathbf{k}| is the magnitude of the wavenumber vector. Therefore, the eikonal satisfies a first-order nonlinear partial differential equation, \left(\frac{\partial\varphi}{\partial t}\right)^2 = c^2\left[\left(\frac{\partial\varphi}{\partial x}\right)^2 + \left(\frac{\partial\varphi}{\partial y}\right)^2 + \left(\frac{\partial\varphi}{\partial z}\right)^2\right], where c can be a function of coordinates if the fluid is not homogeneous. The above equation is the same as the Hamilton–Jacobi equation, where the eikonal \varphi can be considered as the action. Since the Hamilton–Jacobi equation is equivalent to Hamilton's equations, by analogy one finds that the rays obey \frac{d\mathbf{r}}{dt} = \frac{\partial\omega}{\partial\mathbf{k}}, \qquad \frac{d\mathbf{k}}{dt} = -\frac{\partial\omega}{\partial\mathbf{r}}. Practical applications Practical applications of the methods of geometrical acoustics can be found in very different areas of acoustics. For example, in architectural acoustics the rectilinear trajectories of sound rays make it possible to determine reverberation time in a very simple way. The operation of fathometers and hydrolocators is based on measurements of the time required for sound rays to travel to a reflecting object and back. The ray concept is used in designing sound focusing systems. Also, the approximate theory of sound propagation in inhomogeneous media (such as the ocean and the atmosphere) has been developed largely on the basis of the laws of geometrical acoustics. The methods of geometrical acoustics have a limited range of applicability because the ray concept itself is only valid for those cases where the amplitude and direction of a wave undergo little change over distances of the order of the wavelength of a sound wave. More specifically, it is necessary that the dimensions of the rooms or obstacles in the sound path should be much greater than the wavelength. 
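A minimal numerical sketch of the ray equations above: the script below integrates dr/dt = ∂ω/∂k and dk/dt = -∂ω/∂r for ω(r, k) = c(r)|k| in a medium whose sound speed varies linearly with depth, a toy model of the stratified ocean or atmosphere mentioned above. The sound-speed profile, step size, frequency, and launch angle are arbitrary illustrative choices, not values taken from the literature.

```python
import numpy as np

def trace_ray(c0=1500.0, dcdz=0.017, z0=100.0, angle_deg=5.0, dt=0.01, steps=20000):
    """Trace a 2-D acoustic ray in a medium with sound speed c(z) = c0 + dcdz * z.

    Integrates the geometrical-acoustics ray equations
        dr/dt =  dω/dk = c(z) * k/|k|,
        dk/dt = -dω/dr = -|k| * grad c,
    with ω(r, k) = c(z)|k|, using simple Euler steps.
    """
    c = lambda z: c0 + dcdz * z
    r = np.array([0.0, z0])                       # (range x, depth z) in metres
    theta = np.deg2rad(angle_deg)
    k = (2 * np.pi * 100.0 / c(z0)) * np.array([np.cos(theta), np.sin(theta)])  # a 100 Hz ray
    path = [r.copy()]
    for _ in range(steps):
        kmag = np.linalg.norm(k)
        r = r + dt * c(r[1]) * k / kmag           # dr/dt = c(z) * unit vector along k
        k = k + dt * np.array([0.0, -kmag * dcdz])  # dk/dt = -|k| * grad c (z-component only)
        path.append(r.copy())
    return np.array(path)

if __name__ == "__main__":
    path = trace_ray()
    print("ray start:", path[0], "ray end:", path[-1])
```

Because the speed increases with depth in this toy profile, the downward-launched ray is refracted back toward the region of lower sound speed, the curvilinear behaviour described above.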
If the characteristic dimensions for a given problem become comparable to the wavelength, then wave diffraction begins to play an important part, and this is not covered by geometric acoustics. Software applications The concept of geometrical acoustics is widely used in software applications. Some software applications that use geometrical acoustics for their calculations are ODEON, Enhanced Acoustic Simulator for Engineers, Olive Tree Lab Terrain, CATT-Acoustic™ and COMSOL Multiphysics. References External links ODEON Room Acoustics Software EASE – Industry Standard for Acoustical Simulation of Rooms Olive Tree Lab Terrain Acoustics
Geometrical acoustics
[ "Physics" ]
815
[ "Classical mechanics", "Acoustics" ]
28,267,626
https://en.wikipedia.org/wiki/Categorical%20quantum%20mechanics
Categorical quantum mechanics is the study of quantum foundations and quantum information using paradigms from mathematics and computer science, notably monoidal category theory. The primitive objects of study are physical processes, and the different ways that these can be composed. It was pioneered in 2004 by Samson Abramsky and Bob Coecke. Categorical quantum mechanics is entry 18M40 in MSC2020. Mathematical setup Mathematically, the basic setup is captured by a dagger symmetric monoidal category: composition of morphisms models sequential composition of processes, and the tensor product describes parallel composition of processes. The role of the dagger is to assign to each state a corresponding test. These can then be adorned with more structure to study various aspects. For instance: A dagger compact category allows one to distinguish between an "input" and "output" of a process. In the diagrammatic calculus, it allows wires to be bent, allowing for a less restricted transfer of information. In particular, it allows entangled states and measurements, and gives elegant descriptions of protocols such as quantum teleportation. In quantum theory, it being compact closed is related to the Choi-Jamiołkowski isomorphism (also known as process-state duality), while the dagger structure captures the ability to take adjoints of linear maps. Considering only the morphisms that are completely positive maps, one can also handle mixed states, allowing the study of quantum channels categorically. Wires are always two-ended (and can never be split into a Y), reflecting the no-cloning and no-deleting theorems of quantum mechanics. Special commutative dagger Frobenius algebras model the fact that certain processes yield classical information, that can be cloned or deleted, thus capturing classical communication. In early works, dagger biproducts were used to study both classical communication and the superposition principle. Later, these two features have been separated. Complementary Frobenius algebras embody the principle of complementarity, which is used to great effect in quantum computation, as in the ZX-calculus. A substantial portion of the mathematical backbone to this approach is drawn from 'Australian category theory', most notably from work by Max Kelly and M. L. Laplaza, Andre Joyal and Ross Street, A. Carboni and R. F. C. Walters, and Steve Lack. Modern textbooks include Categories for quantum theory and Picturing quantum processes. Diagrammatic calculus One of the most notable features of categorical quantum mechanics is that the compositional structure can be faithfully captured by string diagrams. These diagrammatic languages can be traced back to Penrose graphical notation, developed in the early 1970s. Diagrammatic reasoning has been used before in quantum information science in the quantum circuit model, however, in categorical quantum mechanics primitive gates like the CNOT-gate arise as composites of more basic algebras, resulting in a much more compact calculus. In particular, the ZX-calculus has sprung forth from categorical quantum mechanics as a diagrammatic counterpart to conventional linear algebraic reasoning about quantum gates. The ZX-calculus consists of a set of generators representing the common Pauli quantum gates and the Hadamard gate equipped with a set of graphical rewrite rules governing their interaction. 
Although a standard set of rewrite rules has not yet been established, some versions have been proven to be complete, meaning that any equation that holds between two quantum circuits represented as diagrams can be proven using the rewrite rules. The ZX-calculus has been used to study for instance measurement-based quantum computing. Branches of activity Axiomatization and new models One of the main successes of the categorical quantum mechanics research program is that from seemingly weak abstract constraints on the compositional structure, it turned out to be possible to derive many quantum mechanical phenomena. In contrast to earlier axiomatic approaches, which aimed to reconstruct Hilbert space quantum theory from reasonable assumptions, this attitude of not aiming for a complete axiomatization may lead to new interesting models that describe quantum phenomena, which could be of use when crafting future theories. Completeness and representation results There are several theorems relating the abstract setting of categorical quantum mechanics to traditional settings for quantum mechanics. Completeness of the diagrammatic calculus: an equality of morphisms can be proved in the category of finite-dimensional Hilbert spaces if and only if it can be proved in the graphical language of dagger compact closed categories. Dagger commutative Frobenius algebras in the category of finite-dimensional Hilbert spaces correspond to orthogonal bases. A version of this correspondence also holds in arbitrary dimension. Certain extra axioms guarantee that the scalars embed into the field of complex numbers, namely the existence of finite dagger biproducts and dagger equalizers, well-pointedness, and a cardinality restriction on the scalars. Certain extra axioms on top of the previous guarantee that a dagger symmetric monoidal category embeds into the category of Hilbert spaces, namely if every dagger monic is a dagger kernel. In that case the scalars form an involutive field instead of just embedding in one. If the category is compact, the embedding lands in finite-dimensional Hilbert spaces. Six axioms characterize the category of Hilbert spaces completely, fulfilling the reconstruction programme. Two of these axioms concern a dagger and a tensor product, a third concerns biproducts. Special dagger commutative Frobenius algebras in the category of sets and relations correspond to discrete abelian groupoids. Finding complementary basis structures in the category of sets and relations corresponds to solving combinatorical problems involving Latin squares. Dagger commutative Frobenius algebras on qubits must be either special or antispecial, relating to the fact that maximally entangled tripartite states are SLOCC-equivalent to either the GHZ or the W state. Categorical quantum mechanics as logic Categorical quantum mechanics can also be seen as a type theoretic form of quantum logic that, in contrast to traditional quantum logic, supports formal deductive reasoning. There exists software that supports and automates this reasoning. There is another connection between categorical quantum mechanics and quantum logic, as subobjects in dagger kernel categories and dagger complemented biproduct categories form orthomodular lattices. In fact, the former setting allows logical quantifiers, the existence of which was never satisfactorily addressed in traditional quantum logic. 
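The claim that composite gates such as CNOT arise from more basic algebraic generators can be checked concretely with ordinary linear algebra. The following Python snippet is an illustrative verification, assuming the standard matrix interpretation of a phase-free Z-spider (copy in the computational basis) and X-spider (XOR, up to a factor of 1/√2); it is not an implementation of the ZX-calculus itself or of any particular library.

```python
import numpy as np

# Z-spider with 1 input and 2 outputs: copies computational-basis states, |0> -> |00>, |1> -> |11>.
Z_spider = np.zeros((4, 2))
Z_spider[0, 0] = 1.0
Z_spider[3, 1] = 1.0

# X-spider with 2 inputs and 1 output: |a, b> -> |a XOR b> / sqrt(2) in the computational basis.
X_spider = np.zeros((2, 4))
for a in (0, 1):
    for b in (0, 1):
        X_spider[a ^ b, 2 * a + b] = 1.0 / np.sqrt(2)

I2 = np.eye(2)
# Wire the second leg of the Z-spider (the copy of the control) into the X-spider on the target wire.
composite = np.kron(I2, X_spider) @ np.kron(Z_spider, I2)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Up to the overall scalar sqrt(2), the spider composite is exactly the CNOT gate.
assert np.allclose(np.sqrt(2) * composite, CNOT)
print("CNOT = sqrt(2) x (Z-spider ; X-spider) verified")
```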
Categorical quantum mechanics as foundation for quantum mechanics Categorical quantum mechanics allows a description of more general theories than quantum theory. This enables one to study which features single out quantum theory in contrast to other non-physical theories, hopefully providing some insight into the nature of quantum theory. For example, the framework allows a succinct compositional description of Spekkens' toy theory that allows one to pinpoint which structural ingredient causes it to be different from quantum theory. Categorical quantum mechanics and DisCoCat The DisCoCat framework applies categorical quantum mechanics to natural language processing. The types of a pregroup grammar are interpreted as quantum systems, i.e. as objects of a dagger compact category. The grammatical derivations are interpreted as quantum processes, e.g. a transitive verb takes its subject and object as input and produces a sentence as output. Function words such as determiners, prepositions, relative pronouns, coordinators, etc. can be modeled using the same Frobenius algebras that model classical communication. This can be understood as a monoidal functor from grammar to quantum processes, a formal analogy which led to the development of quantum natural language processing. See also ZX-calculus String diagram Applied category theory Quantum foundations References Quantum mechanics Category theory Dagger categories Monoidal categories
Categorical quantum mechanics
[ "Physics", "Mathematics" ]
1,579
[ "Functions and mappings", "Mathematical structures", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Monoidal categories", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Dagger categories" ]
43,374,401
https://en.wikipedia.org/wiki/Magnetomechanical%20effects
The magnetomechanical effect is a fundamental feature of ferromagnetism: the application of external stresses alters the flux density of a magnetized ferromagnet, and thus the shape and size of its hysteresis loops. Simply put, it is the phenomenon of changing the magnetic properties of ferromagnetic materials by applying external stresses. Magnetomechanical effects connect magnetic, mechanical and electric phenomena in solid materials. Magnetostriction Inverse magnetostrictive effect Wiedemann effect Matteucci effect Guillemin effect Magnetostriction is thermodynamically the converse of the inverse magnetostrictive effect. The same relationship holds between the Wiedemann and Matteucci effects. For magnetic, mechanical and electric phenomena in fluids see Magnetohydrodynamics and Electrohydrodynamics. See also Magnetocrystalline anisotropy Magnetism Magnetic ordering
Magnetomechanical effects
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
195
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
43,374,450
https://en.wikipedia.org/wiki/Guillemin%20effect
Guillemin effect is one of the magnetomechanical effects. It is connected with the tendency of a previously bent rod, made of magnetostrictive material, to be straightened, when subjected to magnetic field applied in the direction of rod's axis. See also Magnetomechanical effects Magnetostriction Magnetocrystalline anisotropy Magnetic ordering
Guillemin effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
77
[ "Materials science stubs", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Condensed matter physics", "Electromagnetism stubs" ]
43,377,867
https://en.wikipedia.org/wiki/Iknife
An onkoknife, iKnife, or intelligent scalpel is a surgical knife that tests tissue as it contacts it during an operation and immediately gives information as to whether that tissue contains cancer cells. During surgery this information is given continuously to the surgeon, significantly accelerating biological tissue analysis and enabling identification and removal of cancer cells. Electroknives have been in use since the 1920s, and smart-knife surgery is not limited to cancer detection. In clinical studies the iKnife has shown impressive diagnostic accuracy: it distinguished benign ovarian tissue from cancerous tissue (97.4% sensitivity, 100% specificity) and breast tumour from normal breast tissue (90.9% sensitivity, 98.8% specificity), and it recognises histological features of poor prognostic outcome in colorectal carcinoma. Furthermore, the technology behind the iKnife, rapid evaporative ionisation mass spectrometry (REIMS), can identify Candida yeasts down to the species level. Research and development Zoltán Takáts, Ph.D., a Hungarian research chemist associated with Semmelweis University in Budapest, invented the intelligent surgical knife. He is currently Professor of Analytical Chemistry at Imperial College London (UK). The iKnife was tested in three hospitals from 2010 through 2012. Laboratory analysis of tissue samples from 302 patients was compiled into a database comprising 1,624 cancer and 1,309 non-cancer samples. The pilot version of the iKnife cost the Hungarian scientist who created it, MediMass Ltd. (an Old Buda-based company participating in the research), colleagues at Imperial College, and the Hungarian government approximately £200 thousand (68 million HUF). According to Takáts, however, the investment will have been worth it, as the device is on a likely path to market. The technology developed by MediMass Ltd. has been acquired by the Massachusetts-based Waters Corporation, which identifies it as substantive innovative technology labelled "intelligent knife" and "REIMS", according to its press release of 23 July 2014. The business transaction included all MediMass innovation, including patents, software, databases, and human resources related to the technology. Principle of operation History of direct examination of biological tissue by mass spectrometry (MS) Direct examination of biological tissue by mass spectrometry (MS) began in the 1970s, but the technical conditions of the time were not yet adequate, and the method did not provide any useful information on the chemical composition of the samples tested. The first breakthrough came with desorption ionisation methods (secondary ionization mass spectrometry, SIMS, and matrix-assisted laser desorption ionization, MALDI). Using these methods, after appropriate sample preparation, chemical imaging analysis of biological tissue can be achieved. From the end of the 1990s, it became apparent that mass spectrometry data in imaging studies showed a high degree of tissue specificity: tissue histology could be inferred from mass spectral information, and vice versa. For the detected protein and peptide components this was expected, since tissue-specific expression of proteins is a commonly known phenomenon, on which precise immunohistochemical methods are based. Mass spectrometric detection of complex lipids, originating mainly from cell membranes, however, yields surprising results. 
Since the protein distributions obtained this way are in good agreement with the distribution patterns obtained by immunohistochemical methods, while the distributions of the lipid components measured by direct ionisation mass spectrometry had no counterpart in earlier methods, these techniques marked the beginning of a new era in the study of biological specimens. Desorption electrospray ionization (DESI) was the first MS technique that allowed non-invasive examination of essentially any object (or organism) without sample preparation, regardless of its shape or mechanical properties. Rapid evaporative ionization mass spectrometry Rapid evaporative ionization mass spectrometry (REIMS), a second-generation method, was first described during the summer of 2009. The information is provided primarily by the lipid components of tissues, but various metabolite molecules and certain proteins can also be detected. The most important advantage is that the specificity of the mass spectrometric data is at the histological level, providing the opportunity to identify biological tissue on the basis of its chemical composition. The REIMS method is unique in that, whereas the mass spectrometry techniques described above require ion sources developed specifically for each method, REIMS can use the devices already employed in surgical practice as its ion source. When a variety of tissue-cutting tools are operated, such as a diathermy knife, a surgical laser, or an ultrasonic tissue atomizer, an aerosol is formed whose composition is characteristic of the tissue being cut and which also contains ionized cell constituents. Among these, the intact membrane-forming phospholipids are the most important for the REIMS method: they are easily detectable by mass spectrometry, and their combination is characteristic of the particular tissue type. For the mass spectrometric analysis, an effective extraction system had to be developed to transfer the aerosol generated at the surgical cutting site to the mass spectrometer. A so-called Venturi tube serves this purpose, and the surgical handpieces mentioned above are modified so that the aerosol (surgical smoke) is drawn through them. Analysis of this aspirated gas in the mass spectrometer takes place practically instantaneously, within a few tenths of a second; a tissue-specific phospholipid mass spectrum is obtained, and a response can be given to the surgeon in less than two seconds. The collected spectra are analysed by evaluation software developed specifically for this purpose. During surgery the software continuously compares the incoming data with validated mass spectra stored in a database, assigns the appropriate tissue class, and displays the result visually to the surgeon; it may also provide the information via an audio signal. The tissue identification accuracy during an operation is estimated to be higher than 92%. The method is therefore suitable for carrying out measurements in a surgical environment and for forming part of a complex tissue identification system used during surgical tumor removal, assisting the surgeon with accurate histological mapping of the operating site. In summary, rapid evaporative ionization mass spectrometry (REIMS) is a novel technique that combines electrosurgical cutting with near real-time, in vivo characterization of human tissue through analysis of the vapors and aerosols released as the tissue is cut.
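The evaluation software described above compares each incoming spectrum against validated reference spectra in a database and assigns a tissue class. The exact algorithm used by the iKnife software is not specified here; the following is only a schematic nearest-reference sketch using cosine similarity, with invented data, to illustrate the kind of comparison involved:

```python
import numpy as np

def classify_spectrum(spectrum, reference_library):
    """Assign a tissue class by cosine similarity to per-class mean spectra.

    spectrum: 1-D intensity vector binned on a common m/z axis.
    reference_library: dict mapping class name -> mean reference spectrum.
    Schematic only; not the actual iKnife classification software.
    """
    s = spectrum / np.linalg.norm(spectrum)
    scores = {}
    for tissue_class, ref in reference_library.items():
        r = ref / np.linalg.norm(ref)
        scores[tissue_class] = float(np.dot(s, r))
    best = max(scores, key=scores.get)
    return best, scores

# Toy example with a 5-bin "spectrum".
library = {
    "healthy": np.array([1.0, 0.2, 0.1, 0.8, 0.3]),
    "tumour":  np.array([0.1, 0.9, 0.7, 0.1, 0.6]),
}
label, scores = classify_spectrum(np.array([0.15, 0.85, 0.65, 0.2, 0.55]), library)
print(label, scores)
```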
The combination of REIMS technology with the electrosurgical procedure adds real-time tissue diagnosis to electrosurgery; this combination is the operating principle of the intelligent knife (iKnife). See also Instruments used in general surgery References External links Cancer Research UK: An intelligent knife can tell ovarian cancer and healthy tissue apart. Could it make surgery smarter? "Intelligent knife" tells surgeon if tissue is cancerous by Sam Wong Surgical Knife May Sniff Out Cancer by Tanya Lewis, Staff Writer, October 10, 2013 Heath, Nick, The Intelligent knife that helps surgeons sniff out cancer, European Technology, November 26, 2014, distributed in TechRepublic Daily Digest, TechRepublic.com, November 27, 2014 https://web.archive.org/web/20140322224442/http://www.doublexscience.org/iknife-excises-uncertainty-in-tumor-resection/ Surgical instruments Mass spectrometry
Iknife
[ "Physics", "Chemistry" ]
1,570
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
43,380,079
https://en.wikipedia.org/wiki/Copper%28II%29%20perchlorate
Copper(II) perchlorate is an inorganic compound with the chemical formula Cu(ClO4)2. The anhydrous solid is rarely encountered, but several hydrates are known. Most important is copper(II) perchlorate hexahydrate, [Cu(H2O)6](ClO4)2, the perchlorate salt of the hexaaqua copper(II) complex. Infrared spectroscopic studies of anhydrous copper(II) perchlorate provided some of the first evidence for the binding of the perchlorate anion to a metal ion. The structure of this compound was eventually deduced by X-ray crystallography: copper resides in a distorted octahedral environment, and the perchlorate ligands bridge between the Cu(II) centers. Safety Like other perchlorates, copper(II) perchlorate is a strong oxidant. References Copper(II) compounds Perchlorates
Copper(II) perchlorate
[ "Chemistry" ]
172
[ "Inorganic compounds", "Perchlorates", "Inorganic compound stubs", "Salts" ]
49,394,107
https://en.wikipedia.org/wiki/Delta-convergence
In mathematics, Delta-convergence, or Δ-convergence, is a mode of convergence in metric spaces, weaker than the usual metric convergence, and similar to (but distinct from) the weak convergence in Banach spaces. In Hilbert space, Delta-convergence and weak convergence coincide. For a general class of spaces, similarly to weak convergence, every bounded sequence has a Delta-convergent subsequence. Delta-convergence was first introduced by Teck-Cheong Lim, and, soon after, under the name of almost convergence, by Tadeusz Kuczumow. Definition A sequence (x_k) in a metric space (X, d) is said to be Δ-convergent to x ∈ X if for every y ∈ X, lim sup_k (d(x_k, x) − d(x_k, y)) ≤ 0. Characterization in Banach spaces If X is a uniformly convex and uniformly smooth Banach space, with the duality mapping J : X → X* given by ⟨J(x), x⟩ = ‖x‖² and ‖J(x)‖ = ‖x‖, then a sequence (x_k) ⊂ X is Delta-convergent to x if and only if J(x_k − x) converges to zero weakly in the dual space X*. In particular, Delta-convergence and weak convergence coincide if X is a Hilbert space. Opial property Coincidence of weak convergence and Delta-convergence is equivalent, for uniformly convex Banach spaces, to the well-known Opial property. Delta-compactness theorem The Delta-compactness theorem of T. C. Lim states that if (X, d) is an asymptotically complete metric space, then every bounded sequence in X has a Delta-convergent subsequence. The Delta-compactness theorem is similar to the Banach–Alaoglu theorem for weak convergence but, unlike the Banach–Alaoglu theorem (in the non-separable case), its proof does not depend on the Axiom of Choice. Asymptotic center and asymptotic completeness An asymptotic center of a sequence (x_k), if it exists, is a limit of the Chebyshev centers of the truncated sequences (x_k)_{k ≥ n} as n → ∞. A metric space is called asymptotically complete if every bounded sequence in it has an asymptotic center. Uniform convexity as sufficient condition of asymptotic completeness The condition of asymptotic completeness in the Delta-compactness theorem is satisfied by uniformly convex Banach spaces and, more generally, by uniformly rotund metric spaces as defined by J. Staples. Further reading William Kirk, Naseer Shahzad, Fixed point theory in distance spaces. Springer, Cham, 2014. xii+173 pp. G. Devillanova, S. Solimini, C. Tintarev, On weak convergence in metric spaces, Nonlinear Analysis and Optimization (B. S. Mordukhovich, S. Reich, A. J. Zaslavski, Editors), 43–64, Contemporary Mathematics 659, AMS, Providence, RI, 2016. References Theorems in functional analysis Nonlinear functional analysis Convergence (mathematics)
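The coincidence with weak convergence in Hilbert space, asserted above, can be checked directly from the definition by expanding the squared distances. This short verification is added here for illustration and is not part of the source article:

```latex
% Let H be a Hilbert space, (x_k) a bounded sequence, and x, y \in H.
\[
  d(x_k, y)^2 = \|x_k - x\|^2 + 2\langle x_k - x,\; x - y\rangle + \|x - y\|^2 ,
\]
\[
  d(x_k, x)^2 - d(x_k, y)^2 = -2\langle x_k - x,\; x - y\rangle - \|x - y\|^2 .
\]
% If x_k converges to x weakly, the inner-product term tends to 0, so the
% right-hand side tends to -\|x - y\|^2 \le 0. Dividing by the bounded,
% positive quantity d(x_k, x) + d(x_k, y) (for y \ne x) gives
\[
  \limsup_{k \to \infty} \bigl( d(x_k, x) - d(x_k, y) \bigr) \le 0
  \quad \text{for every } y \in H ,
\]
% i.e. (x_k) is \Delta-convergent to x. For bounded sequences the converse
% also holds, so the two notions coincide in Hilbert space.
```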
Delta-convergence
[ "Mathematics" ]
571
[ "Sequences and series", "Theorems in mathematical analysis", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Mathematical objects", "Theorems in functional analysis", "Mathematical relations" ]
49,394,912
https://en.wikipedia.org/wiki/Synthetic%20Biology%20Open%20Language
The Synthetic Biology Open Language (SBOL) is a proposed data standard for exchanging synthetic biology designs between software packages. It has been under development by the SBOL Developers Group since 2008. This group aims to develop the standard in a way that is open and democratic, in order to include as many interests as possible and to avoid domination by a single company. The group also aims to develop and improve the standard over time, as the field of synthetic biology itself develops. A graphical modeling language called SBOL Visual has also been created to visualize SBOL designs. Releases References Biocybernetics Bioinformatics Systems biology
Synthetic Biology Open Language
[ "Engineering", "Biology" ]
127
[ "Synthetic biology", "Biological engineering", "Bioinformatics", "Molecular genetics", "Systems biology" ]
49,396,186
https://en.wikipedia.org/wiki/First%20observation%20of%20gravitational%20waves
The first direct observation of gravitational waves was made on 14 September 2015 and was announced by the LIGO and Virgo collaborations on 11 February 2016. Previously, gravitational waves had been inferred only indirectly, via their effect on the timing of pulsars in binary star systems. The waveform, detected by both LIGO observatories, matched the predictions of general relativity for a gravitational wave emanating from the inward spiral and merger of two black holes (of about 36 and 29 solar masses) and the subsequent ringdown of a single, 62-solar-mass black hole remnant. The signal was named GW150914 (from gravitational wave and the date of observation 2015-09-14). It was also the first observation of a binary black hole merger, demonstrating both the existence of binary stellar-mass black hole systems and the fact that such mergers could occur within the current age of the universe. This first direct observation was reported around the world as a remarkable accomplishment for many reasons. Efforts to directly prove the existence of such waves had been ongoing for over fifty years, and the waves are so minuscule that Albert Einstein himself doubted that they could ever be detected. The waves given off by the cataclysmic merger of GW150914 reached Earth as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. The energy released by the binary as it spiralled together and merged was immense, with the equivalent of about three solar masses of energy (roughly 5 × 10^47 joules, or several thousand foes) in total radiated as gravitational waves, reaching a peak emission rate in its final few milliseconds of about 3.6 × 10^49 watts – a level greater than the combined power of all light radiated by all the stars in the observable universe. The observation confirmed the last remaining directly undetected prediction of general relativity and corroborated its predictions of space-time distortion in the context of large scale cosmic events (known as strong field tests). It was heralded as inaugurating a new era of gravitational-wave astronomy, which enables observations of violent astrophysical events that were not previously possible and allows for the direct observation of the earliest history of the universe. On 15 June 2016, two more detections of gravitational waves, made in late 2015, were announced. Eight more observations were made in 2017, including GW170817, the first observed merger of binary neutron stars, which was also observed in electromagnetic radiation. Gravitational waves Albert Einstein predicted the existence of gravitational waves in 1916, on the basis of his theory of general relativity. General relativity interprets gravity as a consequence of distortions in spacetime caused by the presence of mass, and further entails that certain movements or acceleration of these masses will cause distortions – or "ripples" – in spacetime which spread outward from the source at the speed of light. Einstein considered this mostly a curiosity, since he understood that these ripples would be far too minuscule to detect using any technology foreseen at that time. As a further consequence following from the conservation of energy, the energy radiated away by gravitational waves from a system of two objects in mutual orbit would cause them to slowly spiral inwards, although again, this effect would be extremely minute and thus challenging to observe.
One case where gravitational waves would be strongest is during the final moments of the merger of two compact objects such as neutron stars or black holes. Over a span of millions of years, binary neutron stars, and binary black holes lose energy, largely through gravitational waves, and as a result, they spiral in towards each other. At the very end of this process, the two objects will reach extreme velocities, and in the final fraction of a second of their merger a substantial amount of their mass would theoretically be converted into gravitational energy, and travel outward as gravitational waves, allowing a greater than usual chance for detection. However, since little was known about the number of compact binaries in the universe and reaching that final stage can be very slow, there was little certainty as to how often such events might happen. Observation Gravitational waves can be detected indirectly – by observing celestial phenomena caused by gravitational waves – or more directly by means of instruments such as the Earth-based LIGO or the planned space-based LISA instrument. Indirect observation Evidence of gravitational waves was first deduced in 1974 through the motion of the double neutron star system PSR B1913+16, in which one of the stars is a pulsar that emits electro-magnetic pulses at radio frequencies at precise, regular intervals as it rotates. Russell Hulse and Joseph Taylor, who discovered the stars, also showed that over time, the frequency of pulses shortened, and that the stars were gradually spiralling towards each other with an energy loss that agreed closely with the predicted energy that would be radiated by gravitational waves. For this work, Hulse and Taylor were awarded the Nobel Prize in Physics in 1993. Further observations of this pulsar and others in multiple systems (such as the double pulsar system PSR J0737-3039) also agree with General Relativity to high precision. Direct observation Direct observation of gravitational waves was not possible for many decades following their prediction, due to the minuscule effect that would need to be detected and separated from the background of vibrations present everywhere on Earth. A technique called interferometry was suggested in the 1960s and eventually technology developed sufficiently for this technique to become feasible. In the present approach used by LIGO, a laser beam is split and the two halves are recombined after traveling different paths. Changes to the length of the paths or the time taken for the two split beams, caused by the effect of passing gravitational waves, to reach the point where they recombine are revealed as "beats". Such a technique is extremely sensitive to tiny changes in the distance or time taken to traverse the two paths. In theory, an interferometer with arms about 4 km long would be capable of revealing the change of space-time – a tiny fraction of the size of a single proton – as a gravitational wave of sufficient strength passed through Earth from elsewhere. This effect would be perceptible only to other interferometers of a similar size, such as the Virgo, GEO 600 and planned KAGRA and INDIGO detectors. In practice at least two interferometers would be needed because any gravitational wave would be detected at both of these, but other kinds of disturbances would generally not be present at both. This technique allows the sought-after signal to be distinguished from noise. 
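The scale of the length change described above can be illustrated with a back-of-the-envelope calculation, assuming a peak strain of about 1 × 10^-21, an order of magnitude commonly quoted for this event and used here as an assumption rather than a figure taken from the text:

```python
# Order-of-magnitude sketch: length change of an interferometer arm under a
# gravitational-wave strain h, using Delta_L = h * L.
h = 1.0e-21                # assumed peak strain (order of magnitude only)
L_arm = 4.0e3              # physical LIGO arm length in metres
proton_diameter = 1.7e-15  # metres, approximate

delta_L = h * L_arm
print(f"Delta L = {delta_L:.1e} m "
      f"= {delta_L / proton_diameter:.1e} proton diameters")
# ~4e-18 m, i.e. a few thousandths of a proton diameter over a 4 km arm.
```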
This project was eventually founded in 1992 as the Laser Interferometer Gravitational-Wave Observatory (LIGO). The original instruments were upgraded between 2010 and 2015 (to Advanced LIGO), giving an increase of around 10 times their original sensitivity. LIGO operates two gravitational-wave observatories in unison, located apart: the LIGO Livingston Observatory () in Livingston, Louisiana, and the LIGO Hanford Observatory, on the DOE Hanford Site () near Richland, Washington. The tiny shifts in the length of their arms are continually compared and significant patterns which appear to arise synchronously are followed up to determine whether a gravitational wave may have been detected or if some other cause was responsible. Initial LIGO operations between 2002 and 2010 did not detect any statistically significant events that could be confirmed as gravitational waves. This was followed by a multi-year shut-down while the detectors were replaced by much improved "Advanced LIGO" versions.  In February 2015, the two advanced detectors were brought into engineering mode, in which the instruments are operating fully for the purpose of testing and confirming they are functioning correctly before being used for research, with formal science observations due to begin on 18 September 2015. Throughout the development and initial observations by LIGO, several "blind injections" of fake gravitational wave signals were introduced to test the ability of the researchers to identify such signals. To protect the efficacy of blind injections, only four LIGO scientists knew when such injections occurred, and that information was revealed only after a signal had been thoroughly analyzed by researchers. On 14 September 2015, while LIGO was running in engineering mode but without any blind data injections, the instrument reported a possible gravitational wave detection. The detected event was given the name GW150914. GW150914 event Event detection GW150914 was detected by the LIGO detectors in Hanford, Washington state, and Livingston, Louisiana, USA, at 9:50:45 UTC on 14 September 2015. The LIGO detectors were operating in "engineering mode", meaning that they were operating fully but had not yet begun a formal "research" phase (which was due to commence three days later on 18 September), so initially there was a question as to whether the signals had been real detections or simulated data for testing purposes before it was ascertained that they were not tests. The chirp signal lasted over 0.2 seconds, and increased in frequency and amplitude in about 8 cycles from 35 Hz to 250 Hz. The signal is in the audible range and has been described as resembling the "chirp" of a bird; astrophysicists and other interested parties the world over excitedly responded by imitating the signal on social media upon the announcement of the discovery. (The frequency increases because each orbit is noticeably faster than the one before during the final moments before merging.) The trigger that indicated a possible detection was reported within three minutes of acquisition of the signal, using rapid ('online') search methods that provide a quick, initial analysis of the data from the detectors. After the initial automatic alert at 9:54 UTC, a sequence of internal emails confirmed that no scheduled or unscheduled injections had been made, and that the data looked clean. After this, the rest of the collaborating team was quickly made aware of the tentative detection and its parameters. 
More detailed statistical analysis of the signal, and of 16 days of surrounding data from 12 September to 20 October 2015, identified GW150914 as a real event, with an estimated significance of at least 5.1 sigma or a confidence level of 99.99994%. Corresponding wave peaks were seen at Livingston seven milliseconds before they arrived at Hanford. Gravitational waves propagate at the speed of light, and the disparity is consistent with the light travel time between the two sites. The waves had traveled at the speed of light for more than a billion years. At the time of the event, the Virgo gravitational wave detector (near Pisa, Italy) was offline and undergoing an upgrade; had it been online it would likely have been sensitive enough to also detect the signal, which would have greatly improved the positioning of the event. GEO600 (near Hannover, Germany) was not sensitive enough to detect the signal. Consequently, neither of those detectors was able to confirm the signal measured by the LIGO detectors. Astrophysical origin The event happened at a luminosity distance of about 410 megaparsecs (determined by the amplitude of the signal), or about 1.3 billion light years, corresponding to a cosmological redshift of about 0.09 (90% credible intervals). Analysis of the signal along with the inferred redshift suggested that it was produced by the merger of two black holes with masses of about 36 times and 29 times the mass of the Sun (in the source frame), resulting in a post-merger black hole of about 62 solar masses. The mass–energy of the missing 3 solar masses was radiated away in the form of gravitational waves. During the final 20 milliseconds of the merger, the power of the radiated gravitational waves peaked at about 3.6 × 10^49 watts (526 dBm) – 50 times greater than the combined power of all light radiated by all the stars in the observable universe. The amount of this energy that was received by the entire planet Earth was about 36 billion joules, of which only a small amount was absorbed. Across the 0.2-second duration of the detectable signal, the relative tangential (orbiting) velocity of the black holes increased from 30% to 60% of the speed of light. The orbital frequency of 75 Hz (half the gravitational wave frequency) means that the objects were orbiting each other at a distance of only 350 km by the time they merged. The phase changes to the signal's polarization allowed calculation of the objects' orbital frequency, and taken together with the amplitude and pattern of the signal, allowed calculation of their masses and therefore their extreme final velocities and orbital separation (distance apart) when they merged. That information showed that the objects had to be black holes, as any other kind of known objects with these masses would have been physically larger and therefore merged before that point, or would not have reached such velocities in such a small orbit. The highest observed neutron star mass is 2 solar masses, with a conservative upper limit for the mass of a stable neutron star of 3 solar masses, so that a pair of neutron stars would not have had sufficient mass to account for the merger (unless exotic alternatives exist, for example, boson stars), while a black hole-neutron star pair would have merged sooner, resulting in a final orbital frequency that was not so high. The decay of the waveform after it peaked was consistent with the damped oscillations of a black hole as it relaxed to a final merged configuration.
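The roughly 350 km separation quoted above can be checked with a rough Newtonian estimate from Kepler's third law, using the 75 Hz orbital frequency and a total mass of about 65 solar masses; relativistic corrections are ignored in this sketch:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg

M_total = 65 * M_sun   # ~36 + 29 solar masses
f_orb = 75.0           # orbital frequency in Hz (half the gravitational-wave frequency)

# Kepler's third law: omega^2 * a^3 = G * M_total, with omega = 2*pi*f_orb.
omega = 2 * math.pi * f_orb
a = (G * M_total / omega**2) ** (1.0 / 3.0)
print(f"orbital separation ~ {a/1e3:.0f} km")   # ~340 km, consistent with the quoted 350 km
```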
Although the inspiral motion of compact binaries can be described well from post-Newtonian calculations, the strong gravitational field merger stage can only be solved in full generality by large-scale numerical relativity simulations. In the improved model and analysis, the post-merger object is found to be a rotating Kerr black hole with a spin parameter of , i.e. one with 2/3 of the maximum possible angular momentum for its mass. The two stars which formed the two black holes were likely formed about 2 billion years after the Big Bang with masses of between 40 and 100 times the mass of the Sun. Location in the sky Gravitational wave instruments are whole-sky monitors with little ability to resolve signals spatially. A network of such instruments is needed to locate the source in the sky through triangulation. With only the two LIGO instruments in observational mode, GW150914's source location could only be confined to an arc on the sky. This was done via analysis of the ms time-delay, along with amplitude and phase consistency across both detectors. This analysis produced a credible region of 150 deg2 with a probability of 50% or 610 deg2 with a probability of 90% located mainly in the Southern Celestial Hemisphere, in the rough direction of (but much farther than) the Magellanic Clouds. For comparison, the area of the constellation Orion is 594 deg2. Coincident gamma-ray observation The Fermi Gamma-ray Space Telescope reported that its Gamma-Ray Burst Monitor (GBM) instrument detected a weak gamma-ray burst above 50 keV, starting 0.4 seconds after the LIGO event and with a positional uncertainty region overlapping that of the LIGO observation. The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However a gamma ray burst would not have been expected, and observations from the INTEGRAL telescope's all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, which "excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer". If the signal observed by the Fermi GBM was genuinely astrophysical, INTEGRAL would have indicated a clear detection at a significance of 15 sigma above background radiation. The AGILE space telescope also did not detect a gamma-ray counterpart of the event. A follow-up analysis by an independent group, released in June 2016, developed a different statistical approach to estimate the spectrum of the gamma-ray transient. It concluded that Fermi GBM's data did not show evidence of a gamma ray burst, and was either background radiation or an Earth albedo transient on a 1-second timescale. A rebuttal of this follow-up analysis, however, pointed out that the independent group misrepresented the analysis of the original Fermi GBM Team paper and therefore misconstrued the results of the original analysis. The rebuttal reaffirmed that the false coincidence probability is calculated empirically and is not refuted by the independent analysis. Black hole mergers of the type thought to have produced the gravitational wave event are not expected to produce gamma-ray bursts, as stellar-mass black hole binaries are not expected to have large amounts of orbiting matter. 
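Returning to the sky localisation described earlier, the arrival-time difference between two detectors constrains the angle between the source direction and the line joining the sites, placing the source on a ring on the sky. The sketch below assumes a detector separation of roughly 3,000 km, a commonly quoted figure that does not appear in the text above, together with the 7 ms delay:

```python
import math

c = 2.998e8            # speed of light, m/s
baseline = 3.0e6       # assumed Hanford-Livingston separation, ~3,000 km
dt = 7.0e-3            # observed arrival-time difference, s

# The delay fixes cos(theta), where theta is the angle between the source
# direction and the inter-detector baseline; the source lies on a ring (cone)
# of that opening angle on the sky.
cos_theta = c * dt / baseline
theta = math.degrees(math.acos(cos_theta))
print(f"cos(theta) = {cos_theta:.2f}, ring opening angle ~ {theta:.0f} degrees")
```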
Avi Loeb has theorised that if a massive star is rapidly rotating, the centrifugal force produced during its collapse will lead to the formation of a rotating bar that breaks into two dense clumps of matter with a dumbbell configuration that becomes a black hole binary, and at the end of the star's collapse it triggers a gamma-ray burst. Loeb suggests that the 0.4 second delay is the time it took the gamma-ray burst to cross the star, relative to the gravitational waves. Other follow-up observations The reconstructed source area was targeted by follow-up observations covering radio, optical, near infra-red, X-ray, and gamma-ray wavelengths along with searches for coincident neutrinos. However, because LIGO had not yet started its science run, notice to other telescopes was delayed. The ANTARES telescope detected no neutrino candidates within ±500 seconds of GW150914. The IceCube Neutrino Observatory detected three neutrino candidates within ±500 seconds of GW150914. One event was found in the southern sky and two in the northern sky. This was consistent with the expectation of background detection levels. None of the candidates were compatible with the 90% confidence area of the merger event. Although no neutrinos were detected, the lack of such observations provided a limit on neutrino emission from this type of gravitational wave event. Observations by the Swift Gamma-Ray Burst Mission of nearby galaxies in the region of the detection, two days after the event, did not detect any new X-ray, optical or ultraviolet sources. Announcement The announcement of the detection was made on 11 February 2016 at a news conference in Washington, D.C. by David Reitze, the executive director of LIGO, with a panel comprising Gabriela González, Rainer Weiss and Kip Thorne, of LIGO, and France A. Córdova, the director of NSF. Barry Barish delivered the first presentation on this discovery to a scientific audience simultaneously with the public announcement. The initial announcement paper was published during the news conference in Physical Review Letters, with further papers either published shortly afterwards or immediately available in preprint form. Awards and recognition In May 2016, the full collaboration, and in particular Ronald Drever, Kip Thorne, and Rainer Weiss, received the Special Breakthrough Prize in Fundamental Physics for the observation of gravitational waves. Drever, Thorne, Weiss, and the LIGO discovery team also received the Gruber Prize in Cosmology. Drever, Thorne, and Weiss were also awarded the 2016 Shaw Prize in Astronomy and the 2016 Kavli Prize in Astrophysics. Barish was awarded the 2016 Enrico Fermi Prize from the Italian Physical Society (Società Italiana di Fisica). In January 2017, LIGO spokesperson Gabriela González and the LIGO team were awarded the 2017 Bruno Rossi Prize. The 2017 Nobel Prize in Physics was awarded to Rainer Weiss, Barry Barish and Kip Thorne "for decisive contributions to the LIGO detector and the observation of gravitational waves". Implications The observation was heralded as inaugurating a revolutionary era of gravitational-wave astronomy. Prior to this detection, astrophysicists and cosmologists were able to make observations based upon electromagnetic radiation (including visible light, X-rays, microwave, radio waves, gamma rays) and particle-like entities (cosmic rays, stellar winds, neutrinos, and so on). 
These have significant limitations – light and other radiation may not be emitted by many kinds of objects, and can also be obscured or hidden behind other objects. Objects such as galaxies and nebulae can also absorb, re-emit, or modify light generated within or behind them, and compact stars or exotic stars may contain material which is dark and radio silent, and as a result there is little evidence of their presence other than through their gravitational interactions. Expectations for detection of future binary merger events On 15 June 2016, the LIGO group announced an observation of another gravitational wave signal, named GW151226. The Advanced LIGO was predicted to detect five more black hole mergers like GW150914 in its next observing campaign from November 2016 until August 2017 (it turned out to be seven), and then 40 binary star mergers each year, in addition to an unknown number of more exotic gravitational wave sources, some of which may not be anticipated by current theory. Planned upgrades are expected to double the signal-to-noise ratio, expanding the volume of space in which events like GW150914 can be detected by a factor of ten. Additionally, Advanced Virgo, KAGRA, and a possible third LIGO detector in India will extend the network and significantly improve the position reconstruction and parameter estimation of sources. Laser Interferometer Space Antenna (LISA) is a proposed space based observation mission to detect gravitational waves. With the proposed sensitivity range of LISA, merging binaries like GW150914 would be detectable about 1000 years before they merge, providing for a class of previously unknown sources for this observatory if they exist within about 10 megaparsecs. LISA Pathfinder, LISA's technology development mission, was launched in December 2015 and it demonstrated that the LISA mission is feasible. A 2016 model predicted LIGO would detect approximately 1000 black hole mergers per year when it reached full sensitivity following upgrades. Lessons for stellar evolution and astrophysics The masses of the two pre-merger black holes provide information about stellar evolution. Both black holes were more massive than previously discovered stellar-mass black holes, which were inferred from X-ray binary observations. This implies that the stellar winds from their progenitor stars must have been relatively weak, and therefore that the metallicity (mass fraction of chemical elements heavier than hydrogen and helium) must have been less than about half the solar value. The fact that the pre-merger black holes were present in a binary star system, as well as the fact that the system was compact enough to merge within the age of the universe, constrains either binary star evolution or dynamical formation scenarios, depending on how the black hole binary was formed. A significant number of black holes must receive low natal kicks (the velocity a black hole gains at its formation in a core-collapse supernova event), otherwise the black hole forming in a binary star system would be ejected and an event like GW would be prevented. The survival of such binaries, through common envelope phases of high rotation in massive progenitor stars, may be necessary for their survival. The majority of the latest black hole model predictions comply with these added constraints. 
The discovery of the GW merger event increases the lower limit on the rate of such events, and rules out certain theoretical models that predicted very low rates of less than 1 Gpc−3yr−1 (one event per cubic gigaparsec per year). Analysis resulted in lowering the previous upper limit rate on events like GW150914 from ~140 Gpc−3yr−1 to  Gpc−3yr−1. Impact on future cosmological observation Measurement of the waveform and amplitude of the gravitational waves from a black hole merger event makes accurate determination of its distance possible. The accumulation of black hole merger data from cosmologically distant events may help to create more precise models of the history of the expansion of the universe and the nature of the dark energy that influences it. The earliest universe is opaque since the cosmos was so energetic then that most matter was ionized and photons were scattered by free electrons. However, this opacity would not affect gravitational waves from that time, so if they occurred at levels strong enough to be detected at this distance, it would allow a window to observe the cosmos beyond the current visible universe. Gravitational-wave astronomy therefore may some day allow direct observation of the earliest history of the universe. Tests of general relativity The inferred fundamental properties, mass and spin, of the post-merger black hole were consistent with those of the two pre-merger black holes, following the predictions of general relativity. This is the first test of general relativity in the very strong-field regime. No evidence could be established against the predictions of general relativity. The opportunity was limited in this signal to investigate the more complex general relativity interactions, such as tails produced by interactions between the gravitational wave and curved space-time background. Although a moderately strong signal, it is much smaller than that produced by binary-pulsar systems. In the future stronger signals, in conjunction with more sensitive detectors, could be used to explore the intricate interactions of gravitational waves as well as to improve the constraints on deviations from general relativity. Speed of gravitational waves and limit on possible mass of graviton The speed of gravitational waves (vg) is predicted by general relativity to be the speed of light (c). The extent of any deviation from this relationship can be parameterized in terms of the mass of the hypothetical graviton. The graviton is the name given to an elementary particle that would act as the force carrier for gravity, in quantum theories about gravity. It is expected to be massless if, as it appears, gravitation has an infinite range. (This is because the more massive a gauge boson is, the shorter is the range of the associated force; as with the infinite range of electromagnetism, which is due to the massless photon, the infinite range of gravity implies that any associated force-carrying particle would also be massless.) If the graviton were not massless, gravitational waves would propagate below lightspeed, with lower frequencies (ƒ) being slower than higher frequencies, leading to dispersion of the waves from the merger event. No such dispersion was observed. The observations of the inspiral slightly improve (lower) the upper limit on the mass of the graviton from Solar System observations to , corresponding to or a Compton wavelength (λg) of greater than km, roughly 1 light-year. 
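The connection between a graviton mass bound and the Compton wavelength of roughly one light-year quoted above follows from lambda_g = h / (m_g c). The sketch below assumes the commonly quoted bound of about 1.2 × 10^-22 eV/c^2, since the numerical value is not given in the text:

```python
# Compton wavelength of a (hypothetical) graviton: lambda_g = h / (m_g * c),
# equivalently lambda_g = h*c / E for a rest energy E expressed in joules.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # J per electronvolt

E_graviton = 1.2e-22 * eV        # assumed upper bound on the graviton rest energy
lam = h * c / E_graviton          # metres
light_year = 9.461e15             # metres

print(f"lambda_g > {lam:.2e} m = {lam/1e3:.2e} km = {lam/light_year:.2f} light-years")
# ~1e13 km, about one light-year, matching the order of magnitude quoted above.
```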
Using the lowest observed frequency of 35 Hz, this translates to a lower limit on vg such that the upper limit on 1-vg /c is ~ . See also List of gravitational wave observations Notes References Further reading External links GW150914 data release by the LIGO Open Science Center Gravitational wave modelling of GW150914 by the Max Planck Institute for Gravitational Physics Video: GW150914 discovery press conference (71:29) by the National Science Foundation (11 February 2016) Video: "The hunters – the detection of gravitational waves" (11:47) by the Max Planck Institute for Gravitational Physics (22 February 2016) Video: "LIGO Hears Gravitational Waves Einstein Predicted" (4:36) by Dennis Overbye, The New York Times (11 February 2016) 2015 in science 2016 in science 2015 in outer space 2016 in outer space Binary stars Experimental physics General relativity Gravitational-wave astronomy Gravitational waves Science and technology in Germany Science and technology in Italy Science and technology in the United States Scientific observation Stellar black holes Articles containing video clips September 2015
First observation of gravitational waves
[ "Physics", "Astronomy" ]
5,704
[ "Physical phenomena", "Black holes", "Stellar black holes", "Unsolved problems in physics", "Astrophysics", "General relativity", "Waves", "Experimental physics", "Theory of relativity", "Gravitational waves", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
49,399,383
https://en.wikipedia.org/wiki/SAMPL%20Challenge
SAMPL (Statistical Assessment of the Modeling of Proteins and Ligands) is a set of community-wide blind challenges aimed at advancing computational techniques as standard predictive tools in rational drug design. A broad range of biologically relevant systems with different sizes and levels of complexity, including proteins, host–guest complexes, and drug-like small molecules, have been selected to test the latest modeling methods and force fields in SAMPL. New experimental data, such as binding affinities and hydration free energies, are withheld from participants until the prediction submission deadline, so that the true predictive power of methods can be revealed. The SAMPL5 challenge, for example, contained two prediction categories: the binding affinity of host–guest systems, and the distribution coefficients of drug-like molecules between water and cyclohexane. Since 2008, the SAMPL challenge series has attracted interest from scientists engaged in the field of computer-aided drug design (CADD). The current SAMPL organizers include John Chodera, Michael K. Gilson, David Mobley, and Michael Shirts. Project significance The SAMPL challenge seeks to accelerate progress in developing quantitative, accurate drug discovery tools by providing prospective validation and rigorous comparisons for computational methodologies and force fields. Computer-aided drug design methods have improved considerably over time, along with the rapid growth of high-performance computing capabilities. However, their applicability in the pharmaceutical industry is still highly limited, due to insufficient accuracy. Lacking large-scale prospective validations, methods tend to suffer from over-fitting the pre-existing experimental data. To overcome this, SAMPL challenges have been organized as blind tests: each time, new datasets are carefully designed and collected from academic or industrial research laboratories, and the measurements are released shortly after the prediction submission deadline. Researchers can then compare those high-quality, prospective experimental data with the submitted estimates. A key emphasis is on lessons learned, allowing participants in future challenges to benefit from modeling improvements made based on earlier challenges. SAMPL has historically focused on the properties of host–guest systems and drug-like small molecules. These simple model systems require considerably fewer computational resources to simulate than protein systems, and thus converge more quickly. Through careful design, these model systems can be used to focus on one particular simulation challenge or a subset of them. The past several SAMPL host–guest, hydration free energy and log D challenges revealed the limitations of generalized force fields, facilitated the development of solvent models, and highlighted the importance of properly handling protonation states and salt effects. Participation Registration and participation are free for SAMPL challenges. Beginning with SAMPL7, challenge participation data was posted on the SAMPL website, as well as the GitHub page for the specific challenge. Instructions, input files and results were then provided through GitHub (earlier challenges provided content primarily through D3R for SAMPL4-5, and via other means for earlier SAMPLs). Participants were allowed to submit multiple predictions through the D3R website, either anonymously or with research affiliation.
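Once the experimental values are released, submitted estimates are compared against them using standard error and correlation statistics. The sketch below shows a generic evaluation of this kind; it is illustrative only and does not reproduce the official SAMPL analysis scripts, whose exact metric choices vary between challenges:

```python
import numpy as np
from scipy import stats

def evaluate_predictions(predicted, experimental):
    """Summary statistics of the kind used to compare blinded predictions with experiment."""
    predicted = np.asarray(predicted, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    rmse = float(np.sqrt(np.mean((predicted - experimental) ** 2)))
    mue = float(np.mean(np.abs(predicted - experimental)))      # mean unsigned error
    pearson_r = float(np.corrcoef(predicted, experimental)[0, 1])
    kendall_tau, _ = stats.kendalltau(predicted, experimental)  # rank-order agreement
    return {"RMSE": rmse, "MUE": mue, "Pearson r": pearson_r, "Kendall tau": float(kendall_tau)}

# Toy example with hypothetical binding free energies in kcal/mol.
print(evaluate_predictions([-5.1, -7.3, -6.0, -4.2], [-5.8, -7.0, -6.9, -3.9]))
```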
Since the SAMPL2 challenge, all participants have been invited to attend the SAMPL workshops and submit manuscripts to describe their results. After a peer-review process, the resulting papers, along with the overview papers which summarize all submitting data, were published in the special issues of the Journal of Computer-Aided Molecular Design. Funding The SAMPL project was recently funded by the NIH (grant GM124270-01A1), for the period of Sept. 2018 through August 2022, to allow the design of future SAMPL challenges to drive advances in the areas they are most needed for modeling efforts. The effort is spearheaded by David L. Mobley (UC Irvine) with co-investigators John D. Chodera (MSKCC), Bruce C. Gibb (Tulane), and Lyle Isaacs (Maryland). Currently challenges and workshops are run in partnership with the NIH-funded Drug Design Data Resource, but this will likely change over time as funding for the two projects is not coupled. Funding also allowed a broadening of scope of SAMPL; through SAMPL6, its role had been seen as primarily focused on physical properties, with D3R handling protein-ligand challenges. However, the funded effort broadened its focus to include systems which will drive improvements in modeling, including potentially suitable protein-ligand systems. This is still in contrast to D3R, which relies on donated datasets of pharmaceutical interest, whereas SAMPL challenges are specifically designed to focus on specific modeling challenges. History Earlier SAMPL challenges The first SAMPL exercise, SAMPL0 (2008) focused on the predictions of solvation free energies of 17 small molecules. A research group at Stanford University and scientists at OpenEye Scientific Software carried out the calculations. Despite the informal format, SAMPL0 laid the groundwork for the following SAMPL challenges. SAMPL1 (2009) and SAMPL2 challenges (2010) were organized by OpenEye and continued to focus on predicting solvation free energies of drug-like small molecules. Attempts were also made to predict binding affinities, binding poses and tautomer ratios. Both challenges attracted significant participations from computational scientists and researchers in academia and industry. SAMPL3 and SAMPL4 The blinded data sets for host–guest binding affinities were introduced for the first time in SAMPL3 (2011-2012), along with solvation free energies for small molecules and the binding affinity data for 500 fragment-like tyrosine inhibitors. Three host molecules were all from the cucurbituril family. The SAMPL3 challenge received 103 submissions from 23 research groups worldwide. Different from the prior three SAMPL events, the SAMPL4 exercise (2013-2014) was coordinated by academic researchers, with logistical support from OpenEye. Datasets in SAMPL4 consisted of binding affinities for host–guest systems and HIV integrase inhibitors, as well as hydration free energies of small molecules. Host molecules included cucurbit[7]uril (CB7) and octa-acid. The SAMPL4 hydration challenge involved 49 submissions from 19 groups. The participation of the host–guest challenge also grew significantly compared to SAMPL3. The workshop was held at Stanford University in September, 2013. SAMPL5 The protein-ligand challenges were separated from SAMPL in SAMPL5 (2015-2016) and were distributed as the new Grand Challenges of the Drug Design Data Resource (D3R). 
SAMPL5 allowed participants to make predictions of the binding affinities of three sets of host–guest systems: an acyclic CB7 derivative and two hosts from the octa-acid family. Participants were also encouraged to submit predictions for binding enthalpies. A wide array of computational methods was tested, including density functional theory (DFT), molecular dynamics, docking, and metadynamics. Distribution coefficient predictions were introduced for the first time, receiving a total of 76 submissions from 18 research groups or individual scientists for a set of 53 small molecules. The workshop was held in March 2016 at the University of California, San Diego, as part of the D3R workshop. The top-performing methods in the host–guest challenge yielded encouraging yet imperfect correlations with experimental data, accompanied by large, systematic shifts relative to experiment. SAMPL6 The SAMPL6 testing systems include cucurbit[8]uril, octa-acid, tetra-endo-methyl octa-acid, and a series of fragment-like small molecules. The host–guest, conformational sampling and pKa prediction challenges of SAMPL6 are now closed. The SAMPL6 workshop was jointly run with the D3R workshop in February 2018 at the Scripps Institution of Oceanography, and a SAMPL special issue of the Journal of Computer-Aided Molecular Design reported many of the results. A SAMPL6 Part II challenge focused on a small octanol-water partition coefficient prediction set and was followed by a virtual workshop on May 16, 2019 and a joint D3R/SAMPL workshop in San Diego in August 2019. A special issue or special section of JCAMD is planned to report the results. SAMPL6 inputs and results are available via the SAMPL6 GitHub repository. SAMPL7 SAMPL7 again included host-guest challenges and a physical property challenge. A protein-ligand binding challenge on PHIPA fragments was also included. Host-guest binding focused on several small molecules binding to octa-acid and exo-octa-acid; binding of two compounds to a series of cyclodextrin derivatives; and binding of a series of small molecules to a clip-like host known as TrimerTrip. A SAMPL7 virtual workshop took place and is available online. A SAMPL7 physical properties challenge is currently ongoing. Plans for a EuroSAMPL in-person workshop in Fall 2020 were derailed by COVID-19, and the workshop is being conducted virtually. SAMPL7 inputs, and results as challenge components are completed, are available via the SAMPL7 GitHub repository. SAMPL8 SAMPL8 included host-guest components on binding of drugs of abuse to CB8, and a series of small molecules to Gibb Deep Cavity Cavitands (GDCCs), as detailed on the SAMPL8 GitHub repository. An additional challenge focused on pKa and logD prediction for a series of drug-like molecules. SAMPL9 SAMPL9 is in the planning stages, except that a SAMPL9 host-guest challenge on a host from Lyle Isaacs' group is currently underway. Details are available on the SAMPL9 GitHub repository. SAMPL Special Issues SAMPL Publications A relatively complete list of SAMPL-related publications is maintained by the SAMPL organizers; more than 150 related papers have been published. Future challenges SAMPL is slated to continue its focus on physical property prediction, including logP and logD values, pKa prediction, host–guest binding, and other properties, as well as broadening to include a protein-ligand component.
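For the distribution-coefficient (logD) predictions mentioned above, a common route is to compute the transfer free energy of the solute between the two solvents and convert it to a base-10 logarithm. The following sketch shows only that standard thermodynamic conversion, with made-up input values; it is not tied to any particular SAMPL submission:

```python
import math

R = 1.987e-3   # gas constant in kcal / (mol K)
T = 298.15     # temperature in K

def log_d_from_solvation_free_energies(dG_water, dG_cyclohexane):
    """log10 D (cyclohexane/water) from solvation free energies in kcal/mol.

    dG_transfer = dG_cyclohexane - dG_water and logD = -dG_transfer / (ln(10) * R * T).
    """
    dG_transfer = dG_cyclohexane - dG_water
    return -dG_transfer / (math.log(10) * R * T)

# Hypothetical solvation free energies for one solute.
print(round(log_d_from_solvation_free_energies(dG_water=-6.0, dG_cyclohexane=-3.5), 2))
# Negative logD: the solute prefers water in this made-up example.
```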
Some data is planned to be collected directly by the SAMPL co-investigators (Chodera, Gibb and Isaacs), but industry partnerships and internships are also proposed. See also References External links Website Drug discovery Computational chemistry
SAMPL Challenge
[ "Chemistry", "Biology" ]
2,119
[ "Life sciences industry", "Drug discovery", "Theoretical chemistry", "Computational chemistry", "Medicinal chemistry" ]
49,400,436
https://en.wikipedia.org/wiki/Riemannian%20metric%20and%20Lie%20bracket%20in%20computational%20anatomy
Computational anatomy (CA) is the study of shape and form in medical imaging. The study of deformable shapes in CA rely on high-dimensional diffeomorphism groups which generate orbits of the form . In CA, this orbit is in general considered a smooth Riemannian manifold since at every point of the manifold there is an inner product inducing the norm on the tangent space that varies smoothly from point to point in the manifold of shapes . This is generated by viewing the group of diffeomorphisms as a Riemannian manifold with , associated to the tangent space at . This induces the norm and metric on the orbit under the action from the group of diffeomorphisms. The diffeomorphisms group generated as Lagrangian and Eulerian flows The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields, , generated via the ordinary differential equation with the Eulerian vector fields in for , with the inverse for the flow given by and the Jacobian matrix for flows in given as To ensure smooth flows of diffeomorphisms with inverse, the vector fields must be at least 1-time continuously differentiable in space which are modelled as elements of the Hilbert space using the Sobolev embedding theorems so that each element has 3-square-integrable derivatives thusly implies embeds smoothly in 1-time continuously differentiable functions. The diffeomorphism group are flows with vector fields absolutely integrable in Sobolev norm: The Riemannian orbit model Shapes in Computational Anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems. In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template , resulting in the observed images to be elements of the random orbit model of CA. For images these are defined as , with for charts representing sub-manifolds denoted as . The Riemannian metric The orbit of shapes and forms in Computational Anatomy are generated by the group action. This is made into a Riemannian orbit by introducing a metric associated to each point and associated tangent space. For this a metric is defined on the group which induces the metric on the orbit. Take as the metric for Computational anatomy at each element of the tangent space in the group of diffeomorphisms , with the vector fields modelled to be in a Hilbert space with the norm in the Hilbert space . We model as a reproducing kernel Hilbert space (RKHS) defined by a 1-1, differential operator. For a distribution or generalized function, the linear form determines the norm:and inner product for according to where the integral is calculated by integration by parts for a generalized function the dual-space. The differential operator is selected so that the Green's kernel associated to the inverse is sufficiently smooth so that the vector fields support 1-continuous derivative. The right-invariant metric on diffeomorphisms The metric on the group of diffeomorphisms is defined by the distance as defined on pairs of elements in the group of diffeomorphisms according to This distance provides a right-invariant metric of diffeomorphometry, invariant to reparameterization of space since for all , The Lie bracket in the group of diffeomorphisms The Lie bracket gives the adjustment of the velocity term resulting from a perturbation of the motion in the setting of curved spaces. 
Using Hamilton's principle of least-action derives the optimizing flows as a critical point for the action integral of the integral of the kinetic energy. The Lie bracket for vector fields in Computational Anatomy was first introduced in Miller, Trouve and Younes. The derivation calculates the perturbation on the vector fields in terms of the derivative in time of the group perturbation adjusted by the correction of the Lie bracket of vector fields in this function setting involving the Jacobian matrix, unlike the matrix group case: Proof: Proving Lie bracket of vector fields take a first order perturbation of the flow at point . The Lie bracket gives the first order variation of the vector field with respect to first order variation of the flow. The generalized Euler–Lagrange equation for the metric on diffeomorphic flows The Euler–Lagrange equation can be used to calculate geodesic flows through the group which form the basis for the metric. The action integral for the Lagrangian of the kinetic energy for Hamilton's principle becomes The action integral in terms of the vector field corresponds to integrating the kinetic energy The shortest paths geodesic connections in the orbit are defined via Hamilton's Principle of least action requires first order variations of the solutions in the orbits of Computational Anatomy which are based on computing critical points on the metric length or energy of the path. The original derivation of the Euler equation associated to the geodesic flow of diffeomorphisms exploits the was a generalized function equation when is a distribution, or generalized function, take the first order variation of the action integral using the adjoint operator for the Lie bracket () gives for all smooth , Using the bracket and gives meaning for all smooth Equation () is the Euler-equation when diffeomorphic shape momentum is a generalized function. This equation has been called EPDiff, Euler–Poincare equation for diffeomorphisms and has been studied in the context of fluid mechanics for incompressible fluids with metric. Riemannian exponential for positioning In the random orbit model of Computational anatomy, the entire flow is reduced to the initial condition which forms the coordinates encoding the diffeomorphism, as well as providing the means of positioning information in the orbit. This was first terms a geodesic positioning system in Miller, Trouve, and Younes. From the initial condition then geodesic positioning with respect to the Riemannian metric of Computational anatomy solves for the flow of the Euler–Lagrange equation. Solving the geodesic from the initial condition is termed the Riemannian-exponential, a mapping at identity to the group. The Riemannian exponential satisfies for initial condition , vector field dynamics , for classical equation on the diffeomorphic shape momentum as a smooth vector with the Euler equation exists in the classical sense as first derived for the density: for generalized equation, , then It is extended to the entire group, . The variation problem for matching or registering coordinate system information in computational anatomy Matching information across coordinate systems is central to computational anatomy. Adding a matching term to the action integral of Equation () which represents the target endpoint The endpoint term adds a boundary condition for the Euler–Lagrange equation () which gives the Euler equation with boundary term. 
Taking the variation gives Necessary geodesic condition: Proof: The Proof via variation calculus uses the perturbations from above and classic calculus of variation arguments. Euler–Lagrange geodesic endpoint conditions for image matching The earliest large deformation diffeomorphic metric mapping (LDDMM) algorithms solved matching problems associated to images and registered landmarks. are in a vector spaces. The image matching geodesic equation satisfies the classical dynamical equation with endpoint condition. The necessary conditions for the geodesic for image matching takes the form of the classic Equation () of Euler–Lagrange with boundary condition: Necessary geodesic condition: Euler–Lagrange geodesic endpoint conditions for landmark matching The registered landmark matching problem satisfies the dynamical equation for generalized functions with endpoint condition: Necessary geodesic conditions: Proof: The variation requires variation of the inverse generalizes the matrix perturbation of the inverse via giving giving References Computational anatomy Geometry Fluid mechanics Neural engineering Biomedical engineering
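For the registered landmark matching problem above, when the norm on the vector fields is defined by a Gaussian reproducing kernel, the geodesic (shooting) equations reduce to Hamiltonian ordinary differential equations for the landmark positions and momenta. The sketch below writes out one forward-Euler step of these standard landmark equations; it illustrates the general form only, is not code from the works cited in this article, and uses an arbitrary kernel width and arbitrary initial values:

```python
import numpy as np

def gaussian_kernel(q, sigma):
    """Gram matrix K[i, j] = exp(-||q_i - q_j||^2 / sigma^2) for landmark positions q."""
    diff = q[:, None, :] - q[None, :, :]
    return np.exp(-(diff ** 2).sum(-1) / sigma ** 2)

def hamiltonian_step(q, p, dt, sigma=1.0):
    """One forward-Euler step of the landmark geodesic (shooting) equations:
        dq_i/dt =  sum_j K(q_i, q_j) p_j
        dp_i/dt = -sum_j (p_i . p_j) grad_{q_i} K(q_i, q_j)
    """
    K = gaussian_kernel(q, sigma)
    dq = K @ p
    diff = q[:, None, :] - q[None, :, :]                   # (N, N, d) array of q_i - q_j
    pp = p @ p.T                                           # (N, N) array of p_i . p_j
    gradK = -2.0 / sigma ** 2 * diff * K[..., None]        # gradient of K with respect to q_i
    dp = -(pp[..., None] * gradK).sum(axis=1)
    return q + dt * dq, p + dt * dp

# Two landmarks in the plane with initial momenta pushing them in opposite directions.
q0 = np.array([[0.0, 0.0], [1.0, 0.0]])
p0 = np.array([[0.0, 1.0], [0.0, -1.0]])
print(hamiltonian_step(q0, p0, dt=0.1))
```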
Riemannian metric and Lie bracket in computational anatomy
[ "Mathematics", "Engineering", "Biology" ]
1,618
[ "Biological engineering", "Biomedical engineering", "Civil engineering", "Geometry", "Fluid mechanics", "Medical technology" ]
49,404,297
https://en.wikipedia.org/wiki/Browder%20fixed-point%20theorem
The Browder fixed-point theorem is a refinement of the Banach fixed-point theorem for uniformly convex Banach spaces. It asserts that if K is a nonempty convex closed bounded set in a uniformly convex Banach space and f is a mapping of K into itself such that ‖f(x) − f(y)‖ ≤ ‖x − y‖ for all x, y in K (i.e. f is non-expansive), then f has a fixed point. History Following the publication in 1965 of two independent versions of the theorem by Felix Browder and by William Kirk, a new proof of Michael Edelstein showed that, in a uniformly convex Banach space, every iterative sequence x, f(x), f(f(x)), … of a non-expansive map f has a unique asymptotic center, which is a fixed point of f. (An asymptotic center of a sequence (x_n), if it exists, is a limit of the Chebyshev centers c_n of the truncated sequences (x_n, x_{n+1}, …).) A stronger property than the asymptotic center is the Delta-limit of Teck-Cheong Lim, which in a uniformly convex space coincides with the weak limit if the space has the Opial property. See also Fixed-point theorems Banach fixed-point theorem References Felix E. Browder, Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. U.S.A. 54 (1965) 1041–1044 William A. Kirk, A fixed point theorem for mappings which do not increase distances, Amer. Math. Monthly 72 (1965) 1004–1006. Michael Edelstein, The construction of an asymptotic center with a fixed-point property, Bull. Amer. Math. Soc. 78 (1972), 206–208. Fixed-point theorems
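One common way to spell out the asymptotic-center construction mentioned above is the following sketch; the notation (x_n), c_n and K is a conventional assumption rather than taken from the references:

```latex
% Chebyshev center of the tail \{x_k : k \ge n\}: the point of K minimizing the worst-case distance
c_n \;=\; \operatorname*{arg\,min}_{y \in K} \; \sup_{k \ge n} \| x_k - y \| ,
\qquad
% asymptotic center of (x_n): the limit of these tail centers, when it exists
c \;=\; \lim_{n \to \infty} c_n .
```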
Browder fixed-point theorem
[ "Mathematics" ]
349
[ "Theorems in mathematical analysis", "Fixed-point theorems", "Theorems in topology" ]
49,404,619
https://en.wikipedia.org/wiki/Isotopic%20resonance%20hypothesis
The isotopic resonance hypothesis (IsoRes) postulates that certain isotopic compositions of chemical elements affect kinetics of chemical reactions involving molecules built of these elements. The isotopic compositions for which this effect is predicted are called resonance isotopic compositions. Fundamentally, the IsoRes hypothesis relies on a postulate that less complex systems exhibit faster kinetics than equivalent but more complex systems. Furthermore, a system's complexity is affected by its symmetry (more symmetric systems are simpler), and symmetry (in the general meaning) of reactants may be affected by their isotopic composition. The term “resonance” relates to the use of this term in nuclear physics, where peaks in the dependence of a reaction cross section upon energy are called “resonances”. Similarly, a sharp increase (or decrease) in the reaction kinetics as a function of the average isotopic mass of a certain element is called here a resonance. History of formulation The concept of isotopes developed from radioactivity. The pioneering work on radioactivity by Henri Becquerel, Marie Curie and Pierre Curie was awarded the Nobel Prize in Physics in 1903. Later Frederick Soddy would take radioactivity from physics to chemistry and shed light on the nature of isotopes, something which earned him the Nobel Prize in Chemistry in 1921 (awarded in 1922). The question of stable, non-radioactive isotopes was more difficult and required the development by Francis Aston of a high-resolution mass spectrograph, which allowed the separation of different stable isotopes of one and the same element. Francis Aston was awarded the 1922 Nobel Prize in Chemistry for this achievement. With his enunciation of the whole-number rule, Aston solved a problem that had riddled chemistry for a hundred years. The understanding was that different isotopes of a given element would be chemically identical. Deuterium, the heavy stable isotope of hydrogen, was discovered by Harold Urey in 1932 (he was awarded the Nobel Prize in Chemistry in 1934). It was early on found that the deuterium content had a profound effect on chemistry and biochemistry. In the linear approximation, the effect of isotopic substitution is proportional to the mass ratio of the heavy and light isotope. Thus chemical and biological effects of heavier isotopes of the “biological” atoms C, N and O are expected to be much smaller, since the mass ratios for the normal to heavier isotopes are much closer to unity than the factor of two for hydrogen to deuterium. However, it has been reported in the 1930s, and then again in the 1970s and 1990s, as well as recently, that relatively small changes in the content of the heavy isotope of hydrogen, deuterium, have profound effects on biological systems. These strong nonlinear effects could not be fully rationalized based on the known concepts of isotopic effects. These and other observations suggest that isotopes may have a much more profound importance than could have been imagined by the pioneers. In 2011 Roman Zubarev formulated the isotope resonance hypothesis. It originated in the following, unexpected observation. Define ΔMm = Mmono − Mnom, where Mmono is the monoisotopic mass (e.g. O = 15.994915 Da) and Mnom is the nominal (integer) mass, i.e., the number of nucleons (e.g. 16O = 16). ΔMm is a constant in the whole Universe. Define ΔMis = Mav − Mmono, where Mav is the average isotopic mass (e.g. O = 15.999 Da on Earth). Obviously ΔMis depends on the precise isotopic composition for a given molecule.
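As a small numerical illustration of the two definitions above, using only the oxygen values quoted in the text (the normalized quantities NMD and NIS are defined just below), a minimal Python sketch:

```python
# Mass-defect and isotope-shift quantities for terrestrial oxygen,
# using the values quoted in the text (in Da).
M_mono = 15.994915   # monoisotopic mass of O
M_nom = 16           # nominal (integer) mass: number of nucleons
M_av = 15.999        # average isotopic mass of O on Earth

delta_M_m = M_mono - M_nom    # mass defect, the same everywhere in the Universe
delta_M_is = M_av - M_mono    # isotopic shift, depends on isotopic composition

# Normalized values in per mille (the NMD and NIS quantities defined below)
NMD = 1000 * delta_M_m / M_nom
NIS = 1000 * delta_M_is / M_nom

print(f"dMm = {delta_M_m:+.6f} Da, dMis = {delta_M_is:+.6f} Da")
print(f"NMD = {NMD:+.3f} per mille, NIS = {NIS:+.3f} per mille")
```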
Finally define NMD = 1000ΔMm/Mnom and NIS = 1000ΔMis/Mnom, where NMD [in units of ‰] and NIS [in units of ‰] are the normalized isotopic defect and shift, respectively. If NIS is plotted as a function of NMD for a large number of terrestrial peptides, one would anticipate a homogeneous distribution of data points (as in Fig. 1B). This is not what was found by Zubarev's team; instead they found a band gap in the distribution with a narrow line in the middle (Fig. 1A). This serendipitous discovery led Zubarev to formulate the isotope resonance hypothesis. Analogues in science As an example of isotopic symmetry (in the compositional, and not in the geometrical sense) affecting the kinetics of physico-chemical processes, see mass independent isotope fractionation in ozone O3. Implication for the origin of life According to the IsoRes hypothesis, there are certain resonance isotopic compositions at which terrestrial organisms thrive best. Curiously, average terrestrial isotopic compositions are very close to a resonance affecting a large class of amino acids and polypeptides, the molecules of utmost importance for life. Thus, the IsoRes hypothesis suggests that early life on Earth was aided, perhaps critically, by the proximity to an IsoRes. In contrast, there is no strong resonance for the atmosphere of Mars, which led to a prediction that life could not have originated on Mars and that the planet is probably sterile. Other nontrivial predictions One would expect that enrichment of heavy isotopes leads to progressively slower reactions, but the IsoRes hypothesis suggests that there exist certain resonance compositions for which kinetics increases even for higher abundances of heavy stable isotopes. For example, at 9.5% 13C, 10.9% 15N and 6.6% 18O (when all three elements are 10-35 times enriched compared to their natural abundances) and normal deuterium composition (150 ppm or 0.015%), a very strong resonance (Fig. 1C) is predicted (“super-resonance”). Yet another nontrivial prediction of the IsoRes hypothesis is that at ≈250-350 ppm deuterium content, the terrestrial resonance becomes “perfect”, and the rates of biochemical reactions and growth of terrestrial organisms further increase. This prediction seems to be matched by at least some experimental observations. Experimental verification The IsoRes hypothesis has been tested experimentally by means of growth of E. coli and found to be supported by extremely strong statistics (p ≪ 10⁻¹⁵). Particularly strong evidence of faster growth was found for the “super-resonance”. Fig. 1. 2D plot of molecular masses of 3000 E. coli tryptic peptides. A – terrestrial isotopic compositions (red arrow shows the line representing the resonance); B – 18O abundance is increased by 20%, which destroyed the terrestrial resonance; C – isotopic compositions of the “super-resonance”, where all dots (molecules) are perfectly aligned. Adapted from ref. 4. See also Stable nuclide Mass independent isotope fractionation Heavy water References Isotopes
Isotopic resonance hypothesis
[ "Physics", "Chemistry" ]
1,404
[ "Isotopes", "Nuclear physics" ]
49,405,695
https://en.wikipedia.org/wiki/Qualitative%20theory%20of%20differential%20equations
In mathematics, the qualitative theory of differential equations studies the behavior of differential equations by means other than finding their solutions. It originated from the works of Henri Poincaré and Aleksandr Lyapunov. There are relatively few differential equations that can be solved explicitly, but using tools from analysis and topology, one can "solve" them in the qualitative sense, obtaining information about their properties. It was used by Benjamin Kuipers in the book Qualitative reasoning: modeling and simulation with incomplete knowledge to demonstrate how the theory of PDEs can be applied even in situations where only qualitative knowledge is available. References Further reading Kuipers, Benjamin. Qualitative reasoning: modeling and simulation with incomplete knowledge. MIT press, 1994. Viktor Vladimirovich Nemytskii, Vyacheslav Stepanov, Qualitative theory of differential equations, Princeton University Press, Princeton, 1960. Original references Henri Poincaré, "Mémoire sur les courbes définies par une équation différentielle", Journal de Mathématiques Pures et Appliquées (1881, in French). Aleksandr Lyapunov's work on the general problem of the stability of motion, originally from the year 1892, was translated from the original Russian into French and then into this English version. Differential equations
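As an illustration of what "solving in the qualitative sense" can mean in practice, here is a small, self-contained Python sketch; the example system dx/dt = x(1 − x) is an arbitrary illustrative choice, not taken from the text. It finds the equilibria and classifies their stability by linearization, without computing any explicit solution:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x * (1 - x)          # example system: dx/dt = x(1 - x)

equilibria = sp.solve(sp.Eq(f, 0), x)   # points where the flow is stationary
fprime = sp.diff(f, x)                  # linearization df/dx

for eq in equilibria:
    slope = fprime.subs(x, eq)
    kind = "stable" if slope < 0 else "unstable"
    print(f"equilibrium x = {eq}: f'(x) = {slope} -> {kind}")
```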
Qualitative theory of differential equations
[ "Mathematics" ]
257
[ "Mathematical objects", "Differential equations", "Equations" ]
49,405,696
https://en.wikipedia.org/wiki/Pressure%20oxidation
Pressure oxidation is a process for extracting gold from refractory ore. The most common refractory ores are pyrite and arsenopyrite, which are sulfide ores that trap the gold within them. Refractory ores require pre-treatment before the gold can be adequately extracted. The pressure oxidation process is used to prepare such ores for conventional gold extraction processes such as cyanidation. It is performed in an autoclave at high pressure and temperature, where high-purity oxygen mixes with a slurry of ore. When the original sulfide minerals are oxidized at high temperature and pressure, the trapped gold is completely released. Pressure oxidation has a very high gold recovery rate, normally at least 10% higher than roasting. The oxidation of the iron sulfide minerals produces sulfuric acid, soluble compounds such as ferric sulfate, and solids such as iron sulfate or jarosite. The iron-based solids produced pose an environmental challenge, as they can release acid and heavy metals to the environment. They can also make later precious metal recovery more difficult. Arsenic in the ore is converted to solid scorodite inside the autoclave, allowing it to be easily disposed of. This is an advantage over processes such as roasting, where these toxic products are released as gases. A disadvantage of pressure oxidation is that any silver in the feed material will often react to form silver jarosite inside the autoclave, making it difficult and expensive to recover the silver. An example of a mine utilizing this technology is the Pueblo Viejo mine in the Dominican Republic. At Pueblo Viejo, the process is performed by injecting high-purity oxygen into autoclaves operating at 230 °C and 40 bar of pressure. The resulting chemical reactions oxidize the sulfide minerals the gold is trapped within. The oxidation of pyrite is highly exothermic, allowing the autoclave to operate at this temperature without an external heat source. References Metallurgy Metallurgical processes
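As a rough sketch of the chemistry described above, one commonly cited overall reaction for pyrite oxidation under these conditions is given below; the exact split between ferric sulfate, iron sulfate and jarosite depends on autoclave conditions, so this equation is illustrative rather than taken from the text:

```latex
% Overall oxidation of pyrite to ferric sulfate and sulfuric acid (highly exothermic)
4\,\mathrm{FeS_2} \;+\; 15\,\mathrm{O_2} \;+\; 2\,\mathrm{H_2O}
\;\longrightarrow\; 2\,\mathrm{Fe_2(SO_4)_3} \;+\; 2\,\mathrm{H_2SO_4}
```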
Pressure oxidation
[ "Chemistry", "Materials_science", "Engineering" ]
409
[ "Metallurgical processes", "Metallurgy", "nan", "Materials science" ]
40,466,295
https://en.wikipedia.org/wiki/Plasma%20diffusion
Due to the presence of charged particles in plasma, plasma diffusion significantly differs from diffusion of gas or liquid. Even in the absence of externally applied fields, the interaction between the positive (ions) and negative (usually, electrons) plasma particles results in ambipolar diffusion with the diffusion coefficient that is dissimilar to that of either electron or ion species separately if the interaction is neglected. Plasma diffusion across a magnetic field is an important topic in magnetic confinement of fusion plasma. It especially concerns how plasma transport is related to the strength of an external magnetic field B. Classical diffusion predicts the 1/B2 scaling, while Bohm diffusion, borne out of experimental observations from early confinement machines, was conjectured to follow the 1/B scaling. It is still an area of active research. See also Magnetic diffusion Diffusion Diffusion, plasma
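The two scalings mentioned above are usually summarized by the following textbook expressions, written here as a sketch; the Larmor-radius argument and the numerical factor 1/16 in the Bohm coefficient are conventional values from the literature, not statements from the text:

```latex
% Classical cross-field diffusion: random-walk step ~ Larmor radius \rho_L, collision rate \nu
D_{\perp,\,\mathrm{classical}} \;\sim\; \rho_L^{2}\,\nu \;\propto\; \frac{1}{B^{2}} ,
\qquad
% Bohm diffusion: empirical coefficient scaling as 1/B
D_{\mathrm{Bohm}} \;=\; \frac{1}{16}\,\frac{k_B T_e}{e B} \;\propto\; \frac{1}{B} .
```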
Plasma diffusion
[ "Physics", "Chemistry" ]
170
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Plasma physics", "Plasma phenomena", "Plasma physics stubs" ]
40,466,325
https://en.wikipedia.org/wiki/Cerebral%20organoid
A neural organoid, or brain organoid, is an artificially grown, in vitro tissue resembling parts of the human brain. Neural organoids are created by culturing pluripotent stem cells into a three-dimensional culture that can be maintained for years. The brain is an extremely complex system of heterogeneous tissues and consists of a diverse array of neurons and glial cells. This complexity has made studying the brain and understanding how it works a difficult task in neuroscience, especially when it comes to neurodevelopmental and neurodegenerative diseases. The purpose of creating an in vitro neurological model is to study these diseases in a more defined setting. This 3D model is free of many potential in vivo limitations. The varying physiology between human and other mammalian models limits the scope of animal studies in neurological disorders. Neural organoids contain several types of nerve cells and have anatomical features that recapitulate regions of the nervous system. Some neural organoids are most similar to neurons of the cortex; in some cases they resemble the retina, spinal cord, thalamus or hippocampus. Other neural organoids are unguided and contain a diversity of neural and non-neural cells. Stem cells have the potential to grow into many different types of tissues, and their fate is dependent on many factors. Below is an image showing some of the chemical factors that can lead stem cells to differentiate into various neural tissues; a more in-depth table of generating specific organoid identity has been published. Similar techniques are used on stem cells used to grow cerebral organoids. Model development Using human pluripotent stem cells to create in vitro neural organoids allows researchers to analyze current developmental mechanisms for human neural tissue as well as study the roots of human neurological diseases. Neural organoids are an investigative tool used to understand how disease pathology works. These organoids can be used in experiments that current in vitro methods are too simplistic for, while also being more applicable to humans than rodent or other mammalian models might be. Historically, major breakthroughs in how the brain works have resulted from studying injury or disorder in human brain function. An in vitro human brain model permits the next wave in our understanding of the human nervous system. Culturing methods An embryoid body cultivated from pluripotent stem cells is used to make an organoid. Embryoid bodies are composed of three layers: endoderm, mesoderm and ectoderm, which have the potential to be differentiated into different types of tissue. A cerebral organoid can be formed by inducing the ectoderm cells to differentiate into cerebral tissue. The general procedure can be broken down into 5 steps. First human pluripotent stem cells are cultured. They are then cultivated into an embryoid body. Next the cell culture is induced to form a neuroectoderm. The neuroectoderm is then grown in a matrigel droplet. The matrigel provides nutrients and the neuroectoderm starts to proliferate and grow. Replication of specific brain regions in cerebral organoid counterparts is achieved by the addition of extracellular signals to the organoid environment during different stages of development; these signals were found to create changes in cell differentiation patterns, thus leading to recapitulation of the desired brain region. SMAD inhibition may be used in usual cerebral organoid culturing processes to generate microglia in cerebral organoids.
The lack of vasculature limits the size the organoid can grow. This has been the major limitation in organoid development. The use of a spinning bioreactor may improve the availability of nutrients to cells inside the organoid to improve organoid development. Spinning bioreactors have been used increasingly in cell culture and tissue growth applications. The reactor is able to deliver faster cell doubling times, increased cell expansion and increased extra-cellular matrix components when compared to statically cultured cells. Components Differentiation It has been shown that cerebral organoids grown using the spinning bioreactor 3D culture method differentiate into various neural tissue types, such as the optic cup, hippocampus, ventral parts of the teleencephelon and dorsal cortex. Furthermore, it was shown that human brain organoids could intrinsically develop integrated light-sensitive optic cups. The neural stem/progenitor cells are unique because they are able to self-renew and are multipotent. This means they can generate neurons and glial cells which are the two main components of neural systems. The fate of these cells is controlled by several factors that affect the differentiation process. The spatial location and temporal attributes of neural progenitor cells can influence if the cells form neurons or glial cells. Further differentiation is then controlled by extracellular conditions and cell signaling. The exact conditions and stimuli necessary to differentiate neural progenitor cells into specific neural tissues such as hippocampal tissue, optic nerve, cerebral cortex, etc. are unknown. It is believed that cerebral organoids can be used to study the developmental mechanisms of these processes. Gene expression To test if the neural progenitor cells and stem cells are differentiating into specific neural tissues, several gene markers can be tested. Two markers that are present during pluripotent stages are OCT4 and NANOG. These two markers are diminished during the course of development for the organoid. Neural identity markers that note successful neural induction, SOX1 and PAX6, are upregulated during organoid development. These changes in expression support the case for self-guided differentiation of cerebral organoids. Markers for forebrain and hindbrain can also be tested. Forebrain markers FOXG1 and SIX3 are highly expressed throughout organoid development. However, hindbrain markers EGR2 and ISL1 show early presence but a decrease in the later stages. This imbalance towards forebrain development is similar to the developmental expansion of forebrain tissue in human brain development. To test if organoids develop even further into regional specification, gene markers for cerebral cortex and occipital lobe have been tested. Many regions that have forebrain marker FOXG1, labeling them as regions with cerebral cortical morphology, were also positive for marker EMX1 which indicates dorsal cortical identity. These specific regions can be even further specified by markers AUTS2, TSHZ2, and LMO4 with the first representing cerebral cortex and the two after representing the occipital lobe. Genetic markers for the hippocampus, ventral forebrain, and choroid plexus are also present in cerebral organoids, however, the overall structures of these regions have not yet been formed. Organization Cerebral organoids also possess functional cerebral cortical neurons. These neurons must form on the radially organized cortical plate. 
The marker TBR1 is present in the preplate, the precursor to the cortical plate, and is present, along with MAP2, a neuronal marker, in 30-day-old cerebral organoids. These markers are indicative of a basal neural layer similar to a preplate. These cells are also apically adjacent to a neutral zone and are reelin+ positive, which indicates the presence of Cajal-Retzius cells. The Cajal-Retzius cells are important to the generation of cortical plate architecture. The cortical plate is usually generated inside-out such that later-born neurons migrate to the top superficial layers. This organization is also present in cerebral organoids based on genetic marker testing. Neurons that are early born have marker CTIP2 and are located adjacent to the TBR1 exhibiting preplate cells. Late-born neurons with markers SATB2 and BRN2 are located in a superficial layer, further away from the preplate than the early born neurons suggesting cortical plate layer formation. Additionally, after 75 days of formation, cerebral organoids show a rudimentary marginal zone, a cell-poor region. The formation of layered cortical plate is very basic in cerebral organoids and suggests the organoid lacks the cues and factors to induce formation of layer II-VI organization. The cerebral organoid neurons can, however, form axons as shown by GFP staining. GFP labeled axons have been shown to have complex branching and growth cone formation. Additionally, calcium dye imaging has shown cerebral organoids to have Ca2+ oscillations and spontaneous calcium surges in individual cells. The calcium signaling can be enhanced through glutamate and inhibited through tetrodotoxin. Interactions with environment In DishBrain, grown human brain cells were integrated into digital systems to play a simulated Pong via electrophysiological stimulation and recording. The cells "showed significantly improved performance in Pong" when embodied in a virtual game-world. In the 2020s, significant changes in how these electrophysiological systems are made and interact with brain organoids could lead to better stimulation and recording data across the organoind in 3D. Interactions with surrounding tissues It is not fully understood how individual localized tissues formed by stem cells are able to coordinate with surrounding tissues to develop into a whole organ. It has been shown however that most tissue differentiation requires interactions with surrounding tissues and depends on diffusible induction factors to either inhibit or encourage various differentiation and physical localization. Cerebral organoid differentiation is somewhat localized. The previously mentioned markers for forebrain and hindbrain are physically localized, appearing in clusters. This suggests that local stimuli are released once one or more cells differentiate into a specific type as opposed to a random pathway throughout the tissue. The markers for subspecification of cortical lobes, prefrontal cortex and occipital lobe, are also physically localized. However, the hippocampus and ventral forebrain cells are not physically localized and are randomly located through the cerebral organoid. Cerebral organoids lack blood vessels and are limited in size by nutrient uptake in the innermost cells. Spinning bioreactors and advanced 3D scaffolding techniques are able to increase organoid size, though the integration of in vitro nutrient delivery systems is likely to spark the next major leap in cerebral organoid development. 
Assays Cerebral organoids have the potential to function as a model with which disease and gene expression might be studied. However, diagnostic tools are needed to evaluate cerebral organoid tissue and create organoids modeling the disease or state of development in question. Transcriptome analysis has been used as an assay to examine the pathology of cerebral organoids derived from individual patients. Additionally, TUNEL assays have been used in studies as an evaluative marker of apoptosis in cerebral organoids. Other assays used to analyze cerebral organoids include the following: Genetic modifications Cerebral organoids can be used to study gene expression via genetic modifications. The degree to which these genetic modifications are present in the entire organoid depends on what stage of development the cerebral organoid is in when these genetic modifications are made; the earlier these modifications are made, such as when the cerebral organoid is in the single cell stage, the more likely these modifications will affect a greater portion of the cells in the cerebral organoid. The degree to which these genetic modifications are present within the cerebral organoid also depends on the process by which these genetic modifications are made. If the genetic information is administered into one cerebral organoid cell's genome via machinery, then the genetic modification will remain present in cells resulting from replication. Crispr/Cas 9 is a method by which this long-lasting genetic modification can be made. A system involving use of transposons has also been suggested as a means to generate long-lasting genetic modifications; however, the extent to which transposons might interact with a cell genome might differs on a cell to cell basis, which would create variable expressivity between cerebral organoid cells. If, however, the genetic modification is made via “genetic cargo” insertion (such as through Adeno-associated virus/ electroporation methods) then it has been found that the genetic modification becomes less present with each round of cell division in cerebral organoids. Computational methods Use of computational methods have been called for as a means to help improve the cerebral organoid cultivation process; development of computational methods has also been called for in order to provide necessary detailed renderings of different components of the cerebral organoid (such as cell connectivity) that current methods are unable to provide. Programming designed to model detailed cerebral organoid morphology does not yet exist. Applications There are many potential applications for cerebral organoid use, such as cell fate potential, cell replacement therapy, and cell-type specific genome assays. Cerebral organoids also provide a unique insight into the timing of development of neural tissues and can be used as a tool to study the differences across species. Further potential applications for cerebral organoids include: Tissue morphogenesis Tissue morphogenesis with respect to cerebral organoids covers how neural organs form in vertebrates. Cerebral organoids can serve as in vitro tools to study the formation, modulate it, and further understand the mechanisms controlling it. Migration assays Cerebral organoids can help to study cell migration. Neural glial cells cover a wide variety of neural cells, some of which move around the neurons. The factors that govern their movements, as well as neurons in general, can be studied using cerebral organoids. 
Clonal lineage tracing Clonal lineage tracing is part of fate mapping, where the lineage of differentiated tissues is traced to the pluripotent progenitors. The local stimuli released and the mechanism of differentiation can be studied using cerebral organoids as a model. Genetic modifications in cerebral organoids could serve as a means to accomplish lineage tracing. Transplantation Cerebral organoids can be used to grow specific brain regions and transplant them into regions of neurodegeneration as a therapeutic treatment. They can fuse with host vasculature and be immunologically silent. In some cases, the genomes of these cerebral organoids would first have to be edited. Recent studies have been able to achieve successful transplantation and integration of cerebral organoids into mouse brains; development of cell differentiation and vascularity was also observed after transplantation. Cerebral organoids might serve as the basis for transplantation and rebuilding in the human brain due to the similarity in structure. Drug testing Cerebral organoids can be used as simple models of complex brain tissues to study the effects of drugs and to screen them for initial safety and efficacy. Testing new drugs for neurological diseases could also result from this method of applying drug high-throughput screening methods to cerebral organoids. After 2015, significant effort has gone into fabricating microscale devices to generate reproducible cerebral organoids at high-throughput. Developmental biology Organoids can be used for the study of brain development, for example identifying and investigating genetic switches that have a significant impact on it. This can be used for the prevention and treatment of specific diseases (see below) but also for other purposes such as insights into the genetic factors of recent brain evolution (or the origin of humans and evolved difference to other apes), human enhancement and improving intelligence, identifying detrimental exposome impacts (and protection thereof), or improving brain health spans. Disease study Organoids can be used to study the crucial early stages of brain development, test drugs and, because they can be made from living cells, study individual patients. Additionally, the development of vascularized cerebral organoids could be used for investigating stroke therapy in the future. Zika Virus Zika virus has been shown to have teratogenic effects, causing defects in fetal neurological development. Cerebral organoids have been used in studies in order to understand the process by which Zika virus affects the fetal brain and, in some cases, causes microcephaly. Cerebral organoids infected with the Zika virus have been found to be smaller in size than their uninfected counterparts, which is reflective of fetal microcephaly. Increased apoptosis was also found in cerebral organoids infected with Zika virus. Another study found that neural progenitor cell (NPC) populations were greatly reduced in these samples. The two methods by which NPC populations were reduced were increased cell death and reduced cell proliferation. TLR3 receptor upregulation was identified in these infected organoids. Inhibition of this TLR3 receptor was shown to partially halt some of the Zika induced effects. Additionally, lumen size was found to be increased in organoids infected with Zika virus. 
The results found from studying cerebral organoids infected with Zika virus at different stages of maturation suggest that early exposure in developing fetuses can cause greater likelihood of Zika virus-associated neurological birth defects. Cocaine Cocaine has also been shown to have teratogenic effects on fetal development. Cerebral organoids have been used to study which enzyme isoforms are necessary for fetal neurological defects caused by cocaine use during pregnancy. One of these enzymes was determined to be cytochrome P450 isoform CYP3A5. Microcephaly In one case, a cerebral organoid grown from a patient with microcephaly demonstrated related symptoms and revealed that apparently, the cause is overly rapid development, followed by slower brain growth. Microcephaly is a developmental condition in which the brain remains undersized, producing an undersized head and debilitation. Microcephaly is not well suited to study in mouse models, which do not replicate the condition. The primary form of the disease is thought to be caused by a homozygous mutation in the microcephalin gene. The disease is difficult to reproduce in mouse models because mice lack the developmental stages for an enlarged cerebral cortex that humans have. Naturally, a disease which affects this development would be impossible to show in a model which does not have it to begin with. To use cerebral organoids to model a human's microcephaly, one group of researchers has taken patient skin fibroblasts and reprogrammed them using four well known reprogramming factors. These include OCT4, SOX2, MYC and KLF4. The reprogrammed sample was able to be cloned into induced pluripotent stem cells. The cells were cultured into a cerebral organoid following a process described in the cerebral organoid creation section below. The organoid that resulted had decreased numbers of neural progenitor cells and smaller tissues. Additionally, the patient-derived tissues displayed fewer and less frequent neuroepithelial tissues made of progenitors, decreased radial glial stem cells, and increased neurons. These results suggest that the underlying mechanism of microcephaly is caused by cells prematurely differentiating into neurons, leaving a deficit of radial glial cells. Alzheimer's disease Alzheimer's disease pathology has also been modeled with cerebral organoids. Affected individuals' pluripotent stem cells were used to generate brain organoids and then compared to control models synthesised from healthy individuals. In the affected models, structures similar to the amyloid beta plaques and neurofibrillary tangles that cause the disease's symptoms were observed. Previous attempts to model this so accurately had been unsuccessful, with drugs being developed on the basis of efficacy in pre-clinical murine models demonstrating no effect in human trials. Autism spectrum disorders Cerebral organoids can also be used to study autism spectrum disorders. In one study, cerebral organoids were cultured from cells derived from macrocephaly ASD patients. These cerebral organoids were found to reflect characteristics typical of the ASD-related macrocephaly phenotype found in the patients. By cultivating cerebral organoids from ASD patients with macrocephaly, connections could be made between certain gene mutations and phenotypic expression. Autism has also been studied through the comparison of healthy versus affected synthesised brain organoids.
Observation of the two models showed the overexpression of a transcription factor FOXG1 that produced a larger amount of GABAergic inhibitory neurons in the affected models. The significance of this use of brain organoids is that it has added great support for the excitatory/inhibitory imbalance hypothesis which if proven true could help identify targets for drugs so that the condition could be treated. The field of epigenetics and how DNA methylation might influence development of ASD has also been of interest in recent years. The traditional method of studying post-mortem neural samples from individuals with ASD poses many challenges, so cerebral organoids have been proposed as an alternate method of studying the potential effect that epigenetic mechanisms may have on the development of autism. This use of the cerebral organoid model to examine ASD and epigenetic patterns might provide insight in regards to epigenetic developmental timelines. However, it is important to note that the conditions in which cerebral organoids are cultured in might affect gene expression, and consequentially affect observations made using this model. Additionally, there is concern over the variability in cerebral organoids cultured from the same sample. Further research into the extent and accuracy by which cerebral organoids recapitulate epigenetic patterns found in primary samples is also needed. Preterm hypoxia/ischemia Preterm hypoxic injury remain difficult to study because of limited availability of human fetal brain tissues and inadequate animal models to study human corticogenesis. Cerebral organoid can be used to model prenatal pathophysiology and to compare the susceptibility of the different neural cell types to hypoxia during corticogenesis. Intermediate progenitors seem to be particularly affected, due to the unfolded protein response pathway. It has also been observed that hypoxia resulted in apoptosis in cerebral organoids, with outer radial glia and neuroblasts/immature neurons being particularly affected. Glioblastomas Traditional means of studying glioblastomas come with limitations. One example of such limitations would be the limited sample availability. Because of these challenges that come with using a more traditional approach, cerebral organoids have been used as an alternative means to model the development of brain cancer. In one study, cerebral organoids were simulated to reflect tumor-like qualities using CRISPR CAS-9. Increased cell division was observed in these genetically altered models. Cerebral organoids were also used in mice models to study tumorigenesis and invasiveness. At the same time, the growth of brain cancers is influenced by environmental factors which are not yet replicable in cerebral organoid models. Cerebral organoids have been shown to provide insight into dysregulation of genes responsible for tumor development. Multiple Sclerosis Multiple sclerosis is an auto-immune inflammatory disorder affecting the central nervous system. Environmental and genetic factors contribute to the development of multiple sclerosis, however the etiology of this condition is unknown. Induced pluripotent stem cells from healthy human controls, as well as from patients with multiple sclerosis were grown into cerebral organoids creating an innovative human model of this disease. 
Limitations Cerebral organoids are preferred over their 3D cell culture counterparts because they can better reflect the structure of the human brain, and because, to a certain extent, they can reflect fetal neocortex development over an extended period of time. While cerebral organoids have a lot of potential, their culturing and development comes with limitations and areas for improvement. For example, it takes several months to create one cerebral organoid, and the methods used to analyze them are also time-consuming. Additionally, cerebral organoids do not have structures typical of a human brain, such as a blood brain barrier. This limits the types of diseases that can be studied. Other limitations include: Necrotic centers Until recently, the central part of organoids have been found to be necrotic due to oxygen as well as nutrients being unable to reach that innermost area. This imposes limitations to cerebral organoids' physiological applicability. Because of this lack of oxygen and nutrients, neural progenitor cells are limited in their growth. However, recent findings suggest that, in the process of culturing a cerebral organoid, a necrotic center could be avoided by using fluidic devices to increase the organoid's exposure to media. Reliability in generation The structure of cerebral organoids across different cultures has been found to be variable; a standardization procedure to ensure uniformity has yet to become common practice. Future steps in revising cerebral organoid production would include creating methods to ensure standardization of cerebral organoid generation. One such step proposed involves regulating the composition and thickness of the gel in which cerebral organoids are cultured in; this might contribute to greater reliability in cerebral organoid production. Additionally, variability in generation of cerebral organoids is introduced due to differences in stem cells used. These differences can arise from different manufacturing methods or host differences. Increased metabolic stress has also been found within organoids. This metabolic stress has been found to restrict organoid specificity. Future steps to streamline organoid culturing include analyzing more than one sample at a time. Maturity At the moment, the development of mature synapses in cerebral organoids is limited because of the media used. Additionally, while some electrophysiological properties have been shown to develop in cerebral organoids, cultivation of separate and distinct organoid regions has been shown to limit the maturation of these electrophysiological properties. Modeling of electrophysiological neurodevelopmental processes typical of development later in the neurodevelopmental timeline, such as synaptogenesis, is not yet suggested in cerebral organoid models. Since cerebral organoids are reflective of what happens during fetal neurodevelopment, there has been concern over how late onset diseases manifest in them. Future improvements include developing a way to recapitulate neurodegenerative diseases in cerebral organoids. Ethics Sentient organoids Ethical concerns have been raised with using cerebral organoids as a model for disease due to the potential of them experiencing sensations such as pain or having the ability to develop a consciousness. 
Currently it is unlikely given the simplicity of synthesised models compared to the complexity of a human brain; however, models have been shown to respond to light-based stimulation, so present models do have some scope of responding to some stimuli. Guidelines and legislation Steps are being taken towards resolving the grey area such as a 2018 symposium at Oxford University where experts in the field, philosophers and lawyers met to try to clear up the ethical concerns with the new technology. Similarly, projects such as Brainstorm from Case Western University aim to observe the progress of the field by monitoring labs working with brain organoids to try to begin the ‘building of a philosophical framework’ that future guidelines and legislation could be built upon. Humanized animals Additionally, the "humanization" of animal models has been raised as a topic of concern in transplantation of human stem cell derived organoids into other animal models. For example, potential future concerns of this type were described when human brain tissue organoids were transplanted into baby rats, appearing to be highly functional, to mature and to integrate with the rat brain. Such models can be used to model human brain development and, as demonstrated, to investigate diseases (and their potential therapies) but could be controversial. See also Neural tissue engineering References Developmental neuroscience Stem cells Central nervous system Synthetic biology
Cerebral organoid
[ "Engineering", "Biology" ]
5,575
[ "Synthetic biology", "Biological engineering", "Molecular genetics", "Bioinformatics" ]
35,258,079
https://en.wikipedia.org/wiki/Triphenylcarbethoxymethylenephosphorane
Triphenylcarbethoxymethylenephosphorane is an organophosphorus compound with the chemical formula Ph3PCHCO2Et (Ph = phenyl, Et = ethyl). It is a white solid that is soluble in organic solvents. The compound is a Wittig reagent. It is used to replace oxygen centres in ketones and aldehydes with CHCO2Et. References Organophosphorus compounds Ethyl esters
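As a sketch of the transformation described above (R denotes a generic substituent; writing the product as the α,β-unsaturated ester reflects general Wittig chemistry with stabilized ylides rather than a specific claim from the text):

```latex
% Wittig olefination with the stabilized ylide Ph3P=CHCO2Et
\mathrm{Ph_3P{=}CHCO_2Et} \;+\; \mathrm{RCHO}
\;\longrightarrow\; \mathrm{RCH{=}CHCO_2Et} \;+\; \mathrm{Ph_3P{=}O}
```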
Triphenylcarbethoxymethylenephosphorane
[ "Chemistry" ]
105
[ "Organophosphorus compounds", "Organic compounds", "Functional groups" ]
35,260,696
https://en.wikipedia.org/wiki/Peptide%20microarray
A peptide microarray (also commonly known as peptide chip or peptide epitope microarray) is a collection of peptides displayed on a solid surface, usually a glass or plastic chip. Peptide chips are used by scientists in biology, medicine and pharmacology to study binding properties and functionality and kinetics of protein-protein interactions in general. In basic research, peptide microarrays are often used to profile an enzyme (like kinase, phosphatase, protease, acetyltransferase, histone deacetylase etc.), to map an antibody epitope or to find key residues for protein binding. Practical applications are seromarker discovery, profiling of changing humoral immune responses of individual patients during disease progression, monitoring of therapeutic interventions, patient stratification and development of diagnostic tools and vaccines. Principle The assay principle of peptide microarrays is similar to an ELISA protocol. The peptides (up to tens of thousands in several copies) are linked to the surface of a glass chip typically the size and shape of a microscope slide. This peptide chip can directly be incubated with a variety of different biological samples like purified enzymes or antibodies, patient or animal sera, cell lysates and then be detected through a label-dependent fashion, for example, by a primary antibody that targets the bound protein or modified substrates. After several washing steps a secondary antibody with the needed specificity (e.g. anti IgG human/mouse or anti phosphotyrosine or anti myc) is applied. Usually, the secondary antibody is tagged by a fluorescence label that can be detected by a fluorescence scanner. Other label-dependent detection methods includes chemiluminescence, colorimetric or autoradiography. Label-dependent assays are rapid and convenient to perform, but risk giving rise to false positive and negative results. More recently, label-free detection including surface plasmon resonance (SPR) spectroscopy, mass spectrometry (MS) and many other optical biosensors have been employed to measuring a broad range of enzyme activities. Peptide microarrays show several advantages over protein microarrays: Ease and cost of synthesis Extended shelf stability Detection of binding events on epitope level, enabling study of i.e. epitope spreading Flexible design for peptide sequence (i.e. posttranslational modifications, sequence diversity, non-natural amino acids ...) and immobilization chemistries Higher batch-to-batch reproducibility Production of a peptide microarray A peptide microarray is a planar slide with peptides spotted onto it or assembled directly on the surface by in-situ synthesis. Whereas peptides spotted can undergo quality controls that include mass spectrometer analysis and concentration normalization before spotting and result from a single synthetic batch, peptides synthesized directly on the surface may suffer from batch-to-batch variation and limited quality control options. However, peptide synthesis on chip allows the parallel synthesis of tens of thousands of peptides providing larger peptide libraries paired with lower synthesis costs. Peptides are ideally covalently linked through a chemoselective bond leading to peptides with the same orientation for interaction profiling. Some alternative procedures describe unspecific covalent binding and adhesive immobilization. However, lithographic methods can be used to overcome the problem of excessive number of coupling cycles. 
Combinatorial synthesis of peptide arrays onto a microchip by laser printing has been described, where a modified colour laser printer is used in combination with conventional solid-phase peptide synthesis chemistry. Amino acids are immobilized within toner particles, and the peptides are printed onto the chip surface in consecutive, combinatorial layers. Melting of the toner upon the start of the coupling reaction ensures that delivery of the amino acids and the coupling reaction can be performed independently. Another advantage of this method is that each amino acid can be produced and purified separately, followed by embedding it into the toner particles, which allows long-term storage. Applications of peptide microarrays Peptide microarrays can be used to study different kinds of protein-protein interactions, specially those involving modular protein substructures called peptide recognition modules or, most commonly, protein interaction domains. The reason for this is that such protein substructures recognize short linear motifs often exposed in natively unstructured regions of the binding partner, such that the interaction can be modelled in vitro by peptides as probes and the peptide recognition module as analyte. Most publications can be found in the context of immune monitoring and enzyme profiling. Immunology Mapping of immunodominant regions in antigens or whole proteomes Seromarker discovery Monitoring of clinical trials Profiling of antibody signatures and epitope mapping Finding neutralizing antibodies Enzyme profiling Identification of substrates for orphan enzymes Optimization of known enzyme substrates Elucidation of signal transduction pathways Detection of contaminating enzyme activities Consensus sequence and key residues determination Identifying sites for protein-protein interactions within a complex Analysis and evaluation of results Data analysis and evaluation of results is the most important part of every microarray experiment. After scanning the microarray slides, the scanner records a 20-bit, 16-bit or 8-bit numeric image in tagged image file format (*.tif). The .tif-image enables interpretation and quantification of each fluorescent spot on the scanned microarray slide. This quantitative data is the basis for performing statistical analysis on measured binding events or peptide modifications on the microarray slide. For evaluation and interpretation of detected signals an allocation of the peptide spot (visible in the image) and the corresponding peptide sequence has to be performed. The data for allocation is usually saved in the GenePix Array List (.gal) file and supplied together with the peptide microarray. The .gal-file (a tab-separated text file) can be opened using microarray quantification software-modules or processed with a text editor (e.g. notepad) or Microsoft Excel. This "gal" file is most often provided by the microarray manufacturer and is generated by input txt files and tracking software built into the robots that do the microarray manufacturing. References Biological techniques and tools Immunology Microarrays Protein methods
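Since the spot-to-peptide allocation in a .gal file is essentially a tab-separated table, a minimal Python sketch of the kind of lookup described above might look as follows; the column names Block, Row, Column, ID and the file name are illustrative assumptions, and real GenePix Array List files carry additional header records that a production parser would have to skip:

```python
import csv

def load_gal_table(path):
    """Map (block, row, column) -> peptide ID from a tab-separated .gal data table."""
    spots = {}
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        for record in reader:
            key = (int(record["Block"]), int(record["Row"]), int(record["Column"]))
            spots[key] = record["ID"]          # peptide identifier printed at this spot
    return spots

# Example use: look up which peptide sits at block 1, row 2, column 3
# spots = load_gal_table("array_layout.gal")
# print(spots[(1, 2, 3)])
```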
Peptide microarray
[ "Chemistry", "Materials_science", "Biology" ]
1,314
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Protein methods", "Protein biochemistry", "Bioinformatics", "Immunology", "Molecular biology techniques", "nan" ]
37,702,668
https://en.wikipedia.org/wiki/Immersion%20chiller
Immersion chillers work by circulating a cooling fluid (usually tap water from a garden hose or faucet) through a copper/stainless steel coil that is placed directly in the hot wort. As the cooling fluid runs through the coil it absorbs and carries away heat until the wort has cooled to the desired temperature. The advantage of using a copper or stainless steel immersion chiller is the lower risk of contamination versus other methods when used in an amateur or homebrewing environment. The clean chiller is placed directly in the still boiling wort and thus sanitized before the cooling process begins. See also Brewing#Wort cooling coolship, alternate equipment for cooling References Thermodynamic systems Fluid dynamics
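As a rough illustration of the heat balance involved, the Python sketch below estimates the heat the coil must carry away; all numbers (batch size, temperatures, coolant efficiency) are made-up example values, not taken from the text:

```python
# Estimate the heat an immersion chiller must remove and a lower bound on the
# cooling-water needed, assuming the water leaves the coil at the wort temperature
# (a best-case assumption; real chillers are less efficient).
c_water = 4.186          # kJ/(kg*K), specific heat of water (wort approximated as water)

wort_kg = 20.0                           # ~20 L batch
t_wort_start, t_wort_end = 100.0, 20.0
q_kj = wort_kg * c_water * (t_wort_start - t_wort_end)   # heat to remove

t_tap = 15.0             # tap-water inlet temperature
# Each kg of coolant absorbs at most c_water * (t_wort_start - t_tap) while the wort is hot,
# so this gives only a crude lower bound on the water used over the whole cool-down.
min_coolant_kg = q_kj / (c_water * (t_wort_start - t_tap))

print(f"heat removed: {q_kj:.0f} kJ")
print(f"coolant needed (ideal lower bound): {min_coolant_kg:.1f} kg")
```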
Immersion chiller
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
146
[ "Thermodynamic systems", "Thermodynamics stubs", "Dynamical systems", "Chemical engineering", "Physical systems", "Thermodynamics", "Piping", "Physical chemistry stubs", "Fluid dynamics" ]
37,702,763
https://en.wikipedia.org/wiki/Kempf%E2%80%93Ness%20theorem
In algebraic geometry, the Kempf–Ness theorem, introduced by George Kempf and Linda Ness, gives a criterion for the stability of a vector in a representation of a complex reductive group. If the complex vector space is given a norm that is invariant under a maximal compact subgroup of the reductive group, then the Kempf–Ness theorem states that a vector is stable if and only if the norm attains a minimum value on the orbit of the vector. The theorem has the following consequence: If X is a complex smooth projective variety and if G is a reductive complex Lie group, then X//G (the GIT quotient of X by G) is homeomorphic to the symplectic quotient of X by a maximal compact subgroup of G. References Invariant theory Theorems in algebraic geometry
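A compact way to restate the criterion above, written as a sketch in standard notation (the symbols V, G, K, v and the norm are conventional choices, not fixed by the text): for a K-invariant norm on the representation V,

```latex
% Kempf–Ness criterion: stability is detected by norm minimization on the orbit
v \in V \text{ is stable}
\;\Longleftrightarrow\;
\|\cdot\| \text{ attains a minimum on the orbit } G\cdot v .
```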
Kempf–Ness theorem
[ "Physics", "Mathematics" ]
160
[ "Theorems in algebraic geometry", "Symmetry", "Group actions", "Theorems in geometry", "Invariant theory" ]
37,704,809
https://en.wikipedia.org/wiki/ICLIP
iCLIP (individual-nucleotide resolution crossLinking and immunoprecipitation) is a variant of the original CLIP method used for identifying protein-RNA interactions, which uses UV light to covalently bind proteins and RNA molecules to identify RNA binding sites of proteins. This crosslinking step has generally less background than standard RNA immunoprecipitation (RIP) protocols, because the covalent bond formed by UV light allows RNA to be fragmented, followed by stringent purification, and this also enables CLIP to identify the positions of protein-RNA interactions. As with all CLIP methods, iCLIP allows for a very stringent purification of the linked protein-RNA complexes by stringent washing during immunoprecipitation followed by SDS-PAGE and transfer to nitrocellulose. The labelled protein-RNA complexes are then visualised for quality control, excised from nitrocellulose, and treated with proteinase to release the RNA, leaving only a few amino acids at the crosslink site of the RNA. The RNA is then reverse transcribed, causing most cDNAs to truncate at the crosslink site, and the key innovation and unique feature in the development of iCLIP was to enable such truncated cDNAs to be PCR amplified and sequenced using a next-generation sequencing platform. iCLIP also added a random sequence (unique molecular identifier, UMI) along with experimental barcodes to the primer used for reverse transcription, thereby barcoding unique cDNAs to minimise any errors or quantitative biases of PCR, and thus improving the quantification of binding events. Enabling amplification of truncated cDNAs led to identification of the sites of RNA-protein interactions at high resolution by analysing the starting position of truncated cDNAs, as well as their precise quantification using UMIs with software called "iCount". All these innovations of iCLIP were adopted by later variants of CLIP such as eCLIP and irCLIP. An additional approach to identify protein-RNA crosslink sites is the mutational analysis of read-through cDNAs, such as nucleotide transitions in PAR-CLIP, or other types of errors that can be introduced by reverse transcriptase when it reads through the crosslink site in standard HITS-CLIP method with the Crosslink induced mutation site (CIMS) analysis. The quantitative nature of iCLIP enabled pioneering comparison across samples at the level of full RNAs, or to study competitive binding of multiple RNA-binding proteins or subtle changes in binding of a mutant protein at the level of binding peaks. An improved variant of iCLIP (iiCLIP) was recently developed to improve the efficiency and convenience of cDNA library preparation, for example by enzymatically removing adaptor after ligation to minimise artefacts caused by adaptor carry-over, introducing the non-radioactive visualisation of the protein-RNA complex (as done originally by irCLIP), increasing efficiency of ligation, proteinase and reverse transcription reactions, and enabling bead-based purification of cDNAs. Analysis of CLIP sequencing data benefits from use of customised computational software, much of which is available as part of the Nextflow pipeline for CLIP analysis, and specialised software is available for rapid demultiplexing of complex multiplexed libraries, comparative visualisation of crosslinking profiles across RNAs, identification of the peaks of clustered protein-RNA crosslink sites, and identification of sequence motifs enriched around prominent crosslinks. 
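The UMI-based quantification described above amounts to collapsing reads that share both the same mapped truncation position and the same random barcode; a minimal Python sketch follows (the read records, field layout, and the one-nucleotide-upstream convention for the crosslink site are illustrative assumptions, not the behavior of any specific iCLIP software):

```python
from collections import defaultdict

# Each aligned read: (chromosome, strand, 5'-end position of the truncated cDNA, UMI sequence).
reads = [
    ("chr1", "+", 1000, "ACGTAG"),
    ("chr1", "+", 1000, "ACGTAG"),   # PCR duplicate: same position, same UMI
    ("chr1", "+", 1000, "TTGACA"),   # independent cDNA at the same crosslink site
    ("chr1", "+", 1057, "ACGTAG"),
]

unique_umis = defaultdict(set)
for chrom, strand, pos, umi in reads:
    # Take the crosslink site as one nucleotide upstream of the cDNA start (strand-aware).
    crosslink = (chrom, strand, pos - 1 if strand == "+" else pos + 1)
    unique_umis[crosslink].add(umi)

# Count of unique cDNAs (crosslink events) per site, with PCR duplicates collapsed.
for site, umis in sorted(unique_umis.items()):
    print(site, len(umis))
```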
Moreover, iMaps provides a free CLIP analysis web platform and well-curated community database to facilitate studies of RNA regulatory networks across organisms, with a backend based on the Nextflow pipeline. It is applicable to the many variant protocols of CLIP (such as iCLIP, eCLIP, etc), and can be used to analyse unpublished data in a secure manner, or to obtain public CLIP data in a well-annotated format, along with various forms of quality control, visualisation and comparison. Questions on the experimental and computational challenges are collated on the Q&A CLIP Forum. References Biochemistry detection methods Genetics techniques RNA Protein methods
ICLIP
[ "Chemistry", "Engineering", "Biology" ]
847
[ "Biochemistry methods", "Genetics techniques", "Protein methods", "Protein biochemistry", "Genetic engineering", "Chemical tests", "Biochemistry detection methods" ]
37,704,906
https://en.wikipedia.org/wiki/Koopman%E2%80%93von%20Neumann%20classical%20mechanics
The Koopman–von Neumann (KvN) theory is a description of classical mechanics as an operatorial theory similar to quantum mechanics, based on a Hilbert space of complex, square-integrable wavefunctions. As its name suggests, the KvN theory is loosely related to work by Bernard Koopman and John von Neumann in 1931 and 1932, respectively. As explained in this entry, however, the historical origins of the theory and its name are complicated. History Statistical mechanics describes macroscopic systems in terms of statistical ensembles, such as the macroscopic properties of an ideal gas. Ergodic theory is a branch of mathematics arising from the study of statistical mechanics. Ergodic theory The origins of the Koopman–von Neumann theory are tightly connected with the rise of ergodic theory as an independent branch of mathematics, in particular with Boltzmann's ergodic hypothesis. In 1931 Koopman and André Weil independently observed that the phase space of the classical system can be converted into a Hilbert space. According to this formulation, functions representing physical observables become vectors, with an inner product defined in terms of a natural integration rule over the system's probability density on phase space. This reformulation makes it possible to draw interesting conclusions about the evolution of physical observables from Stone's theorem, which had been proved shortly before. This finding inspired von Neumann to apply the novel formalism to the ergodic problem. Subsequently, he published several seminal results in modern ergodic theory, including the proof of his mean ergodic theorem. Historical Misattribution The Koopman-von Neumann theory is often used today to refer to a reformulation of classical mechanics in which a classical system's probability density on phase space is expressed in terms of an underlying wavefunction, meaning that the vectors of the classical Hilbert space are wavefunctions, rather than physical observables. This approach did not originate with Koopman or von Neumann, for whom the classical Hilbert space consisted of physical observables, rather than wavefunctions. Indeed, as noted in 1961 by Thomas F. Jordan and E. C. George Sudarshan:It was shown by Koopman how the dynamical transformations of classical mechanics, considered as measure preserving transformations of the phase space, induce unitary transformations on the Hilbert space of functions which are square integrable with respect to a density function over the phase space. This Hilbert space formulation of classical mechanics was further developed by von Neumann. It is to be noted that this Hilbert space corresponds not to the space of state vectors in quantum mechanics but to the Hilbert space of operators on the state vectors (with the trace of the product of two operators being chosen as the scalar product).The practice of expressing classical probability distributions on phase space in terms of underlying wavefunctions goes back at least to the 1952–1953 work of Mário Schenberg on statistical mechanics. This method was independently developed several more times, by Angelo Loinger in 1962, by Giacomo Della Riccia and Norbert Wiener in 1966, and by E. C. George Sudarshan himself in 1976. The name "Koopman-von Neumann theory" for representing classical systems based on Hilbert spaces made up of classical wavefunctions is therefore an example of Stigler's law of eponymy. This misattribution appears to have first shown up in a paper by Danilo Mauro in 2002. 
Definition and dynamics Derivation starting from the Liouville equation In the approach of Koopman and von Neumann (KvN), dynamics in phase space is described by a (classical) probability density, recovered from an underlying wavefunction – the Koopman–von Neumann wavefunction – as the square of its absolute value (more precisely, as the amplitude multiplied by its own complex conjugate). This stands in analogy to the Born rule in quantum mechanics. In the KvN framework, observables are represented by commuting self-adjoint operators acting on the Hilbert space of KvN wavefunctions. The commutativity physically implies that all observables are simultaneously measurable. Contrast this with quantum mechanics, where observables need not commute, which underlies the uncertainty principle, the Kochen–Specker theorem, and the Bell inequalities. The KvN wavefunction is postulated to evolve according to exactly the same Liouville equation as the classical probability density. From this postulate it can be shown that the probability density dynamics is indeed recovered. Derivation starting from operator axioms Conversely, it is possible to start from operator postulates, similar to the Hilbert space axioms of quantum mechanics, and derive the equation of motion by specifying how expectation values evolve. The relevant axioms are that, as in quantum mechanics, (i) the states of a system are represented by normalized vectors of a complex Hilbert space, and the observables are given by self-adjoint operators acting on that space, (ii) the expectation value of an observable is obtained in the same manner as the expectation value in quantum mechanics, (iii) the probabilities of measuring certain values of some observables are calculated by the Born rule, and (iv) the state space of a composite system is the tensor product of the subsystems' spaces. These axioms allow us to recover the formalism of both classical and quantum mechanics. Specifically, under the assumption that the classical position and momentum operators commute, the Liouville equation for the KvN wavefunction is recovered from averaged Newton's laws of motion. However, if the coordinate and momentum obey the canonical commutation relation, the Schrödinger equation of quantum mechanics is obtained. Measurements In the Hilbert space and operator formulation of classical mechanics, the Koopman–von Neumann wavefunction takes the form of a superposition of eigenstates, and measurement collapses the KvN wavefunction to the eigenstate associated with the measurement result, in analogy to the wave function collapse of quantum mechanics. However, it can be shown that for Koopman–von Neumann classical mechanics non-selective measurements leave the KvN wavefunction unchanged. KvN vs Liouville mechanics The KvN dynamical equation and the Liouville equation are first-order linear partial differential equations. One recovers Newton's laws of motion by applying the method of characteristics to either of these equations. Hence, the key difference between KvN and Liouville mechanics lies in weighting individual trajectories: arbitrary weights, underlying the classical wave function, can be utilized in the KvN mechanics, while only positive weights, representing the probability density, are permitted in the Liouville mechanics.
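A minimal numerical sketch can make the first postulate above concrete: because the KvN wavefunction obeys the classical Liouville equation, it can be propagated along the exact phase-space characteristics, and its squared modulus then transports exactly like a Liouville probability density. The Python fragment below is an illustration only; the one-dimensional harmonic oscillator with unit mass and frequency, the complex Gaussian initial wavefunction, and all grid parameters are assumptions chosen for the example.

import numpy as np

# Illustrative assumptions: 1-D harmonic oscillator, m = 1, omega = 1.
# The KvN wavefunction psi(x, p, t) obeys the classical Liouville equation
#   d(psi)/dt = -(p/m) d(psi)/dx + m*omega**2*x d(psi)/dp,
# whose solution transports initial values along the classical trajectories:
#   psi(z, t) = psi0(Phi_{-t}(z)), where Phi_t is the Hamiltonian flow.

def flow_backward(x, p, t, m=1.0, omega=1.0):
    """Map a phase-space point (x, p) backward in time by t under the oscillator flow."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    x0 = x * c - (p / (m * omega)) * s
    p0 = p * c + m * omega * x * s
    return x0, p0

def psi0(x, p):
    """Initial KvN wavefunction: a complex Gaussian (an arbitrary illustrative choice)."""
    return np.exp(-((x - 1.0) ** 2) - (p - 0.5) ** 2) * np.exp(1j * 0.3 * x * p)

xs = np.linspace(-4.0, 4.0, 201)
ps = np.linspace(-4.0, 4.0, 201)
X, P = np.meshgrid(xs, ps, indexing="ij")
dxdp = (xs[1] - xs[0]) * (ps[1] - ps[0])

t = 0.7
X0, P0 = flow_backward(X, P, t)
psi_t = psi0(X0, P0)          # KvN wavefunction at time t
rho_t = np.abs(psi_t) ** 2    # probability density via the Born-like rule

# Checks: the norm is conserved (the flow preserves phase-space volume), and
# the mean position computed from |psi|**2 follows the classical trajectory of
# the initial mean (exact here because the oscillator dynamics is linear).
norm_0 = np.sum(np.abs(psi0(X, P)) ** 2) * dxdp
norm_t = np.sum(rho_t) * dxdp
mean_x = np.sum(X * rho_t) * dxdp / norm_t
print("norm at t=0  :", norm_0)
print("norm at t=0.7:", norm_t)
print("<x>(t) from |psi|^2     :", mean_x)
print("classically evolved mean:", 1.0 * np.cos(t) + 0.5 * np.sin(t))

The phase of the wavefunction carries information beyond the density; as stated above, this freedom to weight trajectories with arbitrary amplitudes is what distinguishes the KvN description from plain Liouville mechanics.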
Quantum analogy Being explicitly based on the Hilbert space language, the KvN classical mechanics adopts many techniques from quantum mechanics, for example, perturbation and diagram techniques as well as functional integral methods. The KvN approach is very general, and it has been extended to dissipative systems, relativistic mechanics, and classical field theories. The KvN approach is fruitful in studies on the quantum-classical correspondence as it reveals that the Hilbert space formulation is not exclusively quantum mechanical. Even Dirac spinors are not exceptionally quantum as they are utilized in the relativistic generalization of the KvN mechanics. Similarly as the more well-known phase space formulation of quantum mechanics, the KvN approach can be understood as an attempt to bring classical and quantum mechanics into a common mathematical framework. In fact, the time evolution of the Wigner function approaches, in the classical limit, the time evolution of the KvN wavefunction of a classical particle. However, a mathematical resemblance to quantum mechanics does not imply the presence of hallmark quantum effects. In particular, impossibility of double-slit experiment and Aharonov–Bohm effect are explicitly demonstrated in the KvN framework. See also Classical mechanics Statistical mechanics Liouville's theorem Quantum mechanics Phase space formulation of quantum mechanics Wigner quasiprobability distribution Dynamical systems Ergodic theory References Further reading PhD thesis, Università degli Studi di Trieste. H.R. Jauslin, D. Sugny, Dynamics of mixed classical-quantum systems, geometric quantization and coherent states, Lecture Note Series, IMS, NUS, Review Vol., August 13, 2009 The Legacy of John von Neumann (Proceedings of Symposia in Pure Mathematics, vol 50), edited by James Glimm, John Impagliazzo, Isadore Singer. — Amata Graphics, 2006. — U. Klein, From Koopman–von Neumann theory to quantum theory, Quantum Stud.: Math. Found. (2018) 5:219–227. Classical mechanics Mathematical physics Articles containing video clips
Koopman–von Neumann classical mechanics
[ "Physics", "Mathematics" ]
1,829
[ "Applied mathematics", "Theoretical physics", "Classical mechanics", "Mechanics", "Mathematical physics" ]
37,705,238
https://en.wikipedia.org/wiki/Glycogenin-1
Glycogenin-1 is an enzyme that is involved in the biosynthesis of glycogen. It is capable of self-glucosylation, forming an oligosaccharide primer that serves as a substrate for glycogen synthase. This is done through an inter-subunit mechanism. It also plays a role in the regulation of glycogen metabolism. Recombinant human glycogenin-1 has been expressed in E. coli and purified using conventional chromatography techniques. Glycogen metabolism Glycogen is a multi-branched polysaccharide. It is the primary means of glucose storage in animal cells. In the human body, the two main tissues which store glycogen are the liver and skeletal muscle. Glycogen is typically more concentrated in the liver, but because humans have much more muscle mass, the muscles store about three quarters of the total glycogen in the body. Location of glycogen The function of liver glycogen is to maintain glucose homeostasis, generating glucose via glycogenolysis to compensate for the decrease of glucose levels that can occur between meals. Thanks to the presence of the enzyme glucose-6-phosphatase, hepatocytes are capable of turning glycogen into glucose and releasing it into the blood to prevent hypoglycemia. In skeletal muscle, glycogen is used as an energy source for muscle contraction during exercise. The different functions of glycogen in muscle and liver mean that the regulation mechanisms of its metabolism differ in each tissue. These mechanisms are based mainly on differences in the structure and regulation of the enzymes that catalyze its synthesis, glycogen synthase (GS), and its degradation, glycogen phosphorylase (GP). Glycogen synthesis Glycogenin is the initiator of glycogen biosynthesis. This protein is a glycosyltransferase that is capable of autoglycosylation using UDP-glucose, extending itself until it forms an oligosaccharide primer of about 8 glucose units. Glycogenin is an oligomer and is capable of interacting with several proteins. In recent years, a family of proteins, the GNIPs (glycogenin-interacting proteins), has been identified that interacts with glycogenin and stimulates its autoglycosylation activity. Glycogenin-1 In humans, two isoforms of glycogenin can be expressed: glycogenin-1, with a molecular weight of 37 kDa and encoded by the GYG1 gene, which is expressed mostly in muscle; and glycogenin-2, with a molecular weight of 66 kDa and encoded by the GYG2 gene, which is expressed mainly in the liver, cardiac muscle and other types of tissue, but not in skeletal muscle. Glycogenin-1 was first described through analysis of skeletal muscle glycogen, where it was found to be joined by a covalent bond to each mature molecule of muscle glycogen. Gene Structure The glycogenin-1 gene, which spans over 13 kb, consists of seven exons and six introns. Its proximal promoter contains a TATA box, a cyclic AMP responsive element, and two putative Sp1 binding sites in a CpG island, a DNA region with a high frequency of CpG sites. There are also nine E-boxes that bind basic helix-loop-helix muscle-specific transcription factors. Location and transcription The GYG1 gene is located on the long arm of chromosome 3, between positions 24 and 25, from base pair 148,709,194 to base pair 148,745,455. Transcription of human glycogenin-1 is mainly initiated at 80 bp and 86 bp upstream of the translation initiation codon. The promoter contains binding sites for several transcription factors, for example GATA, activator proteins 1 and 2 (AP-1 and AP-2), and numerous potential Octamer-1 binding sites.
Deficiency Glycogenin-1 deficiency leads to glycogen storage disease type XV. Mutation Glycogenin-1 deficiency was identified by sequencing of the glycogenin-1 gene, GYG1, which revealed a nonsense mutation in one allele and a missense mutation, Thr83Met, in the other. The missense mutation resulted in inactivation of the autoglycosylation of glycogenin-1, which is necessary for the priming of glycogen synthesis in muscle. Autoglycosylation of glycogenin-1 occurs at Tyr195 through a glucose-1-O-tyrosine linkage. An induced missense mutation of this residue results in inactivated autoglycosylation. However, missense mutations affecting certain other residues of glycogenin-1 have also been shown to eliminate autoglycosylation. Consequences The phenotypic features of the skeletal muscle in a patient with this disorder are muscle glycogen depletion, mitochondrial proliferation, and a marked predominance of slow-twitch, oxidative muscle fibres. Mutations in the glycogenin-1 gene GYG1 are also a cause of cardiomyopathy and arrhythmia. See also Glucose Glycogen Glycogen synthase Glycogenin Gene Mutation References External links Glycobiology
Glycogenin-1
[ "Chemistry", "Biology" ]
1,199
[ "Biochemistry", "Glycobiology" ]
37,709,269
https://en.wikipedia.org/wiki/Creep%20and%20shrinkage%20of%20concrete
Creep and shrinkage of concrete are two physical properties of concrete. The creep of concrete, which originates from the calcium silicate hydrates (C-S-H) in the hardened Portland cement paste (which is the binder of mineral aggregates), is fundamentally different from the creep of metals and polymers. Unlike the creep of metals, it occurs at all stress levels and, within the service stress range, is linearly dependent on the stress if the pore water content is constant. Unlike the creep of polymers and metals, it exhibits multi-months aging, caused by chemical hardening due to hydration which stiffens the microstructure, and multi-year aging, caused by long-term relaxation of self-equilibrated micro-stresses in the nano-porous microstructure of the C-S-H. If concrete is fully dried, it does not creep, but it is next to impossible to dry concrete fully without severe cracking. Changes of pore water content due to drying or wetting processes cause significant volume changes of concrete in load-free specimens. They are called the shrinkage (typically causing strains between 0.0002 and 0.0005, and in low strength concretes even 0.0012) or swelling (< 0.00005 in normal concretes, < 0.00020 in high strength concretes). To separate shrinkage from creep, the compliance function , defined as the stress-produced strain (i.e., the total strain minus shrinkage) caused at time t by a unit sustained uniaxial stress applied at age , is measured as the strain difference between the loaded and load-free specimens. The multi-year creep evolves logarithmically in time (with no final asymptotic value), and over the typical structural lifetimes it may attain values 3 to 6 times larger than the initial elastic strain. When a deformation is suddenly imposed and held constant, creep causes relaxation of critically produced elastic stress. After unloading, creep recovery takes place, but it is partial, because of aging. In practice, creep during drying is inseparable from shrinkage. The rate of creep increases with the rate of change of pore humidity (i.e., relative vapor pressure in the pores). For small specimen thickness, the creep during drying greatly exceeds the sum of the drying shrinkage at no load and the creep of a loaded sealed specimen (Fig. 1 bottom). The difference, called the drying creep or Pickett effect (or stress-induced shrinkage), represents a hygro-mechanical coupling between strain and pore humidity changes. Drying shrinkage at high humidities (Fig. 1 top and middle) is caused mainly by compressive stresses in the solid microstructure which balance the increase in capillary tension and surface tension on the pore walls. At low pore humidities (<75%), shrinkage is caused by a decrease of the disjoining pressure across nano-pores less than about 3 nm thick, filled by adsorbed water. The chemical processes of Portland cement hydration lead to another type of shrinkage, called the autogeneous shrinkage, which is observed in sealed specimens, i.e., at no moisture loss. It is caused partly by chemical volume changes, but mainly by self-desiccation due to loss of water consumed by the hydration reaction. It amounts to only about 5% of the drying shrinkage in normal concretes, which self-desiccate to about 97% pore humidity. But it can equal the drying shrinkage in modern high-strength concretes with very low water-cement ratios, which may self-desiccate to as low as 75% humidity. The creep originates in the calcium silicate hydrates (C-S-H) of hardened Portland cement paste. 
It is caused by slips due to bond ruptures, with bond restorations at adjacent sites. The C-S-H is strongly hydrophilic, and has a colloidal microstructure disordered from a few nanometers up. The paste has a porosity of about 0.4 to 0.55 and an enormous specific surface area, roughly 500 m2/cm3. Its main component is the tri-calcium silicate hydrate gel (3 CaO · 2 SiO2 · 3 H2O, in short C3S2H3). The gel forms particles of colloidal dimensions, weakly bound by van der Waals forces. The physical mechanism and modeling are still being debated. The constitutive material model in the equations that follow is not the only one available but has at present the strongest theoretical foundation and fits best the full range of available test data. Stress–strain relation at constant environment In service, the stresses in structures are < 50% of concrete strength, in which case the stress–strain relation is linear, except for corrections due to microcracking when the pore humidity changes. The creep may thus be characterized by the compliance function J(t, t′) (Fig. 2). As the age at loading t′ increases, the creep value for a fixed load duration t − t′ diminishes. This phenomenon, called aging, causes J(t, t′) to depend not only on the time lag t − t′ but on both t and t′ separately. At variable stress , each stress increment applied at time produces strain history . The linearity implies the principle of superposition (introduced by Boltzmann and, for the case of aging, by Volterra). This leads to the (uniaxial) stress–strain relation of linear aging viscoelasticity: Here denotes shrinkage strain augmented by thermal expansion, if any. The integral is the Stieltjes integral, which admits histories with jumps; for time intervals with no jumps, one may set to obtain the standard (Riemann) integral. When history is prescribed, then Eq. (1) represents a Volterra integral equation for . This equation is not analytically integrable for realistic forms of J(t, t′), although numerical integration is easy. The solution for strain imposed at any age (and for ) is called the relaxation function . To generalize Eq. (1) to a triaxial stress–strain relation, one may assume the material to be isotropic, with an approximately constant creep Poisson ratio, . This yields volumetric and deviatoric stress–strain relations similar to Eq. (1) in which J(t, t′) is replaced by the bulk and shear compliance functions: At high stress, the creep law appears to be nonlinear (Fig. 2) but Eq. (1) remains applicable if the inelastic strain due to cracking with its time-dependent growth is included in . A viscoplastic strain needs to be added to only in the case that all the principal stresses are compressive and the smallest in magnitude is much larger in magnitude than the uniaxial compressive strength . In measurements, Young's elastic modulus depends not only on concrete age but also on the test duration because the curve of compliance versus load duration has a significant slope for all durations beginning with 0.001 s or less. Consequently, the conventional Young's elastic modulus should be obtained as , where is the test duration. The values day and days give good agreement with the standardized test of , including the growth of as a function of , and with the widely used empirical estimate . The zero-time extrapolation happens to be approximately age-independent, which makes a convenient parameter for defining . For creep at constant total water content, called the basic creep, a realistic rate form of the uniaxial compliance function (the thick curves in Fig.
1 bottom) was derived from the solidification theory: where ; = flow viscosity, which dominates multi-decade creep; = load duration; = 1 day, , ; = volume of gel per unit volume of concrete, growing due to hydration; and = empirical constants (of dimension ). Function gives age-independent delayed elasticity of the cement gel (hardened cement paste without its capillary pores) and, by integration, . Integration of gives as a non-integrable binomial integral, and so, if the values of are sought, they must be obtained by numerical integration or by an approximation formula (a good formula exists). However, for computer structural analysis in time steps, is not needed; only the rate is needed as the input. Equations (3) and (4) are the simplest formulae satisfying three requirements: 1) Asymptotically for both short and long times , , should be a power function of time; and 2) so should the aging rate, given by ) (power functions are indicated by self-similarity conditions); and 3) (this condition is required to prevent the principle of superposition from giving non-monotonic recovery curves after unloading which are physically objectionable). Creep at variable environment At variable mass of evaporable (i.e., not chemically bound) water per unit volume of concrete, a physically realistic constitutive relation may be based on the idea of microprestress , considered to be a dimensionless measure of the stress peaks at the creep sites in the microstructure. The microprestress is produced as a reaction to chemical volume changes and to changes in the disjoining pressures acting across the hindered adsorbed water layers in nanopores (which are < 1 nm thick on the average and at most up to about ten water molecules, or 2.7 nm, in thickness), confined between the C-S-H sheets. The disjoining pressures develop first due to unequal volume changes of hydration products. Later, they relax due to creep in the C-S-H so as to maintain thermodynamic equilibrium (i.e., equality of chemical potentials of water) with water vapor in the capillary pores, and build up due to any changes of temperature or humidity in these pores. The rate of bond breakages may be assumed to be a quadratic function of the level of microprestress, which requires Eq. (4) to be generalized as A crucial property is that the microprestress is not appreciably affected by the applied load (since pore water is much more compressible than the solid skeleton and behaves like a soft spring coupled in parallel with a stiff framework). The microprestress relaxes in time and its evolution at each point of a concrete structure may be solved from the differential equation where = positive constants (the absolute value ensures that could never become negative). The microprestress can model the fact that drying and cooling, as well as wetting and heating, accelerate creep. The fact that changes of or produce new microprestress peaks and thus activate new creep sites explains the drying creep effect. A part of this effect, however, is caused by the fact that microcracking in a companion load-free specimen renders its overall shrinkage smaller than the shrinkage in an uncracked (compressed) specimen, thus increasing the difference between the two (which is what defines creep). The concept of microprestress is also needed to explain the stiffening due to aging. One physical cause of aging is that the hydration products gradually fill the pores of hardened cement paste, as reflected in function in Eq. (3). 
But hydration ceases after about one year, yet the effect of the age at loading is strong even after many years. The explanation is that the microstress peaks relax with age, which reduces the number of creep sites and thus the rate of bond breakages. At variable environment, time in Eq. (3) must be replaced by equivalent hydration time where = decreasing function of (0 if about 0.8) and . In Eq. (4), must be replaced by where = reduced time (or maturity), capturing the effect of and on creep viscosity; = function of decreasing from 1 at to 0 at ; , 5000 K. The evolution of humidity profiles ( = coordinate vector) may be approximately considered as uncoupled from the stress and deformation problem and may be solved numerically from the diffusion equation div[grad } where = self-desiccation caused by hydration (which reaches about 0.97 in normal concretes and about 0.80 in high strength concretes), = diffusivity, which decreases about 20 times as drops from 1.0 to 0.6. The free (unrestrained) shrinkage strain rate is, approximately, where = shrinkage coefficient. Since the -values at various points are incompatible, the calculation of the overall shrinkage of structures as well as test specimens is a stress analysis problem, in which creep and cracking must be taken into account. For finite element structural analysis in time steps, it is advantageous to convert the constitutive law to a rate-type form. This may be achieved by approximating with a Kelvin chain model (or the associated relaxation function with a Maxwell chain model). The history integrals such as Eq. 1 then disappear from the constitutive law, the history being characterized by the current values of the internal state variables (the partial strains or stresses of the Kelvin or Maxwell chain). Conversion to a rate-type form is also necessary for introducing the effect of variable temperature, which affects (according to the Arrhenius law) both the Kelvin chain viscosities and the rate of hydration, as captured by . The former accelerates creep if the temperature is increased, and the latter decelerates creep. Three-dimensional tensorial generalization of Eqs. (3)-(7) is required for finite element analysis of structures. Approximate cross-section response at drying Although multidimensional finite element calculations of creep and moisture diffusion are nowadays feasible, simplified one-dimensional analysis of concrete beams or girders based on the assumption of planar cross sections remaining planar still reigns in practice. Although (in box girder bridges) it involves deflection errors of the order of 30%. In that approach, one needs as input the average cross-sectional compliance function (Fig. 1 bottom, light curves) and average shrinkage function of the cross section (Fig. 1 left and middle) ( = age at start of drying). Compared to the point-wise constitutive equation, the algebraic expressions for such average characteristics are considerably more complicated and their accuracy is lower, especially if the cross section is not under centric compression. The following approximations have been derived and their coefficients optimized by fitting a large laboratory database for environmental humidities below 98%: where = effective thickness, = volume-to-surface ratio, = 1 for normal (type I) cement; = shape factor (e.g., 1.0 for a slab, 1.15 for a cylinder); and , = constant; (all times are in days). Eqs. (3) and (4) apply except that must be replaced by where and . 
The form of the expression for shrinkage halftime is based on the diffusion theory. Function 'tanh' in Eq. 8 is the simplest function satisfying two asymptotic conditions ensuing from the diffusion theory: 1) for short times , and 2) the final shrinkage must be approached exponentially. Generalizations for the temperature effect exist, too. Empirical formulae have been developed for predicting the parameter values in the foregoing equations on the basis of concrete strength and some parameters of the concrete mix. However, they are very crude, leading to prediction errors with the coefficients of variation of about 23% for creep and 34% for drying shrinkage. These high uncertainties can be drastically reduced by updating certain coefficients of the formulae according to short-time creep and shrinkage tests of the given concrete. For shrinkage, however, the weight loss of the drying test specimens must also be measured (or else the problem of updating is ill-conditioned). A fully rational prediction of concrete creep and shrinkage properties from its composition is a formidable problem, far from resolved satisfactorily. Engineering applications The foregoing form of functions and has been used in the design of structures of high creep sensitivity. Other forms have been introduced into the design codes and standard recommendations of engineering societies. They are simpler though less realistic, especially for multi-decade creep. Creep and shrinkage can cause a major loss of prestress. Underestimation of multi-decade creep has caused excessive deflections, often with cracking, in many of large-span prestressed segmentally erected box girder bridges (over 60 cases documented). Creep may cause excessive stress and cracking in cable-stayed or arch bridges, and roof shells. Non-uniformity of creep and shrinkage, caused by differences in the histories of pore humidity and temperature, age and concrete type in various parts of a structures may lead to cracking. So may interactions with masonry or with steel parts, as in cable-stayed bridges and composite steel-concrete girders. Differences in column shortenings are of particular concern for very tall buildings. In slender structures, creep may cause collapse due to long-time instability. The creep effects are particularly important for prestressed concrete structures (because of their slenderness and high flexibility), and are paramount in safety analysis of nuclear reactor containments and vessels. At high temperature exposure, as in fire or postulated nuclear reactor accidents, creep is very large and plays a major role. In preliminary design of structures, simplified calculations may conveniently use the dimensionless creep coefficient = . The change of structure state from time of initial loading to time can simply, though crudely, be estimated by quasi-elastic analysis in which Young's modulus is replaced by the so-called age-adjusted effective modulus . The best approach to computer creep analysis of sensitive structures is to convert the creep law to an incremental elastic stress–strain relation with an eigenstrain. Eq. (1) can be used but in that form the variations of humidity and temperature with time cannot be introduced and the need to store the entire stress history for each finite element is cumbersome. It is better to convert Eq. (1) to a set of differential equations based on the Kelvin chain rheologic model. 
To this end, the creep properties in each sufficiently small time step may be considered as non-aging, in which case a continuous spectrum of retardation moduli of Kelvin chain may be obtained from by Widder's explicit formula for approximate Laplace transform inversion. The moduli () of the Kelvin units then follow by discretizing this spectrum. They are different for each integration point of each finite element in each time step. This way the creep analysis problem gets converted to a series of elastic structural analyses, each of which can be run on a commercial finite element program. For an example see the last reference below. See also Deformation (engineering) Selected bibliography References Bagheri, A., Jamali, A., Pourmir, M., and Zanganeh, H. (2019). "The Influence of Curing Time on Restrained Shrinkage Cracking of Concrete with Shrinkage Reducing Admixture," Advances in Civil Engineering Materials 8, no. 1: 596-610. https://doi.org/10.1520/ACEM20190100 ACI Committee 209 (1972). "Prediction of creep, shrinkage and temperature effects in concrete structures" ACI-SP27, Designing for Effects of Creep, Shrinkage and Temperature}, Detroit, pp. 51–93 (reaproved 2008) ACI Committee 209 (2008). Guide for Modeling and Calculating Shrinkage and Creep in Hardened Concrete ACI Report 209.2R-08, Farmington Hills. Brooks, J.J. (2005). "30-year creep and shrinkage of concrete." Magazine of Concrete Research, 57(9), 545–556. Paris, France. CEB-FIP Model Code 1990. Model Code for Concrete Structures. Thomas Telford Services Ltd., London, Great Britain; also published by Comité euro-international du béton (CEB), Bulletins d'Information No. 213 and 214, Lausanne, Switzerland. FIB Model Code 2011. "Fédération internationale de béton (FIB). Lausanne. Harboe, E.M., et al. (1958). "A comparison of the instantaneous and the sustained modulus of elasticity of concrete", Concr. Lab. Rep. No. C-354, Division of Engineering Laboratories, US Dept. of the Interior, Bureau of Reclamation, Denver, Colorado. Jirásek, M., and Bažant, Z.P. (2001). Inelastic analysis of structures, J. Wiley, London (chapters 27, 28). RILEM (1988a). Committee TC 69, Chapters 2 and 3 in Mathematical Modeling of Creep and Shrinkage of Concrete, Z.P. Bažant, ed., J. Wiley, Chichester and New York, 1988, 57–215. Troxell, G.E., Raphael, J.E. and Davis, R.W. (1958). "Long-time creep and shrinkage tests of plain and reinforced concrete" Proc. ASTM 58} pp. 1101–1120. Vítek, J.L. (1997). "Long-Term deflections of Large Prestressed Concrete Bridges". CEB Bulletin d'Information No. 235 – Serviceability Models – Behaviour and Modelling in Serviceability Limit States Including Repeated and Sustained Load, CEB, Lausanne, pp. 215–227 and 245–265. Wittmann, F.H. (1982). "Creep and shrinkage mechanisms." Creep and shrinkage of concrete structures, Z.P. Bažant and F.H. Wittmann, eds., J. Wiley, London 129–161. Bažant, Z.P., and Yu, Q. (2012). "Excessive long-time deflections of prestressed box girders." ASCE J. of Structural Engineering, 138 (6), 676–686, 687–696. Concrete Continuum mechanics Deformation (mechanics) Materials degradation Solid mechanics
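The time-step superposition procedure described above can be illustrated with a small worked example. The Python sketch below is illustrative only: the logarithmic compliance expression, the 30 GPa modulus, and the loading history are assumptions chosen for demonstration and are not the solidification-theory or B3-type formulae discussed above; the point is simply the discretization of the superposition integral of Eq. (1) into a sum over stress increments.

import numpy as np

def compliance(t, t_prime, E28=30.0e3, q=0.3):
    """Illustrative compliance function J(t, t') in 1/MPa (assumed form).

    1/E28 is the elastic part; the creep part grows with the logarithm of the
    load duration and decreases with the age at loading, mimicking aging.
    This is a toy expression, not the solidification-theory formula above.
    """
    elastic = 1.0 / E28
    creep = (q / E28) * np.log(1.0 + (t - t_prime)) / np.sqrt(t_prime)
    return elastic + creep

def strain_history(times, stress):
    """Strain at each time by superposition over the previous stress increments.

    eps(t_k) = sum_j J(t_k, t_j) * delta_sigma_j
    is the discretized form of the Stieltjes superposition integral (Eq. 1),
    with shrinkage and thermal strains omitted for simplicity.
    """
    sigma = np.asarray(stress, dtype=float)
    d_sigma = np.diff(np.concatenate(([0.0], sigma)))   # stress increments, MPa
    eps = np.zeros_like(sigma)
    for k, t_k in enumerate(times):
        for j in range(k + 1):
            if d_sigma[j] != 0.0:
                eps[k] += compliance(t_k, times[j]) * d_sigma[j]
    return eps

# Example history: load to 10 MPa at age 28 days, remove half the load at 200 days.
times = np.array([28.0, 50.0, 100.0, 200.0, 500.0, 1000.0])   # days
stress = np.array([10.0, 10.0, 10.0, 5.0, 5.0, 5.0])          # MPa
for t, e in zip(times, strain_history(times, stress)):
    print(f"t = {t:6.0f} d   strain = {e * 1e6:7.1f} microstrain")

The numbers themselves are meaningless beyond illustration; what matters is the structure of the computation, which is the same double loop over the stress history that a time-step finite element creep analysis replaces by a rate-type Kelvin-chain formulation in order to avoid storing the full history.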
Creep and shrinkage of concrete
[ "Physics", "Materials_science", "Engineering" ]
4,640
[ "Structural engineering", "Solid mechanics", "Continuum mechanics", "Deformation (mechanics)", "Classical mechanics", "Materials science", "Mechanics", "Concrete", "Materials degradation" ]
37,712,157
https://en.wikipedia.org/wiki/Einsteinium%28III%29%20iodide
Einsteinium triiodide is an iodide of the synthetic actinide einsteinium with the chemical formula EsI3. This crystalline salt is an amber-coloured solid. It glows red in the dark due to einsteinium's intense radioactivity. It crystallises in the hexagonal crystal system in the space group R3̄ with the lattice parameters a = 753 pm and c = 2084.5 pm, with six formula units per unit cell. Its crystal structure is isotypic with that of bismuth(III) iodide. References Further reading Einsteinium compounds Iodides Actinide halides
Einsteinium(III) iodide
[ "Chemistry" ]
132
[ "Inorganic compounds", "Inorganic compound stubs" ]
23,519,759
https://en.wikipedia.org/wiki/Cdx%20protein%20family
The Cdx protein family is a group of the transcription factor proteins which bind to DNA to regulate the expression of genes. In particular this family of proteins can regulate the Hox genes. They are regulators of embryonic development and hematopoiesis in vertebrates, and are also involved in the development of some types of gastrointestinal cancers and leukemias. Cdx proteins Humans have three genes (CDX1, CDX2, and CDX4) that encode the caudal proteins: Cdx1 protein Cdx2 protein Cdx4 protein The human Cdx2 family protein has 94% identity with the mouse Cdx2 and the hamster Cdx3. Cdx proteins and regulation of Hox gene expression Cdx proteins are key regulators of Hox genes. The vertebrate Cdx proteins act upstream of Hox genes. Cdx genes integrate the posteriorizing signals from retinoic acid and Wnt canonical pathways and relay this information to Hox promoters. Expression in mouse embryo Cdx2 expression begins at 3.5 days and is confined to the trophectoderm, being absent from the inner cell mass. From 8.5 days, Cdx2 begins to be expressed in embryonic tissues, principally in the posterior part of the gut from its earliest formation. See also Neural tube Protein family Transcription factor References Transcription factors Protein families
Cdx protein family
[ "Chemistry", "Biology" ]
284
[ "Gene expression", "Protein classification", "Signal transduction", "Induced stem cells", "Protein families", "Transcription factors" ]
39,171,284
https://en.wikipedia.org/wiki/Dirac%20equation%20in%20curved%20spacetime
In mathematical physics, the Dirac equation in curved spacetime is a generalization of the Dirac equation from flat spacetime (Minkowski space) to curved spacetime, a general Lorentzian manifold. Mathematical formulation Spacetime In full generality the equation can be defined on or a pseudo-Riemannian manifold, but for concreteness we restrict to pseudo-Riemannian manifold with signature . The metric is referred to as , or in abstract index notation. Frame fields We use a set of vierbein or frame fields , which are a set of vector fields (which are not necessarily defined globally on ). Their defining equation is The vierbein defines a local rest frame, allowing the constant Gamma matrices to act at each spacetime point. In differential-geometric language, the vierbein is equivalent to a section of the frame bundle, and so defines a local trivialization of the frame bundle. Spin connection To write down the equation we also need the spin connection, also known as the connection (1-)form. The dual frame fields have defining relation The connection 1-form is then where is a covariant derivative, or equivalently a choice of connection on the frame bundle, most often taken to be the Levi-Civita connection. One should be careful not to treat the abstract Latin indices and Greek indices as the same, and further to note that neither of these are coordinate indices: it can be verified that doesn't transform as a tensor under a change of coordinates. Mathematically, the frame fields define an isomorphism at each point where they are defined from the tangent space to . Then abstract indices label the tangent space, while greek indices label . If the frame fields are position dependent then greek indices do not necessarily transform tensorially under a change of coordinates. Raising and lowering indices is done with for latin indices and for greek indices. The connection form can be viewed as a more abstract connection on a principal bundle, specifically on the frame bundle, which is defined on any smooth manifold, but which restricts to an orthonormal frame bundle on pseudo-Riemannian manifolds. The connection form with respect to frame fields defined locally is, in differential-geometric language, the connection with respect to a local trivialization. Clifford algebra Just as with the Dirac equation on flat spacetime, we make use of the Clifford algebra, a set of four gamma matrices satisfying where is the anticommutator. They can be used to construct a representation of the Lorentz algebra: defining , where is the commutator. It can be shown they satisfy the commutation relations of the Lorentz algebra: They therefore are the generators of a representation of the Lorentz algebra . But they do not generate a representation of the Lorentz group , just as the Pauli matrices generate a representation of the rotation algebra but not . They in fact form a representation of However, it is a standard abuse of terminology to any representations of the Lorentz algebra as representations of the Lorentz group, even if they do not arise as representations of the Lorentz group. The representation space is isomorphic to as a vector space. In the classification of Lorentz group representations, the representation is labelled . The abuse of terminology extends to forming this representation at the group level. We can write a finite Lorentz transformation on as where is the standard basis for the Lorentz algebra. 
These generators have components or, with both indices up or both indices down, simply matrices which have in the index and in the index, and 0 everywhere else. If another representation has generators then we write where are indices for the representation space. In the case , without being given generator components for , this is not well defined: there are sets of generator components which give the same but different Covariant derivative for fields in a representation of the Lorentz group Given a coordinate frame arising from say coordinates , the partial derivative with respect to a general orthonormal frame is defined and connection components with respect to a general orthonormal frame are These components do not transform tensorially under a change of frame, but do when combined. Also, these are definitions rather than saying that these objects can arise as partial derivatives in some coordinate chart. In general there are non-coordinate orthonormal frames, for which the commutator of vector fields is non-vanishing. It can be checked that under the transformation if we define the covariant derivative , then transforms as This generalises to any representation for the Lorentz group: if is a vector field for the associated representation, When is the fundamental representation for , this recovers the familiar covariant derivative for (tangent-)vector fields, of which the Levi-Civita connection is an example. There are some subtleties in what kind of mathematical object the different types of covariant derivative are. The covariant derivative in a coordinate basis is a vector-valued 1-form, which at each point is an element of . The covariant derivative in an orthonormal basis uses the orthonormal frame to identify the vector-valued 1-form with a vector-valued dual vector which at each point is an element of using that canonically. We can then contract this with a gamma matrix 4-vector which takes values at in Dirac equation on curved spacetime Recalling the Dirac equation on flat spacetime, the Dirac equation on curved spacetime can be written down by promoting the partial derivative to a covariant one. In this way, Dirac's equation takes the following form in curved spacetime: where is a spinor field on spacetime. Mathematically, this is a section of a vector bundle associated to the spin-frame bundle by the representation Recovering the Klein–Gordon equation from the Dirac equation The modified Klein–Gordon equation obtained by squaring the operator in the Dirac equation, first found by Erwin Schrödinger as cited by Pollock is given by where is the Ricci scalar, and is the field strength of . An alternative version of the Dirac equation whose Dirac operator remains the square root of the Laplacian is given by the Dirac–Kähler equation; the price to pay is the loss of Lorentz invariance in curved spacetime. Note that here Latin indices denote the "Lorentzian" vierbein labels while Greek indices denote manifold coordinate indices. Action formulation We can formulate this theory in terms of an action. If in addition the spacetime is orientable, there is a preferred orientation known as the volume form . One can integrate functions against the volume form: The function is integrated against the volume form to obtain the Dirac action See also Dirac equation in the algebra of physical space Dirac spinor Maxwell's equations in curved spacetime Two-body Dirac equations References Quantum field theory Spinors Partial differential equations Fermions Curved spacetime
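As a compact summary of the construction described above, one common way of writing the main formulas is sketched below in LaTeX. The metric signature, the use of the Levi-Civita connection, the placement of frame indices, and the factor in front of the spin connection term are conventions assumed here for definiteness; other texts (and other parts of this article) may use different ones.

% vierbein (frame field) relation to the metric
g_{\mu\nu} = e_\mu{}^a \, e_\nu{}^b \, \eta_{ab}

% spin connection obtained from the covariant derivative of the frame
\omega_\mu{}^{ab} = e_\nu{}^a \left( \partial_\mu e^{\nu b} + \Gamma^\nu{}_{\mu\sigma} \, e^{\sigma b} \right)

% covariant derivative acting on a Dirac spinor
D_\mu \psi = \partial_\mu \psi + \tfrac{1}{4} \, \omega_{\mu a b} \, \gamma^a \gamma^b \, \psi

% Dirac equation in curved spacetime
\left( i \, \gamma^a \, e_a{}^\mu \, D_\mu - m \right) \psi = 0

In the flat-space limit with Cartesian coordinates the vierbein reduces to the identity, the spin connection vanishes, and the last line reduces to the ordinary Dirac equation of Minkowski space.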
Dirac equation in curved spacetime
[ "Physics", "Materials_science" ]
1,412
[ "Quantum field theory", "Equations of physics", "Fermions", "Eponymous equations of physics", "Quantum mechanics", "Subatomic particles", "Condensed matter physics", "Dirac equation", "Matter" ]
46,517,855
https://en.wikipedia.org/wiki/Relaxor%20ferroelectric
Relaxor ferroelectrics are ferroelectric materials that exhibit high electrostriction. , although they have been studied for over fifty years, the mechanism for this effect is still not completely understood, and is the subject of continuing research. Examples of relaxor ferroelectrics include: lead magnesium niobate (PMN) lead magnesium niobate-lead titanate (PMN-PT) lead lanthanum zirconate titanate (PLZT) lead scandium niobate (PSN) Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT) Barium Titanium-Barium Strontium Titanium (BT-BST) Applications Relaxor Ferroelectric materials find application in high efficiency energy storage and conversion as they have high dielectric constants, orders-of-magnitude higher than those of conventional ferroelectric materials. Like conventional ferroelectrics, Relaxor Ferroelectrics show permanent dipole moment in domains. However, these domains are on the nano-length scale, unlike conventional ferroelectrics domains that are generally on the micro-length scale, and take less energy to align. Consequently, Relaxor Ferroelectrics have very high specific capacitance and have thus generated interest in the fields of energy storage. Furthermore, due to their slim hysteresis curve with high saturated polarization and low remnant polarization, Relaxor ferroelectrics have high discharge energy density and high discharge rates. BT-BZNT Multilayer Energy Storage Ceramic Capacitors (MLESCC) were experimentally determined to have very high efficiency(>80%) and stable thermal properties over a wide temperature range. References Electric and magnetic fields in matter Ferroelectric materials Electrical phenomena
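The connection drawn above between a slim hysteresis loop and a high discharge energy density can be made quantitative with a short calculation. The Python sketch below is purely illustrative: the assumed power-law shape of the discharge branch, the 0.4 C/m2 saturation polarization, the remnant polarization values, and the 200 MV/m maximum field are assumptions for the example, not measured data for PMN-PT, BT-BZNT, or any other material named above. It integrates the recoverable energy density W_rec = ∫ E dP over the discharge branch.

import numpy as np

def recoverable_energy_density(p_max, p_r, e_max, n=1.0, samples=2000):
    """Recoverable (discharge) energy density W_rec = integral of E dP, in J/m^3.

    The discharge branch is modeled as P(E) = P_r + (P_max - P_r) * (E/E_max)**n,
    an assumed shape; n controls the curvature of the branch (n = 1 is linear).
    The integral runs from P_r (field removed) up to P_max (field applied).
    """
    E = np.linspace(0.0, e_max, samples)
    P = p_r + (p_max - p_r) * (E / e_max) ** n
    return np.trapz(E, P)            # integrate E with respect to P

e_max = 200.0e6                      # maximum applied field, V/m (assumed)
p_max = 0.40                         # saturation polarization, C/m^2 (assumed)

for p_r in (0.02, 0.10, 0.25):       # slim relaxor-like loop through square, lossy loop
    w = recoverable_energy_density(p_max, p_r, e_max)
    print(f"P_r = {p_r:4.2f} C/m^2  ->  W_rec ~ {w / 1.0e6:5.1f} J/cm^3")

Under these assumptions, lowering the remnant polarization from 0.25 to 0.02 C/m2 raises the recoverable energy density from about 15 to about 38 J/cm3, which is the qualitative argument made above for using relaxor ferroelectrics in energy storage.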
Relaxor ferroelectric
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
377
[ "Physical phenomena", "Materials science stubs", "Ferroelectric materials", "Electric and magnetic fields in matter", "Materials science", "Materials", "Electrical phenomena", "Condensed matter physics", "Electromagnetism stubs", "Condensed matter stubs", "Hysteresis", "Matter" ]
46,519,707
https://en.wikipedia.org/wiki/Low%20power%20flip-flop
Low power flip-flops are flip-flops that are designed for low-power electronics, such as smartphones and notebooks. A flip-flop, or latch, is a circuit that has two stable states and can be used to store state information. Motivation In most VLSI devices, a large portion of power dissipation is due to the clock network and clocked sequential elements, which can account for anywhere between 25% - 40% of the total power in a design. Sequential elements, latches, and flip-flops dissipate power when there is switching in their internal capacitance. This may happen with every clock transition/pulse into the sequential element. Sometimes the sequential elements need to change their state, but sometimes they retain their state and their output remains the same, before and after the clock pulse. This leads to unnecessary dissipation of power due to clock transition. If flip-flops are designed in such a way that they are able to gate the clock with respect to their own internal data path, power dissipation can be brought down. Techniques Conditional clocking Conditional pre-charging This technique is used for controlling the internal node in the pre charging path in a sequential element. In the above circuit, the D input is connected to the first NMOS in the PDN network (CMOS). When this input is high, the output should also be high. The clk input to the PMOS will charge the output node to high when clk is low. If the D input is already high, there is no need to charge the output to high again. Thus, if one can control this behaviour there can be a power reduction in the flip-flop. To control the internal node in the precharge path, a control switch is used as shown in Fig 1. Only a transition that is going to change the state of the output is allowed. As one of the input to flops is the clock, considering the clock (Clock signal) is the element that makes the most transition in a system, a technique such as conditional precharging can significantly help reduce power. Conditional capture This technique looks to prevent any necessary internal node transition by looking at the input and output and checking to see if there is a need to switch states. In this circuit, there is a control signal that is applied to control the switching of the internal nodes. We can see the clock is supplied to two NMOS in series. The discharge path will not be complete until the control signal allows the last NMOS to be on. This control signal could be generated by a simple circuit, with its inputs being the present output, input and the state of the clock (high or low). If the output of the flip-flop is low, and a high clock pulse is applied with the input being a low pulse, then there is no need for a state transition. The extra computation to sample the inputs cause an increase in setup time of the flip-flop; this is a disadvantage of this technique. Data transition look-ahead In Fig3, the circuit shows how the data transition technique can be beneficial for power saving. The XNOR logical function is performed on the input of the D flip-flop and the output Q. When Q and D are equal, output of the logical XNOR will be zero, generating no internal clock. The circuit can be broken down into 3 parts: data-transition look ahead, pulse generator, and clock generator. The pulse generator output is fed into the clock generator which is used to clock the D flip-flop. 
Based on the input and output signals, if there is a need to change the state of the D flip-flop, then the clock is allowed to switch to cause a transition; else, the clock is not allowed to transition. When the clock does not make a transition, some time has been already spent in computing the logic, and data from the D input may make it through the first stage of the flip-flop, consuming some power. This power consumption is still less than what an ordinary flipflop would have consumed with a clock transition and no change in output. Clock on demand Fig4 shows the clock on demand technique. The clock generator and pulse generator are combined in this implementation. The advantage of this is that there is reduction in area, improving energy efficiency. If the XNOR output is zero, then the pulse generator will not generate any internal signal from the external clock. If the output Q and input D do not match then the pulse generator will generate an internal clock to trigger a state transition. References Digital electronics
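The benefit of the data-transition look-ahead and clock-on-demand schemes can be estimated with a small behavioural model. The Python sketch below is an illustration only: the random input stream, the 30% toggle probability, and the assumption that each internal clock event costs one unit of switching activity are assumptions made for the example, not figures from any real cell library.

import random

def simulate(num_cycles=100_000, p_toggle=0.30, seed=1):
    """Count internal clock events for a plain D flip-flop versus a gated one.

    Plain flip-flop: its internal nodes see every clock edge.
    Gated flip-flop: the XNOR of D and Q suppresses the internal clock
    whenever the input would not change the stored state (D == Q).
    """
    random.seed(seed)
    q = 0
    plain_events = 0
    gated_events = 0
    for _ in range(num_cycles):
        # Next input bit: toggles with probability p_toggle, otherwise holds.
        d = q ^ (1 if random.random() < p_toggle else 0)
        plain_events += 1      # an ungated flop is clocked every cycle
        if d != q:             # look-ahead: generate the internal clock only on a real transition
            gated_events += 1
        q = d                  # the flop captures D on the (possibly gated) edge
    return plain_events, gated_events

plain, gated = simulate()
saving = 100.0 * (1.0 - gated / plain)
print(f"plain flip-flop : {plain} internal clock events")
print(f"gated flip-flop : {gated} internal clock events (~{saving:.0f}% fewer)")

The saving grows as the data toggles less often; against it stand the extra XNOR and pulse-generation logic and the longer setup time noted above, so the technique pays off mainly for registers whose contents change rarely.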
Low power flip-flop
[ "Engineering" ]
937
[ "Electronic engineering", "Digital electronics" ]
46,521,214
https://en.wikipedia.org/wiki/Supertransitive%20class
In set theory, a supertransitive class is a transitive class which includes as a subset the power set of each of its elements. Formally, let A be a transitive class. Then A is supertransitive if and only if ∀x (x ∈ A → P(x) ⊆ A). Here P(x) denotes the power set of x. See also Rank (set theory) References Set theory
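As a small concrete illustration, every finite stage V_n of the von Neumann hierarchy is supertransitive, and for hereditarily finite sets the definition can be checked mechanically. The Python sketch below models pure sets as frozensets; the choice of V_4 as the test case is arbitrary.

from itertools import combinations

def powerset(s):
    """All subsets of the frozenset s, returned as a frozenset of frozensets."""
    items = list(s)
    return frozenset(
        frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)
    )

def is_transitive(a):
    """A is transitive: every element of A is also a subset of A."""
    return all(x <= a for x in a)            # x <= a tests the subset relation

def is_supertransitive(a):
    """A is supertransitive: transitive, and P(x) is a subset of A for every x in A."""
    return is_transitive(a) and all(powerset(x) <= a for x in a)

v = frozenset()                  # V_0 = the empty set
for _ in range(4):               # build V_1, V_2, V_3, V_4 by repeated power sets
    v = powerset(v)

print("|V_4| =", len(v))         # 16, since |V_4| = 2 ** |V_3| = 2 ** 4
print("transitive      :", is_transitive(v))
print("supertransitive :", is_supertransitive(v))

Each stage satisfies the condition because any element x of V_{n+1} is a subset of V_n, so P(x) ⊆ P(V_n) = V_{n+1}.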
Supertransitive class
[ "Mathematics" ]
74
[ "Mathematical logic", "Set theory" ]
46,523,314
https://en.wikipedia.org/wiki/Double%20tee
A double tee or double-T beam is a load-bearing structure that resembles two T-beams connected to each other side by side. The strong bond of the flange (horizontal section) and the two webs (vertical members, also known as stems) creates a structure that is capable of withstanding high loads while having a long span. The typical sizes of double tees are up to for flange width, up to for web depth, and up to or more for span length. Double tees are pre-manufactured from prestressed concrete which allows construction time to be shortened. History The developments of double tee were started in the 1950s by two independent initiatives, one by Leap Associates founded by Harry Edwards in Florida, and the other by Prestressed Concrete of Colorado. They designed the wings to expand the structural channel in order to cover more area at a lower cost. In 1951, Harry Edwards and Paul Zia designed a wide prestressed double tee section. Non-prestressed double tees were constructed in Miami in 1952 followed by prestressed double tees in 1953. Separately, engineers of Prestressed Concrete of Colorado developed and constructed the first prestressed double tee which was wide called "twin tee" in late 1952. The early twin tee spans were between and . Those double tee spans were first used for the first time to build a cold storage building for Beatrice Foods in Denver. The early double tee spans of had grown to quickly. The Precast/Prestressed Concrete Institute (PCI) published the double tee load capacity calculation (load tables) for the first time in the PCI Design Handbook in 1971. The load tables use the code to identify double tee span type by using the width in feet, followed by "DT", followed by depth in inches, for example, 4DT14 is for wide, deep double tees. In its first publication there were seven double tee types from 4DT14 to 10DT32. The list included 8DT24 that were proven to be the most popular double tee type used for spans for several decades. Currently, the common double tee type is 12DT30 with pretopped surface on the flange. This type has been included in the PCI Design Handbook since 1999. The first building with all pre-stressed concrete columns, beams, and double tees was a two-story office building in Winter Haven, Florida, designed and built in 1961 by Gene Leedy. Leedy experimented when building his architectural office by using structural elements of prestressed concrete and designing the new "double-tee" structural elements. In their early days, the applications of double tees were limited to multi-story car park structures and roof structures of buildings, but they have now been used in highway structures as well. Manufacturing process Double tees are manufactured in factories. The process is the same as in other prestressed concrete manufacturing by building them on pretensioning beds. The beds for making double tees are of the typical sizes of the area that double tees will be used. In most cases, the lengths of the pretensioning beds are of about long. Applications Roofing In non-residential buildings, the roof structure may be flat. Structural concrete is an alternative for flat roof construction. There are three main categories for such method: precast/prestressed, cast-in-place and shell. Within the precast/prestressed concrete roofing, the double tees are the most common products used for roof span up to . Parking structures Modern multi-story parking structures are built from precast/prestressed concrete systems. 
The floor systems are mostly built from pre-topped double tees. This system evolved from the earlier use of tee systems where the flanges of the T-beams were connected. The concrete is then poured at the top of the tees during the construction to create the floor surface, hence the process is called field-placed concrete topping. In double-tee structures, the top concrete is usually made at the factory as an integral part of the precast double tee structure. Double tees are connected during the construction without topping with concrete to create the parking structure floor surface. A benefit of pre-topped double tees is a higher quality concrete for more durable surface to reduce traffic wears. Factories can produce the topping with minimum concrete strength of 5,000 psi. In some areas, the strength can be 6,000-8,000 psi. This compares to the field-placed concrete topping with the lower concrete strength of 4,000 psi. Typically, the double-tees are hung over a supporting structure. This is done by having dapped ends at the webs of the double tee (pictured). The dapped ends are sensitive to cracking at the supporting area. A recommendation to prevent cracking is to include reinforcing steel in the double-tee design to transfer the loads from the bearing area (the reduced-depth section) to the full-depth section of the web. In case that the cracks are developed after the parking structure is already in use, other methods to provide external support to the double-tees are needed. One of such alternatives is to use externally bonded carbon fiber reinforced polymer (FRP) to provide reinforcement. Bridges Prefabricated bridge designs have been used in many bridge constructions to reduce the construction time. In the United States, there are efforts to come up with Prefabricated Bridge Elements and Systems in many states. Double tee structure is an alternative for short to medium spans between . There are many standards such as double-tee beam of Texas Department of Transportation and the Northeast Extreme Tee (NEXT) Beam of the Northeast. A benefit of using double tees for bridge replacements is to shorten the construction time. Texas has a goal of shortening short-span bridge replacements to one month or less instead of 6 months in traditional bridge constructions. NEXT Beam development started in 2006 by the Precast/Prestressed Concrete Institute (PCI) North East to update regional standard on Accelerated Bridge Construction (ABC). The NEXT Beam design was inspired by double-tee designs that have been used to build railroad platform slabs. The use of double tees with wide flange permits fewer beams and to have them stay in place to form the deck, resulting in a shorter construction time. The first design was introduced in 2008 called "NEXT F" with flange thickness requires topping. This was used for the construction of the Maine State Route 103 bridge that crosses the York River. The seven-span long bridge was completed in 2010 as the first NEXT Beam bridge. The second design was introduced in 2010 for Sibley Pond Bridge at the border of Canaan and Pittsfield, Maine. The design was called "NEXT D" with flange thickness that does not require deck topping, allowing the wearing surface to be applied directly on to the beams. The combination of F and D called "NEXT E" was introduced in 2016. Concerns of using double tees in bridge constructions include bridge deck longitudinal cracks. 
As the connection points between the double tee beams are longitudinally along the traffic flow, any lateral movements of double tees can cause the road surface to crack longitudinally. These include differential rotation of double-tee flanges that can cause asphalt surface to raise or crack. A separation of the flanges can cause asphalt to sag into the gap forming a reflective crack. To reduce these problems, many methods have been developed to manage the lateral connections of the double tees. The materials used in the connections are backer rods, steel bars, welded plates, and grouts. Walls Double tees have been used in vertical load-bearing members such as exterior walls, and retaining walls. When using load-bearing double tee wall panels, it can significantly reduce construction time as a large area of walls can be covered in a short amount of time. Using load-bearing double tee wall panels in conjunction with double tee roof can reduce the amount of interior columns because double tee roof members can have long spans and the ends are connected to double tee walls to transfer the loads. Additionally, the ceiling can be raised higher as double tee wall members can have long spans also. This is suitable for warehouses as a large area with high ceiling is needed but without windows. This type of construction has been used since the 1970s. Precast Prestressed Concrete Institute included double tee wall panels in its PCI Design Handbook between 1971 and 2010. References Structural engineering Roofing materials de:Plattenbalken
Double tee
[ "Engineering" ]
1,727
[ "Structural engineering", "Civil engineering", "Construction" ]
48,079,393
https://en.wikipedia.org/wiki/Seshat%20%28project%29
The Seshat: Global History Databank (named after Seshat, the ancient Egyptian goddess of wisdom, knowledge, and writing) is an international scientific research project of the nonprofit Evolution Institute. Founded in 2011, the Seshat: Global History Databank gathers data into a single, large database that can be used to test scientific hypotheses. The Databank consults directly with expert scholars to code what historical societies and their environments were like in the form of accessible datapoints and thus forms a digital storehouse for data on the political and social organization of all human groups from the early modern back to the ancient and neolithic periods. The organizers of this research project contend that the mass of data then can be used to test a variety of competing hypotheses about the rise and fall of large-scale societies around the globe which may help science provide answers to global problems. The Seshat: Global History Databank claims to be a scientific approach to historical research and its large dataset, though compiled with the intention of being theory-neutral, is frequently of interest to researchers of cliodynamics. The main goal of cliodynamics researchers is to use the scientific method to produce the data necessary to empirically test competing theories. A large interdisciplinary and international team of experts helps the Seshat project to produce a database that is historically rigorous enough to study the past using well-established scientific techniques. Seshat data may be used with sociocultural evolutionary theory or cultural evolutionary theory to identify long-term dynamics that may have had significant effects on the course of human history. Project The Seshat: Global History Databank is an umbrella organization for several research projects that examine different themes or facets of human life. Each project is led by members of the Seshat Team in collaboration with a group of consultants and contributing experts. Themes include: the evolution of social complexity in early civilizations, the creation of prosociality (i.e., how and why large groups of unrelated individuals come together and cooperate for a common goal), the role of ritual and religion in social cohesion, the causes of economic growth and its consequences on individual's well-being, and many others. The Seshat team is also heavily engaged in improving the way that cutting-edge digital technologies can aid in research, with projects devoted to developing cutting-edge systems for collecting, analyzing, and distributing information with computer assistance. Several key research questions drive these research projects. These include the following: What mechanisms transform economic growth into improvements in quality of life for regular people? What roles do ritual activities and religion play in cultural development and group cohesion? How and under what conditions does prosocial behavior evolve in large societies? What is the impact of environmental and climatic factors in societal advance? To maximise their time and resources, the Seshat project has begun data collection with a representative sample of polities from around the globe and throughout human history, ranging from the late Neolithic (roughly 4,000 BCE) to the early modern period (roughly 1,900 CE). This is the World Sample 30. The World Sample-30 provides the Seshat project with an initial sample of societies that vary along the dimension of social complexity from ten major regions around the globe. 
Three natural geographic areas (NGAs) were selected within each region: one NGA was selected in each world region that developed complex state-level societies comparatively early; a second NGA was selected that developed complex societies comparatively late, ideally one free of centralized polities (chiefdoms and states) until the colonial period; and a third NGA was selected that was intermediate between these two extremes in terms of social complexity. Praise In 2016, Ian Morris praised two key aspects of the Seshat project: (1) it emphasizes the collection of data related to shifts in cultural systems (e.g., changes in religious morality or agricultural techniques) in addition to material elements (e.g., metallurgy technologies) and (2) it better situates seemingly extraordinary individuals in their geographic and historical context. Gary Feinman also praised the Seshat Project for helping to demolish the academic knowledge silos that have emerged with increases in specialisation over the last several decades. Criticism Critics of the Seshat project have noted that the coding of historical data is not a wholly objective enterprise and that concrete and transparent steps should be taken to minimize subjectivity in the coding process. The Seshat project uses multiple coders, expert consultation, and other techniques for ensuring data quality, but some have recently suggested that machine coding techniques hold great promise for further reducing biases and increasing the reliability of the data produced. Funding Funding for the Seshat: Global History Databank comes from the John Templeton Foundation, the Economic and Social Research Council, Horizon 2020, the Tricoastal Foundation, and the Evolution Institute. Administration The Seshat: Global History Databank is governed by an editorial board, which includes Peter Turchin, Harvey Whitehouse, Pieter François, Thomas E. Currie, and Kevin C. Feeney. See also Big History Cliodynamics Cliometrics Digital history Longue durée Psychohistory References Further reading External links Peter Turchin's cliodynamics page Harvey Whitehouse's research page Economic history studies Historiometry Mathematical modeling Social history Works about the theory of history
Seshat (project)
[ "Mathematics" ]
1,101
[ "Applied mathematics", "Mathematical modeling" ]
48,081,904
https://en.wikipedia.org/wiki/Translational%20drift
Translational drift, also known as melty brain or tornado drive, is a form of locomotion, notably found in certain combat robots. Principle The principle is applied to spinning robots, where the driving wheels are normally on for the whole revolution, resulting in increased rotational energy, which is stored for destructive effect, but, given perfect symmetry, no net translational acceleration. The drive works by modulating the power to the wheel or wheels that spin the robot. The net application of force in one direction results in acceleration in the plane – it cannot really be characterised as "forward", "backward" and so forth, as the whole robot is spinning. However, in a standard configuration an accelerometer is used to determine the speed of rotation, and a light-emitting diode is turned on once per revolution, to give a nominal forward direction indicator to the operator. The internal controls implement the commands received from the remote control to modulate the drive to the wheels, typically by turning it off for part of a revolution to move in a specific direction. The benefits of using translational drift include that less weight needs to be allocated to a weapon, since the weapon is part of the drive system. Disadvantages include the complexity of design, cost, and reliance on the drive system. History In the past, a robot that simply spun in place would be classified as a "sit-and-spin" robot, and would depend on the opponent engaging it to cause damage. As this was deemed less aggressive (aggression being a common judging criterion) than what could be done by robots armed with a spinning shell mounted on top of their drive, it waned in popularity in most competitions. The first robot to attempt to use this technology was Blade Runner, a middleweight robot built by Ilya Polyakov for the first five seasons of Comedy Central's Battlebots. Unfortunately, the technology never worked as planned. A lightweight two-wheel drive hammer robot, Herr Gepoünden, implemented the design in their final season of Battlebots. The first symmetrical robot, similar in design to a contemporary full-body spinner, to use this technology successfully was CycloneBot, which competed at Steel Conflict 4. The most successful heavyweight competitor, Nuts, relied entirely on translational drift for its weaponry en route to its 3rd place finish in the 10th series of Robot Wars. Open Melt Open Melt is an open source implementation of melty brain; the code is licensed under the Creative Commons Attribution-Noncommercial-Share Alike licence. Rules across competitions Different rules exist for each competition, some of which allow robots that use translational drift to compete. In Battlebots, the use of translational drift does not count towards the active weapon requirement for the primary weapon, as translational drift relies on the entire robot's movement. Conversely, in Robot Wars, there is no such restriction against using translational drift as a primary, active weapon. References External links Instructables - building a melty bot Robotics engineering
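The modulation scheme described above fits in a very small control loop. The following Python sketch is a simplified, hypothetical illustration (it is not the Open Melt code): it assumes the accelerometer yields the current rotation period, derives the instantaneous heading from the time within the spin cycle, lights the heading LED in a narrow window, and reduces drive power during the half of each revolution that faces away from the commanded direction.

```python
import math

def drive_outputs(t, rotation_period, commanded_heading, throttle):
    """Return (motor_power, led_on) for time t within the current spin cycle.

    Assumptions (illustrative only): the robot spins at a constant rate over one
    revolution, heading 0 rad is re-zeroed once per revolution from the
    accelerometer-derived period, and a single drive motor is modulated.
    """
    # Instantaneous heading of the robot body, in radians [0, 2*pi).
    heading = 2 * math.pi * (t % rotation_period) / rotation_period

    # Flash the LED briefly when the body points along the nominal "forward"
    # direction, so the operator can see which way the machine will translate.
    led_on = heading < 0.2  # roughly 11 degrees of arc per revolution

    # Angular difference between the body heading and the commanded direction,
    # wrapped into (-pi, pi].
    error = (heading - commanded_heading + math.pi) % (2 * math.pi) - math.pi

    # Apply full power while the thrust vector roughly points toward the
    # commanded direction; reduce it for the opposite half-revolution.
    motor_power = throttle if abs(error) < math.pi / 2 else throttle * 0.3
    return motor_power, led_on
```

Averaged over a revolution, the asymmetric power produces a net force toward the commanded heading while the robot keeps spinning.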
Translational drift
[ "Technology", "Engineering" ]
606
[ "Computer engineering", "Robotics engineering" ]
48,088,710
https://en.wikipedia.org/wiki/Chordal%20completion
In graph theory, a branch of mathematics, a chordal completion of a given undirected graph G is a chordal graph, on the same vertex set, that has G as a subgraph. A minimal chordal completion is a chordal completion such that any graph formed by removing an edge would no longer be a chordal completion. A minimum chordal completion is a chordal completion with as few edges as possible. A different type of chordal completion, one that minimizes the size of the maximum clique in the resulting chordal graph, can be used to define the treewidth of G. Chordal completions can also be used to characterize several other graph classes including AT-free graphs, claw-free AT-free graphs, and cographs. The minimum chordal completion was one of twelve computational problems whose complexity was listed as open in the 1979 book Computers and Intractability. Applications of chordal completion include modeling the problem of minimizing fill-in when performing Gaussian elimination on sparse symmetric matrices, and reconstructing phylogenetic trees. Chordal completions of a graph are sometimes called triangulations, but this term is ambiguous even in the context of graph theory, as it can also refer to maximal planar graphs. Related graph families A graph G is an AT-free graph if and only if all of its minimal chordal completions are interval graphs. G is a claw-free AT-free graph if and only if all of its minimal chordal completions are proper interval graphs. And G is a cograph if and only if all of its minimal chordal completions are trivially perfect graphs. A graph G has treewidth at most k if and only if G has at least one chordal completion whose maximum clique size is at most k + 1. It has pathwidth at most k if and only if G has at least one chordal completion that is an interval graph with maximum clique size at most k + 1. It has bandwidth at most k if and only if G has at least one chordal completion that is a proper interval graph with maximum clique size at most k + 1. And it has tree-depth d if and only if it has at least one chordal completion that is a trivially perfect graph with maximum clique size at most d. Applications The original application of chordal completion described in Computers and Intractability involves Gaussian elimination for sparse matrices. During the process of Gaussian elimination, one wishes to minimize fill-in, coefficients of the matrix that were initially zero but later become nonzero, because the need to calculate the values of these coefficients slows down the algorithm. The pattern of nonzeros in a sparse symmetric matrix can be described by an undirected graph (having the matrix as its adjacency matrix); the pattern of nonzeros in the filled-in matrix is always a chordal graph, and any minimal chordal completion corresponds to a fill-in pattern in this way. If a chordal completion of a graph is given, a sequence of steps in which to perform Gaussian elimination to achieve this fill-in pattern can be found by computing an elimination ordering of the resulting chordal graph. In this way, the minimum fill-in problem can be seen as equivalent to the minimum chordal completion problem. In this application, planar graphs may arise in the solution of two-dimensional finite element systems; it follows from the planar separator theorem that every planar graph with n vertices has a chordal completion with at most O(n log n) edges. 
Another application comes from phylogeny, the problem of reconstructing evolutionary trees, for instance trees of organisms subject to genetic mutations or trees of sets of ancient manuscripts copied one from another subject to scribal errors. If one assumes that each genetic mutation or scribal error happens only once, one obtains a perfect phylogeny, a tree in which the species or manuscripts having any particular characteristic always form a connected subtree. As has been described in the literature, the existence of a perfect phylogeny can be modeled as a chordal completion problem. One draws an "overlap graph" G in which the vertices are attribute values (specific choices for some characteristic of a species or manuscript) and the edges represent pairs of attribute values that are shared by at least one species. The vertices of the graph can be colored by the identities of the characteristics that each attribute value comes from, so that the total number of colors equals the number of characteristics used to derive the phylogeny. Then a perfect phylogeny exists if and only if G has a chordal completion that respects the coloring. Computational complexity Although listed as an open problem in the 1979 book Computers and Intractability, the computational complexity of the minimum chordal completion problem (also called the minimum fill-in problem) was quickly resolved: it was shown to be NP-complete. If the minimum chordal completion adds k edges to a graph G, then it is possible to find, in polynomial time, a chordal completion whose number of added edges is bounded by a quadratic function of k. The problem of finding the optimal set of edges to add can also be solved by a fixed-parameter tractable algorithm, in time polynomial in the graph size and subexponential in k. The treewidth (minimum clique size of a chordal completion) and related parameters including pathwidth and tree-depth are also NP-complete to compute, and (unless P=NP) cannot be approximated in polynomial time to within a constant factor of their optimum values; however, approximation algorithms with logarithmic approximation ratios are known for them. Both the minimum fill-in and treewidth problems can be solved in exponential time. More precisely, for an n-vertex graph, the time is exponential in n but with a base significantly smaller than 2. References Graph theory objects
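The connection between elimination orderings and chordal completions can be made concrete with the classic "elimination game". The Python sketch below is an illustration rather than an optimized fill-in solver: it processes vertices in a given order, connects the remaining neighbours of each eliminated vertex into a clique, and reports the added (fill) edges; the union of the original edges and the fill edges is a chordal completion of the input graph.

```python
from itertools import combinations

def elimination_fill_in(adjacency, order):
    """Play the elimination game on an undirected graph.

    adjacency: dict mapping each vertex to a set of neighbours.
    order: list of all vertices, the elimination ordering to simulate.
    Returns the set of fill edges added to make the graph chordal.
    """
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
    fill = set()
    eliminated = set()
    for v in order:
        # Neighbours of v that have not yet been eliminated.
        live = adj[v] - eliminated
        # Turn them into a clique, recording any edge that was missing.
        for a, b in combinations(sorted(live), 2):
            if b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill.add((a, b))
        eliminated.add(v)
    return fill

# Example: a 4-cycle a-b-c-d needs one chord for any elimination ordering.
cycle = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(elimination_fill_in(cycle, ["a", "b", "c", "d"]))  # {('b', 'd')}
```

Minimizing the number of fill edges over all orderings is exactly the minimum fill-in (minimum chordal completion) problem discussed above.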
Chordal completion
[ "Mathematics" ]
1,164
[ "Mathematical relations", "Graph theory", "Graph theory objects" ]
48,090,006
https://en.wikipedia.org/wiki/Spiropentane
Spiropentane is a hydrocarbon with formula . It is the simplest spiro-connected cycloalkane, a triangulane. It took several years after the discovery in 1887 until the structure of the molecule was determined. According to the nomenclature rules for spiro compounds, the systematic name is spiro[2.2]pentane. However, there can be no constitutive isomeric spiropentanes, hence the name is unique without brackets and numbers. Synthesis After Gustavson produced cyclopropane by reacting with ground-up zinc metal, he tried the same reaction with (see formula scheme). The starting material is easily obtained by reacting pentaerythritol with hydrobromic acid. A molecule with the formula was obtained. It was called in the initial publication. In 1907, Fecht expressed the assumption that it must be spiropentane, a constitutional isomer of vinylcyclopropane. Further evidence for the structure of the hydrocarbon comes from the fact that it could also be obtained from (see formula scheme). Spiropentane is difficult to separate from the other reaction products and the early procedures resulted in impure mixtures. Decades later, the production method was improved. The spiro hydrocarbon can be separated from the byproducts () by distillation. Properties Physical properties Structural determination by electron diffraction showed two different C-C lengths; the bonds to the quaternary ("spiro") carbon atom are shorter (146.9 pm) than those between the methylene groups (CH2–CH2, 151.9 pm). The C–C–C angles on the spiro C atom are 62.2°, larger than in cyclopropane. Chemical properties When heating molecules of spiropentane labelled with deuterium atoms, a topomerization or "stereomutation" reaction is observed, similar to that of cyclopropane: equilibrates with . Gustavson (1896) reported that heating spiropentane to 200 °C caused it to change into other hydrocarbons. A thermolysis in the gas phase from 360 to 410 °C resulted in ring expansion to the constitutional isomer , along with the fragmentation products ethene and propadiene. Presumably, the longer – and weaker – bond is broken first, forming a diradical intermediate. Related compounds Spiroheptane References Cyclopropanes Spiro compounds Polycyclic nonaromatic hydrocarbons
Spiropentane
[ "Chemistry" ]
530
[ "Organic compounds", "Spiro compounds" ]
29,686,444
https://en.wikipedia.org/wiki/Aminotransferase%2C%20class%20V
Aminotransferase class-V is an evolutionarily conserved protein domain. This domain is found in aminotransferases and other enzymes, including cysteine desulphurase EC:4.4.1.-. Aminotransferases share certain mechanistic features with other pyridoxal-phosphate-dependent enzymes, such as the covalent binding of the pyridoxal-phosphate group to a lysine residue. On the basis of sequence similarity, these various enzymes can be grouped into subfamilies. This family is called class-V. Subfamilies Phosphoserine aminotransferase Cysteine desulfurase Cysteine desulphurase related, unknown function Cysteine desulphurases, SufS Cysteine desulphurase related 2-aminoethylphosphonate—pyruvate transaminase Human proteins containing this domain AGXT; KYNU; MOCOS; NFS1; PSAT1; SCLY; TLH6; References Protein domains
Aminotransferase, class V
[ "Chemistry", "Biology" ]
225
[ "Biochemistry stubs", "Protein stubs", "Protein domains", "Protein classification" ]
29,686,743
https://en.wikipedia.org/wiki/Asynchronous%20communication%20mechanism
The role of an asynchronous communication mechanism (ACM) is to synchronize the transfer of data in a system between a writing process and a reading process operating concurrently. Description The mechanism by which the ACM performs its tasks varies heavily depending upon the situation in which the ACM is employed. A possible scenario is the writer outputs data at a higher rate than the reader can process it. Without an ACM, one of two things will happen: If the system incorporates a buffer between processes (e.g., a Unix shell pipe), then data will accumulate and be processed at the reader's maximum rate. There are some circumstances in which this is a desirable characteristic (e.g. piping a file over SSH, or if all data in the set is important, and the reader's output does not need to be synchronised with the input). If it is necessary to synchronize the input of the writer with the output of the reader, then the ACM can interface with the two systems, and make active decisions on how to handle each packet of information. If, for example, maximum synchronization is required, the ACM could be configured to drop packets, and output the newest packets at the reader's maximum speed. Alternatively, if there is no buffer, some data may be lost. If this is undesirable, the ACM can provide this buffer, or process the data in such a way that minimal information is lost. References See also Asynchronous communication Rate (mathematics) System Computer-mediated communication Synchronization
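As an illustration of the "keep only the newest item" policy described above, the sketch below shows a minimal asynchronous mechanism in Python: the writer overwrites a single shared slot at its own rate, and the reader always receives the freshest value without ever blocking the writer for long. This is only a schematic example under simplifying assumptions; practical ACM designs (for example, multi-slot mechanisms) avoid even the brief lock used here.

```python
import threading

class LatestValueACM:
    """A one-slot asynchronous communication mechanism: the reader always
    sees the most recent datum; older, unread data are silently discarded."""

    def __init__(self):
        self._lock = threading.Lock()   # brief critical section only
        self._value = None
        self._fresh = False

    def write(self, value):
        # The writer never waits for the reader; it simply overwrites the slot.
        with self._lock:
            self._value = value
            self._fresh = True

    def read(self):
        # Returns (value, fresh); fresh is False if this datum was read before.
        with self._lock:
            fresh, self._fresh = self._fresh, False
            return self._value, fresh
```

A buffered (queue-based) mechanism would instead preserve every datum at the cost of the reader falling behind the writer, which is the trade-off discussed above.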
Asynchronous communication mechanism
[ "Technology", "Engineering" ]
333
[ "Telecommunications engineering", "Information systems", "Computing and society", "Computer-mediated communication", "Synchronization" ]
29,687,730
https://en.wikipedia.org/wiki/Scanning%20mobility%20particle%20sizer
A scanning mobility particle sizer (SMPS) is an analytical instrument that measures the size and number concentration of aerosol particles with diameters from 2.5 nm to 1000 nm. They employ a continuous, fast-scanning technique to provide high-resolution measurements. Applications The particles that are investigated can be of biological or chemical nature. The instrument can be used for air quality measurement indoors, vehicle exhaust, research in bioaerosols, atmospheric studies, and toxicology testing. Principle of operation The air to be analyzed is pumped through an ionizing source (or neutralizer) which will establish a known charge distribution. Then, exposure to an electric field in the DMA will isolate a certain particle diameter, which is a function of the voltage generating the field (a voltage value corresponding to a particle diameter value that passes through the DMA). Finally, these particles of the same diameter will be counted by an optical device (CPC). The air inlet to be analyzed can be equipped with an impaction head. Impaction head An impaction head, or fractionator head, is a device that uses the principles of fluid mechanics to trap, by their inertia, the largest particles present in the air. The sampling inlet of the SMPS is thus protected from large dust and insects, the air that enters it contains only the fine particles to be quantified. These are usually called "PM10 inlet" or "PM2.5 inlet". Neutralizer The air flow then passes through an ionizing source. The sampled air will be exposed to high concentrations of positive and negative ions, after a certain number of collisions the charge distribution will be stable and known. The neutralizer is also used to eliminate electrostatic charges from aerosol particles. The charge distribution from the neutralizer is a balanced charge distribution that follows Boltzmann's law. DMA (Differential Mobility Analyzer) The sample then enters a differential mobility analyzer. The air and aerosol (whose charge distribution is now balanced and known) are then introduced into an air flow channel. A central tubular electrode, and another concentric one, generate an electric field in this fluid path. In the channel, the particles are subjected to a uniform electric field and an air flow. The particles then move at a speed that depends on their electrical mobility. At a given voltage, only particles of a certain diameter will follow this channel until they exit; the smaller and larger will crash into the electrodes. CPC (Condensation particle counter) The air now contains only particles of a certain diameter. The flow is introduced into a CPC, a condensation particle counter, which measures the concentration of particles in an aerosol sample. The CPC works by using butanol vapor condensation on the particles present in the air sample. The particles are exposed to butanol vapor heated to 39 °C. The butanol vapor condenses on the particles, increasing their size and thus facilitating their optical detection. The particles are then exposed to a laser beam, and each particle scatters light. The peaks of scattered light intensity are continuously counted and expressed in particles/cm3. Results The results obtained by this type of device therefore include the distribution of particle sizes in the air continuously. The DMA will generate voltage back-and-forth between its electrodes from 0 to 10,000 V, corresponding to a measurement range of 8 nm to 800 or 1000 nm, and the CPC will quantify each of these diameters. 
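The voltage-to-diameter relation described above can be sketched with the standard expressions for an ideal cylindrical DMA and the mobility of a singly charged particle with slip correction. All numerical values in the Python sketch below (electrode radii, column length, sheath flow, gas properties) are illustrative assumptions, not the specification of any particular instrument.

```python
import math

E = 1.602e-19      # elementary charge, C
MU = 1.81e-5       # dynamic viscosity of air, Pa*s (assumed, ~20 degC)
MFP = 68e-9        # mean free path of air molecules, m (assumed)

def slip_correction(d):
    """Cunningham slip correction for a particle of diameter d (m)."""
    kn = 2 * MFP / d
    return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def selected_mobility(voltage, q_sheath, r_inner, r_outer, length):
    """Centroid electrical mobility transmitted by an ideal cylindrical DMA (m^2/V/s)."""
    return q_sheath * math.log(r_outer / r_inner) / (2 * math.pi * voltage * length)

def mobility_diameter(z, charges=1):
    """Invert Z = n*e*Cc(d) / (3*pi*mu*d) for d by damped fixed-point iteration."""
    d = 10e-9  # initial guess, 10 nm
    for _ in range(200):
        d_new = charges * E * slip_correction(d) / (3 * math.pi * MU * z)
        d = 0.5 * (d + d_new)  # damping for robust convergence
    return d

# Hypothetical long-column geometry at 1000 V with a 3 L/min sheath flow.
z_star = selected_mobility(voltage=1000.0, q_sheath=3e-3 / 60,
                           r_inner=9.37e-3, r_outer=19.61e-3, length=0.44)
print(f"selected diameter ~ {mobility_diameter(z_star) * 1e9:.1f} nm")
```

Scanning the voltage and counting the particles transmitted at each step, as the CPC does, yields the size distribution described in the Results paragraph above.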
References Spectrometers Electronic test equipment Signal processing Measuring instruments Laboratory equipment Aerosols Aerosol measurement
Scanning mobility particle sizer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
728
[ "Telecommunications engineering", "Spectrum (physical sciences)", "Computer engineering", "Signal processing", "Electronic test equipment", "Colloids", "Measuring instruments", "Aerosols", "Spectrometers", "Spectroscopy" ]
29,687,749
https://en.wikipedia.org/wiki/Focal-plane%20array%20%28radio%20astronomy%29
Focal-plane arrays (FPAs) are widely used in radio astronomy. FPAs are arrays of receivers placed at the focus of the optical system in a radio-telescope. The optical system may be a reflector or a lens. Traditional radio-telescopes have only one receiver at the focus of the telescope, but radio-telescopes are now starting to be equipped with focal plane arrays, which are of three different types: multi-beam feed arrays, bolometer arrays, and the experimental phased-array feeds. Multi-beam feed arrays Multi-beam feed arrays consist of a small array of feed horns at the focus of a radio-telescope. Each feed horn is connected to a receiver to measure the received power and each horn and receiver pair is sensitive to radio waves from a slightly different direction in the sky. A feed array with receivers will increase the survey speed of the telescope by a factor of , making them very powerful survey instruments. Because radio wavelengths are large, the resulting feed arrays are amongst the largest radio-astronomy receivers ever built. Examples include the multi-beam arrays on the Parkes Observatory, and the ALFA array at Arecibo Observatory, both of which have been used for major pulsar and Hydrogen line studies, such as HIPASS. Bolometer arrays Bolometer arrays are arrays of bolometer receivers which measure the energy of incoming radio photons. They are typically used for astronomy at millimeter wavelengths. Examples include the SCUBA receiver on the James Clerk Maxwell Telescope and the LABOCA instrument on the APEX telescope. Phased array feeds Phased Array Feeds are an experimental type of focal plane array using phased array technology in which antenna elements are closely spaced so that they do not act independently, but instead act as sensors of the electromagnetic field across the focal plane of the telescope. The outputs of the receivers are then coherently combined in a beamformer with appropriate weights to synthesise several discrete beams. They are currently being developed for the Apertif upgrade to the Westerbork Synthesis Radio Telescope, and for the Australian Square Kilometre Array Pathfinder radio telescope. Switched array feeds A switchable array of feed antennas in the focal plane is referred to as a switchable FPA. With this configuration, it is possible to switch between a set of beams directed in different directions. This makes the system steerable in the switching sense, thus creating a multi-beam system. In a switched FPA, the distance between feeding elements are chosen following where F is the focal length of the optical system, D is the diameter of the optical system and λ is the wavelength. Monopulse feeds The angle to the observed target (e.g. a meteor in meteor studies) can be estimated using amplitude monopulse. In such a configuration, three signals are collected from four feed elements. These signals are the elevation difference signal, the azimuth difference signal and the sum signal. See also Staring array References Radio astronomy Image sensors
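The phased array feed idea above, coherently combining receiver outputs with complex weights to synthesise several discrete beams, can be illustrated in a few lines. The sketch below is a generic narrowband beamformer, not the Apertif or ASKAP signal chain; the element signals and weight vectors are hypothetical.

```python
import numpy as np

def synthesise_beams(element_signals, weight_sets):
    """Coherently combine focal-plane element outputs into discrete beams.

    element_signals: complex array of shape (n_elements, n_samples).
    weight_sets: complex array of shape (n_beams, n_elements); each row is the
    set of weights that forms one beam.
    Returns an array of shape (n_beams, n_samples), one voltage stream per beam.
    """
    return np.conj(weight_sets) @ element_signals

# Toy example: 4 elements, 2 beams formed from the same element data.
rng = np.random.default_rng(0)
signals = rng.standard_normal((4, 1000)) + 1j * rng.standard_normal((4, 1000))
weights = np.array([[1, 1, 1, 1],             # equal weights (broadside beam)
                    [1, 1j, -1, -1j]]) / 2.0  # a second, phase-steered beam
beams = synthesise_beams(signals, weights)
print(beams.shape)  # (2, 1000)
```

Because the weights are applied after digitisation, many beams can be formed simultaneously from one set of focal-plane receivers, which is what gives phased array feeds their survey-speed advantage.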
Focal-plane array (radio astronomy)
[ "Astronomy" ]
595
[ "Radio astronomy", "Astronomical sub-disciplines" ]
29,690,231
https://en.wikipedia.org/wiki/Hybtonite
Hybtonite is trademark of Amroy Europe Oy for carbon nanoepoxy resins. It is a family of composite resins reinforced with carbon nanotubes (CNTs). The material and the manufacturing method were originally developed in the Nanoscience Center of the University of Jyväskylä during the years 2002 to 2004. Ultrasound is used to disperse the nanotubes and to create radicals at the ends of CNT molecules. CNTs can then chemically react with epoxy resin or other material forming strong covalent bonds. This results in a more durable hybrid composite structure that is between 20% and 30% stronger (with only 0.5% CNT contents) than a conventional reinforced plastic. The manufacturing process allows controlling the material properties such as electrical conductivity, thermal conductivity and viscosity. Different forms of hybtonite are available for different purposes such as laminating (glass fiber, carbon fiber), epoxy paints and glues. Applications The first application areas for hybtonite have been in field of wind turbines, marine applications and sports gear. Montreal Nitro ice hockey stick was the first commercial product using hybtonite. Cross-country skis and roller skis by Peltonen Sports Baseball bats by Karhu Sports Hunting arrows by Easton Surfboards by Entropy Surfboards Eagle Windpower manufactures small size (2 kW to 100 kW) wind turbines using hybtonite as material for the blades. Large wind turbines manufactured by Evergreen (China), LM Glassfiber (Denmark) and CompoTech (Czech Republic) Marine paints / AMC nano coating. Awards In January 2006, Montreal Hybtonite hockey stick "Nitro" was voted number one Nano product in the world at Nanotech 2006 trade show in Tokyo, Japan. In December 2009, Amroy received Frost & Sullivan European Technology Innovation Award for its work on hybtonite. References Composite materials Nanomaterials Finnish inventions Synthetic resins
Hybtonite
[ "Physics", "Chemistry", "Materials_science" ]
413
[ "Synthetic resins", "Synthetic materials", "Composite materials", "Materials", "Nanotechnology", "Nanomaterials", "Matter" ]
29,693,994
https://en.wikipedia.org/wiki/Perry%E2%80%93Robertson%20formula
The Perry–Robertson formula is a mathematical formula which is able to produce a good approximation of buckling loads in long slender columns or struts, and is the basis for the buckling formulation adopted in EN 1993. The formula in question can be expressed in the following form: σ = (σ_y + (1 + η)σ_e)/2 − √( ((σ_y + (1 + η)σ_e)/2)² − σ_y σ_e ), with η = Δ0 c / i², where: σ is the average longitudinal stress in the beam's cross section, σ_y is the material's elastic limit, σ_e is the average tension measured in the cross section which corresponds to the beam's Euler load, Δ0 is the amplitude of the initial geometrical imperfection, c is the distance from the cross section's centroid to the section's most stressed fiber, and i is the section's radius of gyration. Robertson then proposed that η = 0.003 λ, where λ represents the beam's slenderness. References Elasticity (physics)
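For concreteness, the sketch below evaluates the formula above for a pin-ended strut. It is an illustrative calculation only: the material values and the Robertson imperfection expression η = 0.003 λ follow the article's statement, and the result is not design guidance (EN 1993 uses tabulated imperfection factors).

```python
import math

def perry_robertson_stress(sigma_y, slenderness, E=210e9):
    """Mean axial stress at failure of an initially imperfect pin-ended strut.

    sigma_y: material elastic limit (Pa); slenderness: L / i (dimensionless);
    E: Young's modulus (Pa). Uses eta = 0.003 * slenderness (Robertson).
    """
    sigma_e = math.pi ** 2 * E / slenderness ** 2   # Euler buckling stress
    eta = 0.003 * slenderness                        # imperfection parameter
    half_sum = (sigma_y + (1 + eta) * sigma_e) / 2
    return half_sum - math.sqrt(half_sum ** 2 - sigma_y * sigma_e)

# Example: a steel with sigma_y = 275 MPa at slenderness 100.
print(f"{perry_robertson_stress(275e6, 100) / 1e6:.1f} MPa")
```

In the limit of a perfect strut (η = 0) the expression reduces to the smaller of the elastic limit and the Euler stress, which is a quick sanity check on the formula.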
Perry–Robertson formula
[ "Physics", "Materials_science" ]
154
[ "Deformation (mechanics)", "Physical phenomena", "Physical properties", "Elasticity (physics)" ]
41,894,112
https://en.wikipedia.org/wiki/BreakMate
The BreakMate was a three-flavor soda fountain for The Coca-Cola Company developed in the 1980s in conjunction with BSH Bosch und Siemens Hausgeräte. Its compartment held three one-liter plastic containers of syrup and a CO2 tank, which mixed the water and syrup into a 5:1 ratio, with a reservoir for water for storage if water was not accessible for the machine. Designed for offices of between 5-50 employees, the machine was deemed a commercial flop due to unforeseen complications in cost and parts. In 2007, Coca-Cola stopped supplying parts, and in 2010 the company finally stopped supplying syrup for the machines. References Coca-Cola Products introduced in 1988 Vending machines Commercial machines
BreakMate
[ "Physics", "Technology", "Engineering" ]
146
[ "Machines", "Commercial machines", "Vending machines", "Automation", "Physical systems" ]
25,017,503
https://en.wikipedia.org/wiki/Bicycle%20tree
A bicycle tree or cycle tree or bike tree is a bicycle parking system that resembles a tree in shape. There are a few types that have been developed. Some are manual, some use mechanical means to move the bike, assisting the bike by raising into a particular spot, they can handle between 5–20 bicycles depending on size. They are made by various companies in Europe and North America. Still others, like the one made by JFE Steel of Japan, are fully automated and computerized and can handle and locate some 9,400 bicycles for example, underneath a major train station or university. Manual Various companies have developed simple bike trees, including ones for sale at a hardware store, such as Harbor Freight Tools, that have hooks to hang 5 bicycles. Mechanical assisted A Swiss company, Bike Tree International, designed a system whereby a bicycle can be hoisted into a tree-shaped device after lifting the front of the bike and connecting the wheel to a hook by rope. There were plans to deploy this machine in Geneva. It can handle one or two dozen bicycles. Automated storage and retrieval system A similar but much larger device, an automated storage and retrieval system, has been developed by JFE Engineering, a unit of JFE Holdings of Japan. The first bike tree of this type became available for public use in 2006, storing 1,476 bicycles using an integrated circuit based tag system, cleanly stored away above ground in an urban office-like building. Mechanical units have been expanded to hold some 6,480 bicycles, for which retrieval time is 23 seconds, such as in this 15 meter deep underground storage facility. Various municipalities in the Greater Tokyo area run the system or are planning to install them, charging 1800 yen per month for storage or a one time fee of 100 yen. As of February 2013, some 85 systems of this manufacturer and type in use can hold 17,323 bicycles, all in Japan. See also Bicycle locker Bicycle stand References Bicycle parking Machines
Bicycle tree
[ "Physics", "Technology", "Engineering" ]
394
[ "Physical systems", "Machines", "Mechanical engineering" ]
25,021,082
https://en.wikipedia.org/wiki/Yield%20strength%20anomaly
In materials science, the yield strength anomaly refers to materials wherein the yield strength (i.e., the stress necessary to initiate plastic yielding) increases with temperature. For the majority of materials, the yield strength decreases with increasing temperature. In metals, this decrease in yield strength is due to the thermal activation of dislocation motion, resulting in easier plastic deformation at higher temperatures. In some cases, a yield strength anomaly refers to a decrease in the ductility of a material with increasing temperature, which is also opposite the trend in the majority of materials. Anomalies in ductility can be more clear, as an anomalous effect on yield strength can be obscured by its typical decrease with temperature. In concert with yield strength or ductility anomalies, some materials demonstrate extrema in other temperature dependent properties, such as a minimum in ultrasonic damping, or a maximum in electrical conductivity. The yield strength anomaly in β-brass was one of the earliest discoveries such a phenomenon, and several other ordered intermetallic alloys demonstrate this effect. Precipitation-hardened superalloys exhibit a yield strength anomaly over a considerable temperature range. For these materials, the yield strength shows little variation between room temperature and several hundred degrees Celsius. Eventually, a maximum yield strength is reached. For even higher temperatures, the yield strength decreases and, eventually, drops to zero when reaching the melting temperature, where the solid material transforms into a liquid. For ordered intermetallics, the temperature of the yield strength peak is roughly 50% of the absolute melting temperature. Mechanisms Thermally Activated Cross Slip A number of alloys with the L12 structure (e.g., Ni3Al, Ni3Ga, Ni3Ge, Ni3Si), show yield strength anomalies. The L12 structure is a derivative of the face-centered cubic crystal structure. For these alloys, the active slip system below the peak is ⟨110⟩{111} while the active system at higher temperatures is ⟨110⟩{010}. The hardening mechanism in these alloys is the cross slip of screw dislocations from (111) to (010) crystallographic planes. This cross slip is thermally activated, and the screw dislocations are much less mobile on the (010) planes, so the material is strengthened as temperatures increases and more screw dislocations are in the (010) plane. A similar mechanism has been proposed for some B2 alloys that have yield strength anomalies (e.g., CuZn, FeCo, NiTi, CoHf, CoTi, CoZr). The yield strength anomaly mechanism in Ni-based superalloys is similar. In these alloys, screw superdislocations undergo thermally activated cross slip onto {100} planes from {111} planes. This prevents motion of the remaining parts of the dislocations on the (111)[-101] slip system. Again, with increasing temperature, more cross-slip occurs, so dislocation motion is more hindered and yield strength increases. Grain Boundary Precipitation In superalloys strengthened by metal carbides, increasingly large carbide particles form preferentially at grain boundaries, preventing grain boundary sliding at high temperatures. This leads to an increase in the yield strength, and thus a yield strength anomaly. Vacancy Activated Strengthening While FeAl is a B2 alloy, the observed yield strength anomaly in FeAl is due to another mechanism. If cross-slip were the mechanism, then the yield strength anomaly would be rate dependent, as expected for a thermally activated process. 
Instead, the yield strength anomaly is state dependent, meaning it depends on the state of the material rather than on the deformation rate. As a result, vacancy activated strengthening is the most widely accepted mechanism. The vacancy formation energy is low for FeAl, allowing for an unusually high concentration of vacancies in FeAl at high temperatures (2.5% at 1000 °C for Fe-50Al). The vacancy formed in either aluminum-rich FeAl or through heating is an aluminum vacancy. At low temperatures around 300 K, the yield strength either decreases or does not change with temperature. At moderate temperatures (0.35-0.45 Tm), yield strength has been observed to increase with an increased vacancy concentration, providing further evidence for a vacancy driven strengthening mechanism. The increase in yield strength from increased vacancy concentration is believed to be the result of dislocations being pinned by vacancies on the slip plane, causing the dislocations to bow. Then, above the peak stress temperature, vacancies can migrate, as vacancy migration is easier at elevated temperatures. At those temperatures, vacancies no longer hinder dislocation motion but rather aid climb. In the vacancy strengthening model, the increased strength below the peak stress temperature is approximated as proportional to the vacancy concentration to the one-half power, with the vacancy concentration estimated using Maxwell-Boltzmann statistics. Thus, the strength increase can be estimated as Δσ ∝ √c_v ∝ exp(−E_f/(2 k_B T)), with E_f being the vacancy formation energy, k_B the Boltzmann constant, and T being the absolute temperature. Above the peak stress temperature, a diffusion-assisted deformation mechanism can be used to describe strength, since vacancies are now mobile and assist dislocation motion. Above the peak, the yield strength is strain rate dependent and thus the peak yield strength is rate dependent. As a result, the peak stress temperature increases with an increased strain rate. Note that this is distinct from the yield strength below the peak (the yield strength anomaly itself) being rate dependent. The peak yield strength is also dependent on the percentage of aluminum in the FeAl alloy. As the percent aluminum increases, the peak yield strength occurs at lower temperatures. The yield strength anomaly in FeAl alloys can be hidden if thermal vacancies are not minimized through a slow anneal at a relatively low temperature (~400 °C for ~5 days). Further, the yield strength anomaly is not present in systems that use a very low strain rate, as the peak yield strength is strain rate dependent and would thus occur at temperatures too low to observe the yield strength anomaly. Additionally, since the formation of vacancies requires time, the peak yield strength magnitude is dependent on how long the material is held at the peak stress temperature. Also, the peak yield strength has been found not to be dependent on crystal orientation. Other mechanisms have been proposed, including a cross slip mechanism similar to that for L12, dislocation decomposition into less mobile segments at jogs, dislocation pinning, a climb-lock mechanism, and a slip vector transition from <111> to <100>. At the peak stress temperature, the slip system changes from <111> to <100>. The change is believed to be a result of glide in <111> becoming more difficult as temperature increases due to a friction mechanism. Then, dislocations in <100> have easier movement in comparison. Another mechanism combines the vacancy strengthening mechanism with dislocation decomposition. 
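The temperature scaling implied by the vacancy-strengthening estimate above is easy to tabulate. The sketch below is purely illustrative: the vacancy formation energy is a placeholder value, not a measured value for FeAl, and the result is only the relative Boltzmann factor, not an absolute strength.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def relative_vacancy_strengthening(temperature_k, e_form_ev=0.9):
    """Relative strength increase ~ sqrt(vacancy concentration).

    e_form_ev is an assumed (placeholder) vacancy formation energy in eV.
    Returns exp(-E_f / (2 k_B T)), the square root of the Boltzmann factor.
    """
    return math.exp(-e_form_ev / (2 * K_B * temperature_k))

for t in (600, 800, 1000, 1200):  # kelvin
    print(t, f"{relative_vacancy_strengthening(t):.3e}")
```

The rapid growth of this factor with temperature is what produces the anomalous rise in yield strength below the peak stress temperature.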
FeAl with the addition of a ternary alloying element such as Mn has been shown to also exhibit the yield stress anomaly. In contrast to FeAl, however, the peak yield strength or peak stress temperature of Fe2MnAl is not dependent on strain rate and thus may not follow the vacancy-activated strengthening mechanism. Instead, an order-strengthening mechanism has been proposed. Applications Turbines and Jet Engines The yield strength anomaly is exploited in the design of gas turbines and jet engines that operate at high temperatures, where the materials used are selected primarily for their yield and creep resistance. Superalloys can withstand high temperature loads far beyond the capabilities of steels and other alloys, and allow operation at higher temperatures, which improves efficiency. Nuclear Reactors Materials with yield strength anomalies are used in nuclear reactors due to their high temperature mechanical properties and good corrosion resistance. References Elasticity (physics) Materials science Mechanics Metallurgy Metals Plasticity (physics) Solid mechanics Deformation (mechanics)
Yield strength anomaly
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,621
[ "Physical phenomena", "Solid mechanics", "Applied and interdisciplinary physics", "Metals", "Elasticity (physics)", "Deformation (mechanics)", "Metallurgy", "Materials science", "Plasticity (physics)", "Mechanics", "nan", "Mechanical engineering", "Physical properties" ]
25,021,971
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20mean
In mathematics and statistics, the Fréchet mean is a generalization of centroids to metric spaces, giving a single representative point or central tendency for a cluster of points. It is named after Maurice Fréchet. Karcher mean is the renaming of the Riemannian Center of Mass construction developed by Karsten Grove and Hermann Karcher. On the real numbers, the arithmetic mean, median, geometric mean, and harmonic mean can all be interpreted as Fréchet means for different distance functions. Definition Let (M, d) be a complete metric space. Let x1, x2, …, xN be points in M. For any point p in M, define the Fréchet variance to be the sum of squared distances from p to the xi: Ψ(p) = d(p, x1)^2 + … + d(p, xN)^2. The Karcher means are then those points, m of M, which minimise Ψ. If there is a unique m of M that strictly minimises Ψ, then it is the Fréchet mean. Sometimes, the xi are assigned weights wi. Then, the Fréchet variances and the Fréchet mean are defined using weighted sums: Ψ(p) = w1 d(p, x1)^2 + … + wN d(p, xN)^2. Examples of Fréchet means Arithmetic mean and median For real numbers, the arithmetic mean is a Fréchet mean, using the usual Euclidean distance as the distance function. The median is also a Fréchet mean, if the definition of the function Ψ is generalized to the non-quadratic Ψ(p) = d(p, x1)^α + … + d(p, xN)^α where α = 1, and the Euclidean distance is the distance function d. In higher-dimensional spaces, this becomes the geometric median. Geometric mean On the positive real numbers, the (hyperbolic) distance function d(x, y) = |log x − log y| can be defined. The geometric mean is the corresponding Fréchet mean. Indeed x ↦ log x is then an isometry from the euclidean space to this "hyperbolic" space and must respect the Fréchet mean: the Fréchet mean of the xi is the image by exp of the Fréchet mean (in the Euclidean sense) of the log xi, i.e. it must be: (x1 x2 … xN)^(1/N). Harmonic mean On the positive real numbers, the metric (distance function) d(x, y) = |1/x − 1/y| can be defined. The harmonic mean is the corresponding Fréchet mean. Power means Given a non-zero real number q, the power mean can be obtained as a Fréchet mean by introducing the metric d(x, y) = |x^q − y^q|. f-mean Given an invertible and continuous function f, the f-mean can be defined as the Fréchet mean obtained by using the metric d(x, y) = |f(x) − f(y)|. This is sometimes called the generalised f-mean or quasi-arithmetic mean. Weighted means The general definition of the Fréchet mean that includes the possibility of weighting observations can be used to derive weighted versions for all of the above types of means. See also Circular mean Fréchet distance M-estimator Geometric median References Means
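A small numerical experiment makes the table of examples above concrete: minimising the Fréchet variance over the positive reals with different distance functions recovers the arithmetic, geometric, and harmonic means. The sketch below does this by brute-force minimisation on a grid, purely as an illustration.

```python
import math

def frechet_mean(points, dist, candidates):
    """Return the candidate minimising the Fréchet variance sum(d(p, x)^2)."""
    return min(candidates, key=lambda p: sum(dist(p, x) ** 2 for x in points))

data = [1.0, 2.0, 4.0]
grid = [i / 1000 for i in range(500, 5001)]  # candidate means on (0.5, 5]

euclid = lambda a, b: abs(a - b)                     # -> arithmetic mean
log_d = lambda a, b: abs(math.log(a) - math.log(b))  # -> geometric mean
inv_d = lambda a, b: abs(1 / a - 1 / b)              # -> harmonic mean

print(frechet_mean(data, euclid, grid))  # ~2.333 (arithmetic mean)
print(frechet_mean(data, log_d, grid))   # ~2.0   (geometric mean = (1*2*4)^(1/3))
print(frechet_mean(data, inv_d, grid))   # ~1.714 (harmonic mean = 3/(1 + 1/2 + 1/4))
```

On a general metric space no such closed forms exist, and the minimisation has to be carried out numerically, which is why the Fréchet/Karcher mean is defined variationally.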
Fréchet mean
[ "Physics", "Mathematics" ]
559
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
25,025,301
https://en.wikipedia.org/wiki/Slab%20pull
Slab pull is a geophysical mechanism whereby the cooling and subsequent densifying of a subducting tectonic plate produces a downward force along the rest of the plate. In 1975 Forsyth and Uyeda used the inverse theory method to show that, of the many forces likely to be driving plate motion, slab pull was the strongest. Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches. This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%. Carlson et al. (1983) in Lallemand et al. (2005) defined the slab pull force as: Where: K is (gravitational acceleration = 9.81 m/s2) according to McNutt (1984); Δρ = 80 kg/m3 is the mean density difference between the slab and the surrounding asthenosphere; L is the slab length calculated only for the part above 670 km (the upper/lower mantle boundary); A is the slab age in Ma at the trench. The slab pull force manifests itself between two extreme forms: The aseismic back-arc extension as in the Izu–Bonin–Mariana Arc. And as the Aleutian and Chile tectonics with strong earthquakes and back-arc thrusting. Between these two examples there is the evolution of the Farallon Plate: from the huge slab width with the Nevada, the Sevier and Laramide orogenies; the Mid-Tertiary ignimbrite flare-up and later left as Juan de Fuca and Cocos plates, the Basin and Range Province under extension, with slab break off, smaller slab width, more edges and mantle return flow. Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion. Likewise the African, Eurasian and Antarctic Plates. Ridge push is thought responsible for the motion of these plates. The subducting slabs around the Pacific Ring of Fire cool down the Earth and its core-mantle boundary. Around the African Plate upwelling mantle plumes from the core-mantle boundary produce rifting including the African and Ethiopian rift valleys. See also Mid-ocean ridge Seafloor spreading Ridge push References Further reading Geodynamics Geophysics Plate tectonics Subduction Geology theories
Slab pull
[ "Physics" ]
527
[ "Applied and interdisciplinary physics", "Geophysics" ]
25,025,332
https://en.wikipedia.org/wiki/Leica%20Microsystems
Leica Microsystems GmbH is a German microscope manufacturing company. It is a manufacturer of optical microscopes, equipment for the preparation of microscopic specimens and related products. There are ten plants in eight countries with distribution partners in over 100 countries. Leica Microsystems emerged in 1997 out of a 1990 merger between Wild-Leitz, headquartered in Heerbrugg Switzerland, and Cambridge Instruments of Cambridge England. The merger of those two umbrella companies created an alliance of the following 8 individual manufacturers of scientific instruments. American Optical Scientific Products, Carl Reichert Optische Werke AG, R.Jung, Bausch and Lomb Optical Scientific Products Division, Cambridge Instruments, E.Leitz Wetzlar, Kern & Co., and Wild Heerbrugg AG, bringing much-needed modernization and a broader degree of expertise to the newly created entity called Leica Holding B.V. group. In 1997 the name was changed to Leica Microsystems and is a wholly-owned entity of Danaher Corporation since July 2005. Danaher is an American global conglomerate. Details The company employed over 4,000 workers and had a $1 billion turnover in 2008. It is headquartered in Wetzlar, Germany, and represented in over 100 other countries. The company manufactures products for applications requiring microscopic imaging, measurement and analysis. It also offers system solutions in the areas of Life Science including biotechnology and medicine, as well as the science of raw materials and industrial quality assurance. Product categories include Virtual microscopes, Light microscopes, products for Confocal Microscopy, Surgical Microscopes, Stereo Microscopes & Macroscopes, Digital microscopes, Microscope Software, Microscope Cameras, Electron microscope Sample Preparation Equipment In the field of high resolution optical microscopy they produce commercial versions of the STED microscope offering sub-diffraction resolution. In 2007 they launched the TCS STED which operates at a resolution <100 nm. In 2009 they launched the TCS STED CW using a CW laser light source where a resolution <80 nm can be achieved. On 29 September 2011 Leica Microsystems and TrueVision 3D Surgical announced their intention to jointly produce products that will improve microsurgery outcomes in ophthalmology and neurosurgery under the Leica brand. See also Heinrich Wild Wild Heerbrugg References 1997 establishments in Germany 2005 mergers and acquisitions Companies based in Hesse Danaher subsidiaries German brands Manufacturing companies of Germany Microscopes Microscopy Optics manufacturing companies
Leica Microsystems
[ "Chemistry", "Technology", "Engineering" ]
491
[ "Microscopes", "Measuring instruments", "Microscopy" ]
36,262,566
https://en.wikipedia.org/wiki/B%C3%BCchi%20arithmetic
Büchi arithmetic of base k is the first-order theory of the natural numbers with addition and the function V_k(x), which is defined as the largest power of k dividing x, named in honor of the Swiss mathematician Julius Richard Büchi. The signature of Büchi arithmetic contains only the addition operation, the function V_k, and equality, omitting the multiplication operation entirely. Unlike Peano arithmetic, Büchi arithmetic is a decidable theory. This means it is possible to effectively determine, for any sentence in the language of Büchi arithmetic, whether that sentence is provable from the axioms of Büchi arithmetic. Büchi arithmetic and automata A subset X of the n-tuples of natural numbers is definable in Büchi arithmetic of base k if and only if it is k-recognisable. If n = 1, this means that the set of base-k representations of the integers in X is accepted by an automaton. Similarly, if n > 1, it means that there exists an automaton that reads the first digits, then the second digits, and so on, of n integers in base k, and accepts the words if the n integers are in the relation X. Properties of Büchi arithmetic If k and l are multiplicatively dependent, then the Büchi arithmetics of base k and l have the same expressivity. Indeed V_l can be defined in the first-order theory of addition and V_k. Otherwise, an arithmetic theory with both the V_k and V_l functions is equivalent to Peano arithmetic, which has both addition and multiplication, since multiplication is definable from addition, V_k and V_l. Further, by the Cobham–Semënov theorem, if a relation is definable in both k and l Büchi arithmetics, then it is definable in Presburger arithmetic. References Further reading Formal theories of arithmetic Logic in computer science Proof theory Model theory
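As a small illustration of the V_k function and of k-recognisability, the sketch below computes V_2(x) and checks the textbook example that the set of powers of 2, definable in base-2 Büchi arithmetic by the formula V_2(x) = x, corresponds to the regular language 1 0* of binary representations. The automaton encoding is hypothetical and only for demonstration.

```python
def v(k, x):
    """V_k(x): the largest power of k that divides x (defined for x >= 1)."""
    power = 1
    while x % (power * k) == 0:
        power *= k
    return power

def is_power_of_two_by_automaton(binary):
    """Accepts the regular language 1 0*, i.e. binary representations of powers of 2."""
    state = "start"
    for bit in binary:
        if state == "start" and bit == "1":
            state = "zeros"
        elif state == "zeros" and bit == "0":
            state = "zeros"
        else:
            return False
    return state == "zeros"

# The definable set {x : V_2(x) = x} and the automaton agree on every input.
for x in range(1, 20):
    assert (v(2, x) == x) == is_power_of_two_by_automaton(format(x, "b"))
print([x for x in range(1, 20) if v(2, x) == x])  # [1, 2, 4, 8, 16]
```

This agreement between a first-order definition and a finite automaton is exactly the correspondence that makes Büchi arithmetic decidable.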
Büchi arithmetic
[ "Mathematics" ]
351
[ "Logic in computer science", "Proof theory", "Mathematical logic", "Formal theories of arithmetic", "Arithmetic", "Model theory" ]
36,264,174
https://en.wikipedia.org/wiki/T-spline
In computer graphics, a T-spline is a mathematical model for defining freeform surfaces. A T-spline surface is a type of surface defined by a network of control points where a row of control points is allowed to terminate without traversing the entire surface. The control net at a terminated row resembles the letter "T". B-Splines are a type of curve widely used in CAD modeling. They consist of a list of control points (a list of (X, Y) or (X, Y, Z) coordinates) and a knot vector (a list increasing numbers, usually between 0 and 1). In order to perfectly represent circles and other conic sections, a weight component is often added, which extends B-Splines to rational B-Splines, commonly called NURBS. A NURBS curve represents a 1D perfectly smooth curve in 2D or 3D space. To represent a three-dimensional solid object, or a patch of one, B-Spline or NURBS curves are extended to surfaces. These surfaces consist of a rectangular grid of control points, called a control grid or control net, and two knot vectors, commonly called U and V. During editing, it is possible to insert a new control point into a curve without changing the shape of the curve. This is useful to allow a user to adjust this new control point, as opposed to only being able to adjust the existing control points. However, because the control grid of a B-Spline or NURBS surface has to be rectangular, it is only possible to insert an entire row or column of new control points. T-Splines are an enhancement of NURBS surfaces. They allow control points to be added to the control grid without inserting an entire new row or column. Instead, the new control points can terminate a row or column, which creates a "T" shape in the otherwise rectangular control grid. This is accomplished by assigning a knot vector to each individual control point, and creating some rules around how control points are added or removed. Modeling surfaces with T-splines can reduce the number of control points in comparison to NURBS surfaces and make pieces easier to merge, but increases the book-keeping effort to keep track of the irregular connectivity. T-splines can be converted into NURBS surfaces, by knot insertion, and NURBS can be represented as T-splines without T's or by removing knots. T-splines can therefore, in theory, do everything that NURBS can do. In practice, an enormous amount of programming was required to make NURBS work as well as they do, and creating the equivalent T-spline functionality would require similar effort. To smoothly join at points where more than three surface pieces meet, T-splines have been combined with geometrically continuous constructions of degree 3 by 3 (bi-cubic) and, more recently, of degree 4 by 4 (bi-quartic). Subdivision surfaces, NURBS surfaces, and polygon meshes are alternative technologies. Subdivision surfaces, as well as T-spline and NURBS surfaces with the addition of geometrically continuous constructions, can represent everywhere-smooth surfaces of any connectivity and topology, such as holes, branches, and handles. However, none of T-splines, subdivision surfaces, or NURBS surfaces can always accurately represent the (exact, algebraic) intersection of two surfaces within the same surface representation. Polygon meshes can represent exact intersections but lack the shape quality required in industrial design. Subdivision surfaces are widely adopted in the animation industry. Pixar's variant of the subdivision surfaces has the advantage of edge weights. T-splines do not yet have edge weights. 
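The control point and knot vector machinery described above for B-splines can be made concrete with the Cox–de Boor recursion for the basis functions. The Python sketch below is a generic, unoptimised illustration of a B-spline curve evaluator; it is not T-spline code, and it ignores both the rational weights of NURBS and the per-control-point knot vectors that T-splines add.

```python
def basis(i, degree, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of the given degree."""
    if degree == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    if knots[i + degree] != knots[i]:
        left = ((t - knots[i]) / (knots[i + degree] - knots[i])
                * basis(i, degree - 1, t, knots))
    if knots[i + degree + 1] != knots[i + 1]:
        right = ((knots[i + degree + 1] - t) / (knots[i + degree + 1] - knots[i + 1])
                 * basis(i + 1, degree - 1, t, knots))
    return left + right

def bspline_point(t, degree, control_points, knots):
    """Evaluate a B-spline curve (list of (x, y) control points) at parameter t."""
    x = sum(basis(i, degree, t, knots) * px for i, (px, _) in enumerate(control_points))
    y = sum(basis(i, degree, t, knots) * py for i, (_, py) in enumerate(control_points))
    return x, y

# A cubic curve with 4 control points and a clamped knot vector on [0, 1).
pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
knots = [0, 0, 0, 0, 1, 1, 1, 1]
print(bspline_point(0.5, 3, pts, knots))  # (2.0, 1.5)
```

A surface evaluator repeats the same idea with a grid of control points and two knot vectors; T-splines generalise this by letting individual control points carry their own local knot information so that rows of the grid may terminate.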
T-splines were initially defined in 2003. In 2007 the U.S. patent office granted patent number 7,274,364 for technologies related to T-Splines. T-Splines, Inc. was founded in 2004 to commercialize the technologies and acquired by Autodesk, Inc. in 2011. The T-spline patent, US patent 7,274,364, expired in 2024. External links Technical articles about T-splines Transitioning from NURBS to T-splines (67-minute video) NURBS and CAD: 30 Years Together An open source T-spline kernel References Computer-aided design Splines (mathematics)
T-spline
[ "Engineering" ]
889
[ "Computer-aided design", "Design engineering" ]
36,264,729
https://en.wikipedia.org/wiki/Nexus%20Q
Nexus Q is a digital media player developed by Google. Unveiled at the Google I/O developers' conference on June 27, 2012, the device was expected to be released to the public in the United States shortly thereafter for US$300. The Nexus Q was designed to leverage Google's online media offerings, such as Google Play Music, Google Play Movies & TV, and YouTube, to provide a "shared" experience. Users could stream content from the supported services to a connected television, or speakers connected to an integrated amplifier, using their Android device and the services' respective apps as a remote control for queueing content and controlling playback. The Nexus Q received mixed reviews from critics following its unveiling. While its unique spherical design was praised, the Nexus Q was criticized for its lack of functionality in comparison to similar devices such as Apple TV, including a lack of support for third-party content services, no support for streaming content directly from other devices using the DLNA standard, as well as other software issues that affected the usability of the device. The unclear market positioning of the Nexus Q was also criticized, as it carried a significantly higher price than competing media players with wider capabilities; The New York Times technology columnist David Pogue described the device as being 'wildly overbuilt' for its limited functions. The Nexus Q was given away at no cost to attendees of Google I/O, but the product's consumer launch was indefinitely postponed the following month, purportedly to collect additional feedback. Those who had pre-ordered the Nexus Q following its unveiling received the device at no cost. The Nexus Q was quietly shelved in January 2013, and support for the device in the Google Play apps was phased out beginning in May 2013. Some of the Nexus Q's concepts were repurposed for a more-successful device known as Chromecast, which similarly allows users to wirelessly queue content for playback using functions found in supported apps, but is designed as a smaller HDMI dongle with support for third-party services. Development An early iteration of the Nexus Q was first demoed at Google I/O in 2011 under the name "Project Tungsten"; the device could stream music wirelessly from another Android device to attached speakers. It served as a component of a home automation concept known as "Android@Home", which aimed to provide an Android-based framework for connected devices within a home. Following the launch of the Google Music service in November 2011, a decision was made to develop a hardware device to serve as a tie-in—a project that eventually resulted in the Nexus Q. Google engineering director Joe Britt explained that the device was designed to make music a "social, shared experience", encouraging real-world interaction between its users. He also felt that there had been "a generation of people who’ve grown up with white earbuds", who had thus not experienced the difference of music played on speakers. The Nexus Q was the first hardware product developed entirely in-house by Google, and was manufactured in a U.S.-based factory—which allowed Google engineers to inspect the devices during their production. Hardware and software The Nexus Q takes the form of a sphere with a flat base; Google designer Mike Simonian explained that its form factor was meant to represent a device that pointed towards "the cloud", and "people all around" to reflect its communal nature. 
The sphere is divided into two halves; the top half can be rotated to adjust the audio volume being output over attached speakers or to other home theater equipment, and tapped to mute. In between the two halves is a ring of 32 LEDs; these lights serve as a music visualizer that animate in time to music, and can be set to one of five different color schemes. The rear of the device contains a power connector, ethernet jack, micro HDMI and optical audio outputs, banana plugs for connecting speakers to the device's built-in 25-watt "stereo-grade" amplifier, and a micro USB connector meant to "connect future accessories and encourage general hack-ability". The Nexus Q includes an OMAP4 processor, 1 GB of RAM, and 16 GB of storage used for caching of streamed content. It also supports near-field communication and Bluetooth for pairing devices and initial setup. The Nexus Q runs a stripped-down version of Android 4.0 "Ice Cream Sandwich", and is controlled solely via supported apps on Android devices running Android 4.1 "Jelly Bean". Google announced plans to support older versions of Android following the device's official launch. Media could be queued to play on the device using a "Play to" button shown within the Google Play Music, Google Play Movies & TV, and YouTube apps. Content is streamed directly from the services by the Nexus Q, with the Android device used like a remote control. For music, multiple users could collaboratively queue songs from Google Play Music onto a playlist. A management app could be used to adjust Nexus Q hardware settings. Nexus Q did not support any third-party media services, nor could media be stored to the device, or streamed to it using the standardized DLNA protocol. Reception Most criticism of the Nexus Q centered on its relatively high price in comparison to contemporary media streaming devices and set-top boxes, such as Apple TV and Roku, especially considering its lack of features when compared to these devices. The New York Times technology columnist David Pogue described the Nexus Q as being a "baffling" device, stating that it was "wildly overbuilt for its incredibly limited functions, and far too expensive", and arguing that it would probably appeal only to people "whose living rooms are dominated by bowling ball collections." Engadget was similarly mixed, arguing that while it was a "sophisticated, beautiful device with such a fine-grained degree of engineering you can't help but respect it", and that its amplifier was capable of producing "very clean sound", the Nexus Q was a "high-price novelty" that lacked support for DLNA, lossless audio, and playback of content from external or internal storage among other features. Discontinuation Nexus Q units were distributed as a gift to attendees of Google I/O 2012, with online pre-orders to the public opening at a price of US$300. On July 31, 2012, Google announced that it would delay the official launch of the Nexus Q in order to address early feedback, and that all customers who pre-ordered the device would receive it for free. By January 2013, the device was no longer listed for sale on the Google Play website, implying that its official release had been cancelled indefinitely. Google began to discontinue software support for the Nexus Q in May 2013, beginning with an update to the Google Play Music app, and a similar update to Google Play Movies & TV in June. 
The Nexus Q has also been the subject of third-party development and experimentation; XDA-developers users discovered means for side-loading Android applications onto the Nexus Q to expand its functionality. One user demonstrated the ability to use a traditional Android home screen with keyboard and mouse input, as well as the official Netflix app. In December 2013, an unofficial build of Android 4.4 "KitKat" based on CyanogenMod code was also released for the Nexus Q, although it was unstable and lacked reliable Wi-Fi support. The Nexus Q received a de facto successor in July 2013 with the unveiling of Chromecast, a streaming device that similarly allows users to queue the playback of remote content ("cast") via a mobile device. Chromecast is contrasted by its compact HDMI dongle form factor, the availability of an SDK that allows third-party services to integrate with the device, and its considerably lower price in comparison to the Nexus Q. In late 2014, Google and Asus released a second Nexus-branded digital media player known as the Nexus Player, which served as a launch device for the digital media player and smart TV platform Android TV. See also Comparison of set-top boxes Google TV Chromebit References Further reading Gross, Doug, "Google's new Nexus Q: Made in the U.S.A.", CNN, Thu June 28, 2012 Android (operating system) devices Digital media players Google Nexus Networking hardware Products introduced in 2012 Streaming media systems Vaporware
Nexus Q
[ "Technology", "Engineering" ]
1,710
[ "Vaporware", "Computer networks engineering", "Computer systems", "Streaming media systems", "Telecommunications systems", "Networking hardware", "Computer industry" ]
36,269,430
https://en.wikipedia.org/wiki/Insect%20thermoregulation
Insect thermoregulation is the process whereby insects maintain body temperatures within certain boundaries. Insects have traditionally been considered as poikilotherms (animals in which body temperature is variable and dependent on ambient temperature) as opposed to being homeothermic (animals that maintain a stable internal body temperature regardless of external influences). However, the term temperature regulation, or thermoregulation, is currently used to describe the ability of insects and other animals to maintain a stable temperature (either above or below ambient temperature), at least in a portion of their bodies, by physiological or behavioral means. While many insects are ectotherms (animals whose heat source is primarily the environment), others are endotherms (animals that can produce heat internally by biochemical processes). These endothermic insects are better described as regional heterotherms because they are not uniformly endothermic. When heat is being produced, different temperatures are maintained in different parts of their bodies; for example, moths generate heat in their thorax prior to flight but the abdomen remains relatively cool. In-flight thermoregulation Animal flight is a very energetically expensive form of locomotion that requires a high metabolic rate. In order for an animal to fly, its flight muscles need to be capable of high mechanical power output, which in turn, due to biochemical inefficiencies, ends up producing large amounts of heat. A flying insect produces heat, which, as long as it does not exceed an upper lethal limit, will be tolerated. However, if the flying insect is also exposed to external sources of heat (for example, radiation from the sun) or ambient temperatures are too high, it should be able to thermoregulate and stay in its temperature comfort zone. Higher flight speeds would be expected to increase convective cooling; however, higher flying velocities have been shown to result in an increase, instead of a reduction, of thoracic temperature. This is probably caused by the flight muscles working at higher levels and consequently increasing thoracic heat generation. The first evidence for insect thermoregulation in flight came from experiments in moths demonstrating that dissipation of heat occurs via hemolymph movement from the thorax to the abdomen. The heart of these moths makes a loop through the center of the thorax, facilitating heat exchange and converting the abdomen into both a heat sink and a heat radiator that helps the flying insect in maintaining a stable thoracic temperature under different ambient temperature conditions. It was believed that heat regulation was only achieved by varying heat loss until evidence for varying heat production was observed in honeybees. It was then suggested that thermal stability in honeybees, and probably many other heterothermic insects, was primarily attained by varying heat production. Whether flying insects regulate their thoracic temperature by regulating heat production or only by varying heat loss is still a matter of debate. Pre-flight thermoregulation Several large insects have evolved to warm up prior to flight so that energetically demanding activities, such as flight, are possible. This warm-up behavior exploits the inefficiency of the flight muscles, which produce excess heat, and brings them into the thermal range in which these specific muscles function best. 
The high metabolic cost of insect flight muscles means that great amounts of chemical energy are utilized by these specific muscles. However, only a very small percentage of this energy translates into actual mechanical work or wing movement. Thus, the rest of this chemical energy is transformed into heat that in turn produces body temperatures significantly greater than those of the ambient. These high temperatures at which flight muscles work impose a constraint on low temperature take-off because an insect at rest has its flight muscles at ambient temperature, which is not the optimal temperature for these muscles to function. So, heterothermic insects have adapted to make use of the excess heat produced by flight muscles to increase their thoracic temperature pre-flight. Both the dorsolongitudinal muscles (which flip down the wings during flight) and the dorsoventral muscles (which cause the wings to flip upward during flight) are involved in the pre-flight warm-up behavior but in a slightly different way. During flight, these function as antagonistic muscles to produce the wing flapping that allows for sustained flight. However, during warm-up these muscles are contracted simultaneously (or almost simultaneously in some insects) to produce no wing movement (or a minimal amount of wing movement) and produce as much heat as possible to elevate thoracic temperatures to flight-levels. The pre-flight warm-up behavior of male moths (Helicoverpa zea) has been shown to be affected by olfactory information. As in many moths, the males of this species respond to female pheromone by flying towards the female and trying to mate with her. During the warm-up of their flight muscles, and when in presence of the female pheromone, males generate heat at higher rates, so as to take off earlier and out-compete other males that might have also sensed the pheromone. Achieving elevated temperatures as stated above fall under the term physiological thermoregulation because heat is generated by a physiological process inside the insect. The other described way of thermoregulation is called behavioral thermoregulation because body temperature is controlled by behavioral means, such as basking in the sun. Butterflies are a good example of insects that are heliotherms (deriving heat almost exclusively from the sun). Other thermoregulatory examples Some nocturnal dung beetles have been shown to increase their ball-making and ball-rolling velocity when their thoracic temperature increases. In these beetles, dung is a precious commodity that allows them to find a mate and feed their larvae. Discovering the resource soon is important so that they can start rolling a ball as soon as possible and take it to a distant place for burying. The beetles first detect the dung by olfactory cues and fly towards it rapidly. As they first arrive, their body temperatures are still high due to their flight metabolism, which allows them to make and roll balls faster; and the bigger the ball, the better chances they have of getting a mate. However, as time passes, a grounded beetle making a ball starts to cool off and it becomes harder to increase the size of the dung ball and also transport it. So, there is a trade-off between making a large ball that would guarantee a mate but might be not easily transported and a smaller ball, which might not attract a mate but can be safely taken to the burying place. 
Additionally, other beetles that arrive later (which are hotter) can actually fight over balls and have been shown to usually win against beetles that are cooler. Another example of thermoregulation is that of heat being used as a defensive mechanism. The Japanese honeybee (Apis cerana japonica) is preyed upon by a hornet (Vespa simillima xanthoptera) that usually waits at the entrance of their hive. Even though the hornets are many times bigger than the bees, the bees' numbers make the difference. These bees are adapted to survive higher temperatures than the hornet can. Thus, bees are able to kill their attacker by making a ball around the hornet and then increasing their body temperature above the hornet's lethal limit. Anopheles mosquitoes, vectors of malaria, thermoregulate each time they take a blood meal on a warm-blooded animal. During blood ingestion, they emit a droplet composed of urine and fresh blood that they keep attached to their anus. The liquid of the drop evaporates, dissipating the excess heat in their bodies that results from the rapid ingestion of relatively large amounts of blood much warmer than the insect itself. This evaporative cooling mechanism helps them to avoid the thermal stress associated with their haematophagous way of life. The Grayling butterfly (Hipparchia semele) engages in thermoregulation as well. The species prefers to live in open habitats with easy access to the sun, and can be seen orienting its body to maximize exposure to the sun. At lower temperatures, the grayling can be observed exposing as much of its body as possible to the sun, whereas at higher temperatures, it exposes as little of its body as possible. This behavior is often used by male butterflies defending their territory, as this thermoregulatory behavior allows them to maximize their flight efficiency. The thermoregulatory properties of dark coloration are important for mate searching by Phymata americana males. In cool climates, darker coloration allows males to reach warmer temperatures faster, which increases locomotor ability and decreases mate search time. See also Thermoregulation Entomology Ethnoentomology Flying and gliding animals References Further reading SP Roberts, JF Harrison (1999), "Mechanisms of thermal stability during flight in the honeybee Apis mellifera", J Exp Biol, 202 (11):1523-33 Cheng-Chia Tsai, Richard A. Childers, Norman Nan Shi, Crystal Ren, Julianne N. Pelaez, Gary D. Bernard, Naomi E. Pierce & Nanfang Yu (2020), "Physical and behavioral adaptations to prevent overheating of the living wings of butterflies", Nature Communications, 11 (551) Insect physiology Thermoregulation Articles containing video clips
Insect thermoregulation
[ "Biology" ]
1,936
[ "Thermoregulation", "Homeostasis" ]
36,271,791
https://en.wikipedia.org/wiki/Fundamental%20increment%20lemma
In single-variable differential calculus, the fundamental increment lemma is an immediate consequence of the definition of the derivative of a function f at a point a: f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}. The lemma asserts that the existence of this derivative implies the existence of a function \varphi such that f(a+h) = f(a) + f'(a)h + \varphi(h)h with \lim_{h \to 0} \varphi(h) = 0, for sufficiently small but non-zero h. For a proof, it suffices to define \varphi(h) = \frac{f(a+h) - f(a)}{h} - f'(a) and verify this meets the requirements. The lemma says, at least when h is sufficiently close to zero, that the difference quotient can be written as the derivative f'(a) plus an error term that vanishes at h = 0. That is, one has \frac{f(a+h) - f(a)}{h} = f'(a) + \varphi(h). Differentiability in higher dimensions In that the existence of \varphi uniquely characterises the number f'(a), the fundamental increment lemma can be said to characterise the differentiability of single-variable functions. For this reason, a generalisation of the lemma can be used in the definition of differentiability in multivariable calculus. In particular, suppose f maps some subset of \mathbb{R}^n to \mathbb{R}^m. Then f is said to be differentiable at a if there is a linear function M and a function \Phi such that f(a+h) = f(a) + M(h) + \|h\|\,\Phi(h) with \lim_{h \to 0} \Phi(h) = 0, for non-zero h sufficiently close to 0. In this case, M is the unique derivative (or total derivative, to distinguish from the directional and partial derivatives) of f at a. Notably, M is given by the Jacobian matrix of f evaluated at a. We can write the above equation in terms of the partial derivatives as f(a+h) = f(a) + \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(a)\,h_i + \|h\|\,\Phi(h). See also Generalizations of the derivative References Differential calculus
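The decomposition in the lemma can be checked numerically. The following is a minimal sketch (the choice of f(x) = x^3 and a = 2 is purely illustrative): it computes the error term φ(h) = (f(a+h) − f(a))/h − f′(a) and confirms both that φ(h) shrinks as h → 0 and that f(a+h) = f(a) + f′(a)h + φ(h)h holds to rounding error.

```python
# Numerical illustration of the fundamental increment lemma for f(x) = x**3 at a = 2.
# phi(h) = (f(a+h) - f(a))/h - f'(a) should tend to 0 as h -> 0,
# so that f(a+h) = f(a) + f'(a)*h + phi(h)*h for small non-zero h.

def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2

def phi(a, h):
    """Error term of the increment lemma; defined only for non-zero h."""
    return (f(a + h) - f(a)) / h - f_prime(a)

a = 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    # Check the decomposition f(a+h) = f(a) + f'(a)h + phi(h)h (up to rounding).
    lhs = f(a + h)
    rhs = f(a) + f_prime(a) * h + phi(a, h) * h
    print(f"h={h:7.3f}  phi(h)={phi(a, h):+.6f}  |lhs-rhs|={abs(lhs - rhs):.2e}")
```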
Fundamental increment lemma
[ "Mathematics" ]
294
[ "Differential calculus", "Calculus" ]
36,271,919
https://en.wikipedia.org/wiki/%283-Aminopropyl%29triethoxysilane
(3-Aminopropyl)triethoxysilane (APTES) is an aminosilane frequently used in the process of silanization, the functionalization of surfaces with alkoxysilane molecules. It can also be used for the covalent attachment of organic films to metal oxides such as silica and titania. Use with PDMS APTES can be used to covalently bond thermoplastics to poly(dimethylsiloxane) (PDMS). Thermoplastics are treated with oxygen plasma to functionalize surface molecules, and subsequently coated with an aqueous 1% by volume APTES solution. PDMS is treated with oxygen plasma and placed in contact with the functionalized thermoplastic surface. A stable, covalent bond forms within 2 minutes. Silsesquioxane synthesis Octa(3-aminopropyl)silsesquioxane can be obtained in a one-step hydrolytic condensation using APTES and hydrochloric or trifluoromethanesulfonic acid (CF3SO3H). Use with cell cultures APTES-functionalized surfaces have been shown to be nontoxic to embryonic rat cardiomyocytes in vitro. Further experimentation is needed to evaluate toxicity to other cell types in extended culture. Toxicity APTES is a toxic compound with an MSDS health hazard score of 3. APTES fumes are destructive to the mucous membranes and the upper respiratory tract, and it should be used in a fume hood with gloves. If a fume hood is not available, a face shield and a full-face respirator must be worn. The target organs of APTES are the nerves, liver and kidney. References Amines Silyl ethers Ethoxy compounds
(3-Aminopropyl)triethoxysilane
[ "Chemistry" ]
370
[ "Amines", "Bases (chemistry)", "Functional groups" ]
36,273,089
https://en.wikipedia.org/wiki/C9H12N2
{{DISPLAYTITLE:C9H12N2}} The molecular formula C9H12N2 (molar mass: 148.20 g/mol, exact mass: 148.1000 u) may refer to: Nornicotine, also known as demethylnicotine 4-Pyrrolidinylpyridine Molecular formulas
C9H12N2
[ "Physics", "Chemistry" ]
77
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,523,889
https://en.wikipedia.org/wiki/Enthalpy%20of%20fusion
In thermodynamics, the enthalpy of fusion of a substance, also known as (latent) heat of fusion, is the change in its enthalpy resulting from providing energy, typically heat, to a specific quantity of the substance to change its state from a solid to a liquid, at constant pressure. The enthalpy of fusion is the amount of energy required to convert one mole of solid into liquid. For example, when melting 1 kg of ice (at 0 °C under a wide range of pressures), 333.55 kJ of energy is absorbed with no temperature change. The heat of solidification (when a substance changes from liquid to solid) is equal and opposite. This energy includes the contribution required to make room for any associated change in volume by displacing its environment against ambient pressure. The temperature at which the phase transition occurs is the melting point or the freezing point, according to context. By convention, the pressure is assumed to be unless otherwise specified. Overview The enthalpy of fusion is a latent heat, because, while melting, the heat energy needed to change the substance from solid to liquid at atmospheric pressure is latent heat of fusion, as the temperature remains constant during the process. The latent heat of fusion is the enthalpy change of any amount of substance when it melts. When the heat of fusion is referenced to a unit of mass, it is usually called the specific heat of fusion, while the molar heat of fusion refers to the enthalpy change per amount of substance in moles. The liquid phase has a higher internal energy than the solid phase. This means energy must be supplied to a solid in order to melt it and energy is released from a liquid when it freezes, because the molecules in the liquid experience weaker intermolecular forces and so have a higher potential energy (a kind of bond-dissociation energy for intermolecular forces). When liquid water is cooled, its temperature falls steadily until it drops just below the freezing point at 0 °C. The temperature then remains constant at the freezing point while the water crystallizes. Once the water is completely frozen, its temperature continues to fall. The enthalpy of fusion is almost always a positive quantity; helium is the only known exception. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below . This means that, at appropriate constant pressures, these substances freeze with the addition of heat. In the case of 4He, this pressure range is between 24.992 and . These values are mostly from the CRC Handbook of Chemistry and Physics, 62nd edition. The conversion between cal/g and J/g in the above table uses the thermochemical calorie (calth) = 4.184 joules rather than the International Steam Table calorie (calINT) = 4.1868 joules. Examples Solubility prediction The heat of fusion can also be used to predict solubility for solids in liquids. Provided an ideal solution is obtained, the mole fraction of solute at saturation x is a function of the heat of fusion, the melting point of the solid and the temperature of the solution: \ln x = -\frac{\Delta H_\text{fus}}{R}\left(\frac{1}{T} - \frac{1}{T_\text{fus}}\right). Here, R is the gas constant. 
For example, the solubility of paracetamol in water at 298 K is predicted from this relation to be x \approx 0.0248. Since the molar masses of water and paracetamol are 18.0153 g/mol and 151.17 g/mol respectively, and the density of the solution is taken as 1000 g/L, an estimate of the solubility in grams per liter is: \frac{0.0248}{1 - 0.0248} \times \frac{1000\ \text{g/L}}{18.0153\ \text{g/mol}} \times 151.17\ \text{g/mol} \approx 213\ \text{g/L}. 1000 g/L * (mol/18.0153g) is an estimate of the number of moles of molecules in 1 L of solution, using the water density as a reference; 0.0248 * (1000 g/L * (mol/18.0153g)) is the molar concentration (in mol/L) of the solute in the saturated solution; 0.0248 * (1000 g/L * (mol/18.0153g)) * 151.17g/mol converts that molar concentration of solute into a mass concentration; 1-0.0248 is the fraction of the solution that is solvent. The result is a deviation from the real solubility (240 g/L) of 11%. This error can be reduced when an additional heat capacity parameter is taken into account. Proof At equilibrium the chemical potentials for the solute in the solution and the pure solid are identical: \mu^\text{solid} = \mu^\text{liquid}_\text{solution}, or \mu^\text{solid} = \mu^\text{liquid}_\text{pure} + RT\ln x, with R the gas constant and T the temperature. Rearranging gives: RT\ln x = -\left(\mu^\text{liquid}_\text{pure} - \mu^\text{solid}\right), and since the Gibbs energy of fusion \Delta G_\text{fus} is the difference in chemical potential between the pure liquid and the pure solid, it follows that RT\ln x = -\Delta G_\text{fus}. Application of the Gibbs–Helmholtz equation: \left(\frac{\partial}{\partial T}\left(\frac{\Delta G_\text{fus}}{T}\right)\right)_p = -\frac{\Delta H_\text{fus}}{T^2} ultimately gives: \frac{\partial \ln x}{\partial T} = \frac{\Delta H_\text{fus}}{RT^2} or: \mathrm{d}\ln x = \frac{\Delta H_\text{fus}}{RT^2}\,\mathrm{d}T, and with integration from the melting point T_\text{fus}, where the pure liquid has x = 1, down to the temperature T of the solution: \int_{0}^{\ln x} \mathrm{d}\ln x = \int_{T_\text{fus}}^{T} \frac{\Delta H_\text{fus}}{RT^2}\,\mathrm{d}T, the result is obtained: \ln x = -\frac{\Delta H_\text{fus}}{R}\left(\frac{1}{T} - \frac{1}{T_\text{fus}}\right). See also Enthalpy of vaporization Heat capacity Thermodynamic databases for pure substances Joback method (Estimation of the heat of fusion from molecular structure) Latent heat Lattice energy Heat of dilution Notes References Enthalpy
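The solubility relation is straightforward to evaluate numerically. The sketch below is illustrative only: the enthalpy of fusion and melting point are placeholder values chosen for demonstration, not data taken from the article, so measured values should be substituted for a real prediction. The mole-fraction-to-g/L conversion follows the same water-reference estimate used in the worked example above.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_mole_fraction_solubility(delta_h_fus, t_fus, t):
    """Ideal-solution solubility: ln x = -(dHfus / R) * (1/T - 1/Tfus)."""
    return math.exp(-(delta_h_fus / R) * (1.0 / t - 1.0 / t_fus))

# Placeholder inputs for illustration only -- use measured values for a real prediction.
delta_h_fus = 28_000.0  # J/mol (assumed enthalpy of fusion of the solute)
t_fus = 443.0           # K     (assumed melting point of the solute)
t = 298.0               # K     (temperature of the solution)

x = ideal_mole_fraction_solubility(delta_h_fus, t_fus, t)

# Rough g/L estimate as in the worked example: approximate the moles of solvent in one
# litre by 1000 g / 18.0153 g/mol, multiply by x/(1-x) (moles of solute per mole of
# solvent) and by the solute molar mass of 151.17 g/mol.
moles_solvent_per_litre = 1000.0 / 18.0153
grams_per_litre = (x / (1.0 - x)) * moles_solvent_per_litre * 151.17

print(f"predicted mole fraction: {x:.4f}")
print(f"approximate solubility:  {grams_per_litre:.0f} g/L")
```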
Enthalpy of fusion
[ "Physics", "Chemistry", "Mathematics" ]
1,041
[ "Enthalpy", "Quantity", "Physical quantities", "Thermodynamic properties" ]
23,524,398
https://en.wikipedia.org/wiki/Rapid%20phase%20transition
Rapid phase transition or RPT is an explosive boiling phenomenon realized in liquefied natural gas (LNG) incidents, in which LNG vaporizes violently upon coming into contact with water, causing what is known as a physical explosion. During such explosions there is no combustion; rather, a huge amount of energy is transferred in the form of heat from the room-temperature water to the LNG across a temperature difference of roughly 175 K (175 °C). Liquefied natural gas, or LNG, is natural gas that is liquefied at atmospheric pressure and −161.5 °C (111.7 K; −258.7 °F). It is odorless, tasteless, colorless, and not poisonous but causes asphyxia. It can cause frostbite due to its cryogenic temperature. If saturated LNG contacts liquid water (e.g. sea water, which has an average temperature of 15 °C), heat is transferred from the water to the LNG, rapidly vaporizing it. This results in an explosion because the volume occupied by natural gas in its gaseous form is 600 times greater than when it is liquefied; this is the phenomenon of rapid phase transition. See also BLEVE Dry ice bomb References Phase transitions Explosions Liquefied natural gas Process safety
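The order of magnitude of the liquid-to-gas expansion behind an RPT can be estimated with the ideal gas law. The figures below (an assumed LNG density of about 450 kg/m3 and pure methane as the vapour) are illustrative assumptions rather than values from the article.

```python
# Rough estimate of the liquid-to-gas expansion ratio behind an LNG rapid phase transition.
# The density figure below is an assumed round number for illustration, not a value from the article.

rho_lng = 450.0          # kg/m^3, assumed density of LNG near its boiling point
molar_mass_ch4 = 0.016   # kg/mol, methane (the main LNG component)
R = 8.314                # J/(mol*K)
T = 288.15               # K, roughly the 15 degC sea-water temperature mentioned above
p = 101_325.0            # Pa, atmospheric pressure

# Ideal-gas density of methane vapour at ambient conditions.
rho_gas = p * molar_mass_ch4 / (R * T)

expansion_ratio = rho_lng / rho_gas
print(f"gas density ~ {rho_gas:.3f} kg/m^3, expansion ratio ~ {expansion_ratio:.0f}x")
# Prints an expansion ratio of several hundred, consistent with the "~600 times"
# figure quoted for natural gas.
```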
Rapid phase transition
[ "Physics", "Chemistry", "Engineering" ]
271
[ "Physical phenomena", "Phase transitions", "Safety engineering", "Phases of matter", "Critical phenomena", "Process safety", "Explosions", "Chemical process engineering", "Statistical mechanics", "Matter" ]
23,524,651
https://en.wikipedia.org/wiki/C9H6O2
{{DISPLAYTITLE:C9H6O2}} The molecular formula C9H6O2 (molar mass: 146.14 g/mol, exact mass: 146.036779 u) may refer to: Chromone Coumarin 1,3-Indandione Isocoumarin Phenylpropiolic acid Molecular formulas
C9H6O2
[ "Physics", "Chemistry" ]
81
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,524,936
https://en.wikipedia.org/wiki/C13H24N2O
The molecular formula C13H24N2O (molar mass: 224.34 g/mol, exact mass: 224.1889 u) may refer to: Cuscohygrine Dicyclohexylurea Molecular formulas
C13H24N2O
[ "Physics", "Chemistry" ]
52
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,525,573
https://en.wikipedia.org/wiki/Executable%20architecture
An Executable Architecture (EA), in general, is the description of a system architecture (including software and/or otherwise) in a formal notation together with the tools (e.g. compilers/translators) that allow the automatic or semi-automatic generation of artifacts (e.g. capability gap analysis (CGA), models, software stubs, Military Scenario Definition Language (MSDL)) from that notation and which are used in the analysis, refinement, and/or the implementation of the architecture described. Closely related subjects Subjects closely related to EA include: Object Management Group's Model-driven architecture Object Management Group's Business Process Management Initiative Vanderbilt University's Model Integrated Computing (MIC) Implementations Implementations of EA include: Rational Rose Generic Modeling Environment (GME) Open-Source eGov Reference Architecture (OSERA) See also Business Process Execution Language (BPEL) Business Process Management Initiative (BPMI) Business Process Modeling Language (BPML) Executable Operational Architecture Model-driven architecture (MDA) Model-driven engineering (MDE) Object Management Group (OMG) Semantic Web Unified Process Unified Modeling Language (UML) Vanderbilt University References External links Model Integrated Computing (MIC) Website OSERA's Website Executable Architecture of Software-Defined Enterprises Systems engineering Military acquisition Military simulation
Executable architecture
[ "Engineering" ]
272
[ "Systems engineering" ]
23,528,678
https://en.wikipedia.org/wiki/Engineered%20Glass%20Products
Engineered Glass Products, LLC, (or EGP) in Chicago, Illinois, is the parent company of Marsco Glass Products, LLC, and Thermique Technologies, LLC. History Marsco Manufacturing (later Engineered Glass Products) was founded in 1947 as a manufacturer of specialized glass products, most importantly customized dials and gauges. With the arrival of television, Marsco became one of the first manufacturers to produce protective gray glass shields for television tubes. In the 1970s, Marsco engineers introduced a heat-resistant coating for glass that greatly expanded the uses for glass in kitchen appliances like self-cleaning ovens. Marsco then formed partnerships with well-known industry names like General Electric to become the dominant supplier of heat-resistant coated glass in North America. In 1995, general manager Mike Hobbs engineered a management buyout of the company and began to modernize the company’s corporate structure. Hobbs refocused the company on research and diversification of technology. Fred Fowler signed on as company president to direct internal operations as Marsco pursued its new mission. In 2002, Marsco engineers developed and later patented a new control technology for electrically heated glass. This technology allowed Marsco to manufacture glass panes that radiate infrared heat energy up to 350 degrees Fahrenheit (177 degrees Celsius) with adjustable temperature control. The first commercial application of this control technology was a heated glass shelf for displaying warm foods in merchandising cases. Next, the company developed a towel warmer with a glass heating element. In 2006, Marsco Manufacturing transformed into Engineered Glass Products. The company then created two subsidiaries: Marsco Glass Products, which continues to manufacture heat-resistant coated glass for home appliances, and Thermique Technologies, which manufactures heated glass products and components. Corporate management In 1995, general manager Mike Hobbs engineered a management buyout of EGP, then known as Marsco Manufacturing. Hobbs oversaw the development of Thermique heated glass and the creation of Thermique Technologies as a separate entity. Hobbs is CEO of both EGP and Thermique Technologies. Fred Fowler is a president of EGP and Marsco Glass Products. Products Marsco Glass Products is the world’s leading manufacturer of coated glass for self-cleaning ovens. Designed for safety and energy savings, the company’s Heat Barrier coating is applied to one side of the glass in the HBI product line and both sides of the glass in the HBII product line. References Glass production Manufacturing companies based in Chicago
Engineered Glass Products
[ "Materials_science", "Engineering" ]
506
[ "Glass engineering and science", "Glass production" ]
33,653,025
https://en.wikipedia.org/wiki/Helium%20analyzer
A Helium analyzer is an instrument used to identify the presence and concentration of helium in a mixture of gases. In Technical diving where breathing gas mixtures known as Trimix comprising oxygen, helium and nitrogen are used, it is necessary to know the fraction of helium in the mixture to reliably calculate decompression schedules for dives using that mixture. Thermal conductivity principle Portable instruments for the analysis of helium content of breathing gas mixtures may be based on a thermal conductivity sensor (katharometer). These sensors can be very stable and maintenance free and also highly reliable and accurate. A typical thermal helium analyser comprises two chambers, each with an identical thermal conductivity sensor. One chamber is sealed and is filled with pure helium as the reference gas, and the other receives the sample gas. The difference in thermal conductivity of the reference and sample gases is measured and converted into a concentration value by the electronic circuitry in the instrument. The system is inherently stable and when precise temperature compensation is made, the system is more than adequately accurate for breathing gas analysis. Accuracy and display precision is typically within 0.1%, and accuracy within 1% is considered sufficient for most decompression algorithms. The thermal conductivity of nitrogen and oxygen are very similar, and that of helium very different so that the ratio of oxygen and nitrogen in the mix is relatively unimportant, and need not be compensated. This allows a direct reading of helium fraction from these instruments. However, for greater accuracy and compensation to oxygen cross-sensitivity, some instruments include an oxygen cell, and in these cases can generally give a full helium and oxygen analysis of the mixture simultaneously. Speed of sound principle Helium content may also be determined on the basis of measuring the speed of sound in the analyzed gas mixture. The speed of sound depends on the mixture of gases and the temperature of the mix; in the analysis of trimix the speed of sound can be described by a non-linear function of temperature, oxygen content and helium content, and thus the content of helium can be determined by measuring the speed of sound through the mix, the temperature of the mix and its oxygen content. See also References External links Breathing gases Underwater diving safety Measurement Diving support equipment
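A much-simplified sketch of the speed-of-sound principle is shown below. It assumes an ideal He/O2/N2 mixture with mole-fraction averaging of molar mass and heat capacity, which is only an approximation; real instruments rely on calibrated empirical relations. The example synthesises a "measured" speed of sound from an assumed 45% helium trimix and then recovers the helium fraction by bisection.

```python
import math

R = 8.314  # J/(mol*K)

# Molar masses (kg/mol) and molar heat capacities at constant pressure (J/(mol*K)),
# treating He as a monatomic and O2/N2 as diatomic ideal gases.
GASES = {
    "He": (0.0040026, 2.5 * R),
    "O2": (0.0319988, 3.5 * R),
    "N2": (0.0280134, 3.5 * R),
}

def speed_of_sound(f_he, f_o2, temperature_k):
    """Ideal-gas speed of sound in a He/O2/N2 mix with the given mole fractions."""
    f_n2 = 1.0 - f_he - f_o2
    fractions = {"He": f_he, "O2": f_o2, "N2": f_n2}
    m_mix = sum(frac * GASES[gas][0] for gas, frac in fractions.items())
    cp_mix = sum(frac * GASES[gas][1] for gas, frac in fractions.items())
    gamma = cp_mix / (cp_mix - R)
    return math.sqrt(gamma * R * temperature_k / m_mix)

def helium_fraction(measured_c, f_o2, temperature_k):
    """Bisection on the helium fraction; the speed of sound rises monotonically with He content."""
    lo, hi = 0.0, 1.0 - f_o2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if speed_of_sound(mid, f_o2, temperature_k) < measured_c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a trimix with 18% O2 at 25 degC; the "measured" speed is synthesised
# from an assumed 45% helium content and then recovered by the solver.
c_measured = speed_of_sound(0.45, 0.18, 298.15)
print(f"recovered helium fraction: {helium_fraction(c_measured, 0.18, 298.15):.3f}")
```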
Helium analyzer
[ "Physics", "Mathematics" ]
452
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
33,653,162
https://en.wikipedia.org/wiki/Booster%20pump
A booster pump is a machine which increases the pressure of a fluid. It may be used with liquids or gases, and the construction details vary depending on the fluid. A gas booster is similar to a gas compressor, but generally a simpler mechanism which often has only a single stage of compression, and is used to increase pressure of a gas already above ambient pressure. Two-stage boosters are also made. Boosters may be used for increasing gas pressure, transferring high pressure gas, charging gas cylinders and scavenging. Water pressure On new construction and retrofit projects, water pressure booster pumps are used to provide adequate water pressure to upper floors of high-rise buildings. The need for a water pressure booster pump can also arise after the installation of a backflow prevention device (BFP), which is currently mandated in many municipalities to prevent contaminants within a building from entering the public water supply. The use of BFPs began after The Clean Water Act was passed. These devices can cause a pressure loss of 12 PSI, and can cause flushometers on upper floors not to work properly. After pipes have been in service for an extended period, scale can build up on the inside surfaces, which will cause a pressure drop when the water flows. Water pressure booster construction and function Booster pumps for household water pressure are usually simple electrically driven centrifugal pumps with a non-return valve. They may be constant speed pumps which switch on when pressure drops below the low pressure set-point and switch off when pressure reaches the high set-point, or variable speed pumps which are controlled to maintain a constant output pressure. Constant speed pumps are switched on by a normally closed low-pressure switch and will continue to run until the pressure rises to open the high pressure switch. They will cycle whenever enough water is used to cause a pressure drop below the low set point. An accumulator in the upstream pipeline will reduce cycling. Variable speed pumps use pressure feedback to electronically control motor speed to maintain a reasonably constant discharge pressure. Most applications run off AC mains current and use an inverter to control motor speed. Installations that provide water to high-rise buildings may need boosters at several levels to provide acceptably consistent pressure on all floors. In such a case, independent boosters may be installed at various levels, each boosting the pressure provided by the next lower level. It is also possible to boost once to the maximum pressure required, and then to use a pressure reducer at each level. This method would be used if there is a holding tank on the roof with gravity feed to the supply system. Fire sprinkler booster pumps Multi-story buildings equipped with fire sprinkler systems may require a large booster pump to deliver sufficient water pressure and volume to upper floors in the event of a fire. Such pumps are often powered by a diesel engine dedicated to this purpose. The engine needs a fuel tank and an automatic controller that will start the booster pump when it is needed. A small auxiliary electrically-powered booster pump (called a "jockey pump") is often included in the system to maintain the sprinkler pipes at sufficient pressure, without requiring startup of the large diesel engine. Any emergency system must be periodically tested and maintained to ensure its reliability. 
A diesel engine must be started and operated for testing, and a battery bank for the starting motor must be maintained or replaced periodically. In recent years, a larger electrical pump with substantial battery backup may be substituted for the diesel engine, reducing but not eliminating the need for maintenance. Gas pressure Gas pressure boosting may be used to fill storage cylinders to a higher pressure than the available gas supply, or to provide production gas at pressure higher than line pressure. Examples include: Breathing gas blending for underwater diving where the gas is to be supplied from high-pressure cylinders, as in scuba, scuba replacement and surface-supplied mixed gas diving, where the component gases are blended by partial pressure addition to the storage cylinders, and the mixture storage pressure may be higher than the available pressure of the components. Helium reclaim systems, where the heliox breathing gas exhaled by a saturation diver is piped back to the surface, oxygen is added to make up the required composition, and the gas is boosted to the appropriate supply pressure, filtered, scrubbed of carbon dioxide, and returned to the gas distribution panel to be supplied to the diver again, or returned to high pressure storage Workshop compressed air is usually provided at a pressure suited to the majority of the applications, but some may need a higher pressure. A small booster can be effective to provide this air. Gas booster construction and function Gas booster pumps are usually piston or plunger type compressors. A single-acting, single-stage booster is the simplest configuration, and comprises a cylinder, designed to withstand the operating pressures, with a piston which is driven back and forth inside the cylinder. The cylinder head is fitted with supply and discharge ports, to which the supply and discharge hoses or pipes are connected, with a non-return valve on each, constraining flow in one direction from supply to discharge. When the booster is inactive, and the piston is stationary, gas will flow from the inlet hose, through the inlet valve into the space between the cylinder head and the piston. If the pressure in the outlet hose is lower, it will then flow out and to whatever the outlet hose is connected to. This flow will stop when the pressure is equalized, taking valve opening pressures into account. Once the flow has stopped, the booster is started, and as the piston withdraws along the cylinder, increasing the volume between the cylinder head and the piston crown, the pressure in the cylinder will drop, and gas will flow in from the inlet port. On the return cycle, the piston moves toward the cylinder head, decreasing the volume of the space and compressing the gas until the pressure is sufficient to overcome the pressure in the outlet line and the opening pressure of the outlet valve. At that point, the gas will flow out of the cylinder via the outlet valve and port. There will always be some compressed gas remaining in the cylinder and cylinder head spaces at the top of the stroke. The gas in this "dead space" will expand during the next induction stroke, and only after it has dropped below the supply gas pressure, more supply gas will flow into the cylinder. The ratio of the volume of the cylinder space with the piston fully withdrawn, to the dead space, is the "compression ratio" of the booster, also termed "boost ratio" in this context. 
Efficiency of the booster is related to the compression ratio, and gas will only be transferred while the pressure ratio between supply and discharge gas is less than the boost ratio, and delivery rate will drop as the inlet to delivery pressure ratio increases. Delivery rate starts at very close to swept volume when there is no pressure difference, and drops steadily until there is no effective transfer when the pressure ratio reaches the maximum boost ratio. Compression of gas will cause a rise in temperature. The heat is mostly carried out by the compressed gas, but the booster components will also be heated by contact with the hot gas. Some boosters are cooled by water jackets or external fins to increase convective cooling by the ambient air, but smaller models may have no special cooling facilities at all. Cooling arrangements will improve efficiency, but will cost more to manufacture. Boosters to be used with oxygen must be made from oxygen-compatible materials, and use oxygen-compatible lubricants to avoid fire. Configurations Single stage, single acting: There is one booster cylinder, which pressurizes the gas in one direction of piston movement, and refills the cylinder on the return stroke. Single stage, double acting: There are two booster cylinders, which operate alternately, with each one pressurizing gas while the other is refilling. The cylinders each pressurize gas fed directly from the supply, and the delivered gas from each is combined at the outlets. The cylinders work in parallel and have the same bore. Two stage, double acting: There are two cylinders, which operate alternately, each pressurizing gas while the other is refilling, but the second stage has a smaller bore and is filled by the gas pressurized by the first stage, and it pressurizes the gas further. The stages operate in series, and the gas passes through both of them in turn. Power sources Gas boosters may be driven by an electric motor, hydraulics, low or high pressure air, or manually by a lever system. Compressed air Those powered by compressed air are usually linear actuated systems, where a pneumatic cylinder directly drives the compression piston, often in a common housing, separated by one or more seals. A high pressure pneumatic drive arrangement may use the same pressure as the output pressure to drive the piston, and a low pressure drive will use a larger diameter piston to multiply the applied force. Low pressure air A common arrangement for low pressure air powered boosters is for the booster pistons to be direct coupled with the drive piston, on the same centreline. The low pressure cylinder has a considerably larger section area than the high pressure cylinders, in proportion to the pressure ratio between the drive and boosted gas. A single action booster of this type has a boost cylinder on one end of the power cylinder, and a double action booster has a boost cylinder on each end of the power cylinder, and the piston rod has a drive piston in the middle and a booster piston on each end. Oxygen boosters require some design features which may not be necessary in boosters for less reactive gases. It is necessary to ensure that drive air, which may not be sufficiently clean for safe contact with high pressure oxygen, cannot leak past the seals into the booster cylinder, and that high pressure oxygen cannot leak into the drive cylinder. 
This can be done by providing a space between the low pressure cylinder and high pressure cylinder that is vented to atmosphere, and the piston rod is sealed on each side where it passes through this space. Any gas leaks from either cylinder past the rod seals escapes harmlessly into the ambient air. A special case for gas powered boosters is where the booster uses the same gas supply to power the booster and as the gas to be boosted. This arrangement is wasteful of gas and is most suitable for use to provide small quantities of higher pressure air where large quantities of lower pressure air are already available. This system is sometimes known as a "bootstrap" booster. High pressure Electrical Electrically powered boosters may use a single or three-phase AC motor drive. The high speed rotational output of the motor must be converted to lower speed reciprocating motion of the pistons. One way this has been done (Dräger and Russian KN-3 and KN-4 military boosters) is to connect the motor to a worm drive gearbox with an eccentric output shaft driving a connecting rod which drives the double-ended piston via a central trunnion. This system is well suited to a double acting booster, either with single-stage boost by parallel connected cylinders with the same bore, or two-stage cylinders of different bores connected in series. Some of these boosters allow for the connecting rod to be disconnected and a pair of long levers to be fitted for manual operation in emergencies or where electrical power is not available. Manual Manual boosters have been made with the configuration described above, either with a single vertical lever or with a seesaw styled double ended horizontal lever, and also with two parallel vertically mounted cylinders, much like the lever-operated diver's air pumps used for the early standard diving dress but with much smaller bore to allow two operators to generate high pressures. Manufacturers High pressure gas boosters are manufactured by Haskel, MPS Technology, Dräger, Gas Compression Systems and others. Rugged and unsophisticated models (KN-3 and KN-4) were manufactured for the Soviet Armed Forces and surplus examples are now used by technical divers as they are relatively inexpensive and are supplied with a comprehensive spares and tool kit. References Gas compressors Gases Diving support equipment
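The relationship between dead space, boost ratio and delivery per stroke described in the compression-ratio discussion above can be illustrated with a simple isothermal clearance-volume model. The swept and dead volumes below are arbitrary illustrative numbers, and the model ignores heating, valve losses and real-gas effects.

```python
# Simple isothermal clearance-volume model of a single-stage gas booster, sketching how the
# delivered volume per stroke falls as the discharge-to-supply pressure ratio approaches the
# boost ratio described above.  The swept and dead volumes are illustrative numbers only.

swept_volume = 100.0   # cm^3, volume displaced by the piston
dead_volume = 5.0      # cm^3, clearance ("dead space") at the top of the stroke

boost_ratio = (swept_volume + dead_volume) / dead_volume  # maximum usable pressure ratio

def intake_fraction(pressure_ratio):
    """Fraction of the swept volume taken in per stroke (isothermal re-expansion of dead gas)."""
    fraction = 1.0 - (dead_volume / swept_volume) * (pressure_ratio - 1.0)
    return max(0.0, fraction)

print(f"boost ratio ~ {boost_ratio:.0f}")
for r in [1, 2, 5, 10, 15, 20, 21]:
    print(f"discharge/supply = {r:>2}  ->  intake per stroke = {intake_fraction(r):5.1%} of swept volume")
```

With these numbers the boost ratio is 21, and the printed intake fraction falls from 100% of the swept volume at equal pressures to zero once the pressure ratio reaches that limit, matching the qualitative behaviour described above.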
Booster pump
[ "Physics", "Chemistry" ]
2,446
[ "Matter", "Turbomachinery", "Gas compressors", "Phases of matter", "Statistical mechanics", "Gases" ]
45,160,791
https://en.wikipedia.org/wiki/Corticosteroid%20receptor
The corticosteroid receptors are receptors for corticosteroids. Corticosteroid receptors mediate the target organ response to the major products of the adrenal cortex, glucocorticoids (principally cortisol in man), and mineralocorticoids (principally aldosterone). They are members of the intracellular receptor superfamily, which is highly evolutionarily conserved and includes receptors for thyroid hormones, vitamin D, sex steroids, and retinoids. They include the following two nuclear receptors: Mineralocorticoid receptor (type I) – binds mineralocorticoids like aldosterone Glucocorticoid receptor (type II) – binds glucocorticoids like cortisol There are also membrane corticosteroid receptors, including the membrane glucocorticoid receptors and the membrane mineralocorticoid receptors, which are not well-characterized at present. References Intracellular receptors Transcription factors
Corticosteroid receptor
[ "Chemistry", "Biology" ]
201
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
45,164,071
https://en.wikipedia.org/wiki/Rectangular%20mask%20short-time%20Fourier%20transform
In mathematics and Fourier analysis, a rectangular mask short-time Fourier transform (rec-STFT) is the simplest form of the short-time Fourier transform, obtained by using a rectangular function as the window. Other types of the STFT may require more computation time than the rec-STFT. The rectangular mask function can be defined for some bound (B) over time (t) as w(t) = 1 for |t| \le B and w(t) = 0 otherwise. We can change B for different tradeoffs between desired time resolution and frequency resolution. Rec-STFT The rec-STFT of a signal x(\tau) is X(t, f) = \int_{t-B}^{t+B} x(\tau)\, e^{-j 2\pi f \tau}\, \mathrm{d}\tau. Inverse form The original signal can be recovered, for t - B < \tau < t + B, as x(\tau) = \int_{-\infty}^{\infty} X(t, f)\, e^{j 2\pi f \tau}\, \mathrm{d}f. Property The rec-STFT has properties similar to those of the Fourier transform: an integration property, a shifting property (shift along the time axis), a modulation property (shift along the frequency axis), simple forms for special inputs, a linearity property (the rec-STFT of a linear combination of signals is the same linear combination of their rec-STFTs), a power integration property, and an energy sum property (Parseval's theorem). Example of tradeoff with different B When B is smaller, the time resolution is better; when B is larger, the frequency resolution is better. Advantage and disadvantage Compared with the Fourier transform: Advantage: The instantaneous frequency can be observed. Disadvantage: Higher complexity of computation. Compared with other types of time-frequency analysis: Advantage: Least computation time for digital implementation. Disadvantage: Quality is worse than other types of time-frequency analysis. The jump discontinuity of the edges of the rectangular mask results in Gibbs ringing artifacts in the frequency domain, which can be alleviated with smoother windows. See also Uncertainty principle References Jian-Jiun Ding (2014) Time-frequency analysis and wavelet transform Fourier analysis Time–frequency analysis Transforms
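A direct discrete approximation of the rec-STFT definition above can be written in a few lines of NumPy. This is an illustrative sketch (the signal, sample spacing and frequency grid are arbitrary choices), not an optimised implementation.

```python
import numpy as np

def rec_stft(x, t_axis, freqs, B):
    """Discrete approximation of the rectangular-mask STFT
    X(t, f) = integral over [t-B, t+B] of x(tau) * exp(-j*2*pi*f*tau) dtau."""
    dt = t_axis[1] - t_axis[0]
    X = np.zeros((len(t_axis), len(freqs)), dtype=complex)
    for i, t in enumerate(t_axis):
        mask = np.abs(t_axis - t) <= B          # rectangular window of half-width B
        tau = t_axis[mask]
        seg = x[mask]
        X[i] = (seg * np.exp(-2j * np.pi * np.outer(freqs, tau))).sum(axis=1) * dt
    return X

# Example: a signal whose frequency jumps from 5 Hz to 12 Hz halfway through.
t_axis = np.arange(0.0, 2.0, 0.005)
x = np.where(t_axis < 1.0, np.cos(2 * np.pi * 5 * t_axis), np.cos(2 * np.pi * 12 * t_axis))
freqs = np.linspace(0.0, 20.0, 81)

# Smaller B -> better time resolution; larger B -> better frequency resolution.
X_narrow = rec_stft(x, t_axis, freqs, B=0.1)
X_wide = rec_stft(x, t_axis, freqs, B=0.5)
print(X_narrow.shape, X_wide.shape)
```

Plotting |X_narrow| and |X_wide| over the time-frequency plane reproduces the tradeoff described above: the narrow mask localises the frequency jump sharply in time but smears it in frequency, and the wide mask does the opposite.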
Rectangular mask short-time Fourier transform
[ "Physics", "Mathematics" ]
316
[ "Functions and mappings", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis", "Mathematical objects", "Mathematical relations", "Transforms" ]
45,165,284
https://en.wikipedia.org/wiki/Indonesian%20units%20of%20measurement
A number of units of measurement were used in Indonesia to measure length, mass, capacity, etc. Metric system adopted in 1923 and has been compulsory in Indonesia since 1938. System before metric system Old Dutch and local measures were used under Dutch East Indies. Local measures were very variable, and later they have been legally defined with their metric equivalents. Length A number of units were used to measure length. One depa was equal to 1.70 m by its legal definition. Some other units and their legal equivalents are given below: 1 hasta = depa 1 kilan = depa. Mass A number of units were used to measure mass. Ordinary One pikol (or one pecul) was equal to kg by its legal definition. Some other units and their legal equivalents are given below: 1 thail = pikol 1 catti = pikol 1 kabi = pikol 1 kulack = 0.0725 pikol 1 amat = 2 pikol 1 small bahar = 3 pikol 1 large bahar = 4.5 pikol 1 timbang = 5 pikol 1 kojang (Batavia) = 27 pikol = 1667.555 kg 1 kojang (Semarang) = 28 pikol = 1729.316 kg 1 kojang (Soerabaya) = 30 pikol = 1852.839 kg. For precious metals One thail was equal to 54.090 kg by its legal definition. Some other units and their legal equivalents are given below: 1 wang = thail 1 tali = thail 1 soekoe = thail 1 reaal = thail. For opium One thail was equal to 38.601 kg by its legal definition. Some other units and their legal equivalents are given below: 1 tji = thail 1 tjembang Mata = thail 1 hoen = thail. Area Several units were used to measure area. One bahoe (or 1 bouw) was equal to 7096.5 m2 and lieue2 (Geographic) was equal to 55.0632 km by its legal definition. Capacity Two systems, dry and liquid, were used to measure capacity. Dry Several units were used to measure dry capacity. One kojang was equal to 2011.2679 L by its legal definition. One pikol was equal to kojang. Liquid A number of units were used to measure liquid capacity. Some other units and their legal equivalents are given below: 1 takar (for oil) = 25.770 L 1 kit (for oil) = 15.159 L 1 koelak (for oil) = 3.709 L 1 kan (for various products) = 1.575 L 1 mutsje (for various products) = 0.1516 L 1 pintje (for oil) = 0.0758 L. Sumatra Several local units were used in Sumatra. Length Units for length included: 1 etto = 2 jankal 1 hailoh = 2 etto 1 tung = 4 hailoh = 12 feet. Capacity Units for capacity included: 1 koolah = 2.1173 bushel 1 pakha = 0.14535 gallon. Mass Units for mass included: 1 catty = 2.118 lb 1 maund = 77 lb 1 pecul = lb 1 candil = lb 1 ootan (for camphor) = 4 lb. Java Several local units were used in Java. Old Dutch units too were in use, and other units were varied for example one town to another.: Length One covid was equal to yard and other units were Dutch. Mass Units for mass included: 1 gantang (for coffee) = 10 catties 1 pecul = 100 catties = 135.6312 lb 1 bahar (at Bantam) = 396 lb 1 bahar (at Bantam; used for pepper) = 406.78 lb 1 bahar (at Batavia) = 610.17 lb 1 timbang (for grain) = 677.9625 lb 1 tael (at Bantam) = 0.1511 lb 1 tael (at Batavia) = 0.0847 lb. Capacity Units for capacity included: 1 kanne = 0.394 gallons 1 legger (for arrack) = 160.0 gallons 1 bambou (at Bantam) = 0.09223 bushels 1 koyang = 147.568 bushels 1 koyang (at Batavia; measure for rise) = bushels. Celebes (Modern Sulawesi) Units were resemble or identical with the units of neighbouring islands under Netherlands. Mass One pecul was equal to 135.64 lb. Molucca Islands Dutch units and other units resembling the units in Java, Sumatra, etc. were used. 
Amboyna Mass Units included: 1 bahar = 597.61 lb Sus1 mace = grain 1 tael = 55.3371 bushel. Ternate Mass One catty was equal to 1.3017 lb. References Culture of Indonesia Indonesia
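For the Javanese mass units whose equivalents are stated explicitly above (1 pecul = 100 catties = 135.6312 lb, 1 gantang = 10 catties for coffee, 1 timbang = 677.9625 lb for grain), conversion reduces to a simple table lookup. The sketch below covers only those units; values for other regions and commodities listed above differ.

```python
# A small lookup-table sketch for converting some of the Javanese mass units listed above
# into pounds, using only the equivalences stated in the text.

LB_PER_CATTY_JAVA = 135.6312 / 100          # derived from 1 pecul = 100 catties = 135.6312 lb

JAVA_MASS_IN_LB = {
    "catty": LB_PER_CATTY_JAVA,
    "gantang": 10 * LB_PER_CATTY_JAVA,      # coffee measure, 10 catties
    "pecul": 135.6312,
    "timbang": 677.9625,                    # grain measure, as listed above
}

def to_pounds(value, unit):
    """Convert a quantity in one of the listed Javanese mass units to pounds."""
    return value * JAVA_MASS_IN_LB[unit]

print(f"3 pecul   = {to_pounds(3, 'pecul'):.2f} lb")
print(f"5 gantang = {to_pounds(5, 'gantang'):.2f} lb")
```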
Indonesian units of measurement
[ "Mathematics" ]
1,053
[ "Units of measurement by country", "Quantity", "Units of measurement" ]
45,170,672
https://en.wikipedia.org/wiki/Fractional%20graph%20isomorphism
In graph theory, a fractional isomorphism of graphs whose adjacency matrices are denoted A and B is a doubly stochastic matrix D such that DA = BD. If the doubly stochastic matrix is a permutation matrix, then it constitutes a graph isomorphism. Fractional isomorphism is the coarsest of several different relaxations of graph isomorphism. Computational complexity Whereas the graph isomorphism problem is not known to be decidable in polynomial time and not known to be NP-complete, the fractional graph isomorphism problem is decidable in polynomial time because it is a special case of the linear programming problem, for which there is an efficient solution. More precisely, the conditions on matrix D that it be doubly stochastic and that DA = BD can be expressed as linear inequalities and equalities, respectively, so any such matrix D is a feasible solution of a linear program. Equivalence to coarsest equitable partition Two graphs are also fractionally isomorphic if they have a common coarsest equitable partition. A partition of a graph is a collection of pairwise disjoint sets of vertices whose union is the vertex set of the graph. A partition is equitable if for any pair of vertices u and v in the same block of the partition and any block B of the partition, both u and v have the same number of neighbors in B. An equitable partition P is coarsest if each block in any other equitable partition is a subset of a block in P. Two coarsest equitable partitions P and Q are common if there is a bijection f from the blocks of P to the blocks of Q such for any blocks B and C in P, the number of neighbors in C of any vertex in B equals the number of neighbors in f(C) of any vertex in f(B). See also Graph isomorphism References Fractional graph theory
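The linear-programming formulation described above can be tested directly with an off-the-shelf LP solver. The sketch below expresses the doubly stochastic condition and DA = BD as equality constraints on the vectorised matrix D and checks feasibility with SciPy's linprog; the example pair (two disjoint triangles versus the 6-cycle) is a standard instance of non-isomorphic but fractionally isomorphic graphs, since both are 2-regular on six vertices.

```python
import numpy as np
from scipy.optimize import linprog

def fractionally_isomorphic(A, B):
    """Check fractional isomorphism by testing feasibility of the linear program:
    find a doubly stochastic D with D A = B D."""
    n = A.shape[0]
    if B.shape[0] != n:
        return False
    I = np.eye(n)

    # Column-major vectorisation: vec(D A) = kron(A.T, I) vec(D) and vec(B D) = kron(I, B) vec(D).
    commute = np.kron(A.T, I) - np.kron(I, B)            # encodes D A - B D = 0

    rows = np.zeros((n, n * n))                          # each row of D sums to 1
    cols = np.zeros((n, n * n))                          # each column of D sums to 1
    for i in range(n):
        for j in range(n):
            rows[i, j * n + i] = 1.0                     # entry D[i, j] in column-major order
            cols[j, j * n + i] = 1.0

    A_eq = np.vstack([commute, rows, cols])
    b_eq = np.concatenate([np.zeros(n * n), np.ones(n), np.ones(n)])

    res = linprog(np.zeros(n * n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (n * n), method="highs")
    return res.status == 0

# Example: the disjoint union of two triangles versus the 6-cycle.
two_triangles = np.kron(np.eye(2), np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))
c6 = np.roll(np.eye(6), 1, axis=0) + np.roll(np.eye(6), -1, axis=0)
print(fractionally_isomorphic(two_triangles, c6))   # expected: True
```

Here the matrix D with every entry equal to 1/6 is one feasible solution, since both graphs are 2-regular, which is why the solver reports feasibility even though the graphs are not isomorphic.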
Fractional graph isomorphism
[ "Mathematics" ]
399
[ "Fractional graph theory", "Mathematical relations", "Graph theory" ]
45,173,599
https://en.wikipedia.org/wiki/Hollow%20fiber%20bioreactor
A Hollow fiber bioreactor is a 3 dimensional cell culturing system based on hollow fibers, which are small, semi-permeable capillary membranes arranged in parallel array with a typical molecular weight cut-off (MWCO) range of 10-30 kDa. These hollow fiber membranes are often bundled and housed within tubular polycarbonate shells to create hollow fiber bioreactor cartridges. Within the cartridges, which are also fitted with inlet and outlet ports, are two compartments: the intracapillary (IC) space within the hollow fibers, and the extracapillary (EC) space surrounding the hollow fibers. Cells are seeded into the EC space of the hollow fiber bioreactor and expand there. Cell culture medium is pumped through the IC space and delivers oxygen and nutrients to the cells via hollow fiber membrane perfusion. As the cells expand, their waste products and CO2 also perfuse the hollow fiber membranes and are carried away by the pumping of medium through the IC space. As waste products build up due to increased cell mass, the rate of medium flow can also be increased so that cell growth is not inhibited by waste product toxicity. Because thousands of hollow fibers may be packed into a single hollow fiber bioreactor, they increase the surface area of the cartridge considerably. As a result, cells can fill up the EC space to densities >108 cells/ml. However, the cartridge itself takes up a very small volume (oftentimes the volume of a 12-oz soda can). The fact that hollow fiber bioreactors are very small and yet enable incredibly high cell densities has led to their development for both research and commercial applications, including monoclonal antibody and influenza vaccine production. Likewise, because hollow fiber bioreactors use up significantly less medium and growth factors than traditional cell culture methods such as stirred-tank bioreactors, they offer a significant cost savings. Finally, hollow fiber bioreactors are sold as single-use disposables, resulting in significant time savings for laboratory staff and technicians. History In 1972, the Richard Knazek group at the NIH reported how mouse fibroblasts cultured on 1.5 cm3 hollow fiber capillary membranes composed of cellulose acetate were able to form 1 mm-wide nodules in 28 days. The group recorded the final cell number as approximately 1.7 x 107 cells from a starter batch of only 200,000 cells. When the same group cultured human choriocarcinoma cells on polymeric and silicone polycarbonate capillary membranes totaling less than 3 cm3 in volume, the cells expanded to an amount approximating 2.17 x 108 cells. The Knazek group was awarded the patent for hollow fiber bioreactor technology in 1974. Based on this patented technology, companies began building different and larger (commercial) scale hollow fiber bioreactors, with significant development and technological improvement occurring in the late 1980s to early 1990s. By 1990, at least three companies were reported to offer commercially available hollow fiber bioreactors. One engineering advance included adding a gas exchange cartridge, which enabled better control of system's pH and oxygen levels. Similar to a mammalian lung, the gas exchange cartridge efficiently oxygenated the culture medium, allowing the bioreactor to support higher numbers of cells. 
Combined with the ability to add or remove CO2 for precise pH control, the limitations commonly associated with large-scale cell culture were eliminated, resulting in densely packed cell cultures that could be maintained for several months. In addition, control of the fluid dynamics within each hollow fiber bioreactor led to further optimization of the cell culture environment. By alternating the pressure gradient across the hollow fiber membrane, media could flow back and forth between the EC side (cell compartment) and the IC side (hollow fiber lumen). This process, combined with the axial media flow created when media passes down the length of the fibers, optimized the growth environment throughout the entire bioreactor. This concept is termed EC cycling, and was developed as a solution to the gradients that form within hollow fiber bioreactors when media is pushed down the length of their fibers. Higher hydrostatic pressure at the axial end (media entering the fiber lumen) compared to the distal end of the bioreactor creates a Starling flow in the EC space, which is similar to what is observed in the body. This phenomenon also creates a nutrient-rich axial region and a nutrient-depleted distal region within the bioreactor. By incorporating EC cycling, the effects of Starling flow are eliminated and the entire bioreactor becomes nutrient-rich and optimized for cell growth. Optimal IC and EC space perfusion rates must be achieved in order to efficiently deliver media nutrients and growth supplements, respectively, and to collect supernatant. During the cell growth phase within these bioreactors, the media feed rate is increased to accommodate the expanding cell population. More specifically, the IC media perfusion rate is increased to provide additional glucose and oxygen to the cells while continually removing metabolic wastes such as lactic acid. When the cell space is completely filled with cells, the media feed rate plateaus, resulting in constant glucose consumption, oxygen uptake and lactate production rates. Applications With the introduction of hybridoma technology in 1975, cell culture could be applied towards the generation of secreted proteins such as monoclonal antibodies, growth hormones, and even some categories of vaccines. In order to produce these proteins on a commercial scale, new methods for culturing large batches of cells had to be developed. One such technological development was the hollow fiber bioreactor. Hollow fiber bioreactors are used to generate high concentrations of cell-derived products including monoclonal antibodies, recombinant proteins, growth factors, viruses and virus-like particles. This is possible because the semi-permeable hollow fiber membranes allow for the passage of low molecular weight nutrients and wastes from the cell-containing EC into the non-cell-containing IC space, but they do not allow the passage of larger products such as antibodies. Therefore, as a cell line (e.g., hybridoma) expands and expresses a target protein, that protein remains within the EC space and is not flushed out. At a given time point (or continually during the culture), the harvest supernatant (product) is collected, clarified and refrigerated for a future downstream application. Smaller hollow fiber bioreactors are often used for selection and optimization of cell lines prior to stepping up to larger cell culturing systems. 
Doing so saves on growth factor costs because a significant portion of the cell culture media does not require the addition of expensive components like fetal bovine serum. Likewise, the smaller hollow fiber bioreactors can be housed in a laboratory incubator just like cell culture plates and flasks. Recently, hollow fiber bioreactors have been tested as novel platforms for the commercial production of high-titer influenza A virus. In this study, both adherent and suspension Madin-Darby Canine Kidney Epithelial Cells (MDCK) were infected with two different strains of influenza: A/PR/8/34 (H1N1), and the pandemic strain A/Mexico/4108/2009 (H1N1). High titers were achieved in both the suspension and adherent cultures; furthermore, the hollow fiber bioreactor technology was found comparable in its production capacity to that of other commercial bioreactors on the market, including classic stirred-tank and wave bioreactors (Wave) and ATF perfusion systems. References Bioreactors
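The packing-density and cell-yield figures quoted above can be sanity-checked with a rough calculation. The sketch below is illustrative only: the fiber count, fiber dimensions and EC volume are assumed values chosen to be plausible for a small cartridge, not specifications of any particular product, and using the inner fiber diameter for the exchange area is a simplification.

```python
import math

# Assumed, illustrative cartridge geometry (not a specific product's specification).
n_fibers = 5000                  # hollow fibers per cartridge
fiber_inner_diameter_cm = 0.02   # 200 micrometres
fiber_length_cm = 10.0
cartridge_ec_volume_ml = 70.0    # extracapillary (cell) volume

# Total membrane surface area available for nutrient/gas exchange:
# lateral surface of a cylinder per fiber, times the number of fibers.
area_per_fiber_cm2 = math.pi * fiber_inner_diameter_cm * fiber_length_cm
total_area_cm2 = n_fibers * area_per_fiber_cm2

# Cell yield at the >10^8 cells/ml densities cited for the EC space.
cell_density_per_ml = 1e8
total_cells = cell_density_per_ml * cartridge_ec_volume_ml

print(f"membrane area ≈ {total_area_cm2:,.0f} cm^2")
print(f"cells at 1e8/ml in {cartridge_ec_volume_ml:.0f} ml EC space ≈ {total_cells:.1e}")
```

Even with these rough numbers, a soda-can-sized cartridge presents thousands of square centimetres of exchange surface and can hold on the order of 10⁹ to 10¹⁰ cells, which is the intuition behind the density and footprint claims above.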
Hollow fiber bioreactor
[ "Chemistry", "Engineering", "Biology" ]
1,554
[ "Bioreactors", "Biological engineering", "Chemical reactors", "Biochemical engineering", "Microbiology equipment" ]
45,180,368
https://en.wikipedia.org/wiki/Lithium%20helide
Lithium helide is a compound of helium and lithium with the formula LiHe. The substance is a cold low-density gas made of Van der Waals molecules, each composed of a helium atom and lithium atom bound by van der Waals force. The preparation of LiHe opens up the possibility to prepare other helium dimers, and beyond that multi-atom clusters that could be used to investigate Efimov states and Casimir retardation effects. Detection It was detected in 2013. Previously ⁷Li⁴He was predicted to have a binding energy of 0.0039 cm⁻¹ (7.7×10⁻⁸ eV, 1.2×10⁻²⁶ J, or 6 mK), and a bond length of 28 Å. Other van der Waals-bound helium molecules were previously known including Ag³He and He₂. Detection of LiHe was done via fluorescence. The lithium atom in the X²Σ state was excited to A²Π. The spectrum showed a pair of lines, each split into two with the hyperfine structure of ⁷Li. The lines had wavenumbers of 14902.563, 14902.591, 14902.740, and 14902.768 cm⁻¹. The two pairs are separated by 0.177 cm⁻¹. This is explained by two different vibrational states of the LiHe molecule: 1/2 and 3/2. The bonding between the atoms is so low that it cannot withstand any rotation or greater vibration without breaking apart. The lowest rotation states would have energies of 40 and 80 mK, greater than the binding energy around 6 mK. LiHe was formed by laser ablation of lithium metal into a cryogenic helium buffer gas at a temperature between 1 and 5 K. The proportion of LiHe molecules was proportional to the density of He, and declined as the temperature increased. Properties LiHe is polar and paramagnetic. The average separation between the lithium and helium atoms depends on the isotope. For ⁶LiHe the separation is 48.53 Å, but for ⁷LiHe the distance is much smaller at 28.15 Å on average. If the helium atom of LiHe is excited so that the 1s electron is promoted to 2s, it decays by transferring energy to ionise lithium, and the molecule breaks up. This is called interatomic Coulombic decay. The energy of the Li⁺ and He decay products is distributed in a curve that oscillates up and down about a dozen times. See also Helium compounds#Predicted compounds References Helium compounds Lithium compounds Van der Waals molecules
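The article quotes energies both in spectroscopic units (cm⁻¹) and as equivalent temperatures (mK); the two are related by E = hc·ν̃ = k_B·T. A small check in Python confirms that ~6 mK and 0.0039 cm⁻¹ describe roughly the same binding energy and expresses the 40 and 80 mK rotational energies in wavenumbers. This is a minimal sketch; the physical constants are typed in by hand from CODATA values.

```python
h    = 6.62607015e-34   # Planck constant, J s
c_cm = 2.99792458e10    # speed of light in cm/s, so h*c_cm pairs with cm^-1
kB   = 1.380649e-23     # Boltzmann constant, J/K

def wavenumber_to_mK(nu_cm):
    """Energy of a wavenumber (cm^-1) expressed as a temperature in millikelvin."""
    return h * c_cm * nu_cm / kB * 1e3

def mK_to_wavenumber(T_mK):
    """Inverse conversion: temperature in millikelvin to wavenumbers (cm^-1)."""
    return kB * (T_mK * 1e-3) / (h * c_cm)

print(wavenumber_to_mK(0.0039))   # ~5.6 mK, i.e. the ~6 mK binding energy quoted
print(mK_to_wavenumber(40.0))     # lowest rotational state, ~0.028 cm^-1
print(mK_to_wavenumber(80.0))     # ~0.056 cm^-1; both exceed the binding energy
```

The output shows why the molecule cannot support rotation: the lowest rotational energies correspond to wavenumbers roughly an order of magnitude larger than the bond's ~0.004 cm⁻¹ binding energy.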
Lithium helide
[ "Physics", "Chemistry" ]
530
[ "Molecules", "Matter", "Van der Waals molecules" ]
35,272,263
https://en.wikipedia.org/wiki/Linear%20predictor%20function
In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable. This sort of function usually appears in linear regression, where the coefficients are called regression coefficients. However, they also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis. In many of these models, the coefficients are referred to as "weights". Definition The basic form of a linear predictor function for data point i (consisting of p explanatory variables), for i = 1, ..., n, is f(i) = β0 + β1 xi1 + ... + βp xip, where xik, for k = 1, ..., p, is the value of the k-th explanatory variable for data point i, and β0, β1, ..., βp are the coefficients (regression coefficients, weights, etc.) indicating the relative effect of a particular explanatory variable on the outcome. Notations It is common to write the predictor function in a more compact form as follows: The coefficients β0, β1, ..., βp are grouped into a single vector β of size p + 1. For each data point i, an additional explanatory pseudo-variable xi0 is added, with a fixed value of 1, corresponding to the intercept coefficient β0. The resulting explanatory variables xi0(= 1), xi1, ..., xip are then grouped into a single vector xi of size p + 1. Vector Notation This makes it possible to write the linear predictor function as follows: f(i) = β · xi, using the notation for a dot product between two vectors. Matrix Notation An equivalent form using matrix notation is as follows: f(i) = βᵀxi = xiᵀβ, where β and xi are assumed to be (p+1)-by-1 column vectors, βᵀ is the matrix transpose of β (so βᵀ is a 1-by-(p+1) row vector), and βᵀxi indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar. Linear regression An example of the usage of a linear predictor function is in linear regression, where each data point is associated with a continuous outcome yi, and the relationship is written yi = f(i) + εi = β0 + β1 xi1 + ... + βp xip + εi, where εi is a disturbance term or error variable — an unobserved random variable that adds noise to the linear relationship between the dependent variable and predictor function. Stacking In some models (standard linear regression, in particular), the equations for each of the data points i = 1, ..., n are stacked together and written in vector form as y = Xβ + ε, where y = (y1, ..., yn)ᵀ is the vector of outcomes, X is the n-by-(p+1) matrix whose i-th row is xiᵀ, and ε = (ε1, ..., εn)ᵀ is the vector of error terms. The matrix X is known as the design matrix and encodes all known information about the independent variables. The variables εi are random variables, which in standard linear regression are distributed according to a standard normal distribution; they express the influence of any unknown factors on the outcome. This makes it possible to find optimal coefficients through the method of least squares using simple matrix operations. In particular, the optimal coefficients as estimated by least squares can be written as follows: β̂ = (XᵀX)⁻¹Xᵀy. The matrix (XᵀX)⁻¹Xᵀ is known as the Moore–Penrose pseudoinverse of X. The use of the matrix inverse in this formula requires that X is of full rank, i.e. there is not perfect multicollinearity among different explanatory variables (i.e. no explanatory variable can be perfectly predicted from the others). When X is not of full rank, the singular value decomposition can be used to compute the pseudoinverse. 
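As a concrete illustration of the notation above, the following sketch builds the design matrix X (prepending the pseudo-variable xi0 = 1), recovers least-squares coefficients, and evaluates the linear predictor β · xi for every data point. The data are invented for the example; numpy's pinv is used so the same code also covers the rank-deficient case mentioned above, since it computes the pseudoinverse via the singular value decomposition.

```python
import numpy as np

# Toy data: n = 5 data points, p = 2 explanatory variables (values are invented).
x = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0],
              [5.0, 2.5]])
y = np.array([3.1, 3.9, 6.2, 9.0, 9.8])

# Design matrix: prepend the pseudo-variable x_i0 = 1 for the intercept beta_0.
X = np.hstack([np.ones((x.shape[0], 1)), x])

# Least-squares coefficients beta_hat = pinv(X) @ y.
# np.linalg.pinv uses the SVD, so this also works when X is not of full rank.
beta_hat = np.linalg.pinv(X) @ y

# The linear predictor function f(i) = beta . x_i, evaluated for all data points at once.
predictions = X @ beta_hat

print("beta_hat =", np.round(beta_hat, 3))
print("predictions =", np.round(predictions, 3))
```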
Preprocessing of explanatory variables When a fixed set of nonlinear functions are used to transform the value(s) of a data point, these functions are known as basis functions. An example is polynomial regression, which uses a linear predictor function to fit an arbitrary degree polynomial relationship (up to a given order) between two sets of data points (i.e. a single real-valued explanatory variable and a related real-valued dependent variable), by adding multiple explanatory variables corresponding to various powers of the existing explanatory variable. Mathematically, the form looks like this: f(i) = β0 + β1 xi + β2 xi^2 + ... + βm xi^m, writing the single explanatory variable of data point i as xi and the order of the polynomial as m. In this case, for each data point i, a set of explanatory variables is created as follows: xi1 = xi, xi2 = xi^2, ..., xim = xi^m, and then standard linear regression is run. The basis functions in this example would be φ1(x) = x, φ2(x) = x^2, ..., φm(x) = x^m. This example shows that a linear predictor function can actually be much more powerful than it first appears: It only really needs to be linear in the coefficients. All sorts of non-linear functions of the explanatory variables can be fit by the model. There is no particular need for the inputs to basis functions to be univariate or single-dimensional (or their outputs, for that matter, although in such a case, a K-dimensional output value is likely to be treated as K separate scalar-output basis functions). An example of this is radial basis functions (RBFs), which compute some transformed version of the distance to some fixed point c: φ(xi; c) = φ(||xi − c||). An example is the Gaussian RBF, which has the same functional form as the normal distribution: φ(xi; c) = exp(−||xi − c||^2 / (2σ^2)), which drops off rapidly as the distance from c increases. A possible usage of RBFs is to create one for every observed data point. This means that the result of an RBF applied to a new data point will be close to 0 unless the new point is near to the point around which the RBF was applied. That is, the application of the radial basis functions will pick out the nearest point, and its regression coefficient will dominate. The result will be a form of nearest neighbor interpolation, where predictions are made by simply using the prediction of the nearest observed data point, possibly interpolating between multiple nearby data points when they are all similar distances away. This type of nearest neighbor method for prediction is often considered diametrically opposed to the type of prediction used in standard linear regression: but in fact, the transformations that can be applied to the explanatory variables in a linear predictor function are so powerful that even the nearest neighbor method can be implemented as a type of linear regression. It is even possible to fit some functions that appear non-linear in the coefficients by transforming the coefficients into new coefficients that do appear linear: substituting suitably defined new coefficients for combinations of the original ones can yield a function that is linear in the new coefficients. Linear regression and similar techniques could be applied and will often still find the optimal coefficients, but their error estimates and such will be wrong. The explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (e.g. income, age, blood pressure, etc.) and discrete variables (e.g. sex, race, political party, etc.). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), i.e. 
separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have the given value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" would be converted to separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. Note that, for K categories, not all K dummy variables are independent of each other. For example, in the above blood type example, only three of the four dummy variables are independent, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it's really only necessary to encode three of the four possibilities as dummy variables, and in fact if all four possibilities are encoded, the overall model becomes non-identifiable. This causes problems for a number of methods, such as the simple closed-form solution used in linear regression. The solution is either to avoid such cases by eliminating one of the dummy variables, and/or introduce a regularization constraint (which necessitates a more powerful, typically iterative, method for finding the optimal coefficients). See also Linear model Linear regression References Regression analysis Machine learning
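The two preprocessing ideas discussed above (polynomial or RBF basis functions, and drop-one dummy coding) can both be expressed as building a new design matrix and then running ordinary linear regression on it. The sketch below uses invented data and an assumed Gaussian-RBF width sigma; it is meant only to show the mechanics, not any canonical implementation.

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # single real-valued explanatory variable
blood_type = np.array(["A", "B", "AB", "O", "A"])   # a 4-way discrete variable

# Polynomial basis functions up to degree m: columns 1, x, x^2, ..., x^m.
def polynomial_design(x, m):
    return np.vstack([x**k for k in range(m + 1)]).T

# Gaussian radial basis functions centred on each observed data point.
def gaussian_rbf_design(x, centres, sigma=0.5):
    cols = [np.exp(-(x - c)**2 / (2 * sigma**2)) for c in centres]
    return np.column_stack([np.ones_like(x)] + cols)

# Drop-one dummy coding: K - 1 indicator columns, so the design matrix stays full rank.
def dummy_design(values, categories=("A", "B", "AB")):   # "O" is the dropped baseline
    return np.column_stack([np.ones(len(values))] +
                           [(values == c).astype(float) for c in categories])

X_poly = polynomial_design(x, m=3)
X_rbf  = gaussian_rbf_design(x, centres=x)
X_dum  = dummy_design(blood_type)

for name, X in [("polynomial", X_poly), ("rbf", X_rbf), ("dummy", X_dum)]:
    print(name, X.shape)   # each of these feeds into the same least-squares machinery
```

Dropping the "O" column is exactly the identifiability fix described above: with all four indicators present, the columns would sum to the intercept column and the closed-form least-squares solution would fail.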
Linear predictor function
[ "Engineering" ]
1,786
[ "Artificial intelligence engineering", "Machine learning" ]
35,272,703
https://en.wikipedia.org/wiki/Biomanipulation
Biomanipulation is the deliberate alteration of an ecosystem by adding or removing species, especially predators. Aquatic ecosystems Changing the fish population of a body of water, as part of watershed management, can facilitate desirable changes in aquatic ecosystems suffering from eutrophication characterized by domination by phytoplankton, thereby aiding ecosystem restoration; this is an application of restoration ecology. In ponds or lakes, two alternative stable conditions may exist: one with high algae populations, little other plant life, and turbid water; another with low algae populations, a diverse plant population, and clear water. In addition to prevention of excess nutrients such as phosphorus and nitrates, removal of certain fish species adapted to turbid water may facilitate a change from one steady state to the other, an application of dynamical systems theory. Fish species may be removed by means of poison, harvesting, or introduction of predatory species. As a different fish community will result from the process, it will affect recreational and commercial fishermen, whose cooperation or opposition is important. Lake Zwemlust, a hypertrophic pond used as a swimming pool in The Netherlands with an area of 1.5 hectares and an average depth of 1.5 meters, was treated in March 1987. The initial Secchi disk transparency was only 0.3 meters, less than the 1 meter required for swimming pools in The Netherlands. In the first summer Secchi disk transparency increased to at least 2.5 meters, the maximum depth of the lake. The lake was drained and 1,500 kilograms of planktivorous and benthivorous fish such as common bream were removed by seining and electrofishing. The pond was stocked with 1500 northern pike fingerlings and some mature rudd whose offspring served as food for the pike. Willow branches, Nuphar lutea roots, and starts of Chara globularis were added as vegetation and shelter. Expenses were met by the local water authority, which was compensated by increased patronage by swimmers. In the summer of the second year, 1988, there was considerable plant growth and, possibly due to lack of predation by carp or minnows, an explosion in the number of snails, including Radix peregra var. ovata, a host of Trichobilharzia ocellata, the cause of schistosome dermatitis (swimmer's itch). In addition to grazing by zooplankton, the lush growth of macrophytes removed sufficient nutrients from the water to prevent algal bloom by phytoplankton. Notes External links and further reading Ecology terminology Environmental engineering
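The reference to alternative stable states and dynamical systems theory can be made concrete with a toy model. The sketch below is a generic, hypothetical bistable relation between nutrient loading, losses, and internal recycling, in the spirit of shallow-lake models; the functional form and parameter values are assumptions chosen for illustration and are not taken from the Lake Zwemlust study.

```python
# Toy shallow-lake model with alternative stable states (illustrative only):
#   dx/dt = a - b*x + x**q / (1 + x**q)
# x is a turbidity/nutrient index, a is external nutrient loading, b the loss rate,
# and the sigmoidal term stands in for internal recycling (e.g. benthivorous fish
# stirring up sediment). Parameter values are assumptions chosen to give bistability.
a, b, q = 0.5, 1.0, 8.0

def dxdt(x):
    return a - b * x + x**q / (1.0 + x**q)

def simulate(x0, dt=0.01, steps=20000):
    x = x0
    for _ in range(steps):
        x += dt * dxdt(x)   # simple forward-Euler integration
    return x

clear_start  = simulate(0.1)   # starts clear, settles at the low-turbidity state
turbid_start = simulate(2.0)   # starts turbid, settles at the high-turbidity state
print(f"same loading a={a}: equilibria {clear_start:.2f} vs {turbid_start:.2f}")
```

In this picture, biomanipulation corresponds to pushing the system's state across the unstable middle equilibrium (for example by removing benthivorous fish), after which it settles into the clear-water state on its own, provided nutrient loading stays below the point where the clear state disappears.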
Biomanipulation
[ "Chemistry", "Engineering", "Biology" ]
520
[ "Chemical engineering", "Civil engineering", "Ecology terminology", "Environmental engineering" ]
28,276,396
https://en.wikipedia.org/wiki/Hoyle%E2%80%93Narlikar%20theory%20of%20gravity
The Hoyle–Narlikar theory of gravity is a Machian and conformal theory of gravity proposed by Fred Hoyle and Jayant Narlikar that originally fits into the quasi steady state model of the universe. Description The gravitational constant G is arbitrary and is determined by the mean density of matter in the universe. The theory was inspired by the Wheeler–Feynman absorber theory for electrodynamics. When Richard Feynman, as a graduate student, lectured on the Wheeler–Feynman absorber theory in the weekly physics seminar at Princeton, Albert Einstein was in the audience and stated at question time that he was trying to achieve the same thing for gravity. Incompatibility Stephen Hawking showed in 1965 that the theory is incompatible with an expanding universe, because the Wheeler–Feynman advanced solution would diverge. However, at that time the accelerating expansion of the universe was not known, which resolves the divergence issue because of the cosmic event horizon. Comparison with Einstein's General Relativity The Hoyle–Narlikar theory reduces to Einstein's general relativity in the limit of a smooth fluid model of particle distribution constant in time and space. Hoyle–Narlikar's theory is consistent with some cosmological tests. Hypothesis Unlike the standard cosmological model, the quasi steady state hypothesis implies the universe is eternal. According to Narlikar, multiple mini bangs would occur at the center of quasars, with various creation fields (or C-field) continuously generating matter out of empty space due to local concentration of negative energy that would also prevent violation of conservation laws, in order to keep the mass density constant as the universe expands. The low-temperature cosmic background radiation would not originate from the Big Bang but from metallic dust made from supernovae, radiating the energy of stars. Challenge However, the quasi steady-state hypothesis is challenged by observation as it does not fit into WMAP data. See also Mach's principle Conformal gravity Wheeler–Feynman absorber theory Brans–Dicke theory Non-standard cosmology Notes Bibliography General relativity Physical cosmology Theories of gravity
Hoyle–Narlikar theory of gravity
[ "Physics", "Astronomy" ]
442
[ "Astronomical sub-disciplines", "Theoretical physics", "Theories of gravity", "Astrophysics", "General relativity", "Theory of relativity", "Physical cosmology" ]
28,277,010
https://en.wikipedia.org/wiki/Poincar%C3%A9%20complex
In mathematics, and especially topology, a Poincaré complex (named after the mathematician Henri Poincaré) is an abstraction of the singular chain complex of a closed, orientable manifold. The singular homology and cohomology groups of a closed, orientable manifold are related by Poincaré duality. Poincaré duality is an isomorphism between homology and cohomology groups. A chain complex is called a Poincaré complex if its homology groups and cohomology groups have the abstract properties of Poincaré duality. A Poincaré space is a topological space whose singular chain complex is a Poincaré complex. These are used in surgery theory to analyze manifolds algebraically. Definition Let C be a chain complex of abelian groups, and assume that the homology groups of C are finitely generated. Assume that there exists a map Δ: C → C ⊗ C, called a chain-diagonal, compatible with the augmentation. Here the augmentation map ε: C0 → Z is the ring homomorphism defined as follows: if a = Σ ni σi, then ε(a) = Σ ni. Using the diagonal as defined above, we are able to form pairings via the cap product. A chain complex C is called geometric if a chain-homotopy exists between Δ and τΔ, where τ is the transposition/flip given by τ(a ⊗ b) = (−1)^(deg a · deg b) (b ⊗ a). A geometric chain complex is called an algebraic Poincaré complex, of dimension n, if there exists an infinite-ordered element of the n-dimensional homology group, say μ, such that the maps given by taking the cap product with μ, from the q-th cohomology group to the (n − q)-th homology group, are group isomorphisms for all q. These isomorphisms are the isomorphisms of Poincaré duality. Example The singular chain complex of an orientable, closed n-dimensional manifold M is an example of a Poincaré complex, where the duality isomorphisms are given by capping with the fundamental class [M]. See also Poincaré space References External links Classifying Poincaré complexes via fundamental triples on the Manifold Atlas Algebraic topology Homology theory Duality theories
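For readers who want the duality maps written out, the classical statement being abstracted is the following. This is a sketch in standard notation; the symbols used by the source article for its chain complex and fundamental class are not preserved in this extraction.

```latex
% Poincaré duality for a closed, oriented n-manifold M with fundamental class [M]:
% capping with [M] is an isomorphism in every degree q.
\[
  [M] \frown - \; : \; H^{q}(M;\mathbb{Z}) \;\xrightarrow{\;\cong\;}\; H_{n-q}(M;\mathbb{Z}),
  \qquad 0 \le q \le n .
\]
% An algebraic Poincaré complex asks for a class \mu \in H_n(C) playing the role of [M],
% with \mu \frown - : H^{q}(C) \to H_{n-q}(C) an isomorphism for all q.
```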
Poincaré complex
[ "Mathematics" ]
409
[ "Mathematical structures", "Algebraic topology", "Fields of abstract algebra", "Topology", "Category theory", "Duality theories", "Geometry" ]
28,281,258
https://en.wikipedia.org/wiki/Eschenmoser%20sulfide%20contraction
The Eschenmoser sulfide contraction is an organic reaction first described by Albert Eschenmoser for the synthesis of 1,3-dicarbonyl compounds from a thioester. The method requires a base and a tertiary phosphine. The method is of some relevance to organic chemistry and has been notably applied in the vitamin B12 total synthesis. A base abstracts the labile hydrogen atom in the thioester; a sulfide anion is formed through an episulfide intermediate, which is removed by the phosphine. Scope The Eschenmoser sulfide contraction method has been employed in a number of total synthesis efforts, such as those of fuligocandin A and B, cocaine, diplodialide A, and isoretronecanol. An example of general synthetic utility is the synthesis of novel carbapenems. References Organic reactions Name reactions
Eschenmoser sulfide contraction
[ "Chemistry" ]
178
[ "Name reactions", "Organic reactions" ]
49,416,240
https://en.wikipedia.org/wiki/Qadad
Qadad (qaḍāḍ) or qudad is a waterproof plaster surface, made of a lime plaster treated with slaked lime and oils and fats. The technique is over a thousand years old, with the remains of this early plaster still seen on the standing sluices of the ancient Marib Dam. Volcanic ash, pumice, scoria (shāsh in the Yemeni dialect), or other crushed volcanic aggregate are often used as pozzolanic agents, reminiscent of ancient Roman lime plaster which incorporated pozzolanic volcanic ash. Due to the slowness of some of the chemical reactions, qadad mortar can take over a hundred days to prepare, from quarrying of raw materials to the beginning of application to the building. It can also take over a year to set fully. In 2004, a documentary film Qudad, Re-inventing a Tradition was made by the filmmaker Caterina Borelli. It documents the restoration of the Amiriya Complex, which was awarded the Aga Khan Award for Architecture in 2007. Traditional preparation After being collected, blocks of limestone were fired in a kiln for 4 days, after which the fire and baked lime were extinguished with water, and allowed to cool for 2-3 days more. The baked lime (Arabic: nūreh) was then crushed and mixed with soft, black volcanic cinders known as scoria (Arabic: shāsh), a pumice having the consistency of gravel. The scoria and lime were pounded with a stone to break them down into finer particles and thoroughly mixed together without water (the two ingredients being mixed together in a ratio of two parts of aggregate to one part of lime), and then allowed to rest 3-4 days until settled. Afterwards, the two elements were mixed together with water (usually 1 volume of water to 3 volumes of lime/aggregate), during which time the batch is continuously agitated in a tedious process known as slaking and which required many long hours of manual labour (as much as 4-5 weeks), before a finer lime water solution was added thereto for 1-2 months so as to convert it to a paste. The more that it was pounded with a long shovel or wooden paddle, the more the qadad became adhesive. Traditional application With the now ready mixture of lime and volcanic cinders, they would apply three layers of qadad-plaster to the walls of cisterns to make them impermeable; the first layer having the largest particles of volcanic cinders (scoria) and the least amount of lime was applied to rough stone, the plaster being added to a thickness of about two inches. They took a sharp-edged stone and, for several days, pounded and rubbed the first layer of qadad firmly onto the wall, all the while sprinkling it with lime-water to keep it wet. The second layer was applied after fully working the first layer by beating. The first process was repeated, this time the wall being plastered with a mixture of qadad containing smaller particles of volcanic cinders and more lime. A sharp-edged stone was again used to pound the qadad firmly onto the wall, all the while sprinkling it with lime-water to keep it wet. Finally, the third layer was applied containing the smallest particles of volcanic cinders and the largest quantity of lime and worked with a sharp-edged stone (one part aggregate to two parts lime, and pounded to a fine paste), and lime-water spattered on the wall to maintain its wetness. After the final application, the wall was treated with a very finely-ground consistency of qadad which was allowed to dry, and when dried, an application of animal fat (suet) was then smeared on the wall for smoothing and burnishing. 
The end result, after repeated beating, is a wall that is as hard as smooth marble. According to archaeologist Selma Al-Radi, qadad can only be used as a plaster on buildings constructed of stone and baked brick, but it will not adhere to mudbrick, cement blocks or concrete. In Yemen it was traditionally made with two basic ingredients, baked lime and volcanic scoria; other countries have traditionally made use of fine riverbed sand or pebbles instead of scoria, mixed together with lime for use as a common mortar or as an impervious wall plaster. Usage In Sana'a of the early 20th century, qadad-plaster was used to line pools, reservoirs, drainage pipes, and cesspits, and to make them impermeable. After applying the qadad, the coating was burnished with a stone. Often its use extended to the main kitchen room and to gutters and sinks, wherever water was likely to be used extensively (see also tadelakt). The walls of store-rooms where grain was kept, which needed to be impervious to water, were also frequently painted over with qadad, which gave the rooms an appearance of being painted with oil paint. Carl Rathjens, who visited Yemen in the first half of the 20th century, mentions seeing in Sana'a "the houses of well-to-do people" where the entrance halls were often painted with qadad up to a certain height. The interior walls of public baths were sometimes brick, sometimes stone. If brick, they were protected with a thick layer of hard gypsum plaster which was then oil-painted. In Islamic architecture, different consistencies of qadad were made for different usages: domes, flat ceilings, vertical walls and decorations in the geometric interlace. See also Limepit (old technique used in calcining limestone) Lime plaster Plasterwork Pozzolan Tadelakt, a similar waterproof lime-soap plaster Sarooj, a similar water-resistant plaster References External links Caterina Borelli, 2012, A Documentary on the renovation of the ‘Amiryia Madrasa and Mosque in Rada, Yemen, using the ancient waterproofing technique with qudad. Arabic architecture Building materials Architecture Islamic architectural elements Arab inventions Moisture protection Plastering Alchemical substances Wallcoverings
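The mixing proportions described above (two parts aggregate to one part lime for the base coats, one part aggregate to two parts lime for the final coat, and roughly one volume of water to three volumes of dry mix) lend themselves to a small batch calculation. The sketch below is illustrative only; the batch volume is an assumed figure and treating the ratios as exact is a simplification, not a traditional recipe.

```python
# Rough batch calculation for qadad plaster coats, using the volume ratios
# described in the article. All quantities are in arbitrary volume units.

def batch(total_dry_volume, aggregate_parts, lime_parts, water_per_dry=1/3):
    """Split a dry-mix volume into scoria aggregate and baked lime, plus slaking water."""
    parts = aggregate_parts + lime_parts
    aggregate = total_dry_volume * aggregate_parts / parts
    lime = total_dry_volume * lime_parts / parts
    water = total_dry_volume * water_per_dry   # ~1 volume of water per 3 volumes of dry mix
    return aggregate, lime, water

# Base coats: two parts aggregate to one part lime.
print("base coat :", batch(30.0, aggregate_parts=2, lime_parts=1))
# Final coat: one part aggregate to two parts lime (finer, lime-rich layer).
print("final coat:", batch(30.0, aggregate_parts=1, lime_parts=2))
```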
Qadad
[ "Physics", "Chemistry", "Engineering" ]
1,269
[ "Building engineering", "Coatings", "Alchemical substances", "Architecture", "Construction", "Materials", "Plastering", "Matter", "Building materials" ]
49,418,115
https://en.wikipedia.org/wiki/Large%20deformation%20diffeomorphic%20metric%20mapping
Large deformation diffeomorphic metric mapping (LDDMM) is a specific suite of algorithms used for diffeomorphic mapping and manipulating dense imagery based on diffeomorphic metric mapping within the academic discipline of computational anatomy, to be distinguished from its precursor based on diffeomorphic mapping. The distinction between the two is that diffeomorphic metric maps satisfy the property that the length associated to their flow away from the identity induces a metric on the group of diffeomorphisms, which in turn induces a metric on the orbit of shapes and forms within the field of computational anatomy. The study of shapes and forms with the metric of diffeomorphic metric mapping is called diffeomorphometry. A diffeomorphic mapping system is a system designed to map, manipulate, and transfer information which is stored in many types of spatially distributed medical imagery. Diffeomorphic mapping is the underlying technology for mapping and analyzing information measured in human anatomical coordinate systems which have been measured via Medical imaging. Diffeomorphic mapping is a broad term that actually refers to a number of different algorithms, processes, and methods. It is attached to many operations and has many applications for analysis and visualization. Diffeomorphic mapping can be used to relate various sources of information which are indexed as a function of spatial position as the key index variable. Diffeomorphisms are by their Latin root structure preserving transformations, which are in turn differentiable and therefore smooth, allowing for the calculation of metric based quantities such as arc length and surface areas. Spatial location and extents in human anatomical coordinate systems can be recorded via a variety of Medical imaging modalities, generally termed multi-modal medical imagery, providing either scalar and or vector quantities at each spatial location. Examples are scalar T1 or T2 magnetic resonance imagery, or as 3x3 diffusion tensor matrices diffusion MRI and diffusion-weighted imaging, to scalar densities associated to computed tomography (CT), or functional imagery such as temporal data of functional magnetic resonance imaging and scalar densities such as Positron emission tomography (PET). Computational anatomy is a subdiscipline within the broader field of neuroinformatics within bioinformatics and medical imaging. The first algorithm for dense image mapping via diffeomorphic metric mapping was Beg's LDDMM for volumes and Joshi's landmark matching for point sets with correspondence, with LDDMM algorithms now available for computing diffeomorphic metric maps between non-corresponding landmarks and landmark matching intrinsic to spherical manifolds, curves, currents and surfaces, tensors, varifolds, and time-series. The term LDDMM was first established as part of the National Institutes of Health supported Biomedical Informatics Research Network. In a more general sense, diffeomorphic mapping is any solution that registers or builds correspondences between dense coordinate systems in medical imaging by ensuring the solutions are diffeomorphic. There are now many codes organized around diffeomorphic registration including ANTS, DARTEL, DEMONS, StationaryLDDMM, FastLDDMM, as examples of actively used computational codes for constructing correspondences between coordinate systems based on dense images. 
The distinction between diffeomorphic metric mapping forming the basis for LDDMM and the earliest methods of diffeomorphic mapping is the introduction of a Hamilton principle of least-action in which large deformations are selected of shortest length corresponding to geodesic flows. This important distinction arises from the original formulation of the Riemannian metric corresponding to the right-invariance. The lengths of these geodesics give the metric in the metric space structure of human anatomy. Non-geodesic formulations of diffeomorphic mapping in general does not correspond to any metric formulation. History of development Diffeomorphic mapping 3-dimensional information across coordinate systems is central to high-resolution Medical imaging and the area of Neuroinformatics within the newly emerging field of bioinformatics. Diffeomorphic mapping 3-dimensional coordinate systems as measured via high resolution dense imagery has a long history in 3-D beginning with Computed Axial Tomography (CAT scanning) in the early 80's by the University of Pennsylvania group led by Ruzena Bajcsy, and subsequently the Ulf Grenander school at Brown University with the HAND experiments. In the 90's there were several solutions for image registration which were associated to linearizations of small deformation and non-linear elasticity. The central focus of the sub-field of Computational anatomy (CA) within medical imaging is mapping information across anatomical coordinate systems at the 1 millimeter morphome scale. In CA mapping of dense information measured within Magnetic resonance image (MRI) based coordinate systems such as in the brain has been solved via inexact matching of 3D MR images one onto the other. The earliest introduction of the use of diffeomorphic mapping via large deformation flows of diffeomorphisms for transformation of coordinate systems in image analysis and medical imaging was by Christensen, Rabbitt and Miller and Trouve. The introduction of flows, which are akin to the equations of motion used in fluid dynamics, exploit the notion that dense coordinates in image analysis follow the Lagrangian and Eulerian equations of motion. This model becomes more appropriate for cross-sectional studies in which brains and or hearts are not necessarily deformations of one to the other. Methods based on linear or non-linear elasticity energetics which grows with distance from the identity mapping of the template, is not appropriate for cross-sectional study. Rather, in models based on Lagrangian and Eulerian flows of diffeomorphisms, the constraint is associated to topological properties, such as open sets being preserved, coordinates not crossing implying uniqueness and existence of the inverse mapping, and connected sets remaining connected. The use of diffeomorphic methods grew quickly to dominate the field of mapping methods post Christensen's original paper, with fast and symmetric methods becoming available. Such methods are powerful in that they introduce notions of regularity of the solutions so that they can be differentiated and local inverses can be calculated. The disadvantages of these methods is that there was no associated global least-action property which could score the flows of minimum energy. This contrasts the geodesic motions which are central to the study of Rigid body kinematics and the many problems solved in Physics via Hamilton's principle of least action. 
In 1998, Dupuis, Grenander and Miller established the conditions for guaranteeing the existence of solutions for dense image matching in the space of flows of diffeomorphisms. These conditions require an action penalizing kinetic energy measured via the Sobolev norm on spatial derivatives of the flow of vector fields. The large deformation diffeomorphic metric mapping (LDDMM) code that Faisal Beg derived and implemented for his PhD at Johns Hopkins University developed the earliest algorithmic code which solved for flows with fixed points satisfying the necessary conditions for the dense image matching problem subject to least-action. Computational anatomy now has many existing codes organized around diffeomorphic registration including ANTS, DARTEL, DEMONS, LDDMM, StationaryLDDMM as examples of actively used computational codes for constructing correspondences between coordinate systems based on dense images. These large deformation methods have been extended to landmarks without registration via measure matching, curves, surfaces, dense vector and tensor imagery, and varifolds removing orientation. The diffeomorphism orbit model in computational anatomy Deformable shape in computational anatomy (CA)is studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinates in Medical Imaging. In this setting, three dimensional medical images are modelled as a random deformation of some exemplar, termed the template , with the set of observed images element in the random orbit model of CA for images . The template is mapped onto the target by defining a variational problem in which the template is transformed via the diffeomorphism used as a change of coordinate to minimize a squared-error matching condition between the transformed template and the target. The diffeomorphisms are generated via smooth flows , with , satisfying the Lagrangian and Eulerian specification of the flow field associated to the ordinary differential equation, with the Eulerian vector fields determining the flow. The vector fields are guaranteed to be 1-time continuously differentiable by modelling them to be in a smooth Hilbert space supporting 1-continuous derivative. The inverse is defined by the Eulerian vector-field with flow given by To ensure smooth flows of diffeomorphisms with inverse, the vector fields with components in must be at least 1-time continuously differentiable in space which are modelled as elements of the Hilbert space using the Sobolev embedding theorems so that each element has 3-times square-integrable weak-derivatives. Thus embeds smoothly in 1-time continuously differentiable functions. The diffeomorphism group are flows with vector fields absolutely integrable in Sobolev norm The variational problem of dense image matching and sparse landmark matching LDDMM algorithm for dense image matching In CA the space of vector fields are modelled as a reproducing Kernel Hilbert space (RKHS) defined by a 1-1, differential operator determining the norm where the integral is calculated by integration by parts when is a generalized function in the dual space . The differential operator is selected so that the Green's kernel, the inverse of the operator, is continuously differentiable in each variable implying that the vector fields support 1-continuous derivative; see for the necessary conditions on the norm for existence of solutions. 
The original large deformation diffeomorphic metric mapping (LDDMM) algorithms of Beg, Miller, Trouve, Younes was derived taking variations with respect to the vector field parameterization of the group, since are in a vector spaces. Beg solved the dense image matching minimizing the action integral of kinetic energy of diffeomorphic flow while minimizing endpoint matching term according to Beg's Iterative Algorithm for Dense Image Matching Update until convergence, each iteration, with : This implies that the fixed point at satisfies , which in turn implies it satisfies the Conservation equation given by the according to LDDMM registered landmark matching The landmark matching problem has a pointwise correspondence defining the endpoint condition with geodesics given by the following minimum: ; Iterative Algorithm for Landmark Matching Joshi originally defined the registered landmark matching probleme,. Update until convergence, each iteration, with : This implies that the fixed point satisfy with . Variations for LDDMM dense image and landmark matching The Calculus of variations was used in Beg[49] to derive the iterative algorithm as a solution which when it converges satisfies the necessary maximizer conditions given by the necessary conditions for a first order variation requiring the variation of the endpoint with respect to a first order variation of the vector field. The directional derivative calculates the Gateaux derivative as calculated in Beg's original paper[49] and. LDDMM Diffusion Tensor Image Matching LDDMM matching based on the principal eigenvector of the diffusion tensor matrix takes the image as a unit vector field defined by the first eigenvector. The group action becomes where that denotes image squared-error norm. LDDMM matching based on the entire tensor matrix has group action transformed eigenvectors . Dense matching problem onto principle eigenvector of DTI The variational problem matching onto vector image with endpoint becomes Dense matching problem onto DTI MATRIX The variational problem matching onto: with endpoint with Frobenius norm, giving variational problem LDDMM ODF High angular resolution diffusion imaging (HARDI) addresses the well-known limitation of DTI, that is, DTI can only reveal one dominant fiber orientation at each location. HARDI measures diffusion along uniformly distributed directions on the sphere and can characterize more complex fiber geometries by reconstructing an orientation distribution function (ODF) that characterizes the angular profile of the diffusion probability density function of water molecules. The ODF is a function defined on a unit sphere, . Denote the square-root ODF () as , where is non-negative to ensure uniqueness and . The metric defines the distance between two functions as where is the normal dot product between points in the sphere under the metric. The template and target are denoted , , indexed across the unit sphere and the image domain, with the target indexed similarly. Define the variational problem assuming that two ODF volumes can be generated from one to another via flows of diffeomorphisms , which are solutions of ordinary differential equations . The group action of the diffeomorphism on the template is given according to , where is the Jacobian of the affined transformed ODF and is defined as The LDDMM variational problem is defined as . 
Hamiltonian LDDMM for dense image matching Beg solved the early LDDMM algorithms by solving the variational matching taking variations with respect to the vector fields. Another solution by Vialard, reparameterizes the optimization problem in terms of the state , for image , with the dynamics equation controlling the state by the control given in terms of the advection equation according to . The endpoint matching term gives the variational problem: Software for diffeomorphic mapping Software suites containing a variety of diffeomorphic mapping algorithms include the following: Deformetrica ANTS DARTEL Voxel-based morphometry(VBM) DEMONS LDDMM StationaryLDDMM Cloud software MRICloud See also Riemannian metric and Lie-bracket in computational anatomy Bayesian model of computational anatomy References Further reading Computational anatomy Geometry Fluid mechanics Neuroscience Neural engineering Biomedical engineering
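As a very small illustration of the geodesic point of view described above, the sketch below performs 2-D landmark matching by geodesic shooting: initial momenta are optimized so that points flowing under the kernel-induced Hamiltonian dynamics land near target landmarks. This is a toy, not the dense-image LDDMM algorithm of Beg nor the original landmark method of Joshi; the Gaussian kernel, its width, the Euler integrator, the invented landmark coordinates, and the finite-difference optimizer are all simplifying assumptions.

```python
import numpy as np

# Toy 2-D landmark matching by geodesic shooting with a Gaussian kernel.
sigma = 0.5                                              # assumed kernel width
q0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])      # template landmarks
target = np.array([[0.2, 0.1], [1.1, 0.3], [-0.1, 1.2]]) # target landmarks

def kernel(qa, qb):
    d2 = np.sum((qa[:, None, :] - qb[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def shoot(p0, steps=20, dt=1.0 / 20):
    """Integrate the Hamiltonian equations q' = K(q,q) p, p' = -dH/dq (forward Euler)."""
    q, p = q0.copy(), p0.copy()
    for _ in range(steps):
        K = kernel(q, q)                          # (n, n)
        dq = K @ p                                # q_i' = sum_j K_ij p_j
        pp = p @ p.T                              # (n, n) matrix of p_i . p_j
        diff = q[:, None, :] - q[None, :, :]      # (n, n, 2), entries q_i - q_j
        # p_i' = (1/sigma^2) * sum_j (p_i . p_j) K_ij (q_i - q_j) for the Gaussian kernel
        dp = np.einsum('ij,ij,ijk->ik', pp, K, diff) / sigma**2
        q, p = q + dt * dq, p + dt * dp
    return q

def cost(p0_flat):
    q1 = shoot(p0_flat.reshape(q0.shape))
    return np.sum((q1 - target)**2)               # endpoint matching term

# Crude finite-difference gradient descent on the initial momenta (for clarity, not speed).
p0 = np.zeros(q0.size)
eps, lr = 1e-5, 0.25
for _ in range(300):
    grad = np.array([(cost(p0 + eps * e) - cost(p0 - eps * e)) / (2 * eps)
                     for e in np.eye(p0.size)])
    p0 -= lr * grad

print("endpoint error:", cost(p0))
print("matched landmarks:\n", shoot(p0.reshape(q0.shape)).round(3))
```

With an adjoint computation of the gradient and a better integrator in place of the finite differences and Euler steps used here, this becomes the usual geodesic-shooting formulation for landmarks; the dense-image algorithms discussed above optimize over a full time-dependent velocity field rather than a handful of landmark momenta.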
Large deformation diffeomorphic metric mapping
[ "Mathematics", "Engineering", "Biology" ]
2,790
[ "Biological engineering", "Neuroscience", "Biomedical engineering", "Civil engineering", "Geometry", "Fluid mechanics", "Medical technology" ]