C15H17N The molecular formula C15H17N may refer to:
https://en.wikipedia.org/wiki?curid=30783771
Plasma Science Society of India was founded in 1979 at the Institute for Plasma Research, Ahmedabad, India, for the benefit of the fusion community working on plasma. It fosters the pursuit of knowledge in fusion research, in the field of both theoretical and experimental research. The devices are SST-1, SINP-Tokamak and Aditya Tokamak. The society has over 950 life members, along with a number of annual members.
https://en.wikipedia.org/wiki?curid=30788646
1,1'-Azobis-1,2,3-triazole 1,1′-Azobis-1,2,3-triazole is a moderately explosive but comparatively stable chemical compound which contains a long continuous chain of nitrogen atoms, with an unbroken chain of eight nitrogen atoms cyclised into two 1,2,3-triazole rings. It is stable up to 194 °C. The compound exhibits cis–trans isomerism at the central azo group: the "trans" isomer is more stable and is yellow, while the "cis" isomer is less stable and is blue. The two rings are aromatic and form a conjugated system with the azo linkage. This chromophore allows the "trans" compound to be isomerised to the "cis" form when treated with an appropriate wavelength of ultraviolet light. In 2011, Klapötke and Piercey prepared azobis(tetrazole), which has a ten-nitrogen chain. The record was later taken by an N11-chain compound synthesized by a group of Chinese researchers. A branched N11 system has also been reported as part of an unstable but highly nitrogen-rich azidotetrazole derivative with formula C2N14.
https://en.wikipedia.org/wiki?curid=30790895
Blake number The Blake number in fluid mechanics is a nondimensional number showing the ratio of inertial force to viscous force. It is used in momentum transfer in general, and in particular for the flow of a fluid through beds of solids. It is a generalisation of the Reynolds number for flow through porous media. Expressed mathematically, the Blake number is

Bl = (u ρ) / (μ (1 − ε) D),

where u is the flow velocity, ρ the fluid density, μ the dynamic viscosity, ε the void fraction of the bed, and D a characteristic length of the packing.
https://en.wikipedia.org/wiki?curid=30790943
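As a quick numerical illustration of the definition above (a minimal sketch in Python; the function name and the sample values are illustrative assumptions, not taken from the article):

def blake_number(u, rho, mu, eps, D):
    # Bl = (u * rho) / (mu * (1 - eps) * D), as defined above:
    # u = flow velocity, rho = fluid density, mu = dynamic viscosity,
    # eps = void fraction of the bed, D = characteristic length.
    return (u * rho) / (mu * (1.0 - eps) * D)

# Illustrative values only: slow water flow through a packed bed.
print(blake_number(u=0.01, rho=1000.0, mu=1e-3, eps=0.4, D=2e-3))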
Michael P. Barnett Michael Peter Barnett (24 March 1929 – 13 March 2012) was a British theoretical chemist and computer scientist. He developed mathematical and computer techniques for quantum chemical problems, and some of the earliest software for several other kinds of computer application. After his early days in London, Essex and Lancashire, he went to King's College, London, in 1945, the Royal Radar Establishment in Malvern in 1953, IBM UK in 1955, the University of Wisconsin Department of Chemistry in 1957, and the MIT Solid State and Molecular Theory Group in 1958. At MIT he was an Associate Professor of Physics and Director of the Cooperative Computing Laboratory. He returned to England, to the Institute of Computer Science of the University of London, in 1964, and then went back to the United States the following year. He worked in industry, and taught at Columbia University 1975–77 and the City University of New York 1977–96, retiring as an Emeritus Professor. After retirement he focused on symbolic calculation in quantum chemistry and nuclear magnetic resonance. Barnett spent most of the World War II years near Fleetwood in Lancashire. He attended Baines' Grammar School in Poulton-le-Fylde, then went to King's College, London, in 1945, where he received a BSc in Chemistry in 1948 and a PhD in 1952 for work in the Theoretical Physics Department with Charles Coulson, work that he continued on a one-year post-doctoral fellowship
https://en.wikipedia.org/wiki?curid=30796781
Michael P. Barnett His assigned project was to determine if electrostatic forces could account for the energy needed to make two parts of an ethane molecule rotate around the bond that joins them. This work required the evaluation of certain mathematical objects – molecular integrals over Slater orbitals. Barnett extended some earlier work by Charles Coulson by discovering some recurrence formulas, which are part of a method of analysis and computation frequently referred to as the Barnett–Coulson expansion. Molecular integrals remain a significant problem in quantum chemistry and continued to be one of Barnett's main interests. Two years after Barnett started this work, he was invited to be one of the twenty-five participants in a conference that was organised by Robert Mulliken, sponsored by the National Academy of Sciences and known, from its venue, as the Shelter Island Conference on Quantum Mechanics in Valence Theory. Barnett's attendance was enabled by the British Rayon Research Association, which supported his post-graduate work. At the Royal Radar Establishment, Barnett held a Senior Government Fellowship. He worked on aspects of theoretical solid state physics, which included the properties of organic semiconductors. As part of his work at IBM UK, he directed an IBM model 650 computer centre
https://en.wikipedia.org/wiki?curid=30796781
Michael P. Barnett He directed and participated in numerous projects that included (1) calculating DNA structures from crystallographic data, and (2) simulations to plan the location and operation of dams and reservoirs on the River Nile, working with Humphry Morrice, the hydrological advisor to the Government of the Sudan, and his predecessor, Nimmo Allen. In 1957, Barnett accepted an invitation from Joseph Hirschfelder, in the Chemistry Department of the University of Wisconsin at Madison, to work on mathematical theories of combustion and detonation. In 1958, John Clarke Slater invited Barnett to join his Solid State and Molecular Theory Group. He was made an Associate Professor of Physics in 1960 and, in 1962, set up an IBM 709 installation, the Cooperative Computing Laboratory (CCL). This supported heavy computations by several groups at MIT. The SSMTG used much of the time for molecular and solid state research, attracting many post-doctoral workers from the UK and Canada. The calculations of quantum chemistry involve approximate solutions of the Schrödinger equation. Many methods for computing these require molecular integrals that are defined for systems of 2, 3 and 4 atoms, respectively. The 4-atom (or 4-centre) integrals are by far the most difficult. By extending the methods of his PhD papers, Barnett developed a detailed methodology for evaluating all of these integrals. These were coded in FORTRAN, in software that was available to the IBM mainframe community through the SHARE organisation
https://en.wikipedia.org/wiki?curid=30796781
Michael P. Barnett Members of the SSMTG who developed and used these programs included Donald Ellis, Russell Pitzer and Donald Merrifield. In 1960, Barnett started to extend a technique he had learned from Frank Boys to program a computer to construct coded mathematical formulas. He needed a way to typeset these. A Photon phototypesetting machine provided an immediate solution. Barnett developed software to typeset computer output, and applied this to documents containing mathematical formulas and to a wide range of other typesetting problems. He produced books for the MIT Libraries and, with Imre Izsák, for the Smithsonian Astrophysical Observatory. The work of his team and the parallel work of other groups through 1964 is described in his monograph. Barnett also began to develop his ideas on cognitive modelling, as a member of Frank Schmitt's seminar on biological memory. He wrote on river simulation as a member of the Harvard Water Resources seminar. He, John Iliffe, Robert Futrelle, Paul Fehder, George Coulouris and other members of the CCL worked on parsing, text processing (the precursor of word processing), programming language constructs, scientific visualisation, and further topics that melded into the computer science of later years
https://en.wikipedia.org/wiki?curid=30796781
Michael P. Barnett In 1963, Barnett accepted an appointment as Reader in Information Processing at the Institute of Computer Science in the University of London, and, while he was still at MIT, the Department of Scientific and Industrial Research (DSIR) awarded him a grant, to be taken up in London, to continue his work on computer typesetting, which was publicised by the Director, Richard A. Buckingham. His return received further publicity as a "reverse brain drain". He worked extensively with printing trade union officials and the staff of training colleges, to provide understanding of the new methods and their potential (pages 208–218 of his book). His concern with social aspects of technological innovation is noted in a detailed book review. He served on the Information Committee of the DSIR. Asked about university research in England, in a BBC interview on his arrival in 1964, he said "the trouble was deeper than money … Frustration is caused by concentration of power in the hands of a few." His concern about entrepreneurial activity in academe deepened (Section 10.6 of his book). After a year at the Institute of Computer Science, Barnett went back to the US. He joined the newly formed Graphic Systems Division of RCA, to create software for commercial computer typesetting. RCA acquired the US rights to the Digiset machine of Rudolf Hell and marketed an adaptation as the Videocomp. About 50 were sold. Barnett designed the algorithmic markup language PAGE-1 to express complicated formats in full page composition
https://en.wikipedia.org/wiki?curid=30796781
Michael P. Barnett This was used for a wide range of typeset products that included, over the years, the "Social Sciences Index" of the H. W. Wilson Company and several other publications excerpted in a later review paper. The application to database publishing led Barnett to devise and implement a programming language, which he called SNAP, to express file handling operations as sequences of grammatical English sentences. In 1969, Barnett joined the H. W. Wilson Company, a publisher of bibliographic tools for libraries, to automate the production of these. He designed and introduced the system that was used to produce the "Social Sciences Index" for about 10 years. He had also started to teach courses on library automation at the Columbia School of Library Service. He joined the Columbia faculty full-time in 1975. In 1977, Barnett moved to the Department of Computer and Information Science at Brooklyn College of the City University of New York, retiring as Professor Emeritus in 1996. Whilst at CUNY, he directed a major NSF-funded project to develop computer-generated printed matter for undergraduate teaching. He wrote software that incorporated pictures in documents that were typeset using PAGE-1. He wrote several books with his three teenage children, Gabrielle, Simon and Graham, aimed at the home market. These dealt with the production of computer graphics on early personal computers, including the Commodore 64, the Apple II and the IBM PC, and the use of elementary algorithms
https://en.wikipedia.org/wiki?curid=30796781
Michael P. Barnett In 1989, Barnett started to spend part of his time as a Visiting Scientist at the John von Neumann National Supercomputer Center, located on the outskirts of Princeton and run by a consortium of universities. He restarted work on molecular integrals, using the power of the supercomputer to go beyond what had been possible in the 1960s. In 1997, he assumed emeritus status at City University. In retirement, he continued to explore applications of symbolic calculation to molecular integrals, nuclear magnetic resonance, and other topics.
https://en.wikipedia.org/wiki?curid=30796781
Selenium yeast Selenium yeast, produced by fermenting "Saccharomyces cerevisiae" in a selenium-rich medium, is a recognized source of organic food-form selenium. In this process, virtually all of the selenium structurally substitutes for sulfur in the amino acid methionine, thus forming selenomethionine via the same pathways and enzymes that are used to form sulfur-containing methionine. Owing to its similarity to S-containing methionine, selenomethionine is taken up nonspecifically and becomes part of yeast protein. It is this metabolic route that makes selenium yeast valuable in animal and human nutrition, since it offers the same organic form of selenium produced by food-chain autotrophs (i.e., most plants and certain blue-green algae). Selenium is physiologically essential and may also offer a protective effect against several degenerative diseases. The organic form of selenium provided by selenium yeast has been shown to differ in bioavailability and metabolism compared with inorganic (e.g., selenate, selenite) forms of dietary selenium. Dietary supplementation using selenium yeast has been associated with an increased ability to counteract oxidative stress. Furthermore, selenium yeast has been used in a wide range of studies aimed at examining the importance of selenium status in the incidence and progression of a variety of infectious and degenerative diseases
https://en.wikipedia.org/wiki?curid=30806891
Selenium yeast Selenium supplementation in yeast form has been shown to have beneficial effects in many species, especially on animal immune status, growth and reproduction. The consequent improvements in productivity can be of economic benefit to livestock producers for many reasons, including greater overall efficiency of feedstuff use. Supplementation of food-animal diets also has an added nutritional benefit for human consumers of food-animal products. Dietary selenomethionine-containing plant or yeast protein can also be stored nonspecifically in animal protein, which can result in nutritionally useful selenium content in meat, milk, and eggs. Consequently, strategies to supplement animal feed with selenium yeast have led to the development of selenium-rich functional foods, including selenium-enriched eggs and meats for human consumption. Since 2000, selenium yeast ("S. cerevisiae" CNCM I-3060) has been reviewed and has received a series of approvals for use in animal and human diets. A review of the scientific literature concluded that selenium yeast from reputable manufacturers is adequately characterised, of reproducible quality, and shows no evidence of toxicity in long-term supplementation studies at doses as high as 400 and 800 micrograms per day (exceeding the EC tolerable upper intake level of 300 micrograms per day). Total selenium in selenium yeast can be reliably determined using open acid digestion to extract selenium from the yeast matrix, followed by flame atomic absorption spectrometry
https://en.wikipedia.org/wiki?curid=30806891
Selenium yeast Determination of the selenium species selenomethionine can be achieved via proteolytic digestion of selenium yeast followed by high performance liquid chromatography (HPLC) with inductively coupled plasma mass spectrometry (ICP-MS).
https://en.wikipedia.org/wiki?curid=30806891
Wave method In fluid dynamics, the wave method (WM), or wave characteristic method (WCM), is a model describing unsteady flow of fluids in conduits (pipes). The wave method is based on the physically accurate concept that transient pipe flow occurs as a result of pressure waves generated and propagated from a disturbance in the pipe system (valve closure, pump trip, etc.). This method was developed and first described by Don J. Wood in 1966. A pressure wave, which represents a rapid pressure and associated flow change, travels at sonic velocity for the liquid pipe medium, and the wave is partially transmitted and reflected at all discontinuities in the pipe system (pipe junctions, pumps, open or closed ends, surge tanks, etc.). A pressure wave can also be modified by pipe wall resistance. This description is one that closely represents the actual mechanism of transient pipe flow. The WM has the very significant advantage that computations need to be made only at nodes in the piping system. Other techniques such as the method of characteristics (MOC) require calculations at equally spaced interior points in a pipeline. This requirement can easily increase the number of calculations by a factor of 10 or more. However, virtually identical solutions are obtained by the WM and the MOC.
https://en.wikipedia.org/wiki?curid=30810934
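To make the wave-tracking idea concrete, here is a small Python sketch (an illustration under standard water-hammer assumptions, not Wood's published implementation): each pipe i has characteristic impedance B_i = a_i/(g·A_i), and an incident head change ΔH arriving at a junction through pipe 1 is transmitted into the connected pipes with factor s = (2/B_1) / Σ_i (1/B_i) and reflected back with factor s − 1. This junction split is the elementary operation the wave method applies at every node.

import math

G = 9.81  # gravitational acceleration [m/s^2]

def impedance(wave_speed, diameter):
    # Characteristic impedance B = a / (g * A) of a pipe.
    area = math.pi * diameter ** 2 / 4.0
    return wave_speed / (G * area)

def junction_split(dH, incident, impedances):
    # Head changes when a wave of magnitude dH arrives via pipe `incident`
    # at a junction of pipes with the given characteristic impedances.
    s = (2.0 / impedances[incident]) / sum(1.0 / B for B in impedances)
    transmitted = s * dH          # head change sent into every connected pipe
    reflected = (s - 1.0) * dH    # wave reflected back into the incident pipe
    return transmitted, reflected

# Example: a 50 m head rise arriving via pipe 0 at a three-pipe junction.
Bs = [impedance(1000.0, 0.3), impedance(1100.0, 0.2), impedance(900.0, 0.4)]
print(junction_split(50.0, 0, Bs))

For two identical pipes these relations give s = 1 (no reflection), and for a closed end s = 2, the familiar pressure doubling, which is a quick sanity check on the assumed formulas.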
Wagner's gene network model is a computational model of artificial gene networks, which explicitly models the developmental and evolutionary process of genetic regulatory networks. A population with multiple organisms can be created and evolved from generation to generation. It was first developed by Andreas Wagner in 1996 and has been investigated by other groups to study the evolution of gene networks, gene expression, robustness, plasticity and epistasis. The model and its variants have a number of simplifying assumptions. Three of them are listed below. The model represents individuals as networks of interacting transcriptional regulators. Each individual expresses N genes encoding transcription factors. The product of each gene can regulate the expression level of itself and/or the other genes through cis-regulatory elements. The interactions among genes constitute a gene network that is represented by an N × N regulatory matrix R in the model. The elements r_ij in matrix R represent the interaction strength. Positive values within the matrix represent activation of the target gene, while negative ones represent repression. Matrix elements with value 0 indicate the absence of interactions between two genes. The phenotype of each individual is modeled as the gene expression pattern at time t. It is represented by a state vector S(t) = (s_1(t), s_2(t), …, s_N(t)) in this model, whose elements s_i(t) denote the expression state of gene i at time t
https://en.wikipedia.org/wiki?curid=30818571
Wagner's gene network model In the original Wagner model, s_i(t) ∈ {−1, 1}, where 1 represents a gene being expressed and −1 implies the gene is not expressed. The expression pattern can only be ON or OFF. A continuous expression pattern between −1 (or 0) and 1 is also implemented in some other variants. The development process is modeled as the evolution of the gene expression states. The gene expression pattern S(0) at time t = 0 is defined as the initial expression state. The interactions among genes change the expression states during the development process. This process is modeled by the following update rule:

s_i(t + τ) = σ[ Σ_j r_ij s_j(t) ],

where s_i(t + τ) represents the expression state of gene i at time t + τ. It is determined by a filter function σ applied to the weighted sum Σ_j r_ij s_j(t) of the regulatory effects (r_ij) of all genes on gene i at time t. In the original Wagner model, the filter function is a step function:

σ(x) = −1 if x < 0, 0 if x = 0, and +1 if x > 0.

In other variants, the filter function is implemented as a sigmoidal function, so that the expression states acquire a continuous distribution. The gene expression reaches its final state if it reaches a stable pattern. Evolutionary simulations are performed through a reproduction-mutation-selection life cycle. Populations are fixed at a constant size and never go extinct. Non-overlapping generations are employed
https://en.wikipedia.org/wiki?curid=30818571
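A minimal sketch of this development step in Python (the names develop and step_filter, the fixed-point stability test, and the random example are illustrative choices, not part of the original model description):

import numpy as np

def step_filter(x):
    # Step-function filter of the original Wagner model: sign(x), with sigma(0) = 0.
    return np.sign(x)

def develop(R, s0, tau_max=100):
    # Iterate s(t + tau) = sigma(R @ s(t)) from the initial state s0.
    # Returns (final_state, stable_flag); stability is tested here as a
    # fixed point, a simplifying choice for this sketch.
    s = s0.copy()
    for _ in range(tau_max):
        s_next = step_filter(R @ s)
        if np.array_equal(s_next, s):   # fixed point reached: stable phenotype
            return s, True
        s = s_next
    return s, False                     # no fixed point within tau_max steps

# Example: a random sparse 5-gene network developed from a random ON/OFF state.
rng = np.random.default_rng(0)
R = rng.normal(size=(5, 5)) * (rng.random((5, 5)) < 0.4)
s0 = rng.choice([-1, 1], size=5)
print(develop(R, s0))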
Wagner's gene network model In a typical evolutionary simulation, a single random viable individual that can produce a stable gene expression pattern is chosen as the founder. Cloned individuals are generated to create a population of N identical individuals. According to the asexual or sexual reproductive mode, offspring are produced by randomly choosing (with replacement) parent individual(s) from the current generation. Mutations can be acquired with probability μ, and offspring survive with probability equal to their fitness. This process is repeated until N individuals are produced that go on to found the following generation. Fitness in this model is the probability that an individual survives to reproduce. In the simplest implementation of the model, developmentally stable genotypes survive (i.e. their fitness is 1) and developmentally unstable ones do not (i.e. their fitness is 0). Mutations are modeled as changes in gene regulation, i.e., changes to the elements of the regulatory matrix R. Both sexual and asexual reproduction are implemented. Asexual reproduction is implemented as producing the offspring's genome (the gene network) by directly copying the parent's genome. Sexual reproduction is implemented as the recombination of the two parents' genomes. An organism is considered viable if it reaches a stable gene expression pattern. An organism with an oscillating expression pattern is discarded and cannot enter the next generation.
https://en.wikipedia.org/wiki?curid=30818571
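The life cycle described above might be sketched as follows, reusing develop from the previous sketch (the mutation scheme shown, resampling a single nonzero matrix entry, is one common implementation choice among the variants in the literature):

import numpy as np

rng = np.random.default_rng(1)

def mutate(R, mu):
    # With probability mu, resample one nonzero regulatory entry.
    R = R.copy()
    if rng.random() < mu:
        nz = np.argwhere(R != 0)
        if len(nz) > 0:
            i, j = nz[rng.integers(len(nz))]
            R[i, j] = rng.normal()
    return R

def next_generation(population, s0, mu, develop):
    # Asexual reproduction with viability selection: offspring survive only
    # if they develop to a stable expression pattern (fitness 1 versus 0).
    n = len(population)
    offspring = []
    while len(offspring) < n:
        parent = population[rng.integers(n)]   # choose a parent with replacement
        child = mutate(parent, mu)
        _, stable = develop(child, s0)
        if stable:                             # unstable developers are discarded
            offspring.append(child)
    return offspring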
Corrosion monitoring A corrosion monitoring program provides comprehensive monitoring of all critical components of industrial objects, assets, facilities and plants for signs of corrosion. For reliable operation it is important to identify the location, rate, and underlying causes of corrosion. A corrosion monitoring program identifies any non-conforming alloy components, as these are generally susceptible to accelerated corrosion and are a relatively frequent cause of catastrophic failure. Corrosion monitoring can provide significant advantages when integrated into both preventative maintenance and the processes inherent to safety management programs. Based on the results of the corrosion monitoring program, informed decisions can be made, not only regarding the remaining life of the object affected by corrosion but also regarding life extension strategies, prospective material selection, and cost-effective methods for remedying corrosion issues and problems. An effective corrosion monitoring program includes a wide range of activities. Corrosion is a major problem in many industries, particularly in the petrochemical industry. Corrosion is one of the most serious ageing mechanisms impacting the equipment and assets of refineries and plants. Uncontrolled corrosion can cause leaks and component failures, bringing about a reduction in both the performance and reliability of important equipment
https://en.wikipedia.org/wiki?curid=30818649
Corrosion monitoring In extreme cases, corrosion can lead to unexpected failures that can be costly in terms of repair costs, environmental damage and potential harm to humans. Corrosion monitoring uses a wide range of measurement techniques; non-destructive testing (NDT) methods are the most effective and most broadly applied. A number of NDT methods are suitable for the monitoring of corrosion, and the selection of the appropriate method, as well as the detection and monitoring of corrosion itself, requires knowledgeable and experienced personnel.
https://en.wikipedia.org/wiki?curid=30818649
Total absorption spectroscopy is a measurement technique that allows the measurement of the gamma radiation emitted in the different nuclear gamma transitions that may take place in the daughter nucleus after its unstable parent has decayed by means of the beta decay process. This technique can be used for beta decay studies related to beta feeding measurements within the full decay energy window for nuclei far from stability. It is implemented with a special type of detector, the "total absorption spectrometer" (TAS), made of a scintillator crystal that almost completely surrounds the activity to be measured, covering a solid angle of approximately 4π. In an ideal case, it should also be thick enough to have a peak efficiency close to 100%; in this way its total efficiency is also very close to 100% (this is one of the reasons why it is called "total" absorption spectroscopy). Finally, it should be blind to any other type of radiation. The gamma rays produced in the decay under study are collected by photomultipliers attached to the scintillator material. This technique may solve the problem of the Pandemonium effect. There is a change in philosophy when measuring with a TAS: instead of detecting the individual gamma rays (as high-resolution detectors do), it detects the gamma cascades emitted in the decay
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy Then, the final energy spectrum will not be a collection of different energy peaks coming from the different transitions (as can be expected in the case of a germanium detector), but a collection of peaks situated at energies that are the sums of the energies of all the gammas of the cascade emitted from each level. This means that the energy spectrum measured with a TAS will in reality be a spectrum of the levels of the nucleus, where each peak is a level populated in the decay. Since the efficiency of these detectors is close to 100%, it is possible to see the feeding to the high excitation levels that usually cannot be seen by high-resolution detectors. This makes total absorption spectroscopy the best method to measure beta feedings and provide accurate beta intensity ("I") distributions for complex decay schemes. In an ideal case, the measured spectrum would be proportional to the beta feeding ("I"). But a real TAS has limited efficiency and resolution, and the "I" has to be extracted from the measured spectrum, which depends on the spectrometer response. The analysis of TAS data is not simple: to obtain the strength from the measured data, a deconvolution process has to be applied. The complex analysis of the data measured with the TAS can be reduced to the solution of a linear problem: "d = Ri", which relates the measured data ("d") to the feedings ("i") from which the beta intensity distribution "I" can be obtained
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy "R" is the response matrix of the detector (meaning the probability that a decay that feeds a certain level gives a count in certain bin of the spectrum). The function "R" depends of the detector but also of the particular level scheme that is being measured. To be able to extract the value of "i" from the data "d" the equation has to be inverted (this equation is also called the ""inverse problem""). Unfortunately this can not be done easily because there is similar response to the feeding of adjacent levels when they are at high excitation energies where the level density is high. In other words, this is one of the so-called "ill-posed" problems, for which several sets of parameters can reproduce closely the same data set. Then, to find "i", the response has to be obtained for which the branching ratios and a precise simulation of the geometry of the detector are needed. The higher the efficiency of the TAS used, the lower the dependence of the response on the branching ratios will be. Then it is possible to introduce the unknown branching ratios by hand from a plausible guess. A good guess can be calculated by means of the Statistical Model
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy Then the procedure to find the feedings is iterative: using the expectation-maximization algorithm to solve the inverse problem, the feedings are extracted; if they do not reproduce the experimental data, it means that the initial guess of the branching ratios is wrong and has to be changed (it is also possible to adjust other parameters of the analysis). Repeating this procedure iteratively in a reduced number of steps, the data is finally reproduced. The best way to handle this problem is to keep a set of discrete levels at low excitation energies and a set of binned levels at high energies. The set at low energies is supposed to be known and can be taken from databases (for example, the ENSDF database, which contains information from what has already been measured with the high-resolution technique). The set at high energies is unknown and does not overlap with the known part. At the end of this calculation, the whole region of levels inside the Q-value window (known and unknown) is binned. At this stage of the analysis it is important to know the internal conversion coefficients for the transitions connecting the known levels. The internal conversion coefficient is defined as the number of de-excitations via e− emission over those via γ emission
https://en.wikipedia.org/wiki?curid=30832132
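The expectation-maximization step for the linear problem d = Ri can be sketched as follows (a generic multiplicative EM update of the Richardson-Lucy type for Poisson-counting data; the names, the flat starting guess and the fixed iteration count are illustrative, and a real analysis would also fold the branching-ratio-dependent response into R):

import numpy as np

def em_deconvolve(R, d, n_iter=200):
    # Iteratively solve d = R @ i for non-negative feedings i.
    # R[k, j]: probability that feeding level j gives a count in spectrum bin k.
    # d[k]: measured counts in bin k.
    i = np.full(R.shape[1], d.sum() / R.shape[1])  # flat initial guess
    norm = R.sum(axis=0)                           # detection probability per level
    for _ in range(n_iter):
        pred = R @ i                               # expected spectrum for current i
        ratio = np.divide(d, pred, out=np.zeros_like(d, dtype=float),
                          where=pred > 0)
        i *= (R.T @ ratio) / norm                  # multiplicative EM update
    return i

# Toy example: 3 levels, 4 spectrum bins (illustrative response matrix).
R = np.array([[0.7, 0.1, 0.0],
              [0.2, 0.6, 0.1],
              [0.1, 0.2, 0.5],
              [0.0, 0.1, 0.4]])
true_i = np.array([100.0, 50.0, 25.0])
d = R @ true_i
print(em_deconvolve(R, d))   # recovers approximately [100, 50, 25]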
Total absorption spectroscopy If internal conversion takes place, the EM multipole fields of the nucleus do not result in the emission of a photon; instead, the fields interact with the atomic electrons and cause one of the electrons to be emitted from the atom. The gamma that would be emitted after the beta decay is missed, and the γ intensity decreases accordingly: IT = Iγ + Ie− = Iγ(1 + αe), so this phenomenon has to be taken into account in the calculation. Also, the x rays will be contaminated with those coming from the electron conversion process. This is important in electron capture decay, as it can affect the results of any x-ray-gated spectra if the internal conversion is strong. The probability of internal conversion is higher for lower energies and high multipolarities. One way to obtain the whole branching ratio matrix is to use the Statistical Nuclear Model. This model generates a binned branching ratio matrix from average level densities and average gamma strength functions. For the unknown part, average branching ratios can be calculated, for which several parameterizations may be chosen, while for the known part the information in the databases is used. It is not possible to produce gamma sources that emit all the energies needed to calculate accurately the response of a TAS detector. For this reason, it is better to perform a Monte Carlo simulation of the response. For this simulation to be reliable, the interactions of all the particles emitted in the decay (γ, e−/e+, Auger electrons, x rays, etc
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy ) have to be modeled accurately, and the geometry and materials in the way of these particles have to be well reproduced. Also, the light production of the scintillator has to be included. The way to perform this simulation is explained in detail in the paper by D. Cano-Ott et al. GEANT3 and GEANT4 are well suited for these kinds of simulations. If the scintillator material of the TAS detector suffers from non-proportionality in the light production, the peaks produced by a cascade will be displaced further for every increment in the multiplicity, and the width of these peaks will be different from the width of single peaks with the same energy. This effect can be introduced in the simulation by means of a hyperbolic scintillation efficiency. The simulation of the light production will widen the peaks of the TAS spectrum; however, this still does not reproduce the real width of the experimental peaks. During the measurement there are additional statistical processes that affect the energy collection and are not included in the Monte Carlo simulation. Their effect is an extra widening of the TAS experimental peaks. Since the peaks reproduced by the Monte Carlo simulation do not have the correct width, a convolution with an empirical instrumental resolution distribution has to be applied to the simulated response. Finally, if the data to be analyzed come from electron capture events, a simulated gamma response matrix must be built using the simulated responses to individual monoenergetic γ rays of several energies
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy This matrix contains the information related to the dependence of the response function on the detector. To include also the dependence on the level scheme that is being measured, the above-mentioned matrix should be convoluted with the branching ratio matrix calculated previously. In this way, the final global response R is obtained. An important thing to keep in mind when using the TAS technique is that, if nuclei with short half-lives are measured, the energy spectrum will be contaminated with the gamma cascades of the daughter nuclei produced in the decay chain. Normally TAS detectors have the possibility to place ancillary detectors inside them, to measure secondary radiation like x rays, electrons or positrons. In this way it is possible to tag the other components of the decay during the analysis, allowing the contributions coming from all the different nuclei to be separated (isobaric separation). In 1970, a spectrometer consisting of two cylindrical NaI detectors of 15 cm diameter and 10 cm length was used at ISOLDE. The TAS measuring station installed at GSI had a tape transport system that allowed the collection of the ions coming out of the separator (they were implanted in the tape), and the transportation of those ions from the collection position to the center of the TAS for the measurement (by means of the movement of the tape). The TAS at this facility was made of a cylindrical NaI crystal of Φ = h = 35.6 cm, with a concentric cylindrical hole in the direction of the symmetry axis
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy This hole was filled by a plug detector (4.7 × 15.0 cm) with a holder that allowed the placement of ancillary detectors, and two rollers for a tape. This measuring station, installed at the end of one of the ISOLDE beamlines, consists of a TAS and a tape station. In this station, a beam pipe is used to hold the tape. The beam is implanted in the tape outside of the TAS, which is then transported to the center of the detector for the measurement. In this station it is also possible to implant the beam directly in the center of the TAS, by changing the position of the rollers. The latter procedure allows the measurement of more exotic nuclei with very short half-lives. "Lucrecia" is the TAS at this station. It is made of one piece of NaI(Tl) material, cylindrically shaped with φ = h = 38 cm (the largest ever built to our knowledge). It has a cylindrical cavity of 7.5 cm diameter that goes through it perpendicularly to its symmetry axis. The purpose of this hole is to allow the beam pipe to reach the measurement position so that the tape can be positioned in the center of the detector. It also allows the placement of ancillary detectors on the opposite side to measure other types of radiation emitted by the activity implanted in the tape (x rays, e−/e+, etc.). However, the presence of this hole makes this detector less efficient compared to the GSI TAS (Lucrecia's total efficiency is around 90% from 300 to 3000 keV). Lucrecia's light is collected by 8 photomultipliers
https://en.wikipedia.org/wiki?curid=30832132
Total absorption spectroscopy During the measurements, Lucrecia is kept at a total counting rate no larger than 10 kHz to avoid second- and higher-order pileup contributions. Surrounding the TAS there is a shielding box 19.2 cm thick made of four layers: polyethylene, lead, copper and aluminium. Its purpose is to absorb most of the external radiation (neutrons, cosmic rays, and the room background).
https://en.wikipedia.org/wiki?curid=30832132
Transmission electron microscopy DNA sequencing is a single-molecule sequencing technology that uses transmission electron microscopy techniques. The method was conceived and developed in the 1960s and 70s, but lost favor when the extent of damage to the sample was recognized. In order for DNA to be clearly visualized under an electron microscope, it must be labeled with heavy atoms. In addition, specialized imaging techniques and aberration-corrected optics are beneficial for obtaining the resolution required to image the labeled DNA molecule. In theory, transmission electron microscopy DNA sequencing could provide extremely long read lengths, but the issue of electron beam damage may still remain and the technology has not yet been commercially developed. Only a few years after James Watson and Francis Crick deduced the structure of DNA, and nearly two decades before Frederick Sanger published the first method for rapid DNA sequencing, Richard Feynman, an American physicist, envisioned the electron microscope as the tool that would one day allow biologists to “see the order of bases in the DNA chain”. Feynman believed that if the electron microscope could be made powerful enough, then it would become possible to visualize the atomic structure of any and all chemical compounds, including DNA. In 1970, Albert Crewe developed the high-angle annular dark-field (HAADF) imaging technique in a scanning transmission electron microscope. Using this technique, he visualized individual heavy atoms on thin amorphous carbon films
https://en.wikipedia.org/wiki?curid=30833420
Transmission electron microscopy DNA sequencing In 2010 Krivanek and colleagues reported several technical improvements to the HAADF method, including a combination of aberration-corrected electron optics and low accelerating voltage. The latter is crucial for imaging biological objects, as it allows beam damage to be reduced and increases the image contrast for light atoms. As a result, single-atom substitutions in a boron nitride monolayer could be imaged. Despite the invention of a multitude of chemical and fluorescent sequencing technologies, electron microscopy is still being explored as a means of performing single-molecule DNA sequencing. For example, in 2012 a collaboration between scientists at Harvard University, the University of New Hampshire and ZS Genetics demonstrated the ability to read long sequences of DNA using the technique; however, transmission electron microscopy DNA sequencing technology is still far from being commercially available. The electron microscope has the capacity to obtain a resolution of up to 100 pm, whereby microscopic biomolecules and structures such as viruses, ribosomes, proteins, lipids, small molecules and even single atoms can be observed. Although DNA is visible when observed with the electron microscope, the resolution of the image obtained is not high enough to allow for deciphering the sequence of the individual bases, "i.e.", DNA sequencing. However, upon differential labeling of the DNA bases with heavy atoms or metals, it is possible to both visualize and distinguish between the individual bases
https://en.wikipedia.org/wiki?curid=30833420
Transmission electron microscopy DNA sequencing Therefore, electron microscopy in conjunction with differential heavy atom DNA labeling could be used to directly image the DNA in order to determine its sequence. As in a standard polymerase chain reaction (PCR), the double-stranded DNA molecules to be sequenced must be denatured before the second strand can be synthesized with labeled nucleotides. The elements that make up biological molecules (C, H, N, O, P, S) are too light (low atomic number, Z) to be clearly visualized as individual atoms by transmission electron microscopy. To circumvent this problem, the DNA bases can be labeled with heavier atoms (higher Z). Each nucleotide is tagged with a characteristic heavy label, so that they can be distinguished in the transmission electron micrograph. The DNA molecules must be stretched out on a thin, solid substrate so that the order of the labeled bases will be clearly visible on the electron micrograph. Molecular combing is a technique that utilizes the force of a receding air-water interface to extend DNA molecules, leaving them irreversibly bound to a silane layer once dry. This is one means by which alignment of the DNA on a solid substrate may be achieved. Transmission electron microscopy (TEM) produces high-magnification, high-resolution images by passing a beam of electrons through a very thin sample. Whereas atomic resolution has been demonstrated with conventional TEM, further improvement in spatial resolution requires correcting the spherical and chromatic aberrations of the microscope lenses
https://en.wikipedia.org/wiki?curid=30833420
Transmission electron microscopy DNA sequencing This has only been possible in scanning transmission electron microscopy, where the image is obtained by scanning the object with a finely focused electron beam, in a way similar to a cathode ray tube. However, the achieved improvement in resolution comes together with irradiation of the studied object by much higher beam intensities, the concomitant sample damage and the associated imaging artefacts. Different imaging techniques are applied depending on whether the sample contains heavy or light atoms. Dark and bright spots on the electron micrograph, corresponding to the differentially labeled DNA bases, are analyzed by computer software. Transmission electron microscopy DNA sequencing is not yet commercially available, but the long read lengths that this technology may one day provide will make it useful in a variety of contexts. When sequencing a genome, it must be broken down into pieces that are short enough to be sequenced in a single read. These reads must then be put back together like a jigsaw puzzle by aligning the regions that overlap between reads; this process is called "de novo" genome assembly, sketched below. The longer the read length that a sequencing platform provides, the longer the overlapping regions, and the easier it is to assemble the genome. From a computational perspective, microfluidic Sanger sequencing is still the most effective way to sequence and assemble genomes for which no reference genome sequence exists
https://en.wikipedia.org/wiki?curid=30833420
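As an illustration of why longer reads make this jigsaw-puzzle step easier, here is a toy Python sketch of greedy overlap merging (far simpler than production assemblers, which use overlap or de Bruijn graphs and must handle sequencing errors, repeats and reverse complements; all names and sequences below are illustrative):

def overlap(a, b, min_len=3):
    # Length of the longest suffix of a that is a prefix of b (at least min_len).
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    # Repeatedly merge the pair of reads with the largest overlap.
    reads = list(reads)
    while len(reads) > 1:
        n, i, j = max((overlap(a, b, min_len), i, j)
                      for i, a in enumerate(reads)
                      for j, b in enumerate(reads) if i != j)
        if n == 0:
            break                        # no overlaps left to merge
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

print(greedy_assemble(["ATGGCGT", "GCGTACG", "TACGGAT"]))  # one merged contig

Longer reads raise the chance that true overlaps are long and unique, which is exactly the advantage discussed above.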
Transmission electron microscopy DNA sequencing The relatively long read lengths provide substantial overlap between individual sequencing reads, which allows for greater statistical confidence in the assembly. In addition, long Sanger reads are able to span most regions of repetitive DNA sequence which otherwise confound sequence assembly by causing false alignments. However, "de novo" genome assembly by Sanger sequencing is extremely expensive and time consuming. Second generation sequencing technologies, while less expensive, are generally unfit for "de novo" genome assembly due to short read lengths. In general, third generation sequencing technologies, including transmission electron microscopy DNA sequencing, aim to improve read length while maintaining low sequencing cost. Thus, as third generation sequencing technologies improve, rapid and inexpensive "de novo" genome assembly will become a reality. A haplotype is a series of linked alleles that are inherited together on a single chromosome. DNA sequencing can be used to genotype all of the single nucleotide polymorphisms (SNPs) that constitute a haplotype. However, short DNA sequencing reads often cannot be phased; that is, heterozygous variants cannot be confidently assigned to the correct haplotype. In fact, haplotyping with short read DNA sequencing data requires very high coverage (average >50x coverage of each DNA base) to accurately identify SNPs, as well as additional sequence data from the parents so that Mendelian transmission can be used to estimate the haplotypes
https://en.wikipedia.org/wiki?curid=30833420
Transmission electron microscopy DNA sequencing Sequencing technologies that generate long reads, including transmission electron microscopy DNA sequencing, can capture entire haploblocks in a single read. That is, haplotypes are not broken up among multiple reads, and the genetically linked alleles remain together in the sequencing data. Therefore, long reads make haplotyping easier and more accurate, which is beneficial to the field of population genetics. Genes are normally present in two copies in the diploid human genome; genes that deviate from this standard copy number are referred to as copy number variants (CNVs). Copy number variation can be benign (these are usually common variants, called copy number polymorphisms) or pathogenic. CNVs are detected by fluorescence in situ hybridization (FISH) or comparative genomic hybridization (CGH). To detect the specific breakpoints at which a deletion occurs, or to detect genomic lesions introduced by a duplication or amplification event, CGH can be performed using a tiling array (array CGH), or the variant region can be sequenced. Long sequencing reads are especially useful for analyzing duplications or amplifications, as it is possible to analyze the orientation of the amplified segments if they are captured in a single sequencing read. Cancer genomics, or oncogenomics, is an emerging field in which high-throughput, second generation DNA sequencing technology is being applied to sequence entire cancer genomes
https://en.wikipedia.org/wiki?curid=30833420
Transmission electron microscopy DNA sequencing Analyzing this short read sequencing data encompasses all of the problems associated with "de novo" genome assembly using short read data. Furthermore, cancer genomes are often aneuploid. These aberrations, which are essentially large scale copy number variants, can be analyzed by second-generation sequencing technologies using read frequency to estimate the copy number. Longer reads would, however, provide a more accurate picture of copy number, orientation of amplified regions, and SNPs present in cancer genomes. The microbiome refers to the total collection of microbes present in a microenvironment and their respective genomes. For example, an estimated 100 trillion microbial cells colonize the human body at any given time. The human microbiome is of particular interest, as these commensal bacteria are important for human health and immunity. Most of the Earth's bacterial genomes have not yet been sequenced; undertaking a microbiome sequencing project would require extensive "de novo" genome assembly, a prospect which is daunting with short read DNA sequencing technologies. Longer reads would greatly facilitate the assembly of new microbial genomes. Compared to other second- and third-generation DNA sequencing technologies, transmission electron microscopy DNA sequencing has a number of potential key strengths and weaknesses, which will ultimately determine its usefulness and prominence as a future DNA sequencing technology
https://en.wikipedia.org/wiki?curid=30833420
Transmission electron microscopy DNA sequencing Many non-Sanger second- and third-generation DNA sequencing technologies have been or are currently being developed with the common aim of increasing throughput and decreasing cost such that personalized genetic medicine can be fully realized. Both the US$10 million Archon X Prize for Genomics supported by the X Prize Foundation (Santa Monica, CA, USA) and the US$70 million in grant awards supported by the National Human Genome Research Institute of the National Institutes of Health (NIH-NHGRI) are fueling the rapid burst of research activity in the development of new DNA sequencing technologies. Since different approaches, techniques, and strategies are what define each DNA sequencing technology, each has its own strengths and weaknesses. A comparison of important parameters between various second- and third-generation DNA sequencing technologies is presented in Table 1.
https://en.wikipedia.org/wiki?curid=30833420
Membrane emulsification (ME) is a relatively novel technique for producing all types of single and multiple emulsions for DDS (drug delivery systems), solid microcarriers for the encapsulation of drugs or nutrients, solder particles for surface-mount technology, and monodisperse polymer microspheres (for analytical column packing, enzyme carriers, liquid crystal display spacers, and toner core particles). The technique was introduced by Nakashima and Shimizu in the late 1980s in Japan. In this process, the dispersed phase is forced through the pores of a microporous membrane directly into the continuous phase. Emulsified droplets are formed and detached at the end of the pores with a drop-by-drop mechanism. The advantages of membrane emulsification over conventional emulsification processes are that it enables one to obtain very fine emulsions of controlled droplet sizes and narrow droplet size distributions. Successful emulsification can be carried out with much less consumption of emulsifier and energy, and because of the lowered shear stress effect, membrane emulsification allows the use of shear-sensitive ingredients, such as starch and proteins. The membrane emulsification process is generally carried out in cross-flow (continuous or batch) mode or in a stirred cell (batch). A major limiting factor of ME is the low dispersed phase flux. In order to expand its industrial applications, the productivity of this method has to be increased. Some research has been aimed at solving this problem and others, such as membrane fouling.
https://en.wikipedia.org/wiki?curid=30838269
Dynamical mean-field theory (DMFT) is a method to determine the electronic structure of strongly correlated materials. In such materials, the approximation of independent electrons, which is used in density functional theory and usual band structure calculations, breaks down. Dynamical mean-field theory, a non-perturbative treatment of local interactions between electrons, bridges the gap between the nearly free electron gas limit and the atomic limit of condensed-matter physics. DMFT consists in mapping a many-body lattice problem onto a many-body "local" problem, called an impurity model. While the lattice problem is in general intractable, the impurity model is usually solvable through various schemes. The mapping in itself does not constitute an approximation. The only approximation made in ordinary DMFT schemes is to assume the lattice self-energy to be a momentum-independent (local) quantity. This approximation becomes exact in the limit of lattices with infinite coordination number. One of DMFT's main successes is to describe the phase transition between a metal and a Mott insulator when the strength of electronic correlations is increased. It has been successfully applied to real materials, in combination with the local density approximation of density functional theory. The DMFT treatment of lattice quantum models is similar to the mean-field theory (MFT) treatment of classical models such as the Ising model
https://en.wikipedia.org/wiki?curid=30839171
Dynamical mean-field theory In the Ising model, the lattice problem is mapped onto an effective single-site problem, whose magnetization is required to reproduce the lattice magnetization through an effective "mean field". This condition is called the self-consistency condition. It stipulates that the single-site observables should reproduce the lattice "local" observables by means of an effective field. While the N-site Ising Hamiltonian is hard to solve analytically (to date, analytical solutions exist only for the 1D and 2D cases), the single-site problem is easily solved. Likewise, DMFT maps a lattice problem ("e.g." the Hubbard model) onto a single-site problem. In DMFT, the local observable is the local Green's function. Thus, the self-consistency condition for DMFT is for the impurity Green's function to reproduce the lattice local Green's function through an effective mean field which, in DMFT, is the hybridization function Δ(τ) of the impurity model. DMFT owes its name to the fact that the mean field Δ(τ) is time-dependent, or dynamical. This also points to the major difference between the Ising MFT and DMFT: Ising MFT maps the N-spin problem onto a single-site, single-spin problem. DMFT maps the lattice problem onto a single-site problem, but the latter fundamentally remains an N-body problem which captures the temporal fluctuations due to electron-electron correlations. The Hubbard model describes the onsite interaction between electrons of opposite spin by a single parameter, U
https://en.wikipedia.org/wiki?curid=30839171
Dynamical mean-field theory The Hubbard Hamiltonian may take the following form:

H = −t Σ_⟨i,j⟩,σ (c†_iσ c_jσ + c†_jσ c_iσ) + U Σ_i n_i↑ n_i↓,

where, on suppressing the spin-1/2 indices σ, c†_i and c_i denote the creation and annihilation operators of an electron on a localized orbital on site i, and n_i = c†_i c_i. The model as written embodies the following assumptions: a single orbital per site, hopping restricted to neighboring sites, and a purely on-site interaction. The Hubbard model is in general intractable under usual perturbation expansion techniques. DMFT maps this lattice model onto the so-called Anderson impurity model (AIM). This model describes the interaction of one site (the impurity) with a "bath" of electronic levels (described by the annihilation and creation operators a_p and a†_p) through a hybridization function. The Anderson model corresponding to our single-site model is a single-orbital Anderson impurity model, whose Hamiltonian formulation, on suppressing some spin-1/2 indices σ, is:

H_AIM = Σ_p ε_p a†_p a_p + Σ_p (V_p c† a_p + V*_p a†_p c) − μ c†c + U n_c↑ n_c↓,

where c†, c are the operators of the impurity orbital and V_p are the hybridization amplitudes. The Matsubara Green's function of this model, defined by G(τ) = −⟨T c(τ) c†(0)⟩, is entirely determined by the parameters (U, μ) and the so-called hybridization function Δ(iω_n), which is the imaginary-time Fourier transform of Δ(τ). This hybridization function describes the dynamics of electrons hopping in and out of the bath. It should reproduce the lattice dynamics such that the impurity Green's function is the same as the local lattice Green's function
https://en.wikipedia.org/wiki?curid=30839171
Dynamical mean-field theory It is related to the non-interacting Green's function by the relation:

G_0^−1(iω_n) = iω_n + μ − Δ(iω_n).

Solving the Anderson impurity model consists in computing observables such as the interacting Green's function G(iω_n) for a given hybridization function Δ(iω_n) and interaction strength U. It is a difficult but not intractable problem, and there exist a number of ways to solve the AIM (for example, exact diagonalization, the numerical renormalization group, or quantum Monte Carlo solvers). The self-consistency condition requires the impurity Green's function G_imp(iω_n) to coincide with the local lattice Green's function G_loc(iω_n):

G_imp(iω_n) = G_loc(iω_n) = Σ_k 1 / (iω_n + μ − ε_k − Σ(k, iω_n)),

where Σ(k, iω_n) denotes the lattice self-energy. The only DMFT approximation (apart from the approximation that can be made in order to solve the Anderson model) consists in neglecting the spatial fluctuations of the lattice self-energy, by equating it to the impurity self-energy:

Σ(k, iω_n) ≈ Σ_imp(iω_n).

This approximation becomes exact in the limit of lattices with infinite coordination, that is, when the number of neighbors of each site is infinite. Indeed, one can show that in the diagrammatic expansion of the lattice self-energy, only local diagrams survive when one goes to the infinite coordination limit. Thus, as in classical mean-field theories, DMFT is supposed to get more accurate as the dimensionality (and thus the number of neighbors) increases. Put differently, for low dimensions, spatial fluctuations will render the DMFT approximation less reliable
https://en.wikipedia.org/wiki?curid=30839171
Dynamical mean-field theory In order to find the local lattice Green's function, one has to determine the hybridization function such that the corresponding impurity Green's function will coincide with the sought-after local lattice Green's function. The most widespread way of solving this problem is by using a forward recursion method: for a given interaction U, chemical potential μ and temperature T, one starts from an initial guess for the self-energy, computes the local lattice Green's function, updates the hybridization function through the self-consistency condition, solves the impurity model with it, and repeats until convergence (a sketch of this loop is given below). The local lattice Green's function and other impurity observables can then be used to calculate a number of physical quantities as a function of correlations U, bandwidth, filling (chemical potential μ), and temperature T. In particular, the drop of the double occupancy as U increases is a signature of the Mott transition. DMFT has several extensions, which generalize the above formalism to multi-orbital, multi-site problems. DMFT can be extended to Hubbard models with multiple orbitals, namely with electron-electron interactions of the form U_αβ n_α n_β, where α and β denote different orbitals. The combination with density functional theory (DFT+DMFT) then allows for a realistic calculation of correlated materials. Extended DMFT yields a local impurity self-energy for non-local interactions, and hence allows one to apply DMFT to more general models such as the t-J model. In order to improve on the DMFT approximation, the Hubbard model can be mapped onto a multi-site impurity (cluster) problem, which allows one to add some spatial dependence to the impurity self-energy
https://en.wikipedia.org/wiki?curid=30839171
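A minimal Python sketch of this forward recursion, specialized to the Bethe lattice, where the semicircular density of states reduces the self-consistency to Δ(iω_n) = t²G(iω_n), at particle-hole-symmetric half filling. The impurity "solver" below is only a crude atomic-limit placeholder, Σ(iω_n) = U²/(4iω_n), included to keep the sketch runnable; a real calculation would replace it with QMC, exact diagonalization, NRG or IPT, which would actually consume the hybridization function:

import numpy as np

beta, t, U = 50.0, 0.5, 2.0                          # inverse temperature, hopping, interaction
iwn = 1j * (2 * np.arange(512) + 1) * np.pi / beta   # fermionic Matsubara frequencies

def gloc_bethe(zeta, t):
    # Local Green's function of the Bethe lattice (semicircular DOS,
    # half-bandwidth 2t): G = (zeta - sqrt(zeta^2 - 4 t^2)) / (2 t^2),
    # with the square-root branch chosen so that Im G < 0 for Im zeta > 0.
    s = np.sqrt(zeta ** 2 - 4 * t ** 2)
    s = np.where(s.imag * zeta.imag < 0, -s, s)
    return (zeta - s) / (2 * t ** 2)

def sigma_atomic(iwn, U):
    # Placeholder impurity solver: atomic-limit self-energy at half filling.
    # NOT a real solver; it ignores the hybridization entirely.
    return U ** 2 / (4 * iwn)

delta = t ** 2 / iwn                                 # initial guess for the hybridization
for it in range(20):
    sigma = sigma_atomic(iwn, U)                     # a real solver would take delta as input
    g_loc = gloc_bethe(iwn - sigma, t)               # local lattice Green's function
    delta_new = t ** 2 * g_loc                       # Bethe-lattice self-consistency
    if np.max(np.abs(delta_new - delta)) < 1e-8:
        break
    delta = delta_new

print(f"converged after {it + 1} iterations; G(iw_0) = {g_loc[0]:.4f}")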
Dynamical mean-field theory Clusters contain 4 to 8 sites at low temperature and up to 100 sites at high temperature. Spatial dependencies of the self-energy beyond DMFT, including long-range correlations in the vicinity of a phase transition, can also be obtained through a combination of analytical and numerical techniques. The starting point of the dynamical vertex approximation and of the dual fermion approach is the local two-particle vertex. DMFT has been employed to study non-equilibrium transport and optical excitations. Here, the reliable calculation of the AIM's Green's function out of equilibrium remains a big challenge.
https://en.wikipedia.org/wiki?curid=30839171
Sodium bisulfite (or sodium bisulphite, sodium hydrogen sulfite) is a chemical mixture with the approximate chemical formula NaHSO3. Sodium bisulfite is in fact not a real compound, but a mixture of salts that dissolve in water to give solutions composed of sodium and bisulfite ions. It is a white solid with an odor of sulfur dioxide. Regardless of its ill-defined nature, "sodium bisulfite" is a food additive with E number E222. Sodium bisulfite solutions can be prepared by treating a solution of a suitable base, such as sodium hydroxide or sodium bicarbonate, with sulfur dioxide. Attempts to crystallize the product yield sodium disulfite, Na2S2O5. Sodium bisulfite is a common industrial reducing agent, as it readily reacts with dissolved oxygen (2 NaHSO3 + O2 → 2 NaHSO4). It is usually added to large piping systems to prevent oxidative corrosion. In biochemical engineering applications, it is helpful for maintaining anaerobic conditions within a reactor. It is also used for preserving food and beverages.
https://en.wikipedia.org/wiki?curid=30844503
Cholesterol total synthesis in chemistry describes the total synthesis of the complex biomolecule cholesterol and is considered a great scientific achievement. The research group of Robert Robinson with John Cornforth (Oxford University) published their synthesis in 1951, and that of Robert Burns Woodward with Franz Sondheimer (Harvard University) in 1952. The two groups competed for the first publication from 1950 onward, Robinson having started in 1932 and Woodward in 1949. According to historian Greg Mulheirn, the Robinson effort was hampered by his micromanagement style of leadership, while the Woodward effort was greatly facilitated by his good relationships with the chemical industry. Around 1949, steroids like cortisone were produced from natural resources but were expensive. The chemical companies Merck & Co. and Monsanto saw commercial opportunities for steroid synthesis and not only funded Woodward but also provided him with large quantities of certain chemical intermediates from pilot plants. Hard work also helped the Woodward effort: one of the intermediate compounds was named Christmasterone, as it was synthesized on Christmas Day 1950 by Sondheimer. Other cholesterol schemes have also been developed: racemic cholesterol was synthesized in 1966 by W.S. Johnson, and the enantiomer of natural cholesterol was reported in 1996 by Rychnovsky and Mickus, in 2002 by Jiang & Covey, and again in 2008 by Rychnovsky and Belani. Cholesterol is a tetracyclic alcohol and a type of sterol.
https://en.wikipedia.org/wiki?curid=30845073
Cholesterol total synthesis Added to the sterol frame with the alcohol group at position 3 are two methyl groups at carbon positions 10 and 13 and an isooctyl group at position 17. The molecule is unsaturated at the 5,6 position with an alkene group. The total number of stereocenters is 8. The unnatural enantiomer of cholesterol, which has also been synthesized, is called ent-cholesterol. The Robinson synthesis is an example of a so-called relay synthesis. As many of the chemical intermediates (all steroids) were already known and available from natural resources, all that was needed for a formal synthesis was proof that these intermediates could be linked to each other via chemical synthesis. The starting point for the Robinson synthesis was 1,6-dihydroxynaphthalene 1, which was converted in about 20 steps into the then already known androsterone 4. Ruzicka had already demonstrated in 1938 that androsterone could be converted into androstenedione 5, and Robinson demonstrated its conversion to dehydroepiandrosterone 6 (note the epimerized hydroxyl group), also already a known compound. Conversion of 6 to pregnenolone 7 and then to allopregnanolone 8 allowed the addition of the tail group as the acetate in 9 and then conversion to cholestanol 10. The conversion of cholestanol to cholesterol had already been demonstrated by oxidation to the ketone, bromination to the bromoketone and elimination to the enone.
https://en.wikipedia.org/wiki?curid=30845073
Cholesterol total synthesis The conversion of cholestenone into cholesterol by the method of Dauben and Eastham (1950) consisted of reduction of the enol acetate (lithium aluminum hydride) and fractionation with digitonin for the isolation of the correct isomer. The starting point for the Woodward synthesis was the hydroquinone 1, which was converted to the cis-bicycle 2 in a Diels-Alder reaction with butadiene. Conversion to the desired trans isomer 5 was accomplished by synthesis of the sodium enolate salt 4 (benzene, sodium hydride) followed by acidification. Reduction (lithium aluminum hydride) then gave diol 6, dehydration (HCl/water) gave ketol 7, deoxygenation of its acetate by elemental zinc gave enone 8, formylation (ethyl formate) gave enol 9, and Michael addition of ethyl vinyl ketone (potassium t-butoxide/t-butanol) gave dione 11, which on reaction with KOH in dioxane gave tricycle 12 in an aldol condensation with elimination of the formyl group. In the next series of steps, oxidation (osmium tetroxide) gave diol 13, protection (acetone/copper sulfate) gave acetonide 14, hydrogenation (palladium-strontium carbonate) gave 15, and formylation (ethyl formate) gave enol 16, which, protected as the enamine 17 ("N"-methylaniline/methanol), gave via the potassium anion 18 the carboxylic acid 19 by reaction with cyanoethylene using triton B as the base. Acid 19 was converted to lactone 20 (acetic anhydride, sodium acetate), and reaction with methylmagnesium chloride gave tetracyclic ketone 21.
https://en.wikipedia.org/wiki?curid=30845073
Cholesterol total synthesis Treatment with periodic acid (dioxane) and piperidine acetate (benzene) gave aldehyde 24 through diol 22 (oxidation) and dialdehyde 23 (aldol condensation). Sodium dichromate oxidation gave carboxylic acid 25, diazomethane treatment gave methyl ester 26, and sodium borohydride gave the allyl alcohol 27. Chiral resolution of this racemic compound with digitonin produced chiral 28, and Oppenauer oxidation then gave chiral 29. Hydrogenation (Adams' catalyst) gave alcohol 30, chromic acid oxidation gave ketone 31, sodium borohydride reduction stereoselectively gave alcohol 32, hydrolysis followed by acylation gave acetate 33, thionyl chloride treatment gave acyl chloride 34, and methylcadmium gave the ketone 35. In the final stages, reaction of 35 with isohexylmagnesium bromide 36 gave diol 37, acetic acid treatment effected dehydration, and subsequent hydrogenation gave acetate 38. Hydrolysis of this ester gave cholestanol 39. The route from cholestanol to cholesterol was already known (see: Robinson synthesis).
https://en.wikipedia.org/wiki?curid=30845073
Local elevation is a technique used in computational chemistry or physics, mainly in the field of molecular simulation (including molecular dynamics (MD) and Monte Carlo (MC) simulations). It was developed in 1994 by Huber, Torda and van Gunsteren to enhance the searching of conformational space in molecular dynamics simulations and is available in the GROMOS software for molecular dynamics simulation (since GROMOS96). The method was, together with the conformational flooding method, the first to introduce memory dependence into molecular simulations. Many recent methods build on the principles of the local elevation technique, including the Engkvist-Karlström method, adaptive biasing force, Wang-Landau sampling, metadynamics, adaptively biased molecular dynamics, adaptive reaction coordinate forces, and local elevation umbrella sampling. The basic principle of the method is to add a memory-dependent potential energy term to the simulation so as to prevent the simulation from revisiting already sampled configurations, which increases the probability of discovering new configurations. The method can be seen as a continuous variant of the Tabu search method. The basic step of the algorithm is to add a small, repulsive potential energy function at the current configuration of the molecule so as to penalize this configuration and increase the likelihood of discovering other configurations. This requires the selection of a subset formula_1 of the degrees of freedom, which define the relevant conformational variables.
https://en.wikipedia.org/wiki?curid=30846294
Local elevation These are typically a set of conformationally relevant dihedral angles, but can in principle be any differentiable function of the Cartesian coordinates formula_2. The algorithm deforms the physical potential energy surface by introducing a bias energy, such that the total potential energy is defined as formula_3. The local elevation bias formula_4 depends on the simulation time formula_5, is set to zero at the start of the simulation (formula_6), and is gradually built up as a sum of small, repulsive functions, giving formula_7, where formula_8 is a scaling constant and formula_9 is a multidimensional, repulsive function with formula_10. The resulting bias potential will be a sum of all the added functions formula_11. To reduce the number of added repulsive functions, a common approach is to add the functions at grid points. The original choice of formula_12 is a multidimensional Gaussian function. However, due to the infinite range of the Gaussian as well as the artifacts that can occur with a sum of gridded Gaussians, a better choice is to apply multidimensional truncated polynomial functions; a minimal sketch of the grid-based scheme follows below. The local elevation method can be applied to free energy calculations as well as to conformational searching problems. In free energy calculations the local elevation technique is applied to level out the free energy surface along the selected set of variables.
https://en.wikipedia.org/wiki?curid=30846294
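A minimal sketch of the grid-based deposition described above, for a single periodic dihedral angle, might look as follows in Python; the Gaussian deposit shape, grid resolution, and energy scale are hypothetical choices (as noted above, truncated polynomials can behave better than Gaussians in production use).

```python
import numpy as np

# Local-elevation-style memory bias on one periodic dihedral angle,
# deposited as repulsive Gaussians on a fixed grid (all values illustrative).
n_bins = 72
centers = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
weights = np.zeros(n_bins)            # visit counts per grid point
k_le = 0.1                            # energy per deposit (arbitrary units)
width = 2 * np.pi / n_bins            # Gaussian width ~ grid spacing

def periodic_delta(phi):
    d = phi - centers
    return np.arctan2(np.sin(d), np.cos(d))      # wrap into (-pi, pi]

def deposit(phi):
    """Penalize the current configuration: raise the grid point nearest phi."""
    weights[np.argmin(np.abs(periodic_delta(phi)))] += 1.0

def bias_energy(phi):
    d = periodic_delta(phi)
    return k_le * np.sum(weights * np.exp(-d**2 / (2 * width**2)))

def bias_force(phi):
    """Biasing force -dV/dphi, to be added to the physical force in MD."""
    d = periodic_delta(phi)
    return k_le * np.sum(weights * np.exp(-d**2 / (2 * width**2)) * d / width**2)

# Toy usage: a trajectory stuck near phi = 0 builds up a repulsive hill there.
rng = np.random.default_rng(1)
for step in range(1000):
    deposit(rng.normal(0.0, 0.05))
print(f"bias at 0: {bias_energy(0.0):.2f}   bias at pi: {bias_energy(np.pi):.2f}")
```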
Local elevation It has been shown by Engkvist and Karlström that the bias potential built by the local elevation method will approximate the negative of the free energy surface. The free energy surface can therefore be approximated directly from the bias potential (as done in the metadynamics method) or the bias potential can be used for umbrella sampling (as done in metadynamics with umbrella sampling corrections and local elevation umbrella sampling methods) to obtain more accurate free energies.
https://en.wikipedia.org/wiki?curid=30846294
Semi-drying oil A semi-drying oil is an oil which partially hardens when it is exposed to air. This is as opposed to a drying oil, which hardens completely, or a non-drying oil, which does not harden at all. Oils with an iodine number of 115-130 are considered semi-drying.
https://en.wikipedia.org/wiki?curid=30850153
Calcareous sinter is a freshwater calcium carbonate deposit, also known as calc-sinter. Deposits are characterised by low porosity and well-developed lamination. Calcareous sinter should not be confused with siliceous sinter, to which the term sinter more frequently refers. It has been suggested that the term "sinter" should be restricted to siliceous spring deposits and be dropped for calcareous deposits entirely. Calcareous sinter is characterised by laminations of prismatic crystals growing perpendicular to the substrate; laminations are separated by thin layers of microcrystalline carbonate. Macrophytes are absent, and consequently porosity is very low. Exclusion of species is due either to high temperature (travertine), high pH/ionic strength (tufa) or absence of light (speleothems). Pedley (1990) suggests the term be abandoned in favour of tufa for ambient-temperature deposits (and presumably travertine for geothermally heated deposits). This avoids any potential confusion with siliceous sinter and prevents deposits formed in different environmental conditions (hot spring deposits, cold spring deposits and speleothems are all lumped together under the term sinter) from being amalgamated into one group. Deposits are formed from either calcite or aragonite. Precipitation is brought about by degassing of CO2 (Ca2+ + 2 HCO3- → CaCO3 + CO2 + H2O), which decreases the solubility of calcite/aragonite (see tufa geochemistry). The build-up of calc-sinter material in the Eifel Aqueduct was commercially exploited in the 11th and 12th centuries.
https://en.wikipedia.org/wiki?curid=30850405
Calcareous sinter The material, which had accumulated in thick layered deposits, was cut into vertical columns of polished brown rock with impressive layered patterns, which made it much in demand by cathedral builders across large parts of central Europe and beyond. In England it was used to provide polychromy, contrasting with the pale limestone favoured by Norman English cathedrals. The stone was for many years known as 'Onyx Marble', despite being very obviously neither onyx nor marble. Those studying the stonework at Canterbury Cathedral were unaware of its origins in the aqueduct until 2011. Such large-scale use as the cloisters around a cathedral quadrangle needed many hundreds of columns, which must have been supplied by a well-organised extraction and transport operation. The Eifel deposits have also been identified at Rochester and in the now lost Romanesque cloister at Norwich, as well as the Infirmary Cloister, Chapter House windows, Anselm Chapel door and the Treasury gateway at Canterbury.
https://en.wikipedia.org/wiki?curid=30850405
Purpuric acid is a nitrogenous acid related to barbituric acid that yields alloxan and uramil on hydrolysis; it is known especially in purple-red salts (such as murexide), from which it is obtained as an orange-red powder. Purpuric acid was first described in 1818 by the English chemist William Prout (1785-1850). Though colourless itself, purpuric acid has a tendency to form red or purple-coloured salts with alkaline bases. This characteristic led the English doctor William Hyde Wollaston (1766-1828) to suggest the name "purpuric acid". Purpuric acid can be synthesized by nitration of uric acid (previously known as lithic acid). In 1818 Prout obtained lithic acid from the excrement of a boa constrictor (which largely consists of this substance) or else used urinary calculi. He dissolved the lithic acid in dilute nitric acid, and after an effervescence took place a purple liquid was formed. After neutralization of the solution with ammonia, granular crystals began to separate out. Purpuric acid is insoluble in alcohol and ether. The mineral acids dissolve it only when they are concentrated. It does not affect litmus paper. It combines with the alkalis, alkaline earths and metallic oxides. It is capable of expelling carbonic acid from the alkaline carbonates with the assistance of heat, and does not combine with any other acid. Wollaston believed that these characteristics were sufficient to distinguish it from an oxide, and to establish its character as an acid.
https://en.wikipedia.org/wiki?curid=30850738
Shelter Island Conference on Quantum Mechanics in Valence Theory, 1951 The Shelter Island Conference on Quantum Mechanics in Valence Theory was held in 1951. It was also sponsored by the National Academy of Sciences. The Nobel Laureate Robert S. Mulliken organized the meeting. The other participants were Theodore H. Berlin, Bryce L. Crawford, Charles A. Coulson, Henry Eyring, Joseph Hirschfelder, George E. Kimball, Masao Kotani, Sir John Lennard-Jones, Per-Olov Lowdin, D. A. McInnes, Henry Margenau, Joseph E. Mayer, William Moffitt, Robert G. Parr, Linus Pauling, Kenneth S. Pitzer, Clemens C. J. Roothaan, Klaus Ruedenberg, Harrison Shull, John Clarke Slater, Leslie E. Sutton, C. W. Ufford, John H. Van Vleck, George Wheland, and Michael P. Barnett. In 1981, J.C. Light wrote, in an issue of "The Journal of Physical Chemistry", that the issue presented papers from a conference that was "merely the latest in a long sequence of ... conferences ... on theoretical chemistry spanning 3 decades ...", followed by a list that began with the Shelter Island conference. In 1996, Parr wrote: "The fall of 1951 was an exciting time for quantum chemistry ... the Shelter Island Conference on Quantum-Mechanical Methods in Valence Theory ... was singularly important ... ".
https://en.wikipedia.org/wiki?curid=30850925
Tributyltin azide is an organotin compound with the formula (C4H9)3SnN3. It is a colorless solid, although older samples can appear as yellow oils. The compound is used as a reagent in organic synthesis. Tributyltin azide is synthesized by the salt metathesis reaction of tributyltin chloride and sodium azide. It is a reagent used in the synthesis of tetrazoles, which in turn are used to generate angiotensin II receptor antagonists. In some applications, tributyltin azide has been replaced by the less toxic trioctyltin azide and by organoaluminium azides. Lower alkyl tin compounds are often highly toxic and have penetrating odors. Tributyltin azide causes skin rashes, itching and blisters.
https://en.wikipedia.org/wiki?curid=30853464
Retinoblastoma protein The retinoblastoma protein (protein name abbreviated pRb; gene name abbreviated RB or RB1) is a tumor suppressor protein that is dysfunctional in several major cancers. One function of Rb is to prevent excessive cell growth by inhibiting cell cycle progression until a cell is ready to divide. When the cell is ready to divide, Rb is phosphorylated, which inactivates it and allows the cell to progress through the cell cycle. Rb is also a recruiter of several chromatin remodeling enzymes such as methylases and acetylases. Rb belongs to the pocket protein family, whose members have a pocket for the functional binding of other proteins. Should an oncogenic protein, such as those produced by cells infected by high-risk types of human papillomavirus, bind and inactivate pRb, this can lead to cancer. The "RB" gene may have been responsible for the evolution of multicellularity in several lineages of life, including animals. In humans, the protein is encoded by the "RB1" gene located on chromosome 13—more specifically, 13q14.1-q14.2. If both alleles of this gene are mutated early in life, the protein is inactivated, resulting in the development of retinoblastoma cancer, hence the name 'Rb'.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Retinal cells are not sloughed off or replaced, and are subjected to high levels of mutagenic UV radiation; thus most pRb knock-outs occur in retinal tissue (though pRb knock-out has also been documented in certain skin cancers in patients from New Zealand, where the amount of UV radiation is significantly higher). Two forms of retinoblastoma were noticed: a bilateral, familial form and a unilateral, sporadic form. Sufferers of the former were six times more likely to develop other types of cancer later in life. This highlighted the fact that mutated Rb could be inherited and lent support to the two-hit hypothesis. This states that only one working allele of a tumour suppressor gene is necessary for its function (the mutated gene is recessive), and so both need to be mutated before the cancer phenotype will appear. In the familial form, a mutated allele is inherited along with a normal allele. In this case, should a cell sustain only one mutation in the other "RB" gene, all Rb in that cell would be ineffective at inhibiting cell cycle progression, allowing cells to divide uncontrollably and eventually become cancerous. Furthermore, as one allele is already mutated in all other somatic cells, the future incidence of cancers in these individuals is observed with linear kinetics. The working allele need not undergo a mutation per se, as loss of heterozygosity (LOH) is frequently observed in such tumours. However, in the sporadic form, both alleles would need to sustain a mutation before the cell can become cancerous.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein This explains why sufferers of sporadic retinoblastoma are not at increased risk of cancers later in life, as both alleles are functional in all their other cells. Future cancer incidence in sporadic Rb cases is observed with polynomial kinetics, not exactly quadratic as might be expected, because the first mutation must arise through normal mechanisms and can then be duplicated by LOH to result in a tumour progenitor. "RB1" orthologs have also been identified in most mammals for which complete genome data are available. "RB"/"E2F"-family proteins repress transcription. Rb is a multifunctional protein with many binding and phosphorylation sites. Although its common function is seen as binding and repressing "E2F" targets, Rb is likely multifunctional, as it binds to at least 100 other proteins. Rb has three major structural components: a carboxy-terminus, a "pocket" subunit, and an amino-terminus. Within each subunit there are a variety of protein binding sites, as well as a total of 15 possible phosphorylation sites. Generally, phosphorylation causes interdomain locking, which changes Rb's conformation and prevents binding to target proteins. Different sites may be phosphorylated at different times, giving rise to many possible conformations and likely many functions/activity levels. Rb restricts the cell's ability to replicate DNA by preventing its progression from the G1 (first gap phase) to the S (synthesis) phase of the cell division cycle.
https://en.wikipedia.org/wiki?curid=30855468
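The difference between the linear onset kinetics of the familial form and the roughly quadratic kinetics of the sporadic form can be illustrated with a toy Monte Carlo model; the mutation rate, cell count, and time scale below are entirely hypothetical.

```python
import numpy as np

# Toy model of the two-hit hypothesis: familial cases start with one RB1
# allele already mutated in every cell; sporadic cases need two new hits.
rng = np.random.default_rng(0)
n_cells, mu, steps = 100_000, 1e-4, 50    # cells, per-allele hit rate, time steps

def first_tumor_time(pre_mutated):
    """Return the first time step at which any cell carries two hits."""
    hits = np.full(n_cells, 1 if pre_mutated else 0)
    for t in range(1, steps + 1):
        hits += rng.random(n_cells) < mu
        if np.any(hits >= 2):
            return t
    return None                            # no tumor within the simulated window

familial = [first_tumor_time(True) for _ in range(20)]
sporadic = [first_tumor_time(False) for _ in range(20)]
print("median onset, familial:", np.median([t for t in familial if t]))
print("median onset, sporadic:", np.median([t for t in sporadic if t]))
```

In such a model the expected number of doubly hit cells grows linearly with time in the familial case but roughly as the square of time in the sporadic case, matching the kinetics described above.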
Retinoblastoma protein Rb binds and inhibits E2 promoter-binding–protein-dimerization partner (E2F-DP) dimers, which are transcription factors of the "E2F" family that push the cell into S phase. By keeping E2F-DP inactivated, "RB1" maintains the cell in the G1 phase, preventing progression through the cell cycle and acting as a growth suppressor. The Rb-E2F/DP complex also attracts a histone deacetylase (HDAC) protein to the chromatin, reducing transcription of S phase promoting factors and further suppressing DNA synthesis. Rb has the ability to reversibly inhibit DNA replication through transcriptional repression of DNA replication factors. Rb is able to bind to transcription factors in the E2F family and thereby inhibit their function. When Rb is chronically activated, it leads to the downregulation of the necessary DNA replication factors. Within 72–96 hours of active Rb induction in A2-4 cells, the target DNA replication factor proteins—MCMs, RPA34, DBF4, RFCp37, and RFCp140—all showed decreased levels. Along with the decreased levels, there was a simultaneous and expected inhibition of DNA replication in these cells. This process, however, is reversible. Following induced knockout of Rb, cells treated with cisplatin, a DNA-damaging agent, were able to continue proliferating without cell cycle arrest, suggesting that Rb plays an important role in triggering chronic S-phase arrest in response to genotoxic stress. Examples of E2F-regulated genes repressed by Rb are cyclin E and cyclin A.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Both of these cyclins are able to bind to Cdk2 and facilitate entry into the S phase of the cell cycle. Through the repression of expression of cyclin E and cyclin A, Rb is able to inhibit the G1/S transition. There are at least three distinct mechanisms by which pRb can repress transcription of E2F-regulated promoters. Though these mechanisms are known, it is unclear which are the most important for the control of the cell cycle. E2Fs are a family of proteins whose binding sites are often found in the promoter regions of genes for cell proliferation or progression of the cell cycle. E2F1 to E2F5 are known to associate with proteins in the pRb family of proteins, while E2F6 and E2F7 are independent of pRb. Broadly, the E2Fs are split into activator E2Fs and repressor E2Fs, though their roles are on occasion more flexible than this. The activator E2Fs are E2F1, E2F2 and E2F3, while the repressor E2Fs are E2F4, E2F5 and E2F6. The activator E2Fs, along with E2F4, bind exclusively to pRb. pRb is able to bind to the activation domain of the activator E2Fs, which blocks their activity, repressing transcription of the genes controlled by that E2F promoter. The preinitiation complex (PIC) assembles in a stepwise fashion on the promoters of genes to initiate transcription. TFIID binds to the TATA box to begin the assembly of the complex, recruiting TFIIA and the other transcription factors and components needed in the PIC.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Data suggest that pRb is able to repress transcription both by being recruited to the promoter and by having a target present in TFIID. The presence of pRb may change the conformation of the TFIIA/IID complex into a less active version with a decreased binding affinity. pRb can also directly interfere with their association as proteins, preventing TFIIA/IID from forming an active complex. pRb acts as a recruiter that allows the binding of proteins that alter chromatin structure at the sites of E2F-regulated promoters. Access to these E2F-regulated promoters by transcription factors is blocked by the formation of nucleosomes and their further packing into chromatin. Nucleosome formation is regulated by post-translational modifications to histone tails. Acetylation leads to the disruption of nucleosome structure. Proteins called histone acetyltransferases (HATs) are responsible for acetylating histones and thus facilitating the association of transcription factors with DNA promoters. Deacetylation, on the other hand, leads to nucleosome formation and thus makes it more difficult for transcription factors to sit on promoters. Histone deacetylases (HDACs) are the proteins responsible for facilitating nucleosome formation and are therefore associated with transcriptional repressor proteins. Rb interacts with the histone deacetylases HDAC1 and HDAC3. Rb binds to HDAC1 in its pocket domain in a region that is independent of its E2F-binding site.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Rb recruitment of histone deacetylases leads to the repression of genes at E2F-regulated promoters due to nucleosome formation. Some genes activated during the G1/S transition, such as cyclin E, are repressed by HDAC during early to mid-G1 phase. This suggests that HDAC-assisted repression of cell cycle progression genes is crucial for the ability of Rb to arrest cells in G1. To further add to this point, the HDAC-Rb complex has been shown to be disrupted by cyclin D/Cdk4, whose levels increase and peak during the late G1 phase. Senescence in cells is a state in which cells are metabolically active but no longer able to replicate. Rb is an important regulator of senescence in cells, and since this prevents proliferation, senescence is an important antitumor mechanism. Rb may occupy E2F-regulated promoters during senescence. For example, Rb was detected on the cyclin A and PCNA promoters in senescent cells. Cells respond to stress in the form of DNA damage, activated oncogenes, or sub-par growing conditions, and can enter a senescence-like state called "premature senescence". This allows the cell to prevent further replication during periods of damaged DNA or generally unfavorable conditions. DNA damage in a cell can induce Rb activation. Rb's role in repressing the transcription of cell cycle progression genes leads to the S phase arrest that prevents replication of damaged DNA.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein When it is time for a cell to enter S phase, complexes of cyclin-dependent kinases (CDKs) and cyclins phosphorylate Rb, allowing E2F-DP to dissociate from Rb and become active. When E2F is free it activates factors like cyclins (e.g. cyclin E and cyclin A), which push the cell through the cell cycle by activating cyclin-dependent kinases, and a molecule called proliferating cell nuclear antigen, or PCNA, which speeds DNA replication and repair by helping to attach polymerase to DNA. Rb has been known since the 1990s to be inactivated via phosphorylation. Until recently, the prevailing model was that Cyclin D-Cdk4/6 progressively phosphorylated it from its unphosphorylated to its hyperphosphorylated state (14+ phosphorylations). However, it was recently shown that Rb exists in only three states: un-phosphorylated, mono-phosphorylated, and hyper-phosphorylated. Each has a unique cellular function. Before the development of 2D IEF, only hyper-phosphorylated Rb was distinguishable from all other forms, i.e. un-phosphorylated Rb resembled mono-phosphorylated Rb on immunoblots, and Rb was regarded as being either in its active "hypo-phosphorylated" state or its inactive "hyperphosphorylated" state. However, with 2D IEF, it is now known that Rb is un-phosphorylated in G0 cells and mono-phosphorylated in early G1 cells, prior to hyper-phosphorylation after the restriction point in late G1. When a cell enters G1, Cyclin D-Cdk4/6 phosphorylates Rb at a single phosphorylation site.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein No progressive phosphorylation occurs: when HFF cells were exposed to sustained Cyclin D-Cdk4/6 activity (and even deregulated activity) in early G1, only mono-phosphorylated Rb was detected. Furthermore, triple knockout, p16 addition, and Cdk4/6 inhibitor addition experiments confirmed that Cyclin D-Cdk4/6 is the sole phosphorylator of Rb. Throughout early G1, mono-phosphorylated Rb exists as 14 different isoforms (the 15th phosphorylation site is not conserved in the primates in which the experiments were performed). Together, these isoforms represent the "hypo-phosphorylated" active Rb state that was previously thought to exist. Each isoform has a distinct preference for associating with different exogenously expressed E2Fs. A recent report showed that mono-phosphorylation controls Rb's association with other proteins and generates functionally distinct forms of Rb. All the different mono-phosphorylated Rb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Importantly, different mono-phosphorylated forms of Rb have distinct transcriptional outputs that extend beyond E2F regulation. After a cell passes the restriction point, Cyclin E-Cdk2 hyper-phosphorylates all mono-phosphorylated isoforms. While the exact mechanism is unknown, one hypothesis is that binding to the C-terminal tail opens the pocket subunit, allowing access to all phosphorylation sites. This process is hysteretic and irreversible, and it is thought that accumulation of mono-phosphorylated Rb induces the process.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein The bistable, switch-like behavior of Rb can thus be modeled as a bifurcation point: the presence of un-phosphorylated Rb drives cell cycle exit and maintains senescence. At the end of mitosis, PP1 dephosphorylates hyper-phosphorylated Rb directly to its un-phosphorylated state. Furthermore, when cycling C2C12 myoblast cells differentiated (by being placed into a differentiation medium), only un-phosphorylated Rb was present. Additionally, these cells had a markedly decreased growth rate and concentration of DNA replication factors (suggesting G0 arrest). This function of un-phosphorylated Rb gives rise to a hypothesis for the lack of cell cycle control in cancerous cells: deregulation of Cyclin D-Cdk4/6 phosphorylates un-phosphorylated Rb in senescent cells to mono-phosphorylated Rb, causing them to enter G1. The mechanism of the switch for Cyclin E activation is not known, but one hypothesis is that it is a metabolic sensor. Mono-phosphorylated Rb induces an increase in metabolism, so the accumulation of mono-phosphorylated Rb in previously G0 cells then causes hyper-phosphorylation and mitotic entry. Since any un-phosphorylated Rb is immediately phosphorylated, the cell is then unable to exit the cell cycle, resulting in continuous division. DNA damage to G0 cells activates Cyclin D-Cdk4/6, resulting in mono-phosphorylation of un-phosphorylated Rb. Then, active mono-phosphorylated Rb causes repression of E2F-targeted genes specifically; a toy model of such a bistable switch is sketched below.
https://en.wikipedia.org/wiki?curid=30855468
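A generic toy model of such a bistable switch, sketched below in Python, shows the hysteresis that makes the transition effectively irreversible; the rate law is purely illustrative and is not a fitted model of Rb phosphorylation kinetics.

```python
import numpy as np

# Toy bistable switch: x' = s + beta * x^n / (1 + x^n) - x.
# x stands in for the hyper-phosphorylated fraction, s for the upstream
# stimulus; all functional forms and numbers are hypothetical.

def steady_state(s, x0, beta=4.0, n=4, dt=0.01, steps=5_000):
    x = x0
    for _ in range(steps):
        x += dt * (s + beta * x**n / (1.0 + x**n) - x)
    return x

s_values = np.linspace(0.0, 1.0, 11)
x, up, down = 0.0, [], []
for s in s_values:                   # sweep the stimulus upward from the low state
    x = steady_state(s, x)
    up.append(x)
for s in s_values[::-1]:             # then sweep it back down from the high state
    x = steady_state(s, x)
    down.append(x)
print("up   :", np.round(up, 2))
print("down :", np.round(down[::-1], 2))   # stays on the high branch: hysteresis
```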
Retinoblastoma protein Therefore, mono-phosphorylated Rb is thought to play an active role in the DNA damage response, so that E2F gene repression occurs until the damage is fixed and the cell can pass the restriction point. As a side note, the discovery that damage causes Cyclin D-Cdk4/6 activation even in G0 cells should be kept in mind when patients are treated with both DNA-damaging chemotherapy and Cyclin D-Cdk4/6 inhibitors. During the M-to-G1 transition, pRb is then progressively dephosphorylated by PP1, returning to its growth-suppressive hypophosphorylated state. Rb family proteins are components of the DREAM complex, composed of DP, E2F4/5, RB-like (p130/p107) and MuvB (Lin9:Lin37:Lin52:RbAbP4:Lin54). The DREAM complex is assembled in G0/G1 and maintains quiescence by assembling at the promoters of more than 800 cell-cycle genes and mediating transcriptional repression. Assembly of DREAM requires DYRK1A (a Ser/Thr kinase) dependent phosphorylation of the MuvB core component Lin52 at serine 28. This mechanism is crucial for recruitment of p130/p107 to the MuvB core and thus DREAM assembly. The consequences of loss of Rb function depend on cell type and cell cycle status, as Rb's tumor suppressive role changes depending on the state and current identity of the cell. In G0 quiescent stem cells, Rb is proposed to maintain G0 arrest, although the mechanism remains largely unknown. Loss of Rb leads to exit from quiescence and an increase in the number of cells without loss of cell renewal capacity.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein In cycling progenitor cells, Rb plays a role at the G1, S, and G2 checkpoints and promotes differentiation. In differentiated cells, which make up the majority of cells in the body and are assumed to be in irreversible G0, Rb maintains both arrest and differentiation. Loss of Rb therefore elicits multiple different responses in different cells, all of which could ultimately result in cancer phenotypes. For cancer initiation, loss of Rb may induce cell cycle re-entry in both quiescent and post-mitotic differentiated cells through dedifferentiation. In cancer progression, loss of Rb decreases the differentiating potential of cycling cells, increases chromosomal instability, prevents induction of cellular senescence, promotes angiogenesis, and increases metastatic potential. In vivo, it is still not entirely clear how, and in which cell types, cancer initiation occurs through loss of Rb alone, but it is clear that the Rb pathway is altered in a large number of human cancers. In mice, loss of Rb is sufficient to initiate tumors of the pituitary and thyroid glands, and the mechanisms of initiation for these hyperplasias are currently being investigated. The classic view of Rb's role as a tumor suppressor and cell cycle regulator developed through research investigating mechanisms of interactions with E2F family member proteins. Yet more data generated from biochemical experiments and clinical trials reveal other functions of Rb within the cell unrelated (or indirectly related) to tumor suppression.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein In proliferating cells, certain Rb conformations (when the RxL motif is bound by protein phosphatase 1, or when Rb is acetylated or methylated) are resistant to CDK phosphorylation and retain other functions throughout cell cycle progression, suggesting that not all Rb in the cell is devoted to guarding the G1/S transition. Studies have also demonstrated that hyperphosphorylated Rb can specifically bind E2F1 and form stable complexes throughout the cell cycle to carry out unique, unexplored functions, a surprising contrast to the classical view of Rb releasing E2F factors upon phosphorylation. In summary, many new findings about Rb's resistance to CDK phosphorylation are emerging in Rb research and shedding light on novel roles of Rb beyond cell cycle regulation. Rb is able to localize to sites of DNA breaks during the repair process and assist in non-homologous end joining and homologous recombination through complexing with E2F1. Once at the breaks, Rb is able to recruit regulators of chromatin structure such as the DNA helicase transcription activator BRG1. Rb has also been shown to recruit protein complexes such as condensin and cohesin to assist in the structural maintenance of chromatin. Such findings suggest that, in addition to its tumor suppressive role with E2F, Rb is also distributed throughout the genome to aid in important processes of genome maintenance such as DNA break repair, DNA replication, chromosome condensation, and heterochromatin formation.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Rb has also been implicated in regulating metabolism through interactions with components of cellular metabolic pathways. RB1 mutations can cause alterations in metabolism, including reduced mitochondrial respiration, reduced activity in the electron transport chain, and changes in the flux of glucose and/or glutamine. Particular forms of Rb have been found to localize to the outer mitochondrial membrane and directly interact with Bax to promote apoptosis. While the frequency of alterations of the RB gene is substantial for many human cancer types, including lung, esophageal, and liver cancers, alterations in upstream regulatory components of Rb such as CDK4 and CDK6 have been the main targets for potential therapeutics to treat cancers with dysregulation in the RB pathway. This focus has resulted in the recent development and FDA clinical approval of three small-molecule CDK4/6 inhibitors (Palbociclib (IBRANCE, Pfizer Inc., 2015), Ribociclib (KISQALI, Novartis, 2017), and Abemaciclib (VERZENIO, Eli Lilly, 2017)) for the treatment of specific breast cancer subtypes. However, recent clinical studies finding limited efficacy, high toxicity, and acquired resistance of these inhibitors suggest the need to further elucidate mechanisms that influence CDK4/6 activity, as well as to explore other potential targets downstream in the Rb pathway to reactivate Rb's tumor suppressive functions.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Treatment of cancers by CDK4/6 inhibitors depends on the presence of Rb within the cell for therapeutic effect, limiting their usage to cancers where RB is not mutated and Rb protein levels are not significantly depleted. Direct Rb reactivation in humans has not been achieved. However, in murine models, novel genetic methods have allowed for in vivo Rb reactivation experiments. Rb loss induced in mice with oncogenic KRAS-driven tumors of lung adenocarcinoma negates the requirement of MAPK signal amplification for progression to carcinoma, promotes loss of lineage commitment, and accelerates the acquisition of metastatic competency. Reactivation of Rb in these mice drives the tumors towards a less metastatic state, but does not completely stop tumor growth, owing to a proposed rewiring of MAPK pathway signaling, which suppresses Rb through a CDK-dependent mechanism. Besides trying to reactivate the tumor suppressive function of Rb, another distinct approach to treating cancers with a dysregulated Rb pathway is to take advantage of certain cellular consequences induced by Rb loss. It has been shown that E2F stimulates expression of pro-apoptotic genes in addition to G1/S transition genes; however, cancer cells have developed defensive signaling pathways that protect them from death by deregulated E2F activity. Development of inhibitors of these protective pathways could thus be a synthetically lethal method of killing cancer cells with overactive E2F.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein In addition, it has been shown that the pro-apoptotic activity of p53 is restrained by the Rb pathway, such that Rb-deficient tumor cells become sensitive to p53-mediated cell death. This opens the door to research into compounds that could activate p53 activity in these cancer cells to induce apoptosis and reduce cell proliferation. While the loss of a tumor suppressor such as Rb leading to uncontrolled cell proliferation is detrimental in the context of cancer, it may be beneficial to deplete or inhibit the suppressive functions of Rb in the context of cellular regeneration. Harnessing the proliferative abilities of cells induced into a controlled "cancer-like" state could aid in repairing damaged tissues and delaying aging phenotypes. This idea remains to be thoroughly explored as a potential treatment for cellular injury and aging. The retinoblastoma protein is involved in the growth and development of mammalian hair cells of the cochlea, and appears to be related to these cells' inability to regenerate. Embryonic hair cells require Rb, among other important proteins, to exit the cell cycle and stop dividing, which allows maturation of the auditory system. Once wild-type mammals have reached adulthood, their cochlear hair cells become incapable of proliferation. In studies where the gene for Rb is deleted in the mouse cochlea, hair cells continue to proliferate in early adulthood.
https://en.wikipedia.org/wiki?curid=30855468
Retinoblastoma protein Though this may seem to be a positive development, Rb-knockdown mice tend to develop severe hearing loss due to degeneration of the organ of Corti. For this reason, Rb seems to be instrumental for completing the development of mammalian hair cells and keeping them alive. However, it is clear that without Rb, hair cells have the ability to proliferate, which is why Rb is known as a tumor suppressor. Temporarily and precisely turning off Rb in adult mammals with damaged hair cells may lead to propagation and therefore successful regeneration. Suppressing the function of the retinoblastoma protein in the adult rat cochlea has been found to cause proliferation of supporting cells and hair cells. Rb can be downregulated by activating the sonic hedgehog pathway, which phosphorylates the protein and reduces its gene transcription. Disrupting Rb expression in vitro, either by gene deletion or by knockdown with Rb short interfering RNA, causes dendrites to branch out farther. In addition, Schwann cells, which provide essential support for the survival of neurons, travel with the neurites, extending farther than normal. The inhibition of Rb thus supports the continued growth of nerve cells. Rb is known to interact with more than 300 proteins. Several methods for detecting RB1 gene mutations have been developed, including a method that can detect the large deletions that correlate with advanced-stage retinoblastoma.
https://en.wikipedia.org/wiki?curid=30855468
United States Army CBRN School The United States Army CBRN School, located at Fort Leonard Wood, Missouri, is the primary American training school specializing in military Chemical, Biological, Radiological, and Nuclear (CBRN) defense. Until 2008, it was known as the United States Army Chemical School. In accordance with U.S. federal law, Fort Leonard Wood, Missouri, is designated the central location for all of the Department of Defense's CBRN operations training and is home to the Chemical Corps Regiment. The school was moved from Fort McClellan, Alabama, after that base was closed by the Defense Base Closure and Realignment Commission (BRAC) in 1999. The Army CBRN School provides numerous courses for officers, non-commissioned officers and initial entry soldiers. Numerous international partners also send students to train at the CBRN School. Additionally, the US Air Force, US Navy, US Coast Guard and US Marine Corps all maintain training elements at Fort Leonard Wood which, in partnership with the Army CBRN School, train their personnel in CBRN operations. Fort Leonard Wood and the Army CBRN School also have facilities in which to conduct training, such as the Chemical Defense Training Facility (or CDTF), where military students from across the globe train and become familiar with actual nerve agents in realistic scenarios, and also conduct training with radiological isotopes and inert biological agents. The Edwin R
https://en.wikipedia.org/wiki?curid=30855956
United States Army CBRN School Bradley Radiological Teaching Laboratories is one of the very few radiological teaching laboratories in the Department of Defense licensed by the NRC. It provides a variety of training in radiological and nuclear defense under the supervision of credentialed scientists. The newest facility at the CBRN School is the Lieutenant Joseph Terry CBRN Training Facility. Opened in November 2007, the 1LT Joseph Terry Chemical, Biological, Radiological, Nuclear (CBRN) Responder Training Facility provides a state-of-the-art CBRN responder training campus for inter-service and other agencies as requested. The US Army CBRN School is the lead for all DOD CBRN response training. This facility provides unmatched training opportunities in the fields of CBRN consequence management, hazardous materials incident response, realistic training venues and other CBRN response arenas as required. The CBRN School also provides training in sensitive site assessment and exploitation. In addition to training, the CBRN School also develops doctrine for operations, researches and develops materiel requirements, and conducts joint service experimentation as the Joint Combat Developer for the Department of Defense's Chemical and Biological Defense Program. On 11 January 2008, the U.S. Army Chemical School was renamed the U.S. Army Chemical, Biological, Radiological and Nuclear School (USACBRNS).
https://en.wikipedia.org/wiki?curid=30855956
United States Army CBRN School The name change was to encompass, in the title of the school, the wide range of training and expertise maintained by the U.S. Army Chemical Corps. As of June 15, 2018, the Commandant of the U.S. Army CBRN School is Brigadier General Andy Munera. The Assistant Commandant is Colonel Thomas A. Duncan II. The Regimental Command Sergeant Major is RCSM Henney M. Hodgkins. The Regimental Chief Warrant Officer is CCWO Robert A. Lockwood.
https://en.wikipedia.org/wiki?curid=30855956
C22H42O4 The molecular formula C22H42O4 may refer to:
https://en.wikipedia.org/wiki?curid=30856833
Chemical defense is a life history strategy employed by many organisms to avoid consumption by producing toxic or repellent metabolites. The production of defensive chemicals occurs in plants, fungi, and bacteria, as well as invertebrate and vertebrate animals. The class of chemicals produced by organisms that are considered defensive may be considered, in a strict sense, to apply only to those aiding an organism in escaping herbivory or predation. However, the distinction between types of chemical interaction is subjective, and defensive chemicals may also be considered to protect against reduced fitness caused by pests, parasites, and competitors. Many chemicals used for defensive purposes are secondary metabolites derived from primary metabolites which serve a physiological purpose in the organism. Secondary metabolites produced by plants are consumed and sequestered by a variety of arthropods and, in turn, toxins found in some amphibians, snakes, and even birds can be traced back to arthropod prey. There are a variety of special cases for considering mammalian antipredatory adaptations as chemical defenses as well. Bacteria of the genera "Chromobacterium", "Janthinobacterium", and "Pseudoalteromonas" produce a toxic secondary metabolite, violacein, to deter protozoan predation. Violacein is released when the bacteria are consumed, killing the protozoan. Another bacterium, "Pseudomonas aeruginosa", aggregates into quorum-sensing biofilms which may aid the coordinated release of toxins to protect against predation by protozoans.
https://en.wikipedia.org/wiki?curid=30856926
Chemical defense Flagellates were allowed to grow and were present in a biofilm of "P. aeruginosa" grown for three days, but no flagellates were detected after seven days. This suggests that concentrated and coordinated release of extracellular toxins by biofilms has a greater effect than unicellular excretions. Bacterial growth is inhibited not only by bacterial toxins, but also by secondary metabolites produced by fungi. The best-known of these was first described by Alexander Fleming in 1929, who reported the antibacterial properties of a "mould juice" isolated from "Penicillium notatum". He named the substance penicillin, and it became the world's first broad-spectrum antibiotic. Many fungi are pathogenic or saprophytic, or live within plants as endophytes without harming them, and many of these have been documented to produce chemicals with antagonistic effects against a variety of organisms, including fungi, bacteria, and protozoa. Studies of coprophilous fungi have found antifungal agents which reduce the fitness of competing fungi. In addition, sclerotia of "Aspergillus flavus" contained a number of previously unknown aflavinines which were much more effective at reducing predation by the fungivorous beetle "Carpophilus hemipterus" than the aflatoxins which "A. flavus" also produced, and it has been hypothesized that ergot alkaloids, mycotoxins produced by "Claviceps purpurea", may have evolved to discourage herbivory of the host plant.
https://en.wikipedia.org/wiki?curid=30856926
Chemical defense A wealth of literature exists on the defensive chemistry of secondary metabolites produced by terrestrial plants and their antagonistic effects on pests and pathogens, likely owing to the fact that human society depends upon large-scale agricultural production to sustain global commerce. Since the 1950s, over 200,000 secondary metabolites have been documented in plants. These compounds serve a variety of physiological and allelochemical purposes, and provide a sufficient stock for the evolution of defensive chemicals. Examples of common secondary metabolites used as chemical defenses by plants include alkaloids, phenols, and terpenes. Defensive chemicals used to avoid consumption may be broadly characterized as either toxins or substances reducing the digestive capacity of herbivores. Although toxins are defined in a broad sense as any substance produced by an organism that reduces the fitness of another, in a more specific sense toxins are substances which directly affect and diminish the functioning of certain metabolic pathways. Toxins are minor constituents (<2% dry weight), active in small concentrations, and more prevalent in flowers and young leaves. Indigestible compounds, on the other hand, make up to 60% of the dry weight of tissue and are predominantly found in mature, woody species. Many alkaloids, pyrethrins, and phenols are toxins. Tannins are major inhibitors of digestion and are polyphenolic compounds with large molecular weights.
https://en.wikipedia.org/wiki?curid=30856926
Chemical defense Lignin and cellulose are important structural elements in plants and are also usually highly indigestible. Tannins are also toxic against pathogenic fungi at natural concentrations in a variety of woody tissues. Not only useful as deterrents to pathogens or consumers, some of the chemicals produced by plants are effective in inhibiting competitors as well. Two separate shrub communities in the California chaparral were found to produce phenolic compounds and volatile terpenes which accumulated in the soil and prevented various herbs from growing near the shrubs. Other plants were observed to grow only when fire removed the shrubs, but the herbs subsequently died off after the shrubs returned. Although the focus has been on broad-scale patterns in terrestrial plants, Paul and Fenical in 1986 demonstrated a variety of secondary metabolites in marine algae which prevented feeding or induced mortality in bacteria, fungi, echinoderms, fishes, and gastropods. In nature, pests are a severe problem for plant communities as well, leading to the co-evolution of plant chemical defenses and herbivore metabolic strategies to detoxify their plant food. A variety of invertebrates consume plants, but insects have received the majority of the attention. Insects are pervasive agricultural pests and sometimes occur in such high densities that they can strip fields of crops. Many insects are distasteful to predators and excrete irritants or secrete poisonous compounds that cause illness or death when ingested.
https://en.wikipedia.org/wiki?curid=30856926
Chemical defense Secondary metabolites obtained from plant food may also be sequestered by insects and used in the production of their own toxins. One of the better-known examples of this is the monarch butterfly, which sequesters poison obtained from the milkweed plant. Among the most successful insect orders employing this strategy are the beetles (Coleoptera), grasshoppers (Orthoptera), and moths and butterflies (Lepidoptera). Insects also biosynthesize unique toxins, and while sequestration of toxins from food sources is claimed to be the energetically favorable strategy, this has been contested. Passion-vine-associated butterflies in the tribe Heliconiini (sub-family Heliconiinae) either sequester or synthesize "de novo" defensive chemicals, and moths in the genus "Zygaena" (family Zygaenidae) have convergently evolved the ability to either synthesize or sequester their defensive chemicals. Some coleopterans sequester secondary metabolites to be used as defensive chemicals, but most biosynthesize their own "de novo". Anatomical structures have developed to store these substances, and some are circulated in the hemolymph and released in association with a behavior called reflex bleeding. Vertebrates can also biosynthesize defensive chemicals or sequester them from plants or prey. Sequestered compounds have been observed in frogs, natricine snakes, and two genera of birds, "Pitohui" and "Ifrita".
https://en.wikipedia.org/wiki?curid=30856926
Chemical defense It is suspected that some well-known compounds, such as the batrachotoxins of poison frogs in the family Dendrobatidae and the tetrodotoxin produced by newts and pufferfish, are derived from invertebrate prey. Bufadienolides, defensive chemicals produced by toads, have been found in the defensive glands of natricine snakes. Some mammals can emit foul-smelling liquids from anal glands, such as the pangolin and some members of the families Mephitidae and Mustelidae, including skunks, weasels, and polecats. Monotremes have venomous spurs used to avoid predation, and slow lorises (Primates: Nycticebus) produce a venom which appears to be effective at deterring both predators and parasites. It has also been demonstrated that physical contact with a slow loris (without being bitten) can cause a reaction in humans, the venom acting as a contact poison.
https://en.wikipedia.org/wiki?curid=30856926
Field-emission microscopy (FEM) is an analytical technique used in materials science to investigate molecular surface structures and their electronic properties. Invented by Erwin Wilhelm Müller in 1936, the FEM was one of the first surface-analysis instruments to approach near-atomic resolution. Microscopy techniques are used to produce real-space magnified images of a surface showing what it looks like. In general, microscopy information concerns surface crystallography (i.e. how the atoms are arranged at the surface), surface morphology (i.e. the shape and size of the topographic features making up the surface), and surface composition (the elements and compounds the surface is composed of). In FEM, the phenomenon of field electron emission is used to obtain an image on the detector on the basis of the difference in work function of the various crystallographic planes on the surface. A field-emission microscope consists of a metallic sample in the form of a sharp tip and a conducting fluorescent screen enclosed in ultrahigh vacuum. The tip radius used is typically of the order of 100 nm, and the tip is composed of a metal with a high melting point, such as tungsten. The sample is held at a large negative potential (1-10 kV) relative to the fluorescent screen. This gives an electric field near the tip apex of the order of 10^10 V/m, which is high enough for field emission of electrons to take place.
https://en.wikipedia.org/wiki?curid=30857461
Field-emission microscopy The field-emitted electrons travel along the field lines and produce bright and dark patches on the fluorescent screen, giving a one-to-one correspondence with the crystal planes of the hemispherical emitter. The emission current varies strongly with the local work function in accordance with the Fowler-Nordheim equation; hence, the FEM image displays the projected work function map of the emitter surface. The closely packed faces have higher work functions than atomically rough regions, and thus they show up in the image as dark spots on the brighter background. In short, the work-function anisotropy of the crystal planes is mapped onto the screen as intensity variations. The magnification is given by the ratio formula_1, where formula_2 is the tip apex radius and formula_3 is the tip-screen distance. Linear magnifications of about 10^5 to 10^6 are attained. The spatial resolution of this technique is of the order of 2 nm and is limited by the momentum of the emitted electrons parallel to the tip surface, which is of the order of the Fermi velocity of the electrons in the metal. It is possible to set up an FEM with a probe hole in the phosphor screen and a Faraday cup collector behind it to collect the current emitted from a single plane. This technique allows the measurement of the variation of work function with orientation for a wide variety of orientations on a single sample.
https://en.wikipedia.org/wiki?curid=30857461
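Both the work-function sensitivity of the emission current and the point-projection magnification can be illustrated numerically. The sketch below uses the elementary Fowler–Nordheim expression J = (A F²/φ) exp(−B φ^(3/2)/F) with its standard constants; the specific work-function values and the 10 cm screen distance are illustrative assumptions, not data from the article:

```python
import math

# Elementary Fowler-Nordheim current density (image-charge corrections
# neglected); A and B are the standard FN constants.
A = 1.54e-6   # A * eV / V^2
B = 6.83e9    # eV^(-3/2) * V / m

def fn_current_density(field_v_per_m: float, work_function_ev: float) -> float:
    """Current density (A/m^2) from the elementary FN equation."""
    F, phi = field_v_per_m, work_function_ev
    return (A * F**2 / phi) * math.exp(-B * phi**1.5 / F)

# Lower work function -> exponentially brighter image region:
for phi in (4.5, 5.5):   # illustrative values for rough vs. close-packed faces
    print(phi, f"{fn_current_density(1e10, phi):.3e}")

# Geometric magnification of the point projector: M = L / r
print("magnification:", 0.1 / 100e-9)   # 10 cm screen, 100 nm tip -> ~1e6
```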
Field-emission microscopy The FEM has also been used to study adsorption and surface-diffusion processes, making use of the work-function change associated with adsorption. Field emission requires a very good vacuum, and often, even in ultra-high vacuum (UHV), emission does not originate from the clean surface. A typical field emitter must be "flashed" to clean it, usually by passing a current through a loop on which it is mounted. After flashing, the emission current is high but unstable. The current decays with time and in the process becomes more stable, owing to contamination of the tip either from the vacuum or, more often, from diffusion of adsorbed surface species to the tip. The real state of an FEM tip during use is therefore somewhat unknown. Application of FEM is limited to materials that can be fabricated in the shape of a sharp tip, can be used in a UHV environment, and can tolerate high electrostatic fields. For these reasons, refractory metals with high melting temperatures (e.g. W, Mo, Pt, Ir) are the conventional objects for FEM experiments.
https://en.wikipedia.org/wiki?curid=30857461
École nationale supérieure de chimie de Montpellier The École Nationale Supérieure de Chimie de Montpellier, or ENSCM, is one of the French "Grandes Écoles", situated in Montpellier. Although it shares academic staff and research activities with the university as well as with research bodies such as the CNRS, the ENSCM has a particular status as an independent body with its own research laboratories. The teaching of chemistry in Montpellier began in 1676 with the creation of the first chair of chemistry at the University of Medicine. Later, the School of Pharmacy was established in 1803 with a chair of chemistry, followed by the Faculty of Science in 1809. The original Institute of Chemistry was founded in 1889 to bring together the professors who were teaching the same subjects in different faculties. In 1934, it left the old historic centre to settle in larger and more functional new buildings. It also acquired new facilities on a second site 3 kilometres from the main one, called "La Galéra", which is equipped with a kilo-lab, and on a third site close to Université Montpellier II: the "Institut Européen des Membranes". Since 2017, the ENSCM has occupied new premises within the Pôle Balard Formation on Avenue du Professeur Émile Jeanbrau, just beside the "Institut Européen des Membranes"
https://en.wikipedia.org/wiki?curid=30857485
École nationale supérieure de chimie de Montpellier The ENSCM provides high-level training for engineers and researchers in chemistry and is renowned for its research activities. Research is carried out, in partnership with the CNRS and the Universités Montpellier I and II, in "Unités Mixtes de Recherche" (Mixed Research Units). Some ENSCM professors and researchers are members of a team carrying out research in the field of biology and health: UMR 5160, the Health Pharmacology and Biotechnologies Centre (CNRS, Université Montpellier II, Université Montpellier I). The ENSCM is also a member of the "Fédération Gay-Lussac", a network of "Grandes Écoles" gathering 17 schools of chemistry and chemical engineering. Within this network, the ENSCM ranks very high, both by the level of the students it recruits and by the importance of the research activities carried out in its internationally renowned laboratories.
https://en.wikipedia.org/wiki?curid=30857485
Rose's metal Rose's metal, Rose metal or Rose's alloy is a fusible alloy with a low melting point. It consists of 50% bismuth, 25–28% lead and 22–25% tin, and it melts between 94 and 98 °C. The alloy does not appreciably contract or expand on solidification, a characteristic that is a function of its bismuth content. The alloy has several common uses. It was discovered by the German chemist Valentin Rose the Elder, the grandfather of Heinrich Rose.
https://en.wikipedia.org/wiki?curid=30861606
Microchannel plate detector A micro-channel plate (MCP) is a planar component used for the detection of single particles (electrons, ions and neutrons) and of low-intensity impinging radiation (ultraviolet radiation and X-rays). It is closely related to an electron multiplier, as both intensify single particles or photons by the multiplication of electrons via secondary emission. However, because a microchannel plate detector has many separate channels, it can additionally provide spatial resolution. A micro-channel plate is a slab of highly resistive material, typically about 2 mm thick, with a regular array of tiny tubes or slots (microchannels) leading from one face to the opposite one, densely distributed over the whole surface. The microchannels are typically about 10 micrometers in diameter (6 micrometers in high-resolution MCPs) and spaced apart by approximately 15 micrometers; they are parallel to each other and often enter the plate at a small angle to the surface (~8° from normal). At non-relativistic energies, single particles generally produce effects too small to enable their direct detection. The microchannel plate functions as a particle amplifier, turning a single impinging particle into a cloud of electrons. By applying a strong electric field across the MCP, each individual microchannel becomes a continuous-dynode electron multiplier. A particle or photon that enters one of the channels through a small orifice is guaranteed to hit the wall of the channel, because the channel is at an angle to the plate
https://en.wikipedia.org/wiki?curid=30862351
Microchannel plate detector The impact starts a cascade of electrons that propagates through the channel, amplifying the original signal by several orders of magnitude depending on the electric field strength and the geometry of the micro-channel plate. After the cascade, the microchannel takes time to recover (or recharge) before it can detect another signal. The electrons exit the channels on the opposite side of the plate where they are collected on an anode. Some anodes are designed to allow spatially resolved ion collection, producing an image of the particles or photons incident on the plate. Although in many cases the collecting anode functions as the detecting element, the MCP itself can also be used as a detector. The discharging and recharging of the plate produced by the electron cascade can be decoupled from the high voltage applied to the plate and measured to directly produce a signal corresponding to a single particle or photon. The gain of an MCP is very noisy, meaning that two identical particles detected in succession will often produce wildly different signal magnitudes. The temporal jitter resulting from the peak height variation can be removed using a constant fraction discriminator. Employed in this way, MCPs are capable of measuring particle arrival times with very high resolution, making them an ideal detector for mass spectrometers. Most modern MCP detectors consist of two microchannel plates with angled channels rotated 180° from each other producing a chevron (v-like) shape
https://en.wikipedia.org/wiki?curid=30862351
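The orders-of-magnitude amplification described above can be pictured with a toy continuous-dynode model in which each wall strike multiplies the electron count by a secondary-emission yield δ, so a channel producing n strikes yields a gain of roughly δ^n. Both δ ≈ 2 and the strike counts below are illustrative assumptions, not values from the article:

```python
# Toy continuous-dynode gain model (illustrative assumptions): each
# wall strike multiplies the electron count by a secondary yield delta,
# so n strikes give a gain of about delta**n.

def mcp_gain(delta: float, n_strikes: int) -> float:
    return delta ** n_strikes

for n in (10, 13, 16):
    print(n, f"{mcp_gain(2.0, n):.1e}")
# delta = 2 with ~13 strikes already gives ~1e4, the single-plate
# gain figure quoted later in the article.
```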
Microchannel plate detector In a chevron MCP, the electrons that exit the first plate start the cascade in the next plate. The angle between the channels reduces ion feedback in the device and produces significantly more gain at a given voltage than a straight-channel MCP. The two MCPs can either be pressed together, to preserve spatial resolution, or have a small gap between them to spread the charge across multiple channels, which further increases the gain. A Z stack is an assembly of three microchannel plates with the channels aligned in a Z shape. Single MCPs can have a gain of up to 10,000 (40 dB), but such a system can provide a gain of more than 10 million (70 dB). An external voltage divider is used to apply 100 volts to the acceleration optics (for electron detection), to each MCP, to the gap between the MCPs, and to the gap between the backside of the last MCP and the collector (anode). The last voltage dictates the time of flight of the electrons and thereby the pulse width. The anode is a 0.4 mm thick plate with an edge of 0.2 mm radius to avoid high field strengths. It is just large enough to cover the active area of the MCP, because the backside of the last MCP and the anode act as a capacitor with 2 mm separation, and a large capacitance would slow down the signal. The positive charge in the MCP induces a positive charge in the backside metallization. A hollow torus conducts this around the edge of the anode plate
https://en.wikipedia.org/wiki?curid=30862351
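The gain figures above translate into decibels via 10·log10(gain), the convention consistent with the article's 40 dB and 70 dB values. A minimal check:

```python
import math

# Gains of stacked plates multiply; their dB values add.
# The dB convention matching the article's figures is 10*log10(gain).

def to_db(gain: float) -> float:
    return 10 * math.log10(gain)

single = 1e4                  # single plate (article figure)
z_stack = 1e7                 # three-plate Z stack (article figure)
print(to_db(single), "dB")    # 40.0 dB
print(to_db(z_stack), "dB")   # 70.0 dB
```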
Microchannel plate detector A torus is the optimum compromise between low capacitance and a short path, and for similar reasons usually no dielectric (e.g. Macor) is placed in this region. After a 90° turn of the torus it is possible to attach a large coaxial waveguide. A taper permits minimizing the radius so that an SMA connector can be used. To save space and to make the impedance matching less critical, the taper is often reduced to a small 45° cone on the backside of the anode plate. The typical 500 volts between the backside of the last MCP and the anode cannot be fed into the preamplifier. Therefore, the inner or the outer conductor needs a DC block, that is, a capacitor. It is often chosen to have only ten times the MCP–anode capacitance and is implemented as a plate capacitor. Rounded, electropolished metal plates and the ultra-high vacuum allow very high field strengths and high capacitance without a dielectric. The bias for the centre conductor is applied via resistors hanging through the waveguide (see bias tee). If the DC block is placed in the outer conductor, it is in parallel with the larger capacitor in the power supply. Assuming good screening, the only noise is then the current noise of the linear power regulator. Because the current is low in this application, because space for large capacitors is available, and because the DC-block capacitor is fast, very low voltage noise is possible, so that even weak MCP signals can be detected
https://en.wikipedia.org/wiki?curid=30862351
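The sizing rule for the DC-block capacitor (ten times the MCP–anode capacitance) can be made concrete with a parallel-plate estimate. The 25 mm active diameter below is an assumed, typical value; the 2 mm gap is from the text:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Parallel-plate estimate of the MCP-backside-to-anode capacitance and
# of the 10x DC-block capacitor mentioned in the text.

def plate_capacitance(area_m2: float, gap_m: float) -> float:
    return EPS0 * area_m2 / gap_m   # vacuum gap, no dielectric

area = math.pi * (25e-3 / 2) ** 2        # 25 mm active diameter (assumed)
c_anode = plate_capacitance(area, 2e-3)  # 2 mm MCP-anode gap (from the text)
print(f"MCP-anode: {c_anode*1e12:.1f} pF")     # ~2 pF
print(f"DC block:  {10*c_anode*1e12:.1f} pF")  # 10x, per the sizing rule
```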
Microchannel plate detector Sometimes the preamplifier is at a potential ("off ground") and gets its power through a low-power isolation transformer, outputting its signal optically. The gain of an MCP is very noisy, especially for single particles. With two thick MCPs (>1 mm) and small channels (<10 µm), saturation occurs, especially at the ends of the channels, after many electron multiplications have taken place. The last stages of the following semiconductor amplifier chain also go into saturation. A pulse of varying length but stable height, with a low-jitter leading edge, is sent to the time-to-digital converter. The jitter can be further reduced by means of a constant-fraction discriminator; this assumes that the MCP and the preamplifier are used in their linear region (space charge negligible) and that the pulse shape is the impulse response to a single particle, with variable height but fixed shape. Because an MCP can deliver only a fixed total amount of charge over its lifetime, the second MCP in a stack in particular has a lifetime problem. It is important to use thin MCPs and low voltage, relying instead on more sensitive and fast semiconductor amplifiers after the anode (see Secondary emission#Special amplifying tubes). With high count rates, or with slow detectors (MCPs with a phosphor screen, or discrete photomultipliers), pulses overlap; in this case a high-impedance (slow, but less noisy) amplifier and an ADC are used
https://en.wikipedia.org/wiki?curid=30862351
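The constant-fraction discriminator mentioned here works by forming a delayed copy of the pulse minus an attenuated copy; the zero crossing of this bipolar signal is independent of pulse height, which is what removes the amplitude-induced jitter. A minimal sketch, assuming a fixed pulse shape with variable height:

```python
import numpy as np

# Minimal constant-fraction discriminator sketch: the zero crossing of
# (delayed pulse - fraction * pulse) does not depend on pulse height.

def cfd_crossing(pulse: np.ndarray, delay: int = 4, fraction: float = 0.3) -> int:
    delayed = np.concatenate([np.zeros(delay), pulse[:-delay]])
    shaped = delayed - fraction * pulse         # bipolar signal
    lobe = int(np.argmin(shaped))               # bottom of the negative lobe
    after = np.where(shaped[lobe:] >= 0)[0]     # first return to zero
    return lobe + int(after[0]) if after.size else -1

t = np.arange(100)
template = t * np.exp(-t / 10.0)                # fixed pulse shape (assumed)
for height in (1.0, 5.0, 20.0):                 # wildly different gains...
    print(height, cfd_crossing(height * template))  # ...identical crossing index
```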
Microchannel plate detector Since the output signal from an MCP is generally small, thermal noise limits the measurement of the time structure of the MCP signal. With fast amplification schemes, however, it is possible to obtain valuable information on the signal amplitude even at very low signal levels, though not on the time structure of the wideband signal. In a delay-line detector, the electrons are accelerated to 500 eV between the back of the last MCP and a grid. They then fly for 5 mm and are dispersed over an area of 2 mm. A grid follows. Each element has a diameter of 1 mm and consists of an electrostatic lens focusing the arriving electrons through a 30 µm hole in a grounded sheet of aluminium. Behind that follows a cylinder of the same size. The electron cloud induces a 300 ps negative pulse when entering the cylinder and a positive one when leaving. After that follow another sheet, a second cylinder, and a last sheet. Effectively, the cylinders are fused into the centre conductor of a stripline. The sheets minimize crosstalk between the layers and between adjacent lines in the same layer, which would lead to signal dispersion and ringing. These striplines meander across the anode to connect all the cylinders, to offer each cylinder a 50 Ω impedance, and to generate a position-dependent delay. Because the turns in the stripline adversely affect the signal quality, their number is limited, and for higher resolutions multiple independent striplines are needed
https://en.wikipedia.org/wiki?curid=30862351
Microchannel plate detector At both ends the meanders are connected to the detector electronics, which convert the measured delays into X- (first layer) and Y-coordinates (second layer). Sometimes a hexagonal grid and three coordinates are used; this redundancy reduces the dead time by reducing the maximum travel distance, and thus the maximum delay, allowing for faster measurements. A microchannel plate detector must not be operated above roughly 60 °C, or it degrades rapidly; bakeout without applied voltage, on the other hand, has no detrimental effect.
https://en.wikipedia.org/wiki?curid=30862351
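The conversion from measured delays to a coordinate can be sketched as follows; both the propagation speed and the line length below are assumed, illustrative values, and the sum check reflects the fact that the total propagation time of a genuine hit is fixed:

```python
# Sketch of delay-line decoding (assumed parameters): an electron cloud
# hitting the meander at position x produces pulses reaching the two
# ends at t1 and t2; the difference encodes x, the sum is constant.

V_SIGNAL = 0.5e9     # effective propagation speed along the meander, m/s (assumed)
LINE_LEN = 0.08      # total electrical length of the meander, m (assumed)

def decode_position(t1_s: float, t2_s: float) -> float:
    """Position along the line (m), measured from its centre."""
    return 0.5 * V_SIGNAL * (t1_s - t2_s)

def is_valid(t1_s: float, t2_s: float, tol_s: float = 0.2e-9) -> bool:
    """t1 + t2 must equal the fixed total propagation time."""
    return abs((t1_s + t2_s) - LINE_LEN / V_SIGNAL) < tol_s

t1, t2 = 90e-12, 70e-12
print(decode_position(t1, t2))   # +5 mm from the centre
print(is_valid(t1, t2))          # True for a consistent hit
```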
Countercurrent chromatography (CCC, also counter-current chromatography) is a form of liquid–liquid chromatography that uses a liquid stationary phase, held in place by centrifugal force, to separate, identify, and quantify the chemical components of a mixture. In its broadest sense, countercurrent chromatography encompasses a collection of related liquid-chromatography techniques that employ two immiscible liquid phases without a solid support. The two liquid phases come into contact with each other as at least one phase is pumped through a column, a hollow tube or a series of chambers connected by channels, which contains both phases. The resulting dynamic mixing and settling action allows the components to be separated by their respective solubilities in the two phases. A wide variety of two-phase solvent systems consisting of at least two immiscible liquids may be employed to provide the proper selectivity for the desired separation. Some types of countercurrent chromatography, such as dual-flow CCC, feature a true countercurrent process in which the two immiscible phases flow past each other and exit at opposite ends of the column. More often, however, one liquid acts as the stationary phase and is retained in the column while the mobile phase is pumped through it. The liquid stationary phase is held in place by gravity or by centrifugal force. An example of a gravity-based method is droplet countercurrent chromatography (DCCC)
https://en.wikipedia.org/wiki?curid=30862402
Countercurrent chromatography There are two modes by which the stationary phase is retained by centrifugal force: hydrostatic and hydrodynamic. In the hydrostatic method, the column is rotated about a central axis. Hydrostatic instruments are marketed under the name centrifugal partition chromatography (CPC). Hydrodynamic instruments are often marketed as high-speed or high-performance countercurrent chromatography (HSCCC and HPCCC respectively) instruments which rely on the Archimedes' screw force in a helical coil to retain the stationary phase in the column. The components of a CCC system are similar to most liquid chromatography configurations, such as high-performance liquid chromatography. One or more pumps deliver the phases to the column which is the CCC instrument itself. Samples are introduced into the column through a sample loop filled with an automated or manual syringe. The outflow is monitored with various detectors such as ultraviolet–visible spectroscopy or mass spectrometry. The operation of the pumps, CCC instrument, sample injection, and detection may be controlled manually or with a microprocessor. The predecessor of modern countercurrent chromatography theory and practice was countercurrent distribution (CCD). The theory of CCD was described in the 1930s by Randall and Longtin. Archer Martin and Richard Laurence Millington Synge developed the methodology further during the 1940s. Finally, Lyman C. Craig introduced the Craig countercurrent distribution apparatus in 1944 which made CCD practical for laboratory work
https://en.wikipedia.org/wiki?curid=30862402
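In the chromatographic mode described above (one phase stationary), elution follows the standard liquid–liquid retention relation V_R = V_M + K_D·V_S, where K_D is the solute's partition coefficient between the stationary and the mobile phase. A short sketch with assumed, illustrative column volumes:

```python
# Standard retention relation for liquid-liquid chromatography:
# V_R = V_M + K_D * V_S, with K_D the partition coefficient of the
# solute between the stationary and the mobile phase.

def retention_volume(v_mobile_ml: float, v_stationary_ml: float, kd: float) -> float:
    return v_mobile_ml + kd * v_stationary_ml

# Example column (assumed): 100 mL total, 70% stationary-phase retention,
# two solutes with different partition coefficients.
v_s, v_m = 70.0, 30.0
for kd in (0.5, 2.0):
    print(kd, retention_volume(v_m, v_s, kd), "mL")  # 65 mL vs. 170 mL
```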