Modes of toxic action Because different modes of toxic action generally appear to be associated with different ranges of body residues, modes of toxic action can be separated into categories. However, it is unlikely that every chemical has the same mode of toxic action in every organism, so this variability should be considered. The effects of mixture toxicity should be considered as well: although mixture toxicity is generally additive, chemicals with more than one mode of toxic action may contribute to toxicity. In the last decade, modeling has become a commonly used tool to predict modes of toxic action. The models are based on Quantitative Structure-Activity Relationships (QSARs), mathematical models that relate the biological activity of molecules to their chemical structures and corresponding chemical and physicochemical properties. QSARs can then predict the mode of toxic action of an unknown compound by comparing its characteristic toxicity profile and chemical structure to those of reference compounds with known toxicity profiles and chemical structures. Russom and colleagues were among the first researchers to classify modes of toxic action with the use of QSARs; they classified 600 chemicals as narcotics. Even though QSARs are a useful tool for predicting modes of toxic action, chemicals having multiple modes of toxic action can obscure QSAR analyses. Therefore, these models are continuously being developed
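The compare-to-reference-compounds idea behind QSAR classification can be sketched with a toy nearest-neighbour model. Everything below is illustrative: the descriptor names, the numerical values, and the reference compounds are invented, and a published QSAR would use validated descriptors and far more data.

```python
import math

# Hypothetical reference compounds with two illustrative descriptors:
# log Kow (hydrophobicity) and an electrophilicity index.  The values
# are invented for illustration, not measured data.
references = {
    "narcotic_A":     {"logKow": 2.1, "electrophilicity": 0.2, "mode": "nonpolar narcosis"},
    "narcotic_B":     {"logKow": 3.0, "electrophilicity": 0.3, "mode": "nonpolar narcosis"},
    "electrophile_A": {"logKow": 1.2, "electrophilicity": 1.8, "mode": "electrophilic reactivity"},
    "uncoupler_A":    {"logKow": 4.5, "electrophilicity": 0.9, "mode": "oxidative uncoupling"},
}

def predict_mode(logKow, electrophilicity):
    """Assign the mode of toxic action of the nearest reference compound
    in descriptor space (a crude 1-nearest-neighbour 'QSAR')."""
    def dist(ref):
        return math.hypot(ref["logKow"] - logKow,
                          ref["electrophilicity"] - electrophilicity)
    nearest = min(references.values(), key=dist)
    return nearest["mode"]

# An unknown compound whose descriptors fall nearest the narcotics:
print(predict_mode(2.5, 0.25))  # nonpolar narcosis
```

As in real QSAR work, a compound with multiple modes of action would sit between clusters in descriptor space, which is exactly why such chemicals can obscure the analysis.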
https://en.wikipedia.org/wiki?curid=39392026
Modes of toxic action The objective of environmental risk assessment is to protect the environment from adverse effects. Researchers are further developing QSAR models with the ultimate goal of providing clear insight not only into the mode of toxic action, but also into the actual target site, the concentration of the chemical at this target site, and the interaction occurring there, as well as predicting the modes of toxic action in mixtures. Information on the mode of toxic action is crucial not only for understanding joint toxic effects and potential interactions between chemicals in mixtures, but also for developing assays for the evaluation of complex mixtures in the field. The combination of behavioral and physiological responses, CBR estimates, and chemical fate and bioaccumulation QSAR models can be a powerful regulatory tool to address pollution and toxicity in areas where effluents are discharged.
https://en.wikipedia.org/wiki?curid=39392026
Bernard L. Shaw Bernard Leslie Shaw, FRS, is an English chemist who has made notable contributions to organometallic chemistry. He is Emeritus Professor of Inorganic and Structural Chemistry at the University of Leeds. Together with his longtime collaborator Joseph Chatt, Shaw contributed to the development of organoplatinum chemistry. They reported the first platinum hydride, trans-PtHCl(PEt3)2. This colourless, volatile solid was the first non-organometallic hydride (i.e., lacking a metal-carbon bond). With an interest in cyclometallation, he discovered one of the first pincer complexes via the orthometalation of 1,3-C6H4(CH2P(t-Bu)2)2.
https://en.wikipedia.org/wiki?curid=39404321
Standard dimension ratio (SDR) is a method of rating a pipe's durability against pressure. The standard dimension ratio describes the relationship between the pipe's outside diameter and the thickness of the pipe wall. Common designations are SDR11, SDR17 and SDR35. Pipes with a lower SDR can withstand higher pressures. SDR = D / s, where D is the pipe outside diameter and s is the pipe wall thickness.
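The defining ratio is straightforward to compute; the pipe dimensions in the example below are hypothetical:

```python
def sdr(outside_diameter_mm, wall_thickness_mm):
    """Standard dimension ratio: SDR = D / s (both in the same unit)."""
    return outside_diameter_mm / wall_thickness_mm

# Example: a pipe with 110 mm outside diameter and a 10 mm wall
print(sdr(110.0, 10.0))  # 11.0, i.e. an SDR11 pipe
```

Note that a thicker wall at the same diameter gives a lower SDR, consistent with lower-SDR pipes withstanding higher pressures.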
https://en.wikipedia.org/wiki?curid=39410762
Origin and occurrence of fluorine Fluorine is relatively rare in the universe compared to other elements of nearby atomic weight. On Earth, fluorine is essentially found only in mineral compounds because of its reactivity. The main commercial source, fluorite, is a common mineral. At 400 ppb, fluorine is estimated to be the 24th most common element in the universe. It is comparatively rare for a light element (elements tend to be more common the lighter they are). All of the elements from atomic number 6 (carbon) to atomic number 14 (silicon) are hundreds or thousands of times more common than fluorine, except for 11 (sodium). One science writer described fluorine as a "shack amongst mansions" in terms of abundance. Fluorine is so rare because it is not a product of the usual nuclear fusion processes in stars, and any fluorine created within stars is rapidly eliminated through further fusion reactions—either with hydrogen to form oxygen and helium, or with helium to make neon and hydrogen. The presence of fluorine at all—outside of temporary existence in stars—is somewhat of a mystery because of the need to escape these fluorine-destroying reactions. Three theoretical solutions to the mystery exist: In type II supernovae, atoms of neon could be hit by neutrinos during the explosion and converted to fluorine. In Wolf-Rayet stars (blue stars over 40 times heavier than the Sun), a strong stellar wind could blow the fluorine out of the star before hydrogen or helium could destroy it
https://en.wikipedia.org/wiki?curid=39415735
Origin and occurrence of fluorine Finally, in asymptotic giant branch stars (a type of red giant), fusion reactions occur in pulses and convection could lift fluorine out of the inner star. Only the red giant hypothesis has supporting evidence from observations. In space, fluorine commonly combines with hydrogen to form hydrogen fluoride. (This compound has been suggested as a tracer to enable tracking reservoirs of hydrogen in the universe.) In addition to HF, monatomic fluorine has been observed in the interstellar medium. Fluorine cations have been seen in planetary nebulae and in stars, including our Sun. Fluorine is the thirteenth most common element in Earth's crust, comprising between 600 and 700 ppm of the crust by mass. Because of its reactivity, it is essentially only found in compounds. Three minerals are industrially relevant sources of fluorine: fluorite, fluorapatite, and cryolite. Fluorite (CaF2), also called fluorspar, is the main source of commercial fluorine. It is a colorful mineral associated with hydrothermal deposits, and is common and found worldwide. China supplies more than half of the world's demand, and Mexico is the second-largest producer in the world. The United States produced most of the world's fluorite in the early 20th century, but its last mine, in Illinois, shut down in 1995. Canada also exited production in the 1990s. The United Kingdom's fluorite mining has declined, and it has been a net importer since the 1980s
https://en.wikipedia.org/wiki?curid=39415735
Origin and occurrence of fluorine Fluorapatite (Ca5(PO4)3F) is mined along with other apatites for its phosphate content and is used mostly for the production of fertilizers. Most of the Earth's fluorine is bound in this mineral, but because the percentage within the mineral is low (3.5%), the fluorine is discarded as waste. Only in the United States is there significant recovery: there, the hexafluorosilicates produced as byproducts are used to supply water fluoridation. Cryolite (Na3AlF6) is the least abundant of the three major fluorine-containing minerals, but is a concentrated source of fluorine. It was formerly used directly in aluminium production; however, the main commercial mine, on the west coast of Greenland, closed in 1987. Several other minerals, such as the gemstone topaz, contain fluoride. Fluoride is not significant in seawater or brines, unlike the other halides, because the alkaline earth fluorides precipitate out of water. Commercially insignificant quantities of organofluorines have been observed in volcanic eruptions and in geothermal springs. Their ultimate origin (from biological sources or geological formation) is unclear. The possibility of small amounts of gaseous fluorine within crystals has been debated for many years. One form of fluorite, antozonite, has a smell suggestive of fluorine when crushed. The mineral also has a dark black color, perhaps from free calcium (not bonded to fluoride). In 2012, a study reported the detection of trace quantities (0.04% by weight) of diatomic fluorine in antozonite
https://en.wikipedia.org/wiki?curid=39415735
Origin and occurrence of fluorine It was suggested that radiation from small amounts of uranium within the crystals had caused the free fluorine defects.
https://en.wikipedia.org/wiki?curid=39415735
1994 Oslo Protocol on Further Reduction of Sulphur Emissions The Protocol to the 1979 Convention on Long-Range Transboundary Air Pollution on Further Reduction of Sulphur Emissions is an agreement to provide for a further reduction in sulphur emissions or transboundary fluxes. It is a protocol to the Convention on Long-Range Transboundary Air Pollution and supplements the 1985 Helsinki Protocol on the Reduction of Sulphur Emissions. "opened for signature" - 14 June 1994 "entered into force" - 5 August 1998 "parties" - (29) Austria, Belgium, Bulgaria, Canada, Croatia, Cyprus, Czech Republic, Denmark, European Union, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Liechtenstein, Lithuania, Luxembourg, Republic of Macedonia, Monaco, Netherlands, Norway, Slovakia, Slovenia, Spain, Sweden, Switzerland, United Kingdom "countries that have signed, but not yet ratified" - (3) Poland, Russia, Ukraine
https://en.wikipedia.org/wiki?curid=39415751
Mallotojaponin may refer to:
https://en.wikipedia.org/wiki?curid=39416783
Cymenes The cymenes (methylcumenes, isopropyltoluenes) constitute a group of aromatic hydrocarbons whose structure consists of a benzene ring bearing an isopropyl group (−CH(CH3)2) and a methyl group (−CH3) as substituents. Through the different arrangements of these groups, they form three structural isomers with the molecular formula C10H14. They also belong to the group of C4-benzenes. The best-known isomer is p-cymene, which occurs in nature and is one of the terpenes.
https://en.wikipedia.org/wiki?curid=39417701
C30H26O13 The molecular formula C30H26O13 (molar mass: 594.52 g/mol, exact mass: 594.137340 u) may refer to:
https://en.wikipedia.org/wiki?curid=39417963
Alkaline hydrolysis Alkaline hydrolysis, in organic chemistry, usually refers to types of nucleophilic substitution reactions in which the attacking nucleophile is a hydroxide ion. In the alkaline hydrolysis of esters and amides the hydroxide ion nucleophile attacks the carbonyl carbon in a nucleophilic acyl substitution reaction. This mechanism is supported by isotope labeling experiments. For example, when ethyl propionate with an oxygen-18 labeled ethoxy group is treated with sodium hydroxide (NaOH), the oxygen-18 is completely absent from the sodium propionate product and is found exclusively in the ethanol formed. The reaction is often used to turn solid organic matter into a liquid form for easier disposal. Drain cleaners take advantage of this method to dissolve hair and fat in pipes. The reaction is also used to dispose of human and other animal remains as an alternative to traditional burial or cremation.
https://en.wikipedia.org/wiki?curid=39419082
Transition engineering is the professional-engineering discipline that deals with the application of the principles of science to the design, innovation and adaptation of engineered systems that meet the needs of today without compromising the ecological, societal and economic systems on which future generations will depend to meet their own needs. Just as all engineering fields incorporate safety considerations into design parameters, sustainability thinking is built into transition engineering. Transition engineering is emerging as a field to give engineers the tools necessary to address sustainability in the design and management of engineered systems. It is a cross-disciplinary field that addresses the issues of future resource availability and identifies, then realizes, opportunities to increase resilience and adaptation. Engineering professions emerge when new technologies, new problems or new opportunities arise. This was the case when safety engineering grew in the early 1900s to combat high workplace injury and fatality rates. In the 1960s, environmental engineering emerged as a discipline to reduce industrial pollution and mitigate impacts on environmental health and water quality. Quality engineering came about with the increase in mass-production techniques during WWII and the need to confirm the quality of the products
https://en.wikipedia.org/wiki?curid=39425504
Transition engineering There are two serious problems driving the emergence of Transition Engineering: the exponential growth in the concentration of carbon dioxide in Earth's atmosphere, and the stagnation and imminent decline of conventional oil production known as peak oil. The concentration of carbon dioxide in the atmosphere recently exceeded 400 ppm, a level that Earth has not known for 800,000 years. Transition engineering aims to take advantage of the current access to the remaining lower-cost and higher-EROI energy resources to re-develop all aspects of urban and industrial engineered systems to adapt as fossil fuel use is dramatically reduced. The idea behind transition engineering has sprouted from many different roots, both technical and non-technical. The concept of sustainable development has been around since 1987, and the problem of sustainability was a driving force in the development of transition engineering. The Transition Town movement provided further inspiration, as it showed that there were many groups of people around the world motivated to prepare for peak oil and climate change. Transition towns and ecovillages demonstrate the need for engineers to build systems that manage unsustainable risks and provide people with sustainable options. Engineers are ethically required to "hold paramount the safety, health and welfare of the public" and to answer society's need for sustainable development. The origins of safety engineering provided much of the inspiration for transition engineering
https://en.wikipedia.org/wiki?curid=39425504
Transition engineering At the beginning of the 1900s, business owners viewed workplace safety as a wasted investment and politicians were slow to change. After the Triangle Shirtwaist Factory Fire in New York City killed 146 trapped workers, 62 engineers came together to investigate how to make the workplace a safer place to be. This eventually led to the formation of the American Society of Safety Engineers. As safety engineering manages the risks of unsafe conditions, transition engineering manages the risks of unsustainable conditions. To give engineers a better grasp of sustainability, transition engineering defines the problem as UN-sustainability. This is similar to the problem of unsafe conditions that is the purpose of safety engineering: we do not necessarily know what a perfectly safe system looks like, but we do know what unsafe systems look like and how to improve them; the same applies to the unsustainability of systems. By reducing unsustainability issues we take steps in the right direction. The Transition Engineering method involves 7 steps to help engineers develop projects that deal with changing unsustainable activities. As a discipline, Transition Engineering recognizes that "Business as Usual" projections of future scenarios from past trends are not valid because the underlying conditions have changed sufficiently from the conditions of the past. For example, the projection of future oil supply in 2050 from data prior to 2005 would give an expectation of a 50% increase in demand over that time-frame
https://en.wikipedia.org/wiki?curid=39425504
Transition engineering However, the actual production rate of conventional oil has not increased since 2005 and is projected to decline by more than 50% by 2050. GATE opened the first group in the UK in Feb 2014. GATE is a Professional Engineering Institution: a membership association and learned society, comprising an emerging network of engineers and non-engineers who share the idea that engineers are responsible for changing engineered systems in order to adapt to declining fossil fuels and other unsustainable resources. Transition Engineering is a change-management discipline. Like Safety Engineering, Transition Engineering uses an audit and stock-take of current system design and operation to quantify the risks to essential activities and resources over a time-frame of study. The time-frame of study should be commensurate with the lifetime of the assets involved in the activity. An activity is anything that the engineered system supports, for example manufacturing, sewage treatment, mobility, or food preservation. Transition Engineering recognizes that the analytical methods of strategic analysis over a life-cycle time-frame are at odds with most economic analyses, which discount values with time. The strategic analysis carried out by Transition Engineers seeks to avoid stranded investment by recognizing resource risks. A classic example of stranded investments is the North Atlantic Cod Fishery - where the largest number of bottom trawling ships (e.g
https://en.wikipedia.org/wiki?curid=39425504
Transition engineering those ships responsible for destroying the Cod spawning beds) were manufactured in the year that the fish stocks collapsed. The Global Association for Transition Engineering is registered charity number 1166048, registered with the UK Charity Commission on 14 March 2016. It is a "Charitable Incorporated Organisation" or CIO. Engineering design process
Safety Engineering
Sustainable transport
Systems engineering
Susan Krumdieck
https://en.wikipedia.org/wiki?curid=39425504
Recalescence is an increase in temperature that occurs while cooling a metal, when a change in structure with an increase in entropy occurs. The heat responsible for the change in temperature comes from the change in entropy. When a structural transformation occurs, the Gibbs free energies of the two structures are more or less the same; therefore, the process is exothermic. The heat provided is the latent heat. Recalescence also occurs after supercooling, when the supercooled liquid suddenly crystallizes, forming a solid but releasing heat in the process.
https://en.wikipedia.org/wiki?curid=39438424
Coherent effects in semiconductor optics The interaction of matter with light, i.e., electromagnetic fields, is able to generate a coherent superposition of excited quantum states in the material. "Coherent" denotes the fact that the material excitations have a well-defined phase relation which originates from the phase of the incident electromagnetic wave. Macroscopically, the superposition state of the material results in an optical polarization, i.e., a rapidly oscillating dipole density. The optical polarization is a genuine non-equilibrium quantity that decays to zero when the excited system relaxes to its equilibrium state after the electromagnetic pulse is switched off. Due to this decay, which is called "dephasing", coherent effects are observable only for a certain temporal duration after pulsed photoexcitation. Various materials such as atoms, molecules, metals, insulators, and semiconductors are studied using coherent optical spectroscopy, and such experiments and their theoretical analysis have revealed a wealth of insights into the involved matter states and their dynamical evolution. This article focuses on coherent optical effects in semiconductors and semiconductor nanostructures. After an introduction into the basic principles, the semiconductor Bloch equations (abbreviated as SBEs), which are able to theoretically describe coherent semiconductor optics on the basis of a fully microscopic many-body quantum theory, are introduced
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics Then, a few prominent examples of coherent effects in semiconductor optics are described, all of which can be understood theoretically on the basis of the SBEs. Macroscopically, Maxwell's equations show that in the absence of free charges and currents an electromagnetic field interacts with matter via the optical polarization formula_1. The wave equation for the electric field formula_2 reads formula_3 and shows that the second derivative with respect to time of formula_1, i.e., formula_5, appears as a source term in the wave equation for the electric field formula_2. Thus, for optically thin samples and measurements performed in the far field, i.e., at distances significantly exceeding the optical wavelength formula_7, the emitted electric field resulting from the polarization is proportional to its second time derivative, i.e., formula_8. Therefore, measuring the dynamics of the emitted field formula_9 provides direct information on the temporal evolution of the optical material polarization formula_10. Microscopically, the optical polarization arises from quantum mechanical transitions between different states of the material system. For the case of semiconductors, electromagnetic radiation with optical frequencies is able to move electrons from the valence (formula_11) to the conduction (formula_12) band
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics The macroscopic polarization formula_1 is computed by summing over all microscopic transition dipoles formula_14 via formula_15, where formula_16 is the dipole matrix element which determines the strength of individual transitions between the states formula_11 and formula_12, formula_19 denotes the complex conjugate, and formula_20 is the appropriately chosen system's volume. If formula_21 and formula_22 are the energies of the conduction and valence band states, their dynamic quantum mechanical evolution is according to the Schrödinger equation given by phase factors formula_23 and formula_24, respectively. The superposition state described by formula_14 is evolving in time according to formula_26. Assuming that we start at formula_27 with formula_28, we have for the optical polarization formula_29. Thus, formula_30 is given by a summation over the microscopic transition dipoles which all oscillate with frequencies corresponding to the energy differences between the involved quantum states. Clearly, the optical polarization formula_30 is a coherent quantity which is characterized by an amplitude and a phase. Depending on the phase relationships of the microscopic transition dipoles, one may obtain constructive or destructive interference, in which the microscopic dipoles are in or out of phase, respectively, and temporal interference phenomena like quantum beats, in which the modulus of formula_30 varies as function of time
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics Ignoring many-body effects and the coupling to other quasiparticles and to reservoirs, the dynamics of photoexcited two-level systems can be described by a set of two equations, the so-called optical Bloch equations. These equations are named after Felix Bloch, who formulated them in order to analyze the dynamics of spin systems in nuclear magnetic resonance. The two-level Bloch equations read formula_33 and formula_34 Here, formula_35 denotes the energy difference between the two states and formula_36 is the inversion, i.e., the difference in the occupations of the upper and the lower states. The electric field formula_2 couples the microscopic polarization formula_38 to the product of the Rabi energy formula_39 and the inversion formula_36. In the absence of the driving electric field, i.e., for formula_41, the Bloch equation for formula_38 describes an oscillation, i.e., formula_43. The optical Bloch equations enable a transparent analysis of several nonlinear optical experiments. They are, however, only well suited for systems with optical transitions between isolated levels in which many-body interactions are of minor importance, as is sometimes the case in atoms or small molecules. In solid-state systems, such as semiconductors and semiconductor nanostructures, an adequate description of the many-body Coulomb interaction and the coupling to additional degrees of freedom is essential, and thus the optical Bloch equations are not applicable
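The structure of the two-level optical Bloch equations can be illustrated by integrating them numerically. The sketch below is a minimal pure-Python example under simplifying assumptions not stated in the text (zero detuning, constant real Rabi frequency, no dephasing or relaxation terms, and the common Bloch-vector form (u, v, w) of the equations); it reproduces the textbook Rabi flopping of the inversion.

```python
import math

def bloch_resonant(rabi, t_final, dt=1e-4):
    """Integrate the resonant (zero-detuning) optical Bloch equations for the
    Bloch vector (u, v, w), starting from the ground state w = -1, with a
    constant Rabi frequency `rabi`.  Midpoint (RK2) time stepping."""
    def deriv(v, w):
        # At zero detuning: u' = 0, v' = rabi * w, w' = -rabi * v
        return rabi * w, -rabi * v

    u, v, w = 0.0, 0.0, -1.0
    for _ in range(int(round(t_final / dt))):
        dv1, dw1 = deriv(v, w)
        dv2, dw2 = deriv(v + 0.5 * dt * dv1, w + 0.5 * dt * dw1)
        v += dt * dv2
        w += dt * dw2
    return u, v, w

# A "pi pulse" (rabi * t = pi) inverts the system: w goes from -1 to +1.
_, _, w_pi = bloch_resonant(rabi=math.pi, t_final=1.0)
print(round(w_pi, 3))  # 1.0
```

The closed-form solution in this limit is w(t) = -cos(rabi*t), so the integrator can be checked against it directly; as the text notes, adding the many-body terms of the SBEs would make the effective transition energy and Rabi energy depend on the excitation itself.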
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics For a realistic description of optical processes in solid materials, it is essential to go beyond the simple picture of the optical Bloch equations and to treat the many-body interactions that describe the coupling among the elementary material excitations, e.g., the Coulomb interaction between the electrons and the coupling to other degrees of freedom, such as lattice vibrations, i.e., the electron-phonon coupling. Within a semiclassical approach, where the light field is treated as a classical electromagnetic field and the material excitations are described quantum mechanically, all of the above-mentioned effects can be treated microscopically on the basis of a many-body quantum theory. For semiconductors, the resulting system of equations is known as the semiconductor Bloch equations. For the simplest case of a two-band model of a semiconductor, the SBEs can be written schematically as formula_44 formula_45 formula_46 Here formula_47 is the microscopic polarization, formula_48 and formula_49 are the electron occupations in the conduction and valence bands (formula_12 and formula_11), respectively, and formula_52 denotes the crystal momentum. As a result of the many-body Coulomb interaction and possibly further interaction processes, the transition energy formula_53 and the Rabi energy formula_54 both depend on the state of the excited system, i.e
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics , they are functions of the time-dependent polarizations formula_55 and occupations formula_56 and formula_57, respectively, at all crystal momenta formula_58. Due to this coupling among the excitations for all values of the crystal momentum formula_52, the optical excitations in semiconductors cannot be described on the level of isolated optical transitions but have to be treated as an interacting many-body quantum system. A prominent and important result of the Coulomb interaction among the photoexcitations is the appearance of strongly absorbing discrete excitonic resonances, which show up in the absorption spectra of semiconductors spectrally below the fundamental band gap frequency. Since an exciton consists of a negatively charged conduction band electron and a positively charged valence band hole (i.e., an electron missing in the valence band) which attract each other via the Coulomb interaction, excitons have a hydrogenic series of discrete absorption lines. Due to the optical selection rules of typical III-V semiconductors such as gallium arsenide (GaAs), only the s-states, i.e., 1"s", 2"s", etc., can be optically excited and detected; see the article on the Wannier equation. The many-body Coulomb interaction leads to significant complications since it results in an infinite hierarchy of dynamic equations for the microscopic correlation functions that describe the nonlinear optical response
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics The terms given explicitly in the SBEs above arise from a treatment of the Coulomb interaction in the time-dependent Hartree–Fock approximation. Whereas this level is sufficient to describe excitonic resonances, there are several further effects, e.g., excitation-induced dephasing and contributions from higher-order correlations like excitonic populations and biexcitonic resonances, which require one to treat so-called many-body correlation effects that are by definition beyond the Hartree–Fock level. These contributions are formally included in the SBEs given above in the terms denoted by formula_60. The systematic truncation of the many-body hierarchy and the development and analysis of controlled approximation schemes is an important topic in the microscopic theory of optical processes in condensed-matter systems. Depending on the particular system and the excitation conditions, several approximation schemes have been developed and applied. For highly excited systems, it is often sufficient to describe many-body Coulomb correlations using the second-order Born approximation. Such calculations were, in particular, able to successfully describe the spectra of semiconductor lasers; see the article on semiconductor laser theory. In the limit of weak light intensities, signatures of exciton complexes, in particular biexcitons, in the coherent nonlinear response have been analyzed using the dynamics-controlled truncation scheme
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics These two approaches and several other approximation schemes can be viewed as special cases of the so-called cluster expansion, in which the nonlinear optical response is classified by correlation functions that explicitly take into account interactions between a certain maximum number of particles and factorize larger correlation functions into products of lower-order ones. By nonlinear optical spectroscopy using ultrafast laser pulses with durations on the order of tens to hundreds of femtoseconds, several coherent effects have been observed and interpreted. Such studies and their proper theoretical analysis have revealed a wealth of information on the nature of the photoexcited quantum states, the coupling among them, and their dynamical evolution on ultrashort time scales. In the following, a few important effects are briefly described. Quantum beats are observable in systems in which the total optical polarization is due to a finite number of discrete transition frequencies which are quantum mechanically coupled, e.g., by common ground or excited states. Assuming for simplicity that all these transitions have the same dipole matrix element, after excitation with a short laser pulse at formula_27 the optical polarization formula_30 of the system evolves as formula_63, where the index formula_64 labels the participating transitions
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics A finite number of frequencies results in temporal modulations of the squared modulus of the polarization formula_65, and thus of the intensity of the emitted electromagnetic field formula_66, with time periods formula_67. For the case of just two frequencies, the squared modulus of the polarization is proportional to formula_68, i.e., due to the interference of two contributions with the same amplitude but different frequencies, the polarization varies between a maximum and zero. In semiconductors and semiconductor heterostructures, such as quantum wells, nonlinear optical quantum-beat spectroscopy has been widely used to investigate the temporal dynamics of excitonic resonances. In particular, the consequences of many-body effects, which depending on the excitation conditions may lead to, e.g., a coupling among different excitonic resonances via biexcitons and other Coulomb correlation contributions and to a decay of the coherent dynamics by scattering and dephasing processes, have been explored in many pump-probe and four-wave-mixing measurements. The theoretical analysis of such experiments in semiconductors requires a treatment on the basis of quantum mechanical many-body theory as is provided by the SBEs with many-body correlations incorporated on an adequate level. In nonlinear optics it is possible to reverse the destructive interference of so-called inhomogeneously broadened systems, which contain a distribution of uncoupled subsystems with different resonance frequencies
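The two-frequency case is easy to reproduce numerically. In the sketch below, the two transition frequencies are arbitrary illustrative values and both transitions are given equal dipole amplitudes, as assumed in the text; the squared modulus of the total polarization then oscillates between its maximum and zero with the beat period 2*pi/|w1 - w2|.

```python
import cmath
import math

def beat_intensity(t, w1, w2):
    """|P(t)|^2 for two equal-amplitude transition dipoles oscillating
    at angular frequencies w1 and w2 (dephasing neglected)."""
    p = cmath.exp(-1j * w1 * t) + cmath.exp(-1j * w2 * t)
    return abs(p) ** 2

w1, w2 = 10.0, 12.0                    # illustrative angular frequencies
period = 2 * math.pi / abs(w1 - w2)    # beat period of |P(t)|^2

print(round(beat_intensity(0.0, w1, w2), 3))         # 4.0  (both dipoles in phase)
print(round(beat_intensity(period / 2, w1, w2), 3))  # 0.0  (destructive interference)
```

This is exactly the 2 + 2*cos((w1 - w2)*t) modulation described in the text; adding a dephasing factor exp(-t/T2) to each dipole would damp the beats, which is how beat experiments probe coherence decay.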
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics For example, consider a four-wave-mixing experiment in which the first short laser pulse excites all transitions at formula_27. As a result of the destructive interference between the different frequencies, the overall polarization decays to zero. A second pulse arriving at formula_70 is able to conjugate the phases of the individual microscopic polarizations, i.e., formula_71, of the inhomogeneously broadened system. The subsequent unperturbed dynamical evolution of the polarizations leads to rephasing such that all polarizations are in phase at formula_72, which results in a measurable macroscopic signal. Thus, this so-called photon echo occurs since all individual polarizations are in phase and add up constructively at formula_72. Since the rephasing is only possible if the polarizations remain coherent, the loss of coherence can be determined by measuring the decay of the photon echo amplitude with increasing time delay. When photon echo experiments are performed in semiconductors with exciton resonances, it is essential to include many-body effects in the theoretical analysis, since they may qualitatively alter the dynamics. For example, numerical solutions of the SBEs have demonstrated that the dynamical reduction of the band gap, which originates from the Coulomb interaction among the photoexcited electrons and holes, is able to generate a photon echo even for resonant excitation of a single discrete exciton resonance with a pulse of sufficient intensity
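The rephasing mechanism behind the photon echo can be illustrated with a toy ensemble of uncoupled two-level polarizations (no many-body effects, no irreversible dephasing; the Gaussian detuning distribution and all numbers are invented for illustration). Each member dephases freely, the second pulse conjugates its phase, and all members return to a common phase at t = 2*tau:

```python
import cmath
import random

random.seed(0)
# Inhomogeneously broadened ensemble: random detunings around zero.
detunings = [random.gauss(0.0, 1.0) for _ in range(2000)]

def macroscopic_polarization(tau, t):
    """|P| after free evolution to tau, phase conjugation by the second
    pulse, then further free evolution to time t (normalized to 1)."""
    total = 0.0 + 0.0j
    for d in detunings:
        p = cmath.exp(-1j * d * tau)          # dephasing between the pulses
        p = p.conjugate()                     # second pulse conjugates the phase
        p *= cmath.exp(-1j * d * (t - tau))   # further free evolution
        total += p
    return abs(total) / len(detunings)

tau = 5.0
print(round(macroscopic_polarization(tau, 2 * tau), 2))    # 1.0: full echo at t = 2*tau
print(macroscopic_polarization(tau, 2 * tau + 4.0) < 0.2)  # True: dephased again afterwards
```

Because each polarization here stays perfectly coherent, the echo recovers the full amplitude; multiplying each member by a decay factor exp(-t/T2) would reproduce the echo-amplitude decay with pulse delay used to measure coherence loss.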
https://en.wikipedia.org/wiki?curid=39444110
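The rephasing described above can be illustrated with a toy ensemble of uncoupled oscillators (a sketch with made-up parameters that ignores all many-body effects):

```python
import numpy as np

rng = np.random.default_rng(0)
omegas = rng.normal(10.0, 1.0, 500)   # inhomogeneous frequency distribution
tau = 5.0                             # arrival time of the second pulse
t = np.linspace(0.0, 3 * tau, 1200)

# After the first pulse each subsystem evolves as exp(-i w t); the second
# pulse at t = tau conjugates the phase, so for t > tau each polarization
# behaves as exp(+i w tau) * exp(-i w (t - tau)) = exp(-i w (t - 2 tau)).
P = np.where(t[None, :] < tau,
             np.exp(-1j * omegas[:, None] * t[None, :]),
             np.exp(-1j * omegas[:, None] * (t[None, :] - 2 * tau)))
macro = np.abs(P.mean(axis=0))        # macroscopic polarization amplitude

# macro decays by destructive interference, then rephases:
# the photon echo appears at t = 2*tau.
```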
Coherent effects in semiconductor optics Besides the rather simple effect of inhomogeneous broadening, spatial fluctuations of the energy, i.e., disorder, which in semiconductor nanostructures may, e.g., arise from imperfections of the interfaces between different materials, can also lead to a decay of the photon echo amplitude with increasing time delay. To consistently treat this phenomenon of disorder-induced dephasing, the SBEs need to be solved including biexciton correlations. As shown in Ref., such a microscopic theoretical approach is able to describe disorder-induced dephasing in good agreement with experimental results. In a pump-probe experiment one excites the system with a pump pulse (formula_74) and probes its dynamics with a (weak) test pulse (formula_75). With such experiments one can measure the so-called differential absorption formula_76, which is defined as the difference between the probe absorption in the presence of the pump formula_77 and the probe absorption without the pump formula_78. For resonant pumping of an optical resonance and when the pump precedes the test, the absorption change formula_79 is usually negative in the vicinity of the resonance frequency. This effect, called bleaching, arises from the fact that the excitation of the system with the pump pulse reduces the absorbance of the test pulse. There may also be positive contributions to formula_79 spectrally near the original absorption line due to resonance broadening and at other spectral positions due to excited-state absorption, i.e
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics , optical transitions to states, such as biexcitons, which are only possible if the system is in an excited state. The bleaching and the positive contributions are generally present both in coherent situations and in incoherent situations where the polarization vanishes but occupations of excited states are present. For detuned pumping, i.e., when the frequency of the pump field is not identical to the frequency of the material transition, the resonance frequency shifts as a result of the light-matter coupling, an effect known as the optical Stark effect. The optical Stark effect requires coherence, i.e., a non-vanishing optical polarization induced by the pump pulse, and thus decreases with increasing time delay between the pump and probe pulses and vanishes if the system has returned to its ground state. As can be shown by solving the optical Bloch equations for a two-level system, due to the optical Stark effect the resonance frequency should shift to higher values if the pump frequency is smaller than the resonance frequency, and vice versa. This is also the typical result of experiments performed on excitons in semiconductors. The fact that in certain situations such predictions, which are based on simple models, fail to even qualitatively describe experiments in semiconductors and semiconductor nanostructures has received significant attention.
https://en.wikipedia.org/wiki?curid=39444110
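For the two-level model mentioned above, the direction of the shift can be sketched from the dressed-state picture of the optical Bloch equations; this is a textbook result for an idealized two-level system, not the semiconductor many-body treatment:

```latex
% Two-level optical Stark shift in the rotating-wave approximation
% (textbook sketch). Detuning and Rabi frequency:
\Delta = \omega_0 - \omega_{\mathrm{pump}}, \qquad \Omega = \frac{d\,E}{\hbar}
% Dressed-state splitting: \hbar\sqrt{\Delta^2 + \Omega^2}.
% Weak-field limit (\Omega \ll |\Delta|): shift of the resonance
\delta\omega_0 \approx \frac{\Omega^2}{2\Delta}
% \Delta > 0 (pump below the resonance) gives a blue shift, and vice
% versa, consistent with the simple-model prediction described above.
```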
Coherent effects in semiconductor optics Such deviations arise because in semiconductors many-body effects typically dominate the optical response, and therefore the SBEs, rather than the optical Bloch equations, have to be solved to obtain an adequate understanding. An important example was presented in Ref., where it was shown that many-body correlations arising from biexcitons are able to reverse the sign of the optical Stark effect. In contrast to the optical Bloch equations, the SBEs including coherent biexcitonic correlations were able to properly describe the experiments performed on semiconductor quantum wells. Consider formula_81 two-level systems at different positions in space. Maxwell's equations lead to a coupling among all the optical resonances, since the field emitted from a specific resonance interferes with the emitted fields of all other resonances. As a result, the system is characterized by formula_81 eigenmodes originating from the radiatively coupled optical resonances. A spectacular situation arises if formula_81 identical two-level systems are regularly arranged with distances equal to an integer multiple of formula_84, where formula_7 is the optical wavelength. In this case, the emitted fields of all resonances interfere constructively and the system behaves effectively as a single system with a formula_81-times stronger optical polarization. Since the intensity of the emitted electromagnetic field is proportional to the squared modulus of the polarization, it scales initially as formula_87.
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics Due to the cooperativity that originates from the coherent coupling of the subsystems, the radiative decay rate formula_88 is increased by formula_81, i.e., formula_90, where formula_91 is the radiative decay of a single two-level system. Thus the coherent optical polarization decays formula_81-times faster, proportional to formula_93, than that of an isolated system. As a result, the time-integrated emitted field intensity scales as formula_81, since the initial formula_87 factor is multiplied by formula_96, which arises from the time integral over the enhanced radiative decay. This effect of superradiance has been demonstrated by monitoring the decay of the exciton polarization in suitably arranged semiconductor multiple quantum wells. Due to the superradiance introduced by the coherent radiative coupling among the quantum wells, the decay rate increases proportionally to the number of quantum wells and is thus significantly more rapid than for a single quantum well. The theoretical analysis of this phenomenon requires a consistent solution of Maxwell's equations together with the SBEs. The few examples given above represent only a small subset of several further phenomena which demonstrate that the coherent optical response of semiconductors and semiconductor nanostructures is strongly influenced by many-body effects. Other interesting research directions which similarly require an adequate theoretical analysis including many-body interactions are, e.g
https://en.wikipedia.org/wiki?curid=39444110
Coherent effects in semiconductor optics , phototransport phenomena in which optical fields generate and/or probe electronic currents, the combined spectroscopy with optical and terahertz fields (see the article Terahertz spectroscopy and technology), and the rapidly developing area of semiconductor quantum optics (see the article Semiconductor quantum optics with dots).
https://en.wikipedia.org/wiki?curid=39444110
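The superradiant scaling discussed above (initial intensity proportional to the square of the number of emitters, decay rate enhanced by that number, time-integrated emission linear in it) can be checked numerically; a minimal sketch in arbitrary units:

```python
import numpy as np

gamma0 = 1.0                          # decay rate of a single emitter
t = np.linspace(0.0, 60.0, 60001)
dt = t[1] - t[0]

def integrated_emission(n):
    # N in-phase emitters: initial intensity ~ N^2 but enhanced decay
    # rate N*gamma0, so I(t) = N^2 exp(-N gamma0 t) in the superradiant
    # limit; trapezoidal rule for the time integral.
    y = n ** 2 * np.exp(-n * gamma0 * t)
    return (y[:-1] + y[1:]).sum() * dt / 2

# N^2 / (N gamma0) = N / gamma0: the time-integrated emission scales
# linearly with N even though the peak intensity scales as N^2.
```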
Thomas M. Klapötke Thomas Matthias Klapötke (born February 24, 1961 in Göttingen) is a German inorganic chemist. He was Professor of Inorganic Chemistry at the University of Glasgow from 1992 to 1997. Since 1997, he has been Professor of Inorganic Chemistry at the Ludwig Maximilian University of Munich (LMU). Klapötke currently does research at the University of Munich with a group of about 30 employees, working mainly on explosives. According to media reports, Klapötke directs the "only university chemistry lab in Germany that deals with defense". A flavour of the "exciting" nature of his research interests comes from the "Things I Won't Work With" thread of the chemistry blog "In The Pipeline", which introduces that day's topic thus: "But remember, N-amino azidotetrazole is the "starting" material for the work I’m talking about today. It’s a base camp, familiar territory, merely a jumping-off point in the quest for still more energetic compounds. The most alarming of them has two carbons, "fourteen" nitrogens, and no hydrogens at all, a formula that even Klapötke himself, who clearly has refined sensibilities when it comes to hellishly unstable chemicals, calls "exciting". ... The compound exploded in solution, it exploded on any attempts to touch or move the solid, and (most interestingly) it exploded "when they were trying to get an infrared spectrum of it."
https://en.wikipedia.org/wiki?curid=39446026
ABA Chemicals is a Chinese chemical manufacturing company headquartered in Taicang. The company was founded in 2006 as Suzhou ABA Chemicals and is traded on the Shenzhen Stock Exchange as 300261. The company provides contract development and manufacturing services to the pharmaceutical, biotechnology, and agrochemical industries. It offers various services for the production of fine chemicals, intermediates, and active pharmaceutical ingredients (APIs). The company also provides unnatural amino acids, pyrazoles, triazoles, and azaindoles, and produces an intermediate of the API levetiracetam for the treatment of epilepsy. The company is also involved in pesticide manufacturing intermediates, and acquired the Chinese firm Shanghai Puyi Chemical to consolidate its position in this market. It also acquired Amino Chemicals, a pharmaceutical API intermediate manufacturer and member of the Dipharma Group, in 2017; Amino is based in Malta and was established in 1992. ABA and Dipharma began a working partnership in the 1980s, which provided Dipharma access to the Chinese pharmaceutical market. As of 2017, ABA had six facilities: three manufacturing facilities in Jiangsu Province and three research and development facilities in Shanghai. The manufacturing facility in Taicang was ISO 9001, ISO 14001, and OHSAS 18001 certified as of 2016.
https://en.wikipedia.org/wiki?curid=39446967
Auger architectomics is a scientific imaging technique that allows biologists working in the field of nanotechnology to slice open the cells of living organisms to view and assess their internal workings. Using argon gas etching to open the cells and a scanning electron microscope to create a three-dimensional view, researchers can harness this technique to track how cells function. This is most importantly used to assess how cells react to medication, for instance in the field of cancer research. It was first developed in 2010 by Professor Lodewyk Kock and his team working in the biotechnology department at the University of the Free State in South Africa. The technique was adapted from Nano Scanning Auger Microscopy (NanoSAM), a technique used by physical scientists to study the surface structures of metals and inanimate materials such as semiconductors. Originally designed to observe yeast cells to find out more about how they manufacture the gas that causes bread to rise, the scientists discovered that the process could also be used to observe other living cells. In 2012 the technique was successfully applied to human cell tissue. The project was initiated at the University of the Free State by the Kock group in 1982, with the major inputs and breakthroughs occurring between 2007 and 2012. The initial aim was to explore lipid biochemical routes, which would uncover unique lipids in yeasts, and to develop new taxonomies based on the structures of these lipids.
https://en.wikipedia.org/wiki?curid=39455714
Auger architectomics This unfolded into the development of the anti-mitochondrial antifungal assay (3A system), in which yeast sensors are used to indicate anti-mitochondrial activity in compounds. These compounds, aimed at selectively switching off the mitochondria, might therefore find application in combating various diseases such as fungal infections and cancer. Auger architectomics, which opens up individual cells to scan them, can be used to assess the effectiveness of such drugs by determining if a single cell can be "powered down" with targeted treatment. Based on the development of the anti-mitochondrial antifungal assay system, the University of the Free State scientists felt there was a need to analyse the system in more detail. As a result, they adapted Nano Scanning Auger Microscopy, a technique used to scan the properties of metals in physics, to apply it to cells. The result was a combination of Auger electron physics, electron microscopy, and argon etching. The main challenge in applying the technology to biological material was to invent a sample preparation procedure that would ensure that the atomic and 3D structure remained stable while argon nano-etching occurred. During the NanoSAM scanning electron microscope visualisation, an electron beam at 25 kV is used instead of the normal 5 kV beam. Sample fixation and dehydration methods had to be developed and optimised to fit NanoSAM without creating sample distortions.
https://en.wikipedia.org/wiki?curid=39455714
Auger architectomics Dehydration regimes based on alcohol extraction procedures were installed and optimised, while fixation using various fixatives was included. Electron conductivity of samples throughout argon etching was assured by optimised gold sputtering. Firstly, the biological sample is plated with gold to stabilise the outer structure and make it electron conductive. It is then scanned in SEM mode and the surface visually enlarged. Auger electron physics is applied and selected areas on the sample surface are beamed with electrons. The incident beam ejects an electron from an inner orbital of an atom, leaving an open space. This is filled by an electron from an outer orbital through relaxation. Energy is released, causing the ejection of a further electron from the outer orbital. This electron is called the Auger electron. The amount of energy that is released is measured by Auger electron spectroscopy (AES) and used to identify the atom and quantify its signal intensity. Similarly, the surface area can be screened by an electron beam, eventually yielding Auger electrons that are mapped, showing the distribution of atoms in different colours over a surface area of predetermined size. The previously-screened surface of the sample is etched with argon, exposing a new surface of the sample that is then again analysed. In this way, a three-dimensional image and the element-composition architecture of the whole cell are visualised. This process in nanotechnology led to the discovery of gas bubbles inside yeasts.
https://en.wikipedia.org/wiki?curid=39455714
Auger architectomics This is considered a paradigm shift, since naked gas bubbles are not expected inside any type of cell due to structured water in the cytoplasm. This was exposed in a fluconazole-treated bubble-like sensor of the yeast "Nadsonia". This is the only technology known at present that can accomplish this type of nano-analysis on biological material. Nanotechnology developments in medicine allow microdoses of drugs and therapies to be delivered directly to infected cells, instead of killing large groups of cells, often at the expense of healthy cells. Gold at a nano-level has the ability to bind to certain types of biological material, which means that certain types of cells can be targeted. The technique of auger architectomics may be used to map the success or otherwise of targeted drug delivery by analysing cells. The team at the University of the Free State is working with the Mayo Clinic to use the technology as a part of their cancer research.
https://en.wikipedia.org/wiki?curid=39455714
Terahertz spectroscopy and technology Terahertz spectroscopy detects and controls properties of matter with electromagnetic fields that are in the frequency range between a few hundred gigahertz and several terahertz (abbreviated as THz). In many-body systems, several of the relevant states have an energy difference that matches the energy of a THz photon. Therefore, THz spectroscopy provides a particularly powerful method for resolving and controlling individual transitions between different many-body states. In this way, one gains new insights into many-body quantum kinetics and how they can be utilized in developing new technologies that are optimized down to the elementary quantum level. Different electronic excitations within semiconductors are already widely used in lasers, electronic components, and computers. At the same time, they constitute an interesting many-body system whose quantum properties can be modified, e.g., via a nanostructure design. Consequently, THz spectroscopy on semiconductors is relevant both for revealing new technological potentials of nanostructures and for exploring the fundamental properties of many-body systems in a controlled fashion. There is a great variety of techniques to generate THz radiation and to detect THz fields. One can, e.g., use an antenna, a quantum-cascade laser, a free-electron laser, or optical rectification to produce well-defined THz sources. The resulting THz field can be characterized via its electric field "E"("t").
https://en.wikipedia.org/wiki?curid=39456566
Terahertz spectroscopy and technology Present-day experiments can already produce fields "E"("t") with peak values in the range of MV/cm (megavolts per centimeter). To estimate how strong such fields are, one can compute the energy change such a field induces on an electron over a microscopic distance of one nanometer (nm), i.e., "L" = 1 nm. One simply multiplies the peak "E"("t") by the elementary charge "e" and "L" to obtain "e" "E"("t") "L" = 100 meV. In other words, such fields have a major effect on electronic systems because the mere field strength of "E"("t") can induce electronic transitions over microscopic scales. One possibility is to use such THz fields to study Bloch oscillations, in which semiconductor electrons move through the Brillouin zone only to return to where they started. The THz sources can also be extremely short, down to a single cycle of the THz field's oscillation. For one THz, that means a duration in the range of one picosecond (ps). Consequently, one can use THz fields to monitor and control ultrafast processes in semiconductors or to produce ultrafast switching in semiconductor components. Obviously, the combination of ultrafast duration and strong peak "E"("t") provides vast new possibilities for systematic studies in semiconductors. Besides the strength and duration of "E"("t"), the THz field's photon energy plays a vital role in semiconductor investigations because it can be made resonant with several intriguing many-body transitions.
https://en.wikipedia.org/wiki?curid=39456566
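The back-of-envelope estimates above can be written out explicitly (constants are standard CODATA values):

```python
# Energy scales for THz fields, following the estimate in the text.
e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck constant, J s

E_peak = 1e6 * 1e2       # 1 MV/cm expressed in V/m
L = 1e-9                 # 1 nm in m
# Potential-energy change of an electron over 1 nm, in eV
# (numerically equal to E_peak * L in volts): 100 meV.
delta_eV = E_peak * L

f = 1e12                 # 1 THz
photon_meV = h * f / e * 1e3   # photon energy, roughly 4.1 meV
period_ps = 1e12 / f           # oscillation period, 1 ps
```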
Terahertz spectroscopy and technology For example, electrons in the conduction band and holes, i.e., electronic vacancies, in the valence band attract each other via the Coulomb interaction. Under suitable conditions, electrons and holes can be bound into excitons, which are hydrogen-like states of matter. At the same time, the exciton binding energy ranges from a few to hundreds of meV, which can be matched energetically with a THz photon. Therefore, the presence of excitons can be uniquely detected based on the absorption spectrum of a weak THz field. Also simpler states, such as plasma and correlated electron–hole plasma, can be monitored or modified by THz fields. In optical spectroscopy, the detectors typically measure the intensity of the light field rather than the electric field because there are no detectors that can directly measure electromagnetic fields in the optical range. However, there are multiple techniques, such as antennas and electro-optical sampling, that can be applied to measure the time evolution of "E"("t") directly. For example, one can propagate a THz pulse through a semiconductor sample and measure the transmitted and reflected fields as a function of time. In this way, one collects information on semiconductor excitation dynamics completely in the time domain, which is the general principle of terahertz time-domain spectroscopy. By using short THz pulses, a great variety of physical phenomena have already been studied.
https://en.wikipedia.org/wiki?curid=39456566
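The principle of terahertz time-domain spectroscopy, recording E(t) and extracting the complex response by Fourier transform, can be sketched with synthetic pulses (all waveform parameters are made up for illustration):

```python
import numpy as np

# Toy time-domain measurement: a reference field E_ref(t) and a "sample"
# field that is attenuated and delayed; the complex transmission
# T(w) = E_sam(w) / E_ref(w) encodes absorption and refractive index.
dt = 0.01                                   # time step in ps
t = np.arange(0.0, 40.0, dt)

def pulse(t0):
    # few-cycle THz transient centered at t0 (made-up shape, 1 THz carrier)
    return np.exp(-((t - t0) / 0.3) ** 2) * np.cos(2 * np.pi * 1.0 * (t - t0))

E_ref = pulse(5.0)
E_sam = 0.5 * pulse(6.0)                    # 50% amplitude, 1 ps extra delay

w = np.fft.rfftfreq(t.size, dt)             # frequency axis in THz
with np.errstate(divide="ignore", invalid="ignore"):
    T = np.fft.rfft(E_sam) / np.fft.rfft(E_ref)

i1 = np.argmin(np.abs(w - 1.0))             # inspect near the 1 THz carrier
# |T| gives the amplitude attenuation; the phase slope gives the delay.
```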
Terahertz spectroscopy and technology For unexcited, intrinsic semiconductors one can determine the complex permittivity or, respectively, the THz-absorption coefficient and refractive index. The frequency of transversal-optical phonons, to which THz photons can couple, lies for most semiconductors at several THz. Free carriers in doped semiconductors or optically excited semiconductors lead to a considerable absorption of THz photons. Since THz pulses pass through non-metallic materials, they can be used for inspection and transmission of packaged items. THz fields can be applied to accelerate electrons out of their equilibrium. If this is done fast enough, one can measure elementary processes such as how fast the screening of the Coulomb interaction is built up. This was experimentally explored in Ref., where it was shown that screening is complete within tens of femtoseconds in semiconductors. These insights are very important for understanding how electronic plasma behaves in solids. The Coulomb interaction can also pair electrons and holes into excitons, as discussed above. Due to their analogy to the hydrogen atom, excitons have bound states that can be uniquely identified by the usual quantum numbers 1"s", 2"s", 2"p", and so on. In particular, the 1"s"-to-2"p" transition is dipole allowed and can be directly generated by "E"("t") if the photon energy matches the transition energy. In gallium arsenide-type systems, this transition energy is roughly 4 meV, which corresponds to a 1 THz photon.
https://en.wikipedia.org/wiki?curid=39456566
Terahertz spectroscopy and technology At resonance, the dipole "d" defines the Rabi energy Ω = "d" "E"("t") that determines the time scale on which the 1"s"-to-2"p" transition proceeds. For example, one can excite the excitonic transition with an additional optical pulse which is synchronized with the THz pulse. This technique is called transient THz spectroscopy. Using this technique one can follow the formation dynamics of excitons or observe THz gain arising from intraexcitonic transitions. Since a THz pulse can be intense and short, e.g., single-cycle, it is experimentally possible to realize situations where the duration of the pulse, the time scale related to the Rabi energy, and the THz photon energy ħω are degenerate. In this situation, one enters the realm of extreme nonlinear optics, where the usual approximations, such as the rotating-wave approximation (abbreviated as RWA) or the conditions for complete state transfer, break down. As a result, the Rabi oscillations become strongly distorted by the non-RWA contributions, the multiphoton absorption or emission processes, and the dynamic Franz–Keldysh effect, as measured in Refs. By using a free-electron laser, one can generate longer THz pulses that are more suitable for detecting the Rabi oscillations directly. This technique could indeed demonstrate the Rabi oscillations, or actually the related Autler–Townes splitting, in experiments.
https://en.wikipedia.org/wiki?curid=39456566
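The Rabi flopping set by Ω can be sketched for an idealized, resonantly driven two-level system in the RWA (a textbook limit, not the full semiconductor treatment):

```python
import numpy as np

# Resonant Rabi oscillations of an ideal two-level system in the
# rotating-wave approximation. With Rabi frequency W = d*E/hbar the
# excited-state population is |c_e(t)|^2 = sin^2(W t / 2); a pulse of
# area W*t = pi fully inverts the system.
def populations(W, t):
    c_g = np.cos(W * t / 2.0)          # ground-state amplitude
    c_e = -1j * np.sin(W * t / 2.0)    # excited-state amplitude
    return abs(c_g) ** 2, abs(c_e) ** 2

W = 2 * np.pi                          # one Rabi cycle per unit time
pg_pi, pe_pi = populations(W, 0.5)     # pi pulse (t = pi/W)
pg_2pi, pe_2pi = populations(W, 1.0)   # 2*pi pulse returns to the ground state
```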
Terahertz spectroscopy and technology The Rabi splitting has also been measured with a short THz pulse, and the onset of multi-THz-photon ionization has also been detected as the THz fields are made stronger. Recently, it has also been shown that the Coulomb interaction causes nominally dipole-forbidden intra-excitonic transitions to become partially allowed. Terahertz transitions in solids can be systematically approached by generalizing the semiconductor Bloch equations and the related many-body correlation dynamics. At this level, one realizes that THz fields are directly absorbed by two-particle correlations that modify the quantum kinetics of electron and hole distributions. Therefore, a systematic THz analysis must include the quantum kinetics of many-body correlations, which can be treated systematically, e.g., with the cluster-expansion approach. At this level, one can explain and predict a wide range of effects with the same theory, ranging from the Drude-like response of plasma to extreme nonlinear effects of excitons.
https://en.wikipedia.org/wiki?curid=39456566
Dunham expansion In quantum chemistry, the Dunham expansion is an expression for the rotational-vibrational energy levels of a diatomic molecule: E("v", "J") = Σ Y ("v" + 1/2)^"k" ["J"("J" + 1)]^"l", summed over the indices "k" and "l", where "v" and "J" are the vibrational and rotational quantum numbers. The constant coefficients formula_2 are called Dunham parameters, with formula_3 representing the electronic energy. The expression derives from a semiclassical treatment of a perturbational approach to deriving the energy levels. The Dunham parameters are typically calculated by a least-squares fitting procedure of energy levels with the quantum numbers. This table adapts the sign conventions from the book of Huber and Herzberg.
https://en.wikipedia.org/wiki?curid=39456961
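The least-squares determination of Dunham parameters mentioned above can be sketched as a linear fit; the Y_kl values below are invented for illustration and only loosely reminiscent of typical spectroscopic constants:

```python
import numpy as np

# Synthetic term values E(v, J) = sum_{k,l} Y_kl (v + 1/2)^k [J(J+1)]^l
# generated from invented constants, then recovered by linear least
# squares, mimicking the fitting procedure described above.
Y_true = {(1, 0): 2000.0,   # roughly a vibrational constant, cm^-1
          (2, 0): -15.0,    # anharmonicity
          (0, 1): 10.0,     # rotational constant
          (1, 1): -0.1}     # vibration-rotation interaction

levels = [(v, J) for v in range(5) for J in range(10)]
E = np.array([sum(Y * (v + 0.5) ** k * (J * (J + 1)) ** l
                  for (k, l), Y in Y_true.items()) for v, J in levels])

terms = list(Y_true)                       # the (k, l) pairs to fit
A = np.array([[(v + 0.5) ** k * (J * (J + 1)) ** l for (k, l) in terms]
              for v, J in levels])         # design matrix, one column per Y_kl
Y_fit, *_ = np.linalg.lstsq(A, E, rcond=None)
```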
PageRank algorithm in biochemistry The PageRank algorithm has several applications in biochemistry. ("PageRank" is an algorithm used in Google Search for ranking websites in their results, but it has been adopted for other purposes also. According to Google, PageRank works by "counting the number and quality of links to a page to determine a rough estimate of how important the website is," the underlying assumption being that more important websites are likely to receive more links from other websites.) The relative-importance-measuring property of the PageRank link analysis algorithm can be used to identify new possible drug targets in proteins. A PageRank-based algorithm can identify important protein targets in the pathogen organism better than a method considering only the number of incoming edges (in-degree) of a node in the metabolic network. The reason for this is that some already known, important protein targets do not have a high degree (are not hubs), and also, perturbing some hubs could result in unwanted physiological effects. The clinical use of most antibiotics results in a mutation of the pathogen organism leading to its resistance against the drug. Therefore, the development of new drugs is always needed. A potential first step in developing new drugs against currently threatening diseases (e.g. tuberculosis) is to find new drug targets in the causative agent of the disease, i.e. the pathogen microorganism, be it a bacterium or a protozoan parasite.
https://en.wikipedia.org/wiki?curid=39458335
PageRank algorithm in biochemistry After finding the target protein in the bacterium (or protozoan parasite), one could design small molecular drug compounds that bind to the protein and inhibit it. Public availability of biological network data makes the process of searching for new drug targets easier than it was before. By using the available metabolic networks, it is possible to find important nodes with link analysis algorithms like PageRank. In a recently published paper, biochemical reactions are treated as nodes of the metabolic network. In this directed network, reaction A has a directed edge towards reaction B if the product of the former enters the latter reaction as a substrate or co-factor. To select important nodes that could serve as drug targets, we might think of selecting high in-degree nodes (hubs; nodes with many incoming edges). It was shown, however,[2] that targeting hub proteins with many vital functions may unintentionally harm the living cell as well. A PageRank-based scoring method can detect important nodes that are not hubs and therefore might be better drug targets. The PageRank of a node A is the stationary limit probability that a random walker is at node A. In its original application, the personalization vector w captured the personal interest of a web surfer: websites interesting to a surfer appeared with a higher probability in the distribution given in vector w.
https://en.wikipedia.org/wiki?curid=39458335
PageRank algorithm in biochemistry In this metabolic network, w is personalized to proteins; w is larger for those proteins that appear in higher concentrations in the proteomics analysis of certain diseases. This personalized PageRank may identify other proteins related to the disease. However, by using only the personalized PageRank to identify important nodes, hubs still get a high score on average. To find non-hub important nodes instead, we should consider scoring the nodes by their "relativized personalized PageRank", i.e., their personalized PageRank scores divided by the number of edges pointing towards them (their in-degree). The relativized personalized PageRank rPPR(v) of a node v is given by rPPR(v) = PPR(v) / deg_in(v), where PPR(v) is the personalized PageRank score of node v and deg_in(v) is its in-degree. It was shown that by using this method numerous already validated drug targets can be found (e.g. in Mycobacterium tuberculosis); therefore, new, currently unknown targets might be detected as well.
https://en.wikipedia.org/wiki?curid=39458335
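The scoring described above can be sketched on a toy reaction network; the graph, damping factor, and personalization vector are all made-up illustrations (a plain power-iteration PageRank, not the published implementation):

```python
import numpy as np

# Relativized personalized PageRank, rPPR(v) = PPR(v) / in-degree(v),
# on an invented directed reaction network.
edges = [(0, 1), (2, 1), (3, 1), (1, 4), (2, 4), (3, 2)]
n, alpha = 5, 0.85
w = np.array([0.1, 0.1, 0.4, 0.3, 0.1])       # personalization vector

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = 1.0
outdeg = A.sum(axis=1)
P = np.divide(A, outdeg[:, None], out=np.zeros_like(A),
              where=outdeg[:, None] > 0)       # row-stochastic transitions
P[outdeg == 0] = w                             # dangling nodes restart to w

ppr = w.copy()
for _ in range(200):                           # power iteration
    ppr = alpha * ppr @ P + (1 - alpha) * w

indeg = A.sum(axis=0)
rppr = np.divide(ppr, indeg, out=np.zeros(n), where=indeg > 0)
```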
C24H29ClO4 The molecular formula C24H29ClO4 may refer to:
https://en.wikipedia.org/wiki?curid=39466369
Cláudio Costa Neto Claudio Costa Neto (born in Rio de Janeiro, December 11, 1932) is a Brazilian chemist and chemical engineer, one of the founders of the Institute of Chemistry at UFRJ. He is currently emeritus professor at the Institute of Chemistry of the Federal University of Rio de Janeiro. After obtaining his BSc degree in Industrial Chemistry and Chemical Engineering from the University of Brazil (currently the Universidade Federal do Rio de Janeiro) in 1954, Costa Neto worked under the supervision of Fritz Feigl, who was responsible for the development of spot tests for the identification and characterization of substances. He was responsible for creating the pioneering shale oil project ("projeto xistoquímica" in Portuguese). He was also responsible for the study of organic geochemistry at UFRJ. Vila Rosário: trilogy about chemistry and society: first part: why and how to eliminate tuberculosis in a society, Claudio Costa Neto, Rio de Janeiro, Calamus Publisher, 480 pages, 2002. Organic analysis: methods and procedures for characterizing organochemicals, Claudio Costa Neto, Rio de Janeiro, UFRJ publisher, 2004, Volumes 1 and 2.
https://en.wikipedia.org/wiki?curid=39466812
Rock mass plasticity Plasticity theory for rocks is concerned with the response of rocks to loads beyond the elastic limit. Historically, conventional wisdom has it that rock is brittle and fails by fracture, while plasticity is identified with ductile materials. In field-scale rock masses, structural discontinuities exist in the rock, indicating that failure has taken place. Since the rock has not fallen apart, contrary to the expectation of brittle behavior, elasticity theory is clearly not the last word. Theoretically, the concept of rock plasticity is based on soil plasticity, which is different from metal plasticity. In metal plasticity, for example in steel, the size of a dislocation is sub-grain size, while for soil it is the relative movement of microscopic grains. The theory of soil plasticity was developed in the 1960s at Rice University to provide for inelastic effects not observed in metals. Typical behaviors observed in rocks include strain softening, perfect plasticity, and work hardening. Application of continuum theory is possible in jointed rocks because of the continuity of tractions across joints even though displacements may be discontinuous. The difference between an aggregate with joints and a continuous solid is in the type of constitutive law and the values of constitutive parameters. Experiments are usually carried out with the intention of characterizing the mechanical behavior of rock in terms of rock strength.
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity The strength is the limit of elastic behavior and delineates the regions where plasticity theory is applicable. Laboratory tests for characterizing rock plasticity fall into four overlapping categories: confining pressure tests, pore pressure or effective stress tests, temperature-dependent tests, and strain-rate-dependent tests. Plastic behavior has been observed in rocks using all these techniques since the early 1900s. Boudinage experiments show that localized plasticity is observed in certain rock specimens that have failed in shear. Other examples of rock displaying plasticity can be seen in the work of Cheatham and Gnirk. Tests using compression and tension show necking of rock specimens, while tests using wedge penetration show lip formation. The tests carried out by Robertson show plasticity occurring at high confining pressures. Similar results are observable in the experimental work carried out by Handin and Hager, Paterson, and Mogi. From these results it appears that the transition from elastic to plastic behavior may also indicate the transition from softening to hardening. More evidence is presented by Robinson and Schwartz. It is observed that the higher the confining pressure, the greater the ductility observed. However, the strain to rupture remains roughly the same at around 1. The effect of temperature on rock plasticity has been explored by several teams of researchers. It is observed that the peak stress decreases with temperature.
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity Extension tests (with confining pressure greater than the compressive stress) show that the intermediate principal stress as well as the strain rate has an effect on the strength. The experiments on the effect of strain rate by Serdengecti and Boozer show that increasing the strain rate makes rock stronger but also makes it appear more brittle. Thus dynamic loading may actually cause the strength of the rock to increase substantially. Increase in temperature appears to increase the rate effect in the plastic behavior of rocks. After these early explorations in the plastic behavior of rocks, a significant amount of research has been carried out on the subject, primarily by the petroleum industry. From the accumulated evidence, it is clear that rock does exhibit remarkable plasticity under certain conditions and the application of a plasticity theory to rock is appropriate
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity The equations that govern the deformation of jointed rocks are the same as those used to describe the motion of a continuum:

  dρ/dt + ρ (∇·v) = 0   (balance of mass)
  ρ dv/dt = ∇·σ + ρ b   (balance of linear momentum)
  ρ de/dt = σ : (∇v) − ∇·q + ρ s   (balance of energy)

where ρ is the mass density, dρ/dt is the material time derivative of ρ, v is the particle velocity, u is the particle displacement, dv/dt is the material time derivative of v, σ is the Cauchy stress tensor, b is the body force density, e is the internal energy per unit mass, de/dt is the material time derivative of e, q is the heat flux vector, s is an energy source per unit mass, x is the location of the point in the deformed configuration, and t is the time. In addition to the balance equations, initial conditions, boundary conditions, and constitutive models are needed for a problem to be well-posed. For bodies with internal discontinuities such as jointed rock, the balance of linear momentum is more conveniently expressed in the integral form, also called the principle of virtual work:

  ∫_V σ : ∇(δu) dV = ∫_V ρ (b − dv/dt) · δu dV + ∫_S t̄ · δu dS

where V represents the volume of the body and S is its surface (including any internal discontinuities), δu is an admissible variation that satisfies the displacement (or velocity) boundary conditions, the divergence theorem has been used to eliminate derivatives of the stress tensor, and t̄ are the surface tractions on the surfaces S
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity The jump conditions across stationary internal stress discontinuities require that the tractions across these surfaces be continuous, i.e.,

  σ₁ · n = σ₂ · n

where σ₁ and σ₂ are the stresses in the sub-bodies on either side of the discontinuity, and n is the normal to the surface of discontinuity. For small strains, the kinematic quantity that is used to describe rock mechanics is the small strain tensor

  ε = ½ [∇u + (∇u)ᵀ].

If temperature effects are ignored, four types of constitutive relations are typically used to describe small strain deformations of rocks. These relations encompass elastic, plastic, viscoelastic, and viscoplastic behavior. A failure criterion or yield surface for the rock may then be expressed in the general form f(σ) = 0. Typical constitutive relations for rocks assume that the deformation process is isothermal, the material is isotropic, quasi-linear, and homogeneous and material properties do not depend upon position at the start of the deformation process, that there is no viscous effect and therefore no intrinsic time scale, that the failure criterion is rate-independent, and that there is no size effect. However, these assumptions are made only to simplify analysis and should be abandoned if necessary for a particular problem. Design of mining and civil structures in rock typically involves a failure criterion that is cohesive-frictional. The failure criterion is used to determine whether a state of stress in the rock will lead to inelastic behavior, including brittle failure
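The small strain tensor defined above can be sketched in a few lines. This is an illustrative helper (NumPy), not from any particular rock-mechanics code; the simple-shear displacement gradient is an invented example.

```python
import numpy as np

def small_strain(grad_u):
    """Small strain tensor: eps = 0.5 * (grad_u + grad_u^T)."""
    grad_u = np.asarray(grad_u, dtype=float)
    return 0.5 * (grad_u + grad_u.T)

# Simple shear: u_x = gamma * y, so the only nonzero entry of the
# displacement gradient is du_x/dy = gamma.
gamma = 0.01
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])
eps = small_strain(grad_u)
# The tensor shear strain is eps_xy = gamma / 2; the engineering
# shear strain is 2 * eps_xy = gamma.
```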
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity For rocks under high hydrostatic stresses, brittle failure is preceded by plastic deformation, and the failure criterion is used to determine the onset of plastic deformation. Typically, perfect plasticity is assumed beyond the yield point. However, strain hardening and softening relations with nonlocal inelasticity and damage have also been used. Failure criteria and yield surfaces are also often augmented with a cap to avoid unphysical situations where extreme hydrostatic stress states do not lead to failure or plastic deformation. Two widely used yield surfaces/failure criteria for rocks are the Mohr–Coulomb model and the Drucker–Prager model. The Hoek–Brown failure criterion is also used, notwithstanding the serious consistency problem with the model. The defining feature of these models is that tensile failure is predicted at low stresses. On the other hand, as the stress state becomes increasingly compressive, failure and yield require higher and higher values of stress. The governing equations, constitutive models, and yield surfaces discussed above are not sufficient if we are to compute the stresses and displacements in a rock body that is undergoing plastic deformation. An additional kinematic assumption is needed, i.e., that the strain in the body can be decomposed additively (or multiplicatively in some cases) into an elastic part and a plastic part. The elastic part of the strain can be computed from a linear elastic constitutive model
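The Mohr–Coulomb and Drucker–Prager criteria mentioned above can be evaluated numerically. The sketch below is illustrative only: the Drucker–Prager constants are matched to Mohr–Coulomb along the triaxial-compression meridian (one common convention, compression taken positive), and the cohesion, friction angle, and stress state are invented values.

```python
import numpy as np

def mohr_coulomb(sig1, sig3, c, phi):
    """Mohr-Coulomb in principal stresses (compression positive, sig1 >= sig3).
    f >= 0 signals failure."""
    return (sig1 - sig3) - (sig1 + sig3) * np.sin(phi) - 2.0 * c * np.cos(phi)

def drucker_prager(sig, c, phi):
    """Drucker-Prager: f = sqrt(J2) - alpha*I1 - k, with alpha and k matched
    to Mohr-Coulomb in triaxial compression (one common choice)."""
    sig = np.asarray(sig, dtype=float)
    I1 = np.trace(sig)
    s = sig - I1 / 3.0 * np.eye(3)           # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)            # 0.5 * s_ij s_ij
    alpha = 2.0 * np.sin(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    k = 6.0 * c * np.cos(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    return np.sqrt(J2) - alpha * I1 - k

c, phi = 10.0, np.radians(30.0)              # cohesion (MPa), friction angle
sig = np.diag([60.0, 20.0, 20.0])            # triaxial compression state, MPa
f_mc = mohr_coulomb(60.0, 20.0, c, phi)      # negative: inside the surface
f_dp = drucker_prager(sig, c, phi)           # negative: inside the surface
```

Both functions return a negative value for this stress state, i.e. the rock is predicted to remain elastic; increasing the deviatoric stress drives either f toward zero.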
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity However, determination of the plastic part of the strain requires a flow rule and a hardening model. Typical flow plasticity theories (for small deformation perfect plasticity or hardening plasticity) are developed on the basis of the following requirements: The above requirements can be expressed in three dimensions as follows. In metal plasticity, the assumption that the plastic strain increment and deviatoric stress tensor have the same principal directions is encapsulated in a relation called the flow rule. Rock plasticity theories also use a similar concept except that the requirement of pressure-dependence of the yield surface requires a relaxation of the above assumption. Instead, it is typically assumed that the plastic strain increment and the normal to the pressure-dependent yield surface have the same direction, i.e.,

  dεᵖ = dλ (∂f/∂σ)

where dλ is a hardening parameter. This form of the flow rule is called an associated flow rule and the assumption of co-directionality is called the normality condition. The function f is also called a plastic potential. The above flow rule is easily justified for perfectly plastic deformations, for which dσ = 0 when dεᵖ ≠ 0, i.e., the yield surface remains constant under increasing plastic deformation. This implies that the increment of elastic strain is also zero, dεᵉ = 0, because of Hooke's law. Therefore,

  dε = dεᵖ  and  dσ : dεᵖ = 0.

Hence, both the normal to the yield surface and the plastic strain tensor are perpendicular to the stress increment and must have the same direction
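The normality condition can be illustrated numerically: the plastic strain increment is taken along the gradient of the yield function with respect to stress. The sketch below uses a Drucker–Prager-style surface with invented alpha and k values and a finite-difference gradient; it shows direction only, not a full return-mapping scheme.

```python
import numpy as np

def yield_f(sig, alpha=0.23, k=12.0):
    """Drucker-Prager-style yield function f = sqrt(J2) - alpha*I1 - k
    (alpha and k are illustrative values, not calibrated to any rock)."""
    I1 = np.trace(sig)
    s = sig - I1 / 3.0 * np.eye(3)
    J2 = 0.5 * np.tensordot(s, s)
    return np.sqrt(J2) - alpha * I1 - k

def flow_direction(sig, h=1e-6):
    """Associated flow: unit tensor along df/dsigma (the normality condition),
    computed by central finite differences."""
    n = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            dp = np.zeros((3, 3))
            dp[i, j] = h
            n[i, j] = (yield_f(sig + dp) - yield_f(sig - dp)) / (2.0 * h)
    return n / np.linalg.norm(n)

sig = np.diag([60.0, 20.0, 20.0])
n = flow_direction(sig)
# Because the surface is pressure-dependent (alpha > 0), the flow direction
# has a nonzero trace, i.e. the flow rule predicts a volumetric component.
```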
https://en.wikipedia.org/wiki?curid=39471791
Rock mass plasticity For a work hardening material, the yield surface can expand with increasing stress. We assume Drucker's second stability postulate, which states that for an infinitesimal stress cycle the plastic work is positive, i.e.,

  dσ : dεᵖ ≥ 0.

The above quantity is equal to zero for purely elastic cycles. Examination of the work done over a cycle of plastic loading-unloading can be used to justify the validity of the associated flow rule. The Prager consistency condition is needed to close the set of constitutive equations and to eliminate the unknown parameter dλ from the system of equations. The consistency condition states that df = 0 at yield because f = 0, and hence

  (∂f/∂σ) : dσ + (∂f/∂λ) dλ = 0
https://en.wikipedia.org/wiki?curid=39471791
GeneTalk is a web-based platform, tool, and database for filtering, reduction and prioritization of human sequence variants from next-generation sequencing (NGS) data. GeneTalk allows editing annotations about sequence variants and builds up a crowd-sourced database with clinically relevant information for the diagnostics of genetic disorders. It also allows searching for information about specific sequence variants and connects users to experts on variants that are potentially disease-relevant. Users can upload NGS data in Variant Call Format (VCF) onto the server into their accounts. All entries of the file are preprocessed and shown in the integrated VCF viewer. Filtering tools are set by the user to reduce the number of clinically non-relevant variants. After filtering and prioritization, users can interpret relevant variants by retrieving information (annotations) about them from the database. The communication platform allows users to contact experts about specific variants, genes, or genetic disorders, to exchange knowledge and expertise. Several steps are required to analyze VCF files; the following filtering options may be used to reduce the number of non-relevant sequence variants. Users can share VCF files with colleagues and coworkers. The integrated mailing system allows users to contact experts easily. Users can create annotations and comments and rate annotations regarding medical relevance and scientific evidence, which is helpful for the community of users in the diagnosis of genetic disorders
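The quality-based filtering step can be illustrated with a minimal VCF line filter. The column layout follows the VCF 4.x fixed fields, but the thresholds and the filter itself are assumptions for illustration and do not represent GeneTalk's actual implementation.

```python
def filter_vcf_lines(lines, min_qual=30.0, keep_filter=("PASS", ".")):
    """Yield VCF lines that pass simple quality-based criteria.

    Fixed columns (VCF 4.x): CHROM POS ID REF ALT QUAL FILTER INFO ...
    Header lines (starting with '#') are passed through unchanged.
    """
    for line in lines:
        if line.startswith("#"):
            yield line
            continue
        fields = line.rstrip("\n").split("\t")
        qual, flt = fields[5], fields[6]
        if flt not in keep_filter:          # failed an upstream filter
            continue
        if qual != "." and float(qual) < min_qual:
            continue                         # low-confidence call
        yield line

vcf = [
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "1\t12345\t.\tA\tG\t55.0\tPASS\t.",
    "1\t12400\t.\tC\tT\t12.0\tPASS\t.",   # dropped: QUAL below threshold
    "2\t99100\t.\tG\tA\t80.0\tq10\t.",    # dropped: failed FILTER
]
kept = list(filter_vcf_lines(vcf))
# kept contains the two header lines plus the first variant only
```

A real pipeline would layer further filters on top of this (inheritance mode, allele frequency, annotation-based filters), which is the role the GeneTalk filtering tools play.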
https://en.wikipedia.org/wiki?curid=39479634
GeneTalk Registered users provide information about their field of knowledge in their profile and can be contacted by other users.
https://en.wikipedia.org/wiki?curid=39479634
History of fluorine Fluorine is a relatively new element in human applications. In ancient times, only minor uses of fluorine-containing minerals existed. The industrial use of fluorite, fluorine's source mineral, was first described by the early scientist Georgius Agricola in the 16th century, in the context of smelting. The name "fluorite" (and later "fluorine") derives from Agricola's invented Latin terminology. In the late 18th century, hydrofluoric acid was discovered. By the early 19th century, it was recognized that fluorine was a bound element within compounds, similar to chlorine. Fluorite was determined to be calcium fluoride. Because of fluorine's tight bonding, as well as the toxicity of hydrogen fluoride, the element resisted many attempts to isolate it. In 1886, French chemist Henri Moissan, later a Nobel Prize winner, succeeded in making elemental fluorine by electrolyzing a mixture of potassium fluoride and hydrogen fluoride. Large-scale production and use of fluorine began during World War II as part of the Manhattan Project. Earlier in the century, the main fluorochemicals had been commercialized by the DuPont company: refrigerant gases (Freon) and polytetrafluoroethylene plastic (Teflon). Some instances of ancient use of fluorite, the main source mineral of fluorine, for ornamental carvings exist. However, archeological finds are rare, perhaps in part because of the stone's softness. Two Roman cups made of Persian fluorite have been discovered and are currently exhibited at the British Museum
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine Pliny the Elder described a soft stone from Persia used in cups that may have been fluorite. Fluorite carvings from about 1000 AD have been discovered in the Americas in Indian burial grounds. The word "fluorine" derives from the Latin stem of the main source mineral, fluorite, which was first mentioned in 1529 by Georgius Agricola, the "father of mineralogy". He described fluorite as a flux—an additive that helps melt ores and slags during smelting. Fluorite stones were called "schone flusse" in the German of the time. Agricola, writing in Latin but describing 16th century industry, invented several hundred new Latin terms. For the "schone flusse" stones, he used the Latin noun "fluores", "fluxes", because they made metal ores flow when in a fire. After Agricola, the name for the mineral evolved to fluorspar (still commonly used) and then to fluorite. Fluorite mineral was also described in the writings of alchemist Basilius Valentinus, supposedly in the late 15th century. However, it is alleged that "Valentinus" was a hoax as his writings were not known until about 1600. Some sources claim that the first production of hydrofluoric acid was by Heinrich Schwanhard, a German glass cutter, in 1670. A peer-reviewed study of Schwanhard's writings, though, showed no specific mention of fluorite and only discussion of an extremely strong acid. It was hypothesized that this was probably nitric acid or aqua regia, both capable of etching soft glass
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine Andreas Sigismund Marggraf made the first definite preparation of hydrofluoric acid in 1764 when he heated fluorite with sulfuric acid in glass, which was greatly corroded by the product. In 1771, Swedish chemist Carl Wilhelm Scheele repeated this reaction. Scheele recognized the product of the reaction as an acid, which he called "fluss-spats-syran" (fluor-spar-acid); in English, it was known as "fluoric acid". In 1810, French physicist André-Marie Ampère suggested that hydrofluoric acid was a compound of hydrogen with an unknown element, analogous to chlorine. Fluorite was then shown to be mostly composed of calcium fluoride. Sir Humphry Davy originally suggested the name "fluorine", taking the root from the name of "fluoric acid" and adding the -ine suffix by analogy with the other halogens. This name, with modifications, came into most European languages. (Greek, Russian, and several other languages use the name "ftor" or derivatives, which was suggested by Ampère and comes from the Greek φθόριος ("phthorios"), meaning "destructive".) The New Latin name ("fluorum") gave the element its current symbol, F, although the symbol Fl was used in some early papers. The symbol Fl is now used for the super-heavy element flerovium. Progress in isolating the element was slowed by the exceptional dangers of generating fluorine: several 19th-century experimenters, the "fluorine martyrs", were killed or blinded
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine Davy, as well as the notable French chemists Joseph Louis Gay-Lussac and Louis Jacques Thénard, experienced severe pains from inhaling hydrogen fluoride gas; Davy's eyes were damaged. Irish chemists Thomas and George Knox developed fluorite apparatus for working with hydrogen fluoride, but were nonetheless severely poisoned. Thomas nearly died and George was an invalid for three years. Belgian chemist Paulin Louyet and French chemist Jerome Nickles tried to follow the Knox work, but they died from HF poisoning even though they were aware of the dangers.

- Humphry Davy of England: poisoned, recovered.
- George and Thomas Knox of Ireland: both poisoned, one bedridden 3 years, recovered.
- P. Louyet of Belgium: poisoned, died.
- Jerome Nickels of Nancy, France: poisoned, died.
- George Gore of England: fluorine/hydrogen explosion, narrowly escaped injury.
- Henri Moissan of France: poisoned several times; success, but shortened life.

Initial attempts to isolate the element were also hindered by material difficulties: the extreme corrosiveness and reactivity of hydrogen fluoride (and of fluorine gas), as well as problems finding a suitable conducting liquid for electrolysis. Davy tried to electrolyze HF but had to stop because the electrodes were damaged. He then shifted to (unsuccessful) chemical reactions. Edmond Frémy thought that passing electric current through pure hydrofluoric acid (dry HF) might work. Previously, hydrogen fluoride had only been available in a water solution
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine Frémy therefore devised a method for producing dry hydrogen fluoride by acidifying potassium bifluoride (KHF2). Unfortunately, pure hydrogen fluoride did not pass an electric current. Frémy also tried electrolyzing molten calcium fluoride and probably produced some fluorine (since he made calcium metal at the other electrode), but he was unable to collect the gas. English chemist George Gore also tried electrolyzing dry HF and may have made small quantities of fluorine gas in 1860. He reported an explosion after running his cell (hydrogen and fluorine recombine dramatically), but he recognized that an oxygen leak could also have caused the reaction. French chemist Henri Moissan, formerly one of Frémy's students, continued the search. After trying many different approaches, he built on Frémy and Gore's earlier attempts by combining potassium bifluoride and hydrogen fluoride. The resultant solution conducted electricity. Moissan also constructed especially corrosion-resistant equipment: containers crafted from a mixture of platinum and iridium (more chemically resistant than pure platinum) with fluorite stoppers. After 74 years of effort by many chemists, on 26 June 1886, Moissan isolated elemental fluorine. Moissan's report to the French Academy of making fluorine showed appreciation for the feat: "One can indeed make various hypotheses on the nature of the liberated gas; the simplest would be that we are in the presence of fluorine"
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine Moissan's 1887 publication documents reaction attempts of fluorine gas with several substances: sulfur (flames), hydrogen (explosion), carbon (no reaction), etc. Later, Moissan devised a less expensive apparatus for making fluorine: copper equipment coated with copper fluoride. Moissan also constructed special apparatus—5 m long platinum tubes with fluorite windows—to determine the slight yellow color of fluorine gas. (The gas appears transparent in small tubes or when allowed to escape. The color observation was not repeated until the 1980s, when his result was confirmed.) In 1906, two months before his death, Moissan received the Nobel Prize in chemistry. The citation: ...in recognition of the great services rendered by him in his investigation and isolation of the element fluorine...The whole world has admired the great experimental skill with which you have studied that savage beast among the elements. During the 1930s and 1940s, the DuPont company commercialized organofluorine compounds at large scales. Following trials of chlorofluorocarbons as refrigerants by researchers at General Motors, DuPont developed large-scale production of Freon-12. The work was carried out by Thomas Midgley Jr. DuPont and GM formed a joint venture in 1930 to market the new product; in 1949 DuPont took over the business. Freon proved to be a marketplace hit, rapidly replacing earlier, more toxic refrigerants and growing the overall market for kitchen refrigerators
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine In 1938, polytetrafluoroethylene (Teflon) was discovered by accident by a recently hired DuPont PhD, Roy J. Plunkett. While working with a cylinder of tetrafluoroethylene, he was unable to release the gas, although the cylinder's weight had not changed. Scraping down the container, he found white flakes of a polymer new to the world. Tests showed the substance was resistant to corrosion from most substances and had better high-temperature stability than any other plastic. By early 1941, a crash program was making commercial quantities. Large-scale production of elemental fluorine began during World War II. Germany used high-temperature electrolysis to produce tons of chlorine trifluoride, a compound planned to be used as an incendiary. The Manhattan Project in the United States produced even more fluorine for use in uranium separation. Gaseous uranium hexafluoride was used to separate uranium-235, an important nuclear explosive, from the heavier uranium-238 in diffusion plants. Because uranium hexafluoride releases small quantities of corrosive fluorine, the separation plants were built with special materials. All pipes were coated with nickel; joints and flexible parts were fabricated from Teflon. In 1958, a DuPont research manager in the Teflon business, Bill Gore, left the company because of its unwillingness to develop Teflon as wire-coating insulation. Gore's son Robert found a method for solving the wire-coating problem and the company W. L. Gore and Associates was born
https://en.wikipedia.org/wiki?curid=39483774
History of fluorine In 1969, Robert Gore developed an expanded polytetrafluoroethylene (ePTFE) membrane which led to the large Gore-Tex business in breathable rainwear. The company developed many other uses of PTFE. In the 1970s and 1980s, concerns developed over the role chlorofluorocarbons play in damaging the ozone layer. By 1996, almost all nations had banned chlorofluorocarbon refrigerants and commercial production ceased. Fluorine continued to play a role in refrigeration though: hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs) were developed as replacement refrigerants.
https://en.wikipedia.org/wiki?curid=39483774
Elimination (pharmacology) In pharmacology, the elimination or excretion of a drug is understood to be any one of a number of processes by which a drug is eliminated (that is, cleared and excreted) from an organism, either in an unaltered form (unbound molecules) or modified as a metabolite. The kidney is the main excretory organ, although others exist, such as the liver, the skin, the lungs, or glandular structures such as the salivary glands and the lacrimal glands. These organs or structures use specific routes to expel a drug from the body; these are termed elimination pathways: Drugs are excreted from the kidney by glomerular filtration and by active tubular secretion, following the same steps and mechanisms as the products of intermediate metabolism. Therefore, drugs that are filtered by the glomerulus are also subject to the process of passive tubular reabsorption. Glomerular filtration will only remove those drugs or metabolites that are not bound to proteins present in blood plasma (the free fraction), while many other types of drugs (such as the organic acids) are actively secreted. In the proximal and distal convoluted tubules, non-ionised acids and weak bases are reabsorbed both actively and passively. Weak acids are excreted when the tubular fluid becomes too alkaline, as this reduces passive reabsorption. The opposite occurs with weak bases. Poisoning treatments use this effect to increase elimination: alkalising the urine causes forced diuresis, which promotes excretion of a weak acid rather than its reabsorption
https://en.wikipedia.org/wiki?curid=39490943
Elimination (pharmacology) As the acid is ionised, it cannot pass through the plasma membrane back into the blood stream and is instead excreted with the urine. Acidifying the urine has the same effect for weakly basic drugs. On other occasions drugs combine with bile and enter the intestines. In the intestines the drug will join the unabsorbed fraction of the administered dose and be eliminated with the faeces, or it may undergo a new process of absorption and eventually be eliminated by the kidney. The other elimination pathways are less important in the elimination of drugs, except in very specific cases, such as the respiratory tract for alcohol or anaesthetic gases. The case of mother's milk is of special importance. The liver and kidneys of newborn infants are relatively undeveloped and they are highly sensitive to a drug's toxic effects. For this reason it is important to know whether a drug is likely to pass into a woman's breast milk so that this route of exposure can be avoided. Pharmacokinetics studies the manner and speed with which drugs and their metabolites are eliminated by the various excretory organs. This elimination will be proportional to the drug's plasma concentration. In order to model these processes a working definition is required for some of the concepts related to excretion. The plasma half-life or "half-life of elimination" is the time required to eliminate 50% of the absorbed dose of a drug from an organism
https://en.wikipedia.org/wiki?curid=39490943
Elimination (pharmacology) Or, put another way, the time that it takes for the plasma concentration to fall to half of its maximum level. The difference in a drug's concentration in arterial blood (before it has circulated around the body) and venous blood (after it has passed through the body's organs) represents the amount of the drug that the body has eliminated, or "cleared". Although clearance may also involve organs other than the kidney, it is almost synonymous with renal clearance or renal plasma clearance. Clearance is therefore expressed as the plasma volume totally freed of the drug per unit of time, and it is measured in units of volume per unit of time. Clearance can be determined on an overall, organism level («systemic clearance») or at an organ level (hepatic clearance, renal clearance, etc.). The equation that describes this concept is:

  CL_organ = Q · (C_a − C_v) / C_a

where CL_organ is the organ's clearance rate, C_a is the drug's plasma concentration in arterial blood, C_v is the drug's plasma concentration in venous blood, and Q is the organ's blood flow. Each organ will have its own specific clearance conditions, which will relate to its mode of action
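The organ clearance relation CL = Q · (C_a − C_v)/C_a can be checked with a short numeric sketch. The flow and concentration values below are illustrative only, not taken from the article.

```python
def organ_clearance(q, c_art, c_ven):
    """Organ clearance CL = Q * (Ca - Cv) / Ca.

    q: organ blood flow (e.g. mL/min); concentrations in any consistent unit.
    The ratio (Ca - Cv)/Ca is the extraction ratio E, so CL = Q * E.
    """
    extraction_ratio = (c_art - c_ven) / c_art
    return q * extraction_ratio

# Illustrative numbers: hepatic blood flow ~1500 mL/min, arterial drug
# concentration 10 mg/L, hepatic venous concentration 4 mg/L.
# Extraction ratio E = (10 - 4) / 10 = 0.6, so CL = 1500 * 0.6 = 900 mL/min.
cl = organ_clearance(1500.0, 10.0, 4.0)
```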
https://en.wikipedia.org/wiki?curid=39490943
Elimination (pharmacology) The «renal clearance» rate will be determined by factors such as the degree of plasma protein binding as the drug will only be filtered out if it is in the unbound free form, the degree of saturation of the transporters (active secretion depends on transporter proteins that can become saturated) or the number of functioning nephrons (hence the importance of factors such as kidney failure). As «hepatic clearance» is an active process it is therefore determined by factors that alter an organism's metabolism such as the number of functioning hepatocytes, this is the reason that liver failure has such clinical importance. The steady state or "stable concentration" is reached when the drug's supply to the blood plasma is the same as the rate of elimination from the plasma. It is necessary to calculate this concentration in order to decide the period between doses and the amount of drug supplied with each dose in prolonged treatments. Other parameters of interest include a drug's bioavailability and the "apparent volume of distribution". For elimination via bile please see: Estimation of Biliary Excretion of Foreign Compounds Using Properties of Molecular Structure. 2014. Sharifi M., Ghafourian T. AAPS J. 16(1) 65–78.
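The link between half-life and the approach to steady state can be made concrete: with first-order elimination and constant-rate input, the fraction of steady state reached after time t is 1 − 2^(−t/t½). The half-life below is an invented value for illustration.

```python
def fraction_of_steady_state(t, half_life):
    """Fraction of the steady-state concentration reached after time t,
    assuming constant-rate input and first-order elimination."""
    return 1.0 - 2.0 ** (-t / half_life)

t_half = 6.0  # hours, illustrative
# After 5 half-lives a drug is conventionally considered to be at
# steady state: 1 - 2**-5 = 0.96875, i.e. about 97% of the plateau.
f5 = fraction_of_steady_state(5 * t_half, t_half)
```

The same expression explains why the dosing interval in prolonged treatments is chosen relative to the half-life: dosing much less often than one half-life produces large swings around the target steady-state concentration.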
https://en.wikipedia.org/wiki?curid=39490943
Transition metal thiolate complex Transition metal thiolate complexes are metal complexes containing thiolate ligands. Thiolates are ligands that can be classified as soft Lewis bases. Therefore, thiolate ligands coordinate most strongly to metals that behave as soft Lewis acids, as opposed to those that behave as hard Lewis acids. Most complexes contain other ligands in addition to thiolate, but many homoleptic complexes with only thiolate ligands are known. The amino acid cysteine has a thiol functional group; consequently many proteins and enzymes feature cysteinate–metal cofactors. Metal thiolate complexes are commonly prepared by reactions of metal complexes with thiols (RSH), thiolates (RS−), and disulfides (RSSR). The salt metathesis reaction route is common. In this method, an alkali metal thiolate is treated with a transition metal halide to produce an alkali metal halide and the metal thiolate complex: The thiol ligand can also effect protonolysis of anionic ligands, as illustrated by the formation of an organonickel thiolate from nickelocene and ethanethiol: Many thiolate complexes are prepared by redox reactions. Organic disulfides oxidize low-valence metals, as illustrated by the oxidation of titanocene dicarbonyl: Some metal centers are oxidized by thiols, the coproduct being hydrogen gas: These reactions probably proceed via oxidative addition of the thiol. Thiols and especially thiolate salts are reducing agents. Consequently, they induce redox reactions with certain transition metals
https://en.wikipedia.org/wiki?curid=39491679
Transition metal thiolate complex This phenomenon is illustrated by the synthesis of cuprous thiolates from cupric precursors: Thiolate clusters of the type [Fe4S4(SR)4]2− occur in iron–sulfur proteins. Synthetic analogues can be prepared by combined redox and salt metathesis reactions: Divalent sulfur exhibits bond angles approaching 90°. Such acute angles are also seen in the M–S–C angles of metal thiolates. Having filled p-orbitals of suitable symmetry, thiolates are pi-donor ligands. This property plays a role in the stabilization of Fe(IV) states in the enzyme cytochrome P450. Thiolates are relatively basic ligands, being derived from conjugate acids with pKa values of 6.5 (thiophenol) to 10.5 (butanethiol). Consequently, thiolate ligands often bridge pairs of metals. One example is Fe2(SCH3)2(CO)6. Thiolate ligands, especially when nonbridging, are susceptible to attack by electrophiles including acids, alkylating agents, and oxidants. Metal thiolate functionality is pervasive in metalloenzymes. Iron–sulfur proteins, blue copper proteins, and the zinc-containing enzyme liver alcohol dehydrogenase feature thiolate ligands. Commonly the thiolate ligand is provided by a cysteine residue. All molybdoproteins feature thiolates in the form of cysteinyl and/or molybdopterin.
https://en.wikipedia.org/wiki?curid=39491679
Kharasch–Sosnovsky reaction The Kharasch–Sosnovsky reaction is the copper-catalysed radical allylic oxidation of an alkene to an allylic ester using a peroxy ester (e.g. tert-butyl peroxybenzoate) or a peroxide. Chiral ligands can be used to render the reaction asymmetric, constructing chiral C–O bonds via C–H bond activation. This is notable because asymmetric addition to allylic groups tends to be difficult, the transition state being highly symmetric. The reaction is named after Morris S. Kharasch and George Sosnovsky, who first reported it in 1958. Substituted oxazolines and thiazolines can be oxidized to the corresponding oxazoles and thiazoles via a modification of the classic reaction. The mechanism is believed to involve radical intermediates and copper in the +1, +2 and +3 oxidation states, via the following steps:

  Cu(I) + BzO–OtBu → Cu(II)–OBz + tBuO•
  tBuO• + CH2=CH–CH2R → tBuOH + CH2=CH–•CHR
  CH2=CH–•CHR + Cu(II)–OBz → CH2=CH–CHR–Cu(III)–OBz
  CH2=CH–CHR–Cu(III)–OBz → CH2=CH–CHR(OBz) + Cu(I)

The last step, a reductive elimination from an organocopper(III) intermediate that regenerates the Cu(I) catalyst and forms the product, is proposed to take place via a seven-membered-ring transition state.
https://en.wikipedia.org/wiki?curid=39494260
Rubber Chemistry and Technology is a quarterly peer-reviewed scientific journal covering research, technical developments, and chemical engineering relating to rubber and its allied substances. It was established in 1928, with Carroll C. Davis as its first editor-in-chief. It is published by the American Chemical Society Rubber Division. The journal currently publishes four issues per year. One issue is dedicated to reviews of topics in rubber science, including a review by the most recent Goodyear medalist. The remaining issues contain original research contributions. The journal is abstracted and indexed in: According to the "Journal Citation Reports", the journal has a 2017 impact factor of 1.747. The following persons have been editors-in-chief of the journal:
https://en.wikipedia.org/wiki?curid=39496339
Trifluoromethylsulfur pentafluoride Trifluoromethylsulfur pentafluoride, CF3SF5, is a rare industrial greenhouse gas. It was first identified in the atmosphere in 2000. CF3SF5 is considered to be one of several "super-greenhouse gases". On a per-molecule basis, it is considered to be the most potent greenhouse gas present in Earth's atmosphere, having a global warming potential of about 18,000 times that of carbon dioxide. The chemical is predicted to have a lifetime of 800 years in the atmosphere. However, the current concentration of trifluoromethylsulfur pentafluoride remains at a level that is unlikely to measurably contribute to global warming. The presence of the gas in the atmosphere is attributed to anthropogenic sources: it may be a by-product of the manufacture of fluorochemicals, originating from reactions of SF6 with fluoropolymers used in electronic devices and in microchips, or it may form in high-voltage equipment, where SF5 (a breakdown product of the insulating gas SF6) reacts with CF3 to give the CF3SF5 molecule. The chemistry of this compound is similar to that of sulfur hexafluoride (SF6).
https://en.wikipedia.org/wiki?curid=39498136
Active center (polymer science) An active center (sometimes called active site or kinetic-chain carrier) in polymer science refers to the site on a chain carrier at which reaction occurs.
https://en.wikipedia.org/wiki?curid=39498243
Indian Salt Service is a Central Civil Service of the Government of India. Under the administrative control of the Ministry of Commerce and Industry, it is one of the smallest Central services under the Government of India. The organized and uniform collection of tax revenue on salt in British India began under the British Raj. Both before and after that, various native rulers of the Indian Princely states (outside British India proper) collected such revenue in accordance with their own revenue and administrative requirements and resources. In 1856, the government appointed the young William Chichele Plowden, Secretary of the Board of Revenue of the North West Provinces, to report on the establishment of a uniform system of revenue realisation from salt within the British Provinces, and he recommended the extension of the excise system, the reduction of duty, and the introduction of a system of licensing as the measures to achieve this goal. In 1876, separate departments under a Salt Commissioner were set up, and these operated at the level of each British Province and Presidency. It was with the passing of the Government of India Act 1935, that within British India (which then included much of present-day Pakistan) salt came under the exclusive control of the central government, with the Government of India taking over the task of collecting salt revenue and transferring it from the provincial salt agencies to the Central Excise and Revenue Department
https://en.wikipedia.org/wiki?curid=39501095
Indian Salt Service In 1944, the Government of India passed the Central Excises and Salt Act which unified and amended all laws dealing with duties on excise and salt. The Salt Department was originally a part of the Central Board of Revenue under the Ministry of Finance, but since a reorganisation of the ministries of India in 1957 it has come under the authority of the Ministry of Commerce and Industry. According to the Union List of subjects under the Seventh Schedule of the Indian Constitution, the "manufacture, supply and distribution of salt by Union agencies; regulation and control of manufacture, supply and distribution of salt by other agencies", is the responsibility of the Government of India. The posts of Salt Controller, Deputy Salt Controller and Assistant Salt Controller were re-categorized as Salt Commissioner, Deputy Salt Commissioner and Assistant Salt Commissioner in 1952, and the Indian Salt Service was created in 1954 for the realisation of the entry under the Union List. The Salt Service has both Group A and Group B wings. The Salt Service is one of the smallest services under the Government of India with a sanctioned strength of only 11 posts. As a central civil service, recruitment to the service is conducted by the Union Public Service Commission. The service is part of India's Salt Organization which is headquartered in Jaipur. The service is headed by the Salt Commissioner, below whom are five Deputy Salt Commissioners and nine Assistant Salt Commissioners who man the agency with the help of other supporting staff
https://en.wikipedia.org/wiki?curid=39501095
Indian Salt Service The Deputy Salt Commissioners head regional offices and the Assistant Salt Commissioners are in charge of divisional offices of the organisation. The Service has four regional offices, at Chennai, Mumbai, Ahmedabad and Kolkata, and field offices in the salt-producing states. The Salt Service is tasked with several functions, including monitoring and improving the quality of salt, setting production targets, providing technical guidance to salt manufacturers, leasing and managing departmental lands for salt production, collecting cess, fees and rents, and implementing various schemes aimed at combating iodine deficiency as well as programs for promoting the growth of the salt industry in India.
https://en.wikipedia.org/wiki?curid=39501095
4-Caffeoyl-1,5-quinide (4-caffeoylquinic-1,5-lactone or 4-CQL) is found in roasted coffee beans. It is formed by lactonization of 4-"O"-caffeoylquinic acid during the roasting process. It is reported to possess opioid antagonist properties in mice.
https://en.wikipedia.org/wiki?curid=39501838
Drucker stability (also called the Drucker stability postulates) refers to a set of mathematical criteria that restrict the possible nonlinear stress–strain relations that can be satisfied by a solid material. The postulates are named after Daniel C. Drucker. A material that does not satisfy these criteria is often found to be unstable in the sense that application of a load to a material point can lead to arbitrary deformations at that material point unless an additional length or time scale is specified in the constitutive relations. The postulates are often invoked in nonlinear finite element analysis: materials that satisfy these criteria are generally well suited for numerical analysis, while materials that fail to satisfy them are likely to present difficulties (i.e., non-uniqueness or singularity) during the solution process. Drucker's first stability criterion (first proposed by Rodney Hill and also called Hill's stability criterion) is a strong condition on the incremental internal energy of a material which states that the incremental internal energy can only increase. The criterion may be written as dσ : dε > 0, where dσ is the stress increment tensor associated with the strain increment tensor dε through the constitutive relation. Drucker's postulate is applicable to elastic–plastic materials and states that in a cycle of plastic deformation the second-order plastic work is always positive. This postulate can be expressed in incremental form as dσ : dε^p ≥ 0, where dε^p is the incremental plastic strain tensor.
https://en.wikipedia.org/wiki?curid=39504221
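Hill's criterion can be checked numerically for a given loading step. The sketch below is a minimal illustration under illustrative increment values, not a production constitutive routine: it evaluates the double contraction dσ : dε for small 2×2 increment tensors and reports whether the step satisfies the criterion.

```python
import numpy as np

def hill_stable(d_sigma, d_epsilon):
    """Check Hill's (Drucker's first) stability criterion:
    the incremental work d_sigma : d_epsilon must be positive."""
    # tensordot with default axes=2 computes the double contraction
    return float(np.tensordot(d_sigma, d_epsilon)) > 0.0

# Hardening step: stress and strain increments aligned -> stable
ds = np.array([[1.0, 0.0], [0.0, 0.5]])
de = np.array([[0.01, 0.0], [0.0, 0.005]])
print(hill_stable(ds, de))   # True

# Softening step: stress decreases while strain grows -> unstable
print(hill_stable(-ds, de))  # False
```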
Ionomics is the measurement of the total elemental composition of an organism to address biological problems. Questions within physiology, ecology, evolution, and many other fields can be investigated using ionomics, often coupled with bioinformatics and other genetic tools. Observing an organism's ionome is a powerful approach to the functional analysis of its genes and the gene networks. Information about the physiological state of an organism can also be revealed indirectly through its ionome, for example iron deficiency in a plant can be identified by looking at a number of other elements, rather than iron itself. A more typical example is in a blood test, where a number of conditions involving nutrition or disease may be inferred from testing this single tissue for sodium, potassium, iron, chlorine, zinc, magnesium, calcium and copper. In practice, the total elemental composition of an organism is rarely determined. The number and type of elements measured are limited by the available instrumentation, the assumed value of the element in question, and the added cost of measuring each additional element. Also, a single tissue may be measured instead of the entire organism, as in the example given above of a blood test, or in the case of plants, the sampling of just the leaves or seeds. These are simply issues of practicality. Various techniques may be fruitfully used to measure elemental composition
https://en.wikipedia.org/wiki?curid=39505431
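The blood-test example above can be sketched in code: given a measured elemental profile and a set of reference ranges, flag the elements that fall outside their range. The ranges and sample values below are illustrative placeholders, not clinical reference values.

```python
# Minimal sketch of ionomic phenotyping: flag elements in a measured
# profile that lie outside hypothetical reference ranges.
# All ranges and sample values here are illustrative only.

reference_ranges = {          # element -> (low, high), arbitrary units
    "Na": (135, 145), "K": (3.5, 5.0), "Fe": (60, 170),
    "Zn": (70, 120), "Ca": (8.5, 10.5),
}

def flag_profile(profile):
    """Return the elements whose measured value is out of range."""
    return {el: v for el, v in profile.items()
            if el in reference_ranges
            and not (reference_ranges[el][0] <= v <= reference_ranges[el][1])}

sample = {"Na": 140, "K": 5.6, "Fe": 40, "Zn": 90, "Ca": 9.0}
print(flag_profile(sample))  # {'K': 5.6, 'Fe': 40}
```

Note that the low iron would be flagged here directly; in the plant example from the text, deficiency is instead inferred from the pattern across several *other* elements, which would correspond to a multivariate classifier rather than per-element thresholds.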
Ionomics Among the best are Inductively-Coupled Plasma Optical Emission Spectroscopy (ICP-OES), Inductively-Coupled Plasma Mass Spectrometry (ICP-MS), X-Ray Fluorescence (XRF), synchrotron-based microXRF, and Neutron activation analysis (NAA). This latter technique has been applied to perform ionomics in the study of breast cancer, colorectal cancer and brain cancer. High-throughput ionomic phenotyping has created the need for data management systems to collect, organize and share the collected data with researchers worldwide. The ionomicshub (iHUB) is a collaborative international network for ionomics
https://en.wikipedia.org/wiki?curid=39505431
Cupping tester Cupping testers are employed in testing the elongation and deformability of lacquers and protective coatings applied to metal substrates. This sort of test is essential because it allows the durability of a lacquer or protective coating to be tested before the coating is applied to a product. The cupping tester operates by using a punch to push upon the unpainted side of a coated panel until the painted side shows signs of deformation (cracks) in the coating. When cracks start to appear, the lacquer or coating's durability can be recorded; this durability is known as the coating's flexibility rating. The test can also be performed to a predetermined depth: for example, if a coating needs a certain flexibility rating, the tester is set to a depth in accordance with that rating, without the need to deform the coating until it fails. There is a variety of cupping testers, ranging from the manual to the fully automated, and from single-substrate to multiple-substrate testers. Generally, however, they consist of a solid metal base containing a circular opening over which the coating is tested, and a punch that rises up to and through this opening in order to apply pressure to the coating and substrate. In addition to these core components, there is usually an included magnifier which helps to determine the point of major deformation. These components can be arranged in a variety of ways, but they remain consistent in all designs.
https://en.wikipedia.org/wiki?curid=39509720
Mutagenesis (molecular biology technique) In molecular biology, mutagenesis is an important laboratory technique whereby DNA mutations are deliberately engineered to produce libraries of mutant genes, proteins, strains of bacteria, or other genetically modified organisms. The various constituents of a gene, as well as its regulatory elements and its gene products, may be mutated so that the functioning of a genetic locus, process, or product can be examined in detail. The mutation may produce mutant proteins with interesting properties, or enhanced or novel functions, that may be of commercial use. Mutant strains may also be produced that have practical application or allow the molecular basis of a particular cell function to be investigated. Many methods of mutagenesis exist today. Initially, the kind of mutations artificially induced in the laboratory were entirely random; however, with the development of site-directed mutagenesis, more specific changes can be made. Since 2013, development of the CRISPR/Cas9 technology, based on a prokaryotic viral defense system, has allowed for the editing or mutagenesis of a genome "in vivo". Site-directed mutagenesis has proved useful in situations where random mutagenesis is not: random mechanisms such as UV exposure cannot target specific regions or sequences of the genome, whereas approaches such as combinatorial or insertional mutagenesis can. There are many techniques and outcomes of site-directed mutagenesis; two major approaches are combinatorial and insertional mutagenesis
https://en.wikipedia.org/wiki?curid=39510164
Mutagenesis (molecular biology technique) Non-random mutagenesis has allowed researchers to clone DNA, determine the effects of certain mutagens, engineer proteins, help immunocompromised patients, research HIV, fight cancers such as leukemia, and cure beta-thalassemia. Early approaches to mutagenesis relied on methods which produced entirely random mutations. In such methods, cells or organisms are exposed to mutagens such as UV radiation or mutagenic chemicals, and mutants with desired characteristics are then selected. Hermann Muller discovered in 1927 that X-rays can cause genetic mutations in fruit flies, and went on to use the mutants he created for his studies in genetics. For "Escherichia coli", mutants may be selected first by exposure to UV radiation, then plated onto an agar medium. The colonies formed are then replica-plated, once onto a rich medium and once onto a minimal medium; mutants that have specific nutritional requirements can then be identified by their inability to grow on the minimal medium. Similar procedures may be repeated with other types of cells and with different media for selection. A number of methods for generating random mutations in specific proteins were later developed to screen for mutants with interesting or improved properties. These methods may involve the use of doped nucleotides in oligonucleotide synthesis, or conducting a PCR reaction in conditions that enhance misincorporation of nucleotides (error-prone PCR), for example by reducing the fidelity of replication or using nucleotide analogues
https://en.wikipedia.org/wiki?curid=39510164
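The random methods described above (UV exposure, error-prone PCR) can be caricatured as a per-base substitution process. The sketch below is a toy model under that assumption; the function name and error rate are illustrative, and real error-prone PCR has biased, position-dependent error spectra.

```python
import random

# Toy model of random mutagenesis (e.g. error-prone PCR): each base is
# substituted with a fixed per-base probability. Illustrative only.

def random_mutagenesis(seq, rate, rng):
    """Return a copy of seq with random substitutions at the given rate."""
    bases = "ACGT"
    out = []
    for b in seq:
        if rng.random() < rate:
            # substitute with one of the three *other* bases
            out.append(rng.choice([x for x in bases if x != b]))
        else:
            out.append(b)
    return "".join(out)

rng = random.Random(0)          # seeded for reproducibility
template = "ATGGCTAAAGGT"
mutants = [random_mutagenesis(template, 0.05, rng) for _ in range(5)]
print(mutants)
```

A library of such mutants would then be screened or selected for the desired property, mirroring the select-after-mutate workflow described in the text.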
Mutagenesis (molecular biology technique) A variation of this method for integrating non-biased mutations in a gene is sequence saturation mutagenesis. PCR products which contain mutation(s) are then cloned into an expression vector, and the mutant proteins produced can then be characterised. In animal studies, alkylating agents such as "N"-ethyl-"N"-nitrosourea (ENU) have been used to generate mutant mice. Ethyl methanesulfonate (EMS) is also often used to generate animal and plant mutants. Under European Union law (the 2001/18 directive), this kind of mutagenesis may be used to produce GMOs, but the products are exempted from regulation: no labeling, no evaluation. Random mutagenesis techniques offer some control over how many mutations are introduced, but little over where they occur: UV mutagenesis, for example, changes single nucleotides but offers little control over which nucleotide is changed. Many researchers therefore seek to introduce selected changes to DNA in a precise, site-specific manner. Before site-directed mutagenesis was available, all induced mutations were random, and scientists had to select for the desired phenotype in order to obtain the desired mutation. Early attempts used analogs of nucleotides and other chemicals to generate localized point mutations. Such chemicals include aminopurine, which induces an AT to GC transition, while nitrosoguanidine, bisulfite, and N-hydroxycytidine may induce a GC to AT transition
https://en.wikipedia.org/wiki?curid=39510164
Mutagenesis (molecular biology technique) These techniques allow specific mutations to be engineered into a protein; however, they are not flexible with respect to the kinds of mutants generated, nor are they as specific as later methods of site-directed mutagenesis, and therefore have some degree of randomness. With other technologies, such as cleavage of DNA at specific sites on the chromosome, addition of new nucleotides, and exchange of base pairs, it is now possible to decide where mutations go. Current techniques for site-specific mutation commonly involve using pre-fabricated mutagenic oligonucleotides in a primer extension reaction with DNA polymerase. This method allows for point mutations, or the deletion or insertion of small stretches of DNA, at specific sites. Advances in methodology have made such mutagenesis a relatively simple and efficient process, and newer, more efficient methods of site-directed mutagenesis are constantly being developed. One such technique, "seamless ligation cloning extract" (SLiCE for short), allows for the cloning of certain sequences of DNA within the genome. This method is now used in polymerase chain reactions to make site-directed mutagenesis more efficient by allowing more than one DNA fragment to be inserted into the genome at once. The uses of site-directed mutagenesis are numerous; one important use is analysing what the mutations reveal once they have occurred
https://en.wikipedia.org/wiki?curid=39510164
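The primer extension approach described above starts from a pre-fabricated mutagenic oligonucleotide: the desired mutation is embedded in the middle of a primer whose flanking arms match the template so it can still anneal. A minimal sketch of that design step follows; the names, sequences, and arm length are illustrative, and real designs also check melting temperature, GC content, and secondary structure.

```python
# Sketch of designing a mutagenic oligonucleotide for site-directed
# mutagenesis: the desired codon replaces the original one, flanked by
# template-matching arms. Illustrative names and arm length only.

def mutagenic_primer(template, codon_index, new_codon, arm=15):
    """Return a primer carrying new_codon at codon_index (0-based),
    flanked by up to `arm` matching bases on each side."""
    start = codon_index * 3
    left = template[max(0, start - arm):start]
    right = template[start + 3:start + 3 + arm]
    return left + new_codon + right

gene = "ATGGCTGAAAAACTGGTTGAAGACGCTGGTATCAAA"
# Mutate codon 2 (GAA, Glu) to GCT (Ala):
print(mutagenic_primer(gene, 2, "GCT"))  # ATGGCTGCTAAACTGGTTGAAGAC
```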
Mutagenesis (molecular biology technique) Site-directed mutations were recently used to determine how susceptible certain species were to chemicals that are often used in labs. The experiment used site-directed mutagenesis to mimic the mutations expected from the specific chemical; the mutations resulted in changes to specific amino acids, and the effects of these mutations were analyzed. The site-directed approach may be done systematically in such techniques as alanine scanning mutagenesis, whereby residues are systematically mutated to alanine in order to identify residues important to the structure or function of a protein. Another comprehensive approach is site saturation mutagenesis, where one codon or a set of codons may be substituted with all possible amino acids at the specific positions. Combinatorial mutagenesis is a site-directed protein engineering technique whereby multiple mutants of a protein can be simultaneously engineered based on analysis of the effects of additive individual mutations. It provides a useful method to assess the combinatorial effect of a large number of mutations on protein function. Large numbers of mutants may be screened for a particular characteristic by combinatorial analysis. In this technique, multiple positions or short sequences along a DNA strand may be exhaustively modified to obtain a comprehensive library of mutant proteins. The rate of incidence of beneficial variants can be improved by different methods for constructing mutagenesis libraries
https://en.wikipedia.org/wiki?curid=39510164
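Alanine scanning, as described above, systematically replaces each residue with alanine. At the sequence level the mutant library can be enumerated directly; the sketch below does so, skipping positions that are already alanine (function and variable names are illustrative, and in practice each mutant would be built by site-directed mutagenesis and assayed).

```python
# Sketch of alanine scanning at the protein-sequence level: generate
# every single-position alanine mutant of a sequence.

def alanine_scan(protein):
    """Yield (position, mutant) pairs, skipping existing Ala residues."""
    for i, aa in enumerate(protein):
        if aa != "A":
            yield i, protein[:i] + "A" + protein[i + 1:]

peptide = "MKVA"
for pos, mutant in alanine_scan(peptide):
    print(pos, mutant)
# 0 AKVA
# 1 MAVA
# 2 MKAA
```

Site saturation mutagenesis would follow the same enumeration pattern, but substituting all 20 amino acids (or all codons) at each chosen position instead of alanine alone.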
Mutagenesis (molecular biology technique) One approach to this technique is to extract and replace a portion of the DNA sequence with a library of sequences containing all possible combinations at the desired mutation site. The content of the inserted segment can include sequences of structural significance, immunogenic property, or enzymatic function. A segment may also be inserted randomly into the gene in order to assess structural or functional significance of a particular part of a protein. The insertion of one or more base pairs, resulting in DNA mutations, is also known as insertional mutagenesis. Engineered mutations such as these can provide important information in cancer research, such as mechanistic insights into the development of the disease. Retroviruses and transposons are the chief instrumental tools in insertional mutagenesis. Retroviruses, such as the mouse mammary tumor virus and murine leukemia virus, can be used to identify genes involved in carcinogenesis and understand the biological pathways of specific cancers. Transposons, chromosomal segments that can undergo transposition, can be designed and applied to insertional mutagenesis as an instrument for cancer gene discovery. These chromosomal segments allow insertional mutagenesis to be applied to virtually any tissue of choice while also allowing for more comprehensive, unbiased depth in DNA sequencing. Researchers have found four mechanisms of insertional mutagenesis that can be used on humans. The first mechanism is called enhancer insertion
https://en.wikipedia.org/wiki?curid=39510164
Mutagenesis (molecular biology technique) Enhancers boost transcription of a particular gene by interacting with a promoter of that gene. This particular mechanism was first used to help severely immunocompromised patients in need of bone marrow. Gammaretroviruses carrying enhancers were then inserted into patients. The second mechanism is referred to as promoter insertion. Promoters provide our cells with the specific sequences needed to begin transcription. Promoter insertion has helped researchers learn more about the HIV virus. The third mechanism is gene inactivation. An example of gene inactivation is using insertional mutagenesis to insert a retrovirus that disrupts the genome of the T cell in leukemia patients, giving the cells a chimeric antigen receptor (CAR) that allows the T cells to target cancer cells. The final mechanism is referred to as mRNA 3' end substitution. Genes occasionally undergo point mutations that cause beta-thalassemia, a condition that interrupts red blood cell function; to fix this problem, the correct gene sequence for the red blood cells is introduced and a substitution is made. Homologous recombination can be used to produce specific mutations in an organism. A vector containing a DNA sequence similar to the gene to be modified is introduced to the cell and, by a process of recombination, replaces the target gene in the chromosome. This method can be used to introduce a mutation or knock out a gene, for example as used in the production of knockout mice
https://en.wikipedia.org/wiki?curid=39510164
Mutagenesis (molecular biology technique) Since 2013, the development of CRISPR-Cas9 technology has allowed for the efficient introduction of different types of mutations into the genome of a wide variety of organisms. The method does not require a transposon insertion site, leaves no marker, and its efficiency and simplicity have made it the preferred method for genome editing. As the cost of DNA oligonucleotide synthesis falls, artificial synthesis of a complete gene is now a viable method for introducing mutations into a gene. This method allows for extensive mutation at multiple sites, including the complete redesign of the codon usage of a gene to optimise it for a particular organism.
https://en.wikipedia.org/wiki?curid=39510164
C18H36O The molecular formula C18H36O may refer to:
https://en.wikipedia.org/wiki?curid=39510740
Mass cytometry is a mass spectrometry technique based on inductively coupled plasma mass spectrometry and time-of-flight mass spectrometry used for the determination of the properties of cells (cytometry). In this approach, antibodies are conjugated with isotopically pure elements, and these antibodies are used to label cellular proteins. Cells are nebulized and sent through an argon plasma, which ionizes the metal-conjugated antibodies. The metal signals are then analyzed by a time-of-flight mass spectrometer. The approach overcomes limitations of spectral overlap in flow cytometry by utilizing discrete isotopes as a reporter system instead of traditional fluorophores, which have broad emission spectra. Tagging technology and instrument development occurred at the University of Toronto and DVS Sciences, Inc. CyTOF (cytometry by time of flight) was initially commercialized by DVS Sciences in 2009. In 2014, Fluidigm acquired DVS Sciences, becoming a reference company in single-cell technology. The CyTOF, CyTOF2, and Helios (CyTOF3) have been commercialized to date. Fluidigm sells a variety of commonly used metal-antibody conjugates and an antibody conjugation kit. Data are recorded in tables that list, for each cell, the signal detected per channel, which is proportional to the number of antibodies tagged with the corresponding channel's isotope bound to that cell. These data are formatted as FCS files, which are compatible with traditional flow cytometry software
https://en.wikipedia.org/wiki?curid=39515134
Mass cytometry Due to the high-dimensional nature of mass cytometry data, novel data analysis tools have been developed as well. Advantages of the technique include minimal overlap between metal signals (the instrument is theoretically capable of detecting 100 parameters per cell), the ability to infer entire cell signaling networks organically without reliance on prior knowledge, and the large amounts of data produced by one well-constructed experiment. Disadvantages include a practical flow rate of around 500 cells per second (versus several thousand in flow cytometry), current chemical methods that limit the cytometer to around 40 parameters per cell, and a much higher cost to own and operate. Additionally, mass cytometry is a destructive method, and cells cannot be sorted for further analysis. Mass cytometry has research applications in medical fields including immunology, hematology, and oncology. It has been used in studies of hematopoiesis, cell cycle, cytokine expression, and differential signaling responses.
https://en.wikipedia.org/wiki?curid=39515134
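The per-cell, per-channel tables described above can be modeled as a simple matrix, with a threshold "gate" selecting cells positive for a marker. The sketch below uses synthetic data and illustrative channel names and cutoffs; real workflows read FCS files and use dedicated high-dimensional analysis tools.

```python
import numpy as np

# Sketch of mass cytometry data handling: rows are cells, columns are
# isotope channels, and a threshold gate selects marker-positive cells.
# Channel names, cutoff, and the synthetic signal model are illustrative.

channels = ["CD3_170Er", "CD4_145Nd", "CD8_146Nd"]
rng = np.random.default_rng(0)
cells = rng.lognormal(mean=1.0, sigma=1.0, size=(1000, 3))  # 1000 cells

def gate(data, channel_idx, cutoff):
    """Return the subset of cells above cutoff in the given channel."""
    return data[data[:, channel_idx] > cutoff]

positive = gate(cells, channels.index("CD3_170Er"), 5.0)
print(positive.shape[0], "cells pass the CD3 gate")
```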
SiC–SiC matrix composite is a particular type of ceramic matrix composite (CMC) which has been attracting interest mainly as a high-temperature material for use in applications such as gas turbines, as an alternative to metallic alloys. CMCs are generally a system of materials made up of ceramic fibers or particles that lie in a ceramic matrix phase. In this case, a SiC/SiC composite is made by having a SiC (silicon carbide) matrix phase and a fiber phase incorporated together by different processing methods. Outstanding properties of SiC/SiC composites include high thermal, mechanical, and chemical stability while also providing a high strength-to-weight ratio. SiC/SiC composites are mainly processed through three different methods; however, these processing methods are often subject to variations in order to create the desired structure or property. Mechanical properties of CMCs, including SiC–SiC composites, can vary depending on the properties of their various components, namely the fiber, matrix, and interphases. For example, the size, composition, crystallinity, or alignment of the fibers will dictate the properties of the composite. The interplay between matrix microcracking and fiber–matrix debonding often dominates the failure mechanism of SiC/SiC composites. This results in SiC/SiC composites having non-brittle behavior despite being fully ceramic. Additionally, creep rates at high temperatures are extremely low, but still dependent on the various constituents
https://en.wikipedia.org/wiki?curid=39516879
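As a worked illustration of how constituent properties set composite properties, the rule of mixtures gives a standard first-order estimate of the longitudinal stiffness of a fiber-reinforced composite. This model and the stiffness values below are generic illustrations, not figures taken from the article.

```python
# Rule-of-mixtures estimate of the longitudinal modulus of a fiber-
# reinforced composite: E_c = Vf * Ef + (1 - Vf) * Em.
# The SiC fiber/matrix moduli below are illustrative placeholders.

def rule_of_mixtures(E_fiber, E_matrix, v_fiber):
    """Return the composite longitudinal modulus for fiber fraction v_fiber."""
    return v_fiber * E_fiber + (1.0 - v_fiber) * E_matrix

E_f, E_m = 400.0, 350.0   # GPa, illustrative fiber and matrix stiffnesses
print(rule_of_mixtures(E_f, E_m, 0.4))  # 370.0
```

The model deliberately ignores the interphase and matrix-microcracking effects discussed above, which is why measured SiC/SiC behavior (notably its non-brittle failure) departs from such simple estimates.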