| text | source |
|---|---|
Hippo signaling pathway The Hippo pathway is a recently identified signaling cascade that plays an evolutionarily conserved role in organ size control by inhibiting cell proliferation, promoting apoptosis, regulating fates of stem/progenitor cells, and in some circumstances, limiting cell size. Research indicates a key role of this pathway in regulation of cardiomyocyte proliferation and heart size. Inactivation of the Hippo pathway or activation of its downstream effector, the Yes-associated protein transcription coactivator, improves cardiac regeneration. Several known upstream signals of the Hippo pathway such as mechanical stress, G-protein-coupled receptor signaling, and oxidative stress are known to play critical roles in cardiac physiology. In addition, Yes-associated protein has been shown to regulate cardiomyocyte fate through multiple transcriptional mechanisms. | https://en.wikipedia.org/wiki?curid=22108748 |
GHS precautionary statements Precautionary statements form part of the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). They are intended to form a set of standardized phrases giving advice about the correct handling of chemical substances and mixtures, which can be translated into different languages. As such, they serve the same purpose as the well-known S-phrases, which they are intended to replace. Precautionary statements are one of the key elements for the labelling of containers under the GHS, along with the other GHS label elements. Each precautionary statement is assigned a code, starting with the letter P and followed by three digits. Statements which correspond to related hazards are grouped together by code number, so the numbering is not consecutive. The code is used for reference purposes, for example to help with translations, but it is the "actual phrase" which should appear on labels and safety data sheets. Some precautionary phrases are combinations, indicated by a plus sign "+". In several cases, there is a choice of wording, for example "Avoid breathing dust/fume/gas/mist/vapours/spray": the supplier or regulatory agency should choose the appropriate wording for the product concerned. | https://en.wikipedia.org/wiki?curid=22116598 |
Glass transition The glass–liquid transition, or glass transition, is the gradual and reversible transition in amorphous materials (or in amorphous regions within semicrystalline materials) from a hard and relatively brittle "glassy" state into a viscous or rubbery state as the temperature is increased. An amorphous solid that exhibits a glass transition is called a glass. The reverse transition, achieved by supercooling a viscous liquid into the glass state, is called vitrification. The glass-transition temperature "T" of a material characterizes the range of temperatures over which this glass transition occurs. It is always lower than the melting temperature, "T", of the crystalline state of the material, if one exists. Hard plastics like polystyrene and poly(methyl methacrylate) are used well below their glass transition temperatures, i.e., when they are in their glassy state. Their "T" values are well above room temperature, both at around . Rubber elastomers like polyisoprene and polyisobutylene are used above their "T", that is, in the rubbery state, where they are soft and flexible. Despite the change in the physical properties of a material through its glass transition, the transition is not considered a phase transition; rather it is a phenomenon extending over a range of temperature and defined by one of several conventions. Such conventions include a constant cooling rate () and a viscosity threshold of 10¹² Pa·s, among others. | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition Upon cooling or heating through this glass-transition range, the material also exhibits a smooth step in the thermal-expansion coefficient and in the specific heat, with the location of these effects again being dependent on the history of the material. The question of whether some phase transition underlies the glass transition is a matter of continuing research. In a more recent model of glass transition, the glass transition temperature corresponds to the temperature at which the largest openings between the vibrating elements in the liquid matrix become smaller than the smallest cross-sections of the elements or parts of them when the temperature is decreasing. As a result of the fluctuating input of thermal energy into the liquid matrix, the harmonics of the oscillations are constantly disturbed and temporary cavities ("free volume") are created between the elements, the number and size of which depend on the temperature. The glass transition temperature "T" defined in this way is a fixed material constant of the disordered (non-crystalline) state that is dependent only on the pressure. As a result of the increasing inertia of the molecular matrix when approaching "T", the setting of the thermal equilibrium is successively delayed, so that the usual measuring methods for determining the glass transition temperature in principle deliver "T" values that are too high. In principle, the slower the temperature change rate is set during the measurement, the closer the measured "T" value "T" approaches | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition (Karl Günter Sturm, Microscopic-Phenomenological Model of Glass Transition I. Foundations of the model, DOI: 10.13140/RG.2.2.19831.73121.) The glass transition of a liquid to a solid-like state may occur with either cooling or compression. The transition comprises a smooth increase in the viscosity of a material by as much as 17 orders of magnitude within a temperature range of 500 K without any pronounced change in material structure. The consequence of this dramatic increase is a glass exhibiting solid-like mechanical properties on the timescale of practical observation. This transition is in contrast to the freezing or crystallization transition, which is a first-order phase transition in the Ehrenfest classification and involves discontinuities in thermodynamic and dynamic properties such as volume, energy, and viscosity. In many materials that normally undergo a freezing transition, rapid cooling will avoid this phase transition and instead result in a glass transition at some lower temperature. Other materials, such as many polymers, lack a well-defined crystalline state and easily form glasses, even upon very slow cooling or compression. The tendency for a material to form a glass while quenched is called glass forming ability. This ability depends on the composition of the material and can be predicted by the rigidity theory. Below the transition temperature range, the glassy structure does not relax in accordance with the cooling rate used. | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition The expansion coefficient for the glassy state is roughly equivalent to that of the crystalline solid. If slower cooling rates are used, the increased time for structural relaxation (or intermolecular rearrangement) to occur may result in a higher density glass product. Similarly, by annealing (and thus allowing for slow structural relaxation) the glass structure in time approaches an equilibrium density corresponding to the supercooled liquid at this same temperature. "T" is located at the intersection between the cooling curve (volume versus temperature) for the glassy state and the supercooled liquid. The configuration of the glass in this temperature range changes slowly with time towards the equilibrium structure. The principle of the minimization of the Gibbs free energy provides the thermodynamic driving force necessary for the eventual change. At somewhat higher temperatures than "T", the structure corresponding to equilibrium at any temperature is achieved quite rapidly. In contrast, at considerably lower temperatures, the configuration of the glass remains sensibly stable over increasingly extended periods of time. Thus, the liquid-glass transition is not a transition between states of thermodynamic equilibrium. It is widely believed that the true equilibrium state is always crystalline. Glass is believed to exist in a kinetically locked state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition Time and temperature are interchangeable quantities (to some extent) when dealing with glasses, a fact often expressed in the time–temperature superposition principle. On cooling a liquid, "internal degrees of freedom successively fall out of equilibrium". However, there is a longstanding debate whether there is an underlying second-order phase transition in the hypothetical limit of infinitely long relaxation times. Refer to the figure on the upper right plotting the heat capacity as a function of temperature. In this context, "T" is the temperature corresponding to point A on the curve. The linear sections below and above "T" are colored green. "T" is the temperature at the intersection of the red regression lines. Different operational definitions of the glass transition temperature "T" are in use, and several of them are endorsed as accepted scientific standards. Nevertheless, all definitions are arbitrary, and all yield different numeric results: at best, values of "T" for a given substance agree within a few kelvins. One definition refers to the viscosity, fixing "T" at a value of 10¹³ poise (or 10¹² Pa·s). As evidenced experimentally, this value is close to the annealing point of many glasses. In contrast to viscosity, the thermal expansion, heat capacity, shear modulus, and many other properties of inorganic glasses show a relatively sudden change at the glass transition temperature. Any such step or kink can be used to define "T". | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition To make this definition reproducible, the cooling or heating rate must be specified. The most frequently used definition of "T" uses the energy release on heating in differential scanning calorimetry (DSC, see figure). Typically, the sample is first cooled at 10 K/min and then heated at that same rate. Yet another definition of "T" uses the kink in dilatometry (a.k.a. thermal expansion). Here, heating rates of are common. Summarized below are "T" values characteristic of certain classes of materials. Dry nylon-6 has a glass transition temperature of . Nylon-6,6 in the dry state has a glass transition temperature of about , whereas polyethene has a glass transition range of . The above are only mean values, as the glass transition temperature depends on the cooling rate and molecular weight distribution and can be influenced by additives. For a semi-crystalline material, such as polyethene that is 60–80% crystalline at room temperature, the quoted glass transition refers to what happens to the amorphous part of the material upon cooling. As a liquid is supercooled, the difference in entropy between the liquid and solid phase decreases. By extrapolating the heat capacity of the supercooled liquid below its glass transition temperature, it is possible to calculate the temperature at which the difference in entropies becomes zero. This temperature has been named the Kauzmann temperature. | https://en.wikipedia.org/wiki?curid=22122416 |
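The Kauzmann construction described above can be illustrated numerically. The sketch below is only a toy Python calculation: the melting data and the functional form of the excess heat capacity are hypothetical placeholders chosen for illustration, not values from the article. It simply shows how the liquid–crystal entropy difference is extrapolated downward in temperature until it reaches zero, which defines the Kauzmann temperature.

```python
# Toy sketch of the Kauzmann construction: extrapolate the entropy difference
# between the supercooled liquid and the crystal down in temperature until it vanishes.
# All numerical values below are hypothetical illustrations, not real data.
import numpy as np

T_m = 300.0    # melting temperature, K (hypothetical)
dS_m = 50.0    # entropy of fusion at T_m, J/(mol*K) (hypothetical)

def excess_cp(T):
    """Assumed excess heat capacity of the supercooled liquid over the crystal."""
    return 80.0 * T_m / T   # J/(mol*K), purely illustrative functional form

def entropy_difference(T, n=2000):
    """dS(T) = dS_m - integral from T to T_m of excess_cp(T')/T' dT' (trapezoid rule)."""
    Ts = np.linspace(T, T_m, n)
    integrand = excess_cp(Ts) / Ts
    integral = np.sum((integrand[:-1] + integrand[1:]) * np.diff(Ts)) / 2.0
    return dS_m - integral

# Scan downward from T_m; the first temperature where dS <= 0 estimates T_K.
temps = np.linspace(T_m, 50.0, 1500)
diffs = np.array([entropy_difference(T) for T in temps])
T_K = temps[np.argmax(diffs <= 0.0)]
print(f"Estimated Kauzmann temperature: {T_K:.1f} K")
```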
Glass transition If a liquid could be supercooled below its Kauzmann temperature, and it did indeed display a lower entropy than the crystal phase, the consequences would be paradoxical. This Kauzmann paradox has been the subject of much debate and many publications since it was first put forward by Walter Kauzmann in 1948. One resolution of the Kauzmann paradox is to say that there must be a phase transition before the entropy of the liquid decreases. In this scenario, the transition temperature is known as the "calorimetric ideal glass transition temperature" "T". In this view, the glass transition is not merely a kinetic effect, i.e. merely the result of fast cooling of a melt, but there is an underlying thermodynamic basis for glass formation. The Gibbs-DiMarzio model specifically predicts that a supercooled liquid's configurational entropy disappears as the temperature approaches this ideal glass transition temperature, where the liquid's existence regime ends, its microstructure becomes identical to the crystal's, and their property curves intersect in a true second-order phase transition. This has never been experimentally verified due to the difficulty of realizing a slow enough cooling rate while avoiding accidental crystallization. There are at least three other possible resolutions to the Kauzmann paradox. It could be that the heat capacity of the supercooled liquid near the Kauzmann temperature smoothly decreases to a smaller value. | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition It could also be that a first-order phase transition to another liquid state occurs before the Kauzmann temperature, with the heat capacity of this new state being less than that obtained by extrapolation from higher temperature. Finally, Kauzmann himself resolved the entropy paradox by postulating that all supercooled liquids must crystallize before the Kauzmann temperature is reached. Silica (the chemical compound SiO₂) has a number of distinct crystalline forms in addition to the quartz structure. Nearly all of the crystalline forms involve tetrahedral SiO₄ units linked together by "shared vertices" in different arrangements. Si-O bond lengths vary between the different crystal forms. For example, in α-quartz the bond length is , whereas in α-tridymite it ranges from . The Si-O-Si bond angle also varies from 140° in α-tridymite to 144° in α-quartz to 180° in β-tridymite. Any deviations from these standard parameters constitute microstructural differences or variations that represent an approach to an amorphous, vitreous or glassy solid. The transition temperature "T" in silicates is related to the energy required to break and re-form covalent bonds in an amorphous (or random network) lattice of covalent bonds. The "T" is clearly influenced by the chemistry of the glass. For example, addition of elements such as B, Na, K or Ca to a silica glass, which have a valency less than 4, helps in breaking up the network structure, thus reducing the "T". | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition Alternatively, P, which has a valency of 5, helps to reinforce an ordered lattice, and thus increases the "T". "T" is directly proportional to bond strength, i.e. it depends on quasi-equilibrium thermodynamic parameters of the bonds, e.g. on the enthalpy "H" and entropy "S" of configurons (broken bonds): "T" = "H" / ["S" + R ln[(1 − "f")/"f"]], where R is the gas constant and "f" is the percolation threshold. For strong melts such as SiO₂ the percolation threshold in the above equation is the universal Scher–Zallen critical density in 3-D space, i.e. "f" = 0.15; for fragile materials, however, the percolation thresholds are material-dependent and "f" ≪ 1. The enthalpy "H" and the entropy "S" of configurons (broken bonds) can be found from available experimental data on viscosity. In polymers the glass transition temperature, "T", is often expressed as the temperature at which the Gibbs free energy is such that the activation energy for the cooperative movement of 50 or so elements of the polymer is exceeded. This allows molecular chains to slide past each other when a force is applied. From this definition, we can see that the introduction of relatively stiff chemical groups (such as benzene rings) will interfere with the flowing process and hence increase "T". The stiffness of thermoplastics decreases due to this effect (see figure). When the glass temperature has been reached, the stiffness stays the same for a while, i.e., at or near "E", until the temperature exceeds "T", and the material melts. | https://en.wikipedia.org/wiki?curid=22122416 |
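The configuron relation quoted above is easier to read when typeset. In the LaTeX sketch below, the subscript labels (d for the configuron enthalpy and entropy, c for the percolation threshold) are added purely for readability and follow common configuron-model notation; they are not given explicitly in the excerpt, where subscripts were lost in extraction.

```latex
% Configuron (broken-bond) expression for the glass-transition temperature,
% as quoted in the text; subscript labels added here for readability only.
\[
  T_{g} \;=\; \frac{H_{d}}{\,S_{d} + R \ln\!\left[\dfrac{1 - f_{c}}{f_{c}}\right]\,}
\]
% For strong melts such as SiO2, f_c is the Scher--Zallen critical density,
% f_c \approx 0.15; for fragile melts, f_c is material-dependent and f_c << 1.
```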
Glass transition This region is called the rubber plateau. In ironing, a fabric is heated through this transition so that the polymer chains become mobile. The weight of the iron then imposes a preferred orientation. "T" can be significantly decreased by addition of plasticizers into the polymer matrix. Smaller molecules of plasticizer embed themselves between the polymer chains, increasing the spacing and free volume, and allowing them to move past one another even at lower temperatures. The addition of nonreactive side groups to a polymer can also make the chains stand off from one another, reducing "T". If a plastic with some desirable properties has a "T" that is too high, it can sometimes be combined with another in a copolymer or composite material with a "T" below the temperature of intended use. Note that some plastics are used at high temperatures, e.g., in automobile engines, and others at low temperatures. In viscoelastic materials, the presence of liquid-like behavior depends on the properties of the material and so varies with the rate of applied load, i.e., how quickly a force is applied. The silicone toy Silly Putty behaves quite differently depending on the time rate of applying a force: pull slowly and it flows, acting as a heavily viscous liquid; hit it with a hammer and it shatters, acting as a glass. On cooling, rubber undergoes a "liquid-glass transition", which has also been called a "rubber-glass transition". | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition Molecular motion in condensed matter can be represented by a Fourier series whose physical interpretation consists of a superposition of longitudinal and transverse waves of atomic displacement with varying directions and wavelengths. In monatomic systems, these waves are called "density fluctuations". (In polyatomic systems, they may also include compositional fluctuations.) Thus, thermal motion in liquids can be decomposed into elementary longitudinal vibrations (or acoustic phonons) while transverse vibrations (or shear waves) were originally described only in elastic solids exhibiting the highly ordered crystalline state of matter. In other words, simple liquids cannot support an applied force in the form of a shearing stress, and will yield mechanically via macroscopic plastic deformation (or viscous flow). Furthermore, the fact that a solid deforms locally while retaining its rigidity – while a liquid yields to macroscopic viscous flow in response to the application of an applied shearing force – is accepted by many as the mechanical distinction between the two. The inadequacies of this conclusion, however, were pointed out by Frenkel in his revision of the kinetic theory of solids and the theory of elasticity in liquids. This revision follows directly from the continuous characteristic of the structural transition from the liquid state into the solid one when this transition is not accompanied by crystallization—ergo the supercooled viscous liquid | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition Thus we see the intimate correlation between transverse acoustic phonons (or shear waves) and the onset of rigidity upon vitrification, as described by Bartenev in his mechanical description of the vitrification process. The velocities of longitudinal acoustic phonons in condensed matter are directly responsible for the thermal conductivity that levels out temperature differentials between compressed and expanded volume elements. Kittel proposed that the behavior of glasses is interpreted in terms of an approximately constant "mean free path" for lattice phonons, and that the value of the mean free path is of the order of magnitude of the scale of disorder in the molecular structure of a liquid or solid. The thermal phonon mean free paths or relaxation lengths of a number of glass formers have been plotted versus the glass transition temperature, indicating a linear relationship between the two. This has suggested a new criterion for glass formation based on the value of the phonon mean free path. It has often been suggested that heat transport in dielectric solids occurs through elastic vibrations of the lattice, and that this transport is limited by elastic scattering of acoustic phonons by lattice defects (e.g. randomly spaced vacancies). These predictions were confirmed by experiments on commercial glasses and glass ceramics, where mean free paths were apparently limited by "internal boundary scattering" to length scales of | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition The relationship between these transverse waves and the mechanism of vitrification has been described by several authors who proposed that the onset of correlations between such phonons results in an orientational ordering or "freezing" of local shear stresses in glass-forming liquids, thus yielding the glass transition. The influence of thermal phonons and their interaction with electronic structure is a topic that was appropriately introduced in a discussion of the resistance of liquid metals. Lindemann's theory of melting is referenced, and it is suggested that the drop in conductivity in going from the crystalline to the liquid state is due to the increased scattering of conduction electrons as a result of the increased amplitude of atomic vibration. Such theories of localization have been applied to transport in metallic glasses, where the mean free path of the electrons is very small (on the order of the interatomic spacing). The formation of a non-crystalline form of a gold-silicon alloy by the method of splat quenching from the melt led to further considerations of the influence of electronic structure on glass forming ability, based on the properties of the metallic bond. Other work indicates that the mobility of localized electrons is enhanced by the presence of dynamic phonon modes. One claim against such a model is that if chemical bonds are important, the nearly free electron models should not be applicable | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition However, if the model includes the buildup of a charge distribution between all pairs of atoms just like a chemical bond (e.g., silicon, when a band is just filled with electrons) then it should apply to solids. Thus, if the electrical conductivity is low, the mean free path of the electrons is very short. The electrons will only be sensitive to the short-range order in the glass since they do not get a chance to scatter from atoms spaced at large distances. Since the short-range order is similar in glasses and crystals, the electronic energies should be similar in these two states. For alloys with lower resistivity and longer electronic mean free paths, the electrons could begin to sense that there is disorder in the glass, and this would raise their energies and destabilize the glass with respect to crystallization. Thus, the glass formation tendencies of certain alloys may therefore be due in part to the fact that the electron mean free paths are very short, so that only the short-range order is ever important for the energy of the electrons. It has also been argued that glass formation in metallic systems is related to the "softness" of the interaction potential between unlike atoms. Some authors, emphasizing the strong similarities between the local structure of the glass and the corresponding crystal, suggest that chemical bonding helps to stabilize the amorphous structure | https://en.wikipedia.org/wiki?curid=22122416 |
Glass transition Other authors have suggested that the electronic structure yields its influence on glass formation through the directional properties of bonds. Non-crystallinity is thus favored in elements with a large number of polymorphic forms and a high degree of bonding anisotropy. Crystallization becomes more unlikely as bonding anisotropy is increased from isotropic metallic to anisotropic metallic to covalent bonding, thus suggesting a relationship between the group number in the periodic table and the glass forming ability in elemental solids. | https://en.wikipedia.org/wiki?curid=22122416 |
Strecker degradation The Strecker degradation is a chemical reaction which converts an α-amino acid into an aldehyde containing the side chain, by way of an imine intermediate. It is named after Adolph Strecker, a German chemist. The original observation by Strecker involved the use of alloxan as the oxidant in the first step, followed by hydrolysis. The reaction can take place using a variety of organic and inorganic reagents. | https://en.wikipedia.org/wiki?curid=22125656 |
IMes is an abbreviation for an organic compound that is a common ligand in organometallic chemistry. It is an N-heterocyclic carbene (NHC). The compound, a white solid, is often not isolated but instead is generated upon attachment to the metal centre. First prepared by Arduengo, the heterocycle is synthesized by condensation of 2,4,6-trimethylaniline and glyoxal to give the diimine. In the presence of acid, this diimine condenses with formaldehyde to give the dimesitylimidazolium cation. This cation is the conjugate acid of the NHC. Bulkier than IMes is the NHC ligand IPr (CAS RN 244187-81-3). IPr features diisopropylphenyl in place of the mesityl substituents. Some variants of IMes and IPr have saturated backbones; two such ligands are SIMes and SIPr. They are prepared by alkylation of substituted anilines with dibromoethane followed by ring closure and dehydrohalogenation of the dihydroimidazolium salt. | https://en.wikipedia.org/wiki?curid=22126226 |
GHS hazard pictograms Hazard pictograms form part of the international Globally Harmonized System of Classification and Labelling of Chemicals (GHS). Two sets of pictograms are included within the GHS: one for the labelling of containers and for workplace hazard warnings, and a second for use during the transport of dangerous goods. Either one or the other is chosen, depending on the target audience, but the two are not used together. The two sets of pictograms use the same symbols for the same hazards, although certain symbols are not required for transport pictograms. Transport pictograms come in a wider variety of colors and may contain additional information such as a subcategory number. Hazard pictograms are one of the key elements for the labelling of containers under the GHS, along with the other GHS label elements. The GHS chemical hazard pictograms are intended to provide the basis for or to replace national systems of hazard pictograms. They were due to be implemented by the European Union (CLP Regulation) in 2009. The GHS transport pictograms are the same as those recommended in the UN Recommendations on the Transport of Dangerous Goods, widely implemented in national regulations such as the U.S. Federal Hazardous Materials Transportation Act (49 U.S.C. 5101–5128) and D.O.T. regulations at 49 C.F.R. 100–185. The following pictograms are included in the UN Model Regulations but have not been incorporated into the GHS because of the nature of the hazards. | https://en.wikipedia.org/wiki?curid=22131077 |
Monolithic HPLC column A monolithic HPLC column, or monolithic column, is a column used in high-performance liquid chromatography (HPLC). The internal structure of the monolithic column is created in such a way that many channels form inside the column. The material inside the column which separates the channels can be porous and functionalized. In contrast, most HPLC configurations use particulate packed columns; in these configurations, tiny beads of an inert substance, typically a modified silica, are used inside the column. In analytical chromatography, the goal is to separate and uniquely identify each of the compounds in a substance. Alternatively, preparative scale chromatography is a method of purification of large batches of material in a production environment. The basic methods of separation in HPLC rely on a mobile phase (water, organic solvents, etc.) being passed through a stationary phase (particulate silica packings, monoliths, etc.) in a closed environment (column); the differences in the interactions of the solutes of interest with the mobile and stationary phases distinguish compounds from one another through a series of adsorption and desorption phenomena. The results are then displayed in a chromatogram. Stationary phases are available in many varieties of packing styles as well as chemical structures and can be functionalized for added specificity. Monolithic-style columns, or monoliths, are one of many types of stationary phase structure. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Monoliths, in chromatographic terms, are porous rod structures characterized by mesopores and macropores. These pores provide monoliths with high permeability, a large number of channels, and a high surface area available for reactivity. The backbone of a monolithic column is composed of either an organic or inorganic substrate, and can easily be chemically altered for specific applications. Their unique structure gives them several physico-mechanical properties that enable them to perform competitively against traditionally packed columns. Historically, the typical HPLC column consists of high-purity particulate silica compressed into stainless steel tubing. To decrease run times and increase selectivity, smaller diffusion distances have been pursued. To achieve smaller diffusion distances there has been a decrease in the particle sizes. However, as the particle size decreases, the backpressure (for a given column diameter and a given volumetric flow) increases proportionally. Pressure is inversely proportional to the square of the particle size; i.e., when particle size is halved, pressure increases by a factor of four. This is because as the particle sizes get smaller, the interstitial voids (the spaces between the particles) do as well, and it is harder to push the compounds through the smaller spaces. Modern HPLC systems are generally designed to withstand about of backpressure in order to deal with this problem | https://en.wikipedia.org/wiki?curid=22131699 |
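The inverse-square relationship between particle size and backpressure described above can be made concrete with a short sketch. The 100-bar reference pressure for 5 µm particles used below is an arbitrary illustration value chosen here, not a figure from the article.

```python
# Sketch of the backpressure scaling P ~ 1/dp^2 at fixed column geometry,
# flow rate, and mobile-phase viscosity. The reference point is hypothetical.
def backpressure(dp_um, p_ref_bar=100.0, dp_ref_um=5.0):
    """Scale a reference backpressure by the inverse square of particle diameter."""
    return p_ref_bar * (dp_ref_um / dp_um) ** 2

for dp in (5.0, 2.5, 1.7):
    print(f"{dp:>4.1f} um particles -> ~{backpressure(dp):.0f} bar")
# Halving the particle size (5.0 -> 2.5 um) quadruples the backpressure,
# matching the factor-of-four statement in the text.
```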
Monolithic HPLC column Monoliths also have very short diffusion distances, while also providing multiple pathways for solute dispersion. Packed particle columns have pore connectivity values of about 1.5, while monoliths have values ranging from 6 to greater than 10. This means that, in a particulate column, a given analyte may diffuse into and out of the same pore, or enter through one pore and exit through a connected pore. By contrast, an analyte in a monolith is able to enter one channel and exit through any of six or more different routes. Little of the surface area in a monolith is inaccessible to compounds in the mobile phase. The high degree of interconnectivity in monoliths confers an advantage seen in the low backpressures and readily achievable high flow rates. Monoliths are ideally suited for large molecules. As mentioned previously, particle sizes are decreasing in an attempt to achieve higher resolution and faster separations, which leads to higher backpressures. When the smaller particle sizes are used to separate biomolecules, backpressures increase further because of the large molecule size. In monoliths, where backpressures are low and channel sizes are large, small molecule separations are less efficient. This is demonstrated by the dynamic binding capacities, a measure of how much sample can bind to the surface of the stationary phase. Dynamic binding capacities of monoliths for large molecules can be roughly an order of magnitude (ten times) greater than those for particulate packings. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Monoliths exhibit no shear forces or eddying effects. High interconnectivity of the mesopores allows for multiple avenues of convective flow through the column. Mass transport of solutes through the column is relatively unaffected by flow rate. This is completely at odds with traditional particulate packings, whereby eddy effects and shear forces contribute greatly to the loss of resolution and capacity, as seen in the van Deemter curve. Monoliths can, however, suffer from a different flow disadvantage: wall effects. Silica monoliths, especially, have a tendency to pull away from the sides of their column encasing. When this happens, the flow of the mobile phase occurs around the stationary phase as well as through it, decreasing resolution. Wall effects have been reduced greatly by advances in column construction. Other advantages of monoliths conferred by their individual construction include greater column-to-column and batch-to-batch reproducibility. One technique of creating monolith columns is to polymerize the structure in situ. This involves filling the mold or column tubing with a mixture of monomers, a cross-linking agent, a free-radical initiator, and a porogenic solvent, then initiating the polymerization process under carefully controlled thermal or irradiation conditions. Monolithic in situ polymerization avoids the primary source of column-to-column variability, which is the packing procedure. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Additionally, packed particle columns must be maintained in a solvent environment and cannot be exposed to air during or after the packing procedure. If exposed to air, the pores dry out and no longer provide adequate surface area for reactivity; the column must be repacked or discarded. Further, because particle compression and packing uniformity are not relevant to monoliths, they exhibit greater mechanical robustness; if particulate columns are dropped, for example, the integrity of the column may be compromised. Monolithic columns are more physically stable than their particulate counterparts. The roots of liquid chromatography extend back more than a century, to 1900, when Russian botanist Mikhail Tsvet began experimenting with plant pigments such as chlorophyll. He noted that, when a solvent was applied, distinct bands appeared that migrated at different rates along a stationary phase. For this new observation, he coined the term “chromatography”, literally “color writing”. His first lecture on the subject was presented in 1903, but his most important contribution occurred three years later, in 1906, when the paper “Adsorption analysis and chromatographic method. Applications on the chemistry of chlorophyll,” was published. Rivalry with a colleague who readily and vocally denounced his work meant that chromatographic analysis was shelved for almost 25 years. The great irony of the matter is that it was his rival's students who later took up the chromatography banner in their work with carotins. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Largely unchanged from Tsvet's time until the 1940s, normal-phase chromatography was performed by passing a gravity-fed solvent through small glass tubes packed with pellicular adsorbent beads. It was in the 1940s, however, that there was a great revolution in gas chromatography (GC). Although GC was a wonderful technique for analyzing inorganic compounds, less than 20% of organic molecules can be separated using this technique. It was Richard Synge, who in 1952 won the Nobel Prize in Chemistry for his work with partition chromatography, who applied the theoretical knowledge gained from his work in GC to LC. From this revolution, the 1950s also saw the advent of paper chromatography, reversed-phase partition chromatography (RPC), and hydrophobic interaction chromatography (HIC). The first gels for use in LC were created using cross-linked dextrans (Sephadex) in an attempt to realize Synge's prediction that a unique single-piece stationary phase could provide an ideal chromatographic solution. In the 1960s, polyacrylamide and agarose gels were created in a further attempt to create a single-piece stationary phase, but the purity and stability of available components did not prove useful for implementation in HPLC. In this decade, affinity chromatography was invented, an ultra-violet (UV) detector was used for the first time in conjunction with LC, and, most importantly, the modern HPLC was born. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Csaba Horvath led the development of modern HPLC by piecing together laboratory equipment to suit his purposes. In 1968, Picker Nuclear Company marketed the first commercially available HPLC as a “Nucleic Acid Analyzer.” The following year, the first international symposium on HPLC was held, and Kirkland at DuPont was able to functionalize controlled-porosity pellicular particles for the first time. The 1970s and 1980s witnessed a renewed interest in separations media with reduced interparticular void volumes. Perfusion chromatography showed, for the first time, that chromatography media could support high flow rates without sacrificing resolution. Monoliths aptly fit into this new class of media, as they exhibit no void volume and can withstand flow rates up to 9 mL/min. Polymeric monoliths as they exist today were developed independently by three different labs in the late 1980s led by Hjerten, Svec, and Tennikova. Simultaneously, bioseparations became increasingly important, and monolith technologies proved beneficial in biotechnology separations. Though industry focus in the 1980s was on biotechnology, focus in the 1990s shifted to process engineering. While mainstream chromatographers were using 3 μm particulate columns, sub-2 μm columns were in the research phase. The smaller particles meant better resolution and shorter run times; there was also an associated increase in backpressure. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column In order to withstand the pressure, a new field of chromatography came into being: UHPLC or UPLC (ultra-high-pressure liquid chromatography). The new instruments were able to endure pressures of up to , as opposed to conventional machines, which, as previously stated, can hold up to . UPLC is an alternative solution to the same problems monolithic columns solve. Similarly to UPLC, monolith chromatography can help the bottom line by increasing sample throughput, but without the need to spend capital on new equipment. In 1996, Nobuo Tanaka, at the Kyoto Institute of Technology, prepared silica monoliths using a colloidal suspension synthesis (a.k.a. “sol-gel”) developed by a colleague. The process is different from that used in polymeric monoliths. Polymeric monoliths, as mentioned above, are created in situ, using a mixture of monomers and a porogen within the column tubing. Silica monoliths, on the other hand, are created in a mold, undergo a significant amount of shrinkage, and are then clad in a polymeric shrink tubing like PEEK (polyetheretherketone) to reduce wall effects. This method limits the size of columns that can be produced to less than 15 cm long, and though standard analytical inner diameters are readily achieved, there is currently a trend toward developing nanoscale capillary and prep-scale silica monoliths. Silica monoliths have only been commercially available since 2001, when Merck began their Chromolith campaign. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column The Chromolith technology was licensed from Soga and Nakanishi's group at Kyoto University. The new product won the PittCon Editors’ Gold Award for Best New Product, as well as an R&D 100 Award, both in 2001. Individual monolith columns have life cycles that generally exceed those of their particulate competitors. When selecting an HPLC column supplier, column lifetime was second only to column-to-column reproducibility in importance to the purchaser. Chromolith columns, for example, have demonstrated reproducibility of 3,300 sample injections and 50,000 column volumes of mobile phase. Also important to the life cycle of the monolith is its increased mechanical robustness; polymeric monoliths are able to withstand pH ranges from 1 to 14, can endure elevated temperatures, and do not need to be handled delicately. “Monoliths are still teenagers,” affirms Frantisek Svec, a leader in the field of novel stationary phases for LC. Liquid chromatography as we know it today really got its start in 1969, when the first modern HPLC was designed and marketed as a nucleic acid analyzer. Columns throughout the 1970s were unreliable, pump flow rates were inconsistent, and many biologically active compounds escaped detection by UV and fluorescence detectors. Focus on purification methods in the '70s morphed into faster analyses in the 1980s, when computerized controls were integrated into HPLC equipment. Higher degrees of computerization then led to emphasis on more precise, faster, automated equipment in the 1990s. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Atypical of many technologies of the '60s and '70s, the emphasis in improvements was not on “bigger and better,” but on “smaller and better”. At the same time the HPLC user-interface was improving, it was critical to be able to isolate hundreds of peptides or biomarkers from ever decreasing sample sizes. Laboratory analytical instrumentation has only been recognized as a separate and distinct industry by NAICS and SIC since 1987. This market segmentation includes not only gas and liquid chromatography, but also mass spectrometry and spectrophotometric instruments. Since first recognized as a separate market, sales of analytical laboratory equipment increased from about $3.5 billion in 1987 to more than $26 billion in 2004. Revenues in the world liquid chromatography market, specifically, are expected to grow from $3.4 billion in 2007 to $4.7 billion in 2013, with a slight decrease in spending expected in 2008 and 2009 from the worldwide economic slump and decreased or stagnant spending. The pharmaceutical industry alone accounts for 35% of all the HPLC instruments in use. The main source of growth in LC stems from biosciences and pharmaceutical companies. In its earliest form, liquid chromatography was used to separate the pigments of chlorophyll by a Russian botanist. Decades later, other chemists used the procedure for the study of carotins | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Liquid chromatography was then used for the isolation of small molecules and organic compounds like amino acids, and most recently has been used in peptide and DNA research. Monolith columns have been instrumental in advancing the field of biomolecular research. In recent trade shows and international meetings for HPLC, interest in column monoliths and biomolecular applications has grown steadily, and this correlation is no coincidence. Monoliths have been shown to possess great potential in the “omics” fields – genomics, proteomics, metabolomics, and pharmacogenomics, among others. The reductionist approach to understanding the chemical pathways of the body and reactions to different stimuli, like drugs, is essential to new waves of healthcare like personalized medicine. Pharmacogenomics studies how responses to pharmaceutical products differ in efficacy and toxicity based on variations in the patient's genome; it is a correlation of drug response to gene expression in a patient. Jeremy K. Nicholson of Imperial College London used a postgenomic viewpoint to understand adverse drug reactions and the molecular basis of human disease. His group studied gut microbial metabolic profiles and were able to see distinct differences in reactions to drug toxicity and metabolism even among various geographical distributions of the same race. Affinity monolith chromatography provides another approach to drug response measurements. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column David Hage at the University of Nebraska binds ligands to monolithic supports and measures the equilibrium phenomena of binding interactions between drugs and serum proteins. A monolith-based approach at the University of Bologna, Italy, is currently in use for high-speed screening of drug candidates in the treatment of Alzheimer's disease. In 2003, Regnier and Liu of Purdue University described a multi-dimensional LC procedure for identifying single nucleotide polymorphisms (SNPs) in proteins. SNPs are alterations in the genetic code that can sometimes cause changes in protein conformation, as is the case with sickle cell anemia. Monoliths are particularly useful in these kinds of separations because of their superior mass transport capabilities, low backpressures coupled with faster flow rates, and relative ease of modification of the support surface. Bioseparations on a production scale are enhanced by monolith column technologies as well. The fast separations and high resolving power of monoliths for large molecules mean that real-time analysis on production fermentors is possible. Fermentation is well known for its use in making alcoholic beverages, but is also an essential step in the production of vaccines for rabies and other viruses. Real-time, on-line analysis is critical for monitoring of production conditions, and adjustments can be made if necessary. Boehringer Ingelheim Austria has validated a method with cGMP (current good manufacturing practice) for production of pharmaceutical-grade DNA plasmids. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column They are able to process 200 L of fermentation broth on an 800 mL monolith. At BIA Separations, processing of tomato mosaic virus was reduced from the standard five days of manually intensive work to only two hours on a monolith column, with equivalent purity and better recovery. Other viruses have been purified on monoliths as well. Another area of interest for HPLC is forensics. GC-MS (gas chromatography–mass spectrometry) is generally considered the gold standard for forensic analysis. It is used in conjunction with online databases for rapid analysis of compounds in tests for blood alcohol, cause of death, street drugs, and food analysis, especially in poisoning cases. Analysis of buprenorphine, a heroin substitute, demonstrated the potential utility of multidimensional LC as a low-level detection method. HPLC methods can measure this compound at 40 ng/mL, compared to GC-MS at 0.5 ng/mL, but LC-MS-MS can detect buprenorphine at levels as low as 0.02 ng/mL. The sensitivity of multidimensional LC is therefore 2000 times greater than that of conventional HPLC. The liquid chromatography marketplace is incredibly diverse. Five to ten firms are consistently market leaders, yet nearly half of the market is made up of small, fragmented companies. This section will focus on the roles that a few companies have had in bringing monolith column technologies to the commercial market. In 1998, start-up biotechnology company BIA Separations of Ljubljana, Slovenia, came into being. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column The technology was originally developed by Tatiana Tennikova and Frantisek Svec during a collaboration between their respective institutes. The patent for these columns was acquired by BIA Separations, and Ales Podgornik and Milos Barut developed the first commercially available monolith column in the form of a short disc encapsulated in a plastic housing. Trademarked CIM, BIA Separations has since introduced full lines of reversed-phase, normal-phase, ion-exchange, and affinity polymeric monoliths. Ales Podgornik and Janez Jancar then went on to develop large-scale tube monolithic columns for industrial use. The largest column currently available is 8 L. In May 2008, LC instrumentation powerhouse Agilent Technologies agreed to market BIA Separations’ analytical columns based on monolith technology. Agilent commercialized the columns with strong and weak ion-exchange phases and Protein A in September 2008 when they unveiled their new Bio-Monolith product line at the BioProcess International conference. While BIA Separations was the first to commercially market polymeric monoliths, Merck KGaA was the first company to market silica monoliths. In 1996, Tanaka and coworkers at the Kyoto Institute of Technology published extensive work on silica monolith technologies. Merck was later issued a license from Kyoto Institute of Technology to develop and produce the silica monoliths. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Promptly thereafter, in 2001, Merck introduced its Chromolith line of monolithic HPLC columns at analytical instrumentation trade show PittCon. Initially, says Karin Cabrera, senior scientist at Merck, the high flow rate was the selling point for the Chromolith line. Based on customer feedback, though, Merck soon learned that the columns were more stable and longer-lived than particle-packed columns. The columns were the recipients of various new product awards. Difficulties in production of the silica monoliths and tight patent protection have precluded attempts by other companies at developing a similar product. It has been noted that there are more patents concerning how to encapsulate the silica rod than there are on the manufacture of the silica itself. Historically, Merck has been known for its superior chemical products, and, in liquid chromatography, for the purity and reliability of its particulate silica. Merck is not known for its LC columns. Five years after the introduction of its Chromolith line, Merck made a very strategic marketing decision. They granted a worldwide sublicense of the technology to a small (less than $100M in sales), innovative company well known for its cutting-edge column technology: Phenomenex. This was a superior strategic move for two reasons. As mentioned above, Merck is not well known for its column manufacturing. Furthermore, having more than one silica monolith manufacturer serves to better validate the technology | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Having sublicensed the technology from Merck, Phenomenex introduced its Onyx product line in January 2005. On the other side of monolith technologies are the polymerics. Unlike the inorganic silica columns, the polymer monoliths are made of an organic polymer base. Dionex, traditionally known for its ion chromatography capabilities, has led this side of the field. In the 1990s, Dionex first acquired a license for the polymeric monolith technology developed by leading monolithic chromatography researcher Frantisek Svec while he was at Cornell University. In 2000, they acquired LC Packings, whose competencies were in LC column packings. LC Packings/Dionex revealed their first monolithic capillary column at the Montreux LC-MS Conference. Earlier that year, another company, Isco, introduced a polystyrene–divinylbenzene (PS-DVB) monolith column under the brand SWIFT. In January 2005, Dionex acquired the rights to Teledyne Isco's SWIFT media products, intellectual property, technology, and related assets. Though the core competencies of Dionex have traditionally been in ion chromatography, through strategic acquisitions and technology transfers, it has quickly established itself as the primary producer of polymeric monoliths. Though the many advances of HPLC and monoliths are highly visible within the confines of the analytical and pharmaceutical industries, it is unlikely that general society is aware of these developments. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Currently, consumers may witness technology developments in the analytical sciences industry in the form of a broader array of available pharmaceutical products of higher purity, advanced forensic testing in criminal trials, better environmental monitoring, and faster returns on medical tests. In the future, presumably, this may not be the case. As medicine becomes more individualized over time, consumer awareness that something is improving their quality of care seems more likely. The further thought that monoliths or HPLC are involved is unlikely to concern the general public, however. There are two main cost drivers behind technological change in this industry. Though many different analytical areas use LC, including food and beverage industries, forensics labs, and clinical testing facilities, the largest impetus toward technology developments comes from the research and development and production arms of the pharmaceutical industry. The areas in which high-throughput monolithic column technologies are likely to have the largest economic impact are R&D and downstream processing. From the Research and Development field comes the desire for more resolved, faster separations from smaller sample quantities. The only phase of drug development under direct control of a pharmaceutical company is the R&D stage. The goal of analytical work is to obtain as much information as possible from the sample. At this stage, high-throughput and analysis of tiny sample quantities are critical | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Pharmaceutical companies are looking for tools that will better enable them to measure and predict the efficacy of candidate drugs in shorter times and with less expensive clinical trials. To this end, nano-scale separations, highly automated HPLC equipment, and multi-dimensional chromatography have become influential. The prevailing method to increase the sensitivity of analytical methods has been multi-dimensional chromatography. This practice uses other analysis techniques in conjunction with liquid chromatography. For example, mass spectrometry (MS) has very much gained in popularity as an on-line analytical technique following HPLC. It is limited, however, in that MS, like nuclear magnetic resonance spectroscopy (NMR) or electrospray ionization techniques (ESI), is only feasible when using very small quantities of solute and solvent; LC-MS is used with nano- or capillary-scale techniques, but cannot be used at prep scale. Another tactic for increasing selectivity in multi-dimensional chromatography is to use two columns with different selectivity orthogonally, i.e., linking an ion-exchange column to a C18 endcapped column. In 2007, Karger reported that, through multi-dimensional chromatography and other techniques, starting with only about 12,000 cells containing 1–4 μg of protein, he was able to identify 1867 unique proteins. Of those, Karger could isolate four that may be of interest as cervical cancer markers. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Today, liquid chromatographers using multi-dimensional LC can isolate compounds at the femtomole (10⁻¹⁵ mole) and attomole (10⁻¹⁸ mole) levels. After a drug has been approved by the U.S. Food and Drug Administration (FDA), the emphasis at a pharmaceutical company is on getting a product to market. This is where prep or process scale chromatography has a role. In contrast to analytical chromatography, preparative-scale chromatography focuses on isolation and purity of compounds. There is a trade-off between the degree of purity of a compound and the amount of time required to achieve that purity. Unfortunately, many of the preparatory or process scale solutions used by pharmaceutical companies are proprietary, due to difficulties in patenting a process. Hence, there is not a great deal of literature available. However, some attempts to address the problems of prep scale chromatography include monoliths and simulated moving beds. A comparison of immunoglobulin protein capture on a conventional column and a monolithic column yields some economically interesting results. If processing times are equivalent, process volumes of IgG, an antibody, are 3,120 L for conventional columns versus 5,538 L for monolithic columns. This represents a 78% increase in process volume efficiency, while at the same time only a tenth of the media waste volume is generated. | https://en.wikipedia.org/wiki?curid=22131699 |
Monolithic HPLC column Not only is the monolith column more economically prudent when considering the value of product processing times, but, at the same time, less media is used, representing a significant reduction in variable costs. | https://en.wikipedia.org/wiki?curid=22131699 |
Laser ablation synthesis in solution (LASiS) is a commonly used method for obtaining colloidal solutions of nanoparticles in a variety of solvents. In the LASiS method, nanoparticles are produced during the condensation of a plasma plume formed by the laser ablation of a bulk metal plate dipped in a liquid solution. LASiS is usually considered a top-down physical approach. In recent years, laser ablation synthesis in solution (LASiS) has emerged as a reliable alternative to traditional chemical reduction methods for obtaining noble metal nanoparticles (NMNp). LASiS is a technique for the synthesis of stable NMNp in water or in organic solvents, which does not need stabilizing molecules or other chemicals. The NMNp obtained in this way are readily available for further functionalization or can be used wherever unprotected metal nanoparticles are desired. Surface functionalization of NMNp can be monitored in real time by UV-visible spectroscopy of the plasmon resonance. However, LASiS has some limitations in the size control of NMNp, which can be overcome by laser treatments of NMNp. | https://en.wikipedia.org/wiki?curid=22132096 |
List of human hormones The following is a list of hormones found in "Homo sapiens". Spelling is not uniform for many hormones. For example, current North American and international usage is estrogen, gonadotropin, while British usage retains the Greek digraph in oestrogen and favors the earlier spelling gonadotrophin (from "trophē" 'nourishment, sustenance' rather than "tropē" 'turning, change'). See Eicosanoid for more information about this class of paracrine signalling chemicals and hormones. | https://en.wikipedia.org/wiki?curid=22134777 |
Liquid chalk The term liquid chalk refers to several different kinds of chalk; despite the term, some forms of "liquid chalk" contain no actual chalk. Liquid chalk can be a variation of normal chalk (see: magnesium carbonate) used to improve grip for sports such as rock climbing, weight lifting, or gymnastics. Rock climbers use liquid chalk to prevent their hands from sweating. It may be used by climbers in situations where powdered chalk is restricted. It is preferred by athletes because it remains effective longer and leaves less residue on rocks and equipment. Liquid chalk for rock climbers is made from magnesium carbonate. Since liquid chalk does not leave a white residue, it is an environmentally friendly alternative. In five forms of climbing, liquid chalk may prove more useful than powdered chalk. In other sports, liquid chalk is less beneficial to the athlete, because re-chalking can be done more easily between sets or rounds. However, some gyms require liquid chalk because it leaves less residue on gym equipment. Liquid chalk adheres to the hand better, reducing the need to re-chalk. Some liquid-chalk mixtures for climbing are made with magnesium carbonate, colophony, and ethanol or another alcohol that dissolves the colophony and quickly evaporates from the solution (such as isopropyl alcohol). Sometimes resin or rosin is added to increase gripping properties, or an aroma additive is included to mask the smell of the alcohol. Sports liquid chalk is sold in bottles | https://en.wikipedia.org/wiki?curid=22146365 |
Liquid chalk The user takes a small amount into their palms, spreading the chalk onto areas that require grip. The liquid evaporates when it comes into contact with the warmth of a user's hand, leaving behind chalk. Alcohol disrupts the bonds between water molecules, reducing the energy needed to cause evaporation. | https://en.wikipedia.org/wiki?curid=22146365 |
Eccentric reducer An eccentric reducer is a fitting used in piping systems between two pipes of different diameters. They are used where the diameter of the pipe on the upstream side of the fitting (i.e. where the flow is coming from) is larger than that on the downstream side. Unlike a concentric reducer, which resembles a cone, eccentric reducers have an edge that is parallel to the connecting pipe. This parallel edge results in the two pipes having offset center lines. The same fitting can be used in reverse as an eccentric increaser/expander. Horizontal liquid reducers are always eccentric and installed top flat (unless on a control set, such as PV, TV, HV or LV, or in a pipe rack), which prevents the build-up of air bubbles in the system. Eccentric reducers are used at the suction side of pumps to ensure air does not accumulate in the pipe. The gradual accumulation of air in a concentric reducer could result in a large bubble that could eventually cause the pump to stall or cause cavitation when drawn into the pump. Horizontal gas reducers are always eccentric and installed bottom flat, which allows condensed water or oil to drain at low points. Reducers in vertical lines are generally concentric unless the layout dictates otherwise. http://www.hydrocarbonprocessing.com/Article/2663961/Eccentric-reducers-and-straight-runs-of-pipe-at-pump-suction.html | https://en.wikipedia.org/wiki?curid=22146849 |
Multislice The multislice algorithm is a method for the simulation of the elastic interaction of an electron beam with matter, including all multiple scattering effects. The method is reviewed in the book by Cowley. The algorithm is used in the simulation of high-resolution transmission electron microscopy (TEM) micrographs, and serves as a useful tool for analyzing experimental images. Here we describe relevant background information, the theoretical basis of the technique, approximations used, and several software packages that implement this technique. Moreover, we delineate some of the advantages and limitations of the technique and important considerations that need to be taken into account for real-world use. The multislice method has found wide application in electron crystallography. The mapping from a crystal structure to its image or diffraction pattern has been relatively well understood and documented. However, the reverse mapping from electron micrograph images to the crystal structure is generally more complicated. The fact that the images are two-dimensional projections of a three-dimensional crystal structure makes it tedious to compare these projections to all plausible crystal structures. Hence, the use of numerical techniques in simulating results for different crystal structures is integral to the field of electron microscopy and crystallography. Several software packages exist to simulate electron micrographs | https://en.wikipedia.org/wiki?curid=22157068 |
Multislice There are two widely used simulation techniques in the literature: the Bloch wave method, derived from Hans Bethe's original theoretical treatment of the Davisson-Germer experiment, and the multislice method. Here, we primarily focus on the multislice method for simulation of diffraction patterns, including multiple elastic scattering effects. Most of the packages that exist implement the multislice algorithm along with Fourier analysis to incorporate electron lens aberration effects, determine the electron microscope image, and address aspects such as phase contrast and diffraction contrast. For electron microscope samples in the form of a thin crystalline slab in the transmission geometry, the aim of these software packages is to provide a map of the crystal potential; however, this inversion process is greatly complicated by the presence of multiple elastic scattering. The first description of what is now known as the multislice theory was given in the classic paper by Cowley and Moodie. In this work, the authors describe the scattering of electrons using a physical optics approach without invoking quantum mechanical arguments. Many other derivations of these iterative equations have since been given using alternative methods, such as Green's functions, differential equations, scattering matrices or path integral methods. A summary of the development of a computer algorithm from the multislice theory of Cowley and Moodie for numerical computation was reported by Goodman and Moodie | https://en.wikipedia.org/wiki?curid=22157068 |
Multislice They also discussed in detail the relationship of the multislice to the other formulations. Specifically, using Zassenhaus's theorem, this paper gives the mathematical path from the multislice to (1) Schrödinger's equation (derived from the multislice); (2) Darwin's differential equations, widely used for diffraction-contrast TEM image simulations (the Howie-Whelan equations), derived from the multislice; (3) Sturkey's scattering matrix method; (4) the free-space propagation case; (5) the phase grating approximation; (6) a new "thick-phase grating" approximation, which has never been used; (7) Moodie's polynomial expression for multiple scattering; (8) the Feynman path-integral formulation; and (9) the relationship of the multislice to the Born series. The relationship between algorithms is summarized in Section 5.11 of Spence (2013) (see Figure 5.9). The form of the multislice algorithm presented here has been adapted from Peng, Dudarev and Whelan (2003). The multislice algorithm is an approach to solving the Schrödinger wave equation: formula_1 In 1957, Cowley and Moodie showed that the Schrödinger equation can be solved analytically to evaluate the amplitudes of diffracted beams. Subsequently, the effects of dynamical diffraction can be calculated and the resulting simulated image will exhibit good similarities with the actual image taken from a microscope under dynamical conditions | https://en.wikipedia.org/wiki?curid=22157068 |
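For orientation, the relations below give one common textbook form of the quantities involved, in Kirkland-style notation. They are a hedged sketch, not a reconstruction of the elided formula placeholders above: the symbols σ (interaction parameter), v_n (projected potential of slice n) and Δz (slice thickness) are assumptions introduced here.

```latex
% One common form of the multislice relations (illustrative, not from the source text)
\begin{align}
  \nabla^{2}\psi(\mathbf{r}) + \frac{8\pi^{2} m e}{h^{2}}\bigl[E + V(\mathbf{r})\bigr]\psi(\mathbf{r}) &= 0
    && \text{(high-energy Schr\"odinger equation)}\\
  t_{n}(x,y) &= \exp\bigl[\,i\,\sigma\,v_{n}(x,y)\bigr]
    && \text{(phase-grating / transmission function of slice } n\text{)}\\
  p(x,y,\Delta z) &= \frac{1}{i\lambda\Delta z}\,
      \exp\Bigl[\frac{i\pi}{\lambda\Delta z}\bigl(x^{2}+y^{2}\bigr)\Bigr]
    && \text{(Fresnel propagator over one slice)}\\
  \psi_{n+1}(x,y) &= p(x,y,\Delta z)\otimes\bigl[t_{n}(x,y)\,\psi_{n}(x,y)\bigr]
    && \text{(one multislice step; } \otimes \text{ denotes convolution)}
\end{align}
```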
Multislice Furthermore, the multislice algorithm does not make any assumption about the periodicity of the structure and can thus be used to simulate HREM images of aperiodic systems as well. The following section will include a mathematical formulation of the algorithm. The Schrödinger equation can also be represented in the form of incident and scattered waves as: formula_2 where formula_3 is the Green's function that represents the amplitude of the electron wave function at a point formula_4 due to a source at point formula_5. Hence for an incident plane wave of the form formula_6 the Schrödinger equation can be written accordingly. We then choose the coordinate axes in such a way that the incident beam hits the sample at (0,0,0) in the formula_7-direction, i.e., formula_8. Now we consider a wave-function formula_9 with a modulation function formula_10 for the amplitude. Equation () then becomes an equation for the modulation function, i.e., formula_11. Now we make substitutions with regard to the coordinate system we have adopted, i.e., formula_12 and thus formula_13 and convergence. This program is good to use if one already has structure files for a material that have been used in other calculations (for example, Density Functional Theory). These structure files can be used to generate X-ray structure factors which are then used as input for the PTBV routine in NUMIS. Microscope parameters can be changed through the MICROVB routine. This software is specifically developed to run on Mac OS X by Dr | https://en.wikipedia.org/wiki?curid=22157068 |
Multislice Roar Kilaas of Lawrence Berkeley National Laboratory. It is designed to have a user-friendly interface and has been well-maintained relative to many other codes (last update May 2013). It is available for a fee. This software for multislice simulation was written in FORTRAN 77 by Dr. J. M. Zuo, while he was a postdoctoral research fellow at Arizona State University under the guidance of Prof. John C. H. Spence. The source code was published in the book Electron Microdiffraction. A comparison between multislice and Bloch wave simulations for ZnTe was also published in the book. A separate comparison between several multislice algorithms was reported in 2000. The Quantitative TEM/STEM (QSTEM) simulations software package was written by Professor Christopher Koch of Humboldt University of Berlin in Germany. It allows simulation of HAADF, ADF and ABF-STEM, as well as conventional TEM and CBED. The executable and source code are available as a free download on the Koch group website. This is a code written by Dr Vincenzo Grillo of the Institute for Nanoscience (CNR) in Italy. This code is essentially a graphical frontend to the multislice code written by Kirkland, with additional features. These include tools to generate complex crystalline structures, simulate HAADF images and model the STEM probe, as well as modeling of strain in materials. Tools for image analysis (e.g. GPA) and filtering are also available | https://en.wikipedia.org/wiki?curid=22157068 |
Multislice The code is updated quite often with new features and a user mailing list is maintained. It is freely available on their website. A multislice image simulation package for high-resolution scanning and coherent-imaging transmission electron microscopy was written by Dr. Juri Barthel from the Ernst Ruska-Centre at the Jülich Research Centre. The software comprises a graphical user interface version for direct visualization of STEM image calculations, as well as a bundle of command-line modules for more comprehensive calculation tasks. The programs have been written using Visual C++, Fortran 90, and Perl. Executable binaries for Microsoft Windows 32-bit and 64-bit operating systems are available for free from the website. OpenCL-accelerated multislice software has been written by Dr. Adam Dyson and Dr. Jonathan Peters from the University of Warwick; clTEM is under development as of October 2019. cudaEM is a multi-GPU enabled code based on CUDA for multislice simulations developed by the group of Prof. Stephen Pennycook. | https://en.wikipedia.org/wiki?curid=22157068 |
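As a rough illustration of the transmit-and-propagate loop described above, the sketch below steps a wave function through a stack of slices using NumPy FFTs. It is a minimal sketch under stated assumptions (square real-space sampling, no bandwidth limiting, no lens aberrations or thermal diffuse scattering); all function and parameter names are illustrative and are not taken from any of the packages mentioned above.

```python
import numpy as np

def multislice(psi0, projected_potentials, dz, wavelength, sigma, dx):
    """Propagate an incident wave psi0 through a stack of slices.

    psi0                 : 2D complex array, wave at the entrance surface
    projected_potentials : iterable of 2D arrays, projected potential of each slice
    dz                   : slice thickness
    wavelength           : electron wavelength (same length units as dz, dx)
    sigma                : interaction parameter
    dx                   : real-space sampling interval
    """
    ny, nx = psi0.shape
    kx = np.fft.fftfreq(nx, d=dx)                     # spatial frequencies
    ky = np.fft.fftfreq(ny, d=dx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    # Fresnel propagator for one slice, expressed in reciprocal space
    propagator = np.exp(-1j * np.pi * wavelength * dz * k2)

    psi = psi0.astype(complex)
    for v_n in projected_potentials:
        psi = psi * np.exp(1j * sigma * v_n)               # phase-grating (transmission) step
        psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)  # propagate to the next slice
    return psi   # exit wave; |FFT(psi)|^2 gives a diffraction pattern
```

In practice one would also band-limit the transmission functions and the propagator to avoid aliasing; the dedicated packages listed above handle this internally.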
Rigid-band model The Rigid-Band Model (or RBM) is one of the models used to describe the behavior of metal alloys. In some cases the model is even used for non-metal alloys such as Si alloys. According to the RBM, the shape of the constant energy surfaces (hence the Fermi surface as well) and the curve of the density of states of the alloy are the same as those of the solvent metal under the following conditions: The only effect of the addition of the solute, given that its valence is greater than that of the solvent, is the addition of electrons to the valence band. This results in a swelling of the Fermi surface and a filling of the density-of-states curve to a higher energy. In a pure metal, because of the periodicity of the lattice, the features of its electronic structure are well known. The single-particle states can be described in terms of Bloch states, and the energy structure is characterized by Brillouin zone boundaries, energy gaps and energy bands. In reality, though, no metal is perfectly pure. When the amount of the foreign element is dilute, the added atoms may be treated as impurities. But when its concentration exceeds several atomic percent, an alloy is formed and the interaction among the added atoms can no longer be neglected. Before giving a more mathematical outline of the RBM it is convenient to give a visualization of what happens to a metal upon alloying it. In a pure metal, taking silver as an example, all lattice sites are occupied by silver atoms | https://en.wikipedia.org/wiki?curid=22159309 |
Rigid-band model When different kinds of atoms are dissolved into it, for example 10% of copper, some random lattice sites become occupied by copper atoms. Since silver has a valence of 1 and copper has a valence of 2, the alloy will now have a valence of 1.1. Most lattice sites, however, are still occupied by silver atoms and consequently the changes in electronic structure are minimal. In a pure metal of valence Z, all atoms become positive ions with the valence +Z by releasing the outermost Z electrons per atom to form the valence band. As a result, conduction electrons carrying negative charges are uniformly distributed over any atomic site with equal probability densities and maintain charge neutrality with the array of ions with positive charges. When an impurity atom of valence Z is introduced, the periodic potential is disturbed, conduction electrons are scattered and a screening potential is formed, where U(r) is the potential of the electrons at distance r, 1/λ is the screening radius and formula_2. The Fermi surface of the pure metal is constructed under the assumption that the wave vector k of the Bloch electron is a good quantum number. But alloying destroys the periodicity of the lattice potential and thus results in scattering of the Bloch electron. The wave vector k changes upon scattering of the Bloch electron and can no longer be taken as a good quantum number | https://en.wikipedia.org/wiki?curid=22159309 |
Rigid-band model In spite of such fundamental difficulties, experimental and theoretical works have provided ample evidence that the concept of the Fermi surface and Brillouin zone is still valid even in concentrated crystalline alloys. In an alloy of atoms A and B, an intermetallic compound super-lattice structure tends to be formed. The chemical bonding between the unlike atoms leads to a very strong potential of the form where formula_4 is the potential at position formula_5 due to ion X, whose position is specified by formula_6. X here stands for either A or B, so that formula_7 indicates the potential of ion A. The RBM assumes formula_8, hence ignores the difference in the potential of ions A and B. Thus, the electronic structure of the pure metal A is assumed to be the same as that of the pure metal B or of any composition in the alloy A–B. The Fermi level is then chosen so as to be consistent with the electron concentration of the alloy. It is convenient to divide the predictions of the rigid-band model into two categories, geometric and density of states. The geometric predictions are those that use only the geometric properties of the constant energy surfaces. The density-of-states predictions are related to those properties which depend on the density of states at the Fermi energy, such as the electronic specific heat. In a pure metal the eigenstates are Bloch wave functions Ψ with energies e | https://en.wikipedia.org/wiki?curid=22159309 |
Rigid-band model When the periodicity of the pure metal is destroyed by alloying, these Bloch states are no longer eigenstates and their energy becomes complex formula_9 The imaginary part Γ shows that the Bloch state in the alloy is no longer an eigenstate but scatters into other states with a lifetime of the order of ħ/(2Γ). However, if formula_10, where Δ is the width of the band, then the Bloch states are approximately eigenstates and they can be used to calculate the properties of the alloys. In this case we can ignore Γ. The change in the energy of a Bloch state with alloying is then When the perturbation is fairly localized about the solute site (which is one of the conditions of the RBM), ΔE depends only on e and not on k and thus formula_12. Therefore, the plot of formula_13 versus k for the alloy will have the same shape of constant energy surfaces as the plot of formula_14 versus k for the pure solvent. A given energy surface of the alloy will naturally correspond to a different energy value from that of the same-shaped surface of the pure solvent, but the shapes will remain exactly the same. According to the Rigid Band Model, formula_15 is constant (for a given energy level) and the density of states of the alloy has the same shape as that of the pure solvent, displaced by formula_15. When the concentration of the solute a is small, formula_15 is also small and the density of states of the alloy at constant a is where formula_19 is the density of states of the pure solvent | https://en.wikipedia.org/wiki?curid=22159309 |
Rigid-band model In the case when formula_15 is constant we obtain the result that the shape of the density of states is the same, only displaced by formula_15. | https://en.wikipedia.org/wiki?curid=22159309 |
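The relations below restate, in standard textbook form, the two ideas used above: a screened impurity potential and a rigidly shifted density of states. They are a hedged sketch consistent with the description, not a reconstruction of the elided formula placeholders; the screening length r_s, the shift ΔE and the notation n₀ for the solvent density of states are introduced here for illustration.

```latex
% Illustrative textbook forms consistent with the rigid-band description above
\begin{align}
  U(r) &= -\,\frac{\Delta Z\, e^{2}}{4\pi\varepsilon_{0}\, r}\; e^{-r/r_{s}}
      && \text{(screened impurity potential; } r_{s}\text{ is the screening length)}\\
  n_{\text{alloy}}(E) &= n_{0}\!\left(E - \Delta E\right)
      && \text{(rigid shift of the solvent density of states)}\\
  n_{\text{alloy}}(E) &\approx n_{0}(E) - \Delta E\,\frac{\mathrm{d} n_{0}}{\mathrm{d} E}
      && \text{(first order in } \Delta E\text{, i.e. dilute alloys)}
\end{align}
```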
Dynamic binding (chemistry) In complexation catalysis, the term dynamic binding refers to any stabilizing interaction that is stronger at the transition state level than in the reactant-catalyst complex. Being directly related to transition state stabilization, dynamic binding is the very heart of complexation catalysis. It was defined by A.J. Kirby in 1996 in contrast to passive binding, "i.e." the set of interactions that are equally strong at the reactant and the transition state level. | https://en.wikipedia.org/wiki?curid=22160206 |
Cis-regulatory module "Cis"-regulatory module (CRM) is a stretch of DNA, usually 100–1000 DNA base pairs in length, where a number of transcription factors can bind and regulate expression of nearby genes and regulate their transcription rates. They are labeled as "cis" because they are typically located on the same DNA strand as the genes they control as opposed to "trans", which refers to effects on genes not located on the same strand or farther away, such as transcription factors. One "cis"-regulatory element can regulate several genes, and conversely, one gene can have several "cis"-regulatory modules."Cis"-regulatory modules carry out their function by integrating the active transcription factors and the associated co-factors at a specific time and place in the cell where this information is read and an output is given. "Cis"-regulatory modules are one of several types of functional regulatory elements. Regulatory elements are binding sites for transcription factors, which are involved in gene regulation. "Cis"-regulatory modules perform a large amount of developmental information processing. "Cis"-regulatory modules are non-random clusters at their specified target site that contain transcription factor binding sites. The original definition presented cis-regulatory modules as enhancers of cis-acting DNA, which increased the rate of transcription from a linked promoter | https://en.wikipedia.org/wiki?curid=22164509 |
Cis-regulatory module However, this definition has changed to define "cis"-regulatory modules as a DNA sequence with transcription factor binding sites which are clustered into modular structures, including, but not limited to, locus control regions, promoters, enhancers, silencers, boundary control elements and other modulators. "Cis"-regulatory modules can be divided into three classes: enhancers, which regulate gene expression positively; insulators, which work indirectly by interacting with other nearby "cis"-regulatory modules; and silencers, which turn off the expression of genes. The design of "cis"-regulatory modules is such that transcription factors and epigenetic modifications serve as inputs, and the output of the module is the command given to the transcription machinery, which in turn determines the rate of gene transcription or whether it is turned on or off. There are two types of transcription factor inputs: those that determine when the target gene is to be expressed and those that serve as functional "drivers", which come into play only during specific situations during development. These inputs can come from different time points, can represent different signal ligands, or can come from different domains or lineages of cells. However, much still remains unknown. Additionally, the regulation of chromatin structure and nuclear organization also plays a role in determining and controlling the function of cis-regulatory modules | https://en.wikipedia.org/wiki?curid=22164509 |
Cis-regulatory module Thus gene-regulation functions (GRFs) provide a unique characteristic of a cis-regulatory module (CRM), relating the concentrations of transcription factors (input) to the promoter activities (output). The challenge is to predict GRFs; this challenge still remains unsolved. In general, gene-regulation functions do not use Boolean logic, although in some cases the approximation of Boolean logic is still very useful. Within the assumption of Boolean logic, the principles guiding the operation of these modules include the design of the module, which determines the regulatory function. In relation to development, these modules can generate both positive and negative outputs. The output of each module is a product of the various operations performed on it. Common operations include the following (see the sketch below): "OR" logic gate – this design indicates that an output will be given when either input is present [3]. "AND" logic gate – in this design two different regulatory factors are necessary to make sure that a positive output results. "Toggle switches" – this design occurs when the signal ligand is absent while the transcription factor is present; the transcription factor ends up acting as a dominant repressor. However, once the signal ligand is present, the transcription factor's role as repressor is eliminated and transcription can occur. Other Boolean logic operations can occur as well, such as sequence-specific transcriptional repressors, which when bound to the "cis"-regulatory module lead to an output of zero | https://en.wikipedia.org/wiki?curid=22164509 |
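The sketch below expresses the three Boolean designs just described as tiny functions. It is purely illustrative (the function names are invented here), and, as noted later in this article, real gene-regulation functions are generally not Boolean, so this is only the idealized approximation.

```python
def or_gate(tf_a_bound: bool, tf_b_bound: bool) -> bool:
    """OR logic: an output is produced when either input transcription factor is bound."""
    return tf_a_bound or tf_b_bound

def and_gate(tf_a_bound: bool, tf_b_bound: bool) -> bool:
    """AND logic: both regulatory factors are required for a positive output."""
    return tf_a_bound and tf_b_bound

def toggle_switch(tf_bound: bool, ligand_present: bool) -> bool:
    """Toggle switch: a bound transcription factor acts as a dominant repressor
    until its signal ligand appears, after which transcription can proceed.
    (The behaviour when the factor is absent is a simplifying assumption here.)"""
    if tf_bound and not ligand_present:
        return False   # dominant repression
    return True        # transcription allowed

# Example: an AND-type module fires only when both inputs are present
print(and_gate(True, False))  # False
print(and_gate(True, True))   # True
```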
Cis-regulatory module Additionally, besides the influence of the different logic operations, the output of a "cis"-regulatory module will also be influenced by prior events, and "cis"-regulatory modules must interact with other regulatory elements. For the most part, even with the presence of functional overlap between "cis"-regulatory modules of a gene, the modules' inputs and outputs tend not to be the same. While the assumption of Boolean logic is important for "systems biology", detailed studies show that in general the logic of gene regulation is not Boolean. This means, for example, that in the case of a "cis"-regulatory module regulated by two transcription factors, experimentally determined gene-regulation functions cannot be described by the 16 possible Boolean functions of two variables. Non-Boolean extensions of the gene-regulatory logic have been proposed to correct for this issue. Besides experimentally determining CRMs, there are various bioinformatics algorithms for predicting them. Most algorithms try to search for significant combinations of transcription factor binding sites (DNA binding sites) in promoter sequences of co-expressed genes. More advanced methods combine the search for significant motifs with correlation in gene expression datasets between transcription factors and target genes. Both methods have been implemented, for example, in the ModuleMaster. Other programs created for the identification and prediction of "cis"-regulatory modules include: INSECT 2 | https://en.wikipedia.org/wiki?curid=22164509 |
Cis-regulatory module 0 is a web server that allows users to search for cis-regulatory modules in a genome-wide manner. The program relies on the definition of strict restrictions among the transcription factor binding sites (TFBSs) that compose the module in order to decrease the false-positive rate. INSECT is designed to be user-friendly, since it allows automatic retrieval of sequences and provides several visualizations and links to third-party tools to help users find those instances that are more likely to be true regulatory sites. The INSECT 2.0 algorithm and the theory behind it were published previously. Stubb uses hidden Markov models to identify statistically significant clusters of transcription factor combinations. It also uses a second related genome to improve the prediction accuracy of the model. Bayesian networks use an algorithm that combines site predictions and tissue-specific expression data for transcription factors and target genes of interest. This model also uses regression trees to depict the relationship between the identified "cis"-regulatory module and the possible binding set of transcription factors. CRÈME examines clusters of target sites for transcription factors of interest. This program uses a database of confirmed transcription factor binding sites that were annotated across the human genome. A search algorithm is applied to the data set to identify possible combinations of transcription factors which have binding sites that are close to the promoter of the gene set of interest | https://en.wikipedia.org/wiki?curid=22164509 |
Cis-regulatory module The possible cis-regulatory modules are then statistically analyzed and the significant combinations are graphically represented. Active "cis"-regulatory modules in a genomic sequence have been difficult to identify. Problems in identification arise because often scientists find themselves with a small set of known transcription factors, which makes it harder to identify statistically significant clusters of transcription factor binding sites. Additionally, high costs limit the use of large whole genome tiling arrays. "Cis"-regulatory modules can be characterized by the information processing that they encode and the organization of their transcription factor binding sites. Additionally, "cis"-regulatory modules are also characterized by the way they affect the probability, proportion, and rate of transcription. Highly cooperative and coordinated "cis"-regulatory modules are classified as enhanceosomes. The architecture and the arrangement of the transcription factor binding sites are critical, because disruption of the arrangement could cancel out the function. Functionally flexible "cis"-regulatory modules are called billboards. Their transcriptional output is the summation effect of the bound transcription factors. Enhancers affect the probability of a gene being activated, but have little or no effect on rate. The binary response model acts like an on/off switch for transcription. This model will increase or decrease the amount of cells that transcribe a gene, but it does not affect the rate of transcription | https://en.wikipedia.org/wiki?curid=22164509 |
Cis-regulatory module The rheostatic response model describes cis-regulatory modules as regulators of the initiation rate of transcription of their associated gene. "Cis"-regulatory modules can regulate their target genes over large distances. Several models have been proposed to describe the way that these modules may communicate with their target gene promoter. These include the DNA scanning model, the DNA sequence looping model and the facilitated tracking model. In the DNA scanning model, the transcription factor and cofactor complex forms at the "cis"-regulatory module and then continues to move along the DNA sequence until it finds the target gene promoter. In the looping model, the transcription factor binds to the "cis"-regulatory module, which then causes the "looping" of the DNA sequence and allows for the interaction with the target gene promoter. The facilitated tracking model combines parts of the two previous models: the transcription factor-"cis"-regulatory module complex loops the DNA sequence slowly towards the target promoter until a stable looped configuration forms. The function of a gene regulatory network depends on the architecture of the nodes, whose function is dependent on the multiple "cis"-regulatory modules. The layout of "cis"-regulatory modules can provide enough information to generate spatial and temporal patterns of gene expression | https://en.wikipedia.org/wiki?curid=22164509 |
Cis-regulatory module During development, each domain of gene expression, where each domain represents a different spatial region of the embryo, will be under the control of different "cis"-regulatory module(s). The design of regulatory modules helps in producing feedback, feed-forward, and cross-regulatory loops. | https://en.wikipedia.org/wiki?curid=22164509 |
Freshwater environmental quality parameters are the natural and man-made chemical, biological and microbiological characteristics of rivers, lakes and ground-waters, the ways they are measured and the ways that they change. The values or concentrations attributed to such parameters can be used to describe the pollution status of an environment, its biotic status or to predict the likelihood of particular organisms being present. Monitoring of environmental quality parameters is a key activity in managing the environment, restoring polluted environments and anticipating the effects of man-made changes on the environment. These parameters are the chemical, physical or biological characteristics that can be used to characterise a freshwater body. Because almost all water bodies are dynamic in their composition, the relevant quality parameters are typically expressed as a range of expected concentrations. The first step in understanding the chemistry of freshwater is to establish the relevant concentrations of the parameters of interest. Conventionally this is done by taking representative samples of the water for subsequent analysis in a laboratory. However, in-situ monitoring using hand-held analytical equipment or bank-side monitoring stations is also used. Freshwaters are surprisingly difficult to sample because they are rarely homogeneous and their quality varies during the day and during the year. In addition, the most representative sampling locations are often at a distance from the shore or bank, increasing the logistic complexity | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters Filling a clean bottle with river water is a very simple task, but a single sample is only representative of the point on the river from which it was taken, at that point in time. Understanding the chemistry of a whole river, or even a significant tributary, requires prior investigation to understand how homogeneous or mixed the flow is and to determine if the quality changes during the course of a day and during the course of a year. Almost all natural rivers will have very significant patterns of change through the day and through the seasons. Water remote sensing offers a spatially continuous tool to improve understanding of spatial and temporal river water quality. Many rivers also have a very large flow that is unseen. This flows through underlying gravel and sand layers and is called hyporheic flow. How much mixing there is between the hyporheic zone and the water in the open channel will depend on a variety of factors, some of which relate to flows leaving aquifers which may have been storing water for many years. Ground waters by their very nature are often very difficult to access to take a sample. As a consequence, the majority of ground-water data comes from samples taken from springs, wells, water supply bore-holes and natural caves | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters In recent decades, as the need to understand ground water dynamics has increased, an increasing number of monitoring bore-holes have been drilled into aquifers (see also Limnology). Lakes and ponds can be very large and support a complex eco-system in which environmental parameters vary widely in all three physical dimensions and with time. Large lakes in the temperate zone often stratify in the warmer months into a warmer upper layer rich in oxygen and a colder lower layer with low oxygen levels. In the autumn, falling temperatures and occasional high winds result in the mixing of the two layers into a more homogeneous whole. When stratification occurs it not only affects oxygen levels but also many related parameters such as iron, phosphate and manganese, which are all changed in their chemical form by the change in the redox potential of the environment. Lakes also receive waters, often from many different sources with varying qualities. Solids from stream inputs will typically settle near the mouth of the stream and, depending on a variety of factors, the incoming water may float over the surface of the lake, sink beneath the surface or rapidly mix with the lake water. All of these phenomena can skew the results of any environmental monitoring unless the processes are well understood. Where two rivers meet at a confluence there exists a mixing zone | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters A mixing zone may be very large and extend for many miles, as in the case of the Mississippi and Missouri rivers in the United States and the River Clwyd and River Elwy in North Wales. In a mixing zone water chemistry may be very variable and can be difficult to predict. The chemical interactions are not just simple mixing but may be complicated by biological processes from submerged macrophytes and by water joining the channel from the hyporheic zone or from springs draining an aquifer. The geology that underlies a river or lake has a major impact on its chemistry. A river flowing across very ancient Precambrian schists is likely to have dissolved very little from the rocks and may be similar to de-ionised water, at least in the headwaters. Conversely, a river flowing through chalk hills, and especially if its source is in the chalk, will have a high concentration of carbonates and bicarbonates of calcium and possibly magnesium. As a river progresses along its course it may pass through a variety of geological types and it may have inputs from aquifers that do not appear on the surface anywhere in the locality. Oxygen is probably the most important chemical constituent of surface water chemistry, as all aerobic organisms require it for survival. It enters the water mostly via diffusion at the water-air interface. Oxygen's solubility in water decreases as water temperature increases | https://en.wikipedia.org/wiki?curid=22166495 |
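To make the temperature dependence of dissolved oxygen concrete, the small sketch below interpolates approximate sea-level saturation values for freshwater; the figures are rounded textbook-style values quoted here as assumptions, not taken from this article.

```python
import numpy as np

# Approximate saturation dissolved oxygen in freshwater at sea level (assumed values)
temps_C = np.array([0, 10, 20, 30])
do_sat_mg_L = np.array([14.6, 11.3, 9.1, 7.6])

def approx_do_saturation(temp_C: float) -> float:
    """Linearly interpolate an approximate saturation DO concentration (mg/L)."""
    return float(np.interp(temp_C, temps_C, do_sat_mg_L))

print(approx_do_saturation(5))    # ~13 mg/L in a cold upland stream
print(approx_do_saturation(25))   # ~8.4 mg/L in a warm lowland river
```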
Freshwater environmental quality parameters Fast, turbulent streams expose more of the water's surface area to the air and tend to have low temperatures, and thus more oxygen, than slow backwaters. Oxygen is a by-product of photosynthesis, so systems with a high abundance of aquatic algae and plants may also have high concentrations of oxygen during the day. These levels can decrease significantly during the night when primary producers switch to respiration. Oxygen can be limiting if circulation between the surface and deeper layers is poor, if the activity of animals is very high, or if there is a large amount of organic decay occurring, such as following autumn leaf-fall. Most other atmospheric inputs come from man-made or anthropogenic sources, the most significant of which are the oxides of sulphur produced by burning sulphur-rich fuels such as coal and oil, which give rise to acid rain. The chemistry of sulphur oxides is complex both in the atmosphere and in river systems. However, the effect on the overall chemistry is simple in that it reduces the pH of the water, making it more acidic. The pH change is most marked in rivers with very low concentrations of dissolved salts, as these cannot buffer the effects of the acid input. Rivers downstream of major industrial conurbations are also at greatest risk. In parts of Scandinavia and West Wales and Scotland many rivers became so acidic from oxides of sulphur that most fish life was destroyed, and pH values as low as 4 were recorded during critical weather conditions | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters The majority of rivers on the planet and many lakes have received or are receiving inputs from humankind's activities. In the industrialised world, many rivers have been very seriously polluted, at least during the 19th and the first half of the 20th centuries. Although in general there has been much improvement in the developed world, there is still a great deal of river pollution apparent on the planet. In most environmental situations the presence or absence of an organism is determined by a complex web of interactions, only some of which will be related to measurable chemical or biological parameters. Flow rate, turbulence, inter- and intra-specific competition, feeding behaviour, disease, parasitism, commensalism and symbiosis are just a few of the pressures and opportunities facing any organism or population. Most chemical constituents favour some organisms and are less favourable to others. However, there are some cases where a chemical constituent exerts a toxic effect, i.e. where the concentration can kill or severely inhibit the normal functioning of the organism. Where a toxic effect has been demonstrated this may be noted in the sections below dealing with the individual parameters. Often it is the colour of freshwater, or how clear or hazy the water is, that is the most obvious visual characteristic. Unfortunately neither colour nor turbidity are strong indicators of the overall chemical composition of water | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters However, both colour and turbidity reduce the amount of light penetrating the water and can have a significant impact on algae and macrophytes. Some algae in particular are highly dependent on water with low colour and turbidity. Many rivers draining high moor-lands overlain by peat have a very deep yellow-brown colour caused by dissolved humic acids. One of the principal sources of elevated concentrations of organic chemical constituents is treated sewage. Dissolved organic material is most commonly measured using either the biochemical oxygen demand (BOD) test or the chemical oxygen demand (COD) test. Organic constituents are significant in river chemistry for the effect that they have on dissolved oxygen concentration and for the impact that individual organic species may have directly on aquatic biota. Any organic and degradable material consumes oxygen as it decomposes. Where organic concentrations are significantly elevated the effects on oxygen concentrations can be significant, and as conditions get extreme the river bed may become anoxic. Some organic constituents such as synthetic hormones, pesticides and phthalates have direct metabolic effects on aquatic biota and even on humans drinking water taken from the river. Understanding such constituents, and how they can be identified and quantified, is becoming of increasing importance in the understanding of freshwater chemistry | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters A wide range of metals may be found in rivers from natural sources where metal ores are present in the rocks over which the river flows or in the aquifers feeding water into the river. However, many rivers have an increased load of metals because of industrial activities, which include mining and quarrying and the processing and use of metals. Iron, usually as Fe, is a common constituent of river waters at very low levels. Higher iron concentrations in acidic springs or an anoxic hyporheic zone may cause visible orange/brown staining or semi-gelatinous precipitates of dense orange iron bacterial floc carpeting the river bed. Such conditions are very deleterious to most organisms and can cause serious damage in a river system. Coal mining is also a very significant source of iron, both in mine-waters and from stocking yards of coal and from coal processing. Long-abandoned mines can be a highly intractable source of high concentrations of iron. Low levels of iron are common in spring waters emanating from deep-seated aquifers and may be regarded as health-giving. Such springs are commonly called chalybeate springs and have given rise to a number of spa towns in Europe and the United States. Zinc is normally associated with metal mining, especially lead and silver mining, but is also a component pollutant associated with a variety of other metal mining activities and with coal mining. Zinc is toxic at relatively low concentrations to many aquatic organisms | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters "Microregma" starts to show a toxic reaction at concentrations as low as 0.33 mg/l Lead and silver in river waters are commonly found together and associated with lead mining. Impacts from very old mines can be very long-lived. In the River Ystwyth in Wales for example, the effects of silver and lead mining in the 17th and 18th centuries in the headwaters still causes unacceptably high levels of Zinc and Lead in the river water right down to its confluence with the sea. Silver is very toxic even at very low concentrations but leaves no visible evidence of its contamination. Lead is also highly toxic to freshwater organisms and to humans if the water is used as drinking water. As with Silver, Lead pollution is not visible to the naked eye. The River Rheidol in west Wales had a major series of lead mines in its headwaters until the end of the 19th century and its mine discharges and waste tips remain to this day. In 1919 - 1921 only 14 species of invertebrates were found in the lower Rheidol when Lead concentrations were between 0.2ppm and 0.5ppm. By 1932 the lead concentration had reduced to 0.02ppm to 0.1ppm because of the abandonment of mining and, at those concentrations, the bottom fauna had stabilized to 103 species including three leeches. Coal mining is also a very significant source of metals, especially Iron, Zinc and Nickel particularly where the coal is rich if pyrites which oxidises on contact with the air producing a very acidic leachate which is able to dissolve metals from the coal | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters Significant levels of copper are unusual in rivers, and where they do occur the source is most likely to be mining activities, coal stocking, or pig farming. Rarely, elevated levels may be of geological origin. Copper is acutely toxic to many freshwater organisms, especially algae, at very low concentrations, and a significant concentration in river water may have serious adverse effects on the local ecology. Nitrogenous compounds have a variety of sources, including washout of oxides of nitrogen from the atmosphere, some geological inputs and some from macrophyte and algal nitrogen fixation. However, for many rivers in the proximity of humans, the largest input is from sewage, whether treated or untreated. The nitrogen derives from breakdown products of proteins found in urine and faeces. These products, being very soluble, often pass through the sewage treatment process and are discharged into rivers as a component of sewage treatment effluent. Nitrogen may be in the form of nitrate, nitrite, ammonia or ammonium salts, or what is termed albuminoid nitrogen, i.e. nitrogen still within an organic proteinoid molecule. The differing forms of nitrogen are relatively stable in most river systems, with nitrite slowly transforming into nitrate in well oxygenated rivers and ammonia transforming into nitrite/nitrate. However, the processes are slow in cool rivers and a reduction in concentration may more often be attributed to simple dilution | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters All forms of nitrogen are taken up by macrophytes and algae, and elevated levels of nitrogen are often associated with overgrowths of plants or eutrophication. These can have the effect of blocking channels and inhibiting navigation. However, ecologically, the more significant effect is on dissolved oxygen concentrations, which may become super-saturated during daylight due to plant photosynthesis but then drop to very low levels during darkness as plant respiration uses up the dissolved oxygen. Coupled with the release of oxygen in photosynthesis is the creation of bicarbonate ions, which cause a steep rise in pH, and this is matched in darkness as carbon dioxide released through respiration substantially lowers the pH. Thus high levels of nitrogenous compounds tend to lead to eutrophication, with extreme variations in parameters which in turn can substantially degrade the ecological worth of the watercourse. Ammonium ions also have a toxic effect, especially on fish. The toxicity of ammonia is dependent on both pH and temperature, and an added complexity is the buffering effect of the blood/water interface across the gill membrane, which masks any additional toxicity above about pH 8.0. The management of river chemistry to avoid ecological damage is particularly difficult in the case of ammonia, as a wide range of potential scenarios of concentration, pH and temperature have to be considered, together with the diurnal pH fluctuation caused by photosynthesis | https://en.wikipedia.org/wiki?curid=22166495 |
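The pH and temperature dependence just described is often handled by estimating the fraction of total ammonia present as toxic, un-ionized NH3. The sketch below uses a commonly cited empirical approximation for the ammonium pKa (an Emerson-style relation); the constants, function name and example values are assumptions for illustration and do not come from this article.

```python
def unionized_ammonia_fraction(pH: float, temperature_C: float) -> float:
    """Approximate fraction of total ammonia present as un-ionized NH3.

    Uses pKa ~= 0.09018 + 2729.92 / T(K), a commonly cited empirical relation;
    treat these constants as assumptions, not values from this article.
    """
    T_K = temperature_C + 273.15
    pKa = 0.09018 + 2729.92 / T_K
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Warm, alkaline water holds a far larger share of toxic NH3:
print(unionized_ammonia_fraction(pH=7.0, temperature_C=10))   # ~0.002 (0.2%)
print(unionized_ammonia_fraction(pH=9.0, temperature_C=25))   # ~0.36  (36%)
```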
Freshwater environmental quality parameters On warm summer days with high bicarbonate concentrations, unexpectedly toxic conditions can be created. Phosphorus compounds are usually found as relatively insoluble phosphates in river water and, except in some exceptional circumstances, their origin is agriculture or human sewage. Phosphorus can encourage excessive growths of plants and algae and contribute to eutrophication. If a river discharges into a lake or reservoir, phosphate can be mobilised year after year by natural processes. In the summer time, lakes stratify so that warm oxygen-rich water floats on top of cold oxygen-poor water. In the warm upper layers - the epilimnion - plants consume the available phosphate. As the plants die in the late summer they fall into the cool water layers underneath - the hypolimnion - and decompose. During winter turn-over, when a lake becomes fully mixed through the action of winds on a cooling body of water, the phosphates are spread throughout the lake again to feed a new generation of plants. This process is one of the principal causes of persistent algal blooms in some lakes. Geological deposits of arsenic may be released into rivers where deep ground-waters are exploited, as in parts of Pakistan. Many metal ores, such as those of lead, gold and copper, contain traces of arsenic, and poorly stored tailings may result in arsenic entering the hydrological cycle. Inert solids are produced in all montane rivers as the energy of the water helps grind away rocks into gravel, sand and finer material | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters Much of this settles very quickly and provides an important substrate for many aquatic organisms. Many salmonid fish require beds of gravel and sand in which to lay their eggs. Many other types of solids from agriculture, mining, quarrying, urban run-off and sewage may block out sunlight from the river and may block interstices in gravel beds, making them useless for spawning and supporting insect life. Both agriculture and sewage treatment produce inputs into rivers with very high concentrations of bacteria and viruses, including a wide range of pathogenic organisms. Even in areas with little human activity, significant levels of bacteria and viruses can be detected originating from fish and aquatic mammals and from animals grazing near rivers, such as deer. Upland waters draining areas frequented by sheep, goats or deer may also harbour a variety of opportunistic human parasites such as liver fluke. Consequently, there are very few rivers from which the water is safe to drink without some form of sterilisation or disinfection. In rivers used for contact recreation such as swimming, safe levels of bacteria and viruses can be established based on risk assessment. Under certain conditions bacteria can colonise freshwaters, occasionally making large rafts of filamentous mats known as "sewage fungus" – usually "Sphaerotilus natans". The presence of such organisms is almost always an indicator of extreme organic pollution and would be expected to be matched with low dissolved oxygen concentrations and high BOD values. E | https://en.wikipedia.org/wiki?curid=22166495 |
Freshwater environmental quality parameters coli bacteria have been commonly found in recreational waters and their presence is used to indicate recent faecal contamination, but E. coli presence may not be indicative of human waste. "E. coli" are found in all warm-blooded animals. "E. coli" have also been found in fish and turtles. Enterobacteria may also persist in the environment in mud, sediments, sand and soil for considerable lengths of time. pH in rivers is affected by the geology of the water source, atmospheric inputs and a range of other chemical contaminants. pH is only likely to become an issue on very poorly buffered upland rivers, where atmospheric sulphur and nitrogen oxides may very significantly depress the pH to as low as 4, or in eutrophic alkaline rivers, where photosynthetic bicarbonate ion production may drive the pH up above 10. See also: Drinking water quality standards; Harmonised monitoring scheme. | https://en.wikipedia.org/wiki?curid=22166495 |
Bioceramic Bioceramics and bioglasses are ceramic materials that are biocompatible. Bioceramics are an important subset of biomaterials. Bioceramics range in biocompatibility from the ceramic oxides, which are inert in the body, to the other extreme of resorbable materials, which are eventually replaced by the body after they have assisted repair. Bioceramics are used in many types of medical procedures. Bioceramics are typically used as rigid materials in surgical implants, though some bioceramics are flexible. The ceramic materials used are not the same as porcelain-type ceramic materials. Rather, bioceramics are closely related to either the body's own materials or are extremely durable metal oxides. Prior to 1925, the materials used in implant surgery were primarily relatively pure metals. The success of these materials was surprising considering the relatively primitive surgical techniques. The 1930s marked the beginning of the era of better surgical techniques as well as the first use of alloys such as vitallium. In 1969, L. L. Hench and others discovered that various kinds of glasses and ceramics could bond to living bone. The inspiration came to Hench on his way to a conference on materials, where he was seated next to a colonel who had just returned from the Vietnam War. The colonel shared that, after an injury, the bodies of soldiers would often reject the implant. Hench was intrigued and began to investigate materials that would be biocompatible. The final product was a new material which he called bioglass | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic This work inspired a new field called bioceramics. With the discovery of bioglass, interest in bioceramics grew rapidly. On April 26, 1988, the first international symposium on bioceramics was held in Kyoto, Japan. Ceramics are now commonly used in the medical fields as dental and bone implants. Surgical cermets are used regularly. Joint replacements are commonly coated with bioceramic materials to reduce wear and inflammatory response. Other examples of medical uses for bioceramics are in pacemakers, kidney dialysis machines, and respirators. The global demand for medical ceramics and ceramic components was about U.S. $9.8 billion in 2010. It was forecast to have an annual growth of 6 to 7 percent in the following years, with world market value predicted to increase to U.S. $15.3 billion by 2015 and reach U.S. $18.5 billion by 2018. Bioceramics are meant to be used in extracorporeal circulation systems (dialysis, for example) or engineered bioreactors; however, they are most commonly used as implants. Ceramics show numerous applications as biomaterials due to their physico-chemical properties. They have the advantage of being inert in the human body, and their hardness and resistance to abrasion make them useful for bone and teeth replacement. Some ceramics also have excellent resistance to friction, making them useful as replacement materials for malfunctioning joints. Properties such as appearance and electrical insulation are also a concern for specific biomedical applications | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic Some bioceramics incorporate alumina (Al₂O₃), as their lifespan is longer than that of the patient. The material can be used in inner ear ossicles, ocular prostheses, electrical insulation for pacemakers, catheter orifices and in numerous prototypes of implantable systems such as cardiac pumps. Aluminosilicates are commonly used in dental prostheses, pure or in ceramic-polymer composites. The ceramic-polymer composites are a potential way of filling cavities, replacing amalgams suspected of having toxic effects. The aluminosilicates also have a glassy structure. Contrary to artificial teeth in resin, the colour of tooth ceramic remains stable. Zirconia doped with yttrium oxide has been proposed as a substitute for alumina for osteoarticular prostheses. The main advantages are a greater failure strength and a good resistance to fatigue. Vitreous carbon is also used as it is light, resistant to wear, and compatible with blood. It is mostly used in cardiac valve replacement. Diamond can be used for the same application, but in coating form. Calcium phosphate-based ceramics constitute, at present, the preferred bone substitute in orthopaedic and maxillofacial surgery. They are similar to the mineral phase of bone in structure and/or chemical composition. The material is typically porous, which provides a good bone-implant interface due to the increase of surface area that encourages cell colonisation and revascularisation | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic Additionally, it has lower mechanical strength compared to bone, making highly porous implants very delicate. Since the Young's modulus of ceramics is generally much higher than that of bone tissue, the implant can cause mechanical stresses at the bone interface. Calcium phosphates usually found in bioceramics include hydroxyapatite (HAP), Ca₁₀(PO₄)₆(OH)₂; β-tricalcium phosphate (β-TCP), Ca₃(PO₄)₂; and mixtures of HAP and β-TCP. Table 1: Bioceramics Applications. Table 2: Mechanical Properties of Ceramic Biomaterials. A number of implanted ceramics have not actually been designed for specific biomedical applications. However, they manage to find their way into different implantable systems because of their properties and their good biocompatibility. Among these ceramics, we can cite silicon carbide, titanium nitrides and carbides, and boron nitride. TiN has been suggested as the friction surface in hip prostheses. While cell culture tests show a good biocompatibility, the analysis of implants shows significant wear, related to a delamination of the TiN layer. Silicon carbide is another modern-day ceramic which seems to provide good biocompatibility and can be used in bone implants. In addition to being used for their traditional properties, bioactive ceramics have seen specific use due to their biological activity. Calcium phosphates, oxides, and hydroxides are common examples | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic Other materials, generally of natural or animal origin, such as bioglass and related composites, combine mineral phases such as HAP, alumina, or titanium dioxide with biocompatible polymers such as poly(methyl methacrylate) (PMMA), poly(L-lactic acid) (PLLA), and polyethylene (PE). Composites can be classified as bioresorbable or non-bioresorbable, with the latter resulting from the combination of a non-bioresorbable calcium phosphate (HAP) with a non-bioresorbable polymer (PMMA, PE). These materials may become more widespread in the future, given the many possible combinations and their ability to combine biological activity with mechanical properties similar to those of bone. Bioceramics' anticorrosive, biocompatible, and aesthetic properties make them quite suitable for medical use. Zirconia ceramic is bioinert and non-cytotoxic. Carbon is another alternative with mechanical properties similar to bone, and it also offers blood compatibility, no tissue reaction, and non-toxicity to cells. None of the three bioinert ceramics exhibits bonding with bone; however, bioactivity can be achieved by forming composites with bioactive ceramics. Bioglass and glass-ceramics are non-toxic and bond chemically to bone. Glass-ceramics elicit osteoinductive properties, while calcium phosphate ceramics are likewise non-toxic to tissues and bioresorbable. | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic Ceramic particulate reinforcement has broadened the choice of materials for implant applications to include ceramic/ceramic, ceramic/polymer, and ceramic/metal composites. Among these, ceramic/polymer composites have been found to release toxic elements into the surrounding tissues. Metals face corrosion-related problems, and ceramic coatings on metallic implants degrade over time during lengthy applications. Ceramic/ceramic composites are favoured due to their similarity to bone minerals, their biocompatibility, and their ease of shaping. The biological activity of bioceramics has to be assessed through various "in vitro" and "in vivo" studies, and performance requirements must be considered with respect to the particular site of implantation. Technically, ceramics are produced from raw materials such as powders together with natural or synthetic chemical additives that favour compaction (hot, cold, or isostatic), setting (hydraulic or chemical), or accelerated sintering. Depending on the formulation and shaping process used, bioceramics can vary in density and porosity, taking the form of cements, ceramic depositions, or ceramic composites. A developing processing route based on biomimetic processes aims to imitate natural biological processes and offers the possibility of making bioceramics at ambient temperature rather than through conventional or hydrothermal processes [GRO 96]. | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic The prospect of using these relatively low processing temperatures opens up possibilities for mineral-organic combinations with improved biological properties through the addition of proteins and biologically active molecules (growth factors, antibiotics, anti-tumor agents, etc.). However, these materials have poor mechanical properties, which can be partially improved by combining them with bonding proteins. Common bioactive materials available commercially for clinical use include 45S5 bioactive glass, A/W bioactive glass-ceramic, dense synthetic HA, and bioactive composites such as a polyethylene–HA mixture. All of these materials form an interfacial bond with adjacent tissue. High-purity alumina bioceramics are currently available commercially from various producers. U.K. manufacturer Morgan Advanced Ceramics (MAC) began manufacturing orthopaedic devices in 1985 and quickly became a recognised supplier of ceramic femoral heads for hip replacements. MAC Bioceramics has the longest clinical history for alumina ceramic materials, having manufactured HIP Vitox® alumina since 1985. Some calcium-deficient phosphates with an apatite structure were commercialised as "tricalcium phosphate" even though they did not exhibit the expected crystalline structure of tricalcium phosphate. Currently, numerous commercial products described as HA are available in various physical forms (e.g. granules and blocks designed for specific applications). | https://en.wikipedia.org/wiki?curid=22183423 |
Bioceramic HA/polymer composite (HA/polyethylene, HAPEX™) is also commercially available for ear implants, abrasives, and plasma-sprayed coatings for orthopedic and dental implants. Bioceramics have been proposed as a possible treatment for cancer, with two methods of treatment suggested: hyperthermia and radiotherapy. Hyperthermia treatment involves implanting a bioceramic material that contains a ferrite or another magnetic material; the area is then exposed to an alternating magnetic field, which causes the implant and the surrounding area to heat up. Alternatively, the bioceramic materials can be doped with β-emitting materials and implanted into the cancerous area. Other trends include engineering bioceramics for specific tasks: ongoing research addresses the chemistry, composition, and micro- and nanostructures of the materials to improve their biocompatibility. | https://en.wikipedia.org/wiki?curid=22183423 |
Turbulent diffusion is the transport of mass, heat, or momentum within a system due to random and chaotic time-dependent motions. It occurs when turbulent fluid systems reach critical conditions in response to shear flow, which results from a combination of steep concentration gradients, density gradients, and high velocities. It occurs much more rapidly than molecular diffusion and is therefore extremely important for problems concerning mixing and transport in systems dealing with combustion, contaminants, dissolved oxygen, and solutions in industry. In these fields, turbulent diffusion acts as an excellent process for quickly reducing the concentration of a species in a fluid or environment, in cases where this is needed for rapid mixing during processing, or rapid pollutant or contaminant reduction for safety. However, it has been extremely difficult to develop a concrete and fully functional model that can be applied to the diffusion of a species in all turbulent systems, owing to the inability to characterize both an instantaneous and a predicted fluid velocity simultaneously. In turbulent flow, this is a result of several characteristics such as unpredictability, rapid diffusivity, high levels of fluctuating vorticity, and dissipation of kinetic energy. Atmospheric dispersion, or diffusion, studies how pollutants are mixed in the environment | https://en.wikipedia.org/wiki?curid=22187097 |
Turbulent diffusion There are many factors included in this modeling process, such as the level(s) of the atmosphere in which the mixing takes place, the stability of the environment, and the type of contaminant and source being mixed. The Eulerian and Lagrangian models (discussed below) have both been used to simulate atmospheric diffusion, and are important for a proper understanding of how pollutants react and mix in different environments. Both of these models take into account vertical and horizontal wind, and additionally integrate Fickian diffusion theory to account for turbulence. While these methods rely on idealised conditions and numerous assumptions, it is currently difficult to calculate the effects of turbulent diffusion on pollutants more accurately. Fickian diffusion theory and further advances in research on atmospheric diffusion can be applied to model the effects that current emission rates of pollutants from various sources have on the atmosphere. Using planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV), there has been ongoing research on the effects of turbulent diffusion in flames. Main areas of study include combustion systems in gas burners used for power generation and chemical reactions in jet diffusion flames involving methane (CH₄), hydrogen (H₂), and nitrogen (N₂). Additionally, double-pulse Rayleigh temperature imaging has been used to correlate extinction and ignition sites with changes in temperature and the mixing of chemicals in flames | https://en.wikipedia.org/wiki?curid=22187097 |
Turbulent diffusion The Eulerian approach to turbulent diffusion focuses on an infinitesimal volume at a specific point in space and time in a fixed frame of reference, at which physical properties such as mass, momentum, and temperature are measured. The model is useful because Eulerian statistics are consistently measurable and apply readily to chemical reactions. Like molecular models, it must satisfy the continuity equation below, in which the advection of an element or species is balanced by its diffusion, generation by reaction, and addition from other sources or points, together with the Navier–Stokes equations for the velocity field: $\frac{\partial c_i}{\partial t} + \frac{\partial (c_i u_j)}{\partial x_j} = D_i \frac{\partial^2 c_i}{\partial x_j \partial x_j} + R_i + S_i$, where $c_i$ is the concentration of the species of interest, $u_j$ the velocity, $t$ the time, $x_j$ the spatial coordinate, $D_i$ the molecular diffusion constant, $R_i$ the rate of $c_i$ generated by reaction, and $S_i$ the rate of $c_i$ generated by sources. If we consider an inert species (no reaction) with no sources and assume molecular diffusion to be negligible, only the advection terms on the left-hand side of the equation survive. The solution to this model seems trivial at first; however, we have ignored the random component of the velocity in the decomposition $u_j = \bar{u}_j + u_j'$ that is typically associated with turbulent behaviour. In turn, the concentration solution for the Eulerian model must also have a random component, $c = \bar{c} + c'$. This results in a closure problem of infinitely many variables and equations and makes it impossible to solve for a definite $\bar{c}$ under the assumptions stated | https://en.wikipedia.org/wiki?curid=22187097 |
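To make the closure problem explicit (this averaging step is described, but not shown, in the passage above), one can substitute the decompositions into the advection form of the equation and take the ensemble (Reynolds) average. The block below is a hedged reconstruction of that step, not a verbatim quotation of the original article; the notation matches the symbols introduced above.

```latex
% Reynolds-averaging sketch (reconstructed; not verbatim from the source).
% Substitute u_j = \bar{u}_j + u_j' and c = \bar{c} + c', and use
% \overline{u_j'} = \overline{c'} = 0 when averaging:
\frac{\partial \bar{c}}{\partial t}
  + \bar{u}_j \frac{\partial \bar{c}}{\partial x_j}
  + \frac{\partial}{\partial x_j}\,\overline{u_j' c'} = 0 .
% The covariance \overline{u_j' c'} is a new unknown with no equation of its
% own, so the averaged system is not closed: this is the closure problem.
```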
Turbulent diffusion Fortunately, there exists a closure approximation: introducing the concept of eddy diffusivity, with statistical approximations for the random concentration and velocity components produced by turbulent mixing, $\overline{u_j' c'} = -K_{jj}\,\frac{\partial \bar{c}}{\partial x_j}$ (no summation over $j$), where $K$ is the eddy diffusivity. Substituting into the first continuity equation and ignoring reactions, sources, and molecular diffusion gives the following differential equation, which retains only the eddy-diffusion approximation of turbulent diffusion: $\frac{\partial \bar{c}}{\partial t} + \bar{u}_j \frac{\partial \bar{c}}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left(K_{jj}\,\frac{\partial \bar{c}}{\partial x_j}\right)$. Unlike the molecular diffusion constant $D$, the eddy diffusivity is a matrix quantity that may vary in space, and thus may not be taken outside the outer derivative. The Lagrangian model of turbulent diffusion uses a moving frame of reference to follow the trajectories and displacements of the species as they move, and tracks the statistics of each particle individually. Initially, a particle sits at a location $\mathbf{x}' = (x_1', x_2', x_3')$ at time $t'$. The motion of the particle is described by its probability of being in a specific volume element at time $t$, given by $\Psi(x_1, x_2, x_3, t)\,dx_1\,dx_2\,dx_3 = \Psi(\mathbf{x},t)\,d\mathbf{x}$, which follows a probability density function (pdf) such that $\Psi(\mathbf{x},t) = \int_{\mathbb{R}^3} Q(\mathbf{x},t \mid \mathbf{x}',t')\,\Psi(\mathbf{x}',t')\,d\mathbf{x}'$, where $Q$ is the probability density for particle transition | https://en.wikipedia.org/wiki?curid=22187097 |
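As an illustration of the gradient-transport (K-theory) closure just described, the sketch below integrates a one-dimensional version of the mean-concentration equation with an explicit finite-difference scheme. It is a minimal example under stated assumptions, not code from any cited source; the grid size, wind speed, eddy-diffusivity profile, and step count are arbitrary illustrative values.

```python
import numpy as np

# Minimal 1-D K-theory (eddy diffusivity) sketch: advect and diffuse a mean
# concentration profile c_bar(x, t) with constant wind u_bar and a spatially
# varying eddy diffusivity K(x).  All values are illustrative, not physical.

nx, L = 400, 2000.0            # grid points, domain length [m]
dx = L / nx
x = np.linspace(0.0, L, nx)

u_bar = 2.0                                    # mean wind speed [m/s]
K = 5.0 + 10.0 * np.sin(np.pi * x / L) ** 2    # eddy diffusivity [m^2/s]

dt = 0.4 * min(dx / u_bar, dx ** 2 / (2 * K.max()))   # stable time step
c = np.exp(-((x - 200.0) ** 2) / (2 * 20.0 ** 2))     # initial Gaussian puff

for _ in range(1500):
    # upwind advection term  -u_bar * d(c_bar)/dx
    adv = -u_bar * (c - np.roll(c, 1)) / dx
    # turbulent diffusion term  d/dx ( K d(c_bar)/dx ), conservative form
    flux = K * (np.roll(c, -1) - c) / dx
    diff = (flux - np.roll(flux, 1)) / dx
    c = c + dt * (adv + diff)
    c[0] = c[-1] = 0.0         # simple absorbing boundaries

print("peak mean concentration after transport:", c.max())
```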
Turbulent diffusion The mean concentration at a location $\mathbf{x}$ and time $t$ can then be calculated by summing the probability densities of the individual particles, $\bar{c}(\mathbf{x},t) = \sum_{i=1}^{m} \Psi_i(\mathbf{x},t)$ for $m$ particles, which is evaluated by returning to the pdf integral above. This approach thus evaluates the position and velocity of particles relative to their neighbours and environment, and approximates the random concentrations and velocities associated with turbulent diffusion through the statistics of their motion. Solving the final equations listed above for both the Eulerian and Lagrangian models yields very similar expressions for the average concentration at a location downwind of a continuous source. Both solutions develop into a Gaussian plume and are virtually identical under the assumption that the variances in the x, y, z directions are related to the eddy diffusivity: $\bar{c}(x,y,z) = \frac{q}{2\pi \bar{u}\,\sigma_y \sigma_z}\,\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)\exp\!\left(-\frac{z^2}{2\sigma_z^2}\right)$, where $q$ is the species emission rate, $\bar{u}$ the wind speed, and $\sigma_i$ the plume spread (square root of the variance) in the $i$ direction. Under various external conditions, such as directional flow speed (wind) and environmental conditions, the variances and diffusivities of turbulent diffusion are measured and used to calculate a good estimate of concentrations at a specific point downwind of a source | https://en.wikipedia.org/wiki?curid=22187097 |
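The Gaussian-plume expression above can be evaluated directly. The sketch below is a minimal implementation for a continuous point source without ground-reflection terms; the function name and all numerical inputs (emission rate, wind speed, and the spreads σ_y, σ_z) are placeholder assumptions, and real applications would obtain σ_y, σ_z from atmospheric stability correlations.

```python
import numpy as np

def gaussian_plume(q, u, sigma_y, sigma_z, y, z):
    """Mean concentration from a continuous point source (no reflection terms).

    q       : emission rate [g/s]
    u       : mean wind speed along x [m/s]
    sigma_y : crosswind spread (standard deviation) [m]
    sigma_z : vertical spread (standard deviation) [m]
    y, z    : crosswind and vertical distances from the plume axis [m]
    """
    norm = q / (2.0 * np.pi * u * sigma_y * sigma_z)
    return norm * np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-z**2 / (2 * sigma_z**2))

# Illustrative numbers only: 10 g/s source, 5 m/s wind, spreads of 20 m and 10 m.
c = gaussian_plume(q=10.0, u=5.0, sigma_y=20.0, sigma_z=10.0, y=0.0, z=2.0)
print(f"mean concentration on the plume centreline, 2 m above the axis: {c:.3e} g/m^3")
```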
Turbulent diffusion This model is very useful in atmospheric sciences, especially when dealing with concentrations of air-pollution contaminants that emanate from sources such as combustion stacks, rivers, or strings of automobiles on a road. Because applying mathematical equations to turbulent flow and diffusion is so difficult, research in this area had been lacking until recently. In the past, laboratory efforts used data from steady flow in streams or from fluids with a high Reynolds number flowing through pipes, but it is difficult to obtain accurate data from these methods: they involve ideal flow, which cannot reproduce the conditions of turbulent flow necessary for developing turbulent-diffusion models. With the advancement of computer-aided modeling and programming, scientists have been able to simulate turbulent flow in order to better understand turbulent diffusion in the atmosphere and in fluids. Two main non-intrusive techniques are currently in use in research. The first is planar laser-induced fluorescence (PLIF), which is used to detect instantaneous concentrations at up to one million points per second. This technology can be paired with particle image velocimetry (PIV), which detects instantaneous velocity data. In addition to providing concentration and velocity data, these techniques can be used to deduce spatial correlations and changes in the environment | https://en.wikipedia.org/wiki?curid=22187097 |
Turbulent diffusion As technology and computing power expand rapidly, these methods will also improve greatly, and will most likely be at the forefront of future research on modeling turbulent diffusion. Aside from these efforts, fieldwork has also advanced beyond what was possible before computers became available: real-time monitoring of turbulence, velocity, and currents for fluid mixing is now possible. This research has proved important for studying the mixing cycles of contaminants in turbulent flows, especially for drinking-water supplies. As research techniques and their availability increase, many new areas are showing interest in applying these methods. Studying how robots or computers can detect odours and contaminants in a turbulent flow is one area likely to attract considerable research interest; such studies could support recent work on placing sensors in aircraft cabins to detect biological weapons and/or viruses effectively. | https://en.wikipedia.org/wiki?curid=22187097 |
Psychochemical warfare involves the use of psychopharmacological agents (mind-altering drugs or chemicals) with the intention of incapacitating an adversary through the temporary induction of hallucinations or delirium. These agents have generally been considered chemical weapons and, more narrowly, constitute a specific type of incapacitating agent. Although never developed into an effective weapons system, psychochemical warfare theory and research—along with overlapping mind control drug research—was secretly pursued in the mid-20th century by the US military and Central Intelligence Agency (CIA) in the context of the Cold War. These research programs were ended when they came to light and generated controversy in the 1970s. The degree to which the Soviet Union developed or deployed similar agents during the same period remains largely unknown. The use of chemicals to induce altered states of mind dates back to antiquity and includes the use of plants such as thornapple ("Datura stramonium") that contain combinations of anticholinergic alkaloids. In 184 B.C., Hannibal's army used belladonna plants to induce disorientation. Records indicate that in 1611, in the British Jamestown Colony of Virginia, an unidentified, but toxic and hallucinogenic, drug derived from local plants was deployed with some success against the white settlers by Chief Powhatan | https://en.wikipedia.org/wiki?curid=22191978 |
Psychochemical warfare In 1881, members of a French railway surveying expedition crossing Tuareg territory in North Africa ate dried dates that tribesmen had apparently deliberately contaminated with Egyptian henbane ("Hyoscyamus muticus", or "H. falezlez"), to devastating effect. In the 1950s, the CIA investigated LSD (lysergic acid diethylamide) as part of its Project MKUltra. In the same period, the US Army undertook the secret Edgewood Arsenal human experiments which grew out of the U.S. chemical warfare program and involved studies of several hundred volunteer test subjects. Britain was also investigating the possible use of LSD and the chemical BZ (3-quinuclidinyl benzilate) as nonlethal battlefield drug-weapons. The United States eventually weaponized BZ for delivery in the M43 BZ cluster bomb until stocks were destroyed in 1989. Both the US and Britain concluded that the desired effects of drug weapons were unpredictable under battlefield conditions and gave up experimentation. Reports of drug weapons associated with the Soviet bloc have been considered unreliable given the apparent absence of documentation in state archives. Hungarian researcher Lajos Rosza wrote that records of Hungary's State Defense Council meetings from 1962 to 1978 suggest that the Warsaw Pact forum had considered a psychochemical agent such as methylamphetamine as a possible weapon. | https://en.wikipedia.org/wiki?curid=22191978 |
Signorini problem The Signorini problem is an elastostatics problem in linear elasticity: it consists in finding the elastic equilibrium configuration of an anisotropic non-homogeneous elastic body, resting on a rigid frictionless surface and subject only to its mass forces. The name was coined by Gaetano Fichera to honour his teacher, Antonio Signorini: the original name, coined by Signorini himself, was "problem with ambiguous boundary conditions". The problem was posed by Antonio Signorini during a course taught at the "Istituto Nazionale di Alta Matematica" in 1959 and later published as an article, expanding a previous short exposition he had given in a note published in 1933. Signorini called it the "problem with ambiguous boundary conditions" since there are two alternative sets of boundary conditions the solution "must satisfy" at any given contact point. The statement of the problem involves not only equalities "but also inequalities", and "it is not a priori known which of the two sets of boundary conditions is satisfied at each point". Signorini asked whether the problem is well-posed in a physical sense, i.e. whether its solution exists and is unique, and explicitly invited young analysts to study the problem. Gaetano Fichera and Mauro Picone attended the course, and Fichera started to investigate the problem: since he found no references to similar problems in the theory of boundary value problems, he decided to approach it starting from first principles, specifically from the virtual work principle | https://en.wikipedia.org/wiki?curid=22194510 |
Signorini problem During Fichera's research on the problem, Signorini began to suffer serious health problems; nevertheless, he wished to know the answer to his question before his death. Picone, who was bound to Signorini by a strong friendship, pressed Fichera to find a solution; Fichera, tied to Signorini by similar feelings, experienced the last months of 1962 as worrying days. Finally, in the first days of January 1963, Fichera was able to give a complete proof of the existence of a unique solution for the problem with ambiguous boundary conditions, which he called the "Signorini problem" to honour his teacher. A preliminary research announcement was written up and submitted to Signorini exactly a week before his death, and Signorini expressed great satisfaction at seeing a solution to his question. A few days later, during a conversation with his family doctor, Damiano Aprile, Signorini spoke of the result. The solution of the Signorini problem is considered to coincide with the birth of the field of variational inequalities. The content of this section and the following subsections follows closely Gaetano Fichera's treatment, whose derivation of the problem differs from Signorini's in that he does not consider only incompressible bodies and a plane rest surface, as Signorini did | https://en.wikipedia.org/wiki?curid=22194510 |
Signorini problem The problem consists in finding the displacement vector from the natural configuration, $\mathbf{u}(\mathbf{x}) = (u_1(\mathbf{x}), u_2(\mathbf{x}), u_3(\mathbf{x}))$, of an anisotropic non-homogeneous elastic body that occupies a subset $A$ of three-dimensional Euclidean space, whose boundary is $\partial A$ and whose interior normal is the vector $\mathbf{n}$; the body rests on a rigid frictionless surface whose contact surface (or, more generally, contact set) is $\Sigma$, and is subject only to its body forces $\mathbf{f} = (f_1, f_2, f_3)$ and to surface forces $\mathbf{g} = (g_1, g_2, g_3)$ applied on the free surface (i.e. the part not in contact with the rest surface) $\partial A \setminus \Sigma$. The set $A$ and the contact surface $\Sigma$ characterize the natural configuration of the body and are known a priori. Therefore, the body has to satisfy the general equilibrium equations (written using the Einstein notation, as throughout the following development), the ordinary boundary conditions on $\partial A \setminus \Sigma$, and the following two sets of boundary conditions on $\Sigma$, where $\sigma = (\sigma_{ij})$ is the Cauchy stress tensor. Obviously, the body forces and surface forces cannot be prescribed arbitrarily: they must satisfy a condition for the body to reach an equilibrium configuration. This condition will be deduced and analyzed in the following development | https://en.wikipedia.org/wiki?curid=22194510 |
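The display equations referenced in the passage above did not survive extraction. The block below is a hedged reconstruction of the standard linear-elastostatic relations in the notation just introduced; the exact symbols and sign conventions of the original article may differ, in particular because the normal used here is the interior one.

```latex
% Reconstructed sketch, not verbatim from the source.
% Equilibrium in the interior of A:
\frac{\partial \sigma_{ij}}{\partial x_j} + f_i = 0 \quad \text{in } A, \qquad i = 1, 2, 3,
% Ordinary (traction) boundary condition on the free surface:
\sigma_{ij}\, n_j = g_i \quad \text{on } \partial A \setminus \Sigma .
% The stress is related to the displacement field u through the linear,
% anisotropic, non-homogeneous elasticity law
% \sigma_{ij} = a_{ijkh}(x)\,\varepsilon_{kh}(\mathbf{u}), with
% \varepsilon_{kh}(\mathbf{u}) = \tfrac12\,(\partial u_k/\partial x_h + \partial u_h/\partial x_k).
```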
Signorini problem If $\boldsymbol{\tau}$ is any tangent vector to the contact set $\Sigma$, then the ambiguous boundary conditions at each point of this set are expressed by the following two systems of inequalities. Their meaning is as follows. The first set of conditions applies to points of the boundary of the body which "do not" leave the contact set $\Sigma$ in the equilibrium configuration, since, according to the first relation, the displacement vector $\mathbf{u}$ "has no component" directed as the normal vector $\mathbf{n}$, while, according to the second relation, the tension vector "may have a component" directed as the normal vector $\mathbf{n}$ and having the same sense. In an analogous way, the second set of conditions applies to points of the boundary which "leave" that set in the equilibrium configuration, since the displacement vector $\mathbf{u}$ "has a component" directed as the normal vector $\mathbf{n}$, while the tension vector "has no component" directed as the normal vector $\mathbf{n}$. For both sets of conditions, the tension vector has no component tangent to the contact set, according to the hypothesis that the body rests on a rigid "frictionless" surface | https://en.wikipedia.org/wiki?curid=22194510 |
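The two systems of inequalities themselves were lost in extraction. The block below is a hedged reconstruction of the classical Signorini (ambiguous) boundary conditions on Σ, written in the notation used above and consistent with the verbal description in the passage; the exact form in the original article may differ.

```latex
% Reconstructed sketch of the ambiguous (Signorini) boundary conditions on \Sigma.
% First system: points that remain in contact with the rigid support,
u_i\, n_i = 0, \qquad \sigma_{ij}\, n_j\, n_i \ge 0,
% Second system: points that leave the support in the equilibrium configuration,
u_i\, n_i > 0, \qquad \sigma_{ij}\, n_j\, n_i = 0,
% and, in both cases, the traction has no tangential component (frictionless support):
\sigma_{ij}\, n_j\, \tau_i = 0 \quad \text{for every tangent vector } \boldsymbol{\tau} \text{ to } \Sigma .
```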