id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
27,461,135
https://en.wikipedia.org/wiki/Unified%20Communications%20Interoperability%20Forum
The Unified Communications Interoperability Forum (UCIF) is a non-profit alliance between communications technology vendors. It was announced on May 19, 2010, with the vision of maximizing the interoperability of UC based on existing standards. Founding members of UCIF were HP, Microsoft, Polycom, Logitech / LifeSize Communications, and Juniper Networks. On July 28, 2014, UCIF merged with the International Multimedia Telecommunications Consortium (IMTC) into one consortium. Unified communications Unified communications (UC) is the integration of real-time communication services such as instant messaging (chat), presence information, telephony (including IP telephony), video conferencing, call control, and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS, and fax). UC is not a single product, but a set of products that provides a consistent unified user interface and user experience across multiple devices and media types. UC also refers to a trend toward offering business process integration, i.e. simplifying and integrating all forms of communications with a view to optimizing business processes, reducing response times, managing flows, and eliminating device and media dependencies. Members The original founding members were HP, Juniper Networks, Logitech / LifeSize Communications, Microsoft, and Polycom. Other members were Acme Packet, Huawei, Aspect, AudioCodes, Broadcom, BroadSoft, Brocade Communications Systems, ClearOne, Jabra, Plantronics, RADVISION, Siemens Enterprise Communications, Teliris, Vidyo, and VOSS Solutions. At launch, news outlets drew attention to the absence of Cisco and Avaya from the member list, though UCIF had invited them to join as early members. On July 28, 2014, UCIF merged with the International Multimedia Telecommunications Consortium (IMTC) into one consortium. See also Unified communications Telepresence Unified messaging List of unified communications companies References External links UCIF Official website IMTC Official website Human–computer interaction Teleconferencing Videotelephony Organizations established in 2010 Organizations disestablished in 2014
Unified Communications Interoperability Forum
[ "Engineering" ]
439
[ "Human–computer interaction", "Human–machine interaction" ]
27,461,561
https://en.wikipedia.org/wiki/Theories%20of%20cloaking
Theories of cloaking discusses various theories based on science and research, for producing an electromagnetic cloaking device. Theories presented employ transformation optics, event cloaking, dipolar scattering cancellation, tunneling light transmittance, sensors and active sources, and acoustic cloaking. A cloaking device is one where the purpose of the transformation is to hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (see Metamaterial cloaking) or sound waves. Objects in the defined location are still present, but incident waves are guided around them without being affected by the object itself. Along with this basic "cloaking device", other related concepts have been proposed in peer reviewed, scientific articles, and are discussed here. Naturally, some of the theories discussed here also employ metamaterials, either electromagnetic or acoustic, although often in a different manner than the original demonstration and its successor, the broad-band cloak. The first electromagnetic cloak The first electromagnetic cloaking device was produced in 2006, using gradient-index metamaterials. This has led to the burgeoning field of transformation optics (and now transformation acoustics), where the propagation of waves is precisely manipulated by controlling the behaviour of the material through which the light (sound) is travelling. Ordinary spatial cloaking Waves and the host material in which they propagate have a symbiotic relationship: both act on each other. A simple spatial cloak relies on fine tuning the properties of the propagation medium in order to direct the flow smoothly around an object, like water flowing past a rock in a stream, but without reflection, or without creating turbulence. Another analogy is that of a flow of cars passing a symmetrical traffic island – the cars are temporarily diverted, but can later reassemble themselves into a smooth flow that holds no information about whether the traffic island was small or large, or whether flowers or a large advertising billboard might have been planted on it. Although both analogies given above have an implied direction (that of the water flow, or of the road orientation), cloaks are often designed so as to be isotropic, i.e. to work equally well for all orientations. However, they do not need to be so general, and might only work in two dimensions, as in the original electromagnetic demonstration, or only from one side, as for the so-called carpet cloak. Spatial cloaks have other characteristics: whatever they contain can (in principle) be kept invisible forever, since an object inside the cloak may simply remain there. Signals emitted by the objects inside the cloak that are not absorbed can likewise be trapped forever by its internal structure. If a spatial cloak could be turned off and on again at will, the objects inside would then appear and disappear accordingly. Space-time cloaking The event cloak is a means of manipulating electromagnetic radiation in space and time in such a way that a certain collection of happenings, or events, is concealed from distant observers. Conceptually, a safecracker can enter a scene, steal the cash and exit, whilst a surveillance camera records the safe door locked and undisturbed all the time. The concept utilizes the science of metamaterials in which light can be made to behave in ways that are not found in naturally occurring materials. 
The event cloak works by designing a medium in which different parts of the light illuminating a certain region can be either slowed or accelerated. A leading portion of the light is accelerated so that it arrives before the events occur, whilst a trailing part is slowed and arrives too late. After their occurrence, the light is reformed by slowing the leading part and accelerating the trailing part. The distant observer only sees a continuous illumination, whilst the events that occurred during the dark period of the cloak's operation remain undetected. The concept can be related to traffic flowing along a highway: at a certain point some cars are sped up, whilst the ones behind are slowed. The result is a temporary gap in the traffic allowing a pedestrian to cross. After this, the process can be reversed so that the traffic resumes its continuous flow without a gap. Regarding the cars as light particles (photons), the act of the pedestrian crossing the road is never suspected by the observer down the highway, who sees an uninterrupted and unperturbed flow of cars. For absolute concealment, the events must be non-radiating. If they do emit light during their occurrence (e.g. by fluorescence), then this light is received by the distant observer as a single flash. Applications of the event cloak include the possibility of achieving 'interrupt-without-interrupt' in data channels that converge at a node. A primary calculation can be temporarily suspended to process priority information from another channel. Afterwards, the suspended channel can be resumed in such a way that it appears never to have been interrupted. The idea of the event cloak was first proposed by a team of researchers at Imperial College London (UK) in 2010, and published in the Journal of Optics. An experimental demonstration of the basic concept using nonlinear optical technology has been presented in a preprint on the Cornell physics arXiv. This uses time lenses to slow down and speed up the light, and thereby improves on the original proposal from McCall et al., which instead relied on the nonlinear refractive index of optical fibres. The experiment reports a cloaked time interval of about 10 picoseconds, and suggests that extension into the nanosecond and microsecond regimes should be possible. An event cloaking scheme that requires a single dispersive medium (instead of two successive media with opposite dispersion) has also been proposed based on accelerating wavepackets. The idea is based on modulating a part of a monochromatic light wave with a discontinuous nonlinear frequency chirp so that two opposite accelerating caustics are created in space–time as the different frequency components propagate at different group velocities in the dispersive medium. Due to the structure of the frequency chirp, the expansion and contraction of the time gap happen continuously in the same medium, thus creating a biconvex time gap that conceals the enclosed events. Anomalous localized resonance cloaking In 2006, the same year as the first metamaterial cloak, another type of cloak was proposed. This type of cloaking exploits the resonance of light waves while matching the resonance of another object. In particular, a particle placed near a superlens would appear to disappear, as the light surrounding the particle resonates at the same frequency as the superlens. The resonance would effectively cancel out the light reflecting from the particle, rendering the particle electromagnetically invisible. 
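To make the time gap of the event cloak described above concrete, a rough back-of-envelope relation (standard dispersion reasoning, not stated in the article, with purely illustrative numbers) connects the cloaked interval to the split in group velocities accumulated over a propagation length L:
\[
\Delta t \;=\; L\left(\frac{1}{v_{\text{slow}}}-\frac{1}{v_{\text{fast}}}\right)\;=\;\frac{L\,\Delta n_g}{c}.
\]
For a hypothetical kilometre of fibre with a group-index split of \(\Delta n_g = 3\times10^{-6}\), this gives \(\Delta t \approx (10^{3}\,\text{m})(3\times10^{-6})/(3\times10^{8}\,\text{m/s}) = 10\) ps, the same order as the roughly 10 picosecond cloaked interval reported for the time-lens demonstration described above.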
Cloaking objects at a distance In 2009, a passive cloaking device was designed to be an 'external invisibility device' that leaves the concealed object out in the open so that it can 'see' its surroundings. This is based on the premise that cloaking research has not adequately provided a solution to an inherent problem: because no electromagnetic radiation can enter or leave the cloaked space, the object concealed by the cloak is unable to see, or otherwise detect, anything outside the cloaked space. Such a cloaking device is also capable of 'cloaking' only parts of an object, such as opening a virtual peep hole in a wall so as to see the other side. The traffic analogy used above for the spatial cloak can be adapted (albeit imperfectly) to describe this process. Imagine that a car has broken down in the vicinity of the roundabout, and is disrupting the traffic flow, causing cars to take different routes or creating a traffic jam. This exterior cloak corresponds to a carefully misshapen roundabout which manages to cancel or counteract the effect of the broken-down car, so that as the traffic flow departs, there is again no evidence in it of either the roundabout or of the broken-down car. Plasmonic cover The plasmonic cover, mentioned alongside metamaterial covers (see plasmonic metamaterials), theoretically utilizes plasmonic resonance effects to reduce the total scattering cross section of spherical and cylindrical objects. These are lossless metamaterial covers near their plasma resonance which could possibly induce a dramatic drop in the scattering cross section, making these objects nearly "invisible" or "transparent" to an outside observer. Low-loss, or even lossless, passive covers that do not require high dissipation might be utilized, relying on a completely different mechanism. Materials with either negative or low-value constitutive parameters are required for this effect. Certain metals near their plasma frequency, or metamaterials with negative parameters, could fill this need. For example, several noble metals achieve this requirement because of their electrical permittivity at infra-red or visible wavelengths with relatively low loss. Currently, only microscopically small objects could possibly appear transparent. These materials are further described as homogeneous, isotropic metamaterial covers near the plasma frequency that dramatically reduce the fields scattered by a given object. Furthermore, these do not require any absorptive process, any anisotropy or inhomogeneity, nor any interference cancellation. The "classical theory" of metamaterial covers works with light of only one specific frequency. More recent research by Kort-Kamp et al., who won the 2013 "School on Nonlinear Optics and Nanophotonics" prize, shows that it is possible to tune the metamaterial to different light frequencies. Tunneling light transmission cloak As implied by the nomenclature, this is a type of light transmission. Transmission of light (EM radiation) through an object such as a metallic film occurs with the assistance of tunnelling between resonating inclusions. This effect can be created by embedding a periodic configuration of dielectrics in a metal, for example. When the resulting transmission peaks are created and observed, interactions between the dielectrics and interference effects cause mixing and splitting of the resonances. With an effective permittivity close to unity, the results can be used to propose a method for turning the resulting materials invisible. 
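For context on the plasmonic cover described above, the permittivity behaviour it relies on is commonly captured by the lossless Drude model (standard background, not given explicitly in the article):
\[
\varepsilon_r(\omega) \;=\; 1-\frac{\omega_p^{2}}{\omega^{2}},
\]
so just below the plasma frequency \(\omega_p\) the relative permittivity is small or negative, which is exactly the low-value or negative constitutive parameter the cover requires; noble metals reach this regime at visible and infra-red wavelengths with relatively low loss, as noted above.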
More research in cloaking technology There are other proposals for the use of cloaking technology. In 2007, cloaking with metamaterials was reviewed and its deficiencies were presented. At the same time, theoretical solutions were presented that could improve the capability to cloak objects. Later in 2007, a mathematical improvement of the cylindrical shielding to produce an electromagnetic "wormhole" was analyzed in three dimensions. Electromagnetic wormholes, as optical (not gravitational) devices derived from cloaking theories, have potential applications for advancing some current technology. Other advances may be realized with an acoustic superlens. In addition, acoustic metamaterials have realized negative refraction for sound waves. Possible advances could be enhanced ultrasound scans, sharper sonic medical scans, seismic maps with more detail, and buildings no longer susceptible to earthquakes. Underground imaging may be improved with finer details. The acoustic superlens, acoustic cloaking, and acoustic metamaterials translate into novel applications for focusing, or steering, sonic waves. Acoustic cloaking technology could be used to stop a sonar-using observer from detecting the presence of an object that would normally be detectable as it reflects or scatters sound waves. Ideally, the technology would encompass a broad spectrum of vibrations on a variety of scales. The range might be from miniature electronic or mechanical components up to large earthquakes. Although most progress has been made on mathematical and theoretical solutions, a laboratory metamaterial device for evading sonar has recently been demonstrated. It can be applied to sound frequencies from 40 to 80 kHz. Cloaking also applies to waves in bodies of water. A theory has been developed for a cloak that could "hide", or protect, man-made platforms, ships, and natural coastlines from destructive ocean waves, including tsunamis. See also Chirality (electromagnetism) Invisibility Metamaterial absorber Metamaterial antennas Negative index metamaterials Nonlinear metamaterials Photonic metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Tunable metamaterials Books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations References Metamaterials Theoretical physics
Theories of cloaking
[ "Physics", "Materials_science", "Engineering" ]
2,494
[ "Theoretical physics", "Metamaterials", "Materials science" ]
27,467,080
https://en.wikipedia.org/wiki/Abramov%20reaction
The Abramov reaction is the conversion of trialkyl phosphites to α-hydroxy phosphonates by addition to carbonyl compounds. In terms of mechanism, the reaction involves attack of the nucleophilic phosphorus atom on the carbonyl carbon. It was named after the Russian chemist Vasilii Semenovich Abramov (1904–1968) in 1957. Introduction Electron-rich sources of phosphorus such as phosphites, phosphonites, and phosphinites may undergo nucleophilic addition to carbon atoms in simple carbonyl compounds. When fully esterified phosphites are used (Abramov reaction), neutralization of the resulting tetrahedral intermediate usually occurs via the transfer of an alkyl or silyl group from an oxygen attached to phosphorus to the newly created alkoxide center. Conjugate addition is also possible, and gives γ-functionalized carbonyl compounds or enol ethers after group transfer. The use of siloxy-containing phosphorus sources has greatly expanded the scope of this reaction, as the resulting α-siloxy compounds can be converted into the corresponding α-hydroxy derivatives in the presence of an alcoholic solvent. (1) Mechanism and stereochemistry Prevailing mechanism Phosphites add reversibly to the carbonyl carbon of simple carbonyl compounds. Under mild conditions, reversion to the starting materials is faster than both inter- and intramolecular alkyl group transfer—the four-center transition state for intramolecular transfer exhibits poor orbital overlap. Transfer can be facilitated under conditions of high temperature or pressure. If two equivalents of aldehyde are used, addition of the tetrahedral intermediate to a second molecule of aldehyde leads either to cyclic phosphoranes 1 or linear alkyl transfer products 2. More practical is the use of silylated phosphorus sources, which undergo intramolecular silyl group transfer in a frontside fashion, providing α-siloxy phosphorus compounds 3. (3) Scope and limitations Phosphorus reagents Phosphites are commonly used to generate α-hydroxy phosphonates. In the presence of two equivalents of aldehyde, cyclic phosphoranes 1 (equation 3) predominate, but these can be easily hydrolyzed to give the corresponding hydroxy phosphonates. (6) When phosphonous acids are employed in the presence of catalytic amounts of base, phosphine oxides can result. The sodium salts of phosphonous acids have historically worked well in this context, and bases such as sodium amide have been used. However, asymmetric induction and selective direct addition (for conjugated carbonyl compounds) can be achieved in the presence of chiral amine bases. (7) The discovery and use of silylated phosphorus reagents in this reaction represented a methodological advance. Selective silyl group transfer occurs in mixed reagents, and cleavage of the resulting silicon-oxygen bonds can often be accomplished hydrolytically, providing access to α-hydroxy derivatives. Alkylation of α-siloxy products provides a convenient route to otherwise difficult-to-access α-alkoxy phosphorus compounds. They can function as acyl anion equivalents when deprotonated, and give ketones after elimination under basic conditions. (8) Carbonyl substrates Simple ketones and aldehydes readily undergo addition of phosphites at the carbonyl carbon. In one interesting application, addition to ketenes gives products identical to those of the Arbuzov reaction of acid halides. (9) α,β-Unsaturated ketones and aldehydes also undergo the reaction. Dienyl carbonyl substrates can experience 1,6-addition, as in the example below. 
(10) Imines can also undergo the reaction (Pudovik reaction), affording α-alkylamino phosphonates. Primary amines can be produced only after acidic hydrolysis of an intermediate tert-butylamine; the use of unsubstituted imines requires very harsh conditions and gives low yields. (11) Synthetic utility The α-hydroxy alkylphosphonates produced by this method can be used for additional transformations. The original carbonyl carbon is acidified by its proximity to the phosphonate group. Deprotonation at this position generates a masked acyl anion, as the phosphonate functionality can be removed after the anion reacts. Phosphonate anions can undergo alkylation and olefination (the Horner-Wadsworth-Emmons reaction). When α-amino alkylphosphonates are employed in olefination, the resulting enamines can be hydrolyzed to ketones. (12) Addition to unsaturated carbonyl compounds and deprotonation affords homoenolate equivalents. Comparison with other methods Silylated phosphite reagents are some of the most efficient for the production of α-hydroxyphosphonates. However, a few other methods exist to make these compounds. For instance, the phosphate-phosphonate rearrangement gives α-hydroxyphosphonates via a three-membered cyclic intermediate. (13) Experimental conditions and procedures Generally, phosphorus addition reactions are operationally simple. Solutions of reagents in polar (acetonitrile, ethanol, tert-butanol) and non-polar (benzene) solvents may be used. Acid catalysis may be needed for additions of phosphite diesters or for in situ formation of imines. Base catalysis can also be employed in the former case. Distillation is generally sufficient to isolate pure products. See also Michaelis–Arbuzov reaction - the reaction of a trialkyl phosphite and an alkyl halide to form a phosphonate. References Name reactions
Abramov reaction
[ "Chemistry" ]
1,243
[ "Coupling reactions", "Name reactions", "Organic reactions" ]
27,470,361
https://en.wikipedia.org/wiki/Electrophilic%20amination
Electrophilic amination is a chemical process involving the formation of a carbon–nitrogen bond through the reaction of a nucleophilic carbanion with an electrophilic source of nitrogen. Introduction Electrophilic amination reactions can be classified as either additions or substitutions. Although the resulting product is not always an amine, these reactions are unified by the formation of a carbon–nitrogen bond and the use of an electrophilic aminating agent. A wide variety of electrophiles have been used; for substitutions, these are most commonly amines substituted with electron-withdrawing groups: chloramines, hydroxylamines, hydrazines, and oxaziridines, for instance. Addition reactions have employed imines, oximes, azides, azo compounds, and others. Mechanism and stereochemistry Prevailing mechanisms A nitrogen bound to both a good electrofuge and a good nucleofuge is known as a nitrenoid (for its resemblance to a nitrene). Nitrenes lack a full octet of electrons and are thus highly electrophilic; nitrenoids exhibit analogous behavior and are often good substrates for electrophilic amination reactions. Nitrenoids can be generated from O-alkylhydroxylamines containing an N−H bond via deprotonation or from O-alkyloximes via nucleophilic addition. These intermediates react with carbanions to give substituted amines. Other electron-deficient, sp3 amination reagents react by similar mechanisms to give substitution products. In aminations involving oxaziridines, nucleophilic attack takes place on the nitrogen atom of the three-membered ring. For some substrates (α-cyano ketones, for example), the resulting alkoxide reacts further to afford unexpected products. Straightforward β-elimination of the alkoxide leads to the formation of an amine. Additions across pi bonds appear to proceed by typical nucleophilic addition pathways in most cases. Alkyl-, aryl-, and heteroaryllithium reagents add to azides to afford triazene salts. Reduction of these salts leads to amines, although they also may be converted to azides upon acidic workup with overall elimination of sulfinic acid. Enantioselective variants The most synthetically useful aminations of enolate anions employ N-acyloxazolidinone substrates. The chiral auxiliaries on these compounds are easily removed after hydrazine formation (with azo compounds) or azidation (with trisyl azide). Azidation using the latter reagent is more efficient than bromination followed by nucleophilic substitution by the azide anion. Palladium on carbon and hydrogen gas reduce both azide and hydrazide products (the latter only after conversion to the hydrazine). Scope and limitations Aminating reagents Electrophilic aminating reagents rely on the presence of an electron-withdrawing functional group attached to nitrogen. A variety of hydroxylamine derivatives have been used for this purpose. Sulfonylhydroxylamines are able to aminate a wide array of carbanions. Azo compounds afford hydrazines after addition to the N=N bond. These additions have been rendered enantioselective through the use of chiral auxiliaries (see above) and chiral catalysts. Although the enantioselectivity of the proline-catalyzed process is good, yields are low and reaction times are long. Upon treatment with sulfonyl azides, a variety of Grignard reagents or enolates may be converted into azides or amines. 
A significant side reaction that occurs under these conditions is the diazo transfer reaction: instead of fragmenting into an azide and sulfinic acid, the intermediate triazene salt may break down to a diazo compound and sulfonamide. Changing workup conditions may favor one product over another. In general, for reactions of enolates substituted with Evans oxazolidinones, trifluoroacetic acid promotes diazo transfer while acetic acid encourages azidation (the reasons for this are unclear). Solvent and the enolate counterion also influence the observed ratio of diazo to azide products. Other electrophilic aminating reagents include oxaziridines, diazo compounds, and in rare cases, imines. Organometallic substrates The scope of organometallic reagents that may be aminated by electrophilic methods is large. Alkyl Grignard reagents, alkyllithium compounds, alkylzinc compounds, and alkylcuprates have been aminated with electrophilic reagents successfully. Among sp2-centered carbanions, vinyllithium compounds, vinylcuprates, and vinyl Grignard reagents react with electrophilic aminating reagents to afford enamines. Aryl and heteroaryl organolithium reagents undergo efficient electrophilic amination under copper(I)-catalyzed conditions mediated by recoverable silicon reagents, termed siloxane transfer agents. The scope of sp-centered carbanions is limited to alkynylcuprates. Enolates and silyl enol ethers, the most widely used class of carbon nucleophiles in electrophilic amination reactions, participate in amination, azidation, and hydrazination reactions. The primary application of alkylmetal reagents in electrophilic amination reactions is the synthesis of hindered amines, many of which are difficult to prepare through nucleophilic displacement with an alkyl halide (nucleophilic amination). For instance, in the presence of a copper(II) catalyst, bulky organozinc reagents react with O-acylhydroxylamines to afford hindered amines. Allylic metal species can be used to prepare allylic amines through electrophilic amination. Although allylic amines are usually prepared through nucleophilic amination of allylic halides, a few examples of electrophilic amination of allylic substrates are known. In the example below, an allylic zirconium reagent (obtained by hydrozirconation) is trapped with an O-alkylhydroxylamine. The electrophilic amination of enolates yields α-amino carbonyl compounds. When chiral oxazolidinones are used in conjunction with azo compounds, enantioselectivity is observed (see above). BINAP can also be used for this purpose in the amination of silyl enol ethers. Aryl and heteroaryl organometallic reagents undergo many of the same transformations as their aliphatic counterparts. Formation of amines, hydrazines, and azides is possible through the use of various electrophilic aminating reagents. An example employing a nitrenoid reagent is shown below. Intramolecular amination is possible, and has been used to prepare small and medium rings. In the example below, deprotonation of an activated methylene compound containing an O-phosphinoylhydroxylamine led to the cyclic amine shown. Comparison with other methods Several other methods for the electrophilic formation of C–N bonds are available. Nitrites and nitrates can be used to form oximes and nitro compounds, respectively. Additionally, organoboranes can serve the role of the nucleophile and often provide higher yields with fewer complications than analogous carbanions. 
The Neber rearrangement offers an alternative to electrophilic amination through the intermediacy of an azirine. Typical conditions The wide variety of electrophilic aminating reagents precludes generalization of reaction conditions. Electrophilic nitrogen sources are, however, either toxic or explosive in general. Great care should be taken while handling these reagents. Many electrophilic nitrogen sources do not provide amines immediately, but a number of methods exist to generate the corresponding amines. Tosylamines: tributyltin hydride Azo compounds: H2/Pd Triazenes: sodium borohydride Azides: H2/Pd, H2/Pt, lithium aluminum hydride, triphenylphosphine Conversion to other nitrogen-containing functionality, including enamines, imines, and amides, is also possible. References Organic reactions
Electrophilic amination
[ "Chemistry" ]
1,813
[ "Organic reactions" ]
27,470,863
https://en.wikipedia.org/wiki/Transformation%20optics
Transformation optics is a branch of optics which applies metamaterials to produce spatial variations, derived from coordinate transformations, which can direct chosen bandwidths of electromagnetic radiation. This can allow for the construction of new composite artificial devices, which probably could not exist without metamaterials and coordinate transformation. Computing power that became available in the late 1990s enables prescribed quantitative values for the permittivity and permeability, the constitutive parameters, which produce localized spatial variations. The aggregate value of all the constitutive parameters produces an effective value, which yields the intended or desired results. Hence, complex artificial materials, known as metamaterials, are used to produce transformations in optical space. The mathematics underpinning transformation optics is similar to the equations that describe how gravity warps space and time, in general relativity. However, instead of space and time, these equations show how light can be directed in a chosen manner, analogous to warping space. For example, one potential application is collecting sunlight with novel solar cells by concentrating the light in one area. Hence, a wide array of conventional devices could be markedly enhanced by applying transformation optics. Coordinate transformations Transformation optics has its beginnings in two research endeavors, and their conclusions. They were published on May 25, 2006, in the same issue of the peer-reviewed journal Science. The two papers describe tenable theories on bending or distorting light to electromagnetically conceal an object. Both papers notably map the initial configuration of the electromagnetic fields on to a Cartesian mesh. Twisting the Cartesian mesh, in essence, transforms the coordinates of the electromagnetic fields, which in turn conceal a given object. Hence, with these two papers, transformation optics is born. Transformation optics subscribes to the capability of bending light, or electromagnetic waves and energy, in any preferred or desired fashion, for a desired application. Maxwell's equations do not vary even though coordinates transform. Instead values of chosen parameters of materials "transform", or alter, during a certain time period. Transformation optics developed from the capability to choose which parameters for a given material, known as a metamaterial. Hence, since Maxwell's equations retain the same form, it is the successive values of permittivity and permeability that change, over time. Permittivity and permeability are in a sense responses to the electric and magnetic fields of a radiated light source respectively, among other descriptions. The precise degree of electric and magnetic response can be controlled in a metamaterial, point by point. Since so much control can be maintained over the responses of the material, this leads to an enhanced and highly flexible gradient-index material. Conventionally predetermined refractive index of ordinary materials become independent spatial gradients, that can be controlled at will. Therefore, transformation optics is a new method for creating novel and unique optical devices. Transformation optics can go beyond cloaking (mimic celestial mechanics) because its control of the trajectory and path of light is highly effective. Transformation optics is a field of optical and material engineering and science embracing nanophotonics, plasmonics, and optical metamaterials. 
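The recipe sketched above has a compact standard form in the transformation optics literature (it is not written out in this article, so the following is supplied for clarity). If \(\Lambda\) is the Jacobian of the coordinate transformation, Maxwell's equations keep their form provided the material tensors are rescaled as
\[
\varepsilon' \;=\; \frac{\Lambda\,\varepsilon\,\Lambda^{T}}{\det\Lambda},
\qquad
\mu' \;=\; \frac{\Lambda\,\mu\,\Lambda^{T}}{\det\Lambda},
\]
so the distortion of the mesh is carried entirely by the permittivity and permeability. For the classic cylindrical cloak, the usual choice is the linear radial map
\[
r' \;=\; R_1 + r\,\frac{R_2-R_1}{R_2},
\qquad \theta'=\theta,\qquad z'=z,
\]
which pushes the region \(r<R_2\) into the annulus \(R_1<r'<R_2\); this is the "linear coordinate mapping in the radial coordinate" referred to in the Developments section below.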
Developments Developments in this field focus on advances in transformation optics research. Transformation optics is the foundation for exploring a diverse set of theoretical, numerical, and experimental developments, involving the perspectives of the physics and engineering communities. The multi-disciplinary perspectives for inquiry and the design of materials develop understanding of their behaviors, properties, and potential applications for this field. If a coordinate transformation can be derived or described, a ray of light (in the optical limit) will follow lines of a constant coordinate. There are constraints on the transformations, as listed in the references. In general, however, a particular goal can be accomplished using more than one transformation. The classic cylindrical cloak (first both simulated and demonstrated experimentally) can be created with many transformations. The simplest, and most often used, is a linear coordinate mapping in the radial coordinate. There is significant ongoing research into determining advantages and disadvantages of particular types of transformations, and what attributes are desirable for realistic transformations. One example of this is the broadband carpet cloak: the transformation used was quasi-conformal. Such a transformation can yield a cloak that uses non-extreme values of permittivity and permeability, unlike the classic cylindrical cloak, which required some parameters to vary towards infinity at the inner radius of the cloak. General coordinate transformations can be derived which compress or expand space, bend or twist space, or even change the topology (e.g. by mimicking a wormhole). Much current interest involves designing invisibility cloaks, event cloaks, field concentrators, or beam-bending waveguides. Mimicking celestial mechanics The interactions of light and matter with spacetime, as predicted by general relativity, can be studied using the new type of artificial optical materials that feature extraordinary abilities to bend light (which is actually electromagnetic radiation). This research creates a link between the newly emerging field of artificial optical metamaterials and that of celestial mechanics, thus opening a new possibility to investigate astronomical phenomena in a laboratory setting. The recently introduced new class of specially designed optical media can mimic the periodic, quasi-periodic and chaotic motions observed in celestial objects that have been subjected to gravitational fields. Hence, a new class of metamaterials was introduced under the nomenclature "continuous-index photon traps" (CIPTs). CIPTs have applications as optical cavities. As such, CIPTs can control, slow and trap light in a manner similar to celestial phenomena such as black holes, strange attractors, and gravitational lenses. A composite of air and the dielectric gallium indium arsenide phosphide (GaInAsP) operates in the infrared spectral range and features a high refractive index with low absorption. This opens an avenue to investigate light phenomena that imitate orbital motion, strange attractors and chaos in a controlled laboratory environment by merging the study of optical metamaterials with classical celestial mechanics. If a metamaterial could be produced that did not have high intrinsic loss and a narrow frequency range of operation, then it could be employed as a medium to simulate light motion in a curved spacetime vacuum. 
Such a proposal has been brought forward, and metamaterials have become prospective media in this type of study. The classical optical-mechanical analogy renders possible the study of light propagation in homogeneous media as an accurate analogy to the motion of massive bodies, and light, in gravitational potentials. A direct mapping of the celestial phenomena is accomplished by observing photon motion in a controlled laboratory environment. The materials could facilitate periodic, quasi-periodic and chaotic light motion inherent to celestial objects subjected to complex gravitational fields. Twisting the optical metamaterial effectively transforms its "space" into new coordinates. The light that travels in real space will be curved in the twisted space, as applied in transformational optics. This effect is analogous to starlight that moves through a nearby gravitational field and experiences curved spacetime, or a gravitational lensing effect. This analogy between classical electromagnetism and general relativity shows the potential of optical metamaterials to study relativity phenomena such as gravitational lensing. Observations of such celestial phenomena by astronomers can sometimes take a century of waiting. Chaos in dynamic systems is observed in areas as diverse as molecular motion, population dynamics and optics. In particular, a planet around a star can undergo chaotic motion if a perturbation, such as another large planet, is present. However, owing to the large spatial distances between the celestial bodies, and the long periods involved in the study of their dynamics, the direct observation of chaotic planetary motion has been a challenge. The use of the optical-mechanical analogy may enable such studies to be accomplished in a bench-top laboratory setting at any prescribed time. The study also points toward the design of novel optical cavities and photon traps for application in microscopic devices and laser systems. For related information see: Chaos theory and General relativity. Producing black holes with metamaterials Matter propagating in a curved spacetime is similar to electromagnetic wave propagation in a curved space and in an inhomogeneous metamaterial, as stated in the previous section. Hence a black hole can possibly be simulated using electromagnetic fields and metamaterials. In July 2009 a metamaterial structure forming an effective black hole was theorized, and numerical simulations showed highly efficient light absorption. The first experimental demonstration of an electromagnetic black hole at microwave frequencies occurred in October 2009. The proposed black hole was composed of non-resonant and resonant metamaterial structures, which can efficiently absorb electromagnetic waves coming from all directions due to the local control of electromagnetic fields. It was constructed as a thin cylinder 21.6 centimeters in diameter, comprising 60 concentric rings of metamaterials. This structure created a gradient index of refraction, necessary for bending light in this way. However, it was characterized as an artificial, inferior substitute for a real black hole. The characterization was justified by an absorption of only 80% in the microwave range, and by the fact that it has no internal source of energy. It is simply a light absorber. The light absorption capability could be beneficial if it could be adapted to technologies such as solar cells. However, the device is limited to the microwave range. 
Also in 2009, transformation optics were employed to mimic a black hole of Schwarzschild form. Similar properties of photon sphere were also found numerically for the metamaterial black hole. Several reduced versions of the black hole systems were proposed for easier implementations. MIT computer simulations by Fung along with lab experiments are designing a metamaterial with a multilayer sawtooth structure that slows and absorbs light over a wide range of wavelength frequencies, and at a wide range of incident angles, at 95% efficiency. This has an extremely wide window for colors of light. Multi-dimensional universe Engineering optical space with metamaterials could be useful to reproduce an accurate laboratory model of the physical multiverse. "This ‘metamaterial landscape’ may include regions in which one or two spatial dimensions are compactified." Metamaterial models appear to be useful for non-trivial models such as 3D de Sitter space with one compactified dimension, 2D de Sitter space with two compactified dimensions, 4D de Sitter dS4, and anti-de Sitter AdS4 spaces. Gradient index lensing Transformation optics is employed to increase capabilities of gradient index lenses. Conventional optical limitations Optical elements (lenses) perform a variety of functions, ranging from image formation, to light projection or light collection. The performance of these systems is frequently limited by their optical elements, which dominate system weight and cost, and force tradeoffs between system parameters such as focal length, field of view (or acceptance angle), resolution, and range. Conventional lenses are ultimately limited by geometry. Available design parameters are a single index of refraction (n) per lens element, variations in the element surface profile, including continuous surfaces (lens curvature) and/or discontinuous surfaces (diffractive optics). Light rays undergo refraction at the surfaces of each element, but travel in straight lines within the lens. Since the design space of conventional optics is limited to a combination of refractive index and surface structure, correcting for aberrations (for example through the use of achromatic or diffractive optics) leads to large, heavy, complex designs, and/or greater losses, lower image quality, and manufacturing difficulties. GRIN lenses Gradient index lenses (or GRIN lenses) as the name implies, are optical elements whose index of refraction varies within the lens. Control of the internal refraction allows the steering of light in curved trajectories through the lens. GRIN optics thus increase the design space to include the entire volume of the optical elements, providing the potential for dramatically reduced size, weight, element count, and assembly cost, as well as opening up new space to trade between performance parameters. However, past efforts to make large aperture GRIN lenses have had limited success due to restricted refractive index change, poor control over index profiles, and/or severe limitations in lens diameter. Recent advances Recent steps forward in material science have led to at least one method for developing large (>10 mm) GRIN lenses with 3-dimensional gradient indexes. There is a possibility of adding expanded deformation capabilities to the GRIN lenses. This translates into controlled expansion, contraction, and shear (for variable focus lenses or asymmetric optical variations). These capabilities have been demonstrated. 
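As a brief illustration of how a gradient index steers light along curved trajectories (standard GRIN-optics background, not taken from this article), a paraxial ray in a medium with index profile \(n(x)\) obeys
\[
\frac{d^{2}x}{dz^{2}} \;\approx\; \frac{1}{n}\,\frac{\partial n}{\partial x},
\]
and for the common parabolic profile \(n(x)=n_0\left(1-\tfrac{1}{2}g^{2}x^{2}\right)\) the solutions are sinusoids
\[
x(z) \;=\; x_0\cos(gz)+\frac{x_0'}{g}\sin(gz),
\]
so all paraxial rays launched from a point refocus after every half pitch \(\pi/g\). This is the curved-path focusing, controlled entirely by the internal index gradient, that the GRIN lens discussion above relies on.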
Additionally, recent advances in transformation optics and computational power provide a unique opportunity to design, assemble and fabricate elements in order to advance the utility and availability of GRIN lenses across a wide range of optics-dependent systems, defined by needs. A possible future capability could be to further advance lens design methods and tools, which are coupled to enlarged fabrication processes. Battlefield applications Transformation optics has potential applications for the battlefield. The versatile properties of metamaterials can be tailored to fit almost any practical need, and transformation optics shows that space for light can be bent in almost any arbitrary way. This is perceived as providing new capabilities to soldiers in the battlefield. For battlefield scenarios benefits from metamaterials have both short term and long-term impacts. For example, determining whether a cloud in the distance is harmless or an aerosol of enemy chemical or biological warfare is very difficult to assess quickly. However, with the new metamaterials being developed, the ability exists to see things smaller than the wavelength of light – something which has yet to be achieved in the far field. Using metamaterials in the creation of a new lens may allow soldiers to be able to see pathogens and viruses that are impossible to detect with any visual device. Harnessing subwavelength capabilities then allow for other advancements which appear to be beyond the battlefield. All kinds of materials could be manufactured with nano-manufacturing, which could go into electronic and optical devices from night vision goggles to distance sensors to other kinds of sensors. Longer-term views include the possibility for cloaking materials, which would provide "invisibility" by redirecting light around a cylindrical shape. See also Acoustic metamaterials Chirality (electromagnetism) Metamaterial Metamaterial absorber Metamaterial antennas Metamaterial cloaking Negative index metamaterials Nonlinear metamaterials Photonic metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Superlens Theories of cloaking Tunable metamaterials Books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations References Further reading and general references Electromagnetism Metamaterials
Transformation optics
[ "Physics", "Materials_science", "Engineering" ]
2,995
[ "Electromagnetism", "Physical phenomena", "Metamaterials", "Materials science", "Fundamental interactions" ]
32,809,163
https://en.wikipedia.org/wiki/Henry%20Clay%20Furnace
Henry Clay Furnace is a historic iron furnace located in Cooper's Rock State Forest near Cheat Neck, Monongalia County, West Virginia. It was built between 1834 and 1836 by Leonard Lamb. It is a 30-foot-square, 30-foot-high stone structure in the shape of a truncated pyramid. It was the first steam-powered blast furnace to be built in western Virginia and had a capacity to produce 4 tons of pig iron per day. In 1839 it was sold to the Ellicott Brothers, who also purchased the Jackson Ironworks at the same time. They made significant improvements, such as connecting it via tram lines to their ironworks at Ices Ferry. It supported a community of approximately 100 people (some sources say as many as 500 people with 100 dwellings). The small settlement included a school, a store and a church. No structures apart from the furnace exist today. It is believed to have ceased production in 1847–48 when the Ellicotts' business failed. The furnace may have continued to operate until 1868, when all the Cheat River iron works ceased production. It is among the ten or more abandoned iron furnaces still existing in northern West Virginia. It was listed on the National Register of Historic Places in 1970. References Industrial buildings and structures on the National Register of Historic Places in West Virginia Industrial buildings completed in 1836 Buildings and structures in Monongalia County, West Virginia National Register of Historic Places in Monongalia County, West Virginia Industrial furnaces Ironworks and steel mills in the United States
Henry Clay Furnace
[ "Chemistry" ]
306
[ "Metallurgical processes", "Industrial furnaces" ]
32,814,049
https://en.wikipedia.org/wiki/Flux%20switching%20alternator
A flux switching alternator is a form of high-speed alternator, an AC electrical generator, intended for direct drive by a turbine. They are simple in design with the rotor containing no coils or magnets, making them rugged and capable of high rotation speeds. This makes them suitable for their only widespread use, in guided missiles. Guided missiles Guided missiles require a source of electrical power during flight. This is needed to power the guidance and fuzing systems, possibly also the high-power loads of an active radar seeker (i.e. a transmitter) and rarely the missile's control surfaces. Control surface actuators for a high-speed missile require a high force and so these are usually powered by some non-electric means, such as tapping propellant exhaust gas from the missile's motor. Rare exceptions where electrically powered control surfaces are used are mostly medium-range subsonic naval missiles, e.g. Exocet, Harpoon and Martel. The total load varies for different missiles between around 100W to several kW. The electrical supply for a missile must be reliable, particularly after long storage. Depending on the missile type, it may also be required to start delivering power almost immediately after start-up, or even before launch to allow gyroscopes to be accelerated to speed, and to provide power for varying lengths of time. Small anti-tank or air-to-air missiles may only require power for a few seconds of flight. Others, such as tactical missiles or ICBMs, may require power for several minutes. Turbojet-powered cruise missiles have the longest flight times (being long-ranged, yet also slowest in flight); however, these also have engines that are capable of driving a more conventional generator. Two technologies are used in practice to power missiles: batteries and generators. The batteries used are usually esoteric types rarely found outside missiles, such as silver-zinc or thermal batteries. The generators used are simple high-speed generators, driven directly by a turbine rotor that is powered by either the rocket motor's exhaust, or else a dedicated gas generator. Alternator principles The generator is required to be rugged and capable of very high speeds, as it is driven at the turbine's speed, without reduction gearing. The rotor must thus be simple in design and there can also be no sliding contacts to sliprings or other brushgear. Although the power requirement for the missile may be a largely DC supply, the AC alternator and its need for a rectifier is still favoured for its mechanical robustness. Unusually, both the field coils and the armature winding are carried on the fixed stator. The rotor is a simple toothed wheel, with no windings or electrical components. In the simplest case, the stator has four poles and the field coils and armature windings are arranged alternately around the stator between the poles. The field magnets are arranged with their poles opposing each other, i.e. one armature is between the two North poles, one between the two South. The rotor is a simple toothed disc of magnetic, but unmagnetized, iron. As it rotates between poles, it links the flux between a single pair of opposing poles. The magnetic circuit of the stator is thus a pair of triangles, each containing a field, an armature and a shared path through the rotor. Flux passes in each circuit from one field and through one armature. 
As the rotor turns, the other triangular path is formed, switching the flux from one pair of field and armature to the other and also reversing the direction of the flux in the armature coil. It is this reversal of flux that produces the alternating emf. The rotor must bridge the path between opposing pole pieces, but must never bridge all four simultaneously. It must thus have an even number of poles, but this must not be divisible by four. Practical rotors use six poles. As the rotation of one tooth pitch is sufficient to generate one AC cycle, the output frequency is thus the product of the rotation speed (in revs. per second) and the number of rotor teeth. Early AC systems used the standard frequency of 400 Hz, which limited alternators to two pole rotors and a maximum rotation speed of 24,000 rpm. The use of higher frequencies, from multi-pole rotors, was already recognised as a future means to achieve greater power for the same weight. The Seaslug missile alternator used a speed of 24,000 rpm to produce 1.5 kVA of electricity at 2,400 Hz. The field may be supplied by either permanent magnets or by field coils. Regulation of the output voltage is achieved by controlling the current through a winding, either the field coil, or a control winding around a permanent magnet. Alternator drive Propulsion motor The simplest solution taps off some hot exhaust gas from the propulsion motor and routes it instead through the generator turbine. This gas may also be used to power the control surface actuators, as was used for Vigilant. This is one of the simplest and lightest electrical power supplies available for a missile. Bleeding exhaust gas from the motor increases the amount of fuel required, but this effect is trivial, around 1%. The exhaust is hot, possibly as hot as 2,400 °C, and at pressures varying from 2,600 psi at the boost phase to 465 psi during sustain. A more serious drawback is the amount of sooty particulates in the exhaust, which requires a filter to keep them from the turbine. As such filters may themselves clog, this method is best suited for short flight durations. Gas generator A gas generator is a chemical device that burns to provide a supply of gas under pressure. Although still hot, comparable to rocket motor exhaust, this gas can be cooler and cleaner of particulates than rocket efflux. Both solid and liquid-fuelled gas generators may be used. Advantages of a gas generator drive, rather than motor exhaust are: Cleaner, cooler exhaust, which is less likely to cause turbine problems. Ability to start the gas generator before launching, allowing time for gyroscopes to be spun up to speed, power for control surfaces etc. Ability to continue power generation after the motor has burned out, during a ballistic coast phase. Development history The first alternators of this type began with the first missiles requiring considerable electric power, those using radar seekers (initially semi-active radar homing). Development of these began in the late 1940s, with air-to-air missiles such as Sparrow. Sparrow was a relatively large missile with an airframe 8 inches in diameter. By the late 1950s, turbine-driven alternators were also being used in lightweight anti-tank missiles such as Vigilant. Vigilant has a body diameter of 4 inches, including a  inch central jetpipe. The alternator and turbine were fitted into a remaining annular space of only 1 inches. Permanent magnet magnetos An alternative high-speed generator is the permanent magnet magneto. 
Achieving the output needed depends on the use of modern rare-earth magnets, such as samarium cobalt or neodymium. The output coil is formed as a stator, with axial magnetic flux from a rotating multi-pole ring magnet. See also Alexanderson alternator Variable reluctance sensor Switched reluctance motor References Alternators Electrical generators Missile technology
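A short worked check of the frequency relation stated in the article above (output frequency equals rotation speed in revolutions per second times the number of rotor teeth), assuming the six-tooth rotor the article describes as practical:
\[
f \;=\; \frac{N_{\text{rpm}}}{60}\times n_{\text{teeth}} \;=\; \frac{24\,000}{60}\times 6 \;=\; 400\times 6 \;=\; 2\,400\ \text{Hz},
\]
which matches the 2,400 Hz quoted for the Seaslug alternator running at 24,000 rpm, and is consistent with the 400 Hz figure quoted for two-pole rotors at the same speed.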
Flux switching alternator
[ "Physics", "Technology" ]
1,520
[ "Physical systems", "Electrical generators", "Machines" ]
32,816,134
https://en.wikipedia.org/wiki/List%20of%20Folding%40home%20cores
The distributed-computing project Folding@home uses scientific computer programs, referred to as "cores" or "fahcores", to perform calculations. Folding@home's cores are based on modified and optimized versions of molecular simulation programs for calculation, including TINKER, GROMACS, AMBER, CPMD, SHARPEN, ProtoMol, and Desmond. These variants are each given an arbitrary identifier (Core xx). While the same core can be used by various versions of the client, separating the core from the client enables the scientific methods to be updated automatically as needed without a client update. Active cores These cores listed below are currently used by the project. GROMACS Core a7 Available for Windows, Linux, and macOS, use Advanced Vector Extensions if available, for a significant speed improvement. Core a8 Available for Windows, Linux, macOS and ARM, uses Gromacs 2020.5 GPU Cores for the Graphics Processing Unit use the graphics chip of modern video cards to do molecular dynamics. The GPU Gromacs core is not a true port of Gromacs, but rather key elements from Gromacs were taken and enhanced for GPU capabilities. GPU3 These are the third generation GPU cores, and are based on OpenMM, Pande Group's own open library for molecular simulation. Although based on the GPU2 code, this adds stability and new capabilities. core 22 (last core to use old style numbering convention) v0.0.16 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 7.5.1 v0.0.17 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 7.5.1 v0.0.18 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 7.6.0 v0.0.20 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 7.7.0, which provides performance improvements and many new science features core 23 v8.0.3 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 8.0.0, which provides performance improvements, particularly to CUDA, and many new science features core 24 v8.1.3 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 8.1.1, which includes some major bug fixes. core 25 Not publicly released core 26 v8.2 Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL and CUDA, if available. It uses OpenMM 8.2, which includes some major bug fixes, including OpenCL and GLIBC. Inactive cores These cores are not currently used by the project, as they are either retired due to becoming obsolete, or are not yet ready for general release. TINKER TINKER is a computer software application for molecular dynamics simulation with a complete and general package for molecular mechanics and molecular dynamics, with some special features for biopolymers. Tinker core (Core 65) An unoptimized uniprocessor core, this was officially retired as the AMBER and Gromacs cores perform the same tasks much faster. This core was available for Windows, Linux, and Macs. GROMACS GroGPU (Core 10) Available for ATI series 1xxx GPUs running under Windows. Although mostly Gromacs based, parts of the core were rewritten. This core was retired as of June 6, 2008 due to a move to the second generation of the GPU clients. Gro-SMP (Core a1) Available for Windows x86, Mac x86, and Linux x86/64 clients, this was the first generation of the SMP variant, and used MPI for Inter-process communication. 
This core was retired due to a move to a thread-based SMP2 client. GroCVS (Core a2) Available only to x86 Macs and x86/64 Linux, this core is very similar to Core a1, as it uses much of the same core base, including use of MPI. However, this core utilizes more recent Gromacs code, and supports more features such as extra-large work units. Officially retired due to a move to a thread-based SMP2 client. Gro-PS3 Also known as the SCEARD core, this variant was for the PlayStation 3 game system, which supported a Folding@Home client until it was retired in November 2012. This core performed implicit solvation calculations like the GPU cores, but was also capable of running explicit solvent calculations like the CPU cores, and took the middle ground between the inflexible high-speed GPU cores and flexible low-speed CPU cores. This core used SPE cores for optimization, but did not support SIMD. Gromacs (Core 78) This is the original Gromacs core, and is currently available for uniprocessor clients only, supporting Windows, Linux, and macOS. Gromacs 33 (Core a0) Available to Windows, Linux, and macOS uniprocessor clients only, this core uses the Gromacs 3.3 codebase, allowing a broader range of simulations to be run. Gromacs SREM (Core 80) This core uses the Serial Replica Exchange Method, which is also known as REMD (Replica Exchange Molecular Dynamics) or GroST (Gromacs Serial replica exchange with Temperatures) in its simulations, and is available for Windows and Linux uniprocessor clients only. GroSimT (Core 81) This core performs simulated tempering, the basic idea of which is to enhance sampling by periodically raising and lowering the temperature. This may allow Folding@home to more efficiently sample the transitions between folded and unfolded conformations of proteins. Available for Windows and Linux uniprocessor clients only. DGromacs (Core 79) Available for uniprocessor clients, this core uses SSE2 processor optimization where supported and is capable of running on Windows, Linux, and macOS. DGromacsB (Core 7b) Distinct from Core 79 in that it has several scientific additions. Initially released only to the Linux platform in August 2007, it will eventually be available for all platforms. DGromacsC (Core 7c) Very similar to Core 79, this core was initially released for Linux and Windows in April 2008 and is available for Windows, Linux, and macOS uniprocessor clients. GB Gromacs (Core 7a) Available for all uniprocessor clients on Windows, Linux, and macOS. GB Gromacs (Core a4) Available for Windows, Linux, and macOS, this core was originally released in early October 2010, and as of February 2010 uses the latest version of Gromacs, v4.5.3. SMP2 (Core a3) The next generation of the SMP cores, this core uses threads instead of MPI for inter-process communication, and is available for Windows, Linux, and macOS. SMP2 bigadv (Core a5) Similar to a3, but this core is specifically designed to run larger-than-normal simulations. SMP2 bigadv (Core a6) A newer version of the a5 core; the project ended in January 2015. CPMD Short for Car–Parrinello Molecular Dynamics, this core performs ab-initio quantum mechanical molecular dynamics. Unlike classical molecular dynamics calculations which use a force field approach, CPMD includes the motion of electrons in the calculations of energy, forces and motion. Quantum chemical calculations can yield a very reliable potential energy surface, and can naturally incorporate multi-body interactions. 
QMD (Core 96) This is a double-precision variant for Windows and Linux uniprocessor clients. This core is currently "on hold" due to the main QMD developer, Young Min Rhee, graduating in 2006. This core can use a substantial amount of memory, and was only available to machines that chose to "opt in". SSE2 optimization on Intel CPUs is supported. Due to licensing issues involving Intel libraries and SSE2, QMD Work Units were not assigned to AMD CPUs. SHARPEN SHARPEN Core In early 2010 Vijay Pande said "We've put SHARPEN on hold for now. No ETA to give, sorry. Pushing it further depends a lot on the scientific needs at the time." This core uses a different format from standard F@H cores, in that there is more than one "Work Unit" (using the normal definition) in each work packet sent to clients. Desmond The software for this core was developed at D. E. Shaw Research. Desmond performs high-speed molecular dynamics simulations of biological systems on conventional computer clusters. The code uses novel parallel algorithms and numerical techniques to achieve high performance on platforms containing a large number of processors, but may also be executed on a single computer. Desmond and its source code are available without cost for non-commercial use by universities and other not-for-profit research institutions. Desmond Core Possibly available for Windows x86 and Linux x86/64, this core is currently in development. AMBER Short for Assisted Model Building with Energy Refinement, AMBER is a family of force fields for molecular dynamics, as well as the name for the software package that simulates these force fields. AMBER was originally developed by Peter Kollman at the University of California, San Francisco, and is currently maintained by professors at various universities. The double-precision AMBER core is not currently optimized with SSE or SSE2, but AMBER is significantly faster than the Tinker core and adds some functionality which cannot be performed using Gromacs cores. PMD (Core 82) Available for Windows and Linux uniprocessor clients only. ProtoMol ProtoMol is an object-oriented, component-based framework for molecular dynamics (MD) simulations. ProtoMol offers high flexibility, easy extensibility and maintenance, and high performance, including parallelization. In 2009, the Pande Group was working on a complementary new technique called Normal Mode Langevin Dynamics which had the potential to greatly speed up simulations while maintaining the same accuracy. ProtoMol Core (Core b4) Available to Linux x86/64 and x86 Windows. GPU GPU2 These are the second generation GPU cores. Unlike the retired GPU1 cores, these variants are for ATI CAL-enabled 2xxx/3xxx or later series and NVIDIA CUDA-enabled NVIDIA 8xxx or later series GPUs. GPU2 (Core 11) Available for x86 Windows clients only. Supported until approximately September 1, 2011 due to AMD/ATI dropping support for the utilized Brook programming language and moving to OpenCL. This forced F@h to rewrite its ATI GPU core code in OpenCL, the result of which is Core 16. GPU2 (Core 12) Available for x86 Windows clients only. GPU2 (Core 13) Available for x86 Windows clients only. GPU2 (Core 14) Available for x86 Windows clients only, this core was officially released on March 2, 2009. GPU3 These are the third generation GPU cores, and are based on OpenMM, Pande Group's own open library for molecular simulation. Although based on the GPU2 code, this adds stability and new capabilities. GPU3 (core 15) Available to x86 Windows only. 
GPU3 (core 16) Available to x86 Windows only. Released alongside the new v7 client, this is a rewrite of Core 11 in OpenCL. GPU3 (core 17) Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL. Much better performance because of OpenMM 5.1 GPU3 (core 18) Available to Windows for AMD and NVIDIA GPUs using OpenCL. This core was developed to address some critical scientific issues in Core17 and uses the latest technology from OpenMM 6.0.1. There are currently issues regarding the stability and performance of this core on some AMD and NVIDIA Maxwell GPUs. This is why assignment of work units running on this core has been temporarily stopped for some GPUs. GPU3 (core 21) Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL. It uses OpenMM 6.2 and fixes the Core 18 AMD/NVIDIA performance issues. References External links Main F@h Core FAQ GROMACS Core FAQ AMBER Core FAQ PROTOMOL Core FAQ QMD Core FAQ Molecular dynamics software Molecular modelling software Simulation software Computational biology Mathematical and theoretical biology Computational chemistry Science software for macOS Science software for Windows Science software for Linux PlayStation 3 software
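The GPU3 and later GPU cores described above are built on OpenMM, which can also be driven directly from Python. The following sketch is only an illustration of the kind of molecular-dynamics run these cores perform; it is not Folding@home or FahCore code, and the input file name, force-field choice, and run length are arbitrary placeholder assumptions.

```python
# Minimal OpenMM molecular-dynamics sketch (illustrative only; not FahCore code).
# Assumes a recent OpenMM (>= 7.6) and a placeholder input file "protein.pdb".
from openmm import LangevinMiddleIntegrator
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

pdb = PDBFile("protein.pdb")                                   # load a solvated structure
forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")

# Build the system: particle-mesh Ewald electrostatics, hydrogen bonds constrained.
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=PME,
                                 nonbondedCutoff=1.0 * nanometer,
                                 constraints=HBonds)

# Langevin dynamics at 300 K with a 2 fs time step.
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond, 0.002 * picoseconds)

simulation = Simulation(pdb.topology, system, integrator)       # picks the fastest platform
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(10_000)                                         # run 10,000 MD steps
```

On machines with a supported GPU, OpenMM selects its CUDA or OpenCL platform automatically, which loosely mirrors how Core 22 and later run on either CUDA or OpenCL.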
List of Folding@home cores
[ "Chemistry", "Mathematics", "Biology" ]
2,720
[ "Molecular dynamics software", "Molecular modelling software", "Computational chemistry software", "Mathematical and theoretical biology", "Applied mathematics", "Theoretical chemistry", "Computational chemistry", "Molecular modelling", "Molecular dynamics", "Computational biology" ]
3,734,039
https://en.wikipedia.org/wiki/ANOVA%20gauge%20R%26R
ANOVA gauge repeatability and reproducibility is a measurement systems analysis technique that uses an analysis of variance (ANOVA) random effects model to assess a measurement system. The evaluation of a measurement system is not limited to gauges; it applies to all types of measuring instruments, test methods, and other measurement systems. Purpose ANOVA Gage R&R measures the amount of variability induced in measurements by the measurement system itself, and compares it to the total variability observed to determine the viability of the measurement system. There are several factors affecting a measurement system, including: Measuring instruments, the gage or instrument itself and all mounting blocks, supports, fixtures, load cells, etc. The machine's ease of use, sloppiness among mating parts, and "zero" blocks are examples of sources of variation in the measurement system. In systems making electrical measurements, sources of variation include electrical noise and analog-to-digital converter resolution. Operators (people), the ability and/or discipline of a person to follow the written or verbal instructions. Test methods, how the devices are set up, the test fixtures, how the data is recorded, etc. Specification, the measurement is reported against a specification or a reference value. The range or the engineering tolerance does not affect the measurement, but is an important factor in evaluating the viability of the measurement system. Parts or specimens (what is being measured), some items are easier to measure than others. A measurement system may be good for measuring steel block length but not for measuring rubber pieces, for example. There are two important aspects of a Gage R&R: Repeatability: the variation in measurements taken by a single person or instrument on the same or replicate item and under the same conditions. Reproducibility: the variation induced when different operators, instruments, or laboratories measure the same or replicated specimen. It is important to understand the difference between accuracy and precision to understand the purpose of Gage R&R. Gage R&R addresses only the precision of a measurement system. It is common to examine the P/T ratio, which is the ratio of the precision of a measurement system to the (total) tolerance of the manufacturing process of which it is a part. If the P/T ratio is low, the impact on product quality of variation due to the measurement system is small. If the P/T ratio is larger, it means the measurement system is "eating up" a large fraction of the tolerance, in that parts without a sufficient margin inside the tolerance may be measured as acceptable by the measurement system. Generally, a P/T ratio less than 0.1 indicates that the measurement system can reliably determine whether any given part meets the tolerance specification. A P/T ratio greater than 0.3 suggests that unacceptable parts will be measured as acceptable (or vice versa) by the measurement system, making the system inappropriate for the process for which it is being used. ANOVA Gage R&R is an important tool within the Six Sigma methodology, and it is also a requirement for a production part approval process (PPAP) documentation package. Examples of Gage R&R studies can be found in part 1 of Czitrom & Spagon. There is no universal criterion for the minimum sample requirements of the GRR matrix; it is a matter for the quality engineer to assess risk depending on how critical the measurement is and how costly the measurements are. 
The "10×2×2" (ten parts, two operators, two repetitions) is an acceptable sampling for some studies, although it has very few degrees of freedom for the operator component. Several methods of determining the sample size and degree of replication are used. Calculating variance components In one common crossed study, 10 parts might each be measured two times by two different operators. The ANOVA then allows the individual sources of variation in the measurement data to be identified; the part-to-part variation, the repeatability of the measurements, the variation due to different operators; and the variation due to part by operator interaction. The calculation of variance components and standard deviations using ANOVA is equivalent to calculating variance and standard deviation for a single variable but it enables multiple sources of variation to be individually quantified which are simultaneously influencing a single data set. When calculating the variance for a data set the sum of the squared differences between each measurement and the mean is calculated and then divided by the degrees of freedom (n – 1). The sums of the squared differences are calculated for measurements of the same part, by the same operator, etc., as given by the below equations for the part (SSPart), the operator (SSOp), repeatability (SSRep) and total variation (SSTotal). where nOp is the number of operators, nRep is the number of replicate measurements of each part by each operator, is the number of parts, x̄ is the grand mean, x̄i.. is the mean for each part, x̄·j· is the mean for each operator, x<sub>ijk'''</sub> is each observation and x̄ij is the mean for each factor level. When following the spreadsheet method of calculation the n terms are not explicitly required since each squared difference is automatically repeated across the rows for the number of measurements meeting each condition. The sum of the squared differences for part by operator interaction (SS''Part · Op) is the residual variation given by See also Measurement uncertainty Random effects model References Six Sigma Measurement Analysis of variance
ANOVA gauge R&R
[ "Physics", "Mathematics" ]
1,117
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
3,735,260
https://en.wikipedia.org/wiki/Tetrahedral-octahedral%20honeycomb
The tetrahedral-octahedral honeycomb, or alternated cubic honeycomb, is a quasiregular space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of alternating regular octahedra and tetrahedra in a ratio of 1:2. Other names include half cubic honeycomb, half cubic cellulation, or tetragonal disphenoidal cellulation. John Horton Conway calls this honeycomb a tetroctahedrille, and its dual a dodecahedrille. R. Buckminster Fuller combines the two words octahedron and tetrahedron into octet truss, a rhombohedron consisting of one octahedron (or two square pyramids) and two opposite tetrahedra. It is vertex-transitive with 8 tetrahedra and 6 octahedra around each vertex. It is edge-transitive with 2 tetrahedra and 2 octahedra alternating on each edge. It is part of an infinite family of uniform honeycombs called alternated hypercubic honeycombs, formed as an alternation of a hypercubic honeycomb and being composed of demihypercube and cross-polytope facets. It is also part of another infinite family of uniform honeycombs called simplectic honeycombs. In this case of 3-space, the cubic honeycomb is alternated, reducing the cubic cells to tetrahedra, and the deleted vertices create octahedral voids. As such it can be represented by an extended Schläfli symbol h{4,3,4} as containing half the vertices of the {4,3,4} cubic honeycomb. There is a similar honeycomb called the gyrated tetrahedral-octahedral honeycomb which has layers rotated 60 degrees so that half the edges have neighboring rather than alternating tetrahedra and octahedra. The tetrahedral-octahedral honeycomb can have its symmetry doubled by placing tetrahedra on the octahedral cells, creating a nonuniform honeycomb consisting of tetrahedra and octahedra (as triangular antiprisms). Its vertex figure is an order-3 truncated triakis tetrahedron. This honeycomb is the dual of the triakis truncated tetrahedral honeycomb, with triakis truncated tetrahedral cells. Cartesian coordinates For an alternated cubic honeycomb, with edges parallel to the axes and with an edge length of 1, the Cartesian coordinates of the vertices are the points (i, j, k) for all integer values of i, j, k with i + j + k even. Symmetry There are two reflective constructions and many alternated cubic honeycomb constructions; examples: Alternated cubic honeycomb slices The alternated cubic honeycomb can be sliced into sections, where new square faces are created from the inside of the octahedron. Each slice will contain upward- and downward-facing square pyramids and tetrahedra sitting on their edges. A second slice direction needs no new faces and includes alternating tetrahedra and octahedra. This slab honeycomb is a scaliform honeycomb rather than uniform because it has nonuniform cells. Projection by folding The alternated cubic honeycomb can be orthogonally projected into the planar square tiling by a geometric folding operation that maps one pair of mirrors into each other. The projection of the alternated cubic honeycomb creates two offset copies of the square tiling vertex arrangement of the plane: A3/D3 lattice Its vertex arrangement represents an A3 lattice or D3 lattice. This lattice is known as the face-centered cubic lattice in crystallography and is also referred to as the cubic close packed lattice as its vertices are the centers of a close-packing with equal spheres that achieves the highest possible average density. The tetrahedral-octahedral honeycomb is the 3-dimensional case of a simplectic honeycomb. 
Its Voronoi cell is a rhombic dodecahedron, the dual of the cuboctahedron vertex figure for the tet-oct honeycomb. The D3+ packing can be constructed by the union of two D3 (or A3) lattices; the analogous Dn+ packing is only a lattice for even dimensions. The kissing number is 2² = 4 (2^(n−1) for n < 8, 240 for n = 8, and 2n(n−1) for n > 8). The A3* or D3* lattice can be constructed by the union of all four A3 lattices, and is identical to the vertex arrangement of the disphenoid tetrahedral honeycomb, the dual honeycomb of the uniform bitruncated cubic honeycomb. It is also the body-centered cubic lattice, the union of two cubic honeycombs in dual positions. The kissing number of the D3* lattice is 8 and its Voronoi tessellation is a bitruncated cubic honeycomb, containing all truncated octahedral Voronoi cells. Related honeycombs C3 honeycombs The [4,3,4] Coxeter group generates 15 permutations of uniform honeycombs, 9 with distinct geometry including the alternated cubic honeycomb. The expanded cubic honeycomb (also known as the runcinated cubic honeycomb) is geometrically identical to the cubic honeycomb. B3 honeycombs The [4,31,1] Coxeter group generates 9 permutations of uniform honeycombs, 4 with distinct geometry including the alternated cubic honeycomb. A3 honeycombs This honeycomb is one of five distinct uniform honeycombs constructed by the affine A3 Coxeter group. The symmetry can be multiplied by the symmetry of rings in the Coxeter–Dynkin diagrams: Quasiregular honeycombs Cantic cubic honeycomb The cantic cubic honeycomb, cantic cubic cellulation or truncated half cubic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of truncated octahedra, cuboctahedra and truncated tetrahedra in a ratio of 1:1:2. Its vertex figure is a rectangular pyramid. John Horton Conway calls this honeycomb a truncated tetraoctahedrille, and its dual half oblate octahedrille. Symmetry It has two different uniform constructions. The construction can be seen with alternately colored truncated tetrahedra. Related honeycombs It is related to the cantellated cubic honeycomb. Rhombicuboctahedra are reduced to truncated octahedra, and cubes are reduced to truncated tetrahedra. Runcic cubic honeycomb The runcic cubic honeycomb or runcic cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of rhombicuboctahedra, cubes, and tetrahedra in a ratio of 1:1:2. Its vertex figure is a triangular frustum, with a tetrahedron on one end, a cube on the opposite end, and three rhombicuboctahedra around the trapezoidal sides. John Horton Conway calls this honeycomb a 3-RCO-trille, and its dual quarter cubille. Quarter cubille The dual of a runcic cubic honeycomb is called a quarter cubille, with faces in 2 of 4 hyperplanes of the [4,31,1] symmetry fundamental domain. Cells can be seen as 1/4 of a dissected cube, using 4 vertices and the center. Four cells exist around 6 edges, and 3 cells around 3 edges. Related honeycombs It is related to the runcinated cubic honeycomb, with a quarter of the cubes alternated into tetrahedra, and half expanded into rhombicuboctahedra. This honeycomb can be divided on truncated square tiling planes, using the octagon centers of the rhombicuboctahedra, creating square cupolae. This scaliform honeycomb is represented by the symbol s3{2,4,4}, with Coxeter notation symmetry [2+,4,4]. 
Runcicantic cubic honeycomb The runcicantic cubic honeycomb or runcicantic cubic cellulation is a uniform space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of truncated cuboctahedra, truncated cubes and truncated tetrahedra in a ratio of 1:1:2, with a mirrored sphenoid vertex figure. It is related to the runcicantellated cubic honeycomb. John Horton Conway calls this honeycomb a f-tCO-trille, and its dual half pyramidille. Half pyramidille The dual of the runcicantic cubic honeycomb is called a half pyramidille. Faces exist in 3 of 4 hyperplanes of the [4,31,1] Coxeter group. Cells are irregular pyramids and can be seen as 1/12 of a cube, or 1/24 of a rhombic dodecahedron, each defined by three corners and the cube center. Related skew apeirohedra A related uniform skew apeirohedron exists with the same vertex arrangement, but with the triangles and squares removed. It can be seen as truncated tetrahedra and truncated cubes augmented together. Related honeycombs Gyrated tetrahedral-octahedral honeycomb The gyrated tetrahedral-octahedral honeycomb or gyrated alternated cubic honeycomb is a space-filling tessellation (or honeycomb) in Euclidean 3-space made up of octahedra and tetrahedra in a ratio of 1:2. It is vertex-uniform with 8 tetrahedra and 6 octahedra around each vertex. It is not edge-uniform. All edges have 2 tetrahedra and 2 octahedra, but some are alternating, and some are paired. It can be seen as reflective layers of this layer honeycomb: Construction by gyration This is a less symmetric version of another honeycomb, the tetrahedral-octahedral honeycomb, in which each edge is surrounded by alternating tetrahedra and octahedra. Both can be considered as consisting of layers one cell thick, within which the two kinds of cell strictly alternate. Because the faces on the planes separating these layers form a regular pattern of triangles, adjacent layers can be placed so that each octahedron in one layer meets a tetrahedron in the next layer, or so that each cell meets a cell of its own kind (the layer boundary thus becomes a reflection plane). The latter form is called gyrated. The vertex figure is called a triangular orthobicupola, compared to the tetrahedral-octahedral honeycomb, whose vertex figure, the cuboctahedron, in a lower symmetry is called a triangular gyrobicupola, so the gyro- prefix is reversed in usage. Construction by alternation The geometry can also be constructed with an alternation operation applied to a hexagonal prismatic honeycomb. The hexagonal prism cells become octahedra and the voids create triangular bipyramids which can be divided into pairs of tetrahedra of this honeycomb. This honeycomb with bipyramids is called a ditetrahedral-octahedral honeycomb. There are 3 Coxeter–Dynkin diagrams, which can be seen as 1, 2, or 3 colors of octahedra: Gyroelongated alternated cubic honeycomb The gyroelongated alternated cubic honeycomb or elongated triangular antiprismatic cellulation is a space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of octahedra, triangular prisms, and tetrahedra in a ratio of 1:2:2. It is vertex-transitive with 3 octahedra, 4 tetrahedra, and 6 triangular prisms around each vertex. It is one of 28 convex uniform honeycombs. The elongated alternated cubic honeycomb has the same arrangement of cells at each vertex, but the overall arrangement differs. 
In the elongated form, each prism meets a tetrahedron at one of its triangular faces and an octahedron at the other; in the gyroelongated form, the prism meets the same kind of deltahedron at each end. Elongated alternated cubic honeycomb The elongated alternated cubic honeycomb or elongated triangular gyroprismatic cellulation is a space-filling tessellation (or honeycomb) in Euclidean 3-space. It is composed of octahedra, triangular prisms, and tetrahedra in a ratio of 1:2:2. It is vertex-transitive with 3 octahedra, 4 tetrahedra, 6 triangular prisms around each vertex. Each prism meets an octahedron at one end and a tetrahedron at the other. It is one of 28 convex uniform honeycombs. It has a gyrated form called the gyroelongated alternated cubic honeycomb with the same arrangement of cells at each vertex. See also Architectonic and catoptric tessellation Cubic honeycomb Space frame Notes References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (2008) The Symmetries of Things, (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, Architectonic and Catoptric tessellations, p 292–298, includes all the nonprismatic forms) George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Branko Grünbaum, Uniform tilings of 3-space. Geombinatorics 4(1994), 49 - 56. Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] A. Andreini, Sulle reti di poliedri regolari e semiregolari e sulle corrispondenti reti correlative (On the regular and semiregular nets of polyhedra and on the corresponding correlative nets), Mem. Società Italiana della Scienze, Ser.3, 14 (1905) 75–129. D. M. Y. Sommerville, An Introduction to the Geometry of n Dimensions. New York, E. P. Dutton, 1930. 196 pp. (Dover Publications edition, 1958) Chapter X: The Regular Polytopes External links Architectural design made with Tetrahedrons and regular Pyramids based square.(2003) Uniform Honeycombs in 3-Space: 11-Octet 3-honeycombs Uniform 4-polytopes
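Because the vertices of the alternated cubic honeycomb are exactly the integer points (i, j, k) with i + j + k even (the face-centered cubic / D3 lattice described above), they are easy to enumerate programmatically. The sketch below is a generic illustration not tied to any geometry package; the bounding range n is an arbitrary assumption.

```python
# Enumerate vertices of the alternated cubic (tetrahedral-octahedral) honeycomb
# inside an n x n x n box: all integer (i, j, k) with i + j + k even,
# i.e. the face-centered cubic / D3 lattice points.
from itertools import product

def fcc_vertices(n: int):
    """Yield lattice points (i, j, k) with 0 <= i, j, k < n and i + j + k even."""
    for i, j, k in product(range(n), repeat=3):
        if (i + j + k) % 2 == 0:
            yield (i, j, k)

points = list(fcc_vertices(4))
print(len(points), "vertices, e.g.", points[:5])
# For even n exactly half of the n**3 integer points pass the parity test,
# matching the honeycomb's description as containing half the vertices of
# the cubic honeycomb (Schläfli symbol h{4,3,4}).
```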
Tetrahedral-octahedral honeycomb
[ "Physics" ]
3,278
[ "Uniform 4-polytopes", "Uniform polytopes", "Symmetry" ]
25,631,381
https://en.wikipedia.org/wiki/Friedel%20oscillations
Friedel oscillations, named after French physicist Jacques Friedel, arise from localized perturbations in a metallic or semiconductor system caused by a defect in the Fermi gas or Fermi liquid. Friedel oscillations are a quantum mechanical analog to electric charge screening of charged species in a pool of ions. Whereas electrical charge screening utilizes a point entity treatment to describe the make-up of the ion pool, Friedel oscillations describing fermions in a Fermi fluid or Fermi gas require a quasi-particle or a scattering treatment. Such oscillations depict a characteristic exponential decay in the fermionic density near the perturbation followed by an ongoing sinusoidal decay resembling a sinc function. In 2020, magnetic Friedel oscillations were observed on a metal surface. One-dimensional electron gas As a simple model, consider a one-dimensional electron gas in the half-space x > 0. The electrons do not penetrate into the half-space x < 0, so that the boundary condition for the electron wave function is ψ(x = 0) = 0. The oscillating wave functions that satisfy this condition are ψ_k(x) = √(2/L) sin(kx), where k is the electron wave vector and L is the length of the one-dimensional box (we use the "box" normalization here). We consider a degenerate electron gas, so that the electrons fill states with energies less than the Fermi energy E_F. Then, the electron density n(x) is calculated as n(x) = 2 Σ_k |ψ_k(x)|², where the summation is taken over all wave vectors less than the Fermi wave vector k_F, and the factor 2 accounts for the spin degeneracy. By transforming the sum over k into the integral (L/π) ∫₀^k_F dk, we obtain n(x) = (2k_F/π)[1 − sin(2k_F x)/(2k_F x)]. We see that the boundary perturbs the electron density, leading to spatial oscillations with the period π/k_F near the boundary. These oscillations decay into the bulk with the decay length also given by π/k_F. At x → ∞ the electron density equals the unperturbed density of the one-dimensional electron gas, n₀ = 2k_F/π. Scattering description The electrons that move through a metal or semiconductor behave like free electrons of a Fermi gas with a plane wave-like wave function, that is ψ_k(r) ∝ exp(i k·r). Electrons in a metal behave differently than particles in a normal gas because electrons are fermions and they obey Fermi–Dirac statistics. This behaviour means that every k-state in the gas can only be occupied by two electrons with opposite spin. The occupied states fill a sphere in the band structure k-space, up to a fixed energy level, the so-called Fermi energy. The radius of the sphere in k-space, kF, is called the Fermi wave vector. If there is a foreign atom embedded in the metal or semiconductor, a so-called impurity, the electrons that move freely through the solid are scattered by the deviating potential of the impurity. During the scattering process the initial state wave vector ki of the electron wave function is scattered to a final state wave vector kf. Because the electron gas is a Fermi gas only electrons with energies near the Fermi level can participate in the scattering process because there must be empty final states for the scattered states to jump to. Electrons that are too far below the Fermi energy EF can't jump to unoccupied states. The states around the Fermi level that can be scattered occupy a limited range of k-values or wavelengths. So only electrons within a limited wavelength range near the Fermi energy are scattered, resulting in a density modulation around the impurity of the form Δn(r) ∝ cos(2k_F r + φ)/r³ at large distances r (in three dimensions), where φ is a scattering phase shift. 
Since electric charge screening considers the mobile charges in the fluid as point entities, the concentration of these charges with respect to distance away from the point decreases exponentially. This phenomenon is governed by the Poisson–Boltzmann equation. The quantum mechanical description of a perturbation in a one-dimensional Fermi fluid is modelled by the Tomonaga–Luttinger liquid. The fermions in the fluid that take part in the screening cannot be considered as point entities; a wave vector is required to describe them. The charge density away from the perturbation is not a continuum: the fermions arrange themselves at discrete spacings away from the perturbation. This effect is the cause of the circular ripples around the impurity. N.B. Whereas classically an overwhelming number of oppositely charged particles can be observed near the charged perturbation, in the quantum mechanical scenario of Friedel oscillations periodic arrangements of oppositely charged fermions alternate with regions of like charge. In the figure to the right, two-dimensional Friedel oscillations are illustrated with an STM image of a clean surface. As the image is taken on a surface, the regions of low electron density leave the atomic nuclei ‘exposed’, which results in a net positive charge. See also Lindhard theory References External links http://gravityandlevity.wordpress.com/2009/06/02/friedel-oscillations-wherein-we-learn-that-the-electron-has-a-size/ - a simple explanation of the phenomenon Condensed matter physics
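The one-dimensional density profile derived above, n(x) = n₀[1 − sin(2k_F x)/(2k_F x)], can be evaluated numerically to see the period π/k_F of the oscillations and their decay into the bulk. The short NumPy sketch below uses an arbitrary value of k_F and is purely illustrative; it is not tied to any particular material or experiment.

```python
# Evaluate the 1D Friedel-oscillation density n(x) = n0 * (1 - sin(2 kF x) / (2 kF x))
# near a hard wall at x = 0, for an arbitrary Fermi wave vector kF.
import numpy as np

k_f = 1.0                                  # Fermi wave vector (arbitrary units)
n0 = 2.0 * k_f / np.pi                     # unperturbed 1D electron density
x = np.linspace(1e-6, 20.0, 2001)

# np.sinc(t) = sin(pi*t)/(pi*t), so sin(2 kF x)/(2 kF x) == np.sinc(2 kF x / pi)
density = n0 * (1.0 - np.sinc(2.0 * k_f * x / np.pi))

period = np.pi / k_f                       # spacing between successive maxima
print(f"bulk density n0 = {n0:.4f}, oscillation period = {period:.4f}")
print(f"density far from the wall (x = {x[-1]:.1f}): {density[-1]:.4f}")
```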
Friedel oscillations
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,065
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
6,657,332
https://en.wikipedia.org/wiki/Gromov%27s%20compactness%20theorem%20%28topology%29
In the mathematical field of symplectic topology, Gromov's compactness theorem states that a sequence of pseudoholomorphic curves in an almost complex manifold with a uniform energy bound must have a subsequence which limits to a pseudoholomorphic curve which may have nodes or (a finite tree of) "bubbles". A bubble is a holomorphic sphere which has a transverse intersection with the rest of the curve. This theorem, and its generalizations to punctured pseudoholomorphic curves, underlies the compactness results for flow lines in Floer homology and symplectic field theory. If the complex structures on the curves in the sequence do not vary, only bubbles can occur; nodes can occur only if the complex structures on the domain are allowed to vary. Usually, the energy bound is achieved by considering a symplectic manifold with compatible almost-complex structure as the target, and assuming that curves to lie in a fixed homology class in the target. This is because the energy of such a pseudoholomorphic curve is given by the integral of the target symplectic form over the curve, and thus by evaluating the cohomology class of that symplectic form on the homology class of the curve. The finiteness of the bubble tree follows from (positive) lower bounds on the energy contributed by a holomorphic sphere. References Symplectic topology Compactness theorems
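To make the energy bound referred to above explicit, for a pseudoholomorphic curve u : Σ → M with an almost-complex structure J compatible with the symplectic form ω, the energy is the symplectic area of the curve, which depends only on its homology class. In standard notation (a sketch of the usual statement, not tied to any one reference):

```latex
E(u) = \int_\Sigma u^*\omega = \big\langle [\omega],\, u_*[\Sigma] \big\rangle
```

Hence curves representing a fixed class A in H₂(M) automatically satisfy the uniform bound E(u) = ⟨[ω], A⟩, and since each bubble carries at least a fixed positive quantum of energy, only finitely many bubbles can form.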
Gromov's compactness theorem (topology)
[ "Mathematics" ]
290
[ "Compactness theorems", "Mathematical theorems", "Theorems in topology", "Topology stubs", "Topology", "Mathematical problems" ]
6,657,904
https://en.wikipedia.org/wiki/Etynodiol%20diacetate
Etynodiol diacetate, or ethynodiol diacetate, sold under the brand name Ovulen among others, is a progestin medication which is used in birth control pills. The medication is available only in combination with an estrogen. It is taken by mouth. Etynodiol diacetate is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has weak androgenic and estrogenic activity and no other important hormonal activity. The medication is a prodrug of norethisterone in the body, with etynodiol occurring as an intermediate. Etynodiol, a related compound, was discovered in 1954, and etynodiol diacetate was introduced for medical use in 1965. The combination of ethynodiol with mestranol (Ovulen) was approved for medical use in the United States in 1966. The combination of ethinylestradiol with ethynodiol (Demulen) was approved for medical use in the United States in 1970. In 2021, the combination with ethinylestradiol was the 276th most commonly prescribed medication in the United States, with more than 800,000 prescriptions. Medical uses Etynodiol diacetate is used in combination with an estrogen such as ethinylestradiol or mestranol in combined oral contraceptives for women for the prevention of pregnancy. Side effects Pharmacology Etynodiol diacetate is virtually inactive in terms of affinity for the progesterone and androgen receptors and acts as a rapidly converted prodrug of norethisterone, with etynodiol occurring as an intermediate. Upon oral administration and during first-pass metabolism in the liver, etynodiol diacetate is rapidly converted by esterases into etynodiol, which is followed by oxidation of the C3 hydroxyl group to produce norethisterone. In addition to its progestogenic activity, etynodiol diacetate has weak androgenic activity, and, unlike most progestins but similarly to norethisterone and noretynodrel, also has some estrogenic activity. The pharmacokinetics of etynodiol diacetate have been reviewed. Chemistry Etynodiol diacetate, also known as 3β-hydroxy-17α-ethynyl-19-nortestosterone 3β,17β-diacetate, 3β-hydroxynorethisterone 3β,17β-diacetate, or 17α-ethynylestr-4-ene-3β,17β-diol 3β,17β-diacetate, is a synthetic estrane steroid and a derivative of testosterone. It is specifically a derivative of 19-nortestosterone and 17α-ethynyltestosterone, or of norethisterone (17α-ethynyl-19-nortestosterone), in which the C3 ketone group has been reduced to a C3β hydroxyl group and acetate esters have been attached at the C3β and C17β positions. Etynodiol diacetate is the 3β,17β-diacetate ester of etynodiol (17α-ethynylestr-4-ene-3β,17β-diol). Synthesis Chemical syntheses of etynodiol diacetate have been published. Reduction of norethisterone (1) affords the 3,17-diol. The 3β-hydroxy compound is the desired product; since reactions at C3 do not show nearly the same stereoselectivity as those at C17, by virtue of the relative lack of stereo-directing proximate substituents, the formation of the desired isomer is engendered by use of a bulky reducing agent, lithium tri-tert-butoxyaluminum hydride. Acetylation of the 3β,17β-diol affords etynodiol diacetate (3). History Etynodiol was first synthesized in 1954, via reduction of norethisterone, and etynodiol diacetate was introduced for medical use in 1965. Society and culture Generic names Etynodiol diacetate is the generic name of the drug (that of its free alcohol form is etynodiol), while ethynodiol diacetate is an alternative spelling used by several other naming systems. 
It is also known by its former developmental code names CB-8080 and SC-11800. Brand names Etynodiol diacetate is or has been marketed under brand names including Conova, Continuin, Demulen, Femulen, Kelnor, Lo-Malmorede, Luteonorm, Luto-Metrodiol, Malmorede, Metrodiol, Ovulen, Soluna, Zovia, and others. Availability Etynodiol diacetate is marketed in only a few countries, including the United States, Canada, Argentina, and Oman. References Acetate esters Ethynyl compounds Anabolic–androgenic steroids Combination drugs Drugs developed by Merck Estranes Prodrugs Progestogen esters Progestogens Synthetic estrogens
Etynodiol diacetate
[ "Chemistry" ]
1,154
[ "Chemicals in medicine", "Prodrugs" ]
6,657,908
https://en.wikipedia.org/wiki/Etynodiol
Etynodiol, or ethynodiol, is a steroidal progestin of the 19-nortestosterone group which was never marketed. A diacylated derivative, etynodiol diacetate, is used as a hormonal contraceptive. Etynodiol is sometimes used as a synonym for etynodiol diacetate. It was patented in 1955. Pharmacology Etynodiol is a prodrug of norethisterone, and is converted immediately and completely into norethisterone. Etynodiol is an intermediate in the conversion of the prodrug lynestrenol into norethisterone. Chemistry Etynodiol is a 19-nortestosterone derivative. Structurally, it is almost identical to norethisterone and lynestrenol, differing only in its C3 substituent. Whereas norethisterone has a ketone at C3 and lynestrenol has no substituent at C3, etynodiol has a hydroxyl group at that position. Synthesis Society and culture Generic names Etynodiol is the generic name of the drug, while ethynodiol is an alternative spelling used by some naming systems. References Ethynyl compounds Anabolic–androgenic steroids Diols Estranes Prodrugs Progestogens Synthetic estrogens
Etynodiol
[ "Chemistry" ]
288
[ "Chemicals in medicine", "Prodrugs" ]
6,657,979
https://en.wikipedia.org/wiki/Norethisterone
Norethisterone, also known as norethindrone and sold under the brand name Norlutin among others, is a progestin medication used in birth control pills, in menopausal hormone therapy, and for the treatment of gynecological disorders. The medication is available in both low-dose and high-dose formulations and both alone and in combination with an estrogen. It is used by mouth or, as norethisterone enanthate, by injection into muscle. Side effects of norethisterone include menstrual irregularities, headaches, nausea, breast tenderness, mood changes, acne, and increased hair growth. Norethisterone is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has weak androgenic and estrogenic activity, mostly at high dosages, and no other important hormonal activity. Norethisterone was discovered in 1951 and was one of the first progestins to be developed. It was first introduced for medical use on its own in 1957 and was introduced in combination with an estrogen for use as a birth control pill in 1963. It is sometimes referred to as a "first-generation" progestin. Like desogestrel and norgestrel, norethisterone is available as a progestogen-only "mini pill" for birth control. Norethisterone is marketed widely throughout the world. It is available as a generic medication. In 2022, it was the 135th most commonly prescribed medication in the United States, with more than 4 million prescriptions. It is on the World Health Organization's List of Essential Medicines. Medical uses Norethisterone is used as a hormonal contraceptive in combination with an estrogen – usually ethinylestradiol (EE) – in combined oral contraceptive pills and alone in progestogen-only pills. Another medical use of norethisterone is to alleviate endometriosis-related pain. In fact, 50% of patients who received medical or surgical treatment for endometriosis-related pelvic pain have benefited from progestin therapy. This could be due to the fact that norethisterone induces endometrial proliferation during the secretory phase, which has been shown to alleviate endometrial pain complaints. Another way in which norethisterone may act to reduce endometrial pain is via inhibition of ovulation. Endometriosis pain and discomfort are worse during ovulation. Contraindications High-dose (10 mg/day) norethisterone has been associated with hepatic veno-occlusive disease, and because of this adverse effect, norethisterone should not be given to patients undergoing allogeneic bone marrow transplantation, as it has been associated with substantially lower one-year survival post-transplantation. Side effects At contraceptive and hormone replacement dosages (0.35 to 1 mg/day), norethisterone has essentially progestogenic side effects only. In most clinical studies of norethisterone for contraception or menopausal hormone therapy, the drug has been combined with an estrogen, and for this reason, it is difficult to determine which of the side effects were caused by norethisterone and which of them were caused by estrogen in such research. However, norethisterone enanthate, an intramuscularly administered prodrug of norethisterone which is used as a long-acting contraceptive, is used without an estrogen, and hence can be employed as a surrogate for norethisterone in terms of understanding its effects and tolerability. 
In clinical studies, the most common side effect with norethisterone enanthate has been menstrual disturbances, including prolonged bleeding or spotting and amenorrhea. Other side effects have included periodic abdominal bloating and breast tenderness, both of which are thought to be due to water retention and can be relieved with diuretics. There has been no association with weight gain, and blood pressure, blood clotting, and glucose tolerance have all remained normal. However, a decrease in cholesterol has been observed. At high doses (5 to 60 mg/day), for instance those used in the treatment of gynecological disorders, norethisterone can cause hypogonadism due to its antigonadotropic effects and can have estrogenic and weak androgenic side effects. High doses of norethisterone acetate (10 mg/day) have been associated with abnormal liver function tests, including significant elevations in liver enzymes. These liver enzymes included lactate dehydrogenase and glutamate pyruvate transaminase. Although they were described as having no clinical relevance, the elevated liver enzymes associated with norethisterone acetate may have precluded its further development for male hormonal contraception. Androgenic Due to its weak androgenic activity, norethisterone can produce androgenic side effects such as acne, hirsutism, and voice changes of slight severity in some women at high dosages (e.g., 10 to 40 mg/day). This is notably not the case with combined oral contraceptives that contain norethisterone and EE, however. Such formulations contain low dosages of norethisterone (0.35 to 1 mg/day) in combination with estrogen and are actually associated with improvement in acne symptoms. In accordance, they are in fact approved by the U.S. Food and Drug Administration for the treatment of acne in women in the United States. The improvement in acne symptoms is believed to be due to a 2- to 3-fold increase in sex hormone-binding globulin (SHBG) levels and a consequent decrease in free testosterone levels caused by EE, which results in an overall decrease in androgenic signaling in the body. The sebaceous glands are highly androgen-sensitive and their size and activity are potential markers of androgenic effect. A high dosage of 20 mg/day norethisterone or norethisterone acetate has been found to significantly stimulate the sebaceous glands, whereas lower dosages of 5 mg/day and 2.5 mg/day norethisterone and norethisterone acetate, respectively, did not significantly stimulate sebum production and were consequently regarded as devoid of significant androgenicity. Conversely, dosages of norethisterone of 0.5 to 3 mg/day have been found to dose-dependently decrease SHBG levels (and hence to suppress hepatic SHBG production), which is another highly sensitive marker of androgenicity. A large clinical study of high to very high oral dosages of norethisterone (10 to 40 mg/day) administered for prolonged periods of time (4 to 35 weeks) to prevent miscarriage in pregnant women found that 5.5% of the women experienced mild androgenic side effects such as mild voice changes (hoarseness), acne, and hirsutism and that 18.3% of female infants born to the mothers showed, in most cases only slight, virilization of the genitals. Maternal androgenic symptoms occurred most often in women who received a dosage of norethisterone of 30 mg/day or more for a period of 15 weeks or longer. 
In the female infants who experienced virilization of the genitals, the sole manifestation in 86.7% of the cases was varied but almost always slight enlargement of the clitoris. In the remaining 13.3% of the affected cases, marked clitoral enlargement and partial fusion of the labioscrotal folds occurred. The dosages used in these cases were 20 to 40 mg/day. In a letter to the editor on the topic of virilization caused by high dosages of norethisterone acetate in women, a physician expressed that they had not observed the "slightest evidence of virilization" and that there had "certainly been no hirsutism nor any voice changes" in 55 women with advanced breast cancer that they had treated with 30 to 60 mg/day norethisterone for up to six months. High-dosage norethisterone has been used to suppress menstruation in women with severe intellectual disability who were incapable of handling their own menses. A study of 118 nulliparous women treated with 5 mg/day norethisterone for a period of 2 to 30 months found that the drug was effective in producing amenorrhea in 86% of the women, with breakthrough bleeding occurring in the remaining 14%. Side effects including weight gain, hirsutism, acne, headache, nausea, and vomiting all did not appear to increase in incidence and no "disturbing side effects" were noted in any of the women. Another study of 5 mg/day norethisterone in 132 women also made no mention of androgenic side effects. These findings suggest little to no risk of androgenic side effects with norethisterone at a dosage of 5 mg/day. A study of 194 women treated with 5 to 15 mg/day norethisterone acetate for a median duration of 13 months of therapy to suppress symptoms of endometriosis observed no side effects in 55.2% of patients, weight gain in 16.1%, acne in 9.9%, mood lability in 8.9%, hot flashes in 8.3%, and voice deepening in two women (1.0%). Estrogenic Norethisterone is weakly estrogenic (via conversion into its metabolite EE), and for this reason, it has been found at high dosages to be associated with high rates of estrogenic side effects such as breast enlargement in women and gynecomastia in men, but also with improvement of menopausal symptoms in postmenopausal women. It has been suggested that very high dosages (e.g., 40 mg/day, which are sometimes used in clinical practice for various indications) of norethisterone acetate (and by extension norethisterone) may result in an increased risk of venous thromboembolism (VTE) analogously to high dosages (above 50 μg/day) of EE, and that even doses of norethisterone acetate of 10 to 20 mg, which correspond to EE doses of approximately 20 to 30 μg/day, may in certain women be associated with increased risk. A study also found that ethinylestradiol and norethisterone had a greater influence on coagulation factors when the dose of norethisterone was 3 or 4 mg than when it was 1 mg. This might have been due to additional ethinylestradiol generated by higher doses of norethisterone. Overdose There have been no reports of serious side effects with overdose of norethisterone, even in small children. As such, overdose usually does not require treatment. High dosages of as much as 60 mg/day norethisterone have been studied for extended treatment durations without serious adverse effects described. Interactions 5α-Reductase plays an important role in the metabolism of norethisterone, and 5α-reductase inhibitors such as finasteride and dutasteride can inhibit its metabolism. 
Norethisterone is partially metabolized via hydroxylation by CYP3A4, and inhibitors and inducers of CYP3A4 can significantly alter circulating levels of norethisterone. For instance, the CYP3A4 inducers rifampicin and bosentan have been found to decrease norethisterone exposure by 42% and 23%, respectively, and the CYP3A4 inducers carbamazepine and St. John's wort have also been found to accelerate norethisterone clearance. Pharmacology Pharmacodynamics Norethisterone is a potent progestogen and a weak androgen and estrogen. That is, it is a potent agonist of the progesterone receptor (PR) and a weak agonist of the androgen receptor (AR) and the estrogen receptor (ER). Norethisterone itself has insignificant affinity for the ER; its estrogenic activity is from an active metabolite that is formed in very small amounts, ethinylestradiol (EE), which is a very potent estrogen. Norethisterone and its metabolites have negligible affinity for the glucocorticoid receptor (GR) and mineralocorticoid receptor (MR) and hence have no glucocorticoid, antiglucocorticoid, mineralocorticoid, or antimineralocorticoid activity. Progestogenic activity Norethisterone is a potent progestogen and binds to the PR with approximately 150% of the affinity of progesterone. In contrast, its parent compounds, testosterone, nandrolone (19-nortestosterone), and ethisterone (17α-ethynyltestosterone), have 2%, 22%, and 44% of the relative binding affinity of progesterone for the PR. Unlike norethisterone, its major active metabolite 5α-dihydronorethisterone (5α-DHNET), which is formed by transformation via 5α-reductase, has been found to possess both progestogenic and marked antiprogestogenic activity, although its affinity for the PR is greatly reduced relative to norethisterone at only 25% of that of progesterone. Norethisterone produces similar changes in the endometrium and vagina, such as endometrial transformation, and is similarly antigonadotropic, ovulation-inhibiting, and thermogenic in women compared to progesterone, which is in accordance with its progestogenic activity. Androgenic activity Norethisterone has approximately 15% of the affinity of the anabolic–androgenic steroid (AAS) metribolone (R-1881) for the AR, and in accordance, is weakly androgenic. In contrast to norethisterone, 5α-DHNET, the major metabolite of norethisterone, shows higher affinity for the AR, with approximately 27% of the affinity of metribolone. However, although 5α-DHNET has higher affinity for the AR than norethisterone, it has significantly diminished and in fact almost abolished androgenic potency in comparison to norethisterone in rodent bioassays. Similar findings were observed for ethisterone (17α-ethynyltestosterone) and its 5α-reduced metabolite, whereas 5α-reduction enhanced both the AR affinity and androgenic potency of testosterone and nandrolone (19-nortestosterone) in rodent bioassays. As such, it appears that the ethynyl group of norethisterone at the C17α position is responsible for its loss of androgenicity upon 5α-reduction. Norethisterone (0.5 to 3 mg/day) has been found to dose-dependently decrease circulating SHBG levels, which is a common property of androgens and is due to AR-mediated suppression of hepatic SHBG production. The drug also has estrogenic activity, and estrogens are known to increase SHBG hepatic production and circulating levels, so it would appear that the androgenic activity of norethisterone overpowers its estrogenic activity in this regard. 
Norethisterone is bound to a considerable extent (36%) to SHBG in circulation. Although it has lower affinity for SHBG than endogenous androgens and estrogens, norethisterone may displace testosterone from SHBG and thereby increase free testosterone levels, and this action may contribute to its weak androgenic effects. Estrogenic activity Norethisterone binds to the ERs, the ERα and the ERβ, with 0.07% and 0.01% of the relative binding affinity of estradiol. Due to these very low relative affinities, it is essentially inactive itself as a ligand of the ERs at clinical concentrations. However, norethisterone has been found to be a substrate for aromatase and is converted in the liver to a small extent (0.35%) to the highly potent estrogen ethinylestradiol (EE), and for this reason, unlike most other progestins, norethisterone has some estrogenic activity. However, with typical dosages of norethisterone used in oral contraceptives (0.5 to 1 mg), the levels of EE produced are low, and it has been said that they are probably without clinical relevance. Conversely, doses of 5 and 10 mg of norethisterone, which are used in the treatment of gynecological disorders, are converted at rates of 0.7% and 1.0% and produce levels of EE that correspond to those produced by 30 and 60 μg dosages of EE, respectively. The levels of EE formed by 0.5 and 1 mg of norethisterone have been estimated based on higher dosages as corresponding to 2 and 10 μg dosages of EE, respectively. At high doses, norethisterone may increase the risk of venous thromboembolism due to metabolism into EE. Neurosteroid activity Like progesterone and testosterone, norethisterone is metabolized into 3,5-tetrahydro metabolites. Whether these metabolites of norethisterone interact with the GABAA receptor similarly to the 3,5-tetrahydro metabolites of progesterone and testosterone like allopregnanolone and 3α-androstanediol, respectively, is a topic that does not appear to have been studied and hence requires clarification. Steroidogenesis inhibition Norethisterone is a substrate for and is known to be an inhibitor of 5α-reductase, with 4.4% and 20.1% inhibition at 0.1 and 1 μM, respectively. However, therapeutic concentrations of norethisterone are in the low nanomolar range, so this action may not be clinically relevant at typical dosages. Norethisterone and its major active metabolite 5α-DHNET have been found to act as irreversible aromatase inhibitors (Ki = 1.7 μM and 9.0 μM, respectively). However, like the case of 5α-reductase, the concentrations required are probably too high to be clinically relevant at typical dosages. 5α-DHNET specifically has been assessed and found to be selective in its inhibition of aromatase, and does not affect cholesterol side-chain cleavage enzyme (P450scc), 17α-hydroxylase/17,20-lyase, 21-hydroxylase, or 11β-hydroxylase. Since it is not aromatized (and hence cannot be transformed into an estrogenic metabolite), unlike norethisterone, 5α-DHNET has been proposed as a potential therapeutic agent in the treatment of ER-positive breast cancer. Other activities Norethisterone is a very weak inhibitor of CYP2C9 and CYP3A4 (IC50 = 46 μM and 51 μM, respectively), but these actions require very high concentrations of norethisterone that are far above therapeutic circulating levels (which are in the nanomolar range) and hence are probably not clinically relevant. 
Norethisterone and some of its 5α-reduced metabolites have been found to produce vasodilating effects in animals that are independent of sex steroid receptors and hence appear to be non-genomic in mechanism. Norethisterone stimulates the proliferation of MCF-7 breast cancer cells in vitro, an action that is independent of the classical PRs and is instead mediated via the progesterone receptor membrane component-1 (PGRMC1). Certain other progestins act similarly in this assay, whereas progesterone acts neutrally. It is unclear if these findings may explain the different risks of breast cancer observed with progesterone and progestins in clinical studies. Antigonadotropic effects Due to its progestogenic activity, norethisterone suppresses the hypothalamic–pituitary–gonadal axis (HPG axis) and hence has antigonadotropic effects. The estrogenic activity of norethisterone at high doses would also be expected to contribute to its antigonadotropic effects. Due to its antigonadotropic effects, norethisterone suppresses gonadal sex hormone production, inhibits ovulation in women, and suppresses spermatogenesis in men. The ovulation-inhibiting dosage of both oral norethisterone and oral norethisterone acetate is about 0.5 mg/day in women. However, some conflicting data exist, suggesting that higher doses might be necessary for full inhibition of ovulation. An intramuscular injection of 200 mg norethisterone enanthate has been found to prevent ovulation and suppress levels of estradiol, progesterone, luteinizing hormone (LH), and follicle-stimulating hormone (FSH) in women. Early studies of oral norethisterone in men employing doses of 20 to 50 mg/day observed suppression of 17-ketosteroid excretion, increased estrogen excretion (due to conversion into ethinylestradiol), suppression of spermatogenesis, libido, and erectile function, and incidence of gynecomastia. A dosage of oral norethisterone of 25 mg/day for 3 weeks in men has been reported to suppress testosterone levels by about 70%, to 100 to 200 ng/dL, within 4 or 5 days, as well as to suppress sperm count and to have no effect on libido or erectile function over this short time period. In healthy young men, norethisterone acetate alone at a dose of 5 to 10 mg/day orally for 2 weeks suppressed testosterone levels from ~527 ng/dL to ~231 ng/dL (–56%). A single 200 mg intramuscular injection of norethisterone enanthate alone or in combination with 2 mg estradiol valerate has been found to produce a rapid, strong, and sustained decrease in gonadotropin and testosterone levels for up to one month in men. Intramuscular injections of 200 mg norethisterone enanthate once every 3 weeks have also been found to suppress spermatogenesis in men. Similarly, a single intramuscular injection of 50 mg norethisterone enanthate in combination with 5 mg estradiol valerate has been found to strongly suppress testosterone levels in men. Levels of testosterone decreased from ~503 ng/dL at baseline to ~30 ng/dL at the lowest point (–94%) which occurred at day 7 post-injection. Pharmacokinetics The pharmacokinetics of norethisterone have been reviewed. Absorption The oral bioavailability of norethisterone is between 47 and 73%, with a mean oral bioavailability of 64%. Micronization has been found to significantly improve the oral bioavailability of norethisterone by increasing intestinal absorption and reducing intestinal metabolism. 
A single 2 mg oral dose of norethisterone has been found to result in peak circulating levels of the drug of 12 ng/mL (40 nmol/L), whereas a single 1 mg oral dose of norethisterone in combination with 2 mg estradiol resulted in peak levels of norethisterone of 8.5 ng/mL (29 nmol/L) one hour post-administration. Distribution The plasma protein binding of norethisterone is 97%. It is bound 61% to albumin and 36% to SHBG. Metabolism Norethisterone has an elimination half-life of 5.2 to 12.8 hours, with a mean elimination half-life of 8.0 hours. The metabolism of norethisterone is very similar to that of testosterone (and nandrolone) and is mainly via reduction of the Δ4 double bond to 5α- and 5β-dihydronorethisterone, which is followed by the reduction of the C3 keto group to the four isomers of 3,5-tetrahydronorethisterone. These transformations are catalyzed by 5α- and 5β-reductase and 3α- and 3β-hydroxysteroid dehydrogenase both in the liver and in extrahepatic tissues such as the pituitary gland, uterus, prostate gland, vagina, and breast. With the exception of 3α,5α- and 3β,5α-tetrahydronorethisterone, which have significant affinity for the ER and are estrogenic to some degree, the 3,5-tetrahydro metabolites of norethisterone are inactive in terms of affinity for sex steroid receptors (specifically, the PR, AR, and ER). A small amount of norethisterone is also converted by aromatase into EE. Norethisterone is metabolized in the liver via hydroxylation as well, mainly by CYP3A4. Some conjugation (including glucuronidation and sulfation) of norethisterone and its metabolites occurs in spite of steric hindrance by the ethynyl group at C17α. The ethynyl group of norethisterone is preserved in approximately 90% of all of its metabolites. Norethisterone is used in birth control pills, as opposed to progesterone itself, because it is not metabolized as rapidly as progesterone when taken orally. When progesterone is taken orally, it is rapidly metabolized in the gastrointestinal tract and the liver and broken down into many different metabolites. Norethisterone, by contrast, is metabolized more slowly, so it remains present in higher quantities and can compete more effectively for progesterone receptor binding sites. Elimination Norethisterone is eliminated 33 to 81% in urine and 35 to 43% in feces. Chemistry Norethisterone, also known as 17α-ethynyl-19-nortestosterone or as 17α-ethynylestra-4-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone. It is specifically a derivative of testosterone in which an ethynyl group has been added at the C17α position and the methyl group at the C19 position has been removed; hence, it is a combined derivative of ethisterone (17α-ethynyltestosterone) and nandrolone (19-nortestosterone). These modifications result in increased progestogenic activity and oral bioavailability as well as decreased androgenic/anabolic activity. Derivatives Norethisterone (NET) is the parent compound of a large group of progestins that includes most of the progestins known as the 19-nortestosterone derivatives.
This group is divided by chemical structure into the estranes (derivatives of norethisterone) and the gonanes (18-methylgonanes or 13β-ethylestranes; derivatives of levonorgestrel) and includes the following marketed medications: Estranes Etynodiol diacetate (3β-hydroxy-NET 3β,17β-diacetate) Lynestrenol (3-desoxy-NET) Norethisterone acetate (NET 17β-acetate) Norethisterone enanthate (17β-enanthate) Noretynodrel (δ5(10)-NET) Norgestrienone (δ9,11-NET) Quingestanol acetate (NET 17β-acetate 3-cyclopentyl enol ether) Tibolone (7α-methyl-δ5(10)-NET) Gonanes Desogestrel (3-deketo-11-methylene-18-methyl-NET) Etonogestrel (11-methylene-18-methyl-NET) Gestodene (18-methyl-δ15-NET) Gestrinone (18-methyl-δ9,11-NET) Levonorgestrel (18-methyl-NET) Norelgestromin (18-methyl-NET 3-oxime) Norgestimate (18-methyl-NET 3-oxime 17β-acetate) Norgestrel (13-ethyl-NET) Several of these act as prodrugs of norethisterone, including norethisterone acetate, norethisterone enanthate, etynodiol diacetate, lynestrenol, and quingestanol acetate. Noretynodrel may also be a prodrug of norethisterone. Norethisterone acetate is taken by mouth similarly to norethisterone, while norethisterone enanthate is given by injection into muscle. Non-17α-ethynylated 19-Nortestosterone (19-NT) progestins which are technically not derivatives of norethisterone (as they do not have a C17α ethynyl group) but are still closely related (with other substitutions at the C17α and/or C16β positions) include the following marketed medications: The C17α vinyl (ethenyl) derivatives norgesterone (17α-vinyl-δ5(10)-19-NT) and norvinisterone (17α-vinyl-19-NT) The C17α allyl derivatives allylestrenol (3-deketo-17α-allyl-19-NT) and altrenogest (17α-allyl-δ9,11-19-NT) The C17α alkyl derivative normethandrone (17α-methyl-19-NT) The C17α cyanomethyl derivative dienogest (17α-cyanomethyl-δ9-19-NT) The C16β ethyl derivative oxendolone (16β-ethyl-19-NT) Many anabolic steroids of the 19-nortestosterone family, like norethandrolone and ethylestrenol, are also potent progestogens, but were never marketed as such. Synthesis Chemical syntheses of norethisterone have been published. Synthesis 1 Estradiol 3-methyl ether (1, EME) is partially reduced to the 1,5-diene (2) as also occurs for the first step in the synthesis of nandrolone. Oppenauer oxidation then transforms the C17β hydroxyl group into a ketone functionality (3). This is then reacted with metal acetylide into the corresponding C17α ethynyl compound (4). Hydrolysis of the enol ether under mild conditions leads directly to (5), which appears to be noretynodrel (although Lednicer states that it is "etynodrel" in his book (which may be a synonym etynodiol); etynodrel is with a chlorine atom attached), an orally active progestin. This is the progestogen component of the first oral contraceptive to be offered for sale (i.e., Enovid). Treatment of the ethynyl enol ether with strong acid leads to norethisterone (6). In practice, these and all other combined oral contraceptives are mixtures of 1 to 2% EE or mestranol and an oral progestin. It has been speculated that the discovery of the necessity of estrogen in addition to progestin for contraceptive efficacy is due to the presence of a small amount of unreduced EME (1) in early batches of 2. This when subjected to oxidation and ethynylation, would of course lead to mestranol (3). In any event, the need for the presence of estrogen in the mixture is now well established experimentally. 
Synthesis 2 Norethisterone is made from estr-4-ene-3,17-dione (bolandione), which in turn is synthesized by partial reduction of the aromatic region of the 3-O-methyl ether of estrone with lithium in liquid ammonia, with simultaneous reduction of the keto group at C17 to a hydroxyl group, which is then oxidized back to a keto group by chromium trioxide in acetic acid. The conjugated C4-C5 olefin and the carbonyl group at C3 are then transformed into the dienol ethyl ether using ethyl orthoformate. The obtained product is ethynylated by acetylene in the presence of potassium tert-butoxide. After hydrolysis of the formed O-potassium derivative with hydrochloric acid, during which the enol ether is also hydrolyzed and the remaining double bond is shifted, the desired norethisterone is obtained. History Norethisterone was synthesized for the first time by chemists Luis Miramontes, Carl Djerassi, and George Rosenkranz at Syntex in Mexico City in 1951. It was derived from ethisterone, and was found to possess about 20-fold greater potency as a progestogen in comparison. Norethisterone was the first highly active oral progestogen to be synthesized, and was preceded (as a progestogen) by progesterone (1934), ethisterone (1938), 19-norprogesterone (1944), and 17α-methylprogesterone (1949) as well as by nandrolone (1950), whereas noretynodrel (1952) and norethandrolone (1953) followed the synthesis of norethisterone. The drug was introduced as Norlutin in the United States in 1957. Norethisterone was subsequently combined with mestranol and marketed as Ortho-Novum in the United States in 1963. It was the second progestin, after noretynodrel in 1960, to be used in an oral contraceptive. In 1964, additional contraceptive preparations containing norethisterone in combination with mestranol or EE, such as Norlestrin and Norinyl, were marketed in the United States. Society and culture Generic names Norethisterone is the INN and BAN of the drug, while norethindrone is its USAN. Brand names Norethisterone is available in Bangladesh as Menogia (ACI), Normens (Renata), etc. Norethisterone (NET), including as norethisterone acetate and norethisterone enanthate, has been marketed under many brand names throughout the world. Availability United States Norethisterone was previously available alone in 5 mg tablets under the brand name Norlutin in the United States, but this formulation has since been discontinued. However, norethisterone acetate remains available alone in 5 mg tablets under the brand name Aygestin in the United States. It is one of the only non-contraceptive progestogen-only drug formulations that remain available in the United States. The others include progesterone, medroxyprogesterone acetate, megestrol acetate, and hydroxyprogesterone caproate, as well as the atypical agent danazol. Both norethisterone and norethisterone acetate are also available in the United States as contraceptives. Norethisterone is available both alone (brand names Camila, Errin, Heather, Micronor, Nor-QD, others) and in combination with EE (Norinyl, Ortho-Novum, others) or mestranol (Norinyl, Ortho-Novum, others), while norethisterone acetate is available only in combination with EE (Norlestrin, others). Norethisterone enanthate is not available in the United States in any form. Research Norethisterone, as norethisterone acetate and norethisterone enanthate, has been studied for use as a potential male hormonal contraceptive in combination with testosterone in men.
Long-acting norethisterone microspheres for intramuscular injection have been studied for potential use in birth control. References Further reading Ethynyl compounds 3β-Hydroxysteroid dehydrogenase inhibitors 5α-Reductase inhibitors Anabolic–androgenic steroids Aromatase inhibitors Enones Estranes Hormonal contraception Human drug metabolites Progestogens Syntex Synthetic estrogens
Norethisterone
[ "Chemistry" ]
7,917
[ "Chemicals in medicine", "Human drug metabolites" ]
6,660,265
https://en.wikipedia.org/wiki/Equilibrium%20unfolding
In biochemistry, equilibrium unfolding is the process of unfolding a protein or RNA molecule by gradually changing its environment, such as by changing the temperature or pressure, pH, adding chemical denaturants, or applying force as with an atomic force microscope tip. If the equilibrium was maintained at all steps, the process theoretically should be reversible during equilibrium folding. Equilibrium unfolding can be used to determine the thermodynamic stability of the protein or RNA structure, i.e. free energy difference between the folded and unfolded states. Theoretical background In its simplest form, equilibrium unfolding assumes that the molecule may belong to only two thermodynamic states, the folded state (typically denoted N for "native" state) and the unfolded state (typically denoted U). This "all-or-none" model of protein folding was first proposed by Tim Anson in 1945, but is believed to hold only for small, single structural domains of proteins (Jackson, 1998); larger domains and multi-domain proteins often exhibit intermediate states. As usual in statistical mechanics, these states correspond to ensembles of molecular conformations, not just one conformation. The molecule may transition between the native and unfolded states according to a simple kinetic model N U with rate constants and for the folding (U -> N) and unfolding (N -> U) reactions, respectively. The dimensionless equilibrium constant can be used to determine the conformational stability by the equation where is the gas constant and is the absolute temperature in kelvin. Thus, is positive if the unfolded state is less stable (i.e., disfavored) relative to the native state. The most direct way to measure the conformational stability of a molecule with two-state folding is to measure its kinetic rate constants and under the solution conditions of interest. However, since protein folding is typically completed in milliseconds, such measurements can be difficult to perform, usually requiring expensive stopped flow or (more recently) continuous-flow mixers to provoke folding with a high time resolution. Dual polarisation interferometry is an emerging technique to directly measure conformational change and . Chemical denaturation In the less extensive technique of equilibrium unfolding, the fractions of folded and unfolded molecules (denoted as and , respectively) are measured as the solution conditions are gradually changed from those favoring the native state to those favoring the unfolded state, e.g., by adding a denaturant such as guanidinium hydrochloride or urea. (In equilibrium folding, the reverse process is carried out.) Given that the fractions must sum to one and their ratio must be given by the Boltzmann factor, we have Protein stabilities are typically found to vary linearly with the denaturant concentration. A number of models have been proposed to explain this observation prominent among them being the denaturant binding model, solvent-exchange model (both by John Schellman) and the Linear Extrapolation Model (LEM; by Nick Pace). All of the models assume that only two thermodynamic states are populated/de-populated upon denaturation. They could be extended to interpret more complicated reaction schemes. The denaturant binding model assumes that there are specific but independent sites on the protein molecule (folded or unfolded) to which the denaturant binds with an effective (average) binding constant k. 
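For reference, the two-state relations described earlier in this section can be summarized in conventional notation as follows. This is a sketch of the standard textbook formulas rather than the original displayed equations: ΔG denotes the free energy of unfolding, R the gas constant, T the absolute temperature, and k_f and k_u the folding and unfolding rate constants.

```latex
% Standard two-state relations (conventional notation)
\mathrm{N} \;\rightleftharpoons\; \mathrm{U},
\qquad K_{\mathrm{eq}} = \frac{k_{u}}{k_{f}} = \frac{[\mathrm{U}]}{[\mathrm{N}]},
\qquad \Delta G = -RT \ln K_{\mathrm{eq}}

% Fractions of folded (f_N) and unfolded (f_U) molecules
f_{N} + f_{U} = 1,
\qquad \frac{f_{U}}{f_{N}} = e^{-\Delta G / RT},
\qquad f_{N} = \frac{1}{1 + e^{-\Delta G / RT}}
```

With these conventions, ΔG is positive when the unfolded state is disfavored, in agreement with the sign convention stated above.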
The equilibrium shifts towards the unfolded state at high denaturant concentrations as it has more binding sites for the denaturant relative to the folded state (). In other words, the increased number of potential sites exposed in the unfolded state is seen as the reason for denaturation transitions. An elementary treatment results in the following functional form: where is the stability of the protein in water and [D] is the denaturant concentration. Thus the analysis of denaturation data with this model requires 7 parameters: ,, k, and the slopes and intercepts of the folded and unfolded state baselines. The solvent exchange model (also called the ‘weak binding model’ or ‘selective solvation’) of Schellman invokes the idea of an equilibrium between the water molecules bound to independent sites on protein and the denaturant molecules in solution. It has the form: where is the equilibrium constant for the exchange reaction and is the mole-fraction of the denaturant in solution. This model tries to answer the question of whether the denaturant molecules actually bind to the protein or they seem to be bound just because denaturants occupy about 20-30% of the total solution volume at high concentrations used in experiments, i.e. non-specific effects – and hence the term ‘weak binding’. As in the denaturant-binding model, fitting to this model also requires 7 parameters. One common theme obtained from both these models is that the binding constants (in the molar scale) for urea and guanidinium hydrochloride are small: ~ 0.2 for urea and 0.6 for GuHCl. Intuitively, the difference in the number of binding sites between the folded and unfolded states is directly proportional to the differences in the accessible surface area. This forms the basis for the LEM which assumes a simple linear dependence of stability on the denaturant concentration. The resulting slope of the plot of stability versus the denaturant concentration is called the m-value. In pure mathematical terms, m-value is the derivative of the change in stabilization free energy upon the addition of denaturant. However, a strong correlation between the accessible surface area (ASA) exposed upon unfolding, i.e. difference in the ASA between the unfolded and folded state of the studied protein (dASA), and the m-value has been documented by Pace and co-workers. In view of this observation, the m-values are typically interpreted as being proportional to the dASA. There is no physical basis for the LEM and it is purely empirical, though it is widely used in interpreting solvent-denaturation data. It has the general form: where the slope is called the "m-value"(> 0 for the above definition) and (also called Cm) represents the denaturant concentration at which 50% of the molecules are folded (the denaturation midpoint of the transition, where ). In practice, the observed experimental data at different denaturant concentrations are fit to a two-state model with this functional form for , together with linear baselines for the folded and unfolded states. The and are two fitting parameters, along with four others for the linear baselines (slope and intercept for each line); in some cases, the slopes are assumed to be zero, giving four fitting parameters in total. The conformational stability can be calculated for any denaturant concentration (including the stability at zero denaturant) from the fitted parameters and . 
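As a concrete illustration of the fitting procedure just described (a two-state model with the linear extrapolation method and linear folded/unfolded baselines), a minimal sketch in Python might look like the following. The function name, the synthetic data, and the parameter symbols (dG_H2O, m, baseline slopes and intercepts) are illustrative assumptions, not code from the source.

```python
# Minimal sketch: fitting a chemical-denaturation curve to a two-state model
# with the linear extrapolation method (LEM) and linear baselines.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3  # gas constant, kJ/(mol*K)
T = 298.15    # absolute temperature, K

def two_state_signal(D, dG_H2O, m, aN, bN, aU, bU):
    """Observed signal for two-state unfolding: LEM stability plus linear baselines."""
    dG = dG_H2O - m * D                  # stability at denaturant concentration D
    K = np.exp(-dG / (R * T))            # equilibrium constant for unfolding
    fU = K / (1.0 + K)                   # fraction unfolded
    yN = aN + bN * D                     # folded-state baseline
    yU = aU + bU * D                     # unfolded-state baseline
    return (1.0 - fU) * yN + fU * yU

# Made-up example data (denaturant in M, arbitrary signal units)
D_obs = np.linspace(0, 8, 17)
y_obs = two_state_signal(D_obs, 20.0, 5.0, 1.0, -0.01, 0.2, -0.005)
y_obs += np.random.default_rng(0).normal(0, 0.01, D_obs.size)

p0 = (15.0, 4.0, 1.0, 0.0, 0.2, 0.0)     # rough initial guesses
popt, _ = curve_fit(two_state_signal, D_obs, y_obs, p0=p0)
dG_fit, m_fit = popt[0], popt[1]
print(f"dG(H2O) ~ {dG_fit:.1f} kJ/mol, m ~ {m_fit:.1f} kJ/(mol*M), "
      f"Cm ~ {dG_fit / m_fit:.2f} M")
```

Fitting the baselines together with ΔG(H2O) and the m-value, as the text describes, avoids biasing the stability estimate by normalizing the data before the fit.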
When combined with kinetic data on folding, the m-value can be used to roughly estimate the amount of buried hydrophobic surface in the folding transition state. Structural probes Unfortunately, the probabilities and cannot be measured directly. Instead, we assay the relative population of folded molecules using various structural probes, e.g., absorbance at 287 nm (which reports on the solvent exposure of tryptophan and tyrosine), far-ultraviolet circular dichroism (180-250 nm, which reports on the secondary structure of the protein backbone), dual polarisation interferometry (which reports the molecular size and fold density) and near-ultraviolet fluorescence (which reports on changes in the environment of tryptophan and tyrosine). However, nearly any probe of folded structure will work; since the measurement is taken at equilibrium, there is no need for high time resolution. Thus, measurements can be made of NMR chemical shifts, intrinsic viscosity, solvent exposure (chemical reactivity) of side chains such as cysteine, backbone exposure to proteases, and various hydrodynamic measurements. To convert these observations into the probabilities and , one generally assumes that the observable adopts one of two values, or , corresponding to the native or unfolded state, respectively. Hence, the observed value equals the linear sum By fitting the observations of under various solution conditions to this functional form, one can estimate and , as well as the parameters of . The fitting variables and are sometimes allowed to vary linearly with the solution conditions, e.g., temperature or denaturant concentration, when the asymptotes of are observed to vary linearly under strongly folding or strongly unfolding conditions. Thermal denaturation Assuming a two state denaturation as stated above, one can derive the fundamental thermodynamic parameters namely, , and provided one has knowledge on the of the system under investigation. The thermodynamic observables of denaturation can be described by the following equations: where , and indicate the enthalpy, entropy and Gibbs free energy of unfolding under a constant pH and pressure. The temperature, is varied to probe the thermal stability of the system and is the temperature at which half of the molecules in the system are unfolded. The last equation is known as the Gibbs–Helmholtz equation. Determining the heat capacity of proteins In principle one can calculate all the above thermodynamic observables from a single differential scanning calorimetry thermogram of the system assuming that the is independent of the temperature. However, it is difficult to obtain accurate values for this way. More accurately, the can be derived from the variations in vs. which can be achieved from measurements with slight variations in pH or protein concentration. The slope of the linear fit is equal to the . Note that any non-linearity of the datapoints indicates that is probably not independent of the temperature. Alternatively, the can also be estimated from the calculation of the accessible surface area (ASA) of a protein prior and after thermal denaturation as follows: For proteins that have a known 3d structure, the can be calculated through computer programs such as Deepview (also known as swiss PDB viewer). The can be calculated from tabulated values of each amino acid through the semi-empirical equation: where the subscripts polar, non-polar and aromatic indicate the parts of the 20 naturally occurring amino acids. 
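For reference, under the usual assumption that the heat capacity change is independent of temperature, the temperature dependences described earlier in this section take the following standard forms. This is a summary of the textbook two-state expressions in conventional notation (ΔHm is the unfolding enthalpy at the midpoint temperature Tm), not a verbatim reproduction of the original displayed equations.

```latex
% Standard two-state thermal-unfolding relations with a temperature-independent \Delta C_p
\Delta H(T) = \Delta H_m + \Delta C_p \,(T - T_m)

\Delta S(T) = \frac{\Delta H_m}{T_m} + \Delta C_p \ln\frac{T}{T_m}

\Delta G(T) = \Delta H_m\left(1 - \frac{T}{T_m}\right)
            + \Delta C_p\left[(T - T_m) - T \ln\frac{T}{T_m}\right]
            \quad \text{(Gibbs--Helmholtz)}
```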
Finally for proteins, there is a linear correlation between and through the following equation: Assessing two-state unfolding Furthermore, one can assess whether the folding proceeds according to a two-state unfolding as described above. This can be done with differential scanning calorimetry by comparing the calorimetric enthalpy of denaturation i.e. the area under the peak, to the van 't Hoff enthalpy described as follows: at the can be described as: When a two-state unfolding is observed the . The is the height of the heat capacity peak. Generalization to protein complexes and multi-domain proteins Using the above principles, equations that relate a global protein signal, corresponding to the folding states in equilibrium, and the variable value of a denaturing agent, either temperature or a chemical molecule, have been derived for homomeric and heteromeric proteins, from monomers to trimers and potentially tetramers. These equations provide a robust theoretical basis for measuring the stability of complex proteins, and for comparing the stabilities of wild type and mutant proteins. Such equations cannot be derived for pentamers of higher oligomers because of mathematical limitations (Abel–Ruffini theorem). References Further reading Pace CN. (1975) "The Stability of Globular Proteins", CRC Critical Reviews in Biochemistry, 1-43. Santoro MM and Bolen DW. (1988) "Unfolding Free Energy Changes Determined by the Linear Extrapolation Method. 1. Unfolding of Phenylmethanesulfonyl α-Chymotrypsin Using Different Denaturants", Biochemistry, 27, 8063–8068. Privalov PL. (1992) "Physical Basis for the Stability of the Folded Conformations of Proteins", in Protein Folding, TE Creighton, ed., W. H. Freeman, pp. 83–126. Yao M and Bolen DW. (1995) "How Valid Are Denaturant-Induced Unfolding Free Energy Measurements? Level of Conformance to Common Assumptions over an Extended Range of Ribonuclease A Stability", Biochemistry, 34, 3771–3781. Jackson SE. (1998) "How do small single-domain proteins fold?", Folding & Design, 3, R81-R91. Schwehm JM and Stites WE. (1998) "Application of Automated Methods for Determination of Protein Conformational Stability", Methods in Enzymology, 295, 150–170. Protein structure Equilibrium chemistry
Equilibrium unfolding
[ "Chemistry" ]
2,679
[ "Equilibrium chemistry", "Protein structure", "Structural biology" ]
6,661,744
https://en.wikipedia.org/wiki/Chemical%20modification
Chemical modification refers to a number of processes involving the alteration of the chemical constitution or structure of molecules. Chemical modification of proteins Chemical modification is the change of biomolecular structure and function due to the addition or removal of modifying elements. This is usually accomplished via chemical reactions or a series of chemical reactions that may or may not be reversible. Chemical modifications can be done to any of the four major macromolecules (proteins, nucleic acids, carbohydrates, and lipids); however, we will be focusing on the modification of proteins in this article. Chemical modifications are important because they can improve a biomolecule's stability, which in turn helps the organism cope with physiological stressors. Modification of proteins also introduces the possibility of using them as drugs for the treatment of a wide range of diseases. Chemical modifiers on drug compounds can also be used to increase the shelf life of the product or extend its function. Chemical modification is also another way in which variability is introduced into the proteome. Chemical modifications of proteins are ever-changing due to the fluctuating needs of the organism. Common chemical modifications include phosphorylation, glycosylation, ubiquitination, methylation, lipidation, and proteolysis. Although we will cover each type of chemical modification singularly, they can often work in conjunction with each other to modify the protein. Due to the large variety of modifications possible, the study of chemical modifications is ongoing. Phosphorylation Phosphorylation occurs when a PO3 (phosphoryl) group is added to a protein. This chemical modification is the most extensively studied and is reversible. These studies have shown that phosphorylation acts as a regulator for proteins in two ways: the addition or removal of a phosphoryl group can impact enzyme kinetics by turning the enzymatic function on or off via conformational changes, and the phosphorylation of one protein can attract similar neighboring proteins to bind to the phosphorylated motif and induce signal transduction pathways. The mechanism for phosphorylation utilizes kinases and phosphatases, enzymes that transfer the phosphoryl group onto and off of the targeted biomolecule. Often, kinases are accompanied by ATP or GTP to help facilitate the transfer of the phosphoryl group. Phosphorylation of a kinase can trigger one of two signal transduction pathways. These pathways may either be linear or a cascade transduction pathway. Cascade signal transduction pathways lead to the phosphorylation of many amino acids and utilize second messengers to amplify the signal to elicit a larger response. Phosphatases can act as regulators and editors of cellular signaling pathways by forming transient protein–protein interactions. Although kinases are most associated with activating enzymatic activity and phosphatases with turning off enzymatic activity, each can also perform the opposite function (kinases can turn off enzymatic activity and phosphatases can turn it on). Kinases and phosphatases can also have other binding sites that can attach to other signaling proteins.
Phosphorylation and dephosphorylation of proteins through the activity of kinases and phosphatases play an important role in many biological processes such as cell proliferation through the MAPK, PI3K, Akt, mTOR, PKA, and PKC signaling pathways. Because over-activation of kinases is associated with cancer progression, drugs that work to inhibit the function of kinases have been developed as possible treatments. Glycosylation Another well-studied chemical modification is glycosylation. Glycosylation is the process by which sugar molecules are attached to a protein. The length of the attached saccharide is variable and impacts the structure, activity, and stability of the protein it is attached to. Many glycosylated proteins are found on cell surfaces and play a large role in determining blood type. Ubiquitination Ubiquitin is made up of 76 amino acids and can exist on its own or attached to a protein. When ubiquitin is attached to a protein (the amount of ubiquitin that binds to the protein varies), it can function to target that protein for degradation or trigger kinase activation. There are three enzymes that function in the ubiquitination pathway: ubiquitin-activating enzyme (E1), ubiquitin-conjugating enzyme (E2), and ubiquitin-protein ligase (E3). Generally, E1 activates ubiquitin and transfers it to E2. E3 transfers ubiquitin to the target protein. This pathway is closely regulated and is very specific. Monoubiquitination (one ubiquitin protein) of a protein does not typically signal for protein degradation; instead, it primarily functions to facilitate histone regulation, endocytosis, and nuclear export. Polyubiquitination (multiple ubiquitin proteins) of a protein typically triggers protein degradation, especially if they are bound to a lysine residue. The degradation function of ubiquitin is the best understood, as it has been linked to the NF-κB signaling pathway for triggering inflammation. It has also been implicated as playing a role in cancer and other diseases. Methylation Methylation is the transfer of one methyl group (a carbon atom bonded to three hydrogen atoms) to a protein via enzymes called methyltransferases. It is also often used by histone proteins to allow certain regions of the genome to wind and unwind and become accessible for transcription. Lipidation Lipidation is the process of attaching lipids to proteins to tag them as membrane-bound proteins. Different lipid attachments increase the protein's affinity for different membrane types (plasma membrane, organelle membrane, and vesicles). There are four common types of lipidation: GPI anchors, N-terminal myristoylation, S-myristoylation, and S-prenylation. Proteolysis Proteolysis is the pathway used to break peptide bonds. Peptide bonds are generally stable under typical physiological conditions, so enzymes called proteases may be needed to assist in breaking polypeptides into smaller components. This is especially important during cell signaling, the removal of misfolded proteins, and programmed cell death (apoptosis). In some cases, proteolysis can be used to regulate the enzymatic activity of zymogens (inactive enzymes that require some bonds to be cleaved in order to be activated). There are four main types of proteases: serine proteases, cysteine proteases, aspartic acid proteases, and zinc metalloproteases.
Chemically modified electrodes Chemically modified electrodes are electrodes that have their surfaces chemically converted to change the electrode's properties, such as its physical, chemical, electrochemical, optical, electrical, and transport characteristics. These electrodes are used for advanced purposes in research and investigation. In biochemistry In biochemistry, chemical modification is the technique of anatomically reacting a protein or nucleic acid with a reagent or reagents. Obtaining laboratory information through chemical modification which can be utilized to: identify which parts of a molecule are exposed to a solvent. determine which residues are important for a particular phenotype, e.g., which residues are important for an enzymatic activity; introduce new groups into a macromolecule; and crosslink macromolecules intra- and intermolecularly. Chemical modification of protein side chains Iodoacetamide Iodoacetic acid PEGylation BisSulfosuccinimidyl suberate 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide N-Ethylmaleimide Methyl methanethiosulfonate S-(1-oxyl-2,2,5,5-tetramethyl-2,5-dihydro-1H-pyrrol-3-yl)methyl methanesulfonothioate References Protein structure
Chemical modification
[ "Chemistry" ]
1,757
[ "Protein structure", "Structural biology" ]
6,662,023
https://en.wikipedia.org/wiki/Formestane
Formestane, formerly sold under the brand name Lentaron among others, is a steroidal, selective aromatase inhibitor which is used in the treatment of estrogen receptor-positive breast cancer in postmenopausal women. The drug is not active orally, and was available only as an intramuscular depot injection. Formestane was not approved by the United States FDA and the injectable form that was used in Europe in the past has been withdrawn from the market. Formestane is an analogue of androstenedione. Formestane is often used to suppress the production of estrogens from anabolic steroids or prohormones. It also acts as a prohormone to 4-hydroxytestosterone, an active steroid which displays weak androgenic activity in addition to acting as a weak aromatase inhibitor. References Enols Anabolic–androgenic steroids Androstanes Aromatase inhibitors Diketones Hormonal antineoplastic drugs Cyclohexenols Enones
Formestane
[ "Chemistry" ]
219
[ "Enols", "Functional groups" ]
6,662,400
https://en.wikipedia.org/wiki/Dienestrol
Dienestrol (, ) (brand names Dienoestrol, Denestrolin, Dienol and many others), also known as dienoestrol (), is a synthetic nonsteroidal estrogen medication of the stilbestrol group which is or was used to treat menopausal symptoms in the United States and Europe. It has been studied for use by rectal administration in the treatment of prostate cancer in men as well. The medication was introduced in the U.S. in 1947 by Schering as Synestrol and in France in 1948 as Cycladiene. Dienestrol is a close analogue of diethylstilbestrol. It has approximately 223% and 404% of the affinity of estradiol at the ERα and ERβ, respectively. Dienestrol diacetate (brand names Faragynol, Gynocyrol, others) also exists and has been used medically. Isomers See also Benzestrol Hexestrol Methestrol Notes References Abandoned drugs 4-Hydroxyphenyl compounds Stilbenoids Synthetic estrogens
Dienestrol
[ "Chemistry" ]
234
[ "Drug safety", "Abandoned drugs" ]
6,664,738
https://en.wikipedia.org/wiki/Laccase
Laccases (EC 1.10.3.2) are multicopper oxidases found in plants, fungi, and bacteria. Laccases oxidize a variety of phenolic substrates, performing one-electron oxidations, leading to crosslinking. For example, laccases play a role in the formation of lignin by promoting the oxidative coupling of monolignols, a family of naturally occurring phenols. Other laccases, such as those produced by the fungus Pleurotus ostreatus, play a role in the degradation of lignin, and can therefore be classed as lignin-modifying enzymes. Other laccases produced by fungi can facilitate the biosynthesis of melanin pigments. Laccases catalyze ring cleavage of aromatic compounds. Laccase was first studied by Hikorokuro Yoshida in 1883 and then by Gabriel Bertrand in 1894 in the sap of the Japanese lacquer tree, where it helps to form lacquer, hence the name laccase. Active site The active site consists of four copper centers, which adopt structures classified as type I, type II, and type III. A tricopper ensemble contains types II and III copper. It is this center that binds O2 and reduces it to water. Each Cu(I,II) couple delivers one electron required for this conversion. The type I copper does not bind O2, but functions solely as an electron transfer site. The type I copper center consists of a single copper atom that is ligated to a minimum of two histidine residues and a single cysteine residue, but in some laccases produced by certain plants and bacteria, the type I copper center contains an additional methionine ligand. The type III copper center consists of two copper atoms that each possess three histidine ligands and are linked to one another via a hydroxide bridging ligand. The final copper center is the type II copper center, which has two histidine ligands and a hydroxide ligand. The type II together with the type III copper center forms the tricopper ensemble, which is where dioxygen reduction takes place. The type III copper can be replaced by Hg(II), which causes a decrease in laccase activity. Cyanide removes all copper from the enzyme, and re-embedding with type I and type II copper has been shown to be impossible. Type III copper, however, can be re-embedded back into the enzyme. A variety of other anions inhibit laccase. Laccases affect the oxygen reduction reaction at low overpotentials. The enzyme has been examined as the cathode in enzymatic biofuel cells. They can be paired with an electron mediator to facilitate electron transfer to a solid electrode wire. Laccases are some of the few oxidoreductases commercialized as industrial catalysts. Activity in wheat dough Laccases have the potential to crosslink food polymers such as proteins and nonstarch polysaccharides in dough. In non-starch polysaccharides, such as arabinoxylans (AX), laccase catalyzes the oxidative gelation of feruloylated arabinoxylans by dimerization of their ferulic esters. These cross-links have been found to greatly increase the maximum resistance and decrease extensibility of the dough. The resistance increased due to the crosslinking of AX via ferulic acid, resulting in a strong AX and gluten network. Although laccase is known to crosslink AX, under the microscope it was found that the laccase also acted on the flour proteins. Oxidation of the ferulic acid on AX to form ferulic acid radicals increased the oxidation rate of free SH groups on the gluten proteins and thus influenced the formation of S-S bonds between gluten polymers. Laccase is also able to oxidize peptide-bound tyrosine, but very poorly.
Because of the increased strength of the dough, it showed irregular bubble formation during proofing. This was a result of the gas (carbon dioxide) becoming trapped within the crust so that it could not diffuse out (as it normally would), causing abnormal pore size. Resistance and extensibility were a function of dosage, but at very high dosage the dough showed contradictory results: maximum resistance was reduced drastically. The high dosage may have caused extreme changes in the structure of the dough, resulting in incomplete gluten formation. Another reason is that it may mimic overmixing, causing negative effects on gluten structure. Laccase-treated dough had low stability over prolonged storage. The dough became softer, and this is related to laccase mediation. The laccase-mediated radical mechanism creates secondary reactions of FA-derived radicals that result in breaking of covalent linkages in AX and weakening of the AX gel. Potential applications Due to the ability of laccase to catalyze oxidation reactions of a range of substrates, the use of laccase as a biocatalyst in different industrial applications has been investigated. Laccases have been applied in the production of wines. Laccase is produced by a number of fungal species that can infect grapes, most notably Botrytis cinerea Pers. (1794). Laccase is active at wine pH and its activity is not readily suppressed by sulfur dioxide. It has been noted to cause oxidative browning in white wines and loss of colour in red wines. It can also degrade a number of key phenolic compounds critical to wine quality. Aside from wine, laccases are of interest in the food industry, including food packaging. The ability of laccases to modify complex organic molecules has attracted attention in the area of organic synthesis. Laccases have also been studied as catalysts in bioremediation to degrade emerging pollutants and pharmaceuticals. See also References Citations General sources External links BRENDA Copper enzymes EC 1.10.3 Natural phenols metabolism Proteins
Laccase
[ "Chemistry" ]
1,245
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
6,665,878
https://en.wikipedia.org/wiki/Phraselator
The Phraselator is a weatherproof handheld language translation device developed by Applied Data Systems and VoxTec, a former division of the military contractor Marine Acoustics, located in Annapolis, Maryland, USA. It was designed to serve as a handheld computer device that translates English into one of 40 different languages. The device The Phraselator is a small speech translation PDA-sized device designed to aid in interpretation. The device does not produce synthesized speech like that utilized by Stephen Hawking; instead, it plays pre-recorded foreign language MP3 files. Users can select the phrase they wish to convey from an English list on the screen or speak into the device. It then uses speech recognition technology called DynaSpeak, developed by SRI International, to play the proper sound file. The accuracy of the speech recognition software is over 70 percent according to software developer Jack Buchanan. The device can also record replies for translation later. Pre-recorded phrases are stored on Secure Digital flash memory cards. A 128 MB card can hold up to 12,000 phrases in four or five languages. Users can download phrase modules from the official website, which contained over 300,000 phrases as of March 2005. Users can also construct their own custom phrase modules. Earlier devices were known to have run on an SA-1110 Strong Arm 206 MHz CPU with 32MB SDRAM and 32MB onboard Flash RAM. A newer model, the P2, was released in 2004 and developed according to feedback from U.S. soldiers. It translates one way from English to approximately 60 other languages. It has a directional microphone, a larger library of phrases and a longer battery life. The 2004 release was created by and utilizes a computer board manufactured by InHand Electronics, Inc. In the future, the device will be able to display pictures so users can ask questions such as "Have you seen this person?" Developer Ace Sarich notes that the device is inferior to human interpreter. Conclusions derived from a Nepal field test conducted by U.S. and Nepal based NGO Himalayan Aid in 2004 seemed to confirm Sarich's comparisons: The very concept of using a machine as a communication point between individuals seemed to actually encourage a more limited form of interaction between tester and respondent. Usually, when limited language skills are present between parties, the genuine struggle and desire to communicate acts as a display of good will – we openly display our weakness in this regard – and the result is a more relaxed and human encounter. This was not necessarily present with the Phraselator as all parties abandoned learning about each other and instead focused on learning how to work with the device. As a tool for bridging any cultural differences or communicating effectively at any length, the Phraselator would not be recommended. This device, at least in the form tested, would best be used in large-scale operations where there is no time for language training and there is a need to communicate fixed ideas, quickly, over the greatest distance by employing large amounts of unskilled users. Large humanitarian or natural disasters in remote areas of third-world countries might be an effective example. Origin The original idea for the device came from Lee Morin, a Navy doctor in Operation Desert Storm. To communicate with patients, he played Arabic audio files from his laptop. He informed Ace Sarich, the vice president of VoxTec, about the idea. 
VoxTec won a DARPA Small Business Innovation Research grant in early 2001 to develop a military-grade handheld phrase translator. During its development, the Phraselator was tested and evaluated by scientists from the Army Research Laboratory. The device was first field tested in Afghanistan in 2001. By 2002, about 500 Phraselators were built for soldiers around the world with another 250 ordered by the U.S. Special Forces. The device cost $2000 to develop and could convert spoken English into one of 200,000 recorded commands and questions in 30 languages. However, the device could only translate one-way. At the time, the only existing two-way voice translator that could convert speech back and forth between languages was the Audio Voice Translation Guide System, or TONGUES, which was developed by Carnegie Mellon University for Lockheed Martin. As part of a DARPA program known as the Spoken Language Communication and Translation System for Tactical Use, SRI International has further developed two-way translation software for use in Iraq called IraqComm in 2006 which contains a vocabulary of 40,000 English words and 50,000 words in Iraqi Arabic. Notable users The handheld translator was recently used by U.S. troops while providing relief to tsunami victims in early 2005. About 500 prototypes of the device were provided to U.S. military forces in Operation Enduring Freedom. Units loaded with Haitian dialects have been provided to U.S. troops in Haiti. Army military police have used it in Kandahar to communicate with POWs. In late 2004, the U.S. Navy began to augment some ships with a version of the device attached to large speakers in order to broadcast clear voice instructions up to away. Corrections officers and law enforcement in Oneida County, New York, have tested the device. Hospital emergency rooms and health departments have also evaluated it. Several Native American tribes such as the Choctaw Nation, the Ponca, and the Comanche Nation have also used the device to preserve their dying languages. Various law enforcement agencies, such as the Los Angeles Police Department, also use the phraselator in their patrol cars. Awards In March 2004, DARPA director Dr. Tony Tether presented the Small Business Innovative Research Award to the VoxTec division of Marine Acoustics at DARPATech 2004 in Anaheim, CA. The device was recently listed as one of "Ten Emerging Technologies That Will Change Your World" in MIT's Technology Review. Pop culture Software developer Jack Buchanan believes that building a device similar to the fictional universal translator seen in Star Trek would be harder than building the Enterprise. The device was mentioned in a list of "Top 10 Star Trek Tech" on Space.com. References External links Phraselator official site Voxtec official site Marine Acoustics official site SRI International official site IraqComm official site SRI DynaSpeak web page DARPA-Developed Device Bridges Language Divides Helping Troops in Iraq & Afghanistan Connect with Locals InHand Electronics P2 case study Machine translation Computer-assisted translation
Phraselator
[ "Technology" ]
1,274
[ "Machine translation", "Natural language and computing", "Computer-assisted translation" ]
2,771,364
https://en.wikipedia.org/wiki/Giuga%20number
In number theory, a Giuga number is a composite number n such that for each of its distinct prime factors p we have p | (n/p − 1), or equivalently such that for each of its distinct prime factors p we have p² | (n − p). The Giuga numbers are named after the mathematician Giuseppe Giuga, and relate to his conjecture on primality. Definitions Alternative definition for a Giuga number due to Takashi Agoh is: a composite number n is a Giuga number if and only if the congruence n·B_φ(n) ≡ −1 (mod n) holds true, where B_φ(n) is a Bernoulli number and φ is Euler's totient function. An equivalent formulation due to Giuseppe Giuga is: a composite number n is a Giuga number if and only if the congruence ∑_{i=1}^{n−1} i^φ(n) ≡ −1 (mod n) holds, and if and only if ∑_{p|n} 1/p − ∏_{p|n} 1/p is a natural number. All known Giuga numbers n in fact satisfy the stronger condition ∑_{p|n} 1/p − ∏_{p|n} 1/p = 1. Examples The sequence of Giuga numbers begins 30, 858, 1722, 66198, 2214408306, 24423128562, 432749205173838, … . For example, 30 is a Giuga number since its prime factors are 2, 3 and 5, and we can verify that 30/2 - 1 = 14, which is divisible by 2, 30/3 - 1 = 9, which is 3 squared, and 30/5 - 1 = 5, the third prime factor itself. Properties The prime factors of a Giuga number must be distinct. If p² divides n, then it follows that n/p is itself divisible by p. Hence, n/p − 1 would not be divisible by p, and thus n would not be a Giuga number. Thus, only square-free integers can be Giuga numbers. For example, the factors of 60 are 2, 2, 3 and 5, and 60/2 - 1 = 29, which is not divisible by 2. Thus, 60 is not a Giuga number. This rules out squares of primes, but semiprimes cannot be Giuga numbers either. For if n = pq, with p < q primes, then n/q − 1 = p − 1 < q, so q will not divide n/q − 1, and thus n is not a Giuga number. All known Giuga numbers are even. If an odd Giuga number exists, it must be the product of at least 14 primes. It is not known if there are infinitely many Giuga numbers. It has been conjectured by Paolo P. Lava (2009) that Giuga numbers are the solutions of the differential equation n' = n+1, where n' is the arithmetic derivative of n. (For square-free numbers n = p₁p₂⋯p_k, n' = ∑_{p|n} n/p, so n' = n+1 is just the last equation in the above section Definitions, multiplied by n.) José Mª Grau and Antonio Oller-Marcén have shown that an integer n is a Giuga number if and only if it satisfies n' = a n + 1 for some integer a > 0, where n' is the arithmetic derivative of n. (Again, n' = a n + 1 is identical to the third equation in Definitions, multiplied by n.) See also Carmichael number Primary pseudoperfect number Znám's problem References Eponymous numbers in mathematics Integer sequences Unsolved problems in number theory
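The defining divisibility condition translates directly into a short program. The sketch below is illustrative (the helper names and brute-force trial-division factoring are not from the source): it checks p | (n/p − 1) for every distinct prime factor of a composite n and recovers the first terms of the sequence.

```python
# Minimal sketch: test whether n is a Giuga number using the condition
# p divides (n/p - 1) for every distinct prime factor p of a composite n.

def distinct_prime_factors(n: int) -> list[int]:
    """Return the distinct prime factors of n by trial division."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            factors.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def is_giuga(n: int) -> bool:
    primes = distinct_prime_factors(n)
    if len(primes) < 2:          # excludes primes and prime powers
        return False
    return all((n // p - 1) % p == 0 for p in primes)

print([n for n in range(2, 70000) if is_giuga(n)])   # expected: [30, 858, 1722, 66198]
```

Note that the divisibility test itself rejects any candidate with a repeated prime factor, mirroring the square-free property discussed above.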
Giuga number
[ "Mathematics" ]
644
[ "Sequences and series", "Unsolved problems in mathematics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Unsolved problems in number theory", "Combinatorics", "Mathematical problems", "Numbers", "Number theory" ]
2,772,921
https://en.wikipedia.org/wiki/G%20Scorpii
G Scorpii (abbreviated G Sco), also named Fuyue, is a giant star in the constellation of Scorpius. It has an apparent magnitude of +3.19. It is approximately 126 light-years from the Sun. Nomenclature G Scorpii is the star's Bayer designation. It was formerly situated in the constellation of Telescopium where it was designated γ Telescopii, Latinised to Gamma Telescopii. It was resited in Scorpius and redesignated G Scorpii by Benjamin Apthorp Gould. On 30 June 2017 it was included in the List of IAU-approved Star Names. G Scorpii bore the traditional name Fuyue () in ancient China. Fu Yue was a former slave that became a high-ranking minister to Shang dynasty ruler Wu Ding. Properties G Scorpii is an orange K-type giant. The measured angular diameter is . At the estimated distance of this system, this yields a physical size of about 16 times the radius of the Sun. Calculations based on its physical properties give a diameter of about . With an effective surface temperature of , it has a bolometric luminosity of . Evolutionary models show that G Scorpii has probably left the red giant branch and is now fusing helium in its core. This makes it a red clump star, at the cool end of the horizontal branch. Just 5 arcminutes to the east is the globular cluster NGC 6441. At magnitude 3.2, G Scorpii is around 40 times brighter than the entire globular cluster. References Scorpii, G Durchmusterung objects 6630 161892 087261 K-type giants Scorpius
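The physical size quoted above follows from the angular diameter and the distance by simple geometry. The sketch below assumes an angular diameter of about 4 milliarcseconds purely for illustration (the measured value is not reproduced here) and combines it with the roughly 126 light-year distance stated in the text to recover a radius on the order of 16 solar radii.

```python
# Minimal sketch: physical radius from angular diameter and distance.
# Assumed illustrative input: angular diameter ~4 mas (not a value from the text).
import math

LY_IN_KM = 9.4607e12       # kilometres per light-year
R_SUN_KM = 6.957e5         # solar radius in kilometres
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

distance_km = 126 * LY_IN_KM          # ~126 light-years, as stated in the text
ang_diam_rad = 4.0 * MAS_TO_RAD       # assumed ~4 milliarcsecond angular diameter

radius_km = 0.5 * ang_diam_rad * distance_km     # small-angle approximation
print(f"Radius ~ {radius_km / R_SUN_KM:.1f} solar radii")   # ~16-17 R_sun with these inputs
```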
G Scorpii
[ "Astronomy" ]
373
[ "Scorpius", "Constellations" ]
2,774,201
https://en.wikipedia.org/wiki/Boiler%20explosion
A boiler explosion is a catastrophic failure of a boiler. There are two types of boiler explosions. One type is a failure of the pressure parts of the steam and water sides. There can be many different causes, such as failure of the safety valve, corrosion of critical parts of the boiler, or low water level. Corrosion along the edges of lap joints was a common cause of early boiler explosions. In steam locomotive boilers, as knowledge was gained by trial and error in early days, the explosive situations and consequent damage due to explosions were inevitable. However, improved design and maintenance markedly reduced the number of boiler explosions by the end of the 19th century. Further improvements continued in the 20th century. On land-based boilers, explosions of the pressure systems happened regularly in stationary steam boilers in the Victorian era, but are now very rare because of the various protections provided, and because of regular inspections compelled by governmental and industry requirements. The second kind is a fuel/air explosion in the furnace, which would more properly be termed a firebox explosion. Firebox explosions in solid-fuel-fired boilers are rare, but firebox explosions in gas or oil-fired boilers are still a potential hazard. Principle Boiler steam explosions Many shell-type boilers carry a large bath of liquid water which is heated to a higher temperature and pressure (enthalpy) than boiling water would be at atmospheric pressure. During normal operation, the liquid water remains in the bottom of the boiler due to gravity, steam bubbles rise through the liquid water and collect at the top for use until saturation pressure is reached, then the boiling stops. If some pressure is released, boiling begins again, and so on. If steam is released normally, say by opening a throttle valve, the bubbling action of the water remains moderate and relatively dry steam can be drawn from the highest point in the vessel. If steam is released more quickly, the more vigorous boiling action that results can throw a fine spray of droplets up as "wet steam" which can cause damage to piping, engines, turbines and other equipment downstream. If a large crack or other opening in the boiler vessel allows the internal pressure to drop very suddenly, the heat energy remaining in the water will cause even more of the liquid to flash into steam bubbles, which then rapidly displace the remaining liquid. The potential energy of the escaping steam and water are now transformed into work, just as they would have done in an engine; with enough force to peel back the material around the break, severely distorting the shape of the plate which was formerly held in place by stays, or self-supported by its original cylindrical shape. The rapid release of steam and water can provide a very potent blast, and cause great damage to surrounding property or personnel. A failure of this type qualifies as a boiling liquid expanding vapor explosion (BLEVE). The rapidly expanding steam bubbles can also perform work by throwing large "slugs" of water inside the boiler in the direction of the opening, and at astonishing velocities. A fast-moving mass of water carries a great deal of kinetic energy, and in collision with the shell of the boiler results in a violent destructive effect. This can greatly enlarge the original rupture, or tear the shell in two. Many plumbers, firefighters, and steamfitters are aware of this phenomenon, which is called "water hammer". 
A several-ounce "slug" of water passing through a steam line at high velocity and striking a 90-degree elbow can instantly fracture a fitting that is otherwise capable of handling several times the normal static pressure. It can then be understood that a few hundred, or even a few thousand, pounds of water moving at the same velocity inside a boiler shell can easily blow out a tube sheet, collapse a firebox, or even toss the entire boiler a surprising distance through reaction as the water exits the boiler, like the recoil of a heavy cannon firing a ball. Several accounts of the SL-1 experimental reactor accident vividly describe the incredibly powerful effect of water hammer on a pressure vessel. A steam locomotive operating at would have a temperature of about , and a specific enthalpy of . Since standard pressure saturated water has a specific enthalpy of just , the difference between the two specific enthalpies, , is the total energy expended in the explosion. So in the case of a large locomotive which can hold as much as of water at a high pressure and temperature state, this explosion would have a theoretical energy release equal to about of TNT. Firebox explosions Firebox explosions typically occur after a burner flameout. Oil fumes, natural gas, propane, coal, or any other fuel can build up inside the combustion chamber. This is of particular concern when the vessel is hot; the fuels will rapidly volatilize due to the temperature. Once the lower explosive limit (LEL) is reached, any source of ignition will cause an explosion of the vapors. A fuel explosion within the confines of the firebox may damage the pressurized boiler tubes and interior shell, potentially triggering structural failure, steam or water leakage, and/or a secondary boiler shell failure and steam explosion. A common form of minor firebox "explosion" is known as "drumming" and can occur with any type of fuel. Instead of the normal "roar" of the fire, a rhythmic series of "thumps" and flashes of fire below the grate and through the firedoor indicates that the combustion of the fuel is proceeding through a rapid series of detonations, caused by an inappropriate air/fuel mixture with regard to the level of draft available. This usually causes no damage in locomotive-type boilers, but can cause cracks in masonry boiler settings if allowed to continue. Grooving The plates of early locomotive boilers were joined by simple overlapping joints. This practice was satisfactory for the annular joints, running around the boiler, but in longitudinal joints, along the length of the boiler, the overlap of the plates diverted the boiler cross-section from its ideal circular shape. Under pressure the boiler strained to reach, as nearly as possible, the circular cross-section. Because the double-thickness overlap was stronger than the surrounding metal, the repeated bending and release caused by the variations in boiler pressure caused internal cracks, or grooves (deep pitting), along the length of the joint. The cracks offered a starting point for internal corrosion, which could hasten failure. It was eventually found that this internal corrosion could be reduced by using plates of sufficient size so that no joints were situated below the water level. Eventually the simple lap seam was replaced by the single or double butt-strap seams, which do not suffer from this defect. 
Due to the constant expansion and contraction of the firebox, a similar form of "stress corrosion" can take place at the ends of staybolts where they enter the firebox plates, and is accelerated by poor water quality. Often referred to as "necking", this type of corrosion can reduce the strength of the staybolts until they are incapable of supporting the firebox at normal pressure. Grooving (deep, localized pitting) also occurs near the waterline, particularly in boilers that are fed with water that has not been de-aerated or treated with oxygen scavenging agents. All "natural" sources of water contain dissolved air, which is released as a gas when the water is heated. The air (which contains oxygen) collects in a layer near the surface of the water and greatly accelerates corrosion of the boiler plates in that area. Firebox The intricate shape of a locomotive firebox, whether made of soft copper or of steel, can only resist the steam pressure on its internal walls if these are supported by stays attached to internal girders and the outer walls. They are liable to fail through fatigue (because the inner and outer walls expand at different rates under the heat of the fire), from corrosion, or from wasting as the heads of the stays exposed to the fire are burned away. If the stays fail, the firebox will explode inwards. Regular visual inspection, internally and externally, is employed to prevent this. Even a well-maintained firebox will fail explosively if the water level in the boiler is allowed to fall far enough to leave the top plate of the firebox (crown sheet) uncovered. This can occur when a locomotive crosses the summit of a hill, as the water flows to the front part of the boiler and can expose the firebox crown sheet. The majority of locomotive explosions are firebox explosions caused by such crown sheet uncovering. Causes There are many causes for boiler explosions such as poor water treatment causing scaling and overheating of the plates, low water level, a stuck safety valve, or even a furnace explosion that in turn, if severe enough, can cause a boiler explosion. Poor operator training resulting in neglect or other mishandling of the boiler has been a frequent cause of explosions since the beginning of the industrial revolution. In the late 19th and early 20th century, the inspection records of various sources in the U.S., UK, and Europe showed that the most frequent cause of boiler explosions was weakening of boilers through simple rusting, by anywhere from two to five times more than all other causes. Before materials science, inspection standards, and quality control caught up with the rapidly growing boiler manufacturing industry, a significant number of boiler explosions were directly traceable to poor design, workmanship, and undetected flaws in poor quality materials. The alarming frequency of boiler failures in the U.S. due to defects in materials and design was attracting the attention of international engineering standards organizations, such as the ASME, which established its first Boiler Testing Code in 1884. The boiler explosion that caused the Grover Shoe Factory disaster in Brockton, Massachusetts, on 10 March 1905, resulted in 58 deaths and 150 injuries, and inspired the state of Massachusetts to publish its first boiler laws in 1908. 
Several written sources provide concise descriptions of the causes of boiler explosions. Early investigations into causes The stationary steam engines used to power machinery first came to prominence during the Industrial Revolution, and in the early days there were many boiler explosions from a variety of causes. One of the first investigators of the problem was William Fairbairn, who helped establish the first insurance company dealing with the losses such explosions could cause. He also established experimentally that the hoop stress in a cylindrical pressure vessel like a boiler was twice the longitudinal stress. Such investigations helped him and others explain the importance of stress concentrations in weakening boilers. While deterioration and mishandling are probably the most common causes of boiler explosions, the actual mechanism of a catastrophic boiler failure was not well documented until extensive experimentation was undertaken by U.S. boiler inspectors in the early 20th century. Several different attempts were made to cause a boiler to explode by various means, but one of the most interesting experiments demonstrated that in certain circumstances, if a sudden opening in the boiler allowed steam to escape too rapidly, water hammer could cause destruction of the entire pressure vessel. But the highly destructive mechanism of water hammer in boiler explosions was understood long before then, as D. K. Clark wrote on 10 February 1860, in a letter to the editors of Mechanics Magazine. Boiler explosions are common in sinking ships once the hot boiler touches cold sea water, as the sudden cooling of the hot metal causes it to crack; for instance, when the SS Benlomond was torpedoed by a U-boat, the torpedoes and resulting boiler explosion caused the ship to go down in two minutes, leaving Poon Lim as the only survivor in a complement of 53 crew. In locomotives Boiler explosions are a particular danger in (locomotive-type) fire tube boilers because the top of the firebox (crown sheet) must be covered with some amount of water at all times, or the heat of the fire can weaken the crown sheet or crown stays to the point of failure, even at normal working pressure. This was the cause of the Gettysburg Railroad firebox explosion near Gardners, Pennsylvania, in 1995, where low water allowed the front of the crown sheet to overheat until the regular crown stays pulled through the sheet, releasing a great deal of steam and water under full boiler pressure into the firebox. The crown sheet design included several alternating rows of button-head safety stays, which limited the failure of the crown sheet to the first five or six rows of conventional stays, preventing a collapse of the entire crown sheet. This type of failure is not limited to railway engines, as locomotive-type boilers have been used for traction engines, portable engines, skid engines used for mining or logging, stationary engines for sawmills and factories, for heating, and as package boilers providing steam for other processes. In all applications, maintaining the proper water level is essential for safe operation. Hewison (1983) gives a comprehensive account of British boiler explosions, listing 137 between 1815 and 1962. It is noteworthy that 122 of these were in the 19th century and only 15 in the 20th century. Boiler explosions generally fell into two categories. 
The first is the breakage of the boiler barrel itself, through weakness/damage or excessive internal pressure, resulting in sudden discharge of steam over a wide area. Stress corrosion cracking at the lap joints was a common cause of early boiler explosions, probably caused by caustic embrittlement. The water used in boilers was not often closely controlled, and if acidic, could corrode the wrought iron boiler plates. Galvanic corrosion was an additional problem where copper and iron were in contact. Boiler plates have been thrown up to a quarter of a mile (Hewison, Rolt). The second type is the collapse of the firebox under steam pressure from the adjoining boiler, releasing flames and hot gases into the cab. Improved design and maintenance almost totally eliminated the first type, but the second type is always possible if the driver and fireman do not maintain the water level in the boiler. Boiler barrels could explode if the internal pressure became too high. To prevent this, safety valves were installed to release the pressure at a set level. Early examples were spring-loaded, but John Ramsbottom invented a tamper-proof valve which was universally adopted. The other common cause of explosions was internal corrosion which weakened the boiler barrel so that it could not withstand normal operating pressure. In particular, grooves could occur along horizontal seams (lap joints) below water level. Dozens of explosions resulted, but were eliminated by 1900 by the adoption of butt joints, plus improved maintenance schedules and regular hydraulic testing. Fireboxes were generally made of copper, though later locomotives had steel fireboxes. They were held to the outer part of the boiler by stays (numerous small supports). Parts of the firebox in contact with full steam pressure have to be kept covered with water, to stop them overheating and weakening. The usual cause of firebox collapses is that the boiler water level falls too low and the top of the firebox (crown sheet) becomes uncovered and overheats. This occurs if the fireman has failed to maintain water level or the level indicator (gauge glass) is faulty. A less common reason is breakage of large numbers of stays, due to corrosion or unsuitable material. Throughout the 20th century, two boiler barrel failures and thirteen firebox collapses occurred in the UK. The boiler barrel failures occurred at Cardiff in 1909 and Buxton in 1921; both were caused by misassembly of the safety valves causing the boilers to exceed their design pressures. Of the 13 firebox collapses, four were due to broken stays, one to scale buildup on the firebox, and the rest were due to low water level. Steamboat boilers The Pennsylvania was a side wheeler steamboat which suffered a boiler explosion in the Mississippi River and sank at Ship Island near Memphis, Tennessee, on 13 June 1858. Of the 450 passengers on board more than 250 died, including Henry Clemens, the younger brother of the author Mark Twain. , a small steamboat used to transfer passengers and cargo to and from the large coastal steamships that stopped in San Pedro Harbor in the early 1860s, suffered disaster when its boiler exploded violently in San Pedro Bay, the port of Los Angeles, near Wilmington, California, on 27 April 1863, killing twenty-six people and injuring many others of the fifty-three or more passengers on board. The steamboat Sultana was destroyed in an explosion on 27 April 1865, resulting in the greatest maritime disaster in United States history. 
An estimated 1,549 passengers were killed when three of the ship's four boilers exploded and the Sultana burned and sank not far from Memphis, Tennessee. The cause was traced to a poorly executed repair to the shell of one boiler; the patch failed, and debris from that boiler ruptured two more. Another US Civil War steamboat explosion was the steamer Eclipse on 27 January 1865, which was carrying members of the 9th Indiana Artillery. One official record reports 10 killed and 68 injured; a later report mentions that 27 were killed and 78 wounded. Fox's Regimental Losses reports 29 killed. The boiler of Canada's PS Waubuno may have exploded on the ship's final voyage in 1879, though the cause of the sinking remains unknown. An explosion could have occurred due to negligent upkeep or to contact with the cold water of Georgian Bay while foundering in a storm. Nuclear reactor explosions A steam explosion can occur in any kind of a water heater, where a sufficient amount of energy is delivered and the steam created exceeds the strength of the vessel. When the heat delivery is sufficiently rapid, a localized superheating can occur, resulting in a water hammer destroying the vessel. The SL-1 nuclear reactor accident is an example of a superheated burst of steam. However, in the SL-1 example the pressure was released by the forced ejection of control rods which allowed the steam to be vented. The reactor did not explode, nor did the vessel rupture. Modern boilers Modern boilers are designed with redundant pumps, valves, water level monitors, fuel cutoffs, automated controls, and pressure relief valves. In addition, the construction must adhere to strict engineering guidelines set by the relevant authorities. The NBIC, ASME, and others attempt to ensure safe boiler designs by publishing detailed standards. The result is a boiler unit which is less prone to catastrophic accidents. Also improving safety is the increasing use of "package boilers". These are boilers which are built at a factory then shipped out as a complete unit to the job site. These typically have better quality and fewer issues than boilers which are site assembled tube-by-tube. A package boiler only needs the final connections to be made (electrical, breaching, condensate lines, etc.) to complete the installation. Key safety developments Notable accidents See also Fusible plug John Hick Lists of rail accidents Notes Bibliography References Further reading Bartrip, P. W. J. "The state and the steam boiler in Britain". International review of social history 25, 1980, 77–105. Government intervention and the role of interest groups in 19th century Britain in regard to stationary boilers. Winship, I. R. "The decline in locomotive boiler explosions in Britain 1850–1900". Transactions – Newcomen Society 60, 1988–89, 73–94. Technical and other factors that reduced the incidence of explosions. External links Boilers Explosion protection Steam power
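The enthalpy-difference estimate described under "Boiler steam explosions" above can be illustrated numerically. The sketch below is only an illustration with assumed round figures: the boiler pressure, steam-table enthalpies, and water mass are rough textbook values chosen here, not data taken from this article.

```python
# Rough estimate of the energy released when a boiler's contents flash to steam
# after a sudden loss of pressure. All figures below are assumed, illustrative
# values (roughly those of a large locomotive boiler), not data from the article.

h_saturated_liquid = 920.0   # kJ/kg, enthalpy of saturated water at ~21 bar (assumed)
h_atmospheric      = 419.0   # kJ/kg, enthalpy of saturated water at 1 atm, 100 deg C
water_mass         = 10000.0 # kg of water carried in the boiler (assumed)
tnt_equivalent     = 4184.0  # kJ released per kg of TNT (conventional figure)

# Energy available to drive the explosion: the enthalpy the water holds above
# what it could retain as liquid at atmospheric pressure.
energy_kj = (h_saturated_liquid - h_atmospheric) * water_mass
print(f"Stored energy: {energy_kj / 1e6:.1f} GJ "
      f"(~{energy_kj / tnt_equivalent / 1000:.1f} tonnes of TNT)")
```

With these assumed figures the stored energy is on the order of 5 GJ, roughly a tonne of TNT, which is consistent with the scale of destruction described above.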
Boiler explosion
[ "Physics", "Chemistry", "Engineering" ]
3,933
[ "Explosion protection", "Physical quantities", "Steam power", "Combustion engineering", "Power (physics)", "Explosions", "Boilers", "Pressure vessels" ]
2,775,268
https://en.wikipedia.org/wiki/Refinement%20calculus
The refinement calculus is a formalized approach to stepwise refinement for program construction. The required behaviour of the final executable program is specified as an abstract and perhaps non-executable "program", which is then refined by a series of correctness-preserving transformations into an efficiently executable program. Proponents include Ralph-Johan Back, who originated the approach in his 1978 PhD thesis On the Correctness of Refinement Steps in Program Development, and Carroll Morgan, especially with his book Programming from Specifications (Prentice Hall, 2nd edition, 1994, ). In the latter case, the motivation was to link Abrial's specification notation Z, via a rigorous relation of behaviour-preserving program refinement, to an executable programming notation based on Dijkstra's language of guarded commands. Behaviour-preserving in this case means that any Hoare triple satisfied by a program should also be satisfied by any refinement of it, which notion leads directly to specification statements as pre- and postconditions standing, on their own, for any program that could soundly be placed between them. References Formal methods Formal specification languages Logical calculi
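As a small illustration of a correctness-preserving refinement step (an example constructed here, not quoted from the works cited above), a Morgan-style specification statement w:[pre, post], which stands for any program that, started in a state satisfying pre, terminates in a state satisfying post, can be refined into executable guarded commands:

```latex
% Illustrative refinement (Morgan-style notation): computing the maximum of a and b.
% The alternation on the right refines the specification on the left because its
% guards are exhaustive and each branch establishes the postcondition w = max(a, b).
w : [\, \mathsf{true},\; w = \max(a, b) \,]
\;\sqsubseteq\;
\mathbf{if}\ a \ge b \rightarrow w := a
\;[\;]\; b \ge a \rightarrow w := b\ \ \mathbf{fi}
```

Because every Hoare triple satisfied by the specification statement is also satisfied by the alternation, the replacement is behaviour-preserving in the sense described above.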
Refinement calculus
[ "Mathematics", "Engineering" ]
241
[ "Software engineering", "Mathematical logic", "Logical calculi", "Formal methods" ]
30,208,106
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Gallai%20theorem
The Erdős–Gallai theorem is a result in graph theory, a branch of combinatorial mathematics. It provides one of two known approaches to solving the graph realization problem, i.e. it gives a necessary and sufficient condition for a finite sequence of natural numbers to be the degree sequence of a simple graph. A sequence obeying these conditions is called "graphic". The theorem was published in 1960 by Paul Erdős and Tibor Gallai, after whom it is named. Statement A sequence of non-negative integers can be represented as the degree sequence of a finite simple graph on n vertices if and only if is even and holds for every in . Proofs It is not difficult to show that the conditions of the Erdős–Gallai theorem are necessary for a sequence of numbers to be graphic. The requirement that the sum of the degrees be even is the handshaking lemma, already used by Euler in his 1736 paper on the bridges of Königsberg. The inequality between the sum of the largest degrees and the sum of the remaining degrees can be established by double counting: the left side gives the numbers of edge-vertex adjacencies among the highest-degree vertices, each such adjacency must either be on an edge with one or two high-degree endpoints, the term on the right gives the maximum possible number of edge-vertex adjacencies in which both endpoints have high degree, and the remaining term on the right upper bounds the number of edges that have exactly one high degree endpoint. Thus, the more difficult part of the proof is to show that, for any sequence of numbers obeying these conditions, there exists a graph for which it is the degree sequence. The original proof of was long and involved. cites a shorter proof by Claude Berge, based on ideas of network flow. Choudum instead provides a proof by mathematical induction on the sum of the degrees: he lets be the first index of a number in the sequence for which (or the penultimate number if all are equal), uses a case analysis to show that the sequence formed by subtracting one from and from the last number in the sequence (and removing the last number if this subtraction causes it to become zero) is again graphic, and forms a graph representing the original sequence by adding an edge between the two positions from which one was subtracted. consider a sequence of "subrealizations", graphs whose degrees are upper bounded by the given degree sequence. They show that, if G is a subrealization, and i is the smallest index of a vertex in G whose degree is not equal to di, then G may be modified in a way that produces another subrealization, increasing the degree of vertex i without changing the degrees of the earlier vertices in the sequence. Repeated steps of this kind must eventually reach a realization of the given sequence, proving the theorem. Relation to integer partitions describe close connections between the Erdős–Gallai theorem and the theory of integer partitions. Let ; then the sorted integer sequences summing to may be interpreted as the partitions of . Under majorization of their prefix sums, the partitions form a lattice, in which the minimal change between an individual partition and another partition lower in the partition order is to subtract one from one of the numbers and add it to a number that is smaller by at least two ( could be zero). As Aigner and Triesch show, this operation preserves the property of being graphic, so to prove the Erdős–Gallai theorem it suffices to characterize the graphic sequences that are maximal in this majorization order. 
They provide such a characterization, in terms of the Ferrers diagrams of the corresponding partitions, and show that it is equivalent to the Erdős–Gallai theorem. Graphic sequences for other types of graph Similar theorems describe the degree sequences of simple directed graphs, simple directed graphs with loops, and simple bipartite graphs . The first problem is characterized by the Fulkerson–Chen–Anstee theorem. The latter two cases, which are equivalent, are characterized by the Gale–Ryser theorem. Stronger version proved that it suffices to consider the th inequality such that with and for . restrict the set of inequalities for graphs in an opposite thrust. If an even-summed positive sequence has no repeated entries other than the maximum and the minimum (and the length exceeds the largest entry), then it suffices to check only the th inequality, where . Generalization A finite sequences of nonnegative integers with is graphic if is even and there exists a sequence that is graphic and majorizes . This result was given by . reinvented it and gave a more direct proof. See also Havel–Hakimi algorithm References . . Gallai theorem Theorems in graph theory
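The statement above translates directly into a test for graphic sequences: sort the sequence in non-increasing order, require an even sum, and check the Erdős–Gallai inequality for every prefix length k. The function below is an illustrative sketch (the name and structure are arbitrary choices made here, not one of the algorithms from the references).

```python
def is_graphic(degrees):
    """Erdős–Gallai test: return True if the non-negative integer sequence
    can be realized as the degree sequence of a finite simple graph."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    if any(x < 0 for x in d) or sum(d) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])                                   # sum of the k largest degrees
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])  # Erdős–Gallai bound
        if lhs > rhs:
            return False
    return True

# Example: (3, 3, 3, 3) is the degree sequence of K4, but (3, 3, 3, 1) is not graphic.
print(is_graphic([3, 3, 3, 3]))  # True
print(is_graphic([3, 3, 3, 1]))  # False
```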
Erdős–Gallai theorem
[ "Mathematics" ]
988
[ "Theorems in graph theory", "Theorems in discrete mathematics" ]
30,220,726
https://en.wikipedia.org/wiki/Quasisymmetric%20map
In mathematics, a quasisymmetric homeomorphism between metric spaces is a map that generalizes bi-Lipschitz maps. While bi-Lipschitz maps shrink or expand the diameter of a set by no more than a multiplicative factor, quasisymmetric maps satisfy the weaker geometric property that they preserve the relative sizes of sets: if two sets A and B have diameters t and are no more than distance t apart, then the ratio of their sizes changes by no more than a multiplicative constant. These maps are also related to quasiconformal maps, since in many circumstances they are in fact equivalent. Definition Let (X, dX) and (Y, dY) be two metric spaces. A homeomorphism f:X → Y is said to be η-quasisymmetric if there is an increasing function η : [0, ∞) → [0, ∞) such that for any triple x, y, z of distinct points in X, we have Basic properties Inverses are quasisymmetric If f : X → Y is an invertible η-quasisymmetric map as above, then its inverse map is -quasisymmetric, where Quasisymmetric maps preserve relative sizes of sets If and are subsets of and is a subset of , then Examples Weakly quasisymmetric maps A map f:X→Y is said to be H-weakly-quasisymmetric for some if for all triples of distinct points in , then Not all weakly quasisymmetric maps are quasisymmetric. However, if is connected and and are doubling, then all weakly quasisymmetric maps are quasisymmetric. The appeal of this result is that proving weak-quasisymmetry is much easier than proving quasisymmetry directly, and in many natural settings the two notions are equivalent. δ-monotone maps A monotone map f:H → H on a Hilbert space H is δ-monotone if for all x and y in H, To grasp what this condition means geometrically, suppose f(0) = 0 and consider the above estimate when y = 0. Then it implies that the angle between the vector x and its image f(x) stays between 0 and arccos δ < π/2. These maps are quasisymmetric, although they are a much narrower subclass of quasisymmetric maps. For example, while a general quasisymmetric map in the complex plane could map the real line to a set of Hausdorff dimension strictly greater than one, a δ-monotone will always map the real line to a rotated graph of a Lipschitz function L:ℝ → ℝ. Doubling measures The real line Quasisymmetric homeomorphisms of the real line to itself can be characterized in terms of their derivatives. An increasing homeomorphism f:ℝ → ℝ is quasisymmetric if and only if there is a constant C > 0 and a doubling measure μ on the real line such that Euclidean space An analogous result holds in Euclidean space. Suppose C = 0 and we rewrite the above equation for f as Writing it this way, we can attempt to define a map using this same integral, but instead integrate (what is now a vector valued integrand) over ℝn: if μ is a doubling measure on ℝn and then the map is quasisymmetric (in fact, it is δ-monotone for some δ depending on the measure μ). Quasisymmetry and quasiconformality in Euclidean space Let and be open subsets of ℝn. If f : Ω → Ω´ is η-quasisymmetric, then it is also K-quasiconformal, where is a constant depending on . Conversely, if f : Ω → Ω´ is K-quasiconformal and is contained in , then is η-quasisymmetric on , where depends only on . Quasi-Möbius maps A related but weaker condition is the notion of quasi-Möbius maps where instead of the ratio only the cross-ratio is considered: Definition Let (X, dX) and (Y, dY) be two metric spaces and let η : [0, ∞) → [0, ∞) be an increasing function. 
An η-quasi-Möbius homeomorphism f:X → Y is a homeomorphism for which for every quadruple x, y, z, t of distinct points in X, we have See also Douady–Earle extension References Geometry Homeomorphisms Mathematical analysis Metric geometry
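For reference, the two defining inequalities that the text above refers to but does not display take the following standard forms; they are reproduced here as the usual textbook statements rather than quoted from this article.

```latex
% eta-quasisymmetry: for every triple of distinct points x, y, z in X,
\frac{d_Y\bigl(f(x),f(y)\bigr)}{d_Y\bigl(f(x),f(z)\bigr)}
   \;\le\; \eta\!\left(\frac{d_X(x,y)}{d_X(x,z)}\right).

% H-weak quasisymmetry: for every triple of distinct points x, y, z in X,
d_X(x,y) \le d_X(x,z)
   \;\Longrightarrow\;
d_Y\bigl(f(x),f(y)\bigr) \le H\, d_Y\bigl(f(x),f(z)\bigr).
```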
Quasisymmetric map
[ "Mathematics" ]
947
[ "Mathematical analysis", "Topology", "Homeomorphisms", "Geometry" ]
34,310,141
https://en.wikipedia.org/wiki/Tests%20of%20relativistic%20energy%20and%20momentum
Tests of relativistic energy and momentum are aimed at measuring the relativistic expressions for energy, momentum, and mass. According to special relativity, the properties of particles moving approximately at the speed of light significantly deviate from the predictions of Newtonian mechanics. For instance, the speed of light cannot be reached by massive particles. Today, those relativistic expressions for particles close to the speed of light are routinely confirmed in undergraduate laboratories, and are necessary in the design and theoretical evaluation of collision experiments in particle accelerators. See also Tests of special relativity for a general overview. Overview In classical mechanics, kinetic energy and momentum are expressed as On the other hand, special relativity predicts that the speed of light is constant in all inertial frames of reference. The relativistic energy–momentum relation reads: , from which the relations for rest energy , relativistic energy (rest + kinetic) , kinetic energy , and momentum of massive particles follow: , where . Relativistic energy and momentum therefore increase significantly with speed, so the speed of light cannot be reached by massive particles. In some relativity textbooks, the so-called "relativistic mass" is used as well. However, this concept is considered disadvantageous by many authors; instead, the expressions for relativistic energy and momentum should be used to express the velocity dependence in relativity, which provide the same experimental predictions. Early experiments The first experiments capable of detecting such relations were conducted by Walter Kaufmann, Alfred Bucherer and others between 1901 and 1915. These experiments were aimed at measuring the deflection of beta rays within a magnetic field so as to determine the mass-to-charge ratio of electrons. Since the charge was known to be velocity independent, any variation had to be attributed to alterations in the electron's momentum or mass (formerly known as transverse electromagnetic mass, equivalent to the "relativistic mass" as indicated above). Since relativistic mass is not often used anymore in modern textbooks, those tests can be described as measurements of relativistic momentum or energy, because the following relation applies: Electrons traveling at speeds between 0.25c and 0.75c showed an increase of momentum in agreement with the relativistic predictions, and were considered clear confirmations of special relativity. However, it was later pointed out that although the experiments were in agreement with relativity, the precision was not sufficient to rule out competing models of the electron, such as that of Max Abraham. Already in 1915, however, Arnold Sommerfeld was able to derive the fine structure of hydrogen-like spectra by using the relativistic expressions for momentum and energy (in the context of the Bohr–Sommerfeld theory). Subsequently, Karl Glitscher simply substituted the relativistic expressions for Abraham's, demonstrating that Abraham's theory is in conflict with experimental data and is therefore refuted, while relativity is in agreement with the data. Precision measurements In 1940, Rogers et al. performed the first electron deflection test sufficiently precise to definitely rule out competing models. As in the Bucherer–Neumann experiments, the velocity and the charge-to-mass ratio of beta particles with velocities up to 0.75c were measured. However, they made many improvements, including the employment of a Geiger counter. 
The accuracy with which relativity was confirmed in this experiment was within 1%. An even more precise electron deflection test was conducted by Meyer et al. (1963). They tested electrons traveling at velocities from 0.987 to 0.99c, which were deflected in a static homogeneous magnetic field by which p was measured, and a static cylindrical electric field by which was measured. They confirmed relativity with an upper limit for deviations of ~0.00037. Measurements of the charge-to-mass ratio, and thus the momentum, of protons have also been conducted. Grove and Fox (1953) measured 385-MeV protons moving at ~0.7c. Determination of the angular frequencies and of the magnetic field provided the charge-to-mass ratio. This, together with measuring the magnetic center, allowed them to confirm the relativistic expression for the charge-to-mass ratio with a precision of ~0.0006. However, Zrelov et al. (1958) criticized the scant information given by Grove and Fox, emphasizing the difficulty of such measurements due to the complex motion of the protons. Therefore, they conducted a more extensive measurement, in which protons of 660 MeV with a mean velocity of 0.8112c were employed. The protons' momentum was measured using a Litz wire, and the velocity was determined by evaluation of Cherenkov radiation. They confirmed relativity with an upper limit for deviations of ~0.0041. Bertozzi experiment Since the 1930s, relativity has been needed in the construction of particle accelerators, and the precision measurements mentioned above clearly confirmed the theory as well. But those tests demonstrate the relativistic expressions in an indirect way, since many other effects have to be considered in order to evaluate the deflection curve, velocity, and momentum. An experiment specifically aimed at demonstrating the relativistic effects in a very direct way was therefore conducted by William Bertozzi (1962, 1964). He employed the electron accelerator facility at MIT in order to initiate five electron runs, with electrons of kinetic energies between 0.5 and 15 MeV. These electrons were produced by a Van de Graaff generator and traveled a distance of 8.4 m until they hit an aluminium disc. First, the time of flight of the electrons was measured in all five runs; the velocity data obtained were in close agreement with the relativistic expectation. However, at this stage the kinetic energy was only indirectly determined by the accelerating fields. Therefore, the heat produced by some electrons hitting the aluminium disc was measured by calorimetry in order to directly obtain their kinetic energy; those results agreed with the expected energy within a 10% error margin. Undergraduate experiments Various experiments have been performed which, due to their simplicity, are still used as undergraduate experiments. Mass, velocity, momentum, and energy of electrons have been measured in different ways in those experiments, all of them confirming relativity. They include experiments involving beta particles, Compton scattering (in which electrons exhibit highly relativistic properties), and positron annihilation. Particle accelerators In modern particle accelerators at high energies, the predictions of special relativity are routinely confirmed, and are necessary for the design and theoretical evaluation of collision experiments, especially in the ultrarelativistic limit. 
For instance, time dilation must be taken into account to understand the dynamics of particle decay, and the relativistic velocity addition theorem explains the distribution of synchrotron radiation. Regarding the relativistic energy-momentum relations, a series of high precision velocity and energy-momentum experiments have been conducted, in which the energies employed were necessarily much higher than in the experiments mentioned above. Velocity Time of flight measurements have been conducted to measure differences in the velocities of electrons and light at the SLAC National Accelerator Laboratory. For instance, Brown et al. (1973) found no difference in the time of flight of 11-GeV electrons and visible light, setting an upper limit on velocity differences of . Another SLAC experiment conducted by Guiragossián et al. (1974) accelerated electrons up to energies of 15 to 20.5 GeV. They used a radio frequency separator (RFS) to measure time-of-flight differences and thus velocity differences between those electrons and 15-GeV gamma rays on a path length of 1015 m. They found no difference, increasing the upper limit to . Earlier, Alväger et al. (1964) at the CERN Proton Synchrotron had carried out a time of flight measurement to test the Newtonian momentum relations for light, which would be valid in the so-called emission theory. In this experiment, gamma rays were produced in the decay of 6-GeV pions traveling at 0.99975c. If Newtonian momentum were valid, those gamma rays should have traveled at superluminal speeds. However, they found no difference and gave an upper limit of . Energy and Calorimetry The entry of particles into particle detectors is accompanied by electron–positron annihilation, Compton scattering, Cherenkov radiation, etc., so that a cascade of effects leads to the production of new particles (photons, electrons, neutrinos, etc.). The energy of such particle showers corresponds to the relativistic kinetic energy and rest energy of the initial particles. This energy can be measured by calorimeters in an electrical, optical, thermal, or acoustical way. Thermal measurements to estimate the relativistic kinetic energy had already been carried out by Bertozzi, as mentioned above. Additional measurements at SLAC followed, in which the heat produced by 20-GeV electrons was measured in 1982. A beam dump of water-cooled aluminium was employed as the calorimeter. The results were in agreement with special relativity, even though the accuracy was only 30%. However, the experimentalists noted that calorimetric tests with 10-GeV electrons had already been carried out in 1969. There, copper was used as the beam dump, and an accuracy of 1% was achieved. In modern calorimeters, called electromagnetic or hadronic depending on the interaction, the energy of the particle showers is often measured via the ionization they cause. Excitations can also arise in scintillators (see scintillation), whereby light is emitted and then measured by a scintillation counter. Cherenkov radiation is measured as well. In all of those methods, the measured energy is proportional to the initial particle energy. Annihilation and pair production Relativistic energy and momentum can also be measured by studying processes such as annihilation and pair production. For instance, the rest energy of electrons and positrons is 0.51 MeV each. 
When a photon interacts with an atomic nucleus, electron-positron pairs can be generated provided the energy of the photon reaches the required threshold energy, which is the combined electron-positron rest energy of 1.02 MeV. If the photon energy is higher still, the excess energy is converted into kinetic energy of the particles. The reverse process occurs in electron-positron annihilation at low energies, in which photons are created with the same total energy as the electron-positron pair. These are direct examples of mass–energy equivalence. There are also many examples of conversion of relativistic kinetic energy into rest energy. In 1974, the SLAC National Accelerator Laboratory accelerated electrons and positrons up to relativistic velocities, so that their relativistic energy (i.e. the sum of their rest energy and kinetic energy) was significantly increased to about 1500 MeV each. When those particles collided, other particles such as the J/ψ meson, with a rest energy of about 3000 MeV, were produced. Much higher energies were employed at the Large Electron–Positron Collider in 1989, where electrons and positrons were accelerated up to 45 GeV each, in order to produce W and Z bosons of rest energies between 80 and 91 GeV. Later, the energies were considerably increased to 200 GeV to generate pairs of W bosons. Such bosons were also measured using proton-antiproton annihilation. The rest energy of those particles amounts to approximately 0.938 GeV each. The Super Proton Synchrotron accelerated those particles up to relativistic velocities and energies of approximately 270 GeV each, so that the center of mass energy at the collision reached 540 GeV. Thereby, quarks and antiquarks gained the necessary energy and momentum to annihilate into W and Z bosons. Many other experiments involving the creation of a considerable number of different particles at relativistic velocities have been (and still are) conducted in hadron colliders such as the Tevatron (up to 1 TeV), the Relativistic Heavy Ion Collider (up to 200 GeV), and most recently the Large Hadron Collider (up to 7 TeV) in the course of searching for the Higgs boson. Nuclear reactions The mass–energy relation can be tested in nuclear reactions, as the percent differences between the masses of the reactants and the products are big enough to measure; the change in total mass should account for the change in total kinetic energy. Einstein proposed such a test in the paper where he first stated the equivalence of mass and energy, mentioning the radioactive decay of radium as a possibility. The first test in a nuclear reaction, however, used the absorption of an incident proton by lithium-7, which then broke into two alpha particles. The change in mass corresponded to the change in kinetic energy to within 0.5%. A particularly sensitive test was carried out in 2005 in the gamma decay of excited sulfur and silicon nuclei, in each case to the non-excited state (ground state). The masses of the excited and ground states were determined by measuring their revolution frequencies in an electromagnetic trap. The gamma rays' energies were determined by measuring their wavelengths with gamma-ray diffraction, similar to X-ray diffraction, and using the well-established relation between photon energy and wavelength. The results confirmed the predictions of relativity to a precision of 0.0000004. References External links Physics FAQ: List of SR tests Momentum Physics experiments Special relativity
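The contrast that the Bertozzi runs described above make visible can be reproduced numerically from the standard relations for kinetic energy: relativistically E_k = (gamma - 1)mc^2, classically E_k = (1/2)mv^2. The sketch below is illustrative only; the electron rest energy is rounded and the energies chosen merely span a Bertozzi-like range.

```python
import math

M_E_C2_MEV = 0.511  # electron rest energy m*c^2 in MeV (rounded)

def beta_relativistic(kinetic_mev):
    """v/c from E_k = (gamma - 1) m c^2, i.e. gamma = 1 + E_k / (m c^2)."""
    gamma = 1.0 + kinetic_mev / M_E_C2_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

def beta_newtonian(kinetic_mev):
    """v/c from the classical relation E_k = (1/2) m v^2."""
    return math.sqrt(2.0 * kinetic_mev / M_E_C2_MEV)

# Illustrative energies in MeV, spanning roughly the range used in Bertozzi-type runs.
for ek in (0.5, 1.5, 4.5, 15.0):
    print(f"{ek:5.1f} MeV: relativistic v/c = {beta_relativistic(ek):.4f}, "
          f"Newtonian v/c = {beta_newtonian(ek):.2f}")
```

For the higher energies the Newtonian prediction exceeds the speed of light by a large factor, while the relativistic speed approaches but never reaches c, which is exactly the behaviour the time-of-flight measurements confirm.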
Tests of relativistic energy and momentum
[ "Physics", "Mathematics" ]
2,818
[ "Physical quantities", "Physics experiments", "Quantity", "Special relativity", "Experimental physics", "Theory of relativity", "Momentum", "Moment (physics)" ]
34,311,590
https://en.wikipedia.org/wiki/FCC%20mark
The FCC logo or the FCC mark is a voluntary mark employed on electronic products manufactured or sold in the United States that indicates that the electromagnetic radiation from the device is below the limits specified by the Federal Communications Commission and that the manufacturer has followed the requirements of the Supplier's Declaration of Conformity authorization procedures. The FCC label is found even on products sold outside US territory, because they are either products manufactured in the US that have been exported, or products that are also sold in the US. This makes the FCC label recognizable worldwide even to people to whom the name of the agency is not familiar. Formerly, devices classified under part 15 or part 18 of the FCC regulations were required to be labelled with the FCC mark, but in November 2017 the mark was made optional. Devices must still be accompanied by a Supplier's Declaration of Conformity (FCC Declaration of Conformity). The responsible party for the Supplier's Declaration of Conformity must be located within the United States. Overview The Federal Communications Commission established the regulations on electromagnetic interference under Part 15 of the FCC rules in 1975. After several amendments over the years, these regulations were reconstituted as the Declaration of Conformity and Certification procedures in 1998. The FCC mark is a stand-alone logo (as shown above) for devices falling under part 18 of Title 47 of the Code of Federal Regulations. For devices falling under part 15 rules, along with the logo the label should also display other data, viz., the trade name of the product, the model number, and information about whether the device was tested after assembly or assembled from tested components. Electronic labeling is an alternative for devices equipped with a display. Even though most of the nations exporting electronic equipment into the US market have their own standards for EMI as well as independent certification and conformity marks (e.g. the CCC certification mark for China, the VCCI (Voluntary Council for Control of Interference) mark for Japan, the KC mark by the Korea Communications Commission for South Korea, the ANATEL mark for Brazil, and the BSMI mark for Taiwan), most of the products sold in these markets still hold the FCC label. Electronic products sold in parts of Asia and Africa carry the FCC label even though it has no legal significance there, and often without any means of verifying whether they actually conform to the specified standards. Canada's regulating body is Innovation, Science and Economic Development Canada (ISED), formerly Industry Canada (IC). Products sold in Canada may have the FCC declaration and/or the CE declaration; however, neither declaration has any legal significance in Canada. See also CE mark Energy Star References Certification marks Electromagnetic compatibility Federal Communications Commission Symbols introduced in 1998 1998 establishments in the United States
FCC mark
[ "Mathematics", "Engineering" ]
551
[ "Radio electronics", "Electromagnetic compatibility", "Symbols", "Electrical engineering", "Certification marks" ]
34,317,494
https://en.wikipedia.org/wiki/QED%20vacuum
The QED vacuum or quantum electrodynamic vacuum is the field-theoretic vacuum of quantum electrodynamics. It is the lowest energy state (the ground state) of the electromagnetic field when the fields are quantized. When the Planck constant is hypothetically allowed to approach zero, the QED vacuum is converted to the classical vacuum, which is to say, the vacuum of classical electromagnetism. Another field-theoretic vacuum is the QCD vacuum of the Standard Model. Fluctuations The QED vacuum is subject to fluctuations about a dormant zero average-field condition. Virtual particles Attempts are sometimes made to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle: (where and are energy and time variations, and the Planck constant divided by 2π) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times. This interpretation of the energy-time uncertainty relation is not universally accepted, however. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty determines a "budget" for borrowing energy . Another issue is the meaning of "time" in this relation, because energy and time (unlike position and momentum , for example) do not satisfy a canonical commutation relation (such as ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are a continuing subject of study. Quantization of the fields The Heisenberg uncertainty principle does not allow a particle to exist in a state in which the particle is simultaneously at a fixed location, say the origin of coordinates, and has also zero momentum. Instead, the particle has a range of momentum and a spread in location attributable to quantum fluctuations; if confined, it has a zero-point energy. An uncertainty principle applies to all quantum mechanical operators that do not commute. In particular, it applies also to the electromagnetic field. A digression follows to flesh out the role of commutators for the electromagnetic field. The standard approach to the quantization of the electromagnetic field begins by introducing a vector potential and a scalar potential to represent the basic electromagnetic electric field and magnetic field using the relations: The vector potential is not completely determined by these relations, leaving open a so-called gauge freedom. Resolving this ambiguity using the Coulomb gauge leads to a description of the electromagnetic fields in the absence of charges in terms of the vector potential and the momentum field , given by: where is the electric constant of SI units. Quantization is achieved by insisting that the momentum field and the vector potential do not commute. That is, the equal-time commutator is: where , are spatial locations, is the reduced Planck constant, is the Kronecker delta and is the Dirac delta function. The notation denotes the commutator. Quantization can be achieved without introducing the vector potential, in terms of the underlying fields themselves: where the circumflex denotes a Schrödinger time-independent field operator, and is the antisymmetric Levi-Civita tensor. 
Because of the non-commutation of field variables, the variances of the fields cannot be zero, although their averages are zero. The electromagnetic field therefore has a zero-point energy, and a lowest quantum state. The interaction of an excited atom with this lowest quantum state of the electromagnetic field is what leads to spontaneous emission, the transition of an excited atom to a state of lower energy by emission of a photon even when no external perturbation of the atom is present. Electromagnetic properties As a result of quantization, the quantum electrodynamic vacuum can be considered a material medium. It is capable of vacuum polarization. In particular, the force law between charged particles is affected. The electrical permittivity of the quantum electrodynamic vacuum can be calculated, and it differs slightly from the value ε₀ of the classical vacuum. Likewise, its permeability can be calculated and differs slightly from μ₀. This medium is a dielectric with relative dielectric constant > 1, and is diamagnetic, with relative magnetic permeability < 1. Under some extreme circumstances in which the field exceeds the Schwinger limit (for example, in the very high fields found in the exterior regions of pulsars), the quantum electrodynamic vacuum is thought to exhibit nonlinearity in the fields. Calculations also indicate birefringence and dichroism at high fields. Many of the electromagnetic effects of the vacuum are small, and only recently have experiments been designed to enable the observation of nonlinear effects. PVLAS and other teams are working towards the needed sensitivity to detect QED effects. Attainability A perfect vacuum is itself only attainable in principle. It is an idealization, like absolute zero for temperature, that can be approached, but never actually realized. Virtual particles make a perfect vacuum unrealizable, but leave open the question of attainability of a quantum electrodynamic vacuum or QED vacuum. Predictions of the QED vacuum such as spontaneous emission, the Casimir effect and the Lamb shift have been experimentally verified, suggesting that the QED vacuum is a good model for a high quality realizable vacuum. There are competing theoretical models for vacuum, however. For example, the quantum chromodynamic vacuum includes many virtual particles not treated in quantum electrodynamics. The vacuum of quantum gravity treats gravitational effects not included in the Standard Model. It remains an open question whether further refinements in experimental technique ultimately will support another model for realizable vacuum. See also Feynman diagram History of quantum field theory Precision tests of QED References Vacuum Electromagnetism Electromagnetic radiation Energy (physics) Concepts in physics Quantum electrodynamics Articles containing video clips
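For reference, the standard textbook expressions behind two of the relations mentioned (but not displayed) in the text above are the potential representation of the fields used as the starting point for quantization, and the heuristic energy-time uncertainty relation invoked in the virtual-particle picture; they are quoted here in their usual form rather than from this article.

```latex
% Fields in terms of the scalar and vector potentials (Coulomb-gauge quantization
% starts from these relations):
\mathbf{B} = \nabla \times \mathbf{A}, \qquad
\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}.

% Heuristic energy-time uncertainty relation used in the "virtual particle" picture:
\Delta E \, \Delta t \;\gtrsim\; \frac{\hbar}{2}.
```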
QED vacuum
[ "Physics", "Mathematics" ]
1,222
[ "Electromagnetism", "Physical phenomena", "Physical quantities", "Electromagnetic radiation", "Quantity", "Vacuum", "Energy (physics)", "Radiation", "Fundamental interactions", "nan", "Wikipedia categories named after physical quantities", "Matter" ]
22,707,107
https://en.wikipedia.org/wiki/Strain%20%28mechanics%29
In mechanics, strain is defined as relative deformation, compared to a position configuration. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered. Strain has dimension of a length ratio, with SI base units of meter per meter (m/m). Hence strains are dimensionless and are usually expressed as a decimal fraction or a percentage. Parts-per notation is also used, e.g., parts per million or parts per billion (sometimes called "microstrains" and "nanostrains", respectively), corresponding to μm/m and nm/m. Strain can be formulated as the spatial derivative of displacement: where is the identity tensor. The displacement of a body may be expressed in the form , where is the reference position of material points of the body; displacement has units of length and does not distinguish between rigid body motions (translations and rotations) and deformations (changes in shape and size) of the body. The spatial derivative of a uniform translation is zero, thus strains measure how much a given displacement differs locally from a rigid-body motion. A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain, and the amount of distortion associated with the sliding of plane layers over each other is the shear strain, within a deforming body. This could be applied by elongation, shortening, or volume changes, or angular distortion. The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain, which pass through that point and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain, radiating from this point. However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions. If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain. Strain regimes Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories: Finite strain theory, also called large strain theory, large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue. Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel. 
Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements. Strain measures In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials, e.g., elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%; thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain. Engineering strain Engineering strain, also known as Cauchy strain, is expressed as the ratio of total deformation to the initial dimension of the material body on which forces are applied. In the case of a material line element or fiber axially loaded, its elongation gives rise to an engineering normal strain or engineering extensional strain , which equals the relative elongation or the change in length per unit of the original length of the line element or fibers (in meters per meter). The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have , where is the engineering normal strain, is the original length of the fiber and is the final length of the fiber. The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate. Stretch ratio The stretch ratio or extension ratio (symbol λ) is an alternative measure related to the extensional or normal strain of an axially loaded differential line element. It is defined as the ratio between the final length and the initial length of the material line. The extension ratio λ is related to the engineering strain e by This equation implies that when the normal strain is zero, so that there is no deformation, the stretch ratio is equal to unity. The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios. Logarithmic strain The logarithmic strain , also called, true strain or Hencky strain. Considering an incremental strain (Ludwik) the logarithmic strain is obtained by integrating this incremental strain: where is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path. Green strain The Green strain is defined as: Almansi strain The Euler-Almansi strain is defined as Strain tensor The (infinitesimal) strain tensor (symbol ) is defined in the International System of Quantities (ISQ), more specifically in ISO 80000-4 (Mechanics), as a "tensor quantity representing the deformation of matter caused by stress. Strain tensor is symmetric and has three linear strain and three shear strain (Cartesian) components." 
ISO 80000-4 further defines linear strain as the "quotient of change in length of an object and its length" and shear strain as the "quotient of parallel displacement of two surfaces of a layer and the thickness of the layer". Thus, strains are classified as either normal or shear. A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress. The strain tensor can then be expressed in terms of normal and shear components, with the normal strains ε_xx, ε_yy, ε_zz on the diagonal and the shear components ε_xy = γ_xy/2, ε_yz = γ_yz/2 and ε_zx = γ_zx/2 off the diagonal. Geometric setting Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy, which, after deformation, takes the form of a rhombus. The deformation is described by the displacement field u = (u_x, u_y). From the geometry of the deformed element, the side originally of length dx acquires the length sqrt((dx + (∂u_x/∂x)dx)² + ((∂u_y/∂x)dx)²), and similarly for the side of length dy. For very small displacement gradients the squares of the derivatives of u_x and u_y are negligible and we have, for example, length ≈ dx + (∂u_x/∂x)dx. Normal strain For an isotropic material that obeys Hooke's law, a normal stress will cause a normal strain. Normal strains produce dilations. The normal strain in the x-direction of the rectangular element is defined by ε_x = (change in length)/(original length) = ∂u_x/∂x. Similarly, the normal strain in the y- and z-directions becomes ε_y = ∂u_y/∂y and ε_z = ∂u_z/∂z. Shear strain The engineering shear strain (γ_xy) is defined as the change in angle between two line elements initially parallel to the x- and y-directions. Therefore, γ_xy = α + β. From the geometry of the deformed element, we have tan α = ((∂u_y/∂x)dx)/(dx + (∂u_x/∂x)dx) and tan β = ((∂u_x/∂y)dy)/(dy + (∂u_y/∂y)dy). For small displacement gradients we have ∂u_x/∂x ≪ 1 and ∂u_y/∂y ≪ 1. For small rotations, i.e. when α and β are ≪ 1, we have tan α ≈ α, tan β ≈ β. Therefore, α ≈ ∂u_y/∂x and β ≈ ∂u_x/∂y, thus γ_xy = α + β = ∂u_y/∂x + ∂u_x/∂y. By interchanging x and y and u_x and u_y, it can be shown that γ_xy = γ_yx. Similarly, for the yz- and xz-planes, we have γ_yz = ∂u_y/∂z + ∂u_z/∂y and γ_zx = ∂u_z/∂x + ∂u_x/∂z. Volume strain The volumetric strain, also called bulk strain, is the relative variation of the volume; for small strains it equals the trace of the strain tensor, δ = ΔV/V0 ≈ ε_x + ε_y + ε_z. Metric tensor A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor. See also Stress measures Strain rate Strain tensor References Tensors Continuum mechanics Non-Newtonian fluids Solid mechanics Dimensionless quantities
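A minimal numerical sketch of the small-strain relations derived above, assuming a made-up, spatially uniform displacement gradient: the symmetric part of the gradient gives the normal and shear strains, while a purely antisymmetric gradient (a rigid rotation) produces no strain.

import numpy as np

# Hypothetical uniform displacement gradient du_i/dx_j for a 2-D element
grad_u = np.array([[0.0010, 0.0040],
                   [0.0020, -0.0005]])

eps = 0.5 * (grad_u + grad_u.T)            # infinitesimal strain tensor
eps_x, eps_y = eps[0, 0], eps[1, 1]        # normal strains du_x/dx, du_y/dy
gamma_xy = grad_u[0, 1] + grad_u[1, 0]     # engineering shear strain = 2*eps_xy

print("normal strains:", eps_x, eps_y)
print("engineering shear strain:", gamma_xy, "=", 2 * eps[0, 1])

# A pure rigid rotation has an antisymmetric gradient and therefore zero strain:
rotation = np.array([[0.0, -0.003],
                     [0.003, 0.0]])
print(0.5 * (rotation + rotation.T))       # zero matrix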
Strain (mechanics)
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
1,865
[ "Solid mechanics", "Tensors", "Physical quantities", "Continuum mechanics", "Deformation (mechanics)", "Quantity", "Classical mechanics", "Materials science", "Mechanics", "Dimensionless quantities" ]
22,710,151
https://en.wikipedia.org/wiki/Hydrological%20optimization
Hydrological optimization applies mathematical optimization techniques (such as dynamic programming, linear programming, integer programming, or quadratic programming) to water-related problems. These problems may be for surface water, groundwater, or the combination. The work is interdisciplinary, and may be done by hydrologists, civil engineers, environmental engineers, and operations researchers. Simulation versus optimization Groundwater and surface water flows can be studied with hydrologic simulation. A typical program used for this work is MODFLOW. However, simulation models cannot easily help make management decisions, as simulation is descriptive. Simulation shows what would happen given a certain set of conditions. Optimization, by contrast, finds the best solution for a set of conditions. Optimization models have three parts: An objective, such as "Minimize cost" Decision variables, which correspond to the options available to management Constraints, which describe the technical or physical requirements imposed on the options To use hydrological optimization, a simulation is run to find constraint coefficients for the optimization. An engineer or manager can then add costs or benefits associated with a set of possible decisions, and solve the optimization model to find the best solution. Examples of problems solved with hydrological optimization Contaminant remediation in aquifers. The decision problem is where to locate wells, and choose a pumping rate, to minimize the cost to prevent spread of a contaminant. The constraints are associated with the hydrogeological flows. Water allocation to improve wetlands. This optimization model recommends water allocation and invasive vegetation control to improve wetland habitat of priority bird species. These recommendations are subject to constraints like water availability, spatial connectivity, hydraulic infrastructure capacities, vegetation responses, and available financial resources. Maximizing well abstraction subject to environmental flow constraints. The goal is to measure the effects of each user's water use on other users and on the environment, as accurately as possible, and then optimize over the available feasible solutions. Improving water quality. A simple optimization model identifies the cost-minimizing mix of best management practices to reduce the excess of nutrients in a watershed. Hydrological optimization is now being proposed for use with smart markets for water-related resources. Pipe network optimization with genetic algorithms. PDE-constrained optimization Partial differential equations (PDEs) are widely used to describe hydrological processes, suggesting that a high degree of accuracy in hydrological optimization should strive to incorporate PDE constraints into a given optimization. Common examples of PDEs used in hydrology include: Groundwater flow equation Primitive equations Saint-Venant equations Other environmental processes to consider as inputs include: Evapotranspiration Geomorphology Sediment transport See also Drainage research Geographic information system Integrated water resources management Optimal control Pipe network analysis Water in California References Further reading Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. . Loucks, Daniel P.; van Beek, Eelco (2017). Water Resource Systems Planning and Management: An Introduction to Methods, Models, and Applications. Springer. . Nocedal, Jorge; Wright, Stephen (2006). Numerical Optimization. 
Springer Series in Operations Research and Financial Engineering, Springer. . Qin, Youwei; Kavetski, Dmitri; Kuczera, George (2018). "A Robust Gauss-Newton Algorithm for the Optimization of Hydrological Models: Benchmarking Against Industry-Standard Algorithms". Water Resources Research. 54 (11): 9637-9654. Tayfur, Gokmen (2017). "Modern Optimization Methods in Water Resources Planning, Engineering and Management". Water Resources Management. 31: 3205-3233. External links Water Resource Systems (MIT OpenCourseWare) Lecture notes Hydraulics Hydraulic engineering Hydrology Mathematical optimization Optimal control Water resources management
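Returning to the three-part structure described earlier (an objective, decision variables, and constraints), a deliberately tiny linear program illustrates the idea: allocate a fixed volume of water between two hypothetical users to maximize total benefit. All benefit coefficients, availability figures and minimum deliveries below are invented for illustration and do not come from any study cited in this article.

from scipy.optimize import linprog

# Decision variables: x1, x2 = water delivered to users 1 and 2 (million m^3)
# Objective: maximize 40*x1 + 25*x2; linprog minimizes, so the signs are flipped
c = [-40.0, -25.0]

A_ub = [[1.0, 1.0],     # x1 + x2 <= 100   (total water available)
        [1.0, 0.0]]     # x1      <= 70    (canal capacity for user 1, hypothetical)
b_ub = [100.0, 70.0]

bounds = [(10.0, None),  # minimum delivery to user 1
          (20.0, None)]  # minimum delivery to user 2

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("allocation:", result.x)        # expected [70, 30]
print("total benefit:", -result.fun)  # 40*70 + 25*30 = 3550

In a real hydrological application the constraint coefficients would come from a simulation model such as MODFLOW rather than being written by hand.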
Hydrological optimization
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
764
[ "Hydrology", "Mathematical analysis", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Mathematical optimization", "Hydraulic engineering", "Fluid dynamics" ]
5,060,723
https://en.wikipedia.org/wiki/Signal%20conditioning
In electronics and signal processing, signal conditioning is the manipulation of an analog signal in such a way that it meets the requirements of the next stage for further processing. In an analog-to-digital converter (ADC) application, signal conditioning includes voltage or current limiting and anti-aliasing filtering. In control engineering applications, it is common to have a sensing stage (which consists of a sensor), a signal conditioning stage (where usually amplification of the signal is done) and a processing stage (often carried out by an ADC and a micro-controller). Operational amplifiers (op-amps) are commonly employed to carry out the amplification of the signal in the signal conditioning stage. In some transducers, signal conditioning is integrated with the sensor, for example in Hall effect sensors. In power electronics, before processing the input sensed signals by sensors like voltage sensor and current sensor, signal conditioning scales signals to level acceptable to the microprocessor. Inputs Signal inputs accepted by signal conditioners include DC voltage and current, AC voltage and current, frequency and electric charge. Sensor inputs can be accelerometer, thermocouple, thermistor, resistance thermometer, strain gauge or bridge, and LVDT or RVDT. Specialized inputs include encoder, counter or tachometer, timer or clock, relay or switch, and other specialized inputs. Outputs for signal conditioning equipment can be voltage, current, frequency, timer or counter, relay, resistance or potentiometer, and other specialized outputs. Processes Signal conditioning can include amplification, filtering, converting, range matching, isolation and any other processes required to make sensor output suitable for processing after conditioning. Input Coupling Use AC coupling when the signal contains a large DC component. If you enable AC coupling, you remove the large DC offset for the input amplifier and amplify only the AC component. This configuration makes effective use of the ADC dynamic range Filtering Filtering is the most common signal conditioning function, as usually not all the signal frequency spectrum contains valid data. For example, the 50 or 60 Hz AC power lines, present in most environments induce noise on signals that can cause interference if amplified. Amplification Signal amplification performs two important functions: increases the resolution of the input signal, and increases its signal-to-noise ratio. For example, the output of an electronic temperature sensor, which is probably in the millivolts range is probably too low for an analog-to-digital converter (ADC) to process directly. In this case it is necessary to bring the voltage level up to that required by the ADC. Commonly used amplifiers used for signal conditioning include sample and hold amplifiers, peak detectors, log amplifiers, antilog amplifiers, instrumentation amplifiers and programmable gain amplifiers. Attenuation Attenuation, the opposite of amplification, is necessary when voltages to be digitized are beyond the ADC range. This form of signal conditioning decreases the input signal amplitude so that the conditioned signal is within ADC range. Attenuation is typically necessary when measuring voltages that are more than 10 V. Excitation Some sensors require external voltage or current source of excitation, These sensors are called active sensors. (E.g. a temperature sensor like a thermistor & RTD, a pressure sensor (piezo-resistive and capacitive), etc.). 
The stability and precision of the excitation signal directly relates to the sensor accuracy and stability. Linearization Linearization is necessary when sensors produce voltage signals that are not linearly related to the physical measurement. Linearization is the process of interpreting the signal from the sensor and can be done either with signal conditioning or through software. Electrical isolation Signal isolation may be used to pass the signal from the source to the measuring device without a physical connection. It is often used to isolate possible sources of signal perturbations that could otherwise follow the electrical path from the sensor to the processing circuitry. In some situations, it may be important to isolate the potentially expensive equipment used to process the signal after conditioning from the sensor. Magnetic or optical isolation can be used. Magnetic isolation transforms the signal from a voltage to a magnetic field so the signal can be transmitted without physical connection (for example, using a transformer). Optical isolation works by using an electronic signal to modulate a signal encoded by light transmission (optical encoding). The decoded light transmission is then used for input for the next stage of processing. Surge protection A surge protector absorbs voltage spikes to protect the next stage from damage. References Electrical engineering
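The stages described above can be strung together in a few lines of Python as a rough sketch of a conditioning chain: a calibration-table linearization, AC coupling (removal of the DC offset), low-pass filtering of mains pickup, amplification, and clipping to the ADC input range. Every numeric value here (sample rate, offset, calibration points, cutoff, gain, ADC limits) is invented for illustration, and the order of the stages would depend on the actual application.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000                                  # sample rate in Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic sensor output: a small 5 Hz signal, a 1.5 V DC offset, 50 Hz pickup
raw = 0.02 * np.sin(2 * np.pi * 5 * t) + 1.5 + 0.01 * np.sin(2 * np.pi * 50 * t)

# Linearization via a calibration lookup table (sensor volts -> physical units)
cal_volts = np.array([1.40, 1.45, 1.50, 1.55, 1.60])   # hypothetical calibration
cal_units = np.array([20.0, 22.5, 25.0, 27.5, 30.0])
linearized = np.interp(raw, cal_volts, cal_units)

# AC coupling: remove the DC component so the amplifier sees only the AC part
ac_coupled = raw - raw.mean()

# Low-pass filter below the 50 Hz interference (4th-order Butterworth, 20 Hz cutoff)
b, a = butter(4, 20 / (fs / 2), btype="low")
filtered = filtfilt(b, a, ac_coupled)

# Amplify to use the ADC dynamic range, then clip to the ADC limits (+/- 5 V)
conditioned = np.clip(100.0 * filtered, -5.0, 5.0)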
Signal conditioning
[ "Engineering" ]
939
[ "Electrical engineering" ]
5,061,875
https://en.wikipedia.org/wiki/Journal%20of%20the%20American%20Society%20for%20Mass%20Spectrometry
The Journal of the American Society for Mass Spectrometry is a monthly peer-reviewed scientific journal published by ACS Publications since 2020. From 2011 to 2019 it was published by Springer Science+Business Media and prior to that by Elsevier. It is the official publication of the American Society for Mass Spectrometry and freely available to members. The journal covers all aspects of mass spectrometry. Until 2015, Michael L. Gross (Washington University in St. Louis) was the founding editor-in-chief; he was succeeded by Joseph A. Loo (University of California, Los Angeles). The journal is abstracted and indexed in MEDLINE. References External links Delayed open access journals Mass spectrometry journals Academic journals established in 1990 English-language journals American Chemical Society academic journals
Journal of the American Society for Mass Spectrometry
[ "Physics", "Chemistry" ]
164
[ "Spectrum (physical sciences)", "Biochemistry journal stubs", "Biochemistry stubs", "Mass spectrometry", "Mass spectrometry journals" ]
5,063,146
https://en.wikipedia.org/wiki/Applied%20general%20equilibrium
In mathematical economics, applied general equilibrium (AGE) models were pioneered by Herbert Scarf at Yale University in 1967, in two papers, and a follow-up book with Terje Hansen in 1973, with the aim of estimating the Arrow–Debreu model of general equilibrium theory with empirical data, to provide “a general method for the explicit numerical solution of the neoclassical model” (Scarf with Hansen 1973: 1). Scarf's method iterated a sequence of simplicial subdivisions which would generate a decreasing sequence of simplices around any solution of the general equilibrium problem. With sufficiently many steps, the sequence would produce a price vector that clears the market. In Scarf's words: “Brouwer's fixed point theorem states that a continuous mapping of a simplex into itself has at least one fixed point. This paper describes a numerical algorithm for approximating, in a sense to be explained below, a fixed point of such a mapping” (Scarf 1967a: 1326). Scarf never built an AGE model, but hinted that “these novel numerical techniques might be useful in assessing consequences for the economy of a change in the economic environment” (Kehoe et al. 2005, citing Scarf 1967b). His students elaborated the Scarf algorithm into a tool box, where the price vector could be solved for any changes in policies (or exogenous shocks), giving the equilibrium ‘adjustments’ needed for the prices. This method was first used by Shoven and Whalley (1972 and 1973), and then was developed through the 1970s by Scarf’s students and others. Most contemporary applied general equilibrium models are numerical analogs of traditional two-sector general equilibrium models popularized by James Meade, Harry Johnson, Arnold Harberger, and others in the 1950s and 1960s. Earlier analytic work with these models examined the distortionary effects of taxes, tariffs, and other policies, along with functional incidence questions. More recent applied models provide numerical estimates of efficiency and distributional effects within the same framework. Scarf's fixed-point method was a breakthrough in the mathematics of computation generally, and specifically in optimization and computational economics. Later researchers continued to develop iterative methods for computing fixed points, both for topological models like Scarf's and for models described by functions with continuous second derivatives or convexity or both. “Global Newton methods” for essentially convex and smooth functions and path-following methods for diffeomorphisms converge faster than robust algorithms for continuous functions, when the smooth methods are applicable. AGE and CGE models AGE models, being based on Arrow–Debreu general equilibrium theory, work in a different manner than CGE models. The model first establishes the existence of equilibrium through the standard Arrow–Debreu exposition, then inputs data into all the various sectors, and then applies Scarf’s algorithm (Scarf 1967a, 1967b and Scarf with Hansen 1973) to solve for a price vector that would clear all markets. This algorithm would narrow down the possible relative prices through simplicial subdivision, repeatedly reducing the size of the ‘net’ within which possible solutions were found. AGE modelers then consciously choose a cutoff and set an approximate solution, as the net never closes on a unique point through the iteration process. 
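Scarf's simplicial-subdivision algorithm is too involved to reproduce here, but the underlying task, searching for a price vector at which excess demand vanishes, can be illustrated with a much cruder device: a tatonnement-style iteration for a two-good exchange economy with Cobb-Douglas consumers. The preference shares and endowments below are invented, and this sketch is not Scarf's method; it only shows what "solving for a price vector that clears the market" means computationally.

import numpy as np

alpha = np.array([[0.3, 0.7],    # consumer 1 expenditure shares (hypothetical)
                  [0.6, 0.4]])   # consumer 2 expenditure shares
endow = np.array([[1.0, 0.0],    # consumer 1 owns one unit of good 1
                  [0.0, 1.0]])   # consumer 2 owns one unit of good 2

def excess_demand(p):
    income = endow @ p                      # wealth of each consumer at prices p
    demand = alpha * income[:, None] / p    # Cobb-Douglas demand x_ij = a_ij * m_i / p_j
    return demand.sum(axis=0) - endow.sum(axis=0)

p = np.array([0.5, 0.5])
for _ in range(10_000):                     # raise prices of goods in excess demand
    p = p + 0.01 * excess_demand(p)
    p = p / p.sum()                         # only relative prices matter

print("approximate clearing prices:", p)    # about [0.462, 0.538]
print("excess demands:", excess_demand(p))  # close to zero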
CGE models are based on macro balancing equations, and use an equal number of equations (based on the standard macro balancing equations) and unknowns solvable as simultaneous equations, where exogenous variables are changed outside the model, to give the endogenous results. References Bibliography Cardenete, M. Alejandro, Guerra, Ana-Isabel and Sancho, Ferran (2012). Applied General Equilibrium: An Introduction. Springer. Scarf, H.E., 1967a, “The approximation of Fixed Points of a continuous mapping”, SIAM Journal on Applied Mathematics 15: 1328–43 Scarf, H.E., 1967b, “On the computation of equilibrium prices” in Fellner, W.J. (ed.), Ten Economic Studies in the tradition of Irving Fischer, New York, NY: Wiley Scarf, H.E. with Hansen, T, 1973, The Computation of Economic Equilibria, Cowles Foundation for Research in economics at Yale University, Monograph No. 24, New Haven, CT and London, UK: Yale University Press Kehoe, T.J., Srinivasan, T.N. and Whalley, J., 2005, Frontiers in Applied General Equilibrium Modeling, In honour of Herbert Scarf, Cambridge, UK: Cambridge University Press Shoven, J. B. and Whalley, J., 1972, "A General Equilibrium Calculation of the Effects of Differential Taxation of Income from Capital in the U.S.", Journal of Public Economics 1 (3–4), November, pp. 281–321 Shoven, J.B. and Whalley, J., 1973, “General Equilibrium with Taxes: A Computational Procedure and an Existence Proof”, The Review of Economic Studies 40 (4), October, pp. 475–89 Velupillai, K.V., 2006, “Algorithmic foundations of computable general equilibrium theory”, Applied Mathematics and Computation 179, pp. 360–69 General equilibrium theory Fixed points (mathematics)
Applied general equilibrium
[ "Mathematics" ]
1,086
[ "Fixed points (mathematics)", "Mathematical analysis", "Topology", "Dynamical systems" ]
5,063,437
https://en.wikipedia.org/wiki/Potassium%20persulfate
Potassium persulfate is the inorganic compound with the formula K2S2O8. Also known as potassium peroxydisulfate, it is a white solid that is sparingly soluble in cold water, but dissolves better in warm water. This salt is a powerful oxidant, commonly used to initiate polymerizations. Structure The sodium and potassium salts are very similar. In the potassium salt, the O-O distance is 1.495Å. The individual sulfate groups are tetrahedral, with three short S-O distances near 1.43 and one long S-O bond at 1.65Å. Preparation Potassium persulfate can be prepared by electrolysis of a cold solution potassium bisulfate in sulfuric acid at a high current density. 2 KHSO4 → K2S2O8 + H2 It can also be prepared by adding potassium bisulfate (KHSO4) to a solution of the more soluble salt ammonium peroxydisulfate (NH4)2S2O8. In principle it can be prepared by chemical oxidation of potassium sulfate using fluorine. Several million kilograms of the ammonium, sodium, and potassium salts of peroxydisulfate are produced annually. Uses This salt is used to initiate polymerization of various alkenes leading to commercially important polymers such as styrene-butadiene rubber and polytetrafluoroethylene and related materials. In solution, the dianion dissociates to give radicals: [O3SO-OSO3]2− 2 [SO4]•− It is used in organic chemistry as an oxidizing agent, for instance in the Elbs persulfate oxidation of phenols and the Boyland–Sims oxidation of anilines. As a strong yet stable bleaching agent it also finds use in various hair bleaches and lighteners. Such brief and non-continuous use is normally hazard free, however prolonged contact can cause skin irritation. It has been used as an improving agent for flour with the E number E922, although it is no longer approved for this use within the EU. Precautions The salt is a strong oxidant and is incompatible with organic compounds. Prolonged skin contact can result in irritation. References Persulfates Potassium compounds Oxidizing agents Radical initiators
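As a quick arithmetic check on the electrolytic preparation 2 KHSO4 → K2S2O8 + H2, the sketch below computes the theoretical persulfate yield from an arbitrary 100 g of potassium bisulfate; the molar masses are rounded.

# Approximate molar masses in g/mol (K 39.10, H 1.008, S 32.06, O 16.00)
M_KHSO4 = 39.10 + 1.008 + 32.06 + 4 * 16.00    # about 136.2 g/mol
M_K2S2O8 = 2 * 39.10 + 2 * 32.06 + 8 * 16.00   # about 270.3 g/mol

grams_KHSO4 = 100.0                  # arbitrary starting amount
mol_KHSO4 = grams_KHSO4 / M_KHSO4
mol_K2S2O8 = mol_KHSO4 / 2           # 2 KHSO4 give 1 K2S2O8 (plus H2)

print(f"theoretical yield: {mol_K2S2O8 * M_K2S2O8:.1f} g of K2S2O8")  # roughly 99 g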
Potassium persulfate
[ "Chemistry", "Materials_science" ]
488
[ "Redox", "Radical initiators", "Oxidizing agents", "Polymer chemistry", "Reagents for organic chemistry", "Persulfates" ]
5,064,357
https://en.wikipedia.org/wiki/Nickel%28II%29%20oxide
Nickel(II) oxide is the chemical compound with the formula NiO. It is the principal oxide of nickel. It is classified as a basic metal oxide. Several million kilograms are produced annually of varying quality, mainly as an intermediate in the production of nickel alloys. The mineralogical form of NiO, bunsenite, is very rare. Other, higher nickel oxides have been claimed, for example Ni2O3 and NiO2, but remain unproven. Production NiO can be prepared by multiple methods. Upon heating above 400 °C, nickel powder reacts with oxygen to give NiO. In some commercial processes, green nickel oxide is made by heating a mixture of nickel powder and water at 1000 °C; the rate of this reaction can be increased by the addition of NiO. The simplest and most successful method of preparation is through pyrolysis of nickel(II) compounds such as the hydroxide, nitrate, and carbonate, which yield a light green powder. Synthesis from the elements by heating the metal in oxygen can yield grey to black powders, which indicates nonstoichiometry. Structure NiO adopts the NaCl structure, with octahedral Ni2+ and O2− sites. The conceptually simple structure is commonly known as the rock salt structure. Like many other binary metal oxides, NiO is often non-stoichiometric, meaning that the Ni:O ratio deviates from 1:1. In nickel oxide, this non-stoichiometry is accompanied by a color change, with the stoichiometrically correct NiO being green and the non-stoichiometric NiO being black. Applications and reactions NiO has a variety of specialized applications, and generally applications distinguish between "chemical grade", which is relatively pure material for specialty applications, and "metallurgical grade", which is mainly used for the production of alloys. It is used in the ceramic industry to make frits, ferrites, and porcelain glazes. The sintered oxide is used to produce nickel steel alloys. Charles Édouard Guillaume won the 1920 Nobel Prize in Physics for his work on nickel steel alloys, which he called invar and elinvar. NiO is a commonly used hole transport material in thin film solar cells. It was also a component in the nickel-iron battery, also known as the Edison Battery, and is a component in fuel cells. It is the precursor to many nickel salts, for use as specialty chemicals and catalysts. More recently, NiO was used to make the NiCd rechargeable batteries found in many electronic devices until the development of the environmentally superior NiMH battery. NiO, an anodic electrochromic material, has been widely studied as a counter electrode with tungsten oxide, a cathodic electrochromic material, in complementary electrochromic devices. About 4000 tons of chemical grade NiO are produced annually. Black NiO is the precursor to nickel salts, which arise by treatment with mineral acids. NiO is a versatile hydrogenation catalyst. Heating nickel oxide with either hydrogen, carbon, or carbon monoxide reduces it to metallic nickel. It combines with the oxides of sodium and potassium at high temperatures (>700 °C) to form the corresponding nickelate. Electronic structure NiO is useful for illustrating the failure of density functional theory (using functionals based on the local-density approximation) and Hartree–Fock theory to account for the strong correlation. The term strong correlation refers to behavior of electrons in solids that is not well described (often not even in a qualitatively correct manner) by simple one-electron theories such as the local-density approximation (LDA) or Hartree–Fock theory. 
For instance, the seemingly simple material NiO has a partially filled 3d-band (the Ni atom has 8 of 10 possible 3d-electrons) and therefore would be expected to be a good conductor. However, strong Coulomb repulsion (a correlation effect) between d-electrons makes NiO instead a wide band gap Mott insulator. Thus, NiO has an electronic structure that is neither simply free-electron-like nor completely ionic, but a mixture of both. Health risks Long-term inhalation of NiO is damaging to the lungs, causing lesions and in some cases cancer. The calculated half-life of dissolution of NiO in the blood is more than 90 days. NiO has a long retention half-time in the lungs; after administration to rodents, it persisted in the lungs for more than 3 months. Nickel oxide is classified as a human carcinogen based on increased respiratory cancer risks observed in epidemiological studies of sulfidic ore refinery workers. In a 2-year National Toxicology Program green NiO inhalation study, some evidence of carcinogenicity in F344/N rats but equivocal evidence in female B6C3F1 mice was observed; there was no evidence of carcinogenicity in male B6C3F1 mice. Chronic inflammation without fibrosis was observed in the 2-year studies. References External links Bunsenite at mindat.org Bunsenite mineral data Transition metal oxides Nickel compounds Non-stoichiometric compounds IARC Group 1 carcinogens Hydrogenation catalysts Rock salt crystal structure
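As a small numeric companion to the rock-salt structure of NiO described above, its theoretical density can be estimated from the cubic lattice parameter; the value of roughly 4.18 Å used below is an assumed, approximate figure, not taken from this article.

# Rock-salt NiO: 4 formula units per conventional cubic unit cell
N_A = 6.022e23              # Avogadro constant, 1/mol
M_NiO = 58.69 + 16.00       # molar mass of NiO in g/mol (about 74.7)
a_cm = 4.18e-8              # assumed lattice parameter, 4.18 Angstrom in cm

density = 4 * M_NiO / (N_A * a_cm**3)      # g/cm^3
print(f"estimated theoretical density of NiO: {density:.2f} g/cm^3")   # about 6.8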
Nickel(II) oxide
[ "Chemistry" ]
1,055
[ "Non-stoichiometric compounds", "Hydrogenation catalysts", "Hydrogenation" ]
5,065,725
https://en.wikipedia.org/wiki/Methyltransferase
Methyltransferases are a large group of enzymes that all methylate their substrates but can be split into several subclasses based on their structural features. The most common class of methyltransferases is class I, all of which contain a Rossmann fold for binding S-Adenosyl methionine (SAM). Class II methyltransferases contain a SET domain, which are exemplified by SET domain histone methyltransferases, and class III methyltransferases, which are membrane associated. Methyltransferases can also be grouped as different types utilizing different substrates in methyl transfer reactions. These types include protein methyltransferases, DNA/RNA methyltransferases, natural product methyltransferases, and non-SAM dependent methyltransferases. SAM is the classical methyl donor for methyltransferases, however, examples of other methyl donors are seen in nature. The general mechanism for methyl transfer is a SN2-like nucleophilic attack where the methionine sulfur serves as the leaving group and the methyl group attached to it acts as the electrophile that transfers the methyl group to the enzyme substrate. SAM is converted to S-Adenosyl homocysteine (SAH) during this process. The breaking of the SAM-methyl bond and the formation of the substrate-methyl bond happen nearly simultaneously. These enzymatic reactions are found in many pathways and are implicated in genetic diseases, cancer, and metabolic diseases. Another type of methyl transfer is the radical S-Adenosyl methionine (SAM) which is the methylation of unactivated carbon atoms in primary metabolites, proteins, lipids, and RNA. Function Genetics Methylation, as well as other epigenetic modifications, affects transcription, gene stability, and parental imprinting. It directly impacts chromatin structure and can modulate gene transcription, or even completely silence or activate genes, without mutation to the gene itself. Though the mechanisms of this genetic control are complex, hypo- and hypermethylation of DNA is implicated in many diseases. Protein regulation Methylation of proteins has a regulatory role in protein–protein interactions, protein–DNA interactions, and protein activation. Examples: RCC1, an important mitotic protein, is methylated so that it can interact with centromeres of chromosomes. This is an example of regulation of protein-protein interaction, as methylation regulates the attachment of RCC1 to histone proteins H2A and H2B. The RCC1-chromatin interaction is also an example of a protein-DNA interaction, as another domain of RCC1 interacts directly with DNA when this protein is methylated. When RCC1 is not methylated, dividing cells have multiple spindle poles and usually cannot survive. p53 methylated on lysine to regulate its activation and interaction with other proteins in the DNA damage response. This is an example of regulation of protein-protein interactions and protein activation. p53 is a known tumor suppressor that activates DNA repair pathways, initiates apoptosis, and pauses the cell cycle. Overall, it responds to mutations in DNA, signaling to the cell to fix them or to initiate cell death so that these mutations cannot contribute to cancer. NF-κB (a protein involved in inflammation) is a known methylation target of the methyltransferase SETD6, which turns off NF-κB signaling by inhibiting of one of its subunits, RelA. This reduces the transcriptional activation and inflammatory response, making methylation of NF-κB a regulatory process by which cell signaling through this pathway is reduced. 
Natural product methyltransferases provide a variety of inputs into metabolic pathways, including the availability of cofactors, signalling molecules, and metabolites. This regulates various cellular pathways by controlling protein activity. Types Histone methyltransferases Histone methyltransferases are critical for genetic regulation at the epigenetic level. They modify mainly lysine on the ε-nitrogen and the arginine guanidinium group on histone tails. Lysine methyltransferases and Arginine methyltransferases are unique classes of enzymes, but both bind SAM as a methyl donor for their histone substrates. Lysine amino acids can be modified with one, two, or three methyl groups, while Arginine amino acids can be modified with one or two methyl groups. This increases the strength of the positive charge and residue hydrophobicity, allowing other proteins to recognize methyl marks. The effect of this modification depends on the location of the modification on the histone tail and the other histone modifications around it. The location of the modifications can be partially determined by DNA sequence, as well as small non-coding RNAs and the methylation of the DNA itself. Most commonly, it is histone H3 or H4 that is methylated in vertebrates. Either increased or decreased transcription of genes around the modification can occur. Increased transcription is a result of decreased chromatin condensation, while decreased transcription results from increased chromatin condensation. Methyl marks on the histones contribute to these changes by serving as sites for recruitment of other proteins that can further modify chromatin. N-terminal methyltransferases N-alpha methyltransferases transfer a methyl group from SAM to the N-terminal nitrogen on protein targets. The N-terminal methionine is first cleaved by another enzyme and the X-Proline-Lysine consensus sequence is recognized by the methyltransferase. For all known substrates, the X amino acid is Alanine, Serine, or Proline. This reaction yields a methylated protein and SAH. Known targets of these methyltransferases in humans include RCC-1 (a regulator of nuclear transport proteins) and Retinoblastoma protein (a tumor suppressor protein that inhibits excessive cell division). RCC-1 methylation is especially important in mitosis as it coordinates the localization of some nuclear proteins in the absence of the nuclear envelope. When RCC-1 is not methylated, cell division is abnormal following the formation of extra spindle poles. The function of Retinoblastoma protein N-terminal methylation is not known. DNA/RNA methyltransferases DNA methylation, a key component of genetic regulation, occurs primarily at the 5-carbon of the base cytosine, forming 5’methylcytosine (see left). Methylation is an epigenetic modification catalyzed by DNA methyltransferase enzymes, including DNMT1, DNMT2 (renamed TRDMT1 to reflect its function methylating tRNA, not DNA), and DNMT3. These enzymes use S-adenosylmethionine as a methyl donor and contain several highly conserved structural features between the three forms; these include the S-adenosylmethionine binding site, a vicinal proline-cysteine pair which forms a thiolate anion important for the reaction mechanism, and the cytosine substrate binding pocket. Many features of DNA methyltransferases are highly conserved throughout many classes of life, from bacteria to mammals. 
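The N-terminal recognition rule described above for protein N-alpha methyltransferases, an X-Pro-Lys consensus in which X is Ala, Ser or Pro after cleavage of the initiator methionine, lends itself to a trivial sequence screen. The example sequences below are made up, and the helper is only a sketch of the stated rule, not a validated substrate predictor.

import re

# Optional initiator Met, then X-Pro-Lys with X restricted to Ala, Ser or Pro
XPK_MOTIF = re.compile(r"^M?[ASP]PK")

def has_nterminal_consensus(protein_seq: str) -> bool:
    return bool(XPK_MOTIF.match(protein_seq.upper()))

print(has_nterminal_consensus("MSPKRIAVELL"))  # True:  Met-Ser-Pro-Lys...
print(has_nterminal_consensus("MAPKDEQW"))     # True:  Met-Ala-Pro-Lys...
print(has_nterminal_consensus("MGKDEQW"))      # False: no X-Pro-Lys at the N-terminus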
In addition to controlling the expression of certain genes, there are a variety of protein complexes, many with implications for human health, which only bind to methylated DNA recognition sites. Many of the early DNA methyltransferases are thought to have been derived from RNA methyltransferases that are presumed to have been active in the RNA world to protect many species of primitive RNA. RNA methylation has been observed in different types of RNA species, viz. mRNA, rRNA, tRNA, snoRNA, snRNA, miRNA and tmRNA, as well as viral RNA species. Specific RNA methyltransferases are employed by cells to place these marks on RNA species according to the needs and environment of the cell; this forms part of the field called molecular epigenetics. 2'-O-methylation, m6A methylation, m1G methylation, as well as m5C, are the methylation marks most commonly observed in different types of RNA. The m6A methyltransferase is an enzyme that catalyzes the following chemical reaction: S-adenosyl-L-methionine + DNA adenine ⇌ S-adenosyl-L-homocysteine + DNA 6-methylaminopurine. m6A was primarily found in prokaryotes until 2015, when it was also identified in some eukaryotes. m6A methyltransferases methylate the amino group in DNA at the C-6 position of adenine, in part to prevent the host system from digesting its own genome through restriction enzymes. m5C plays a role in regulating gene transcription. m5C transferases are the enzymes that produce C5-methylcytosine in DNA at the C-5 position of cytosine and are found in most plants and some eukaryotes. 
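Because the DNA methyltransferases discussed above act on cytosine, and in vertebrates mostly on cytosines followed by guanine, a first look at potential methylation targets in a sequence is just a dinucleotide count. The sequence below is invented and the function is only illustrative.

def cpg_stats(dna: str):
    # Count CpG dinucleotides and total cytosines in a DNA string.
    seq = dna.upper()
    cpg_sites = sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")
    return {"length": len(seq), "cytosines": seq.count("C"), "CpG sites": cpg_sites}

print(cpg_stats("ATCGCGTTACGGCTACGATCG"))
# {'length': 21, 'cytosines': 6, 'CpG sites': 5}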
Radical SAM methyltransferases Based on different protein structures and mechanisms of catalysis, there are 3 different types of radical SAM (RS) methylases: Class A, B, and C. Class A RS methylases are the best characterized of the 4 enzymes and are related to both RlmN and Cfr. RlmN is ubiquitous in bacteria which enhances translational fidelity and RlmN catalyzes methylation of C2 of adenosine 2503 (A2503) in 23 S rRNA and C2 of adenosine (A37). Cfr, on the other hand, catalyzes methylation of C8 of A2503 as well and it also catalyzes C2 methylation. Class B is currently the largest class of radical SAM methylases which can methylate both sp2-hybridized and sp3-hybridized carbon atoms in different sets of substrates unlike Class A which only catalyzes sp2-hybridized carbon atoms. The main difference that distinguishes Class B from others is the additional N-terminal cobalamin-binding domain that binds to the RS domain. Class C methylase has homologous sequence with the RS enzyme, coproporphyrinogen III oxidase (HemN), which also catalyzes the methylation of sp2-hybridized carbon centers yet it lacks the 2 cysteines required for methylation in mechanism of Class A. Clinical significance As with any biological process which regulates gene expression and/or function, anomalous DNA methylation is associated with genetic disorders such as ICF, Rett syndrome, and Fragile X syndrome. Cancer cells typically exhibit less DNA methylation activity in general, though often hypermethylation at sites which are unmethylated in normal cells; this overmethylation often functions as a way to inactivate tumor-suppressor genes. Inhibition of overall DNA methyltransferase activity has been proposed as a treatment option, but DNMT inhibitors, analogs of their cytosine substrates, have been found to be highly toxic due to their similarity to cytosine (see right); this similarity to the nucleotide causes the inhibitor to be incorporated into DNA translation, causing non-functioning DNA to be synthesized. A methylase which alters the ribosomal RNA binding site of the antibiotic linezolid causes cross-resistance to other antibiotics that act on the ribosomal RNA. Plasmid vectors capable of transmitting this gene are a cause of potentially dangerous cross resistance. Examples of methyltransferase enzymes relevant to disease: thiopurine methyltransferase: defects in this gene causes toxic accumulation of thiopurine compounds, drugs used in chemotherapy and immunosuppressant therapy methionine synthase: pernicious anemia, caused by Vitamin B12 deficiency, is caused by a lack of cofactor for the methionine synthase enzyme Applications in drug discovery and development Recent work has revealed the methyltransferases involved in methylation of naturally occurring anticancer agents to use S-Adenosyl methionine (SAM) analogs that carry alternative alkyl groups as a replacement for methyl. The development of the facile chemoenzymatic platform to generate and utilize differentially alkylated SAM analogs in the context of drug discovery and drug development is known as alkylrandomization. Applications in cancer treatment In human cells, it was found that m5C was associated with abnormal tumor cells in cancer. The role and potential application of m5C includes to balance the impaired DNA in cancer both hypermethylation and hypomethylation. 
An epigenetic repair of DNA can be applied by changing the m5C amount in both types of cancer cells (hypermethylation/ hypomethylation) and as well as the environment of the cancers to reach an equivalent point to inhibit tumor cells. Examples Examples include: Catechol-O-methyltransferase DNA methyltransferase Histone methyltransferase 5-Methyltetrahydrofolate-homocysteine methyltransferase O-methyltransferase methionine synthase corrinoid-iron sulfur protein References Further reading 3-D Structure of DNA Methyltransferase A novel methyltransferase : the 7SK snRNA Methylphosphate Capping Enzyme as seen on Flintbox "The Role of Methylation in Gene Expression" on Nature Scitable "Nutrition and Depression: Nutrition, Methylation, and Depression" on Psychology Today "DNA Methylation - What is DNA Methylation?" from News-Medical.net "Histone Lysine Methylation" Genetic pathways involving Histone Methyltransferases from Cell Signaling Technology EC 2.1.1 Methylation
Methyltransferase
[ "Chemistry" ]
3,329
[ "Methylation" ]
5,065,981
https://en.wikipedia.org/wiki/Hydrodynamic%20radius
The hydrodynamic radius of a macromolecule or colloid particle is R_hyd. The macromolecule or colloid particle is a collection of N subparticles. This is done most commonly for polymers; the subparticles would then be the units of the polymer. For polymers in solution, R_hyd is defined by 1/R_hyd = (1/N²) Σ_{i≠j} ⟨1/r_ij⟩, where r_ij is the distance between subparticles i and j, and where the angular brackets represent an ensemble average. The theoretical hydrodynamic radius was originally an estimate by John Gamble Kirkwood of the Stokes radius of a polymer, and some sources still use hydrodynamic radius as a synonym for the Stokes radius. Note that in biophysics, hydrodynamic radius refers to the Stokes radius, or commonly to the apparent Stokes radius obtained from size exclusion chromatography. The theoretical hydrodynamic radius arises in the study of the dynamic properties of polymers moving in a solvent. It is often similar in magnitude to the radius of gyration. Applications to aerosols The mobility of non-spherical aerosol particles can be described by the hydrodynamic radius. In the continuum limit, where the mean free path of the particle is negligible compared to a characteristic length scale of the particle, the hydrodynamic radius is defined as the radius that gives the same magnitude of the frictional force as that of a sphere with that radius, i.e. F_d = 6πμR_hyd v, where μ is the viscosity of the surrounding fluid and v is the velocity of the particle. This is analogous to the Stokes radius; however, the continuum description breaks down as the mean free path becomes comparable to the characteristic length scale of the particulate, so a correction factor is introduced such that the friction is correct over the entire Knudsen regime. As is often the case, the Cunningham correction factor C is used, where C = 1 + Kn (A1 + A2 exp(−A3/Kn)), Kn being the Knudsen number, and where the constants A1, A2 and A3 were found by Millikan to be 1.234, 0.414, and 0.876, respectively. Notes References Grosberg AY and Khokhlov AR. (1994) Statistical Physics of Macromolecules (translated by Atanov YA), AIP Press. Polymer physics Radii
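The aerosol relations above (Stokes drag divided by the Cunningham slip correction, with Millikan's constants 1.234, 0.414 and 0.876) translate directly into code. The particle radius, velocity, gas viscosity and mean free path used below are arbitrary illustrative numbers for air, and the Knudsen number is taken here as mean free path over particle radius.

import math

def cunningham_factor(kn):
    # C = 1 + Kn*(A1 + A2*exp(-A3/Kn)) with Millikan's constants
    A1, A2, A3 = 1.234, 0.414, 0.876
    return 1.0 + kn * (A1 + A2 * math.exp(-A3 / kn))

def drag_force(radius, velocity, viscosity=1.81e-5, mean_free_path=68e-9):
    # Slip-corrected Stokes drag on a sphere (SI units; air values assumed)
    kn = mean_free_path / radius
    return 6 * math.pi * viscosity * radius * velocity / cunningham_factor(kn)

# A 100 nm particle moving at 1 mm/s through air (hypothetical numbers)
print(drag_force(radius=100e-9, velocity=1e-3))   # drag in newtons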
Hydrodynamic radius
[ "Chemistry", "Materials_science" ]
441
[ "Polymer physics", "Polymer chemistry" ]
5,065,997
https://en.wikipedia.org/wiki/Zirconium%28IV%29%20silicate
Zirconium silicate, also zirconium orthosilicate, ZrSiO4, is a chemical compound, a silicate of zirconium. It occurs in nature as zircon, a silicate mineral. Powdered zirconium silicate is also known as zircon flour. Zirconium silicate is usually colorless, but impurities induce various colorations. It is insoluble in water, acids, alkali and aqua regia. Hardness is 7.5 on the Mohs scale. Structure and bonding Zircon consists of 8-coordinated Zr4+ centers linked to tetrahedral orthosilicate (SiO4)4− sites. The oxygen atoms are all triply bridging, each with the environment OZr2Si. Given its highly crosslinked structure, the material is hard, and hence prized as a gemstone and abrasive. Zr(IV) is a d0 ion. Consequently, the material is colorless and diamagnetic. Production Zirconium silicate occurs in nature as the mineral zircon. Concentrated sources of zircon are rare. It is mined from sand deposits and separated by gravity. Some sands contain a few percent of zircon. It can also be synthesized by fusion of SiO2 and ZrO2 in an arc furnace, or by reacting a zirconium salt with sodium silicate in an aqueous solution. Uses As of 1995, the annual consumption of zirconium silicate was nearly one million tons. The major applications exploit its refractory nature and resistance to corrosion by alkali materials. Two end-uses are enamels and ceramic glazes. In enamels and glazes it serves as an opacifier. It can also be present in some cements. Another use of zirconium silicate is as beads for milling and grinding. Thin films of zirconium silicate and hafnium silicate produced by chemical vapor deposition, most often MOCVD, can be used as high-k dielectrics as a replacement for silicon dioxide in semiconductors. Zirconium silicates have also been studied for potential use in medical applications. For example, ZS-9 is a zirconium silicate that was designed specifically to trap potassium ions over other ions throughout the gastrointestinal tract. Zirconium silicate is also used as a foundry sand due to its high thermal stability. It is also the primary source of zirconium, which is used in various applications, including in nuclear reactors, due to its high resistance to corrosion and low neutron absorption. Toxicity Zirconium silicate is an abrasive irritant for skin and eyes. Chronic exposure to dust can cause pulmonary granulomas, skin inflammation, and skin granuloma. However, there are no known adverse effects for normal, incidental ingestion. References Silicates Zirconium(IV) compounds Refractory materials High-κ dielectrics
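Since zircon is described above as the main source of zirconium, a quick composition check can be scripted; the molar masses are rounded.

# Approximate molar masses in g/mol
M_Zr, M_Si, M_O = 91.22, 28.09, 16.00
M_ZrSiO4 = M_Zr + M_Si + 4 * M_O          # about 183.3 g/mol

print(f"ZrSiO4 molar mass: {M_ZrSiO4:.2f} g/mol")
print(f"zirconium content by mass: {M_Zr / M_ZrSiO4:.1%}")   # about 49.8 %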
Zirconium(IV) silicate
[ "Physics" ]
649
[ "Refractory materials", "Materials", "Matter" ]
5,066,036
https://en.wikipedia.org/wiki/Unihemispheric%20slow-wave%20sleep
Unihemispheric slow-wave sleep (USWS) is sleep where one half of the brain rests while the other half remains alert. This is in contrast to normal sleep where both eyes are shut and both halves of the brain show unconsciousness. In USWS, also known as asymmetric slow-wave sleep, one half of the brain is in deep sleep, a form of non-rapid eye movement sleep and the eye corresponding to this half is closed while the other eye remains open. When examined by electroencephalography (EEG), the characteristic slow-wave sleep tracings are seen from one side while the other side shows a characteristic tracing of wakefulness. The phenomenon has been observed in a number of terrestrial, aquatic and avian species. Unique physiology, including the differential release of the neurotransmitter acetylcholine, has been linked to the phenomenon. USWS offers a number of benefits, including the ability to rest in areas of high predation or during long migratory flights. The behaviour remains an important research topic because USWS is possibly the first animal behaviour which uses different regions of the brain to simultaneously control sleep and wakefulness. The greatest theoretical importance of USWS is its potential role in elucidating the function of sleep by challenging various current notions. Researchers have looked to animals exhibiting USWS to determine if sleep must be essential; otherwise, species exhibiting USWS would have eliminated the behaviour altogether through evolution. The amount of time spent sleeping during the unihemispheric slow-wave stage is considerably less than the bilateral slow-wave sleep. In the past, aquatic animals, such as dolphins and seals, had to regularly surface in order to breathe and regulate body temperature. USWS might have been generated by the need to perform these vital activities simultaneously with sleep. On land, birds can switch between sleeping with both hemispheres to one hemisphere. Due to their poorly webbed feet and long wings, which are not completely waterproof, it is not energetically efficient for them to make rest stops or land on water, only to take flight again. Using unihemispheric slow-wave sleep, birds are able to maintain environmental awareness and aerodynamic control of wings while obtaining the necessary sleep they need to sustain attention during wakefulness. Their sleep is more asymmetric in flight than on land, and they sleep mostly while circling air currents during flight. The eye connected to the awake hemisphere of their brain is the one facing the direction of flight. Once they land, they pay off their sleep debt, as their REM sleep duration significantly decreases and slow-wave sleep increases. Despite the reduced sleep quantity, species having USWS do not present limits at a behavioral or healthy level. Cetaceans, such as dolphins, show preserved health as well as great memory skills. Indeed, cetaceans, seals, and birds compensate for the lack of complete sleep with efficient immune systems, preserved brain plasticity, thermoregulation, and restoration of brain metabolism. Physiology Slow-wave sleep (SWS), also known as Stage 3, is characterized by a lack of movement and difficulty of arousal. Slow-wave sleep occurring in both hemispheres is referred to as bihemispheric slow-wave sleep (BSWS) and is common among most animals. Slow-wave sleep contrasts with rapid eye movement sleep (REM), which can only occur simultaneously in both hemispheres. 
In most animals, slow-wave sleep is characterized by high amplitude, low frequency EEG readings. This is also known as the desynchronized state of the brain, or deep sleep. In USWS, only one hemisphere exhibits the deep sleep EEG while the other hemisphere exhibits an EEG typical of wakefulness with a low amplitude and high frequency. There also exist instances in which hemispheres are in transitional stages of sleep, but they have not been the subject of study due to their ambiguous nature. USWS represents the first known behavior in which one part of the brain controls sleep while another part controls wakefulness. Generally, when the whole amount of sleeping of each hemisphere is summed, both hemispheres get equal amounts of USWS. However, when every single session is taken into account, a large asymmetry of USWS episodes can be observed. This information suggests that at one time the neural circuit is more active in one hemisphere than on the other one and vice versa the following time. According to Fuller, awakening is characterized by high activity of neural groups that promote awakening: they activate the cortex as well as subcortical structures and simultaneously inhibit neural groups which promotes sleep. Therefore, sleep is defined by the opposite mechanism. It can be assumed that cetaceans show a similar structure, but the neural groups are stimulated according to the need of each hemisphere. So, neural mechanisms that promote sleep are predominant in the sleeping hemisphere, while the ones that promote awakening are more active in the non-sleeping hemisphere. Role of acetylcholine Due to the origin of USWS in the brain, neurotransmitters are believed to be involved in its regulation. The neurotransmitter acetylcholine has been linked to hemispheric activation in northern fur seals. Researchers studied seals in controlled environments by observing behaviour as well as through surgically implanted EEG electrodes. Acetylcholine is released in nearly the same amounts per hemisphere in bilateral slow-wave sleep. However, in USWS, the maximal release of the cortical acetylcholine neurotransmitter is lateralized to the hemisphere exhibiting an EEG trace resembling wakefulness. The hemisphere exhibiting SWS is marked by the minimal release of acetylcholine. This model of acetylcholine release has been further discovered in additional species such as the bottlenose dolphin. Eye opening In domestic chicks and other species of birds exhibiting USWS, one eye remained open contra-lateral (on the opposite side) to the "awake" hemisphere. The closed eye was shown to be opposite the hemisphere engaging in slow-wave sleep. Learning tasks, such as those including predator recognition, demonstrated the open eye could be preferential. This has also been shown to be the favored behavior of belugas, although inconsistencies have arisen directly relating the sleeping hemisphere and open eye. Keeping one eye open aids birds in engaging in USWS while mid-flight as well as helping them observe predators in their vicinity. Also crocodilians have been shown to sleep with one eye open. Given that USWS is preserved also in blind animals or during a lack of visual stimuli, it cannot be considered as a consequence of keeping an eye open while sleeping. Furthermore, the open eye in dolphins does not forcibly activate the contralateral hemisphere. Although unilateral vision plays a considerable role in keeping active the contralateral hemisphere, it is not the motive power of USWS. 
Consequently, USWS might be generated by endogenous mechanisms. Thermoregulation Brain temperature has been shown to drop when a sleeping EEG is exhibited in one or both hemispheres. This decrease in temperature has been linked to a method to thermoregulate and conserve energy while maintaining the vigilance of USWS. The thermoregulation has been demonstrated in dolphins and is believed to be conserved among species exhibiting USWS. Anatomical variations Smaller corpus callosum USWS requires hemispheric separation to isolate the cerebral hemispheres enough to ensure that the one can engage in SWS while the other is awake. The corpus callosum is the anatomical structure in the mammalian brain which allows for interhemispheric communication. Cetaceans have been observed to have a smaller corpus callosum when compared to other mammals. Similarly, birds lack a corpus callosum altogether and have only few means of interhemispheric connections. Other evidence contradicts this potential role; sagittal transsections of the corpus callosum have been found to result in strictly bihemispheric sleep. As a result, it seems this anatomical difference, though well correlated, does not directly explain the existence of USWS. Noradrenergic diffuse modulatory system variations A promising method of identifying the neuroanatomical structures responsible for USWS is continuing comparisons of brains that exhibit USWS with those that do not. Some studies have shown induced asynchronous SWS in non-USWS-exhibiting animals as a result of sagittal transactions of subcortical regions, including the lower brainstem, while leaving the corpus callosum intact. Other comparisons found that mammals exhibiting USWS have a larger posterior commissure and increased decussation of ascending fibres from the locus coeruleus in the brainstem. This is consistent with the fact that one form for neuromodulation, the noradrenergic diffuse modulatory system present in the locus coeruleus, is involved in regulating arousal, attention, and sleep-wake cycles. During USWS the proportion of noradrenergic secretion is asymmetric. It is indeed high in the awaken hemisphere and low in the sleeping one. The continuous discharge of noradrenergic neurons stimulates heat production: the awake hemisphere of dolphins shows a higher, but stable, temperature. On the contrary, the sleeping hemisphere reports a slightly lower temperature compared to the other hemisphere. According to researchers, the difference in hemispheric temperatures may play a role in shifting between the SWS and awaken status. Complete crossing of the optic nerve Complete crossing (decussation) of the nerves at the optic chiasm in birds has also stimulated research. Complete decussation of the optic tract has been seen as a method of ensuring the open eye strictly activates the contralateral hemisphere. Some evidence indicates that this alone is not enough as blindness would theoretically prevent USWS if retinal nerve stimuli were the sole player. However, USWS is still exhibited in blinded birds despite the absence of visual input. Benefits Many species of birds and marine mammals have advantages due to their unihemispheric slow-wave sleep capability, including, but not limited to, increased ability to evade potential predators and the ability to sleep during migration. Unihemispheric sleep allows visual vigilance of the environment, preservation of movement, and in cetaceans, control of the respiratory system. 
Adaptation to high-risk predation Most species of birds are able to detect approaching predators during unihemispheric slow-wave sleep. During flight, birds maintain visual vigilance by utilizing USWS and by keeping one eye open. The utilization of unihemispheric slow-wave sleep by avian species is directly proportional to the risk of predation. In other words, the usage of USWS of certain species of birds increases as the risk of predation increases. Survival of the fittest adaptation The evolution of both cetaceans and birds may have involved some mechanisms for the purpose of increasing the likelihood of avoiding predators. Certain species, especially of birds, that acquired the ability to perform unihemispheric slow-wave sleep had an advantage and were more likely to escape their potential predators over other species that lacked the ability. Regulation based on surroundings Birds can sleep more efficiently with both hemispheres sleeping simultaneously (bihemispheric slow-wave sleep) when in safe conditions, but will increase the usage of USWS if they are in a potentially more dangerous environment. It is more beneficial to sleep using both hemispheres; however, the positives of unihemispheric slow-wave sleep prevail over its negatives under extreme conditions. While in unihemispheric slow-wave sleep, birds will sleep with one open eye towards the direction from which predators are more likely to approach. When birds do this in a flock, it's called the "group edge effect". The mallard is one bird that has been used experimentally to illustrate the "group edge effect". Birds positioned at the edge of the flock are most alert, scanning often for predators. These birds are more at risk than the birds in the center of the flock and are required to be on the lookout for both their own safety and the safety of the group as a whole. They have been observed spending more time in unihemispheric slow-wave sleep than the birds in the center. Since USWS allows for the one eye to be open, the cerebral hemisphere that undergoes slow-wave sleep varies depending on the position of the bird relative to the rest of the flock. If the bird's left side is facing outward, the left hemisphere will be in slow-wave sleep; if the bird's right side is facing outward, the right hemisphere will be in slow-wave sleep. This is because the eyes are contralateral to the left and right hemispheres of the cerebral cortex. The open eye of the bird is always directed towards the outside of the group, in the direction from which predators could potentially attack. Surfacing for air and pod cohesion Unihemispheric slow-wave sleep seems to allow the simultaneous sleeping and surfacing to breathe of aquatic mammals including both dolphins and seals. Bottlenose dolphins are one specific species of cetaceans that have been proven experimentally to use USWS in order to maintain both swimming patterns and the surfacing for air while sleeping. In addition, a reversed version of the "group edge effect" has been observed in pods of Pacific white-sided dolphins. Dolphins swimming on the left side of the pod have their right eyes open while dolphins swimming on the right side of the pod have their left eyes open. Unlike in some species of birds, the open eyes of these cetaceans are facing the inside of the group, not the outside. The dangers of possible predation do not play a significant role during USWS in Pacific white-sided dolphins. 
It has been suggested that this species utilizes this reversed version of the "group edge effect" in order to maintain pod formation and cohesion while maintaining unihemispheric slow-wave sleep. Rest during long bird flights While migrating, birds may undergo unihemispheric slow-wave sleep in order to simultaneously sleep and visually navigate flight. Certain species may thus avoid a need to make frequent stops along the way. Certain bird species are more likely to utilize USWS during soaring flight, but it is possible for birds to undergo USWS in flapping flight as well. Much is still unknown about the usage of unihemispheric slow-wave sleep, since the inter-hemispheric EEG asymmetry that is viewed in idle birds may not be equivalent to that of birds that are flying. Species exhibiting USWS Although humans show reduced left-hemisphere delta waves during slow-wave sleep in an unfamiliar bedchamber, this is not wakeful alertness of USWS. Aquatic mammals Cetaceans Of all the cetacean species, USWS has been found to be exhibited in the following species Amazon river dolphin (Inia geoffrensis) Beluga whale (Delphinapterus leucus) Narwhal (Monodon monoceros) Bottlenose dolphin (Tursiops truncates) Pacific white-sided dolphin (Sagmatias obliquidens) Pilot whale (Globicephala scammoni) False killer whale (Pseudorca crassidens) Porpoise (Phocoena phocoena) Orca (Orcinus orca) Sperm whale (Physeter macrocephalus) Pinnipeds Though pinnipeds are capable of sleeping on either land or water, it has been found that pinnipeds that exhibit USWS do so at a higher rate while sleeping in water. Though no USWS has been observed in true seals, four different species of eared seals have been found to exhibit USWS including Northern fur seal (Callorhinus ursinus) Significant research has been done illustrating that the northern fur seal can alternate between BSWS and USWS depending on its location while sleeping. While on land, 69% of all SWS is present bilaterally; however, when sleep takes place in water, 68% of all SWS is found with interhemispheric EEG asymmetry, indicating USWS. Southern sea lion (Otari bryonia) Steller sea lion (Eumetopias jubatus) Sirenia In the final order of aquatic mammals, sirenia, experiments have only exhibited USWS in the Amazonian manatee (Trichechus inunguis). Birds The common swift (Apus apus) was the best candidate for research aimed at determining whether or not birds exhibiting USWS can sleep in flight. The selection of the common swift as a model stemmed from observations elucidating the fact that the common swift left its nest at night, only returning in the early morning. Still, evidence for USWS is strictly circumstantial and based on the notion that if swifts must sleep to survive, they must do so via aerial roosting as little time is spent sleeping in a nest. Multiple other species of birds have also been found to exhibit USWS including Common blackbird (Turdus merula) Domestic chicken (Gallus gallus domesticus), Glaucous-winged gull (Larus glaucescens) Japanese quail (Coturnix japonica) Mallard (Anas platyrhynchos). Northern bobwhite (Colinus virginianus), Orange-fronted parakeet (Aratinga canicularis) Peregrine falcon (Falco peregrinus) White-crowned sparrow (Zonotrichia leucophrys gambelii) Future research Recent studies have illustrated that the white-crowned sparrow, as well as other passerines, have the capability of sleeping most significantly during the migratory season while in flight. 
However, the sleep patterns in this study were observed during migratory restlessness in captivity and might not be analogous to those of free-flying birds. Free-flying birds might be able to spend some time sleeping while in non-migratory flight as well when in the unobstructed sky as opposed to in controlled captive conditions. To truly determine if birds can sleep in flight, recordings of brain activity must take place during flight instead of after landing. A method of recording brain activity in pigeons during flight has recently proven promising in that it could obtain an EEG of each hemisphere but for relatively short periods of time. Coupled with simulated wind tunnels in a controlled setting, these new methods of measuring brain activity could elucidate the truth behind whether or not birds sleep during flight. Additionally, based on research elucidating the role of acetylcholine in control of USWS, additional neurotransmitters are being researched to understand their roles in the asymmetric sleep model. See also Sleep in animals References Sleep physiology Unsolved problems in neuroscience Vision
Unihemispheric slow-wave sleep
[ "Biology" ]
3,907
[ "Behavior", "Sleep physiology", "Sleep" ]
5,066,403
https://en.wikipedia.org/wiki/Firefly%20luciferin
Firefly luciferin (also known as beetle luciferin) is the luciferin, or precursor of the light-emitting compound, used in the firefly (Lampyridae), railroad worm (Phengodidae), starworm (Rhagophthalmidae), and click-beetle (Pyrophorini) bioluminescent systems. It is the substrate of firefly luciferase (EC 1.13.12.7), which is responsible for the characteristic light emission of many firefly and other insect species in the visible spectrum, ranging from 530 to 630 nm. As with other luciferins, oxygen is essential for the luminescence mechanism, which involves the decomposition of a cyclic peroxide to produce excited-state molecules capable of emitting light as they relax to the ground state. Additionally, it has been found that adenosine triphosphate (ATP) and magnesium are required for light emission. History Much of the early work on the chemistry of firefly luminescence was done in the lab of William D. McElroy at Johns Hopkins University. The luciferin was first isolated and purified in 1949 from a large number of specimens, though it would be several years until a procedure was developed to crystallize the compound in high yield. This, along with the synthesis and structure elucidation, was accomplished by Dr. Emil H. White at the Johns Hopkins University Department of Chemistry. The procedure was an acid-base extraction, given the carboxylic acid group on the luciferin. The luciferin could be effectively extracted using ethyl acetate at low pH from the powder of approximately 15,000 firefly lanterns. The structure was later confirmed by combined use of infrared spectroscopy, UV–vis spectroscopy and synthetic methods to degrade the compound into identifiable fragments. Properties Crystalline luciferin was found to be fluorescent, absorbing ultraviolet light with a peak at 327 nm and emitting light with a peak at 530 nm. Visible emission occurs upon relaxation of the oxyluciferin from a singlet excited state down to its ground state. Alkaline solutions caused a redshift of the absorption, likely due to deprotonation of the hydroxyl group on the benzothiazole, but did not affect the fluorescence emission. It was also found that the luciferyl adenylate (the AMP ester of luciferin) spontaneously emits light in solution. Different species of fireflies all use the same luciferin; however, the color of the light emitted can differ greatly. The light from Photuris pennsylvanica was measured to be 552 nm (green-yellow) while Pyrophorus plagiophthalamus was measured to emit light at 582 nm (orange) in the ventral organ. Such differences are likely due to pH changes or differences in the primary structure of the luciferase. Modification of the firefly luciferin substrate has led to "red-shifted" emissions (up to an emission wavelength of 675 nm). Biological activity The in vivo synthesis of firefly luciferin is not completely understood. Only the final step of the enzymatic pathway has been studied: the condensation of D-cysteine with 2-cyano-6-hydroxybenzothiazole, which is also the reaction used to produce the compound synthetically. This was confirmed by radiolabeling of atoms in the two compounds and by identification of a luciferin-regenerating enzyme. In fireflies, oxidation of luciferin, catalyzed by luciferase, yields a peroxy compound, a 1,2-dioxetanone. The dioxetanone is unstable and decomposes, releasing carbon dioxide and producing an excited-state ketone, which releases its excess energy by emitting light (bioluminescence). 
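To put the emission wavelengths quoted above on an energy scale, the short sketch below converts them to photon energies with the standard relation E = hc/λ. It is an illustrative aside, not part of the original article; it simply reuses the 552 nm and 582 nm figures given in the text.

```python
H = 6.626e-34      # Planck constant, J s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, returned in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for name, nm in [("Photuris pennsylvanica", 552),
                 ("Pyrophorus plagiophthalamus (ventral organ)", 582)]:
    print(f"{name}: {nm} nm is about {photon_energy_ev(nm):.2f} eV per photon")
```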
Firefly luciferin and modified substrates are fatty acid mimics and have been used to localize fatty acid amide hydrolase (FAAH) in vivo. Firefly luciferin is a substrate of the ABCG2 transporter and has been used as part of a bioluminescence imaging high throughput assay to screen for inhibitors of the transporter. References External links Bioluminescence Page showing major luciferin types Benzothiazoles Biochemical reactions Bioluminescence Carboxylic acids Thiazolines Luciferins
Firefly luciferin
[ "Chemistry", "Biology" ]
894
[ "Luminescence", "Carboxylic acids", "Functional groups", "Luciferins", "Biochemical reactions", "Biochemistry", "Bioluminescence" ]
5,066,430
https://en.wikipedia.org/wiki/Gyration%20tensor
In physics, the gyration tensor is a tensor that describes the second moments of position of a collection of particles,

$S_{mn} = \frac{1}{N}\sum_{i=1}^{N} r_m^{(i)} r_n^{(i)},$

where $r_m^{(i)}$ is the $m$-th Cartesian coordinate of the position vector $\mathbf{r}^{(i)}$ of the $i$-th particle. The origin of the coordinate system has been chosen such that $\sum_{i=1}^{N} \mathbf{r}^{(i)} = \mathbf{0}$, i.e. in the system of the center of mass. Another definition, which is mathematically identical but gives an alternative calculation method, is

$S_{mn} = \frac{1}{2N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} \left(r_m^{(i)} - r_m^{(j)}\right)\left(r_n^{(i)} - r_n^{(j)}\right).$

Therefore, the x-y component of the gyration tensor for $N$ particles in Cartesian coordinates would be

$S_{xy} = \frac{1}{N}\sum_{i=1}^{N} x_i\, y_i.$

In the continuum limit,

$S_{mn} = \frac{\int \rho(\mathbf{r})\, r_m r_n \,\mathrm{d}^3 r}{\int \rho(\mathbf{r})\,\mathrm{d}^3 r},$

where $\rho(\mathbf{r})$ represents the number density of particles at position $\mathbf{r}$. Although they have different units, the gyration tensor is related to the moment of inertia tensor. The key difference is that the particle positions are weighted by mass in the inertia tensor, whereas the gyration tensor depends only on the particle positions; mass plays no role in defining the gyration tensor. Diagonalization Since the gyration tensor is a symmetric 3x3 matrix, a Cartesian coordinate system can be found in which it is diagonal,

$S = \begin{pmatrix} \lambda_x & 0 & 0 \\ 0 & \lambda_y & 0 \\ 0 & 0 & \lambda_z \end{pmatrix},$

where the axes are chosen such that the diagonal elements are ordered $\lambda_x \le \lambda_y \le \lambda_z$. These diagonal elements are called the principal moments of the gyration tensor. Shape descriptors The principal moments can be combined to give several parameters that describe the distribution of particles. The squared radius of gyration is the sum of the principal moments: $R_g^2 = \lambda_x + \lambda_y + \lambda_z$. The asphericity is defined by $b = \lambda_z - \tfrac{1}{2}(\lambda_x + \lambda_y)$, which is always non-negative and zero only when the three principal moments are equal, λx = λy = λz. This zero condition is met when the distribution of particles is spherically symmetric (hence the name asphericity) but also whenever the particle distribution is symmetric with respect to the three coordinate axes, e.g., when the particles are distributed uniformly on a cube, tetrahedron or other Platonic solid. Similarly, the acylindricity is defined by $c = \lambda_y - \lambda_x$, which is always non-negative and zero only when the two principal moments are equal, λx = λy. This zero condition is met when the distribution of particles is cylindrically symmetric (hence the name, acylindricity), but also whenever the particle distribution is symmetric with respect to the two coordinate axes, e.g., when the particles are distributed uniformly on a regular prism. Finally, the relative shape anisotropy is defined as

$\kappa^2 = \frac{b^2 + \tfrac{3}{4}c^2}{R_g^4} = \frac{3}{2}\,\frac{\lambda_x^2 + \lambda_y^2 + \lambda_z^2}{(\lambda_x + \lambda_y + \lambda_z)^2} - \frac{1}{2},$

which is bounded between zero and one. $\kappa^2 = 0$ only occurs if all points are spherically symmetric, and $\kappa^2 = 1$ only occurs if all points lie on a line. References Polymer physics Tensors
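As an illustration of the definitions above, the following sketch computes the gyration tensor and the derived shape descriptors for an arbitrary set of particle positions with NumPy. It is a minimal example added here, not part of the original article; the variable names and the ordering convention (λx ≤ λy ≤ λz) follow the text.

```python
import numpy as np

def gyration_tensor(positions):
    """Gyration tensor S (3x3) of N particle positions given as an (N, 3) array."""
    r = np.asarray(positions, dtype=float)
    r = r - r.mean(axis=0)          # shift to the centre-of-mass frame
    return r.T @ r / len(r)         # S_mn = (1/N) * sum_i r_m^(i) r_n^(i)

def shape_descriptors(S):
    """Principal moments and the shape parameters defined in the article."""
    lam = np.sort(np.linalg.eigvalsh(S))           # lambda_x <= lambda_y <= lambda_z
    lx, ly, lz = lam
    rg2 = lam.sum()                                 # squared radius of gyration
    b = lz - 0.5 * (lx + ly)                        # asphericity
    c = ly - lx                                     # acylindricity
    kappa2 = 1.5 * (lam**2).sum() / rg2**2 - 0.5    # relative shape anisotropy
    return dict(principal_moments=lam, Rg2=rg2, asphericity=b,
                acylindricity=c, anisotropy=kappa2)

# Example: points on a straight line should give an anisotropy close to 1.
line = np.c_[np.linspace(-1, 1, 50), np.zeros(50), np.zeros(50)]
print(shape_descriptors(gyration_tensor(line)))
```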
Gyration tensor
[ "Chemistry", "Materials_science", "Engineering" ]
515
[ "Polymer physics", "Tensors", "Polymer chemistry" ]
5,066,836
https://en.wikipedia.org/wiki/Coordinated%20flight
In aviation, coordinated flight of an aircraft is flight without sideslip. When an aircraft is flying with zero sideslip a turn and bank indicator installed on the aircraft's instrument panel usually shows the ball in the center of the spirit level. The occupants perceive no lateral acceleration of the aircraft and their weight to be acting straight downward into their seats. Particular care to maintain coordinated flight is required by the pilot when entering and leaving turns. Advantages Coordinated flight is usually preferred over uncoordinated flight for the following reasons: it is more comfortable for the occupants it minimises the drag force on the aircraft it causes fuel to be drawn equally from tanks in both wings it minimises the risk of entering a spin Instrumentation Airplanes and helicopters are usually equipped with a turn and bank indicator to provide their pilots with a continuous display of the lateral balance of their aircraft so the pilots can ensure coordinated flight. Glider pilots attach a piece of coloured string to the outside of the canopy to sense the sideslip angle and assist in maintaining coordinated flight. Axes of rotation An airplane has three axes of rotation: Pitch – in which the nose of the airplane moves up or down. This is typically controlled by the elevator at the rear of the airplane. Yaw – in which the nose of the airplane moves left or right. This is typically controlled by the rudder at the rear of the airplane. Roll (bank) – in which one wing of the airplane moves up and the other moves down. This is typically controlled by ailerons on the wings of the airplane. Coordinated flight requires the pilot to use pitch, roll and yaw control simultaneously. See also flight dynamics. Coordinating the turn If the pilot were to use only the rudder to initiate a turn in the air, the airplane would tend to "skid" to the outside of the turn. If the pilot were to use only the ailerons to initiate a turn in the air, the airplane would tend to "slip" toward the lower wing. If the pilot were to fail to use the elevator to increase the angle of attack throughout the turn, the airplane would also tend to "slip" toward the lower wing. However, if the pilot makes appropriate use of the rudder, ailerons and elevator to enter and leave the turn such that sideslip and lateral acceleration are zero the airplane will be in coordinated flight. See also Adverse yaw References Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London Coordinated flight Retrieved on 2008-09-19 Aerodynamics
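The article gives no formulas, but the balance of forces in a coordinated, level turn is commonly summarised by the textbook relations tan φ = V²/(gR) and load factor n = 1/cos φ. The sketch below applies these standard relations; the function name and the example numbers are illustrative and are not taken from the article.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def coordinated_turn(speed_ms, bank_deg):
    """Turn radius, turn rate and load factor for a coordinated, level turn.

    Uses the standard relations tan(phi) = V^2 / (g R) and n = 1 / cos(phi).
    """
    phi = math.radians(bank_deg)
    radius = speed_ms**2 / (G * math.tan(phi))          # metres
    rate = math.degrees(G * math.tan(phi) / speed_ms)   # degrees per second
    load_factor = 1.0 / math.cos(phi)                   # in units of g
    return radius, rate, load_factor

# Example: 60 m/s (about 117 knots) at a 30-degree bank angle
r, omega, n = coordinated_turn(60.0, 30.0)
print(f"radius {r:.0f} m, turn rate {omega:.1f} deg/s, load factor {n:.2f} g")
```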
Coordinated flight
[ "Chemistry", "Engineering" ]
509
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
21,241,712
https://en.wikipedia.org/wiki/Bipartite%20double%20cover
In graph theory, the bipartite double cover of an undirected graph G is a bipartite covering graph of G, with twice as many vertices as G. It can be constructed as the tensor product of graphs, G × K2. It is also called the Kronecker double cover, canonical double cover or simply the bipartite double of G. It should not be confused with a cycle double cover of a graph, a family of cycles that includes each edge twice. Construction The bipartite double cover of G has two vertices u_i and w_i for each vertex v_i of G. Two vertices u_i and w_j are connected by an edge in the double cover if and only if v_i and v_j are connected by an edge in G. For instance, in an illustration of the bipartite double cover of a non-bipartite graph G, each vertex of the tensor product can be shown using a color taken from the first term of the product (G) and a shape taken from the second term of the product (K2); the vertices corresponding to one endpoint of K2 are then drawn as circles, while those corresponding to the other endpoint are drawn as squares. The bipartite double cover may also be constructed using adjacency matrices (as described below) or as the derived graph of a voltage graph in which each edge of G is labeled by the nonzero element of the two-element group. Examples The bipartite double cover of the Petersen graph is the Desargues graph. The bipartite double cover of a complete graph Kn is a crown graph (a complete bipartite graph Kn,n minus a perfect matching). In particular, the bipartite double cover of the graph of a tetrahedron, K4, is the graph of a cube. The bipartite double cover of an odd-length cycle graph is a cycle of twice the length, while the bipartite double of any bipartite graph (such as an even-length cycle) is formed by two disjoint copies of the original graph. Matrix interpretation If an undirected graph G has a matrix A as its adjacency matrix, then the adjacency matrix of the double cover of G is the block matrix $\begin{pmatrix} 0 & A \\ A & 0 \end{pmatrix}$, and the biadjacency matrix of the double cover of G is just A itself. That is, the conversion from a graph to its double cover can be performed simply by reinterpreting A as a biadjacency matrix instead of as an adjacency matrix. More generally, the reinterpretation of the adjacency matrices of directed graphs as biadjacency matrices provides a combinatorial equivalence between directed graphs and balanced bipartite graphs. Properties The bipartite double cover of any graph G is a bipartite graph; both parts of the bipartite graph have one vertex for each vertex of G. A bipartite double cover is connected if and only if G is connected and non-bipartite. The bipartite double cover is a special case of a double cover (a 2-fold covering graph). A double cover in graph theory can be viewed as a special case of a topological double cover. If G is a non-bipartite symmetric graph, the double cover of G is also a symmetric graph; several known cubic symmetric graphs may be obtained in this way. For instance, the double cover of K4 is the graph of a cube; the double cover of the Petersen graph is the Desargues graph; and the double cover of the graph of the dodecahedron is a 40-vertex symmetric cubic graph. It is possible for two different graphs to have isomorphic bipartite double covers. For instance, the Desargues graph is not only the bipartite double cover of the Petersen graph, but is also the bipartite double cover of a different graph that is not isomorphic to the Petersen graph. 
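The matrix interpretation above maps directly onto code. The following sketch (an illustration added here, not part of the article) uses NumPy and NetworkX to build the double cover by reinterpreting the adjacency matrix A as a biadjacency matrix, i.e. forming the block matrix [[0, A], [A, 0]], and verifies the Petersen/Desargues example.

```python
import networkx as nx
import numpy as np

def bipartite_double_cover(G):
    """Bipartite double cover (Kronecker cover) of an undirected graph G."""
    A = nx.to_numpy_array(G)
    Z = np.zeros_like(A)
    # Reinterpret A as a biadjacency matrix: the cover's adjacency matrix
    # is the block matrix [[0, A], [A, 0]].
    cover_adjacency = np.block([[Z, A], [A, Z]])
    return nx.from_numpy_array(cover_adjacency)

petersen = nx.petersen_graph()
cover = bipartite_double_cover(petersen)

print(nx.is_bipartite(cover))                          # True
print(nx.is_isomorphic(cover, nx.desargues_graph()))   # True: the Desargues graph
```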
Not every bipartite graph is a bipartite double cover of another graph; for a bipartite graph H to be the bipartite double cover of another graph, it is necessary and sufficient that the automorphisms of H include an involution that maps each vertex to a distinct and non-adjacent vertex. For instance, the graph with two vertices and one edge is bipartite but is not a bipartite double cover, because it has no non-adjacent pairs of vertices to be mapped to each other by such an involution; on the other hand, the graph of the cube is a bipartite double cover, and has an involution that maps each vertex to the diametrically opposite vertex. An alternative characterization of the bipartite graphs that may be formed by the bipartite double cover construction has also been obtained. Name In a connected graph that is not bipartite, only one double cover is bipartite, but when the graph is bipartite or disconnected there may be more than one. For this reason, Tomaž Pisanski has argued that the name "bipartite double cover" should be deprecated in favor of the "canonical double cover" or "Kronecker cover", names which are unambiguous. Other double covers In general, a graph may have multiple double covers that are different from the bipartite double cover. A graph H is a covering graph of G if there is a surjective local isomorphism f from H to G: for each vertex v of H, the restriction of f to the neighbourhood of v is a bijection onto the neighbourhood of f(v) in G. In particular, corresponding vertices have the same degree. The graph H is a double cover (or 2-fold cover or 2-lift) of G if the preimage of each vertex of G has exactly two elements. A double cover need not be bipartite, and a graph may have several distinct double covers; some of these may be bipartite without being the Kronecker cover. As another example, the graph of the icosahedron is a double cover of the complete graph K6; to obtain a covering map from the icosahedron to K6, map each pair of opposite vertices of the icosahedron to a single vertex of K6. However, the icosahedron is not bipartite, so it is not the bipartite double cover of K6. Instead, it can be obtained as the orientable double cover of an embedding of K6 on the projective plane. The double covers of a graph correspond to the different ways to sign the edges of the graph. See also Bipartite half Notes References The "coverings" in the title of one of the cited papers refer to the vertex cover problem, not to bipartite double covers; it is nevertheless cited as the source of the idea of reinterpreting the adjacency matrix as a biadjacency matrix. External links Graph operations Bipartite graphs
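The involution characterization above can be checked by brute force for very small graphs. The sketch below (illustrative only, and exponential in the number of vertices) searches for an automorphism that is a fixed-point-free involution mapping every vertex to a non-adjacent vertex, and reproduces the two examples from the text: the single-edge graph fails, while the graph of the cube succeeds.

```python
from itertools import permutations
import networkx as nx

def has_cover_involution(H):
    """Brute-force search for an automorphism of H that is an involution
    mapping every vertex to a distinct, non-adjacent vertex.
    Only intended for very small example graphs."""
    nodes = list(H.nodes)
    edges = {frozenset(e) for e in H.edges}
    for perm in permutations(nodes):
        f = dict(zip(nodes, perm))
        # involution, fixed-point-free, image always non-adjacent
        if any(f[v] == v or H.has_edge(v, f[v]) or f[f[v]] != v for v in nodes):
            continue
        # must also be an automorphism: every edge maps to an edge
        if all(frozenset((f[u], f[v])) in edges for u, v in H.edges):
            return True
    return False

print(has_cover_involution(nx.path_graph(2)))    # single edge: False
print(has_cover_involution(nx.cubical_graph()))  # graph of the cube: True
```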
Bipartite double cover
[ "Mathematics" ]
1,479
[ "Mathematical relations", "Graph theory", "Graph operations" ]
21,242,503
https://en.wikipedia.org/wiki/International%20Conference%20of%20Physics%20Students
The International Conference of Physics Students (ICPS) is an annual conference of the International Association of Physics Students (IAPS). Usually, up to 500 students from all over the world attend the event, which takes place in another country every year in August. The event includes the opportunity for students at bachelor, master and doctoral level to present their research, whilst listening and interacting with invited speakers of international reputation. During the event, usually lasting between 5 and 7 days, the IAPS holds its Annual General Meeting (AGM) and elects a new Executive Committee. The choice of the host country of ICPS is made two years in advance. Program The main component of the conference consists of lectures given by the students themselves for other students. Guest lectures held by invited speakers and lab tours complete the scientific program. Further activities include city tours, excursions and social events. The participation fee is usually close to €200 per person, including accommodation, food and any extra activity organised by the local committee. Conference venues The following list contains the venues of the ICPS conferences 2022 Puebla, Mexico 2021 Copenhagen, Denmark 2020 Puebla, Mexico (cancelled due to COVID-19 pandemic) 2019 Cologne, Germany 2018 Helsinki, Finland 2017 Turin, Italy 2016 Malta 2015 Zagreb, Croatia 2014 Heidelberg, Germany 2013 Edinburgh, United Kingdom 2012 Utrecht, The Netherlands 2011 Budapest, Hungary 2010 Graz, Austria 2009 Split, Croatia 2008 Kraków, Poland 2007 London, United Kingdom 2006 Bucharest, Romania 2005 Coimbra, Portugal 2004 Novi Sad, Serbia and Montenegro 2003 Odense, Denmark 2002 Budapest, Hungary 2001 Dublin, Ireland 2000 Zadar, Croatia 1999 Helsinki, Finland 1998 Coimbra, Portugal 1997 Vienna, Austria 1996 Szeged, Hungary 1995 Copenhagen, Denmark 1994 St. Petersburg, Russia 1993 Bodrum, Turkey 1992 Lisbon, Portugal 1991 Vienna, Austria 1990 Amsterdam, Netherlands 1989 Freiburg, Germany 1988 Prague, Czechoslovakia 1987 Debrecen, Hungary 1986 Budapest, Hungary History In 1985 a group of Hungarian students decided to host a gathering of Physics students from all over the world. This resulted in the first International Conference of Physics Students in 1986. Due to the large success of this conference a second meeting in 1987 was organized in Debrecen, Hungary. At this occasion the International Association of Physics Students was founded. Furthermore, it was decided to have an International meeting annually. Since then, 30 conferences have taken place. ICPS 2017 The ICPS 2017 took place in Turin, between August 7 and August 14, hosted by the Italian Association of Physics Students (Associazione Italiana Studenti di Fisica) (AISF). The Italian Organizing Committee prepared the formal bid to host the conference only one year after its foundation, when it was not yet formally recognized as a National Committee of IAPS. The event was mostly held at the Campus Einaudi, with an opening ceremony hosted at the Cavallerizza Reale and the Rector's Palace of the University of Turin. Approximately 450 students attended the event and almost 50 volunteers from AISF were involved in the conference activities. Notable guests were Elena Aprile (Columbia University), Steve Cowley (University of Oxford), Roberto Vittori (European Space Agency) and James Kakalios (University of Minnesota, author of the popular book The Physics of Superheroes). 
Young invited speakers included Francesco Tombesi (NASA), Agnese Bissi (Harvard University) and Francesco Prino (University of Turin). The program of ICPS 2017 comprised visits to the National Centre for Oncological Hadrontherapy in Pavia, traditional wine cellars, the Italian Institute of Technology in Genoa, the Venaria Reale palace, the Sacra di San Michele Abbey and a number of innovation hubs in Northern Italy. ICPS 2016 The ICPS 2016 took place from August, 11th to August, 17th in Malta and was hosted by physics students from the University of Malta. Around 350 physics students attended the conference. Notable guest speakers were Jocelyn Bell Burnell from the University of Oxford and Mark McCaughrean from ESA. ICPS 2015 The ICPS 2015 took place from August, 12th to August, 19th in Zagreb and was hosted by physics students from the Croatian Physical Society. Around 400 physics students attended the conference. Notable guest speakers was Prof. Philip W. Phillips. ICPS 2014 The ICPS 2014 took place from August, 10th to August, 17th in Heidelberg and was hosted by physics students from the jDPG. Around 450 physics students attended the conference. Notable guest speakers were Metin Tolan, Karlheinz Meier and John Dudley, president of the European Physical Society. ICPS 2013 The ICPS 2013 took place from August, 15th to August, 21st in Edinburgh and was hosted by physics students from Heriot-Watt University. Around 400 physics students attended the conference. ICPS 2012 The ICPS 2012 took place from August, 3rd to August, 10th in Utrecht and was hosted by physics students from SPIN. Around 400 physics students attended the conference. ICPS 2011 The ICPS 2011 took place from August, 11th to August, 18th in Budapest and was hosted by physics students from the Hungarian Association of Physics Students (Mafihe). Around 400 physics students attended the conference. Notable guest speakers were Ferenc Krausz from the Max Planck Institute for Quantum Optics, Carlo Rubbia from CERN, and Laszlo Kiss from the Kinkily Observatory. ICPS 2010 The ICPS 2010 took place from August, 17th to August, 23rd in Graz and was hosted by physics students from both Graz University of Technology and University of Graz. A total of 446 students attended the conference. This number includes 64 volunteers to help to organize the event. Notable guest speakers were Peter Zoller and Sabine Schindler, both University of Innsbruck; and John Ellis from CERN. See also University of Malta Graz University of Technology University of Heidelberg References External links IAPS ICPS2008 ICPS2009 ICPS2010 ICPS 2011 ICPS 2012 ICPS 2014 ICPS 2015 ICPS 2016 ICPS 2017 ICPS 2021 International conferences International student organizations Physics education Physics conferences
International Conference of Physics Students
[ "Physics" ]
1,238
[ "Applied and interdisciplinary physics", "Physics education" ]
21,243,458
https://en.wikipedia.org/wiki/Datakit
Datakit is a virtual circuit switch which was developed by Sandy Fraser at Bell Labs for both local-area and wide-area networks, and in widespread deployment by the Regional Bell Operating Companies (RBOCs). Design Datakit uses a cell relay protocol similar to Asynchronous Transfer Mode. Datakit is a connection-oriented switch, with all packets for a particular call traveling through the network over the same virtual circuit. Datakit networks are still in widespread use by the major telephone companies in the United States. Interfaces to these networks include TCP/IP and UDP, X.25, asynchronous protocols and several synchronous protocols, such as SDLC, HDLC, Bisync and others. These networks support host to terminal traffic and vice versa, host-to-host traffic, file transfers, remote login, remote printing, and remote command execution. At the physical layer, it can operate over multiple media, from slow speed EIA-232 to 500Mbit fiber optic links including 10/100 Megabit Ethernet links. Datakit uses an adaptation protocol called Universal Receiver Protocol (URP) that spreads PDU overhead across multiple cells and performs immediate packet processing. URP assumes that cells arrive in order and may force retransmissions if not. The Information Systems Network (ISN) was the pre-version of Datakit that was supported by the former AT&T Information Systems. The ISN was a packet switching network that was built similar to digital System 75 platform. LAN and WAN applications with the use of what was referred to as a Concentrator that was connected via fiber optics up to 15 miles away from the main ISN. The speeds of these connections were very slow to today's standards, from 1200 to 5600 baud with most connections / end users on dumb terminals. The main support for this product came from the NCSC (National Customer Support Center) in Englewood CO then later AT&T Information Systems as the company reorganized and Bell Labs. History Most of Bell Laboratories was trunked together via Datakit networking. On top of Datakit transport service, several operating systems (including UNIX) implemented UUCP for electronic mail and dkcu for remote login. It was in production three or more years prior to the Datakit being released. Datakit was programmed similar to a Central Office with area code and seven digit location. In 1996, AT&T spun off Bell Labs as a separate company, Lucent Technologies–who would later merge with the French firm Alcatel to become Alcatel-Lucent, before finally being acquired by Nokia in 2016. By the late 1990s, Datakit was clearly a legacy technology, being superseded by newer technologies such as IP and Ethernet. Lucent decided to discontinue their Datakit product line, but a group of former Lucent employees started a new firm, Datatek Applications, who licensed the technology from Lucent, and aimed to support the remaining Datakit users and provide gateway solutions to assist in their migration to newer technologies. In part due to the continuing decline in Datakit use, Datatek Applications went out of business in January 2018. See also Cell relay X.25 References Wide area networks Packets (information technology) Computer networking
Datakit
[ "Technology", "Engineering" ]
662
[ "Computer networking", "Computer science", "Computer engineering" ]
21,245,430
https://en.wikipedia.org/wiki/Coriolis%E2%80%93Stokes%20force
In fluid dynamics, the Coriolis–Stokes force is a forcing of the mean flow in a rotating fluid due to interaction of the Coriolis effect and wave-induced Stokes drift. This force acts on water independently of the wind stress. The force is named after Gaspard-Gustave Coriolis and George Gabriel Stokes, two nineteenth-century scientists. Important initial studies into the effects of the Earth's rotation on wave motion, and the resulting forcing effects on the mean ocean circulation, were carried out in the mid-twentieth century. The Coriolis–Stokes forcing on the mean circulation in an Eulerian reference frame, as first given in these studies, is

$\rho\, f\, \hat{\boldsymbol{z}} \times \boldsymbol{u}_S,$

to be added to the common Coriolis forcing

$\rho\, f\, \hat{\boldsymbol{z}} \times \boldsymbol{u}.$

Here $\boldsymbol{u}$ is the mean flow velocity in an Eulerian reference frame and $\boldsymbol{u}_S$ is the Stokes drift velocity, provided both are horizontal velocities (perpendicular to $\hat{\boldsymbol{z}}$). Further, $\rho$ is the fluid density, $\times$ is the cross product operator, $f = 2\Omega \sin\varphi$ is the Coriolis parameter (with $\Omega$ the Earth's rotation angular speed and $\sin\varphi$ the sine of the latitude) and $\hat{\boldsymbol{z}}$ is the unit vector in the vertical upward direction (opposing the Earth's gravity). Since the Stokes drift velocity $\boldsymbol{u}_S$ is in the wave propagation direction, and $\hat{\boldsymbol{z}}$ is in the vertical direction, the Coriolis–Stokes forcing is perpendicular to the wave propagation direction (i.e. in the direction parallel to the wave crests). In deep water the Stokes drift velocity is

$u_S = c\,(k a)^2\, \mathrm{e}^{2 k z},$

directed along the wave propagation, with $c$ the wave's phase velocity, $k$ the wavenumber, $a$ the wave amplitude and $z$ the vertical coordinate (positive in the upward direction opposing the gravitational acceleration). See also Ekman layer Ekman transport Notes References Fluid dynamics Water waves
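As a rough illustration of the quantities defined above, the sketch below evaluates the magnitude of the Coriolis–Stokes forcing ρ f u_S for a deep-water wave, using u_S = c(ka)² e^(2kz) and f = 2Ω sin φ. The wave amplitude, wavelength and latitude are arbitrary example values and are not figures from the article.

```python
import numpy as np

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
RHO = 1025.0        # sea-water density, kg/m^3 (illustrative value)
G = 9.81            # gravitational acceleration, m/s^2

def coriolis_stokes_forcing(amplitude, wavelength, latitude_deg, z):
    """Magnitude of rho * f * u_S (N/m^3) at depth z for a deep-water wave."""
    k = 2 * np.pi / wavelength
    c = np.sqrt(G / k)                                   # deep-water phase speed
    u_s = c * (k * amplitude) ** 2 * np.exp(2 * k * z)   # Stokes drift, m/s
    f = 2 * OMEGA * np.sin(np.radians(latitude_deg))     # Coriolis parameter
    return RHO * f * u_s

z = np.linspace(-50.0, 0.0, 6)   # depth levels, z positive upward
print(coriolis_stokes_forcing(amplitude=1.0, wavelength=100.0,
                              latitude_deg=45.0, z=z))
```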
Coriolis–Stokes force
[ "Physics", "Chemistry", "Engineering" ]
332
[ "Physical phenomena", "Water waves", "Chemical engineering", "Waves", "Piping", "Fluid dynamics" ]
21,245,707
https://en.wikipedia.org/wiki/Red%20giant
A red giant is a luminous giant star of low or intermediate mass (roughly 0.3–8 solar masses) in a late phase of stellar evolution. The outer atmosphere is inflated and tenuous, making the radius large and the surface temperature around 5,000 K or lower. The appearance of the red giant ranges from yellow-white to reddish-orange, including the spectral types K and M, sometimes G, but also class S stars and most carbon stars. Red giants vary in the way they generate energy: the most common red giants are stars on the red-giant branch (RGB) that are still fusing hydrogen into helium in a shell surrounding an inert helium core; red-clump stars in the cool half of the horizontal branch, fusing helium into carbon in their cores via the triple-alpha process; and asymptotic-giant-branch (AGB) stars with a helium-burning shell outside a degenerate carbon–oxygen core, and a hydrogen-burning shell just beyond that. Many of the well-known bright stars are red giants because they are luminous and moderately common. The K0 RGB star Arcturus is 36 light-years away, and Gacrux is the nearest M-class giant at 88 light-years' distance. A red giant will usually produce a planetary nebula and become a white dwarf at the end of its life. Characteristics A red giant is a star that has exhausted the supply of hydrogen in its core and has begun thermonuclear fusion of hydrogen in a shell surrounding the core. They have radii tens to hundreds of times larger than that of the Sun. However, their outer envelope is lower in temperature, giving them a yellowish-orange hue. Despite the lower energy density of their envelope, red giants are many times more luminous than the Sun because of their great size. Red-giant-branch stars have luminosities up to nearly three thousand times that of the Sun, spectral types of K or M, surface temperatures of 3,000–4,000 K (compared with the Sun's photosphere temperature of nearly 5,800 K) and radii up to about 200 times that of the Sun. Stars on the horizontal branch are hotter, with only a small range of luminosities. Asymptotic-giant-branch stars range from similar luminosities as the brighter stars of the red-giant branch, up to several times more luminous at the end of the thermal pulsing phase. Among the asymptotic-giant-branch stars are the carbon stars of type C-N and late C-R, produced when carbon and other elements are convected to the surface in what is called a dredge-up. The first dredge-up occurs during hydrogen shell burning on the red-giant branch, but does not produce a large carbon abundance at the surface. The second, and sometimes third, dredge-up occurs during helium shell burning on the asymptotic-giant branch and convects carbon to the surface in sufficiently massive stars. The stellar limb of a red giant is not sharply defined, contrary to its depiction in many illustrations. Rather, due to the very low mass density of the envelope, such stars lack a well-defined photosphere, and the body of the star gradually transitions into a 'corona'. The coolest red giants have complex spectra, with molecular lines, emission features, and sometimes masers, particularly from thermally pulsing AGB stars. Observations have also provided evidence of a hot chromosphere above the photosphere of red giants; investigating the heating mechanisms that form these chromospheres requires 3D simulations of red giants. 
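The point that a cool star can still be far more luminous than the Sun follows directly from the blackbody relation L = 4πR²σT⁴. The sketch below evaluates it for an illustrative giant of 100 solar radii at 4,000 K; the numbers are examples chosen for illustration, not values quoted in the article.

```python
import math

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26    # solar luminosity, W
R_SUN = 6.957e8     # solar radius, m

def luminosity_lsun(radius_rsun, teff_k):
    """Blackbody luminosity L = 4*pi*R^2*sigma*T^4, in solar units."""
    r = radius_rsun * R_SUN
    return 4 * math.pi * r**2 * SIGMA * teff_k**4 / L_SUN

# A cool giant (about 4,000 K) that is 100x the Sun's radius is still far more
# luminous than the Sun, because luminosity scales with the square of the radius.
print(f"{luminosity_lsun(100, 4000):.0f} L_sun")   # roughly a few thousand L_sun
print(f"{luminosity_lsun(1, 5772):.2f} L_sun")     # sanity check: about 1 for the Sun
```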
Another noteworthy feature of red giants is that, unlike Sun-like stars whose photospheres have a large number of small convection cells (solar granules), red-giant photospheres, as well as those of red supergiants, have just a few large cells, the features of which cause the variations of brightness so common on both types of stars. Evolution Red giants are evolved from main-sequence stars with masses in the range from about to around . When a star initially forms from a collapsing molecular cloud in the interstellar medium, it contains primarily hydrogen and helium, with trace amounts of "metals" (in astrophysics, this refers to all elements heavier than hydrogen and helium). These elements are all uniformly mixed throughout the star. The star "enters" the main sequence when its core reaches a temperature (several million kelvins) high enough to begin fusing hydrogen-1 (the predominant isotope), and establishes hydrostatic equilibrium. (In astrophysics, stellar fusion is often referred to as "burning", with hydrogen fusion sometimes termed "hydrogen burning".) Over its main sequence life, the star slowly fuses the hydrogen in the core into helium; its main-sequence life ends when nearly all the hydrogen in the core has been fused. For the Sun, the main-sequence lifetime is approximately 10 billion years. More massive stars burn disproportionately faster and so have a shorter lifetime than less massive stars. When the star has mostly exhausted the hydrogen fuel in its core, the core's rate of nuclear reactions declines, and thus so do the radiation and thermal pressure the core generates, which are what support the star against gravitational contraction. The star further contracts, increasing the pressures and thus temperatures inside the star (as described by the ideal gas law). Eventually a "shell" layer around the core reaches temperatures sufficient to fuse hydrogen and thus generate its own radiation and thermal pressure, which "re-inflates" the star's outer layers and causes them to expand. The hydrogen-burning shell results in a situation that has been described as the mirror principle: when the core within the shell contracts, the layers of the star outside the shell must expand. The detailed physical processes that cause this are complex. Still, the behavior is necessary to satisfy simultaneous conservation of gravitational and thermal energy in a star with the shell structure. The core contracts and heats up due to the lack of fusion, and so the outer layers of the star expand greatly, absorbing most of the extra energy from shell fusion. This process of cooling and expanding is the subgiant stage. When the envelope of the star cools sufficiently it becomes convective, the star stops expanding, its luminosity starts to increase, and the star is ascending the red-giant branch of the Hertzsprung–Russell (H–R) diagram. The evolutionary path the star takes as it moves along the red-giant branch depends on the mass of the star. For the Sun and stars of less than about the core will become dense enough that electron degeneracy pressure will prevent it from collapsing further. Once the core is degenerate, it will continue to heat until it reaches a temperature of roughly , hot enough to begin fusing helium to carbon via the triple-alpha process. Once the degenerate core reaches this temperature, the entire core will begin helium fusion nearly simultaneously in a so-called helium flash. 
In more-massive stars, the collapsing core will reach these temperatures before it is dense enough to be degenerate, so helium fusion will begin much more smoothly, and produce no helium flash. The core helium fusing phase of a star's life is called the horizontal branch in metal-poor stars, so named because these stars lie on a nearly horizontal line in the H–R diagram of many star clusters. Metal-rich helium-fusing stars instead lie on the so-called red clump in the H–R diagram. An analogous process occurs when the core helium is exhausted, and the star collapses once again, causing helium in a shell to begin fusing. At the same time, hydrogen may begin fusion in a shell just outside the burning helium shell. This puts the star onto the asymptotic giant branch, a second red-giant phase. The helium fusion results in the build-up of a carbon–oxygen core. A star below about will never start fusion in its degenerate carbon–oxygen core. Instead, at the end of the asymptotic-giant-branch phase the star will eject its outer layers, forming a planetary nebula with the core of the star exposed, ultimately becoming a white dwarf. The ejection of the outer mass and the creation of a planetary nebula finally ends the red-giant phase of the star's evolution. The red-giant phase typically lasts only around a billion years in total for a solar mass star, almost all of which is spent on the red-giant branch. The horizontal-branch and asymptotic-giant-branch phases proceed tens of times faster. If the star has about 0.2 to , it is massive enough to become a red giant but does not have enough mass to initiate the fusion of helium. These "intermediate" stars cool somewhat and increase their luminosity but never achieve the tip of the red-giant branch and helium core flash. When the ascent of the red-giant branch ends they puff off their outer layers much like a post-asymptotic-giant-branch star and then become a white dwarf. Stars that do not become red giants Very-low-mass stars are fully convective and may continue to fuse hydrogen into helium for up to a trillion years until only a small fraction of the entire star is hydrogen. Luminosity and temperature steadily increase during this time, just as for more-massive main-sequence stars, but the length of time involved means that the temperature eventually increases by about 50% and the luminosity by around 10 times. Eventually the level of helium increases to the point where the star ceases to be fully convective and the remaining hydrogen locked in the core is consumed in only a few billion more years. Depending on mass, the temperature and luminosity continue to increase for a time during hydrogen shell burning, the star can become hotter than the Sun and tens of times more luminous than when it formed although still not as luminous as the Sun. After some billions more years, they start to become less luminous and cooler even though hydrogen shell burning continues. These become cool helium white dwarfs. Very-high-mass stars develop into supergiants that follow an evolutionary track that takes them back and forth horizontally over the H–R diagram, at the right end constituting red supergiants. These usually end their life as a type II supernova. The most massive stars can become Wolf–Rayet stars without becoming giants or supergiants at all. 
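The wide spread of lifetimes discussed above, from a few million years for the most massive stars to as much as a trillion years for the least massive, is often summarised by the back-of-envelope scaling t ≈ 10 Gyr × (M/M☉)⁻²·⁵, which follows from t ∝ M/L together with a rough mass–luminosity law L ∝ M³·⁵. The sketch below applies this approximation; the exponent and normalisation are common rules of thumb rather than figures from the article, and the scaling is least accurate at the extremes of the mass range.

```python
def main_sequence_lifetime_gyr(mass_msun, exponent=2.5):
    """Rough main-sequence lifetime in Gyr.

    Back-of-envelope scaling t ~ 10 Gyr * (M/Msun)^-2.5, from t ~ M/L with
    an approximate mass-luminosity law L ~ M^3.5. The exponent varies
    between sources and the estimate is crude for very low or very high masses.
    """
    return 10.0 * mass_msun ** (-exponent)

for m in (0.3, 0.5, 1.0, 2.0, 5.0):
    print(f"{m:>4} Msun -> roughly {main_sequence_lifetime_gyr(m):8.1f} Gyr")
```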
Planets Prospects for habitability Although it has traditionally been suggested that the evolution of a star into a red giant will render its planetary system, if present, uninhabitable, some research suggests that, during the evolution of a star along the red-giant branch, it could harbor a habitable zone for several billion years at 2 astronomical units (AU) from the star, decreasing to around 100 million years at distances several times farther out, giving perhaps enough time for life to develop on a suitable world. After the red-giant stage, there would for such a star be a habitable zone farther from the star for an additional one billion years. Later studies have refined this scenario, showing how, for a Sun-like star, the habitable zone lasts from 100 million years for a planet with an orbit similar to that of Mars to 210 million years for one orbiting at Saturn's distance from the Sun, with the maximum time (370 million years) corresponding to planets orbiting at the distance of Jupiter. However, planets orbiting a star in orbits equivalent to those of Jupiter and Saturn would be in the habitable zone for 5.8 billion years and 2.1 billion years, respectively; for stars more massive than the Sun, the times are considerably shorter. Enlargement of planets As of 2023, several hundred giant planets have been discovered around giant stars. However, these giant planets are more massive than the giant planets found around solar-type stars. This could be because giant stars are more massive than the Sun (less massive stars will still be on the main sequence and will not have become giants yet) and more massive stars are expected to have more massive planets. However, the masses of the planets that have been found around giant stars do not correlate with the masses of the stars; therefore, the planets could be growing in mass during the stars' red giant phase. The growth in planet mass could be partly due to accretion from the stellar wind, although a much larger effect would be Roche lobe overflow causing mass transfer from the star to the planet when the giant expands out to the orbital distance of the planet. (A similar process in multiple star systems is believed to be the cause of most novas and type Ia supernovas.) Examples Many of the well-known bright stars are red giants, because they are luminous and moderately common. The red-giant branch variable star Gamma Crucis is the nearest M-class giant star at 88 light-years. The K1.5 red-giant branch star Arcturus is 36 light-years away. Red-giant branch Aldebaran (α Tauri) Arcturus (α Bootis) μ Leonis Gacrux (γ Crucis) Red-clump giants Pollux (β Geminorum) Capella Aa (α Aurigae) α Cassiopeiae (Schedar) δ Andromedae Asymptotic giant branch ρ Persei (Gorgonea Tertia) Mira (ο Ceti) χ Cygni α Herculis (Rasalgethi) The Sun as a red giant The Sun will exit the main sequence in approximately 5 billion years and start to turn into a red giant. As a red giant, the Sun will grow so large (over 200 times its present-day radius) that it will engulf Mercury, Venus, and likely Earth. It will lose 38% of its mass while growing, and will then end its life as a white dwarf. References External links Astrophysics Star types Sun
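A simple way to see why the habitable zone migrates outward as the star brightens is the inverse-square flux argument d ≈ 1 AU × √(L/L☉). The sketch below applies this crude estimate for a few luminosities; it ignores spectral and atmospheric effects, and the example numbers are illustrative rather than taken from the studies cited above.

```python
import math

def earth_equivalent_distance_au(luminosity_lsun, reference_au=1.0):
    """Distance receiving the same stellar flux as `reference_au` does from the
    present-day Sun, from the inverse-square law: d = d_ref * sqrt(L / Lsun).
    A crude flux-balance estimate only; real habitable-zone models also include
    atmospheric and spectral effects."""
    return reference_au * math.sqrt(luminosity_lsun)

for lum in (1, 100, 1000, 2500):
    print(f"L = {lum:>5} Lsun -> Earth-equivalent flux at "
          f"{earth_equivalent_distance_au(lum):6.1f} AU")
```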
Red giant
[ "Physics", "Astronomy" ]
2,850
[ "Astronomical sub-disciplines", "Star types", "Astrophysics", "Astronomical classification systems" ]
31,331,135
https://en.wikipedia.org/wiki/Wafer%20bonding
Wafer bonding is a packaging technology on wafer-level for the fabrication of microelectromechanical systems (MEMS), nanoelectromechanical systems (NEMS), microelectronics and optoelectronics, ensuring a mechanically stable and hermetically sealed encapsulation. The wafers' diameter range from 100 mm to 200 mm (4 inch to 8 inch) for MEMS/NEMS and up to 300 mm (12 inch) for the production of microelectronic devices. Smaller wafers were used in the early days of the microelectronics industry, with wafers being just 1 inch in diameter in the 1950s. Overview In microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS), the package protects the sensitive internal structures from environmental influences such as temperature, moisture, high pressure and oxidizing species. The long-term stability and reliability of the functional elements depend on the encapsulation process, as does the overall device cost. The package has to fulfill the following requirements: protection against environmental influences heat dissipation integration of elements with different technologies compatibility with the surrounding periphery maintenance of energy and information flow Techniques The commonly used and developed bonding methods are as follows: Direct bonding Surface activated bonding Plasma activated bonding Anodic bonding Eutectic bonding Glass frit bonding Adhesive bonding Thermocompression bonding Reactive bonding Transient liquid phase diffusion bonding Atomic diffusion bonding Requirements The bonding of wafers requires specific environmental conditions which can generally be defined as follows: substrate surface flatness smoothness cleanliness bonding environment bond temperature ambient pressure applied force materials substrate materials intermediate layer materials The actual bond is an interaction of all those conditions and requirements. Hence, the applied technology needs to be chosen in respect to the present substrate and defined specification like max. bearable temperature, mechanical pressure or desired gaseous atmosphere. Evaluation The bonded wafers are characterized in order to evaluate a technology's yield, bonding strength and level of hermeticity either for fabricated devices or for the purpose of process development. Therefore, several different approaches for the bond characterization have emerged. On the one hand non-destructive optical methods to find cracks or interfacial voids are used beside destructive techniques for the bond strength evaluation, like tensile or shear testing. On the other hand, the unique properties of carefully chosen gases or the pressure depending vibration behavior of micro resonators are exploited for hermeticity testing. References Further reading Peter Ramm, James Lu, Maaike Taklo (editors), Handbook of Wafer Bonding, Wiley-VCH, . Electronics manufacturing Packaging (microfabrication) Semiconductor technology Chemical bonding
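One reason bonding processes pay attention to the coefficient of thermal expansion, as noted above, is the stress that builds up when a bonded pair cools from the bonding temperature. A first-order estimate is the biaxial thermal stress σ = E/(1−ν)·Δα·ΔT, sketched below; the material numbers in the example are assumed for illustration and are not taken from the article.

```python
def thermal_mismatch_stress_mpa(delta_alpha_per_k, delta_t_k,
                                youngs_modulus_gpa, poisson_ratio):
    """Biaxial thermal stress sigma = E/(1-nu) * delta_alpha * delta_T, in MPa.

    A first-order estimate of the stress locked into a thin bonded layer when
    the wafer pair cools after bonding; it ignores plastic flow, wafer bow and
    thickness effects.
    """
    strain = delta_alpha_per_k * delta_t_k
    return youngs_modulus_gpa * 1e3 / (1.0 - poisson_ratio) * strain

# Illustrative, assumed numbers: a bonded layer whose CTE differs from silicon
# by 0.5 ppm/K, cooled by 350 K after bonding, E = 70 GPa, nu = 0.22.
print(f"{thermal_mismatch_stress_mpa(0.5e-6, 350, 70, 0.22):.1f} MPa")
```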
Wafer bonding
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
561
[ "Semiconductor technology", "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Electronic engineering", "Condensed matter physics", "nan", "Chemical bonding" ]
31,332,116
https://en.wikipedia.org/wiki/Reentry%20Breakup%20Recorder
A Reentry Breakup Recorder (REBR) is a device that is designed to be placed aboard a spacecraft to record pertinent data when the spacecraft (intentionally) breaks up as it re-enters Earth's atmosphere. The device records data regarding the thermal, acceleration, rotational and other stresses the vehicle is subject to. In the final stages it transmits the data back to a laboratory before it is destroyed when it hits the surface. History Two REBRs were launched in January 2011 on the Japanese Kounotori 2 transfer vehicle. One recorded the subsequent re-entry of that vehicle, and the other was placed aboard the Johannes Kepler ATV, which reentered Earth's atmosphere on 21 June 2011. The Kounotori 2 vehicle re-entered on 30 March 2011. Its REBR successfully collected and returned its data; it survived the impact with the ocean and while floating continued to transmit. It took between 6 and 8 weeks to analyze the data. The second unit was intended to collect data during the reentry of the Johannes Kepler ATV (ATV-2); however the device failed to make contact after reentry and consequently no data was retrieved. Two other units were used successfully for Kounotori 3 for its reentry on September 14, 2012, and Edoardo Amaldi ATV (ATV-3) on October 3, 2012. Predecessor technology: image documentation of reentry and breakup Earlier data collection from reentry and breakup was mostly visual and spectrographic. A particularly well-documented case is seen in a reentry and breakup over the South Pacific—recorded by a large team of NASA and ESA space agency personnel with extensive photographic image and video data collection, at multiple spectrographic wavelengths—occurred in September 2008, following the first mission of the ESA cargo spacecraft—the Automated Transfer Vehicle Jules Verne—to the International Space Station (ISS) in March 2008. On 5 September 2008, Jules Verne undocked from the ISS and maneuvered to an orbital position below the ISS. It remained in that orbit until the night of 29 September. At 10:00:27 UTC, Jules Verne started its first de-orbit burn of 6 minutes, followed by a second burn of 15 minutes at 12:58:18 UTC. At 13:31 GMT, Jules Verne re-entered the atmosphere at an altitude of , and then completed its destructive re-entry as planned over the following 12 minutes, depositing debris in the South Pacific Ocean southwest of Tahiti. This was recorded with video and still photography at night by two aircraft flying over the South Pacific for purposes of data gathering. The NASA documentary of the project is in the gallery, below. Gallery References External links Photo and diagram of the first REBRs, April 2011. Atmospheric entry Spacecraft communication Spacecraft instruments Articles containing video clips
Reentry Breakup Recorder
[ "Engineering" ]
576
[ "Atmospheric entry", "Spacecraft communication", "Aerospace engineering" ]
31,336,125
https://en.wikipedia.org/wiki/Plasma-activated%20bonding
Plasma-activated bonding is a derivative, directed to lower processing temperatures for direct bonding with hydrophilic surfaces. The main requirements for lowering temperatures of direct bonding are the use of materials melting at low temperatures and with different coefficients of thermal expansion (CTE). Surface activation prior to bonding has the typical advantage that no intermediate layer is needed and sufficiently high bonding energy is achieved after annealing at temperatures below 400 °C. Overview The decrease of temperature is based on the increase of bonding strength using plasma activation on clean wafer surfaces. Further, the increase is caused by elevation in amount of Si-OH groups, removal of contaminants on the wafer surface, the enhancement of viscous flow of the surface layer and the enhanced diffusivity of water and gas trapped at the interface. Based on ambient pressure, two main surface activation fields using plasma treatment are established for wafer preprocessing to lower the temperatures during annealing. To establish maximum surface energy at low temperatures (< 100 °C) numerous parameters for plasma activation and annealing need to be optimized according to the bond material. Plasma activated bonding is based on process pressure divided into: Atmospheric Pressure-Plasma Activated Bonding (AP-PAB) Dielectric barrier discharge Corona discharge Plasma torch (Jet) Low Pressure-Plasma Activated Bonding (LP-PAB) Reactive ion etching (RIE) Inductively coupled plasma reactive-ion etching (ICP RIE) Sequential plasma activated bonding (SPAB) Remote plasma Atmospheric Pressure-Plasma Activated Bonding (AP-PAB) This method is to ignite plasma without using a low pressure environment, so no expensive equipment for vacuum generation is needed. Atmospheric Pressure-Plasma Activated Bonding enables the possibility to ignite plasma at specific local areas or the whole surface of the substrate. Between the two electrodes plasma gas is ignited via alternating voltage. The wafer pairs pass the following process flow: RCA cleaning Surface activation at atmospheric pressure Treatment duration ~ 40 s Process gases used for silicon Synthetic air (80 vol.-% N2 + 20 vol.-% O2) Oxygen (O2) Process gases used for glass or LiTaO3 Ar/H2 (90 vol.-% Ar + 10 vol.-% H2) Humid oxygen (O2dH2O) Rinsing in de-ionized water Treatment duration 10 minutes Reduction of particle concentration Pre-bonding at room temperature Annealing (room temperature to 400 °C) The optimal gas mixture for the plasma treatment is depending on the annealing temperature. Furthermore, treatment with plasma is suitable to prevent bond defects during the annealing procedure. If using glass, based on the high surface roughness, a chemical-mechanical planarization (CMP) step after rinsing is necessary to improve the bonding quality. The bond strength is characterized by fracture toughness determined by micro chevron tests. Plasma activated wafer bonds can achieve fracture toughnesses that are comparable to bulk material. Dielectric barrier discharge (DBD) The usage of dielectric barrier discharge enables a stable plasma at atmospheric pressure. To avoid sparks, a dielectric has to be fixed on one or both electrodes. The shape of the electrode is similar to the substrate geometry used to cover the entire surface. The principle of an AP-activation with one dielectric barrier is shown in figure "Scheme of dielectric barrier discharge". 
The activation equipment consists of the grounded chuck acting as wafer carrier and an indium tin oxide (ITO) coated glass electrode. Further, the glass substrate is used as dielectric barrier and the discharge is powered by a corona generator. Low Pressure-Plasma Activated Bonding (LP-PAB) The Low Pressure-Plasma Activated Bonding operates in fine vacuum (0.1 – 100 Pa) with a continuous gas flow. This procedure requires: Vacuum Process gases High frequency (HF) electrical field between two electrodes The plasma exposed surface is activated by ion bombardment and chemical reactions through radicals. Electrons of the atmosphere move towards the HF electrode during its positive voltage. The most established frequency of the HF electrode is 13.56 MHz. Further, the electrons are not able to leave the electrode within the positive half wave of applied voltage, so the negative electrode is charged up to 1000 V (bias voltage). The gap between the electrode and the chuck is filled with plasma gas. The moving electrons of the atmosphere are banging into the plasma gas atoms and hit out electrons. Due to its positive orientation the massive ions, that are not able to follow the HF field, move to the negatively charged electrode, where the wafer is placed. Within those environment the surface activation is based on the striking ions and radicals interacting with the surface of the wafer (compare to figure "Scheme of a plasma reactor for low pressure plasma activated bonding"). The surface activation with plasma at low pressure is processed in the following steps: RCA cleaning Surface activation at low pressure Treatment duration ~ 30–60 s Process gases (N2, O2) Rinsing in de-ionized water Treatment duration 10 min Reduction of particle concentration Pre-bonding at room temperature Annealing (room temperature to 400 °C) Reactive ion etching (RIE) The RIE mode is used in dry etching processes and through reduction of parameters, i.e. HF power, this method is usable for surface activation. The electrode attached to the HF-Generator is used as carrier of the wafer. Following, the surfaces of the wafers charge up negatively caused by the electrons and attract the positive ions of the plasma. The plasma ignites in the RIE-reactor (shown in figure "Scheme of a plasma reactor for low pressure plasma activated bonding"). The maximal bond strength is achieved with nitrogen and oxygen as process gases and is sufficiently high with a homogeneous dispersion over the wafers after annealing at 250 °C. The bond energy is characterized > 200 % of non-activated reference wafer annealed at the same temperature. The surface activated wafer pair has 15% less bond energy compared to a high temperature bonded wafer pair. Annealing at 350 °C results in bonding strengths similar to high-temperature bonding. Remote plasma The procedure of remote plasma is based on creating plasma in a separate side chamber. The input gases enter the remote plasma source and are transported to the main process chamber to react. A scheme of the system is shown in figure "Remote plasma system". Remote plasma is using chemical components where mainly neutral radicals are reacting with the surface. The advantage of this process is less damaged surface through missing ion bombardment. Further, the plasma exposure times could be arranged longer than with, e.g. RIE method. Sequential plasma (SPAB) The wafers are activated with short RIE plasma followed by a radical treatment in one reactor chamber. 
An additional microwave source and an ion trapping metal plate are used for the generation of radicals. The effect of plasma on the surface changes from chemical/physical to chemical plasma treatment. This is based on the reactions between radicals and atoms on the surface. Technical specifications References Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding
Plasma-activated bonding
[ "Materials_science", "Engineering" ]
1,455
[ "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Electronic engineering", "Semiconductor technology" ]
31,336,130
https://en.wikipedia.org/wiki/Glass%20frit%20bonding
Glass frit bonding, also referred to as glass soldering or seal glass bonding, describes a wafer bonding technique with an intermediate glass layer. It is a widely used encapsulation technology for surface micro-machined structures, e.g., accelerometers or gyroscopes. The technique utilizes low melting-point glass ("glass solder") and therefore provides various advantages, including the fact that the viscosity of the glass decreases with increasing temperature. The viscous flow of the glass compensates and planarizes surface irregularities, which is convenient for bonding wafers with a high roughness due to plasma etching or deposition. A low viscosity promotes hermetically sealed encapsulation of structures because the glass adapts better to the structured shapes. Further, the coefficient of thermal expansion (CTE) of the glass material is adapted to silicon. This results in low stress in the bonded wafer pair. The glass has to flow and wet the soldered surfaces well below the temperature where deformation or degradation of either of the joined materials or nearby structures (e.g., metallization layers on chips or ceramic substrates) occurs. Flowing and wetting are typically achieved at temperatures of roughly 400 to 450 °C for the low-melting glasses described below. Glass frit bonding can be used for many surface materials, e.g., silicon with hydrophobic and hydrophilic surface, silicon dioxide, silicon nitride, aluminium, titanium or glass, as long as the CTEs are in the same range. This bonding procedure also allows the realization of metallic feedthroughs to contact active structures in the hermetically sealed cavity. Glass frit as a dielectric material does not need additional passivation for preventing leakage currents at typical process temperatures. The process begins with the deposition of glass paste onto the surfaces to be treated. It is then heated to burn out additives and fire it in order to form the glass layer. The bonding process reconfigures the sintered glass into the desired state. Finally, the reconfigured glass is cooled down. Glass frit bonding is used to encapsulate surface micro-machined sensors, i.e. gyroscopes and accelerometers. Other applications are the sealing of absolute pressure sensor cavities, the mounting of optical windows and the capping of thermally active devices. Procedure Deposition The glass frit bond procedure is used for the encapsulation and mounting of components. The coating of glass frit layers is applied by spin coating for thicknesses of 5 to 30 μm or, more commonly, by screen printing for thicknesses of 10 to 30 μm. Screen printing, as a commonly used deposition method, provides a technique of structuring for the glass frit material. This method has the advantage of material deposition on structured cap wafers without any additional processes, e.g. photolithography. Screen printing enables the possibility of selective bonding, so the glass frit is deposited only in areas where bonding is required. The risk of glass frit flowing into the structures can be prevented by optimization of the screen printing process. With high positioning precision, structure sizes in the range of 190 μm with a minimum spacing of < 100 μm are achievable. Exact positioning of the screen-printed structures relative to the cap wafer is required to ensure an accurate bond. The bonded structures are, dependent on the wettability of the printed surface, 10 to 20% wider than the designed screen. To ensure a uniform glass thickness, all structures should have the same width. 
The printed glass frit height is about 30 μm and provides a gap of 5 to 10 μm between the bonded wafers after bonding (compare to cross sectional SEM images). A bond surface activation is not necessary to promote a higher bonding strength. Thermal conditioning The printed glass frit structures are heated to form compact glass. The heating process is necessary to drive out the solvents and binder. This results in a subsequent particle fusion of the glass powder. Using mechanical pressure the wafers are bonded at elevated temperatures. Thermal conditioning transforms the glass paste into a glass layer and is important to prevent voids inside the glass frit layer. The conditioning process consists of: Glazing of organic binder and solvents Melting of glass particle to compact glass Formation of solid connection between glass and wafer surface The initial step comprises drying for 5 to 7 minutes at 100 to 120 °C in order to diffuse solvents out of the interface. This starts the polymerization of the organic binder. The binder molecules are linked into long-chain polymers, which solidifies the paste. The organic binder of the glass paste has to be burned out by heating for 10 to 20 minutes to a specific temperature (325 to 350 °C) at which the glass is not yet fully melted. This so-called glazing ensures the outgassing of the organic additives. Further, a pre-melting or sealing step heats the material to the process temperature between 410 and 459 °C for 5 to 10 min. The material fully melts and forms a compact glass without any inclusions. The inorganic fillers are melted down and the properties of the bond glass are fixed. The melting of the glass starts at the silicon-glass interface directed to the glass surface. During the melting process the porosity of the glass is eliminated and, owing to the compression of the intermediate layer, the thickness of the glass decreases significantly. Bonding process The glass frit bonding, starting with alignment of the wafers, is a thermo-compressive process that takes place in the bonding chamber at specific pressure. Under bonding pressure wafers are heated up to the process temperature of around 430 °C for a few minutes. On the one hand, a short bonding time causes the glass frit to spread insufficiently; on the other hand, a longer bonding time causes the glass frit to overflow, subsequently leaving voids. The alignment has to be very precise and stable to prevent shifting. This can be realized using clamps or special pressure plates. Shifting can occur through temporally staggered pressure, imprecise vertical pressure caused by misalignment of the bonding tools, or differences in thermal expansion between the bonding tools. During bonding a supporting tool pressure is applied to improve the thermal input into the bonding glass and to compensate for wafer geometry irregularities (i.e. bow and warp), supporting wettability. Based on the sufficiently high viscosity of the glass, bonding can take place nearly without pressure. The bonding temperature needs to be high enough to reduce the viscosity of the glass material and ensure good wetting of the bond surface, but also low enough to prevent overspreading of the glass frit material. Heating above 410 °C enables the wetting of the bond surface. A good wetting is indicated by a low edge angle. The atomic wafer surface layers are fused into the glass at an atomic level. This forms a thin glass mixture at the interface which forms the strong bond between the glass and the wafer. 
Cooling During cooling down under pressure a mechanically strong and hermetically sealed wafer bond is formed. The cooling process leads, especially at higher temperatures, to thermal stress in the glass frit layer, which has to be considered in the lifetime analysis of the bond frame. The wafer pair is removed from the bond chamber at lower temperatures to prevent thermal cracking of the wafers or the bond interface by thermal shocks. The bonding strength is mainly dependent on the density, the spreading area of the glass frit layer and the surface layer of the bonding interface. It is high enough, around 20 MPa, for most applications and comparable to that achieved with anodic bonding. The hermeticity ensures the correct function and a sufficient reliability of the bond and therefore the product. Further, the bonding yield of glass frit bonded wafers is very high, normally > 90 %. Types Two types of glass solders are used: vitreous, and devitrifying. Vitreous solders retain their amorphous structure during remelting, can be reworked repeatedly, and are relatively transparent. Devitrifying solders undergo partial crystallization during solidifying, forming a glass-ceramic, a composite of glassy and crystalline phases. Devitrifying solders usually create a stronger mechanical bond, but are more temperature-sensitive and the seal is more likely to be leaky; due to their polycrystalline structure they tend to be translucent or opaque. Devitrifying solders are frequently "thermosetting", as their melting temperature after recrystallization becomes significantly higher; this allows soldering the parts together at lower temperature than the subsequent bake-out without remelting the joint afterwards. Devitrifying solders frequently contain up to 25% zinc oxide. In production of cathode ray tubes, devitrifying solders based on PbO-B2O3-ZnO are used. Very low melting temperature glasses were developed for sealing applications in electronics. They can consist of binary or ternary mixtures of thallium, arsenic and sulfur. Zinc-silicoborate glasses can also be used for passivation of electronics; their coefficient of thermal expansion must match silicon (or the other semiconductors used) and they must not contain alkali metals, as those would migrate to the semiconductor and cause failures. The bonding between the glass or ceramics and the glass solder can be either covalent, or, more often, van der Waals. The seal can be leak-tight; glass soldering is frequently used in vacuum technology. Glass solders can also be used as sealants; a vitreous enamel coating on iron lowered its permeability to hydrogen by a factor of 10. Glass solders are frequently used for glass-to-metal seals and glass-ceramic-to-metal seals. Production Glass solders are available as frit powder with grain size below 60 micrometers. They can be mixed with water or alcohol to form a paste for easy application, or with dissolved nitrocellulose or other suitable binder for adhering to the surfaces until being melted. The eventual binder has to be burned off before melting proceeds, requiring a careful firing regime. The solder glass can also be applied from the molten state to the area of the future joint during manufacture of the part. Due to their low viscosity in the molten state, lead glasses with high PbO content (often 70–85%) are frequently used. The most common compositions are based on lead borates (leaded borate glass or borosilicate glass). 
Smaller amounts of zinc oxide or aluminium oxide can be added for increasing chemical stability. Phosphate glasses can also be employed. Zinc oxide, bismuth trioxide, and copper(II) oxide can be added for influencing the thermal expansion; unlike the alkali oxides, these lower the softening point without increasing the thermal expansion. To achieve process temperatures below 450 °C, leaded glass or lead silicate glass is used. The glass frit is a paste consisting of glass powder, organic binder, inorganic fillers and solvents. The low-melting glass is milled into a powder (grain size < 15 μm) and mixed with an organic binder, forming a printable viscous paste. Inorganic fillers, e.g. cordierite particles (e.g. Mg2Al3 [AlSi5O18]) or barium silicate, are added to the melted glass paste to influence properties, e.g. lowering the mismatch of thermal expansion coefficients between silicon and glass frit. The solvents are used to adjust the viscosity of the organic binder. Several glass frit pastes are commercially available, e.g. FERRO FX-11-0366, and every single one needs individual handling after deposition. The choice of the paste depends on various factors, e.g. deposition method, substrate material and process temperatures. The glass used for MEMS applications consists of glass particles and lead oxide. The latter lowers the glass transition temperature below 400 °C. The reduction of lead oxide by the silicon leads to the formation of lead precipitates at the silicon-glass interface. These precipitates decrease the strength of the bond and are reliability risks that have to be considered for the lifetime predictions of the devices. 
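As a rough, back-of-the-envelope sketch of why the fillers are used to reduce the CTE mismatch between glass frit and silicon (the material parameters below are assumed, order-of-magnitude values, not data from this article's references): on cooling from the bonding temperature, a residual CTE mismatch Δα produces a thermal strain Δα·ΔT and, in a simple biaxial-film approximation, a stress of roughly E/(1 − ν)·Δα·ΔT in the glass layer.

```python
def thermal_mismatch_stress(delta_alpha_per_K, delta_T_K, youngs_modulus_Pa=70e9, poisson_ratio=0.2):
    """Approximate biaxial thermal stress (Pa) in a thin glass layer on a silicon wafer.

    Assumed illustrative values: Young's modulus ~70 GPa and Poisson's ratio ~0.2
    for the glass; these are not taken from the article's references.
    """
    strain = delta_alpha_per_K * delta_T_K
    return youngs_modulus_Pa / (1.0 - poisson_ratio) * strain

delta_T = 430.0 - 25.0                        # cooling from a ~430 °C bonding temperature
for mismatch_ppm_per_K in (0.5, 2.0, 4.0):    # hypothetical residual CTE mismatches
    sigma = thermal_mismatch_stress(mismatch_ppm_per_K * 1e-6, delta_T)
    print(f"CTE mismatch {mismatch_ppm_per_K} ppm/K -> residual stress ~ {sigma / 1e6:.0f} MPa")
```

Under these assumptions, even a mismatch of a few ppm/K translates into tens to hundreds of MPa of residual stress, which illustrates why matching the CTE (and adding fillers such as cordierite) matters for the reliability of the bond frame.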
Newer compositions lowered the usage temperature by reducing the lead(II) oxide content down from 70%, increasing the zinc oxide content, and adding titanium dioxide, bismuth(III) oxide and some other components. The high thermal expansion of such glass can be reduced by a suitable ceramic filler. Lead-free solder glasses were also developed. Phosphate glasses with low melting temperature were developed. One such composition is phosphorus pentoxide, lead(II) oxide, and zinc oxide, with addition of lithium and some other oxides. Electrically conductive glass solders can also be prepared. Advantages The following advantages result from using the glass frit bonding procedure: screen printing process applicable on thin, structured wafer no electrical potentials during bonding process necessary low stress due to low bonding temperature selective bonding based on structured intermediate glass layer bonding of rough wafer surfaces no outgassing after bonding, better chemical durability, higher strength compared to organic adhesives high reliability and stable hermetical sealing easier process compared to metallic or eutectic layer procedures References Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding
Glass frit bonding
[ "Materials_science", "Engineering" ]
3,102
[ "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Electronic engineering", "Semiconductor technology" ]
886,766
https://en.wikipedia.org/wiki/Bell%20test
A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. Named for John Stewart Bell, the experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. The test empirically evaluates the implications of Bell's theorem. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave. Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". Bell inequality violations are also used in some quantum cryptography protocols, whereby a spy's presence is detected when Bell's inequalities cease to be violated. Overview The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision. In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox which came to be known as the "EPR paradox" after the three authors. It arises if any effect felt in one location is not the result of a cause that occurred in its past light cone, relative to its location. This action at a distance seems to violate causality, by allowing information between the two locations to travel faster than the speed of light. However, it is a common misconception to think that any information can be shared between two observers faster than the speed of light using entangled particles; the hypothetical information transfer here is between the particles. See no-communication theorem for further explanation. Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables. 
Furthermore, Einstein argued that the hidden variables should obey the condition of locality: Whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents. In particular, Einstein himself did not approve of the way Podolsky had stated the problem in the famous EPR paper. In 1964, John Stewart Bell proposed his famous theorem, which states that no physical theory of hidden local variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments. A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to what was predicted by local realism and those predicted by quantum mechanics. In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another. Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden-variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated. To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables. These efforts to experimentally validate violations of the Bell inequalities resulted in John Clauser, Alain Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics. Conduct of optical Bell test experiments In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels. A typical CHSH (two-channel) experiment The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated. 
Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (equation (2) shown below). The settings a, a′, b and b′ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality. For each selected value of a and b, the numbers of coincidences in each category (N++, N−−, N+− and N−+) are recorded. The experimental estimate for E(a, b) is then calculated as: E(a, b) = [N++ + N−− − N+− − N−+] / [N++ + N−− + N+− + N−+]   (1) Once all four E's have been estimated, an experimental estimate of the test statistic S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)   (2) can be found. If S is numerically greater than 2 it has infringed the CHSH inequality. The experiment is declared to have supported the QM prediction and ruled out all local hidden-variable theories. A strong assumption has had to be made, however, to justify use of expression (2), namely, that the sample of detected pairs is representative of the pairs emitted by the source. Denial of this assumption is called the fair sampling loophole. A typical CH74 (single-channel) experiment Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use. As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic S = [N(a, b) − N(a, b′) + N(a′, b) + N(a′, b′) − N(a′, ∞) − N(∞, b)] / N(∞, ∞),   (3) where the symbol ∞ indicates absence of a polariser. If S exceeds 0 then the experiment is declared to have infringed the CH inequality and hence to have refuted local hidden-variables. This inequality is known as CH inequality instead of CHSH as it was also derived in a 1974 article by Clauser and Horne more rigorously and under weaker assumptions. Experimental assumptions In addition to the theoretical assumptions, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating S, but that this is true is not considered by some to be obvious. There may be synchronisation problems — ambiguity in recognising pairs because in practice they will not be detected at exactly the same time. Nevertheless, despite all the deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of a secure information transfer, which utilizes the so-called quantum cryptography (involving entangled states of pairs of particles). 
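To make the arithmetic of the two-channel test concrete, the following minimal sketch (illustrative only, not drawn from any of the experiments cited in this article) estimates E(a, b) via equation (1) at the standard Bell test angles and combines the estimates into the CHSH statistic of equation (2). The "counts" are hypothetical, idealized numbers generated from the quantum-mechanical correlation E(a, b) = cos 2(a − b) for a polarization-entangled photon pair.

```python
import math

def correlation(n_pp, n_pm, n_mp, n_mm):
    """Estimate E(a, b) from the four coincidence categories, as in equation (1)."""
    total = n_pp + n_pm + n_mp + n_mm
    return (n_pp + n_mm - n_pm - n_mp) / total

def ideal_counts(a_deg, b_deg, n_pairs=1000.0):
    """Hypothetical expected counts for the ideal QM correlation E = cos 2(a - b).

    P(same outcome) = (1 + E) / 2; the four categories are split evenly within
    the 'same' and 'different' groups. A real experiment records integer counts.
    """
    e = math.cos(math.radians(2.0 * (a_deg - b_deg)))
    same = n_pairs * (1.0 + e) / 2.0     # N++ plus N--
    diff = n_pairs - same                # N+- plus N-+
    return same / 2.0, diff / 2.0, diff / 2.0, same / 2.0   # N++, N+-, N-+, N--

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5     # the standard "Bell test angles" in degrees
E = {pair: correlation(*ideal_counts(*pair))
     for pair in [(a, b), (a, b2), (a2, b), (a2, b2)]}

S = E[(a, b)] - E[(a, b2)] + E[(a2, b)] + E[(a2, b2)]       # equation (2)
print(f"S = {S:.3f}; local hidden variables require |S| <= 2, "
      f"quantum mechanics allows up to 2*sqrt(2) = {2 * math.sqrt(2):.3f}")
```

With idealized counts the estimate reproduces the maximal quantum value S = 2√2 ≈ 2.828; a real experiment works with finite integer counts and statistical uncertainties, so the estimated S scatters around, and generally falls below, this ideal value.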
Notable experiments Over the past half century, a great number of Bell test experiments have been conducted. The experiments are commonly interpreted to rule out local hidden-variable theories, and in 2015 an experiment was performed that is not subject to either the locality loophole or the detection loophole (Hensen et al.). An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities. Some of the best known and recent experiments include: Kasday, Ullman and Wu (1970) Leonard Ralph Kasday, Jack R. Ullman and Chien-Shiung Wu carried out the first experimental Bell test, using photon pairs produced by positronium decay and analyzed by Compton scattering. The experiment observed photon polarization correlations consistent with quantum predictions and inconsistent with local realistic models that obey the known polarization dependence of Compton scattering. Due to the low polarization selectivity of Compton scattering, the results did not violate a Bell inequality. Freedman and Clauser (1972) Stuart J. Freedman and John Clauser carried out the first Bell test that observed a Bell inequality violation, using Freedman's inequality, a variant on the CH74 inequality. Aspect et al. (1982) Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell). Tittel et al. (1998) The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used. Weihs et al. (1998): experiment under "strict Einstein locality" conditions In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an experiment that closed the "locality" loophole, improving on Aspect's of 1982. The choice of detector was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory. Pan et al. (2000) experiment on the GHZ state This is the first of new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles. Rowe et al. (2001): the first to close the detection loophole The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%. Go et al. 
(Belle collaboration): Observation of Bell inequality violation in B mesons Using semileptonic B0 decays of Υ(4S) at Belle experiment, a clear violation of Bell Inequality in particle-antiparticle correlation is observed. Gröblacher et al. (2007) test of Leggett-type non-local realist theories A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden-variable theory consistent with quantum mechanics must be highly counterintuitive. Salart et al. (2008): separation in a Bell Test This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors. Ansmann et al. (2009): overcoming the detection loophole in solid state This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters. Giustina et al. (2013), Larsson et al (2014): overcoming the detection loophole for photons The detection loophole for photons has been closed for the first time by Marissa Giustina, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments. Christensen et al. (2013): overcoming the detection loophole for photons The Christensen et al. (2013) experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings (one for each of the four pairs of settings). The experiment was not pulsed so that formation of "pairs" from the two records of measurement results (Alice and Bob) had to be done after the experiment which in fact exposes the experiment to the coincidence loophole. This led to a reanalysis of the experimental data in a way which removed the coincidence loophole, and fortunately the new analysis still showed a violation of the appropriate CHSH or CH inequality. On the other hand, the Christensen et al. experiment was pulsed and measurement settings were frequently reset in a random way, though only once every 1000 particle pairs, not every time. Hensen et al., Giustina et al., Shalm et al. (2015): "loophole-free" Bell tests In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally. The first published experiment by Hensen et al. used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality (S = 2.42 ± 0.20). Thereby the local-realist hypothesis could be rejected with a p-value of 0.039. Both simultaneously published experiments by Giustina et al. and Shalm et al. used entangled photons to obtain a Bell inequality violation with high statistical significance (p-value ≪10−6). Notably, the experiment by Shalm et al. 
also combined three types of (quasi-)random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the “'Cultural' pseudorandom source” which involved using bit strings from popular media such as the Back to the Future films, Star Trek: Beyond the Final Frontier, Monty Python and the Holy Grail, and the television shows Saved by the Bell and Dr. Who. Schmied et al. (2016): Detection of Bell correlations in a many-body system Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude for the first time Bell correlation in a many-body system composed by about 480 atoms in a Bose–Einstein condensate. Even though loopholes were not closed, this experiment shows the possibility of observing Bell correlations in the macroscopic regime. Handsteiner et al. (2017): "Cosmic Bell Test" - Measurement Settings from Milky Way Stars Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth. The experiment “represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant.” Rosenfeld et al. (2017): "Event-Ready" Bell test with entangled atoms and closed detection and locality loopholes Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms with a separation distance of 398 meters in which the detection loophole, the locality loophole, and the memory loophole were closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10−16 when taking into account 7 months of data and 55000 events or an upper bound of P = 2.57×10−9 from a single run with 10000 events. The BIG Bell Test Collaboration (2018): “Challenging local realism with human choices” An international collaborative scientific effort used arbitrary human choice to define measurement settings instead of using random number generators. Assuming that human free will exists, this would close the “freedom-of-choice loophole”. Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant. Rauch et al (2018): measurement settings from distant quasars In 2018, an international team used light from two quasars (one whose light was generated approximately eight billion years ago and the other approximately twelve billion years ago) as the basis for their measurement settings. This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit (that being the creation of the universe 13.8 billion years ago). The 2019 PBS Nova episode Einstein's Quantum Riddle documents this "cosmic Bell test" measurement, with footage of the scientific team on-site at the high-altitude Teide Observatory located in the Canary Islands. 
Storz et al. (2023): Loophole-free Bell inequality violation with superconducting circuits In 2023, an international team led by the group of Andreas Wallraff at ETH Zurich demonstrated a loophole-free violation of the CHSH inequality with superconducting circuits deterministically entangled via a cryogenic link spanning a distance of 30 meters. Loopholes Though the series of increasingly sophisticated Bell test experiments has convinced the physics community that local hidden-variable theories are indefensible, they can never be excluded entirely. For example, the hypothesis of superdeterminism in which all experiments and outcomes (and everything else) are predetermined can never be excluded (because it is unfalsifiable). Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting. Experimenters had repeatedly voiced that loophole-free tests could be expected in the near future. In 2015, a loophole-free Bell violation was reported using entangled diamond spins over a distance of 1.3 kilometres and corroborated by two experiments using entangled photon pairs. The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test. David Kaiser of MIT told the New York Times in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a method that was not detected in experiments. Detection loophole A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Philip M. Pearle in 1970, who devised a local hidden variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e., that the small sample is actually representative of the whole, is called the fair sampling assumption. To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η, defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N. 
David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of η > 2(√2 − 1) ≈ 0.83 is required for a loophole-free violation. Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3, which is the optimal bound for the CHSH inequality. Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated at still lower efficiencies. Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, superconducting qubits, and nitrogen-vacancy centers. These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems. Locality loophole One of the assumptions of Bell's theorem is that of locality, namely that the choice of setting at a measurement site does not influence the result of the other. The motivation for this assumption is the theory of relativity, which prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have space-like separation between its measurement events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites. The first experiment that strived to respect this condition was Aspect's 1982 experiment. In it the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment. Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by a distance of 144 km. Coincidence loophole In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between hidden variables carried by the particles and the detector settings encountered at the measurement station. The coincidence loophole can be ruled out entirely simply by working with a pre-fixed lattice of detection windows which are short enough that most pairs of events occurring in the same window do originate with the same emission and long enough that a true pair is not separated by a window boundary. Memory loophole In most experiments, measurements are repeatedly made at the same two locations. A local hidden variable theory could exploit the memory of past measurement settings and outcomes in order to increase the violation of a Bell inequality. Moreover, physical parameters might be varying in time. 
It has been shown that, provided each new pair of measurements is done with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment. Superdeterminism A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such freedom is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is determined by the system being measured is known as superdeterministic. Many-worlds loophole The many-worlds interpretation, also known as the Hugh Everett interpretation, is deterministic and has local dynamics, consisting of the unitary part of quantum mechanics without collapse. Bell's theorem does not apply because of an implicit assumption that measurements have a single outcome. See also Determinism – Quantum and classical mechanics Einstein's thought experiments Principle of locality Quantum indeterminacy References Further reading Quantum measurement
Bell test
[ "Physics" ]
5,757
[ "Quantum measurement", "Quantum mechanics" ]
886,856
https://en.wikipedia.org/wiki/Colloidal%20gold
Colloidal gold is a sol or colloidal suspension of nanoparticles of gold in a fluid, usually water. The colloid is coloured usually either wine red (for spherical particles less than 100 nm) or blue-purple (for larger spherical particles or nanorods). Due to their optical, electronic, and molecular-recognition properties, gold nanoparticles are the subject of substantial research, with many potential or promised applications in a wide variety of areas, including electron microscopy, electronics, nanotechnology, materials science, and biomedicine. The properties of colloidal gold nanoparticles, and thus their potential applications, depend strongly upon their size and shape. For example, rodlike particles have both a transverse and longitudinal absorption peak, and anisotropy of the shape affects their self-assembly. History Used since ancient times as a method of staining glass, colloidal gold was used in the 4th-century Lycurgus Cup, which changes color depending on the location of light source. During the Middle Ages, soluble gold, a solution containing gold salt, had a reputation for its curative property for various diseases. In 1618, Francis Anthony, a philosopher and member of the medical profession, published a book called Panacea Aurea, sive tractatus duo de ipsius Auro Potabili (Latin: gold potion, or two treatments of potable gold). The book introduces information on the formation of colloidal gold and its medical uses. About half a century later, English botanist Nicholas Culpepper published a book in 1656, Treatise of Aurum Potabile, solely discussing the medical uses of colloidal gold. In 1676, Johann Kunckel, a German chemist, published a book on the manufacture of stained glass. In his book Valuable Observations or Remarks About the Fixed and Volatile Salts-Auro and Argento Potabile, Spiritu Mundi and the Like, Kunckel assumed that the pink color of Aurum Potabile came from small particles of metallic gold, not visible to human eyes. In 1842, John Herschel invented a photographic process called chrysotype (from the Greek meaning "gold") that used colloidal gold to record images on paper. Modern scientific evaluation of colloidal gold did not begin until Michael Faraday's work in the 1850s. In 1856, in a basement laboratory of Royal Institution, Faraday accidentally created a ruby red solution while mounting pieces of gold leaf onto microscope slides. Since he was already interested in the properties of light and matter, Faraday further investigated the optical properties of the colloidal gold. He prepared the first pure sample of colloidal gold, which he called 'activated gold', in 1857. He used phosphorus to reduce a solution of gold chloride. The colloidal gold Faraday made 150 years ago is still optically active. For a long time, the composition of the 'ruby' gold was unclear. Several chemists suspected it to be a gold tin compound, due to its preparation. Faraday recognized that the color was actually due to the miniature size of the gold particles. He noted the light scattering properties of suspended gold microparticles, which is now called Faraday-Tyndall effect. In 1898, Richard Adolf Zsigmondy prepared the first colloidal gold in diluted solution. Apart from Zsigmondy, Theodor Svedberg, who invented ultracentrifugation, and Gustav Mie, who provided the theory for scattering and absorption by spherical particles, were also interested in the synthesis and properties of colloidal gold. 
With advances in various analytical technologies in the 20th century, studies on gold nanoparticles have accelerated. Advanced microscopy methods, such as atomic force microscopy and electron microscopy, have contributed the most to nanoparticle research. Due to their comparably easy synthesis and high stability, various gold particles have been studied for their practical uses. Different types of gold nanoparticle are already used in many industries. Physical properties Optical Colloidal gold has been used by artists for centuries because of the nanoparticle's interactions with visible light. Gold nanoparticles absorb and scatter light resulting in colours ranging from vibrant reds (smaller particles) to blues to black and finally to clear and colorless (larger particles), depending on particle size, shape, local refractive index, and aggregation state. These colors occur because of a phenomenon called localized surface plasmon resonance (LSPR), in which conduction electrons on the surface of the nanoparticle oscillate in resonance with incident light. Effect of size, shape, composition and environment As a general rule, the wavelength of light absorbed increases as a function of increasing nanoparticle size. Both the surface plasmon resonance frequency and scattering intensity depend on the size, shape, composition and environment of the nanoparticles. This phenomenon may be quantified by use of the Mie scattering theory for spherical nanoparticles. Nanoparticles with diameters of 30–100 nm may be detected easily by a microscope, and particles with a size of 40 nm may even be detected by the naked eye when the concentration of the particles is 10⁻⁴ M or greater. The scattering from a 60 nm nanoparticle is about 10⁵ times stronger than the emission from a fluorescein molecule. Effect of local refractive index Changes in the apparent color of a gold nanoparticle solution can also be caused by the environment in which the colloidal gold is suspended. The optical properties of gold nanoparticles depend on the refractive index near the nanoparticle surface, so the molecules directly attached to the nanoparticle surface (i.e. nanoparticle ligands) and the nanoparticle solvent may both influence the observed optical features. As the refractive index near the gold surface increases, the LSPR shifts to longer wavelengths. In addition to solvent environment, the extinction peak can be tuned by coating the nanoparticles with non-conducting shells such as silica, biomolecules, or aluminium oxide. Effect of aggregation When gold nanoparticles aggregate, the optical properties of the particle change, because the effective particle size, shape, and dielectric environment all change. Medical research Electron microscope labelling Colloidal gold and various derivatives have long been among the most widely used labels for antigens in biological electron microscopy. Colloidal gold particles can be attached to many traditional biological probes such as antibodies, lectins, superantigens, glycans, nucleic acids, and receptors. Particles of different sizes are easily distinguishable in electron micrographs, allowing simultaneous multiple-labelling experiments. In addition to biological probes, gold nanoparticles can be transferred to various mineral substrates, such as mica, single crystal silicon, and atomically flat gold(III), to be observed under atomic force microscopy (AFM). 
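As a brief numerical aside on the size and environment dependence of the optical properties discussed above (a sketch only; the dielectric values used are rough assumed numbers, not tabulated optical constants): in the small-particle limit of Mie theory, the extinction cross-section of a sphere of radius R in a medium of dielectric constant ε_m is σ_ext = 24π²R³ε_m^(3/2)/λ · ε₂/[(ε₁ + 2ε_m)² + ε₂²], which peaks near the wavelength where ε₁ ≈ −2ε_m — the localized surface plasmon resonance.

```python
import math

def sigma_ext_quasistatic(radius_nm, wavelength_nm, eps_gold, eps_medium=1.77):
    """Extinction cross-section (nm^2) of a small metal sphere in the dipole (quasi-static) limit.

    sigma_ext = 24 * pi^2 * R^3 * eps_m^(3/2) / lambda * eps2 / ((eps1 + 2*eps_m)^2 + eps2^2),
    where eps_gold = eps1 + i*eps2 is the dielectric function of gold at this wavelength.
    """
    eps1, eps2 = eps_gold.real, eps_gold.imag
    prefactor = 24.0 * math.pi**2 * radius_nm**3 * eps_medium**1.5 / wavelength_nm
    return prefactor * eps2 / ((eps1 + 2.0 * eps_medium)**2 + eps2**2)

# Rough, assumed dielectric value for gold near its ~520 nm plasmon band (illustrative only);
# eps_medium = 1.77 corresponds to water. Resonance occurs roughly where eps1 ~ -2 * eps_medium.
eps_gold_520nm = complex(-4.0, 2.5)
for radius_nm in (5.0, 10.0, 20.0):
    print(radius_nm, "nm ->", round(sigma_ext_quasistatic(radius_nm, 520.0, eps_gold_520nm)), "nm^2")
```

This dipole approximation holds only for particles much smaller than the wavelength (roughly below ~40 nm diameter); the larger particles and the nanorods mentioned above require the full Mie theory or numerical methods.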
Drug delivery system Gold nanoparticles can be used to optimize the biodistribution of drugs to diseased organs, tissues or cells, in order to improve and target drug delivery. Nanoparticle-mediated drug delivery is feasible only if the drug distribution is otherwise inadequate. These cases include drug targeting of unstable (proteins, siRNA, DNA), delivery to the difficult sites (brain, retina, tumors, intracellular organelles) and drugs with serious side effects (e.g. anti-cancer agents). The performance of the nanoparticles depends on the size and surface functionalities in the particles. Also, the drug release and particle disintegration can vary depending on the system (e.g. biodegradable polymers sensitive to pH). An optimal nanodrug delivery system ensures that the active drug is available at the site of action for the correct time and duration, and their concentration should be above the minimal effective concentration (MEC) and below the minimal toxic concentration (MTC). Gold nanoparticles are being investigated as carriers for drugs such as Paclitaxel. The administration of hydrophobic drugs require molecular encapsulation and it is found that nanosized particles are particularly efficient in evading the reticuloendothelial system. Tumor detection In cancer research, colloidal gold can be used to target tumors and provide detection using SERS (surface enhanced Raman spectroscopy) in vivo. These gold nanoparticles are surrounded with Raman reporters, which provide light emission that is over 200 times brighter than quantum dots. It was found that the Raman reporters were stabilized when the nanoparticles were encapsulated with a thiol-modified polyethylene glycol coat. This allows for compatibility and circulation in vivo. To specifically target tumor cells, the polyethylenegylated gold particles are conjugated with an antibody (or an antibody fragment such as scFv), against, e.g. epidermal growth factor receptor, which is sometimes overexpressed in cells of certain cancer types. Using SERS, these pegylated gold nanoparticles can then detect the location of the tumor. Gold nanoparticles accumulate in tumors, due to the leakiness of tumor vasculature, and can be used as contrast agents for enhanced imaging in a time-resolved optical tomography system using short-pulse lasers for skin cancer detection in mouse model. It is found that intravenously administered spherical gold nanoparticles broadened the temporal profile of reflected optical signals and enhanced the contrast between surrounding normal tissue and tumors. Gene therapy Gold nanoparticles have shown potential as intracellular delivery vehicles for siRNA oligonucleotides with maximal therapeutic impact. Gold nanoparticles show potential as intracellular delivery vehicles for antisense oligonucleotides (single and double stranded DNA) by providing protection against intracellular nucleases and ease of functionalization for selective targeting. Photothermal agents Gold nanorods are being investigated as photothermal agents for in-vivo applications. Gold nanorods are rod-shaped gold nanoparticles whose aspect ratios tune the surface plasmon resonance (SPR) band from the visible to near-infrared wavelength. The total extinction of light at the SPR is made up of both absorption and scattering. For the smaller axial diameter nanorods (~10 nm), absorption dominates, whereas for the larger axial diameter nanorods (>35 nm) scattering can dominate. 
As a consequence, for in-vivo studies, small diameter gold nanorods are being used as photothermal converters of near-infrared light due to their high absorption cross-sections. Since near-infrared light transmits readily through human skin and tissue, these nanorods can be used as ablation components for cancer and other targets. When coated with polymers, gold nanorods have been observed to circulate in-vivo with half-lives longer than 6 hours, bodily residence times around 72 hours, and little to no uptake in any internal organs except the liver. Despite the unquestionable success of gold nanorods as photothermal agents in preclinical research, they have yet to obtain approval for clinical use because the size is above the renal excretion threshold. In 2019, the first NIR-absorbing plasmonic ultrasmall-in-nano architecture was reported, which jointly combines: (i) a suitable photothermal conversion for hyperthermia treatments, (ii) the possibility of multiple photothermal treatments and (iii) renal excretion of the building blocks after the therapeutic action. Radiotherapy dose enhancer Considerable interest has been shown in the use of gold and other heavy-atom-containing nanoparticles to enhance the dose delivered to tumors. Since the gold nanoparticles are taken up by the tumors more than the nearby healthy tissue, the dose is selectively enhanced. The biological effectiveness of this type of therapy seems to be due to the local deposition of the radiation dose near the nanoparticles. This mechanism is the same as occurs in heavy ion therapy. Detection of toxic gas Researchers have developed simple inexpensive methods for on-site detection of hydrogen sulfide present in air based on the antiaggregation of gold nanoparticles (AuNPs). Dissolving the gas into a weak alkaline buffer solution leads to the formation of HS−, which can stabilize AuNPs and ensure they maintain their red color, allowing for visual detection of toxic levels of hydrogen sulfide. Gold nanoparticle based biosensor Gold nanoparticles are incorporated into biosensors to enhance their stability, sensitivity, and selectivity. Nanoparticle properties such as small size, high surface-to-volume ratio, and high surface energy allow immobilization of a large range of biomolecules. Gold nanoparticles, in particular, can also act as "electron wires" to transport electrons, and their amplification effect on electromagnetic light allows them to function as signal amplifiers. The main types of gold nanoparticle based biosensors are optical and electrochemical biosensors. Optical biosensor Gold nanoparticles improve the sensitivity of optical sensors in response to the change in the local refractive index. The angle of the incident light for surface plasmon resonance, an interaction between light waves and conducting electrons in metal, changes when other substances are bound to the metal surface. Because gold is very sensitive to its surroundings' dielectric constant, binding of an analyte significantly shifts the gold nanoparticle's SPR and therefore allows for more sensitive detection. Gold nanoparticles can also amplify the SPR signal. When the plasmon wave passes through the gold nanoparticle, the charge density in the wave and the electrons in the gold interact and result in a higher energy response, referred to as electron coupling. When the analyte and bio-receptor both bind to the gold, the apparent mass of the analyte increases and therefore amplifies the signal. 
These properties have been used to build a DNA sensor with 1000-fold greater sensitivity than without the Au NPs. Humidity sensors have also been built in which the spacing between molecules changes with humidity; this change in spacing also results in a change of the Au NPs' LSPR. Electrochemical biosensor Electrochemical sensors convert biological information into electrical signals that can be detected. The conductivity and biocompatibility of Au NPs allow them to act as "electron wires", transferring electrons between the electrode and the active site of the enzyme. This can be accomplished in two ways: attaching the Au NP to either the enzyme or the electrode. A GNP-glucose oxidase monolayer electrode has been constructed using these two methods. The Au NPs allowed more freedom in the enzyme's orientation and therefore more sensitive and stable detection. Au NPs also act as an immobilization platform for the enzyme. Most biomolecules denature or lose their activity when they interact directly with the electrode. The biocompatibility and high surface energy of Au allow it to bind to a large amount of protein without altering the protein's activity, resulting in a more sensitive sensor. Moreover, Au NPs can also catalyze biological reactions; gold nanoparticles under 2 nm have shown catalytic activity toward the oxidation of styrene. Immunological biosensor Gold nanoparticles have been coated with peptides and glycans for use in immunological detection methods. The possibility of using glyconanoparticles in ELISA was unexpected, but the method seems to have a high sensitivity and thus offers potential for development of specific assays for diagnostic identification of antibodies in patient sera. Thin films Gold nanoparticles capped with organic ligands, such as alkanethiol molecules, can self-assemble into large monolayers (>cm2). The particles are first prepared in an organic solvent, such as chloroform or toluene, and are then spread into monolayers either on a liquid surface or on a solid substrate. Such interfacial thin films of nanoparticles have a close relationship with Langmuir-Blodgett monolayers made from surfactants. The mechanical properties of nanoparticle monolayers have been studied extensively. For 5 nm spheres capped with dodecanethiol, the Young's modulus of the monolayer is on the order of GPa. The mechanics of the membranes are guided by strong interactions between ligand shells on adjacent particles. Upon fracture, the films crack perpendicular to the direction of strain at a fracture stress of 11 ± 2.6 MPa, comparable to that of cross-linked polymer films. Free-standing nanoparticle membranes exhibit bending rigidity on the order of 10 eV, higher than what is predicted in theory for continuum plates of the same thickness, due to nonlocal microstructural constraints such as nonlocal coupling of particle rotational degrees of freedom. On the other hand, resistance to bending is found to be greatly reduced in nanoparticle monolayers that are supported at the air/water interface, possibly due to screening of ligand interactions in a wet environment. Surface chemistry In many different types of colloidal gold syntheses, the interface of the nanoparticles can display widely different character – ranging from an interface similar to a self-assembled monolayer to a disordered boundary with no repeating patterns. 
Beyond the Au-Ligand interface, conjugation of the interfacial ligands with various functional moieties (from small organic molecules to polymers to DNA to RNA) affords colloidal gold much of its vast functionality. Ligand exchange/functionalization After initial nanoparticle synthesis, colloidal gold ligands are often exchanged with new ligands designed for specific applications. For example, Au NPs produced via the Turkevich-style (or citrate reduction) method are readily reacted via ligand exchange reactions, due to the relatively weak binding between the carboxyl groups and the surfaces of the NPs. This ligand exchange can produce conjugation with a number of biomolecules from DNA to RNA to proteins to polymers (such as PEG) to increase biocompatibility and functionality. For example, ligands have been shown to enhance catalytic activity by mediating interactions between adsorbates and the active gold surfaces for specific oxygenation reactions. Ligand exchange can also be used to promote phase transfer of the colloidal particles. Ligand exchange is also possible with alkanethiol-arrested NPs produced from the Brust-type synthesis method, although higher temperatures are needed to promote the rate of ligand detachment. An alternative method for further functionalization is achieved through the conjugation of the ligands with other molecules, though this method can cause the colloidal stability of the Au NPs to break down. Ligand removal In many cases, as in various high-temperature catalytic applications of Au, the removal of the capping ligands produces more desirable physicochemical properties. The removal of ligands from colloidal gold while maintaining a relatively constant number of Au atoms per Au NP can be difficult due to the tendency for these bare clusters to aggregate. The removal of ligands is partially achievable by simply washing away all excess capping ligands, though this method is ineffective in removing all of the capping ligands. More often, ligand removal is achieved under high temperature or light ablation, followed by washing. Alternatively, the ligands can be electrochemically etched off. Surface structure and chemical environment The precise structure of the ligands on the surface of colloidal gold NPs impacts the properties of the colloidal gold particles. Binding conformations and surface packing of the capping ligands at the surface of the colloidal gold NPs tend to differ greatly from bulk surface model adsorption, largely due to the high curvature observed at the nanoparticle surfaces. Thiolate-gold interfaces at the nanoscale have been well-studied, and the thiolate ligands are observed to pull Au atoms off of the surface of the particles to form "staple" motifs that have significant Thiyl-Au(0) character. The citrate-gold surface, on the other hand, is relatively less-studied due to the vast number of binding conformations of the citrate to the curved gold surfaces. A study performed in 2014 identified that the most-preferred binding of the citrate involves two carboxylic acids, and the hydroxyl group of the citrate binds three surface metal atoms. Health and safety As gold nanoparticles (AuNPs) are further investigated for targeted drug delivery in humans, their toxicity needs to be considered. For the most part, it is suggested that AuNPs are biocompatible, but the concentrations at which they become toxic need to be determined, as does whether those concentrations fall within the range of concentrations actually used. Toxicity can be tested in vitro and in vivo. 
In vitro toxicity results can vary depending on the type of cellular growth media (with different protein compositions), the method used to determine cellular toxicity (cell health, cell stress, how many particles are taken up by a cell), and the capping ligands in solution. In vivo assessments can determine the general health of an organism (abnormal behavior, weight loss, average life span) as well as tissue-specific toxicology (kidney, liver, blood) and inflammation and oxidative responses. In vitro experiments are more popular than in vivo experiments because they are simpler to perform. Toxicity and hazards in synthesis While AuNPs themselves appear to have low or negligible toxicity, and the literature shows that the toxicity has much more to do with the ligands than with the particles themselves, their synthesis involves chemicals that are hazardous. Sodium borohydride, a harsh reagent, is used to reduce the gold ions to gold metal. The gold ions usually come from chloroauric acid, a potent acid. Because of the high toxicity and hazard of reagents used to synthesize AuNPs, the need for more "green" methods of synthesis arose. Toxicity due to capping ligands Some of the capping ligands associated with AuNPs can be toxic while others are nontoxic. In gold nanorods (AuNRs), it has been shown that a strong cytotoxicity was associated with CTAB-stabilized AuNRs at low concentration, but it is thought that free CTAB was the culprit in toxicity. Modifications that overcoat these AuNRs reduce this toxicity in human colon cancer cells (HT-29) by preventing CTAB molecules from desorbing from the AuNRs back into the solution. Ligand toxicity can also be seen in AuNPs. Compared to the 90% toxicity of HAuCl4 at the same concentration, AuNPs with carboxylate termini were shown to be non-toxic. Large AuNPs conjugated with biotin, cysteine, citrate, and glucose were not toxic in human leukemia cells (K562) for concentrations up to 0.25 M. Also, citrate-capped gold nanospheres (AuNSs) have been proven to be compatible with human blood and did not cause platelet aggregation or an immune response. However, citrate-capped gold nanoparticles of sizes 8–37 nm were found to be lethally toxic for mice, causing shorter lifespans, severe sickness, loss of appetite and weight, hair discoloration, and damage to the liver, spleen, and lungs; gold nanoparticles accumulated in the spleen and liver after traveling through a section of the immune system. There are mixed views on polyethylene glycol (PEG)-modified AuNPs. These AuNPs were found to be toxic in mouse liver by injection, causing cell death and minor inflammation. However, AuNPs conjugated with PEG copolymers showed negligible toxicity towards human colon cells (Caco-2). AuNP toxicity also depends on the overall charge of the ligands. In certain doses, AuNSs that have positively-charged ligands are toxic in monkey kidney cells (Cos-1), human red blood cells, and E. coli because of the AuNSs' interaction with the negatively-charged cell membrane; AuNSs with negatively-charged ligands have been found to be nontoxic in these species. In addition to the previously mentioned in vivo and in vitro experiments, other similar experiments have been performed. Alkylthiolate-AuNPs with trimethylammonium ligand termini mediate the translocation of DNA across mammalian cell membranes in vitro at a high level, which is detrimental to these cells. 
Corneal haze in rabbits has been healed in vivo by using polyethylenimine-capped gold nanoparticles that were transfected with a gene that promotes wound healing and inhibits corneal fibrosis. Toxicity due to size of nanoparticles Toxicity in certain systems can also be dependent on the size of the nanoparticle. AuNSs of size 1.4 nm were found to be toxic in human skin cancer cells (SK-Mel-28), human cervical cancer cells (HeLa), mouse fibroblast cells (L929), and mouse macrophages (J774A.1), while 0.8, 1.2, and 1.8 nm sized AuNSs were less toxic by a six-fold amount and 15 nm AuNSs were nontoxic. There is some evidence for AuNP buildup after injection in in vivo studies, but this is very size dependent. 1.8 nm AuNPs were found to be almost totally trapped in the lungs of rats. Different sized AuNPs were found to build up in the blood, brain, stomach, pancreas, kidneys, liver, and spleen. Biosafety and biokinetics investigations on biodegradable ultrasmall-in-nano architectures have demonstrated that gold nanoparticles are able to avoid metal accumulation in organisms through escaping by the renal pathway. Synthesis Generally, gold nanoparticles are produced in a liquid ("liquid chemical methods") by reduction of chloroauric acid (HAuCl4). To prevent the particles from aggregating, stabilizing agents are added. Citrate acts both as the reducing agent and colloidal stabilizer. They can be functionalized with various organic ligands to create organic-inorganic hybrids with advanced functionality. Turkevich method This simple method was pioneered by J. Turkevich et al. in 1951 and refined by G. Frens in the 1970s. It produces modestly monodisperse spherical gold nanoparticles of around 10–20 nm in diameter. Larger particles can be produced, but at the cost of monodispersity and shape. In this method, hot chloroauric acid is treated with sodium citrate solution, producing colloidal gold. The Turkevich reaction proceeds via formation of transient gold nanowires. These gold nanowires are responsible for the dark appearance of the reaction solution before it turns ruby-red. Capping agents A capping agent is used during nanoparticle synthesis to inhibit particle growth and aggregation. The chemical blocks or reduces reactivity at the periphery of the particle—a good capping agent has a high affinity for the new nuclei. Citrate ions or tannic acid function both as a reducing agent and a capping agent. Less sodium citrate results in larger particles. Brust-Schiffrin method This method was discovered by Brust and Schiffrin in the early 1990s, and can be used to produce gold nanoparticles in organic liquids that are normally not miscible with water (like toluene). It involves the reaction of a chloroauric acid solution with tetraoctylammonium bromide (TOAB) solution in toluene and sodium borohydride as an anti-coagulant and a reducing agent, respectively. Here, the gold nanoparticles will be around 5–6 nm. NaBH4 is the reducing agent, and TOAB is both the phase transfer catalyst and the stabilizing agent. TOAB does not bind to the gold nanoparticles particularly strongly, so the solution will aggregate gradually over the course of approximately two weeks. To prevent this, one can add a stronger binding agent, like a thiol (in particular, alkanethiols), which will bind to gold, producing a near-permanent solution. Alkanethiol-protected gold nanoparticles can be precipitated and then redissolved. 
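To put the particle diameters quoted for these syntheses in perspective, the number of gold atoms per particle can be estimated from the bulk density of gold. The following sketch is illustrative only: it assumes perfectly spherical particles with bulk density, which overestimates the count for the very smallest clusters, and the function name and chosen diameters are not taken from the sources above.

```python
import math

def gold_atoms_per_particle(diameter_nm: float) -> float:
    """Estimate the number of Au atoms in a spherical nanoparticle,
    assuming bulk gold density (19.3 g/cm^3) and molar mass 197 g/mol."""
    avogadro = 6.022e23                      # atoms per mole
    density_g_per_cm3 = 19.3
    molar_mass_g_per_mol = 197.0
    radius_cm = diameter_nm * 1e-7 / 2.0     # 1 nm = 1e-7 cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return volume_cm3 * density_g_per_cm3 / molar_mass_g_per_mol * avogadro

for d in (5, 10, 20):                        # roughly Brust- and Turkevich-sized particles
    print(f"{d:>2} nm particle: ~{gold_atoms_per_particle(d):,.0f} Au atoms")
# about 3,900 atoms at 5 nm, 31,000 at 10 nm, and 247,000 at 20 nm
```

The steep cubic scaling is one reason small changes in diameter matter so much for properties such as catalytic activity and renal clearance discussed above.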
Thiols are better binding agents because of the strong affinity of the gold-sulfur bonds that form when the two substances react with each other. A strongly binding thiol is commonly used to synthesize smaller particles. Some of the phase transfer agent may remain bound to the purified nanoparticles, which may affect physical properties such as solubility. In order to remove as much of this agent as possible, the nanoparticles must be further purified by Soxhlet extraction. Perrault method This approach, discovered by Perrault and Chan in 2009, uses hydroquinone to reduce HAuCl4 in an aqueous solution that contains 15 nm gold nanoparticle seeds. This seed-based method of synthesis is similar to that used in photographic film development, in which silver grains within the film grow through addition of reduced silver onto their surface. Likewise, gold nanoparticles can act in conjunction with hydroquinone to catalyze reduction of ionic gold onto their surface. The presence of a stabilizer such as citrate results in controlled deposition of gold atoms onto the particles, and growth. Typically, the nanoparticle seeds are produced using the citrate method. The hydroquinone method complements that of Frens, as it extends the range of monodisperse spherical particle sizes that can be produced. Whereas the Frens method is ideal for particles of 12–20 nm, the hydroquinone method can produce particles of at least 30–300 nm. Martin method This simple method, discovered by Martin and Eah in 2010, generates nearly monodisperse "naked" gold nanoparticles in water. Precisely controlling the reduction stoichiometry by adjusting the ratio of NaBH4-NaOH ions to HAuCl4-HCl ions within the "sweet zone," along with heating, enables reproducible diameter tuning between 3–6 nm. The aqueous particles are colloidally stable due to their high charge from the excess ions in solution. These particles can be coated with various hydrophilic functionalities, or mixed with hydrophobic ligands for applications in non-polar solvents. In non-polar solvents the nanoparticles remain highly charged, and self-assemble on liquid droplets to form 2D monolayer films of monodisperse nanoparticles. Nanotech studies Bacillus licheniformis can be used in synthesis of gold nanocubes with sizes between 10 and 100 nanometres. Gold nanoparticles are usually synthesized at high temperatures in organic solvents or using toxic reagents. The bacteria produce them in much milder conditions. Navarro et al. method For particles larger than 30 nm, control of the size of spherical gold nanoparticles with low polydispersity remains challenging. In order to provide maximum control over the NP structure, Navarro and co-workers used a modified Turkevich-Frens procedure using sodium acetylacetonate as the reducing agent and sodium citrate as the stabilizer. Sonolysis Another method for the experimental generation of gold particles is by sonolysis. The first method of this type was invented by Baigent and Müller. This work pioneered the use of ultrasound to provide the energy for the processes involved and allowed the creation of gold particles with a diameter of under 10 nm. Another ultrasound-based method reacts an aqueous solution of HAuCl4 with glucose; here the reducing agents are hydroxyl radicals and sugar pyrolysis radicals (forming at the interfacial region between the collapsing cavities and the bulk water), and the morphology obtained is that of nanoribbons with widths of 30–50 nm and lengths of several micrometers. 
These ribbons are very flexible and can bend with angles larger than 90°. When glucose is replaced by cyclodextrin (a glucose oligomer), only spherical gold particles are obtained, suggesting that glucose is essential in directing the morphology toward a ribbon. Block copolymer-mediated method An economical, environmentally benign and fast synthesis methodology for gold nanoparticles using block copolymer has been developed by Sakai et al. In this synthesis methodology, block copolymer plays the dual role of a reducing agent as well as a stabilizing agent. The formation of gold nanoparticles comprises three main steps: reduction of gold salt ions by block copolymers in the solution and formation of gold clusters, adsorption of block copolymers on gold clusters and further reduction of gold salt ions on the surfaces of these gold clusters for the stepwise growth of gold particles, and finally their stabilization by block copolymers. But this method usually has a limited yield (nanoparticle concentration), which does not increase with the increase in the gold salt concentration. Ray et al. improved this synthesis method by enhancing the nanoparticle yield manyfold at ambient temperature. Applications Antibiotic conjugated nanoparticle synthesis Antibiotic functionalized metal nanoparticles have been widely studied as a mode to treat multi-drug resistant bacterial strains. For example, kanamycin capped gold-nanoparticles (Kan-AuPs) showed broad spectrum dose dependent antibacterial activity against both gram positive and gram negative bacterial strains in comparison to kanamycin alone. See also Colloidal silver Fiveling, also called decahedral nanoparticles Gold nanorods Gold nanoparticles in chemotherapy Nanozymes Colloidal gold protein assay References Further reading External links Point-by-point methods for citrate synthesis and hydroquinone synthesis of gold nanoparticles are available here. Gold Nanoparticles by composition Gold
Colloidal gold
[ "Physics", "Chemistry", "Materials_science" ]
7,135
[ "Chemical mixtures", "Condensed matter physics", "Colloids" ]
887,179
https://en.wikipedia.org/wiki/Colpitts%20oscillator
A Colpitts oscillator, invented in 1918 by Canadian-American engineer Edwin H. Colpitts using vacuum tubes, is one of a number of designs for LC oscillators, electronic oscillators that use a combination of inductors (L) and capacitors (C) to produce an oscillation at a certain frequency. The distinguishing feature of the Colpitts oscillator is that the feedback for the active device is taken from a voltage divider made of two capacitors in series across the inductor. Overview The Colpitts circuit, like other LC oscillators, consists of a gain device (such as a bipolar junction transistor, field-effect transistor, operational amplifier, or vacuum tube) with its output connected to its input in a feedback loop containing a parallel LC circuit (tuned circuit), which functions as a bandpass filter to set the frequency of oscillation. The amplifier will have differing input and output impedances, and these need to be coupled into the LC circuit without overly damping it. A Colpitts oscillator uses a pair of capacitors to provide voltage division to couple the energy in and out of the tuned circuit. (It can be considered as the electrical dual of a Hartley oscillator, where the feedback signal is taken from an "inductive" voltage divider consisting of two coils in series (or a tapped coil).) Fig. 1 shows the common-base Colpitts circuit. The inductor L and the series combination of C1 and C2 form the resonant tank circuit, which determines the frequency of the oscillator. The voltage across C2 is applied to the base-emitter junction of the transistor, as feedback to create oscillations. Fig. 2 shows the common-collector version. Here the voltage across C1 provides feedback. The frequency of oscillation is approximately the resonant frequency of the LC circuit, which is the series combination of the two capacitors in parallel with the inductor: f0 = 1 / (2π sqrt(L · C1C2 / (C1 + C2))). The actual frequency of oscillation will be slightly lower due to junction capacitances and resistive loading of the transistor. As with any oscillator, the amplification of the active component should be marginally larger than the attenuation of the resonator losses and its voltage division, to obtain stable operation. Thus, a Colpitts oscillator used as a variable-frequency oscillator (VFO) performs best when a variable inductance is used for tuning, as opposed to tuning just one of the two capacitors. If tuning by variable capacitor is needed, it should be done with a third capacitor connected in parallel to the inductor (or in series as in the Clapp oscillator). Practical example Fig. 3 shows an example with component values. Instead of field-effect transistors, other active components such as bipolar junction transistors or vacuum tubes, capable of producing gain at the desired frequency, could be used. The common-gate amplifier has a low input impedance and a high output impedance. Therefore, the amplifier input, the source, is connected to the low impedance tap of the LC circuit L1, C1, C2, C3 and the amplifier output, the drain, is connected to the high impedance top of the LC circuit. The resistor R1 sets the operating point to 0.5 mA drain current in the absence of oscillation. The output is at the low impedance tap and can drive some load; even so, this circuit has low harmonic distortion. An additional variable capacitor between the drain of J1 and ground allows the frequency of the circuit to be changed. The load resistor RL is part of the simulation, not part of the circuit. 
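As a quick numerical check of the resonance relation above, the sketch below evaluates the approximate oscillation frequency from the series combination of the two feedback capacitors. The component values are illustrative assumptions, not the values of the circuit in Fig. 3.

```python
import math

def colpitts_frequency(L, C1, C2):
    """Approximate Colpitts oscillation frequency in hertz.
    L in henries, C1 and C2 in farads; the tank sees C1 and C2 in series."""
    c_series = C1 * C2 / (C1 + C2)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * c_series))

# Illustrative values: a 10 uH inductor with 100 pF and 1 nF feedback capacitors
f = colpitts_frequency(10e-6, 100e-12, 1e-9)
print(f"approximate oscillation frequency: {f / 1e6:.2f} MHz")   # about 5.3 MHz
```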
Theory One method of oscillator analysis is to determine the input impedance of an input port neglecting any reactive components. If the impedance yields a negative resistance term, oscillation is possible. This method will be used here to determine conditions of oscillation and the frequency of oscillation. An ideal model is shown to the right. This configuration models the common collector circuit in the section above. For initial analysis, parasitic elements and device non-linearities will be ignored. These terms can be included later in a more rigorous analysis. Even with these approximations, acceptable comparison with experimental results is possible. Ignoring the inductor, the input impedance at the base can be written as Z_in = v1 / i1, where v1 is the input voltage and i1 is the input current. The voltage v2 is given by v2 = i2 Z2, where Z2 is the impedance of C2. The current flowing into C2 is i2, which is the sum of two currents: i2 = i1 + i_s, where i_s is the current supplied by the transistor. i_s is a dependent current source given by i_s = g_m (v1 - v2), where g_m is the transconductance of the transistor. The input current i1 is given by i1 = (v1 - v2) / Z1, where Z1 is the impedance of C1. Solving for v2 and substituting above yields Z_in = Z1 + Z2 + g_m Z1 Z2. The input impedance appears as the two capacitors in series with the term R_in, which is proportional to the product of the two impedances: R_in = g_m Z1 Z2. If Z1 and Z2 are complex and of the same sign, then R_in will be a negative resistance. If the impedances for Z1 and Z2 are substituted, R_in is R_in = -g_m / (ω² C1 C2). If an inductor is connected to the input, then the circuit will oscillate if the magnitude of the negative resistance is greater than the resistance of the inductor and any stray elements. The frequency of oscillation is as given in the previous section. For the example oscillator above, the emitter current is roughly 1 mA. The transconductance is roughly 40 mS. Given all other values, the resulting input resistance is negative, and its magnitude should be sufficient to overcome any positive resistance in the circuit. By inspection, oscillation is more likely for larger values of transconductance and smaller values of capacitance. A more complicated analysis of the common-base oscillator reveals that the low-frequency amplifier voltage gain must be at least 4 to achieve oscillation. If the two capacitors are replaced by inductors, and magnetic coupling is ignored, the circuit becomes a Hartley oscillator. In that case, the input impedance is the sum of the two inductors and a negative resistance given by R_in = -g_m ω² L1 L2. In the Hartley circuit, oscillation is more likely for larger values of transconductance and larger values of inductance. The above analysis also describes the behavior of the Pierce oscillator. The Pierce oscillator, with two capacitors and one inductor, is equivalent to the Colpitts oscillator. Equivalence can be shown by choosing the junction of the two capacitors as the ground point. An electrical dual of the standard Pierce oscillator using two inductors and one capacitor is equivalent to the Hartley oscillator. Working Principle A Colpitts oscillator is an electronic circuit that generates a sinusoidal waveform, typically in the radio frequency range. It uses an inductor in parallel with two series-connected capacitors to form a resonant tank circuit, which determines the oscillation frequency. The output signal from the tank circuit is fed back into the input of an amplifier, where it is amplified and fed back into the tank circuit. The feedback signal provides the necessary phase shift for sustained oscillation. 
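A small numerical sketch of the oscillation condition derived above: the magnitude of the negative input resistance, g_m / (ω² C1 C2), must exceed the series resistance of the inductor and any stray losses for oscillation to start. All component values below are illustrative assumptions and are not taken from the example circuit.

```python
import math

def colpitts_negative_resistance(gm, freq_hz, C1, C2):
    """Negative resistance (ohms) presented at the input by the transistor and
    capacitive divider, using R_in = -gm / (w^2 * C1 * C2)."""
    w = 2.0 * math.pi * freq_hz
    return -gm / (w ** 2 * C1 * C2)

gm = 40e-3                 # 40 mS, roughly what 1 mA of emitter current gives
C1 = C2 = 100e-12          # assumed 100 pF feedback capacitors
freq_hz = 5e6              # assumed operating frequency of 5 MHz
coil_resistance = 2.0      # assumed series resistance of the tank inductor (ohms)

r_neg = colpitts_negative_resistance(gm, freq_hz, C1, C2)
print(f"negative input resistance: {r_neg:.0f} ohms")          # about -4050 ohms here
print("oscillation expected:", abs(r_neg) > coil_resistance)    # True for these values
```

Consistent with the remark in the text, raising the transconductance or lowering the capacitances makes the negative resistance larger in magnitude and oscillation easier to start.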
The working principle of a Colpitts oscillator can be explained as follows: When the power supply is switched on, the capacitors of the tank circuit start charging through the biasing resistors. The voltage developed across the capacitive divider is coupled to the base of the transistor through a coupling capacitor. The transistor amplifies the input signal and produces an inverted output signal at the collector. The output signal is coupled back to the tank circuit through a coupling capacitor. The tank circuit resonates at its natural frequency, which is given by f = 1 / (2π sqrt(L · C_T)), where f is the frequency of oscillation, L is the inductance of the inductor, and C_T is the total capacitance of the series combination of the two capacitors, C_T = C1 C2 / (C1 + C2). The resonant frequency depends on this total series capacitance rather than on C1 and C2 individually; the ratio of C1 to C2 instead sets the feedback fraction, and so it affects the feedback gain and the stability of the oscillator. The voltage across the inductor L is in phase with the voltage across one of the capacitors and 180 degrees out of phase with the voltage across the other. Therefore, the voltage at the junction of the two capacitors is 180 degrees out of phase with the voltage at the collector of the transistor. This voltage is fed back to the base of the transistor through the capacitive divider, providing another 180 degrees of phase shift. Thus, the total phase shift around the loop is 360 degrees, which is equivalent to zero degrees. This satisfies the Barkhausen criterion for oscillation. The amplitude of the oscillation depends on the feedback gain and the losses in the tank circuit. The feedback gain should be equal to or slightly greater than the losses for sustained oscillation. The feedback gain can be adjusted by varying the values of the two capacitors, or by using a variable capacitor in place of one of them. The Colpitts oscillator is widely used in various applications, such as RF communication systems, signal generators, and electronic testing equipment. It has better frequency stability than the Hartley oscillator, which uses a tapped inductor instead of a tapped capacitor in the tank circuit. However, the Colpitts oscillator may require a higher supply voltage and a larger coupling capacitor than the Hartley oscillator. Oscillation amplitude The amplitude of oscillation is generally difficult to predict, but it can often be accurately estimated using the describing function method. For the common-base oscillator in Figure 1, this approach applied to a simplified model predicts an output (collector) voltage amplitude that is proportional to the bias current and to the load resistance at the collector. This assumes that the transistor does not saturate, the collector current flows in narrow pulses, and that the output voltage is sinusoidal (low distortion). This approximate result also applies to oscillators employing different active devices, such as MOSFETs and vacuum tubes. References Further reading Electronic oscillators Electronic design Chaotic maps
Colpitts oscillator
[ "Mathematics", "Engineering" ]
2,140
[ "Functions and mappings", "Electronic design", "Mathematical objects", "Electronic engineering", "Mathematical relations", "Chaotic maps", "Design", "Dynamical systems" ]
887,587
https://en.wikipedia.org/wiki/Creatio%20ex%20materia
Creatio ex materia is the notion that the universe was formed out of eternal, pre-existing matter. This is in contrast to the notion of creatio ex nihilo, where the universe is created out of nothing. The idea of creatio ex materia is found in ancient near eastern cosmology, early Greek cosmology such as is in the works of Homer and Hesiod, and across the board in ancient Greek philosophy. It was also held by a few early Christians, although creatio ex nihilo was the dominant concept among such writers. After the King Follett discourse, creatio ex materia came to be accepted in Mormonism. Greek philosophers came to widely frame the notion of creatio ex materia with the philosophical dictum "nothing comes from nothing". Although it is not clear if the dictum goes back to Parmenides (5th century BC) or the Milesian philosophers, a more common version of the expression was coined by Lucretius, who stated in his De rerum natura that "nothing can be created out of nothing". Alternatives to creatio ex materia include creatio ex nihilo ("creation from nothing"); creatio ex deo ("creation from God"), referring to a derivation of the cosmos from the substance of God either partially (in panentheism) or completely (in pandeism); and creatio continua (ongoing divine creation). Greek philosophy Greek philosophers widely accepted the notion that creation acted on eternally existing, uncreated matter. Parmenides' articulation of the dictum that "nothing comes from nothing" is first attested in Aristotle's Physics. Though commonly credited to Parmenides, some historians believe that the dictum instead historically traces back to the Milesian philosophers. In any case, Parmenides believed that non-existence could neither give rise to existence (genesis), nor could something that exists cease to exist (perishing). That which does not exist has no causal powers, and therefore could not give rise to something. A typical expression of it can be found in the writings of Plutarch, which holds that the structured and formed things that exist now derive from earlier, unformed and unshaped matter. Therefore, the creation act was the process of ordering this unordered matter. The Roman poet and philosopher Lucretius expressed this principle in his first book of De rerum natura (On the Nature of Things) (1.149–214). According to his argument, if something could come from nothing, it would be commonplace to observe something coming from nothing all the time, even to witness any animal emerge fully-made or to see trees at one point bearing an apple but later producing a pear. This is because there is no prerequisite for what would come out of nothing, as prior causes or matter would have no place in limiting what comes into existence. In short, Lucretius believed that creatio ex nihilo would lead to a lack of regularity in nature. In their interaction with earlier Greek philosophers who accepted this argument/dictum, Christian authors who accepted creatio ex nihilo, like Origen, simply denied the essential premise that something cannot come from nothing, and viewed it as a presumption of a limitation of God's power; God was seen as in fact able to create something out of nothing. See also References Citations Sources External links Lucretius' De Rerum Natura, translated by William Ellery Leonard, at the Internet Classics Archive Philosophical arguments Philosophy of physics Physical cosmology Ancient Near Eastern cosmology
Creatio ex materia
[ "Physics", "Astronomy" ]
775
[ "Philosophy of physics", "Astronomical sub-disciplines", "Applied and interdisciplinary physics", "Theoretical physics", "Astrophysics", "Physical cosmology" ]
888,140
https://en.wikipedia.org/wiki/Stock%20and%20flow
Economics, business, accounting, and related fields often distinguish between quantities that are stocks and those that are flows. These differ in their units of measurement. A stock is measured at one specific time, and represents a quantity existing at that point in time (say, December 31, 2004), which may have accumulated in the past. A flow variable is measured over an interval of time. Therefore, a flow would be measured per unit of time (say a year). Flow is roughly analogous to rate or speed in this sense. For example, U.S. nominal gross domestic product refers to a total number of dollars spent over a time period, such as a year. Therefore, it is a flow variable, and has units of dollars/year. In contrast, the U.S. nominal capital stock is the total value, in dollars, of equipment, buildings, and other real productive assets in the U.S. economy, and has units of dollars. The diagram provides an intuitive illustration of how the stock of capital currently available is increased by the flow of new investment and depleted by the flow of depreciation. Stocks and flows in accounting Thus, a stock refers to the value of an asset at a balance date (or point in time), while a flow refers to the total value of transactions (sales or purchases, incomes or expenditures) during an accounting period. If the flow value of an economic activity is divided by the average stock value during an accounting period, we obtain a measure of the number of turnovers (or rotations) of a stock in that accounting period. Some accounting entries are normally always represented as a flow (e.g. profit or income), while others may be represented both as a stock or as a flow (e.g. capital). A person or country might have stocks of money, financial assets, liabilities, wealth, real means of production, capital, inventories, and human capital (or labor power). Flow magnitudes include income, spending, saving, debt repayment, fixed investment, inventory investment, and labor utilization. These differ in their units of measurement. Capital is a stock concept which yields a periodic income which is a flow concept. Comparing stocks and flows Stocks and flows have different units and are thus not commensurable – they cannot be meaningfully compared, equated, added, or subtracted. However, one may meaningfully take ratios of stocks and flows, or multiply or divide them. This is a point of some confusion for some economics students, as some confuse taking ratios (valid) with comparing (invalid). The ratio of a stock over a flow has units of (units)/(units/time) = time. For example, the debt to GDP ratio has units of years (as GDP is measured in, for example, dollars per year whereas debt is measured in dollars), which yields the interpretation of the debt to GDP ratio as "number of years to pay off all debt, assuming all GDP devoted to debt repayment". The ratio of a flow to a stock has units 1/time. For example, the velocity of money is defined as nominal GDP / nominal money supply; it has units of (dollars / year) / dollars = 1/year. In discrete time, the change in a stock variable from one point in time to another point in time one time unit later (the first difference of the stock) is equal to the corresponding flow variable per unit of time. For example, if a country's stock of physical capital on January 1, 2010 is 20 machines and on January 1, 2011 is 23 machines, then the flow of net investment during 2010 was 3 machines per year. 
If it then has 27 machines on January 1, 2012, the flow of net investment during 2010 and 2011 averaged 3.5 machines per year. In continuous time, the time derivative of a stock variable is a flow variable. More general uses Stocks and flows also have natural meanings in many contexts outside of economics, business and related fields. The concepts apply to many conserved quantities such as energy, and to materials such as in stoichiometry, water reservoir management, and greenhouse gases and other durable pollutants that accumulate in the environment or in organisms. Climate change mitigation, for example, is a fairly straightforward stock and flow problem with the primary goal of reducing the stock (the concentration of durable greenhouse gases in the atmosphere) by manipulating the flows (reducing inflows such as greenhouse gas emissions into the atmosphere, and increasing outflows such as carbon dioxide removal). In living systems, such as the human body, energy homeostasis describes the linear relationship between flows (the food we eat and the energy we expend along with the wastes we excrete) and the stock (manifesting as our gain or loss of body weight over time). In Earth system science, many stock and flow problems arise, such as in the carbon cycle, the nitrogen cycle, the water cycle, and Earth's energy budget. Thus stocks and flows are the basic building blocks of system dynamics models. Jay Forrester originally referred to them as "levels" rather than stocks, together with "rates" or "rates of flow". A stock (or "level variable") in this broader sense is some entity that is accumulated over time by inflows and/or depleted by outflows. Stocks can only be changed via flows. Mathematically a stock can be seen as an accumulation or integration of flows over time – with outflows subtracting from the stock. Stocks typically have a certain value at each moment of time – e.g. the size of a population at a certain moment, or the quantity of water in a reservoir. A flow (or "rate") changes a stock over time. Usually we can clearly distinguish inflows (adding to the stock) and outflows (subtracting from the stock). Flows typically are measured over a certain interval of time – e.g., the number of births over a day or month. Synonyms Examples Accounting, finance, etc. Other contexts Calculus interpretation If the quantity of some stock variable at time t is K(t), then the derivative dK(t)/dt is the flow of changes in the stock. Likewise, the stock at some time t is the integral of the flow from some moment set as zero until time t. For example, if the capital stock K(t) is increased gradually over time by a flow of gross investment I(t) and decreased gradually over time by a flow of depreciation D(t), then the instantaneous rate of change in the capital stock is given by dK(t)/dt = I(t) - D(t) = N(t), where the notation N(t) refers to the flow of net investment, which is the difference between gross investment and depreciation. Example of dynamic stock and flow diagram (diagram not shown): two stocks are linked by a flow, and the equations that change the two stocks via the flow are executed in order at each time step, from time = 1 to 36. History The distinction between a stock and a flow variable is elementary, and dates back centuries in accounting practice (distinction between an asset and income, for instance). In economics, the distinction was formalized and the terms were set by Irving Fisher, who formalized capital (as a stock). 
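The accumulation rule just described (the first difference of a stock equals the net flow per period) can be written as a one-line update. The sketch below is illustrative: the starting stock of 20 machines matches the example above, while the investment and depreciation figures are assumptions chosen to show the stock approaching a steady state.

```python
def simulate_capital(initial_stock, gross_investment, depreciation_rate, periods):
    """Stock-and-flow accumulation: each period the stock rises by the inflow
    (gross investment) and falls by the outflow (depreciation)."""
    stock = initial_stock
    history = [stock]
    for _ in range(periods):
        inflow = gross_investment
        outflow = depreciation_rate * stock        # depreciation is a flow out of the stock
        stock = stock + inflow - outflow           # first difference = net investment flow
        history.append(stock)
    return history

path = simulate_capital(initial_stock=20.0, gross_investment=5.0,
                        depreciation_rate=0.10, periods=5)
print([round(k, 2) for k in path])
# [20.0, 23.0, 25.7, 28.13, 30.32, 32.29] -> the stock heads toward 5.0 / 0.10 = 50
```

Note that the stock has units of machines while both flows have units of machines per period, which is exactly the stock/flow distinction drawn in the text.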
Polish economist Michał Kalecki emphasized the centrality of the distinction of stocks and flows, caustically calling economics "the science of confusing stocks with flows" in his critique of the quantity theory of money (circa 1936, frequently quoted by Joan Robinson). See also Flow (disambiguation) Intensive and extensive properties Stock (disambiguation) Stock-Flow consistent model Systems thinking System dynamics Wealth (economics) References D.W. Bushaw and R.W. Clower, 1957. Introduction to Mathematical Economics, Ch. 3–6. Robert W. Clower, 1954a. "An Investigation into the Dynamics of Investment," American Economic Review, 44(1), pp. 64–81. _, 1954b. "Price Determination in a Stock-Flow Economy" with D. W. Bushaw, Econometrica, 22(3), pp. 328–343. _, 1968. "Stock-flow analysis," International Encyclopedia of the Social Sciences, v. 12. Glenn W. Harrison, 1987 [2008]. "Stocks and flows," The New Palgrave: A Dictionary of Economics, v. 4, pp. 506–09. Inventory Accounting systems Systems theory Economics and time
Stock and flow
[ "Physics", "Technology" ]
1,740
[ "Physical quantities", "Time", "Accounting systems", "Information systems", "Economics and time", "Spacetime" ]
889,183
https://en.wikipedia.org/wiki/Inclinometer
An inclinometer or clinometer is an instrument used for measuring angles of slope, elevation, or depression of an object with respect to gravity's direction. It is also known as a tilt indicator, tilt sensor, tilt meter, slope alert, slope gauge, gradient meter, gradiometer, level gauge, level meter, declinometer, and pitch & roll indicator. Clinometers measure both inclines and declines using three different units of measure: degrees, percentage points, and topos. The astrolabe is an example of an inclinometer that was used for celestial navigation and location of astronomical objects from ancient times to the Renaissance. A tilt sensor can measure the tilt of a reference plane, often in two axes. In contrast, full motion sensing would use at least three axes and often additional sensors. One way to measure tilt angle with reference to the earth's ground plane is to use an accelerometer. Typical applications can be found in industry and in game controllers. In aircraft, the "ball" in turn coordinators or turn and bank indicators is sometimes referred to as an inclinometer. History Inclinometers include examples such as Well's in-clinometer, the essential parts of which are a flat side, or base, on which it stands, and a hollow disc just half filled with some heavy liquid. The glass face of the disc is surrounded by a graduated scale that marks the angle at which the surface of the liquid stands, with reference to the flat base. The zero line is parallel to the base, and when the liquid stands on that line, the flat side is horizontal; the 90-degree line is perpendicular to the base, and when the liquid stands on that line, the flat side is perpendicular or plumb. Intervening angles are marked, and, with the aid of simple conversion tables, the instrument indicates the rate of fall per set distance of horizontal measurement, and per set distance of the sloping line. Al-Biruni, a Persian polymath, once wanted to measure the height of the sun. He lacked the necessary equipment to measure this height. He was forced to create a calibrated arc on the back of a counting board, which he then used as a makeshift quadrant with the help of a plumb line. He determined the location's latitude using the measurements taken with this rudimentary tool. This quadrant was most likely an inclinometer based on the quarter-circle panel. The Abney level is a handheld surveying instrument developed in the 1870s that includes a sighting tube and inclinometer, arranged so that the surveyor may align the sighting tube (and its crosshair) with the reflection of the bubble in the spirit level of the inclinometer when the line of sight is at the angle set on the inclinometer. One of the more famous inclinometer installations was on the panel of the Ryan NYP "The Spirit of St. Louis"—in 1927 Charles Lindbergh chose the lightweight Rieker Inc P-1057 Degree Inclinometer to give him climb and descent angle information. Uses Hand-held clinometers are used for a variety of surveying and measurement tasks. In land surveying and mapping, a clinometer can provide a rapid measurement of the slope of a geographic feature, or be used for cave surveying. In prospecting for minerals, clinometers are used to measure the strike and dip of geologic formations. In forestry, tree height measurement can be done with a clinometer using standardized methods including triangulation. Major artillery guns may have an associated clinometer used to facilitate aiming of shells over long distances. 
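Two of the measurements mentioned above come down to simple trigonometry: deriving a tilt angle from a static accelerometer reading, and turning a clinometer elevation angle plus a measured distance into a tree height. The sketch below illustrates both; all numbers are made up for illustration, and the simple tree-height formula assumes level ground and a sighting taken from eye height.

```python
import math

def tilt_from_accelerometer(ax, az):
    """Single-axis tilt in degrees from the gravity vector measured by a
    stationary accelerometer (ax along the sensing axis, az vertical)."""
    return math.degrees(math.atan2(ax, az))

def tree_height(distance_m, elevation_deg, eye_height_m=1.6):
    """Tree height from a clinometer sighting of the treetop at a known
    horizontal distance, adding the observer's eye height."""
    return distance_m * math.tan(math.radians(elevation_deg)) + eye_height_m

print(f"tilt angle:  {tilt_from_accelerometer(0.26, 0.97):.1f} degrees")   # about 15 degrees
print(f"tree height: {tree_height(20.0, 35.0):.1f} m")                     # about 15.6 m
```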
Permanently-installed tiltmeters are emplaced at major earthworks such as dams to monitor the long-term stability of the structure. Factors which influence the use of inclinometers (overall accuracy varies depending on the type of tilt sensor (or inclinometer) and technology used): gravity; temperature (drift); zero offset; linearity; vibration; shock; cross-axis sensitivity; acceleration/deceleration. A clear line of sight between the user and the measured point is needed. A well-defined object is required to obtain the maximum precision. The angle measurement precision and accuracy are limited to slightly better than one arcsec. Accuracy Certain highly sensitive electronic inclinometer sensors can achieve an output resolution of 0.0001°; depending on the technology and angle range, it may be limited to 0.001°. An inclinometer sensor's true or absolute accuracy (which is the combined total error), however, is a combination of initial sets of sensor zero offset and sensitivity, sensor linearity, hysteresis, repeatability, and the temperature drifts of zero and sensitivity—electronic inclinometer accuracy can typically range from ±0.01–2° depending on the sensor and situation. Typically in room ambient conditions the accuracy is limited to the sensor linearity specification. Sensor technology Tilt sensors and inclinometers generate an artificial horizon and measure angular tilt with respect to this horizon. They are used in cameras, aircraft flight controls, automobile security systems, and speciality switches and are also used for platform leveling, boom angle indication, and in other applications requiring measurement of tilt. Important specifications to consider for tilt sensors and inclinometers are the tilt angle range and the number of axes. The axes are usually, but not always, orthogonal. The tilt angle range is the range of desired linear output. Common implementations of tilt sensors and inclinometers are accelerometer, liquid capacitive, electrolytic, gas bubble in liquid, and pendulum. Tilt sensor technology has also been implemented in video games. Yoshi's Universal Gravitation and Kirby Tilt 'n' Tumble are both built around a tilt sensor mechanism, which is built into the cartridge. The PlayStation 3 and Wii game controllers also use tilt as a means to play video games. Inclinometers are also used in civil engineering, for example, to measure the inclination of land to be built upon. Some inclinometers provide an electronic interface based on CAN (Controller Area Network). In addition, those inclinometers may support the standardized CANopen profile (CiA 410). In this case, these inclinometers are compatible and partly interchangeable. Two-axis digital inclinometer Traditional spirit levels and pendulum-based electronic leveling instruments are usually constrained to only a single axis and a narrow tilt measurement range. However, most precision leveling, angle measurement, alignment and surface flatness profiling tasks essentially involve a two-dimensional surface plane angle rather than two independent orthogonal single-axis objects. Two-axis inclinometers that are built with MEMS tilt sensors provide simultaneous two-dimensional angle readings of a surface plane tangent to the earth datum. Typical advantages of using two-axis MEMS inclinometers over conventional single-axis "bubble" or mechanical leveling instruments may include: Simultaneous measurement of two-dimensional (X-Y plane) tilt angles (i.e. 
pitch & roll), can eliminate tedious swapping back-and-forth experienced when using a single-axis level, for example to adjust machine footings to attain a precise leveling position. Digital compensation and precise calibration for non-linearity, for example for operating temperature variation, resulting in higher accuracy over a wider measurement range. The accelerometer sensors may generate numerical data in the form of vibration profiles to enable a machine installer to track and assess alignment quality in real-time and verify a structure's positional stability by comparing leveling profiles before and after it is set up. Inclinometer with gyroscope As inclinometers measure the angle of an object with respect to the force of gravity, external accelerations like rapid motions, vibrations or shocks will introduce errors in the tilt measurements. To overcome this problem, it is possible to use a gyroscope in addition to an accelerometer. Any of the abovementioned accelerations has a huge impact on the accelerometer, but a limited effect on the measured rotation rates of the gyroscope. An algorithm can combine both signals to get the best value out of each sensor. This way it is possible to separate the actual tilt angle from the errors introduced by external accelerations. Applications Inclinometers are used for: Determining latitude using Polaris (in the Northern Hemisphere) or the two stars of the constellation Crux (in the Southern Hemisphere). Determining the angle of the Earth's magnetic field with respect to the horizontal plane. Showing a deviation from the true vertical or horizontal. Surveying, to measure an angle of inclination or elevation. Alerting an equipment operator that it may tip over. Measuring angles of elevation, slope, or incline, e.g. of an embankment. Measuring slight differences in slopes, particularly for geophysics. Such inclinometers are, for instance, used for monitoring volcanoes, or for measuring the depth and rate of landslide movement. Measuring movements in walls or the ground in civil engineering projects. Determining the dip of beds or strata, or the slope of an embankment or cutting; a kind of plumb level. Some automotive safety systems. Indicating pitch and roll of vehicles, nautical craft, and aircraft. See turn coordinator and slip indicator. Monitoring the boom angle of cranes and material handlers. Measuring the "look angle" of a satellite antenna towards a satellite. Adjusting a solar panel to the optimal angle to maximize its output. Measuring the slope angle of a tape or chain during distance measurement. Measuring the height of a building, tree, or other feature using a vertical angle and a distance (determined by taping or pacing), using trigonometry. Measuring the angle of drilling in well logging. Measuring the list of a ship in still water and the roll in rough water. Measuring the steepness of a ski slope. Measuring the orientation of planes and lineations in rocks, in combination with a compass, in structural geology. Measuring range of motion in the joints of the body. Measuring the inclination angle of the pelvis. Numerous neck and back measurements require the simultaneous use of two inclinometers. Measuring the angle of elevation of, and ultimately computing the altitude of, many things otherwise inaccessible for direct measurement. Measuring and fine tuning the angle of line array speaker hangs. Confirmation of the angle achieved via use of a laser built into the remote inclinometer. 
Setting the correct orientation of solar panels during installation. Setting the firing angle of a cannon or gun (which determines projectile range). Electronic games. Helping prevent unsafe working conditions. The USDA Forest Service uses tilt sensors (or inclinometers) to measure tree height in its Forest Inventory and Analysis program. Logistics and transport also use tilt indicators; these are single-use devices attached to the products during the shipping process. Games Nintendo used tilt sensor technology in five games for its Game Boy series of hand-held game systems. The tilt sensor allows players to control aspects of the game by twisting the game system. Games that use this feature: Yoshi's Universal Gravitation (Game Boy Advance) WarioWare: Twisted! (Game Boy Advance) (not released in Europe) Koro Koro Puzzle Happy Panechu! (Game Boy Advance) (Japan only) Kirby Tilt 'n' Tumble (Game Boy Color) (not released in Europe) Command Master (Game Boy Color) (Japan only) Tilt sensors can also be found in game controllers such as the Microsoft Sidewinder Freestyle Pro and Sony's PlayStation 3 controller. However, unlike these other controllers in which the tilt sensor serves as a supplement to normal control methods, it serves as one of the central features of Nintendo's Wii Remote and the Nunchuk attachment. Along with accelerometers, the tilt sensors are a primary method of control in most Wii games. Tilt sensing is now used in many different kinds of games, not just motocross and flight simulators; it can be used for sports games, first-person shooters, and other novel uses such as in WarioWare: Smooth Moves. Another example is a virtual version of a wooden maze with obstacles in which you have to maneuver a ball by tilting the maze. A homebrew tilt sensor interface was made for the Palm (PDA). See also Grade (slope) Goniometer Hypsometer Liquid capacitive inclinometers Theodolite Tiltmeter References External links Inclinometer Blog – General Inclinometer Information MEMS inclinometer and technical specifications Inclinometers Dimensional instruments Forestry tools Surveying instruments
Inclinometer
[ "Physics", "Mathematics" ]
2,578
[ "Quantity", "Dimensional instruments", "Physical quantities", "Size" ]
889,722
https://en.wikipedia.org/wiki/Coronagraph
A coronagraph is a telescopic attachment designed to block out the direct light from a star or other bright object so that nearby objects – which otherwise would be hidden in the object's bright glare – can be resolved. Most coronagraphs are intended to view the corona of the Sun, but a new class of conceptually similar instruments (called stellar coronagraphs to distinguish them from solar coronagraphs) are being used to find extrasolar planets and circumstellar disks around nearby stars as well as host galaxies in quasars and other similar objects with active galactic nuclei (AGN). Invention The coronagraph was introduced in 1931 by the French astronomer Bernard Lyot; since then, coronagraphs have been used at many solar observatories. Coronagraphs operating within Earth's atmosphere suffer from scattered light in the sky itself, due primarily to Rayleigh scattering of sunlight in the upper atmosphere. At view angles close to the Sun, the sky is much brighter than the background corona even at high altitude sites on clear, dry days. Ground-based coronagraphs, such as the High Altitude Observatory's Mark IV Coronagraph on top of Mauna Loa, use polarization to distinguish sky brightness from the image of the corona: both coronal light and sky brightness are scattered sunlight and have similar spectral properties, but the coronal light is Thomson-scattered at nearly a right angle and therefore undergoes scattering polarization, while the superimposed light from the sky near the Sun is scattered at only a glancing angle and hence remains nearly unpolarized. Design Coronagraph instruments are extreme examples of stray light rejection and precise photometry because the total brightness from the solar corona is less than one-millionth the brightness of the Sun. The apparent surface brightness is even fainter because, in addition to delivering less total light, the corona has a much greater apparent size than the Sun itself. During a total solar eclipse, the Moon acts as an occluding disk and any camera in the eclipse path may be operated as a coronagraph until the eclipse is over. More common is an arrangement where the sky is imaged onto an intermediate focal plane containing an opaque spot; this focal plane is reimaged onto a detector. Another arrangement is to image the sky onto a mirror with a small hole: the desired light is reflected and eventually reimaged, but the unwanted light from the star goes through the hole and does not reach the detector. Either way, the instrument design must take into account scattering and diffraction to make sure that as little unwanted light as possible reaches the final detector. Lyot's key invention was an arrangement of lenses with stops, known as Lyot stops, and baffles such that light scattered by diffraction was focused on the stops and baffles, where it could be absorbed, while light needed for a useful image missed them. As examples, imaging instruments on the Hubble Space Telescope and James Webb Space Telescope offer coronagraphic capability. Band-limited coronagraph A band-limited coronagraph uses a special kind of mask called a band-limited mask. This mask is designed to block light and also manage diffraction effects caused by removal of the light. The band-limited coronagraph has served as the baseline design for the canceled Terrestrial Planet Finder coronagraph. Band-limited masks will also be available on the James Webb Space Telescope. 
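A rough sense of the photometric challenge described in the design discussion above: the corona delivers less than one-millionth of the Sun's total light, and that light is spread over a larger patch of sky than the solar disk, so its surface brightness is lower still. The outer extent assumed below (3 solar radii) is an illustrative assumption, not a figure from the article.

```python
# Mean surface brightness of the corona relative to the solar disk,
# assuming the observed corona extends from the limb out to 3 solar radii.
total_brightness_ratio = 1e-6                         # corona / Sun, integrated light (from the text)
outer_extent_solar_radii = 3.0                        # assumed outer edge of the observed field

disk_area = 1.0                                       # solar disk area, in its own units
corona_area = outer_extent_solar_radii ** 2 - 1.0     # annulus from 1 to 3 solar radii

surface_brightness_ratio = total_brightness_ratio * disk_area / corona_area
print(f"mean corona surface brightness: {surface_brightness_ratio:.1e} of the disk")
# about 1.3e-7: fainter per unit area than the already tiny integrated ratio
```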
Phase-mask coronagraph A phase-mask coronagraph (such as the so-called four-quadrant phase-mask coronagraph) uses a transparent mask to shift the phase of the stellar light in order to create a self-destructive interference, rather than a simple opaque disc to block it. Optical vortex coronagraph An optical vortex coronagraph uses a phase-mask in which the phase shift varies azimuthally around the center. Several varieties of optical vortex coronagraphs exist: the scalar optical vortex coronagraph based on a phase ramp directly etched in a dielectric material, like fused silica. the vector(ial) vortex coronagraph employs a mask that rotates the angle of polarization of photons, and ramping this angle of rotation has the same effect as ramping a phase-shift. A mask of this kind can be synthesized by various technologies, ranging from liquid crystal polymer (same technology as in 3D television), and micro-structured surfaces (using microfabrication technologies from the microelectronics industry). Such a vector vortex coronagraph made out of liquid crystal polymers is currently in use at the 200-inch Hale Telescope at the Palomar Observatory. It has recently been operated with adaptive optics to image extrasolar planets. This works with stars other than the sun because they are so far away their light is, for this purpose, a spatially coherent plane wave. The coronagraph using interference masks out the light along the center axis of the telescope, but allows the light from off axis objects through. Satellite-based coronagraphs Coronagraphs in outer space are much more effective than the same instruments would be if located on the ground. This is because the complete absence of atmospheric scattering eliminates the largest source of glare present in a terrestrial coronagraph. Several space missions such as NASA-ESA's SOHO, and NASA's SPARTAN, Solar Maximum Mission, and Skylab have used coronagraphs to study the outer reaches of the solar corona. The Hubble Space Telescope (HST) is able to perform coronagraphy using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS), and the James Webb Space Telescope (JWST) is able to perform coronagraphy using the Near Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI). While space-based coronagraphs such as LASCO avoid the sky brightness problem, they face design challenges in stray light management under the stringent size and weight requirements of space flight. Any sharp edge (such as the edge of an occulting disk or optical aperture) causes Fresnel diffraction of incoming light around the edge, which means that the smaller instruments that one would want on a satellite unavoidably leak more light than larger ones would. The LASCO C-3 coronagraph uses both an external occulter (which casts shadow on the instrument) and an internal occulter (which blocks stray light that is Fresnel-diffracted around the external occulter) to reduce this leakage, and a complicated system of baffles to eliminate stray light scattering off the internal surfaces of the instrument itself. Aditya-L1 Aditya-L1 is a coronagraphy spacecraft developed by the Indian Space Research Organisation (ISRO) and various Indian research institutes. The spacecraft aims to study the solar atmosphere and its impact on the Earth's environment. It will be positioned approximately 1.5 million km from Earth in a halo orbit around the L1 Lagrangian point between Earth and the Sun. 
The primary payload, Visible Emission Line Coronagraph (VELC), will send 1,440 images of the Sun daily to ground stations. The VELC payload has been developed by the Indian Institute of Astrophysics (IIA) and will continuously observe the Sun's corona from the L1 point. Extrasolar planets The coronagraph has recently been adapted to the challenging task of finding planets around nearby stars. While stellar and solar coronagraphs are similar in concept, they are quite different in practice because the object to be occulted differs by a factor of a million in linear apparent size. (The Sun has an apparent size of about 1900 arcseconds, while a typical nearby star might have an apparent size of between 0.0005 and 0.002 arcseconds.) Earth-like exoplanet detection requires extremely high contrast between the planet and its host star, and achieving such contrast requires extreme optothermal stability. A stellar coronagraph concept was studied for flight on the canceled Terrestrial Planet Finder mission. On ground-based telescopes, a stellar coronagraph can be combined with adaptive optics to search for planets around nearby stars. In November 2008, NASA announced that a planet had been directly observed orbiting the nearby star Fomalhaut. The planet could be seen clearly in images taken by Hubble's Advanced Camera for Surveys' coronagraph in 2004 and 2006. The dark area hidden by the coronagraph mask can be seen in the images, though a bright dot has been added to show where the star would have been. Until 2010, telescopes could only directly image exoplanets under exceptional circumstances. Specifically, it is easier to obtain images when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot, so that it emits intense infrared radiation. However, in 2010 a team from NASA's Jet Propulsion Laboratory demonstrated that a vector vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets using just a portion of the Hale Telescope. See also List of solar telescopes New Worlds Mission – A proposed external coronagraph PROBA-3 – a coronagraphy demonstration mission using high-precision formation flying References External links Overview of Technologies for Direct Optical Imaging of Exoplanets, Marie Levine, Rémi Soummer, 2009 "Sun Gazer's Telescope." Popular Mechanics, February 1952, pp. 140–141. Cut-away drawing of first Coronagraph type used in 1952. Optical Vectorial Vortex Coronagraphs using Liquid Crystal Polymers: theory, manufacturing and laboratory demonstration Optics Infobase The Vector Vortex Coronagraph: Laboratory Results and First Light at Palomar Observatory IopScience Annular Groove Phase Mask Coronagraph IopScience This link shows an HST image of a dust disk surrounding a bright star with the star hidden by the coronagraph. Optical telescope components Optical devices French inventions
Coronagraph
[ "Materials_science", "Technology", "Engineering" ]
1,993
[ "Glass engineering and science", "Optical telescope components", "Optical devices", "Components" ]
889,856
https://en.wikipedia.org/wiki/Superheated%20steam
Superheated steam is steam at a temperature higher than its vaporization point at the absolute pressure where the temperature is measured. Superheated steam can therefore cool (lose internal energy) by some amount, resulting in a lowering of its temperature without changing state (i.e., condensing) from a gas to a mixture of saturated vapor and liquid. If unsaturated steam (a mixture which contains both water vapor and liquid water droplets) is heated at constant pressure, its temperature will also remain constant as the vapor quality (think dryness, or percent saturated vapor) increases towards 100%, and becomes dry (i.e., no saturated liquid) saturated steam. Continued heat input will then "super" heat the dry saturated steam. This will occur if saturated steam contacts a surface with a higher temperature. Superheated steam and liquid water cannot coexist under thermodynamic equilibrium, as any additional heat simply evaporates more water and the steam will become saturated steam. However, this restriction may be violated temporarily in dynamic (non-equilibrium) situations. To produce superheated steam in a power plant or for processes (such as drying paper) the saturated steam drawn from a boiler is passed through a separate heating device (a superheater) which transfers additional heat to the steam by contact or by radiation. Superheated steam is not suitable for sterilization. This is because the superheated steam is dry. Dry steam must reach much higher temperatures and the materials exposed for a longer time period to have the same effectiveness; or equal F0 kill value. Superheated steam is also not useful for heating; while it has more energy and can do more work than saturated steam, its heat content is much less useful. This is because superheated steam has the same heat transfer coefficient of air, making it an insulator - a poor conductor of heat. Saturated steam has a much higher wall heat transfer coefficient. Slightly superheated steam may be used for antimicrobial disinfection of biofilms on hard surfaces. Superheated steam's greatest value lies in its tremendous internal energy that can be used for kinetic reaction through mechanical expansion against turbine blades and reciprocating pistons, that produces rotary motion of a shaft. The value of superheated steam in these applications is its ability to release tremendous quantities of internal energy yet remain above the condensation temperature of water vapor; at the pressures at which reaction turbines and reciprocating piston engines operate. Of prime importance in these applications is the fact that water vapor containing entrained liquid droplets is generally incompressible at those pressures. In a reciprocating engine or turbine, if steam doing work cools to a temperature at which liquid droplets form, then the water droplets entrained in the fluid flow will strike the mechanical parts with enough force to bend, crack or fracture them. Superheating and pressure reduction through expansion ensures that the steam flow remains as a compressible gas throughout its passage through a turbine or an engine, preventing damage of the internal moving parts. Saturated steam Saturated steam is steam that is in equilibrium with heated water at the same pressure, i.e., it has not been heated above the boiling point for its pressure. This is in contrast to superheated steam, in which the steam (vapor) has been separated from the water droplets then additional heat has been added. 
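The distinction drawn above between saturated and superheated steam comes down to comparing the actual steam temperature with the saturation temperature at the prevailing pressure. The short sketch below makes that comparison concrete; it is an illustrative calculation only, and the Antoine-equation constants used to estimate the saturation temperature of water (roughly valid between 1 °C and 100 °C) are an assumption taken from commonly quoted tables rather than from this article, as are the function names.

import math

# Commonly quoted Antoine constants for water (P in mmHg, T in degrees C);
# assumed here for illustration, roughly valid from 1 to 100 degrees C.
A, B, C = 8.07131, 1730.63, 233.426

def saturation_temperature_c(pressure_mmhg):
    """Invert the Antoine equation log10(P) = A - B / (C + T) for T."""
    return B / (A - math.log10(pressure_mmhg)) - C

def degree_of_superheat(steam_temp_c, pressure_mmhg):
    """Positive result -> superheated steam; zero -> dry saturated steam."""
    return steam_temp_c - saturation_temperature_c(pressure_mmhg)

# Example: steam at 130 degrees C and atmospheric pressure (760 mmHg).
# The saturation temperature is about 100 degrees C, so this prints ~30 degrees of superheat.
print(degree_of_superheat(130.0, 760.0))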
These condensation droplets are a cause of damage to steam turbine blades, the reason why such turbines rely on a supply of dry, superheated steam. Dry steam is saturated steam that has been very slightly superheated. This is not sufficient to change its energy appreciably, but is a sufficient rise in temperature to avoid condensation problems, given the average loss in temperature across the steam supply circuit. Towards the end of the 19th century, when superheating was still a less-than-certain technology, such steam-drying gave the condensation-avoiding benefits of superheating without requiring the sophisticated boiler or lubrication techniques of full superheating. By contrast, water vapor that includes water droplets is described as wet steam. If wet steam is heated further, the droplets evaporate, and at a high enough temperature (which depends on the pressure) all of the water evaporates, the system is in vapor–liquid equilibrium, and it becomes saturated steam. Saturated steam is advantageous in heat transfer due to the high latent heat of vaporization. It is a very efficient mode of heat transfer. In layman's terms, saturated steam is at its dew point at the corresponding temperature and pressure. The typical latent heat of vaporization (or condensation) is for saturated steam at atmospheric pressure. Uses Steam engine Superheated steam was widely used in main line steam locomotives. Saturated steam has three main disadvantages in a steam engine: it contains small droplets of water which have to be periodically drained from the cylinders; being precisely at the boiling point of water for the boiler pressure in use, it inevitably condenses to some extent in the steam pipes and cylinders outside the boiler, causing a disproportionate loss of steam volume as it does so; and it places a heavy demand on the boiler. Superheating the steam dries it effectively, raises its temperature to a point where condensation is much less likely and increases its volume significantly. Added together, these factors increase the power and economy of the locomotive. The main disadvantages are the added complexity and cost of the superheater tubing and the adverse effect that the "dry" steam has on lubrication of moving components such as the steam valves. Shunting locomotives did not generally use superheating. The normal arrangement involved taking steam after the regulator valve and passing it through long superheater tubes inside specially large firetubes of the boiler. The superheater tubes had a reverse ("torpedo") bend at the firebox end so that the steam had to pass the length of the boiler at least twice, picking up heat as it did so. Processing Other potential uses of superheated steam include: drying, cleaning, layering, reaction engineering, epoxy drying and film use where saturated to highly superheated steam is required at one atmospheric pressure or at high pressure. Ideal for steam drying, steam oxidation and chemical processing. Uses are in surface technologies, cleaning technologies, steam drying, catalysis, chemical reaction processing, surface drying technologies, curing technologies, energy systems and nanotechnologies. The application of superheated steam for sanitation of dry food processing plant environment has been reported. Superheated steam is not usually used in a heat exchanger due to low heat transfer co-efficient. In refining and hydrocarbon industries superheated steam is mainly used for stripping and cleaning purposes. 
Pest control Steam has been used for soil steaming since the 1890s. Steam is induced into the soil which causes almost all organic material to deteriorate (the term "sterilization" is used, but it is not strictly correct since all micro-organism are not necessarily killed). Soil steaming is an effective alternative to many chemicals in agriculture, and is used widely by greenhouse growers. Wet steam is primarily used in this process, but if soil temperatures above the boiling point of water are required, superheated steam must be used. See also Superheated water References Water Steam power ja:水蒸気#飽和蒸気と過熱蒸気
Superheated steam
[ "Physics", "Environmental_science" ]
1,529
[ "Hydrology", "Physical quantities", "Steam power", "Power (physics)", "Water" ]
20,159,695
https://en.wikipedia.org/wiki/SIC-POVM
In the context of quantum mechanics and quantum information theory, symmetric, informationally complete, positive operator-valued measures (SIC-POVMs) are a particular type of generalized measurement (POVM). SIC-POVMs are particularly notable thanks to their defining features of (1) being informationally complete; (2) having the minimal number of outcomes compatible with informational completeness, and (3) being highly symmetric. In this context, informational completeness is the property of a POVM of allowing to fully reconstruct input states from measurement data. The properties of SIC-POVMs make them an interesting candidate for a "standard quantum measurement", utilized in the study of foundational quantum mechanics, most notably in QBism. SIC-POVMs have several applications in the context of quantum state tomography and quantum cryptography, and a possible connection has been discovered with Hilbert's twelfth problem. Definition A POVM over a -dimensional Hilbert space is a set of positive-semidefinite operators that sum to the identity: If a POVM consists of at least operators which span the space of self-adjoint operators , it is said to be an informationally complete POVM (IC-POVM). IC-POVMs consisting of exactly elements are called minimal. A set of rank-1 projectors which have equal pairwise Hilbert–Schmidt inner products, defines a minimal IC-POVM with elements called a SIC-POVM. Properties Symmetry Consider an arbitrary set of rank-1 projectors such that is a POVM, and thus . Asking the projectors to have equal pairwise inner products, for all , fixes the value of . To see this, observe that implies that . Thus, This property is what makes SIC-POVMs symmetric: Any pair of elements has the same Hilbert–Schmidt inner product as any other pair. Superoperator In using the SIC-POVM elements, an interesting superoperator can be constructed, the likes of which map . This operator is most useful in considering the relation of SIC-POVMs with spherical t-designs. Consider the map This operator acts on a SIC-POVM element in a way very similar to identity, in that But since elements of a SIC-POVM can completely and uniquely determine any quantum state, this linear operator can be applied to the decomposition of any state, resulting in the ability to write the following: where From here, the left inverse can be calculated to be , and so with the knowledge that , an expression for a state can be created in terms of a quasi-probability distribution, as follows: where is the Dirac notation for the density operator viewed in the Hilbert space . This shows that the appropriate quasi-probability distribution (termed as such because it may yield negative results) representation of the state is given by Finding SIC sets Simplest example For the equations that define the SIC-POVM can be solved by hand, yielding the vectors which form the vertices of a regular tetrahedron in the Bloch sphere. The projectors that define the SIC-POVM are given by , and the elements of the SIC-POVM are thus . For higher dimensions this is not feasible, necessitating the use of a more sophisticated approach. Group covariance General group covariance A SIC-POVM is said to be group covariant if there exists a group with a -dimensional unitary representation such that The search for SIC-POVMs can be greatly simplified by exploiting the property of group covariance. Indeed, the problem is reduced to finding a normalized fiducial vector such that . The SIC-POVM is then the set generated by the group action of on . 
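For concreteness, the defining conditions sketched above can be written out explicitly. The formulation below is the standard textbook one rather than a quotation from this article, so the notation (d for the Hilbert-space dimension, \Pi_i for the projectors) is an assumed choice. A SIC-POVM in dimension d consists of d^2 rank-one projectors \Pi_i = |\psi_i\rangle\langle\psi_i| satisfying
\[
  |\langle \psi_i | \psi_j \rangle|^2 \;=\; \frac{d\,\delta_{ij} + 1}{d+1},
  \qquad
  \sum_{i=1}^{d^2} \frac{1}{d}\,\Pi_i \;=\; I ,
\]
so the POVM elements are E_i = \Pi_i / d and every distinct pair of projectors has the same Hilbert–Schmidt overlap \operatorname{Tr}(\Pi_i \Pi_j) = 1/(d+1).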
The case of Zd × Zd So far, most SIC-POVM's have been found by considering group covariance under . To construct the unitary representation, we map to , the group of unitary operators on d-dimensions. Several operators must first be introduced. Let be a basis for , then the phase operator is where is a root of unity and the shift operator as Combining these two operators yields the Weyl operator which generates the Heisenberg-Weyl group. This is a unitary operator since It can be checked that the mapping is a projective unitary representation. It also satisfies all of the properties for group covariance, and is useful for numerical calculation of SIC sets. Zauner's conjecture Given some of the useful properties of SIC-POVMs, it would be useful if it were positively known whether such sets could be constructed in a Hilbert space of arbitrary dimension. Originally proposed in the dissertation of Zauner, a conjecture about the existence of a fiducial vector for arbitrary dimensions was hypothesized. More specifically, For every dimension there exists a SIC-POVM whose elements are the orbit of a positive rank-one operator under the Weyl–Heisenberg group . What is more, commutes with an element T of the Jacobi group . The action of T on modulo the center has order three. Utilizing the notion of group covariance on , this can be restated as For any dimension , let be an orthonormal basis for , and define Then such that the set is a SIC-POVM. Partial results The proof for the existence of SIC-POVMs for arbitrary dimensions remains an open question, but is an ongoing field of research in the quantum information community. Exact expressions for SIC sets have been found for Hilbert spaces of all dimensions from through inclusive, and in some higher dimensions as large as , for 115 values of in all. Furthermore, using the Heisenberg group covariance on , numerical solutions have been found for all integers up through , and in some larger dimensions up to . Relation to spherical t-designs A spherical t-design is a set of vectors on the d-dimensional generalized hypersphere, such that the average value of any -order polynomial over is equal to the average of over all normalized vectors . Defining as the t-fold tensor product of the Hilbert spaces, and as the t-fold tensor product frame operator, it can be shown that a set of normalized vectors with forms a spherical t-design if and only if It then immediately follows that every SIC-POVM is a 2-design, since which is precisely the necessary value that satisfies the above theorem. Relation to MUBs In a d-dimensional Hilbert space, two distinct bases are said to be mutually unbiased if This seems similar in nature to the symmetric property of SIC-POVMs. Wootters points out that a complete set of unbiased bases yields a geometric structure known as a finite projective plane, while a SIC-POVM (in any dimension that is a prime power) yields a finite affine plane, a type of structure whose definition is identical to that of a finite projective plane with the roles of points and lines exchanged. In this sense, the problems of SIC-POVMs and of mutually unbiased bases are dual to one another. In dimension , the analogy can be taken further: a complete set of mutually unbiased bases can be directly constructed from a SIC-POVM. The 9 vectors of the SIC-POVM, together with the 12 vectors of the mutually unbiased bases, form a set that can be used in a Kochen–Specker proof. 
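A minimal numerical sketch of the Weyl–Heisenberg construction described above is given below. The operator names (shift, clock, weyl) and the use of a random trial fiducial are illustrative assumptions; an actual SIC fiducial has to be found by numerical optimisation of exactly the error functional computed here, or taken from the published exact solutions where they exist.

import numpy as np

d = 3                                       # Hilbert-space dimension
omega = np.exp(2j * np.pi / d)

# Shift ("X") and clock ("Z") operators generating the Heisenberg-Weyl group.
shift = np.roll(np.eye(d), 1, axis=0)       # X |j> = |j+1 mod d>
clock = np.diag(omega ** np.arange(d))      # Z |j> = omega^j |j>

def weyl(a, b):
    """Weyl (displacement) operator X^a Z^b, up to an overall phase."""
    return np.linalg.matrix_power(shift, a) @ np.linalg.matrix_power(clock, b)

def sic_error(fiducial):
    """Largest deviation of the orbit of `fiducial` from the SIC condition
    |<psi| D_ab |psi>|^2 = 1/(d+1) for all (a, b) != (0, 0)."""
    target = 1.0 / (d + 1)
    errs = [abs(abs(np.vdot(fiducial, weyl(a, b) @ fiducial)) ** 2 - target)
            for a in range(d) for b in range(d) if (a, b) != (0, 0)]
    return max(errs)

rng = np.random.default_rng(0)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

# A generic random vector is not a fiducial, so this error is far from zero;
# a numerical SIC search minimises this quantity over candidate fiducials.
print(sic_error(psi))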
However, in 6-dimensional Hilbert space, a SIC-POVM is known, but no complete set of mutually unbiased bases has yet been discovered, and it is widely believed that no such set exists. See also Measurement in quantum mechanics Mutually unbiased bases POVM QBism Notes References Quantum measurement Unsolved problems in physics Unsolved problems in mathematics Hilbert spaces Operator theory Incidence geometry Euclidean plane geometry Algebraic geometry Hypergraphs Computer-assisted proofs
SIC-POVM
[ "Physics", "Mathematics" ]
1,632
[ "Unsolved problems in mathematics", "Computer-assisted proofs", "Euclidean plane geometry", "Unsolved problems in physics", "Quantum mechanics", "Combinatorics", "Fields of abstract algebra", "Quantum measurement", "Algebraic geometry", "Hilbert spaces", "Planes (geometry)", "Mathematical prob...
20,163,518
https://en.wikipedia.org/wiki/Process%20flowsheeting
Process flowsheeting is the use of computer aids to perform steady-state heat and mass balancing, sizing and costing calculations for a chemical process. It is an essential and core component of process design. The process design effort may be split into three basic steps: synthesis, analysis and optimization. Synthesis Synthesis is the step in which the structure of the flowsheet is chosen. It is also in this step that one initializes values for the variables which one is free to set. Analysis Analysis is usually made up of three steps: solving heat and material balances; sizing and costing the equipment; and evaluating the economic worth, safety, operability, etc. of the chosen flowsheet. Optimization Optimization involves both structural optimization of the flowsheet itself and optimization of parameters in a given flowsheet. In the former, one may alter the equipment used and/or its connections with other equipment. In the latter, one can change the values of parameters such as temperature and pressure. Parameter optimization is at a more advanced stage of theory than process flowsheet optimization. Plant design project The first step in the sequence leading to the construction of a process plant, and its use in the manufacture of a product, is the conception of a process. The concept is embodied in the form of a "flow sheet". Process design then proceeds on the basis of the flowsheet chosen. Physical property data are the other component needed for process design, apart from a flowsheet. The result of process design is a process flow diagram (PFD). Detailed engineering for the project and vessel specifications then begin. Process flowsheeting ends at the point of generation of a suitable PFD. General-purpose flowsheeting programs became usable and reliable around 1965–1970. See also List of chemical process simulators CAPE-OPEN Interface Standard Process simulation References Westerberg, A. W., Hutchinson, H. P., Motard, R. L., and Winter, P. (1979), "Process Flowsheeting", Cambridge University Press. Veverka, V. V., and Madron, F. (1997), "Material and Energy Balancing in the Process Industries", Elsevier. Babu, B. V. (2004), "Process Plant Simulation", Oxford University Press, ISBN. External links Process flowsheet development using process simulation software Chemical process engineering Process engineering
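The heat- and material-balance step described above is, in sequential-modular flowsheeting, typically solved by iterating around recycle loops until a torn stream converges. The sketch below shows the idea on a deliberately simple flowsheet (mixer, reactor with fixed conversion, separator with fixed split); all of the unit models, parameter values and function names are illustrative assumptions rather than the behaviour of any particular simulator.

def mixer(fresh_feed, recycle):
    return fresh_feed + recycle                  # total molar flow of reactant

def reactor(feed, conversion=0.6):
    return feed * (1.0 - conversion)             # unconverted reactant leaving

def separator(stream, recovery=0.9):
    recycle = recovery * stream                  # reactant returned to the mixer
    purge = stream - recycle
    return recycle, purge

def converge_recycle(fresh_feed=100.0, tol=1e-8, max_iter=200):
    """Successive substitution on the torn recycle stream."""
    recycle = 0.0                                # initial guess for the tear stream
    for _ in range(max_iter):
        out = reactor(mixer(fresh_feed, recycle))
        new_recycle, purge = separator(out)
        if abs(new_recycle - recycle) < tol:
            return new_recycle, purge
        recycle = new_recycle
    raise RuntimeError("recycle loop did not converge")

print(converge_recycle())   # converged recycle and purge flows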
Process flowsheeting
[ "Chemistry", "Engineering" ]
469
[ "Chemical process engineering", "Chemical engineering", "Mechanical engineering by discipline", "Process engineering" ]
1,996,536
https://en.wikipedia.org/wiki/Grain%20boundary
In materials science, a grain boundary is the interface between two grains, or crystallites, in a polycrystalline material. Grain boundaries are two-dimensional defects in the crystal structure, and tend to decrease the electrical and thermal conductivity of the material. Most grain boundaries are preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep. On the other hand, grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve mechanical strength, as described by the Hall–Petch relationship. High and low angle boundaries It is convenient to categorize grain boundaries according to the extent of misorientation between the two grains. Low-angle grain boundaries (LAGB) or subgrain boundaries are those with a misorientation less than about 15 degrees. Generally speaking they are composed of an array of dislocations and their properties and structure are a function of the misorientation. In contrast the properties of high-angle grain boundaries, whose misorientation is greater than about 15 degrees (the transition angle varies from 10 to 15 degrees depending on the material), are normally found to be independent of the misorientation. However, there are 'special boundaries' at particular orientations whose interfacial energies are markedly lower than those of general high-angle grain boundaries. The simplest boundary is that of a tilt boundary where the rotation axis is parallel to the boundary plane. This boundary can be conceived as forming from a single, contiguous crystallite or grain which is gradually bent by some external force. The energy associated with the elastic bending of the lattice can be reduced by inserting a dislocation, which is essentially a half-plane of atoms that act like a wedge, that creates a permanent misorientation between the two sides. As the grain is bent further, more and more dislocations must be introduced to accommodate the deformation resulting in a growing wall of dislocations – a low-angle boundary. The grain can now be considered to have split into two sub-grains of related crystallography but notably different orientations. An alternative is a twist boundary where the misorientation occurs around an axis that is perpendicular to the boundary plane. This type of boundary incorporates two sets of screw dislocations. If the Burgers vectors of the dislocations are orthogonal, then the dislocations do not strongly interact and form a square network. In other cases, the dislocations may interact to form a more complex hexagonal structure. These concepts of tilt and twist boundaries represent somewhat idealized cases. The majority of boundaries are of a mixed type, containing dislocations of different types and Burgers vectors, in order to create the best fit between the neighboring grains. If the dislocations in the boundary remain isolated and distinct, the boundary can be considered to be low-angle. If deformation continues, the density of dislocations will increase and so reduce the spacing between neighboring dislocations. Eventually, the cores of the dislocations will begin to overlap and the ordered nature of the boundary will begin to break down. At this point the boundary can be considered to be high-angle and the original grain to have separated into two entirely separate grains. 
In comparison to low-angle grain boundaries, high-angle boundaries are considerably more disordered, with large areas of poor fit and a comparatively open structure. Indeed, they were originally thought to be some form of amorphous or even liquid layer between the grains. However, this model could not explain the observed strength of grain boundaries and, after the invention of electron microscopy, direct evidence of the grain structure meant the hypothesis had to be discarded. It is now accepted that a boundary consists of structural units which depend on both the misorientation of the two grains and the plane of the interface. The types of structural unit that exist can be related to the concept of the coincidence site lattice, in which repeated units are formed from points where the two misoriented \ In coincident site lattice (CSL) theory, the degree of fit (Σ) between the structures of the two grains is described by the reciprocal of the ratio of coincidence sites to the total number of sites. In this framework, it is possible to draw the lattice for the two grains and count the number of atoms that are shared (coincidence sites), and the total number of atoms on the boundary (total number of site). For example, when Σ=3 there will be one atom of each three that will be shared between the two lattices. Thus a boundary with high Σ might be expected to have a higher energy than one with low Σ. Low-angle boundaries, where the distortion is entirely accommodated by dislocations, are Σ1. Some other low-Σ boundaries have special properties, especially when the boundary plane is one that contains a high density of coincident sites. Examples include coherent twin boundaries (e.g., Σ3) and high-mobility boundaries in FCC materials (e.g., Σ7). Deviations from the ideal CSL orientation may be accommodated by local atomic relaxation or the inclusion of dislocations at the boundary. Describing a boundary A boundary can be described by the orientation of the boundary to the two grains and the 3-D rotation required to bring the grains into coincidence. Thus a boundary has 5 macroscopic degrees of freedom. However, it is common to describe a boundary only as the orientation relationship of the neighbouring grains. Generally, the convenience of ignoring the boundary plane orientation, which is very difficult to determine, outweighs the reduced information. The relative orientation of the two grains is described using the rotation matrix: Using this system the rotation angle θ is: while the direction [uvw] of the rotation axis is: The nature of the crystallography involved limits the misorientation of the boundary. A completely random polycrystal, with no texture, thus has a characteristic distribution of boundary misorientations (see figure). However, such cases are rare and most materials will deviate from this ideal to a greater or lesser degree. Boundary energy The energy of a low-angle boundary is dependent on the degree of misorientation between the neighbouring grains up to the transition to high-angle status. In the case of simple tilt boundaries the energy of a boundary made up of dislocations with Burgers vector b and spacing h is predicted by the Read–Shockley equation: where: with is the shear modulus, is Poisson's ratio, and is the radius of the dislocation core. It can be seen that as the energy of the boundary increases the energy per dislocation decreases. Thus there is a driving force to produce fewer, more misoriented boundaries (i.e., grain growth). 
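The Read–Shockley result referred to above can be stated explicitly; the expressions below are the standard textbook forms, and the symbol choices (\gamma for boundary energy, \theta for misorientation, R for the rotation matrix) are assumptions rather than notation taken from this article. The misorientation angle of a rotation matrix R satisfies \cos\theta = (\operatorname{tr}R - 1)/2, with the rotation axis obtained from the antisymmetric part of R. For a low-angle tilt boundary built from edge dislocations of Burgers vector b spaced h \approx b/\theta apart, the energy per unit area is
\[
  \gamma_{gb} \;=\; \gamma_0\,\theta\,(A - \ln\theta),
  \qquad
  \gamma_0 = \frac{G b}{4\pi(1-\nu)},
  \qquad
  A = 1 + \ln\!\frac{b}{2\pi r_0},
\]
where G is the shear modulus, \nu is Poisson's ratio and r_0 is the dislocation core radius. The energy per dislocation, \gamma_{gb}\,h = \gamma_0\,b\,(A - \ln\theta), falls as \theta grows, which is the driving force for grain growth noted above.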
The situation in high-angle boundaries is more complex. Although theory predicts that the energy will be a minimum for ideal CSL configurations, with deviations requiring dislocations and other energetic features, empirical measurements suggest the relationship is more complicated. Some predicted troughs in energy are found as expected while others missing or substantially reduced. Surveys of the available experimental data have indicated that simple relationships such as low are misleading: It is concluded that no general and useful criterion for low energy can be enshrined in a simple geometric framework. Any understanding of the variations of interfacial energy must take account of the atomic structure and the details of the bonding at the interface. Excess volume The excess volume is another important property in the characterization of grain boundaries. Excess volume was first proposed by Bishop in a private communication to Aaron and Bolling in 1972. It describes how much expansion is induced by the presence of a GB and is thought that the degree and susceptibility of segregation is directly proportional to this. Despite the name the excess volume is actually a change in length, this is because of the 2D nature of GBs the length of interest is the expansion normal to the GB plane. The excess volume () is defined in the following way, at constant temperature , pressure and number of atoms . Although a rough linear relationship between GB energy and excess volume exists the orientations where this relationship is violated can behave significantly differently affecting mechanical and electrical properties. Experimental techniques have been developed which directly probe the excess volume and have been used to explore the properties of nanocrystalline copper and nickel. Theoretical methods have also been developed and are in good agreement. A key observation is that there is an inverse relationship with the bulk modulus meaning that the larger the bulk modulus (the ability to compress a material) the smaller the excess volume will be, there is also direct relationship with the lattice constant, this provides methodology to find materials with a desirable excess volume for a specific application. Boundary migration The movement of grain boundaries (HAGB) has implications for recrystallization and grain growth while subgrain boundary (LAGB) movement strongly influences recovery and the nucleation of recrystallization. A boundary moves due to a pressure acting on it. It is generally assumed that the velocity is directly proportional to the pressure with the constant of proportionality being the mobility of the boundary. The mobility is strongly temperature dependent and often follows an Arrhenius type relationship: The apparent activation energy (Q) may be related to the thermally activated atomistic processes that occur during boundary movement. However, there are several proposed mechanisms where the mobility will depend on the driving pressure and the assumed proportionality may break down. It is generally accepted that the mobility of low-angle boundaries is much lower than that of high-angle boundaries. The following observations appear to hold true over a range of conditions: The mobility of low-angle boundaries is proportional to the pressure acting on it. The rate controlling process is that of bulk diffusion The boundary mobility increases with misorientation. Since low-angle boundaries are composed of arrays of dislocations and their movement may be related to dislocation theory. 
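The Arrhenius-type mobility relationship mentioned above, together with the assumed proportionality between boundary velocity and driving pressure, is conventionally written as below; the symbols are standard, assumed choices rather than notation from this article.
\[
  v \;=\; M\,P, \qquad M \;=\; M_0 \exp\!\left(-\frac{Q}{RT}\right),
\]
where v is the boundary velocity, P the driving pressure, M the mobility, M_0 a pre-exponential factor, Q the apparent activation energy, R the gas constant and T the absolute temperature.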
The most likely mechanism, given the experimental data, is that of dislocation climb, rate limited by the diffusion of solute in the bulk. The movement of high-angle boundaries occurs by the transfer of atoms between the neighbouring grains. The ease with which this can occur will depend on the structure of the boundary, itself dependent on the crystallography of the grains involved, impurity atoms and the temperature. It is possible that some form of diffusionless mechanism (akin to diffusionless phase transformations such as martensite) may operate in certain conditions. Some defects in the boundary, such as steps and ledges, may also offer alternative mechanisms for atomic transfer. Since a high-angle boundary is imperfectly packed compared to the normal lattice it has some amount of free space or free volume where solute atoms may possess a lower energy. As a result, a boundary may be associated with a solute atmosphere that will retard its movement. Only at higher velocities will the boundary be able to break free of its atmosphere and resume normal motion. Both low- and high-angle boundaries are retarded by the presence of particles via the so-called Zener pinning effect. This effect is often exploited in commercial alloys to minimise or prevent recrystallization or grain growth during heat-treatment. Complexion Grain boundaries are the preferential site for segregation of impurities, which may form a thin layer with a different composition from the bulk and a variety of atomic structures that are distinct from the abutting crystalline phases. For example, a thin layer of silica, which also contains impurity cations, is often present in silicon nitride. Grain boundary complexions were introduced by Ming Tang, Rowland Cannon, and W. Craig Carter in 2006. These grain boundary phases are thermodynamically stable and can be considered as quasi-two-dimensional phase, which may undergo to transition, similar to those of bulk phases. In this case structure and chemistry abrupt changes are possible at a critical value of a thermodynamic parameter like temperature or pressure. This may strongly affect the macroscopic properties of the material, for example the electrical resistance or creep rates. Grain boundaries can be analyzed using equilibrium thermodynamics but cannot be considered as phases, because they do not satisfy Gibbs' definition: they are inhomogeneous, may have a gradient of structure, composition or properties. For this reasons they are defined as complexion: an interfacial material or stata that is in thermodynamic equilibrium with its abutting phases, with a finite and stable thickness (that is typically 2–20 Å). A complexion need the abutting phase to exist and its composition and structure need to be different from the abutting phase. Contrary to bulk phases, complexions also depend on the abutting phase. For example, silica rich amorphous layer present in Si3N3, is about 10 Å thick, but for special boundaries this equilibrium thickness is zero. Complexion can be grouped in 6 categories, according to their thickness: monolayer, bilayer, trilayer, nanolayer (with equilibrium thickness between 1 and 2 nm) and wetting. In the first cases the thickness of the layer will be constant; if extra material is present it will segregate at multiple grain junction, while in the last case there is no equilibrium thickness and this is determined by the amount of secondary phase present in the material. 
One example of grain boundary complexion transition is the passage from dry boundary to biltilayer in Au-doped Si, which is produced by the increase of Au. Effect to the electronic structure Grain boundaries can cause failure mechanically by embrittlement through solute segregation (see Hinkley Point A nuclear power station) but they also can detrimentally affect the electronic properties. In metal oxides it has been shown theoretically that at the grain boundaries in Al2O3 and MgO the insulating properties can be significantly diminished. Using density functional theory computer simulations of grain boundaries have shown that the band gap can be reduced by up to 45%. In the case of metals grain boundaries increase the resistivity as the size of the grains relative to the mean free path of other scatters becomes significant. Defect concentration near grain boundaries It is known that most materials are polycrystalline and contain grain boundaries and that grain boundaries can act as sinks and transport pathways for point defects. However experimentally and theoretically determining what effect point defects have on a system is difficult. Interesting examples of the complications of how point defects behave has been manifested in the temperature dependence of the Seebeck effect. In addition the dielectric and piezoelectric response can be altered by the distribution of point defects near grain boundaries. Mechanical properties can also be significantly influenced with properties such as the bulk modulus and damping being influenced by changes to the distribution of point defects within a material. It has also been found that the Kondo effect within graphene can be tuned due to a complex relationship between grain boundaries and point defects. Recent theoretical calculations have revealed that point defects can be extremely favourable near certain grain boundary types and significantly affect the electronic properties with a reduction in the band gap. Relationship between theory and experiment There has been a significant amount of work experimentally to observe both the structure and measure the properties of grain boundaries but the five dimensional degrees of freedom of grain boundaries within complex polycrystalline networks has not yet been fully understood and thus there is currently no method to control the structure and properties of most metals and alloys with atomic precision. Part of the problem is related to the fact that much of the theoretical work to understand grain boundaries is based upon construction of bicrystal (two) grains which do not represent the network of grains typically found in a real system and the use of classical force fields such as the embedded atom method often do not describe the physics near the grains correctly and density functional theory could be required to give realistic insights. Accurate modelling of grain boundaries both in terms of structure and atomic interactions could have the effect of improving engineering which could reduce waste and increase efficiency in terms of material usage and performance. From a computational point of view much of the research on grain boundaries has focused on bi-crystal systems, these are systems which only consider two grain boundaries. There has been recent work which has made use of novel grain evolution models which show that there are substantial differences in the material properties associated with whether curved or planar grains are present. 
See also Abnormal grain growth Segregation in materials References Further reading Crystallographic defects Metallurgy Crystallography Materials science Mineralogy concepts
Grain boundary
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,391
[ "Applied and interdisciplinary physics", "Crystallographic defects", "Metallurgy", "Materials science", "Crystallography", "Condensed matter physics", "nan", "Materials degradation" ]
1,996,857
https://en.wikipedia.org/wiki/Nucleation
In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) significantly below 0°C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0°C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay. Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately. Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface. Characteristics Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts the system to this phase. The standard theory that describes this behaviour for the nucleation of a new thermodynamic phase is called classical nucleation theory. However, the CNT fails in describing experimental results of vapour to liquid nucleation even for model substances like argon by several orders of magnitude. For nucleation of a new thermodynamic phase, such as the formation of ice in water below 0°C, if the system is not evolving with time and nucleation occurs in one step, then the probability that nucleation has not occurred should undergo exponential decay. This is seen for example in the nucleation of ice in supercooled small water droplets. The decay rate of the exponential gives the nucleation rate. Classical nucleation theory is a widely used approximate theory for estimating these rates, and how they vary with variables such as temperature. It correctly predicts that the time you have to wait for nucleation decreases extremely rapidly when supersaturated. It is not just new phases such as liquids and crystals that form via nucleation followed by growth. The self-assembly process that forms objects like the amyloid aggregates associated with Alzheimer's disease also starts with nucleation. Energy consuming self-organising systems such as the microtubules in cells also show nucleation and growth. Heterogeneous nucleation often dominates homogeneous nucleation Heterogeneous nucleation, nucleation with the nucleus at a surface, is much more common than homogeneous nucleation. 
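The exponential decay described above for the probability that nucleation has not yet occurred can be written out explicitly; the notation below (a steady rate J per unit volume, a sample volume V) is a standard, assumed choice.
\[
  P_{\text{no nucleation}}(t) \;=\; \exp(-J\,V\,t),
\]
so a plot of the fraction of un-nucleated samples against time falls on a straight line on a logarithmic axis, with slope set by the nucleation rate J times the sample volume V.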
For example, in the nucleation of ice from supercooled water droplets, purifying the water to remove all or almost all impurities results in water droplets that freeze below around −35°C, whereas water that contains impurities may freeze at −5°C or warmer. This observation that heterogeneous nucleation can occur when the rate of homogeneous nucleation is essentially zero, is often understood using classical nucleation theory. This predicts that the nucleation slows exponentially with the height of a free energy barrier ΔG*. This barrier comes from the free energy penalty of forming the surface of the growing nucleus. For homogeneous nucleation the nucleus is approximated by a sphere, but as we can see in the schematic of macroscopic droplets to the right, droplets on surfaces are not complete spheres and so the area of the interface between the droplet and the surrounding fluid is less than a sphere's . This reduction in surface area of the nucleus reduces the height of the barrier to nucleation and so speeds nucleation up exponentially. Nucleation can also start at the surface of a liquid. For example, computer simulations of gold nanoparticles show that the crystal phase sometimes nucleates at the liquid-gold surface. Computer simulation studies of simple models Classical nucleation theory makes a number of assumptions, for example it treats a microscopic nucleus as if it is a macroscopic droplet with a well-defined surface whose free energy is estimated using an equilibrium property: the interfacial tension σ. For a nucleus that may be only of order ten molecules across it is not always clear that we can treat something so small as a volume plus a surface. Also nucleation is an inherently out of thermodynamic equilibrium phenomenon so it is not always obvious that its rate can be estimated using equilibrium properties. However, modern computers are powerful enough to calculate essentially exact nucleation rates for simple models. These have been compared with the classical theory, for example for the case of nucleation of the crystal phase in the model of hard spheres. This is a model of perfectly hard spheres in thermal motion, and is a simple model of some colloids. For the crystallization of hard spheres the classical theory is a very reasonable approximate theory. So for the simple models we can study, classical nucleation theory works quite well, but we do not know if it works equally well for (say) complex molecules crystallising out of solution. The spinodal region Phase-transition processes can also be explained in terms of spinodal decomposition, where phase separation is delayed until the system enters the unstable region where a small perturbation in composition leads to a decrease in energy and, thus, spontaneous growth of the perturbation. This region of a phase diagram is known as the spinodal region and the phase separation process is known as spinodal decomposition and may be governed by the Cahn–Hilliard equation. The nucleation of crystals In many cases, liquids and solutions can be cooled down or concentrated up to conditions where the liquid or solution is significantly less thermodynamically stable than the crystal, but where no crystals will form for minutes, hours, weeks or longer; this process is called supercooling. Nucleation of the crystal is then being prevented by a substantial barrier. This has consequences, for example cold high altitude clouds may contain large numbers of small liquid water droplets that are far below 0°C. 
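The barrier argument sketched above can be made quantitative with the usual classical-nucleation-theory expressions; these are standard textbook forms rather than formulas quoted from this article, so the symbols are assumptions.
\[
  J \;=\; J_0 \exp\!\left(-\frac{\Delta G^*}{k_B T}\right),
  \qquad
  \Delta G^*_{\text{hom}} \;=\; \frac{16\pi\,\sigma^3}{3\,|\Delta g_v|^2},
\]
where \sigma is the interfacial tension and \Delta g_v the free-energy difference per unit volume between the two phases. For heterogeneous nucleation of a spherical cap making contact angle \theta with a flat substrate, the barrier is reduced by the geometric factor
\[
  \Delta G^*_{\text{het}} \;=\; f(\theta)\,\Delta G^*_{\text{hom}},
  \qquad
  f(\theta) \;=\; \frac{(2+\cos\theta)(1-\cos\theta)^2}{4},
\]
which is why nucleation on surfaces can proceed at supercoolings where homogeneous nucleation is effectively absent.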
In small volumes, such as in small droplets, only one nucleation event may be needed for crystallisation. In these small volumes, the time until the first crystal appears is usually defined to be the nucleation time. Calcium carbonate crystal nucleation depends not only on degree of supersaturation but also the ratio of calcium to carbonate ions in aqueous solutions. In larger volumes many nucleation events will occur. A simple model for crystallisation in that case, that combines nucleation and growth is the KJMA or Avrami model. Although the existing theories including the classical nucleation theory explain well the steady nucleation state when the crystal nucleation rate is not time dependent, the initial non-steady state transient nucleation, and even more mysterious incubation period, require more attention of the scientific community. Chemical ordering of the undercooling liquid prior to crystal nucleation was suggested to be responsible for that feature by reducing the energy barrier for nucleation. Primary and secondary nucleation The time until the appearance of the first crystal is also called primary nucleation time, to distinguish it from secondary nucleation times. Primary here refers to the first nucleus to form, while secondary nuclei are crystal nuclei produced from a preexisting crystal. Primary nucleation describes the transition to a new phase that does not rely on the new phase already being present, either because it is the very first nucleus of that phase to form, or because the nucleus forms far from any pre-existing piece of the new phase. Particularly in the study of crystallisation, secondary nucleation can be important. This is the formation of nuclei of a new crystal directly caused by pre-existing crystals. For example, if the crystals are in a solution and the system is subject to shearing forces, small crystal nuclei could be sheared off a growing crystal, thus increasing the number of crystals in the system. So both primary and secondary nucleation increase the number of crystals in the system but their mechanisms are very different, and secondary nucleation relies on crystals already being present. Experimental observations on the nucleation times for the crystallisation of small volumes It is typically difficult to experimentally study the nucleation of crystals. The nucleus is microscopic, and thus too small to be directly observed. In large liquid volumes there are typically multiple nucleation events, and it is difficult to disentangle the effects of nucleation from those of growth of the nucleated phase. These problems can be overcome by working with small droplets. As nucleation is stochastic, many droplets are needed so that statistics for the nucleation events can be obtained. To the right is shown an example set of nucleation data. It is for the nucleation at constant temperature and hence supersaturation of the crystal phase in small droplets of supercooled liquid tin; this is the work of Pound and La Mer. Nucleation occurs in different droplets at different times, hence the fraction is not a simple step function that drops sharply from one to zero at one particular time. The red curve is a fit of a Gompertz function to the data. This is a simplified version of the model Pound and La Mer used to model their data. The model assumes that nucleation occurs due to impurity particles in the liquid tin droplets, and it makes the simplifying assumption that all impurity particles produce nucleation at the same rate. 
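The KJMA (Avrami) model mentioned above for combined nucleation and growth is usually quoted in the following form; again this is the standard expression, with assumed symbols.
\[
  X(t) \;=\; 1 - \exp(-K t^{n}),
\]
where X is the transformed (crystallised) fraction, K collects the nucleation and growth rates, and the Avrami exponent n depends on the dimensionality of growth and on whether nucleation is continuous or site-saturated.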
It also assumes that these particles are Poisson distributed among the liquid tin droplets. The fit values are that the nucleation rate due to a single impurity particle is 0.02/s, and the average number of impurity particles per droplet is 1.2. Note that about 30% of the tin droplets never freeze; the data plateaus at a fraction of about 0.3. Within the model this is assumed to be because, by chance, these droplets do not have even one impurity particle and so there is no heterogeneous nucleation. Homogeneous nucleation is assumed to be negligible on the timescale of this experiment. The remaining droplets freeze in a stochastic way, at rates 0.02/s if they have one impurity particle, 0.04/s if they have two, and so on. These data are just one example, but they illustrate common features of the nucleation of crystals in that there is clear evidence for heterogeneous nucleation, and that nucleation is clearly stochastic. Ice The freezing of small water droplets to ice is an important process, particularly in the formation and dynamics of clouds. Water (at atmospheric pressure) does not freeze at 0°C, but rather at temperatures that tend to decrease as the volume of the water decreases and as the concentration of dissolved chemicals in the water increases. Thus small droplets of water, as found in clouds, may remain liquid far below 0°C. An example of experimental data on the freezing of small water droplets is shown at the right. The plot shows the fraction of a large set of water droplets, that are still liquid water, i.e., have not yet frozen, as a function of temperature. Note that the highest temperature at which any of the droplets freezes is close to -19°C, while the last droplet to freeze does so at almost -35°C. Examples Nucleation of fluids (gases and liquids) Clouds form when wet air cools (often because the air rises) and many small water droplets nucleate from the supersaturated air. The amount of water vapour that air can carry decreases with lower temperatures. The excess vapor begins to nucleate and to form small water droplets which form a cloud. Nucleation of the droplets of liquid water is heterogeneous, occurring on particles referred to as cloud condensation nuclei. Cloud seeding is the process of adding artificial condensation nuclei to quicken the formation of clouds. Bubbles of carbon dioxide nucleate shortly after the pressure is released from a container of carbonated liquid. Nucleation in boiling can occur in the bulk liquid if the pressure is reduced so that the liquid becomes superheated with respect to the pressure-dependent boiling point. More often, nucleation occurs on the heating surface, at nucleation sites. Typically, nucleation sites are tiny crevices where free gas-liquid surface is maintained or spots on the heating surface with lower wetting properties. Substantial superheating of a liquid can be achieved after the liquid is de-gassed and if the heating surfaces are clean, smooth and made of materials well wetted by the liquid. Some champagne stirrers operate by providing many nucleation sites via high surface-area and sharp corners, speeding the release of bubbles and removing carbonation from the wine. The Diet Coke and Mentos eruption offers another example. The surface of Mentos candy provides nucleation sites for the formation of carbon-dioxide bubbles from carbonated soda. Both the bubble chamber and the cloud chamber rely on nucleation, of bubbles and droplets, respectively. 
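The Pound and La Mer fit described above can be reproduced in a few lines of code. The sketch below evaluates the model's analytic survival curve (Poisson-distributed impurity particles, each nucleating at the same rate) using the quoted fit values, and checks that the never-freezing plateau comes out near 30%; the function and variable names are illustrative assumptions.

import numpy as np

rate_per_particle = 0.02   # nucleation rate per impurity particle, 1/s (quoted fit value)
mean_particles = 1.2       # mean impurity particles per droplet (quoted fit value)

def liquid_fraction(t):
    """Fraction of droplets still liquid at time t under the Poisson-impurity model.

    Averaging exp(-n * rate * t) over a Poisson distribution of particle
    number n gives a Gompertz-type curve, matching the fit described above."""
    return np.exp(-mean_particles * (1.0 - np.exp(-rate_per_particle * t)))

for t in [0.0, 30.0, 120.0, 1e6]:
    print(t, liquid_fraction(t))

# The long-time limit is exp(-mean_particles), roughly 0.30: the ~30% of droplets
# that by chance contain no impurity particle and so never freeze in this model.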
Nucleation of crystals The most common crystallisation process on Earth is the formation of ice. Liquid water does not freeze at 0°C unless there is ice already present; cooling significantly below 0°C is required to nucleate ice and for the water to freeze. For example, small droplets of very pure water can remain liquid down to below -30°C, although ice is the stable state below 0°C. Many of the materials we make and use are crystalline, but are made from liquids, e.g. crystalline iron made from liquid iron cast into a mold, so the nucleation of crystalline materials is widely studied in industry. Nucleation is used heavily in the chemical industry, for example in the preparation of metallic ultradispersed powders that can serve as catalysts. For example, platinum deposited onto TiO2 nanoparticles catalyses the decomposition of water. It is an important factor in the semiconductor industry, as the band gap energy in semiconductors is influenced by the size of nanoclusters. Nucleation in solids In addition to the nucleation and growth of crystals, e.g. in non-crystalline glasses, the nucleation and growth of impurity precipitates in crystals at, and between, grain boundaries is quite important industrially. For example, in metals, solid-state nucleation and precipitate growth play an important role in modifying mechanical properties such as ductility, while in semiconductors they play an important role in trapping impurities during integrated circuit manufacture. References Articles containing video clips Chemistry Materials science Physics
Nucleation
[ "Physics", "Materials_science", "Engineering" ]
3,268
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
1,997,096
https://en.wikipedia.org/wiki/Lindlar%20catalyst
A Lindlar catalyst is a heterogeneous catalyst consisting of palladium deposited on calcium carbonate or barium sulfate and then poisoned with various forms of lead or sulfur. It is used for the hydrogenation of alkynes to alkenes (i.e. without further reduction into alkanes). It is named after its inventor, Herbert Lindlar, who discovered it in 1952. Synthesis Lindlar catalyst is commercially available but can also be prepared by reducing palladium chloride in a slurry of calcium carbonate (CaCO3) and adding lead acetate. A variety of other "catalyst poisons" have been used, including lead oxide and quinoline. The palladium content of the supported catalyst is usually 5% by weight. Catalytic properties The catalyst is used for the hydrogenation of alkynes to alkenes (i.e. without further reduction into alkanes). The lead serves to deactivate the palladium sites, and further deactivation of the catalyst with quinoline or 3,6-dithia-1,8-octanediol enhances its selectivity, preventing formation of alkanes. Thus if a compound contains a double bond as well as a triple bond, only the triple bond is reduced. An example is the reduction of phenylacetylene to styrene. Alkyne hydrogenation is stereospecific, occurring via syn addition to give the cis-alkene. For example, the hydrogenation of acetylenedicarboxylic acid using Lindlar catalyst gives maleic acid (the cis isomer) rather than fumaric acid. An example of commercial use is the organic synthesis of vitamin A, which involves an alkyne reduction with the Lindlar catalyst. These catalysts are also used in the synthesis of dihydrovitamin K1. See also Rosenmund reduction, a reduction using palladium on barium sulfate, poisoned with sulfur compounds. Urushibara Nickel, a nickel based catalyst used to hydrogenate alkynes to alkenes. References Catalysts Palladium compounds Hydrogenation catalysts
Lindlar catalyst
[ "Chemistry" ]
438
[ "Catalysis", "Catalysts", "Hydrogenation catalysts", "Hydrogenation", "Chemical kinetics" ]
1,997,957
https://en.wikipedia.org/wiki/Crazing
Crazing is a yielding mechanism in polymers characterized by the formation of a fine network of microvoids and fibrils. These structures (known as crazes) typically appear as linear features and frequently precede brittle fracture. The fundamental difference between crazes and cracks is that crazes contain polymer fibrils (5-30 nm in diameter), constituting about 50% of their volume, whereas cracks do not. Unlike cracks, crazes can transmit load between their two faces through these fibrils. Crazes typically initiate when applied tensile stress causes microvoids to nucleate at points of high stress concentration within the polymer, such as those created by scratches, flaws, cracks, dust particles, and molecular heterogeneities. Crazes grow normal to the principal (tensile) stress; they may extend up to centimeters in length and fractions of a millimeter in thickness if conditions prevent early failure and crack propagation. The refractive index of crazes is lower than that of the surrounding material, causing them to scatter light. Consequently, a stressed material with a high density of crazes may appear 'stress-whitened,' as the scattering makes a normally clear material become opaque. Crazing is a phenomenon typical of glassy amorphous polymers, but can also be observed in semicrystalline polymers. In thermosetting polymers crazing is less frequently observed because of the inability of the crosslinked molecules to undergo significant molecular stretching and disentanglement; if crazing does occur, it is often due to interaction with second-phase particles incorporated as a toughening mechanism. Historical background Crazing, derived from the Middle English term "crasen" meaning "to break", has historically been used to describe a network of fine cracks in the surfaces of glasses and ceramics. This term was naturally extended to describe similar phenomena observed in transparent glassy polymers. Under tensile stress, these polymers develop what appear to be cracks on their surfaces, often very gradually or after prolonged periods. These fine cracks, or crazes, were noted for their ability to propagate across specimens without causing immediate failure. Crazing in polymers was first identified as a distinct deformation mechanism in the mid-20th century. Unlike inorganic glasses, most glassy polymers were found to be able to undergo significant plastic deformation before fracture occurs. Early observations noted the presence of crazes that propagated across specimens without causing immediate failure, indicating their load-bearing capacity, and provided further insight into the nature of crazes by describing their appearance and behavior under stress. Significant advances in the understanding of crazing were made in the 1960s and 1970s, illuminating the formation and structure of crazes in various polymers and the stress conditions necessary for craze formation. Researchers demonstrated that crazes grow perpendicular to the principal stress and highlighted the critical stress levels required for their initiation. Mechanisms of crazing Craze nucleation and growth There is typically a delay between the application of stress and the visible appearance of crazes, indicating a barrier to craze nucleation. The time delay between the application of stress and the nucleation of crazes can be attributed to the viscoelastic nature of the process.
Like other viscoelastic phenomena, this delay results from the thermally activated movements of polymer segments under mechanical stress. Crazing involves a localized or inhomogeneous plastic strain of the material. However, while plastic deformation essentially occurs at constant volume, crazing is a cavitation process that takes place with an increase in volume. The initiation of crazing normally requires the presence of a dilative component of the stress tensor and can be inhibited by applying hydrostatic pressure. From a solid mechanics perspective this means that a necessary condition for craze nucleation is a positive value of the first stress invariant $I_1$, which represents the dilatational component: $I_1 = \sigma_1 + \sigma_2 + \sigma_3 > 0$. This condition is favored by the presence of triaxial tensile stresses, a condition that exists at defects in bulky samples subjected to plane strain. The cavitation involved in crazing allows the material to achieve plastic strain faster. The presence of cracks or defects in bulky samples will favor the initiation of crazing, as these defects are points of high concentration of stresses and can cause the formation of initial microvoids. Crazes grow on the plane of maximum principal stress. Craze fibrils can endure substantial tensile forces across the craze but cannot withstand shear forces. Consequently, the highest plastic resistance is achieved by maximizing the normal stress on the plane of the craze. The concept of Taylor's meniscus instability provides a fundamental explanation for the growth of crazes. This phenomenon is commonly observed when two flat plates with a layer of liquid between them are forced apart or when adhesive tape is peeled off from a substrate. The hypothesis concerning craze formation states that a wedge-shaped zone of plastically deformed and strain-softened polymer forms ahead of the craze tip. This deformed polymer constitutes the "fluid" layer into which the craze tip "meniscus" propagates, while the undeformed polymer outside the zone acts as the rigid "plates" constraining the fluid. As the finger-like structure of the craze tip advances, fibrils form by the deformation of the polymer webs between the fingers, and the interconnected void network emerges naturally right at the craze tip. Stereo-transmission electron microscopy has demonstrated that meniscus instability is the operative craze tip advancement mechanism in various glassy polymers. Meniscus formation results from an imbalance of surface tension forces: surface tension acts to minimize the surface area, and any disturbance can create a meniscus, a curved surface at the interface between two phases. This causes the polymer chains to pull apart and form a cavity filled with a fibrillar network. This type of instability is well documented in various classes of materials, and the concepts were developed from experiments involving the interpenetration of two fluids with different densities. In this scenario, the voided structure of the craze acts like the low-density fluid, spreading into the denser, undeformed polymer. The physical principle behind this instability is the difference in hydrostatic pressure across a curved surface. Any disturbance that introduces curvature can propagate if the pressure difference due to the curvature is significant enough to overcome the surface tension. This condition can be written as $\Delta p \geq \Gamma / R$, where $\Gamma$ is the surface energy and $R$ is the radius of curvature.
In theory, any disturbance meeting the criteria of the previous equation can grow, but in reality a predominant wavelength emerges, which grows the quickest. Craze breakdown and fracture Crazes in polymers are typically load-bearing and expand in width and area until a region within the craze breaks down, forming a large void. With further stress or over time, this void can develop into a subcritical crack, growing slowly until it reaches a critical length, causing the sample to fracture. For polymers of practical molecular weight, craze growth is necessary but not sufficient for fracture. The critical step in the fracture of most glassy polymer crazes is the initiation of the first large void, defined as several fibril spacings in diameter. This process, known as craze fibril breakdown, is closely linked to the active zone and craze growth at the craze interface. The breakdown of the craze starts gradually as voids coalesce to produce a cavity equal in thickness to the craze itself. Craze breakdown, which leads to crack extension, is crucial to the failure process. However, the detailed mechanisms involved remain a subject of debate among experts, despite the many models that have been suggested. In the framework of fracture mechanics, once a crack of size $a$ is initiated by an applied stress $\sigma$, its propagation can be analyzed using the stress intensity factor $K = \sigma\sqrt{\pi a}$, which describes the stress state near the tip of a crack. According to linear elastic fracture mechanics (LEFM), the crack will propagate when the stress intensity factor reaches a critical value $K_{IC}$, known as the fracture toughness of the material. This approach allows for the prediction of crack growth and the evaluation of the material's resistance to fracture under various loading conditions. It has been observed that, for a crack growing relatively slowly in a stable manner and preceded by a craze, the relationship between $K$ and the crack propagation speed $\dot{a}$ can be described by a power-law relation of the form $\dot{a} \propto K^{m}$, where the exponent $m$ is related to the viscoelastic processes at the crack tip that stabilize crack growth. Craze yielding and shear yielding The yield point of a material represents the maximum stress it can endure without resulting in a permanent strain after the load is removed; it refers to the stress level required to initiate plastic deformation. When analyzing the yielding behavior of polymers, it is crucial to differentiate between shear yielding and craze yielding due to their distinct microstructural characteristics. Shear yielding involves the material undergoing shear flow with minimal or no change in density. In contrast, craze yielding is highly localized, and the macroscopic behaviors of shear and craze yielding differ significantly. Crazing and shear yielding are the two principal deformation mechanisms inherent to polymers. These two phenomena are competing mechanisms (although they are not mutually exclusive and can coexist), with shear yielding being the more ductile failure mode because it involves the deformation of a significant volume of the material, while crazing is a more localized phenomenon and is more often associated with brittle failure. Shear yielding manifests as plastic deformation in the form of shear bands and is closely associated with the material softening that occurs immediately after yielding. With continued deformation, the material undergoes hardening due to molecular orientation, resulting in the multiplication and propagation of shear bands.
Shear bands may form in a material that exhibits strain softening; hence, when the conditions which favour crazing are suppressed, polymers will tend to form shear bands. Yielding criteria for polymers Yielding criteria A yield criterion is a general condition that must be satisfied by the applied stress tensor for yield to occur. A yield criterion expressed in terms of stress can be visualized as a surface encompassing the origin in principal stress space. Yielding does not occur until the stress increases from zero (the origin) to some point on this surface. For isotropic elastic materials with a ductile failure mode, the most widely used criteria are the Tresca criterion of maximum tangential stress and the von Mises yield criterion based on maximum distortion energy. The latter is the more widely used and states that yielding of a ductile material begins when the second invariant of deviatoric stress reaches a critical value. The criterion requires that, for yield not to occur, the stress coordinate must be contained within the cylindrical surface described by $\tau_{oct} \leq \tau_{oct,y}$, where $\tau_{oct}$ is the octahedral shear stress: $\tau_{oct} = \tfrac{1}{3}\sqrt{(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2}$. This criterion is obeyed quite well by most metals; however, it cannot be used to describe shear yielding in polymers, since in those materials the hydrostatic component of the stress tensor affects the yield stress. The modified von Mises criterion for shear yielding Experiments have shown that neither the Tresca nor the von Mises criterion adequately describes the shear yield behaviour of polymers, because, for example, the true yield stress is invariably higher in uniaxial compression than in tension, and uniaxial-tensile tests conducted in a pressure chamber show that yield stresses of polymers increase significantly with hydrostatic pressure. The von Mises criterion can be modified to incorporate the effect of pressure on the state of the material by substituting, in its original formulation, a yield value $\tau_{oct,y}$ that is linearly dependent on the hydrostatic component of the stress tensor: $\tau_{oct,y} = \tau_0 - \mu p$, where $p$ represents the hydrostatic component, $p = \tfrac{1}{3}(\sigma_1 + \sigma_2 + \sigma_3)$, and $\tau_0$ and $\mu$ are material parameters that depend on loading rate and temperature. The constant $\tau_0$ is the yield stress in pure shear, since under this stress state the value of $p$ is zero. In plane stress the modified von Mises criterion is an ellipse in principal stress space, but unlike the standard criterion it is shifted with respect to the origin, due to the different behavior of polymeric materials depending on the hydrostatic component of the stress tensor. Yielding criteria for crazing An effective crazing criterion was proposed in the early 1970s by Sternstein and coworkers. Considering crazing a form of dilatational plasticity, a critical condition that has to be met by the applied stress tensor for crazing to take place is $\sigma_b \geq A + \dfrac{B}{I_1}$, where $\sigma_b$ is the stress bias required to orient the fibrils and $A$ and $B$ are time and temperature dependent parameters. $I_1$ is the first stress invariant and represents the dilatational component: $I_1 = \sigma_1 + \sigma_2 + \sigma_3$. The stress bias is difficult to evaluate for a general triaxial state of stress, but as far as the yield criterion is concerned the constants $A$ and $B$ can easily be evaluated by performing experiments in plane stress conditions ($\sigma_3 = 0$), so that the condition becomes $|\sigma_1 - \sigma_2| \geq A + \dfrac{B}{\sigma_1 + \sigma_2}$. The crazing criterion is illustrated in the following graph for different temperatures. The curves will be asymptotic to the pure shear line where $\sigma_1 = -\sigma_2$, which establishes the boundary between hydrostatic compression and hydrostatic tension.
Below this line crazing does not occur, because the pressure component of the stress tensor tends to reduce the volume instead of increasing it. Oxborough and Bowden attempted to create a more comprehensive relationship valid for a general triaxial state of stress. Their assumption is that crazing occurs when the strain in any direction reaches a critical value ($\varepsilon_c$) that depends on the hydrostatic component of the stress tensor: $\varepsilon_c = X' + \dfrac{Y'}{I_1}$, where $X'$ and $Y'$ are again time and temperature dependent parameters. The maximum tensile strain in an isotropic body under a general state of stress defined by the principal stresses is always in the direction of the maximum principal stress ($\sigma_1$) and is given by $\varepsilon_1 = \dfrac{1}{E}(\sigma_1 - \nu\sigma_2 - \nu\sigma_3)$, where $E$ is Young's modulus and $\nu$ is Poisson's ratio. So the previous equation can be re-written to define the criterion in terms of the principal stresses, $\sigma_1 - \nu\sigma_2 - \nu\sigma_3 \geq X'E + \dfrac{Y'E}{\sigma_1 + \sigma_2 + \sigma_3}$; for plane stress this equation is very similar to the one proposed by Sternstein and coworkers. Argon proposed an alternative crazing criterion based on a molecular theory for distortional plasticity; he described the process of crazing as a micromechanical problem of elastic-plastic expansion of initially stable micropores, produced by a thermally activated mechanism under stress, to form a craze nucleus. With his analysis of the condition of craze nucleation he provided a derivation of crazing. This model provides an elegant criterion that can be easily applied for any stress state, and it is not based on strain, which is a poor parameter of state: the criterion is expressed in terms of the octahedral shear stress $\tau_{oct}$ and the hydrostatic component $p = \tfrac{1}{3}(\sigma_1 + \sigma_2 + \sigma_3)$, together with two time-temperature constants. General yielding criterion By combining the criteria for shear yielding and crazing, a region can be found in which no yielding can occur. This can easily be seen in the $\sigma_1$-$\sigma_2$ plane (considering plane stress conditions): where the two criteria intersect, a transition between the two mechanisms is expected. Considering that polymers have a viscoelastic behaviour, an effect of loading rates and temperatures on the shear yield stress and on the crazing yield is observed. When the loading conditions (loading rate and temperature) are such that the tensile stress for shear yielding is lower than the crazing stress, no crazing will be observed in the material and a brittle to ductile transition can be expected. In order to have a comprehensive yielding criterion, both yielding phenomena must be taken into account and their dependence on external parameters has to be determined. Only if these dependences are known can a proper yield criterion, expressed in terms of stress, be obtained as a surface encompassing the origin in principal stress space. See also Fracture in polymers Environmental stress cracking Yield Viscoelasticity Rubber toughening Polymer physics References External links Crazing & shear banding Polymers Plasticity (physics) Deformation (mechanics) Fracture mechanics
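To make the competition between shear yielding and crazing concrete, the sketch below evaluates, for a plane-stress state (σ3 = 0), the pressure-modified von Mises condition and a Sternstein-type crazing condition of the forms given above. This is a minimal illustration only: the function names are invented here, and the material constants (τ0, μ, A, B) are arbitrary placeholder values chosen to make the example run, not data for any particular polymer.

```python
import math

def octahedral_shear(s1, s2, s3=0.0):
    """Octahedral shear stress for principal stresses s1, s2, s3 (MPa)."""
    return math.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0

def hydrostatic(s1, s2, s3=0.0):
    """Hydrostatic (mean) stress, positive in tension (MPa)."""
    return (s1 + s2 + s3) / 3.0

def shear_yields(s1, s2, s3=0.0, tau0=40.0, mu=0.2):
    """Pressure-modified von Mises check with placeholder constants:
    yield when tau_oct >= tau0 - mu*p, so compression (p < 0) raises the threshold."""
    return octahedral_shear(s1, s2, s3) >= tau0 - mu * hydrostatic(s1, s2, s3)

def crazes(s1, s2, s3=0.0, A=25.0, B=900.0):
    """Sternstein-type crazing check with placeholder constants:
    craze when the stress bias |s1 - s2| >= A + B/I1, and only if I1 > 0
    (a dilatational component is needed for crazing)."""
    I1 = s1 + s2 + s3
    if I1 <= 0.0:
        return False
    return abs(s1 - s2) >= A + B / I1

if __name__ == "__main__":
    for s1, s2 in [(60, 0), (60, 60), (60, -60), (-60, -60)]:
        print(f"s1={s1:>4} s2={s2:>4} MPa  "
              f"shear yield: {shear_yields(s1, s2)}  craze: {crazes(s1, s2)}")
```

For these placeholder constants the expected pattern emerges: uniaxial tension crazes before it shear-yields, pure shear (σ1 = -σ2, so I1 = 0) and biaxial compression do not craze at all, and equal biaxial tension has no stress bias to orient fibrils.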
Crazing
[ "Chemistry", "Materials_science", "Engineering" ]
3,246
[ "Structural engineering", "Fracture mechanics", "Deformation (mechanics)", "Materials science", "Plasticity (physics)", "Polymer chemistry", "Polymers", "Materials degradation" ]
1,999,035
https://en.wikipedia.org/wiki/Damper%20%28flow%29
A damper is a valve or plate that stops or regulates the flow of air inside a duct, chimney, VAV box, air handler, or other air-handling equipment. A damper may be used to cut off central air conditioning (heating or cooling) to an unused room, or to regulate it for room-by-room temperature and climate control - for example, in the case of Volume Control Dampers. Its operation can be manual or automatic. Manual dampers are turned by a handle on the outside of a duct. Automatic dampers are used to regulate airflow constantly and are operated by electric or pneumatic motors, in turn controlled by a thermostat or building automation system. Automatic or motorized dampers may also be controlled by a solenoid, and the degree of air-flow calibrated, perhaps according to signals from the thermostat going to the actuator of the damper in order to modulate the flow of air-conditioned air in order to effect climate control. In a chimney flue, a damper closes off the flue to keep the weather and animals (e.g. birds) out and warm or cool air in. This is usually done in the summer, but also may be done in the winter between uses. In some cases, the damper may also be partly closed to help control the rate of combustion. The damper may be accessible only by reaching up into the fireplace by hand or with a woodpoker, or sometimes by a lever or knob that sticks down or out. On a wood-burning stove or similar device, it is usually a handle on the vent duct as in an air conditioning system. Forgetting to open a damper before beginning a fire can cause serious smoke damage to the interior of a home, if not a house fire. Automated zone dampers A zone damper (also known as a Volume Control Damper or VCD) is a specific type of damper used to control the flow of air in an HVAC heating or cooling system. In order to improve efficiency and occupant comfort, HVAC systems are commonly divided up into multiple zones. For example, in a house, the main floor may be served by one heating zone while the upstairs bedrooms are served by another. In this way, the heat can be directed principally to the main floor during the day and principally to the bedrooms at night, allowing the unoccupied areas to cool down. Zone dampers as used in home HVAC systems are usually electrically powered. In large commercial installations, vacuum or compressed air may be used instead. In either case, the motor is usually connected to the damper via a mechanical coupling. For electrical zone dampers, there are two principal designs. In one design, the motor is often a small shaded-pole synchronous motor combined with a rotary switch that can disconnect the motor at either of the two stopping points ("damper open" or "damper closed"). In this way, applying power to the "open damper" terminal causes the motor to run until the damper is open while applying power at the "close damper" terminal causes the motor to run until the damper is closed. The motor is commonly powered from the same 24 volt AC power source that is used for the rest of the control system. This allows the zone dampers to be directly controlled by low-voltage thermostats and wired with low-voltage wiring. Because simultaneous closure of all dampers might harm the furnace or air handler, this style of damper is often designed to only obstruct a portion of the air duct, for example, 75%. Another style of electrically powered damper uses a spring-return mechanism and a shaded-pole synchronous motor. 
In this case, the damper is normally opened by the force of the spring but can be closed by the force of the motor. Removal of electrical power re-opens the damper. This style of damper is advantageous because it is "fail safe"; if the control to the damper fails, the damper opens and allows air to flow. However, in most applications "fail safe" indicates the damper will close upon loss of power, thus preventing the spread of smoke and fire to other areas. These dampers also may allow adjustment of the "closed" position so that they only obstruct, for example, 75% of the air flow when closed. For vacuum-operated or pneumatically operated zone dampers, the thermostat usually switches the pressure or vacuum on or off, causing a spring-loaded rubber diaphragm to move and actuate the damper. As with the second style of electrical zone dampers, these dampers automatically return to the default position without the application of any power, and the default position is usually "open", allowing air to flow. Like the second style of electrical zone damper, these dampers may allow adjustment of the "closed" position. Highly sophisticated systems may use some form of building automation such as BACnet or LonWorks to control the zone dampers. The dampers may also support positions other than fully open or fully closed and are usually capable of reporting their current position and, often, the temperature and volume of the air flowing past the smart damper. Regardless of the style of damper employed, the systems are often designed so that when no thermostat is calling for air, all dampers in the system are opened. This allows air to continue to flow while the heat exchanger in a furnace cools down after a heating period completes. Comparison to multiple furnaces/air handlers Multiple zones can be implemented using either multiple, individually controlled furnaces/air handlers or a single furnace/air handler and multiple zone dampers. Each approach has advantages and disadvantages. Multiple furnaces/air handlers Advantages: Simple mechanical and control design ("SPST thermostats") Redundancy: If one zone furnace fails, the others can remain working Disadvantages: Cost. Furnaces cost much more than zone dampers Power consumption. Operating furnaces draw power whereas a zone damper only draws power while in motion from one state to the other (or, in some cases, a very small amount of power while holding closed) Zone dampers Advantages: Cost. Power consumption. Disadvantages: New US residential building codes require permanent access to dampers through ceiling access panels. Zone dampers are not 100% reliable. Most styles of motor-to-open/motor-to-closed electrically operated zone dampers aren't "fail safe" (that is, they do not fail to the open condition). However, zone dampers that are of the "Normally Open" type are fail-safe, in that they will fail to the open condition. No inherent redundancy for the furnace. A system with zone dampers is dependent upon a single furnace. If it fails, the system becomes completely inoperable. Low total flow when only some dampers are open can cause inefficient operation. Supply and return ducts need dampers to avoid pressurization of portions of the building. The system can be harder to design, requiring both "SPDT" thermostats (or relays) and the ability of the system to withstand the fault condition whereby all zone dampers are closed simultaneously. Pneumatic actuation is preferred for these dampers.
It is easier to provide zone-classified solenoid valves for pneumatic actuation, as compared to electrical actuation. The physical size of such solenoid valves has come down very considerably over the years. Fire dampers Fire dampers are fitted where ductwork passes through fire compartment walls and fire curtains as part of a fire control strategy. In normal circumstances, these dampers are held open by means of fusible links. When subjected to heat, these links fracture and allow the damper to close under the influence of the integral closing spring. The links are attached to the damper such that the dampers can be released manually for testing purposes. The damper is provided with an access door in the adjacent ductwork for the purpose of inspection and resetting in the event of closure. See also Zone valve Variable air volume (VAV) Testing, adjusting, balancing Air-mixing plenum References External links Heating, ventilation, and air conditioning Mechanical engineering
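As a rough illustration of the zone-control behaviour described above (the damper of each calling zone opens; when no thermostat calls, all dampers are driven open so air can keep flowing while the heat exchanger cools down), here is a minimal sketch. The class and function names are invented for this example and do not correspond to any real building-automation API.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    calling: bool = False      # True when this zone's thermostat calls for air
    damper_open: bool = True   # default position is open ("fail safe" open style)

def update_dampers(zones):
    """Open the dampers of calling zones; if no zone is calling, drive all
    dampers open so residual air can flow while the furnace or air handler
    finishes its cycle, as described in the article."""
    any_call = any(z.calling for z in zones)
    for z in zones:
        z.damper_open = z.calling or not any_call
    return zones

if __name__ == "__main__":
    zones = [Zone("main floor", calling=True), Zone("bedrooms", calling=False)]
    for z in update_dampers(zones):
        print(f"{z.name}: damper {'open' if z.damper_open else 'closed'}")
    # With no zone calling, both dampers are driven open:
    for z in zones:
        z.calling = False
    for z in update_dampers(zones):
        print(f"{z.name}: damper {'open' if z.damper_open else 'closed'}")
```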
Damper (flow)
[ "Physics", "Engineering" ]
1,727
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
1,999,139
https://en.wikipedia.org/wiki/Turbomachinery
Turbomachinery, in mechanical engineering, describes machines that transfer energy between a rotor and a fluid, including both turbines and compressors. While a turbine transfers energy from a fluid to a rotor, a compressor transfers energy from a rotor to a fluid. It is an important application of fluid mechanics. These two types of machines are governed by the same basic relationships, including Newton's second law of motion and Euler's pump and turbine equation for compressible fluids. Centrifugal pumps are also turbomachines that transfer energy from a rotor to a fluid, usually a liquid, while turbines and compressors usually work with a gas. History The first turbomachines could be identified as water wheels, which appeared between the 3rd and 1st centuries BCE in the Mediterranean region. These were used throughout the medieval period and began the first Industrial Revolution. When steam power started to be used, as the first power source driven by the combustion of a fuel rather than renewable natural power sources, this was as reciprocating engines. Primitive turbines and conceptual designs for them, such as the smoke jack, appeared intermittently, but the temperatures and pressures required for a practically efficient turbine exceeded the manufacturing technology of the time. The first patent for a gas turbine was filed in 1791 by John Barber. Practical hydroelectric water turbines and steam turbines did not appear until the 1880s. Gas turbines appeared in the 1930s. The first impulse type turbine was created by Carl Gustaf de Laval in 1883. This was closely followed by the first practical reaction type turbine in 1884, built by Charles Parsons. Parsons’ first design was a multi-stage axial-flow unit, which George Westinghouse acquired and began manufacturing in 1895, while General Electric acquired de Laval's designs in 1897. Since then, development has skyrocketed from Parsons’ early design, producing 0.746 kW, to modern nuclear steam turbines producing upwards of 1500 MW. Furthermore, steam turbines accounted for roughly 45% of electrical power generated in the United States in 2021. The first functioning industrial gas turbines were used in the late 1890s to power street lights (Meher-Homji, 2000). Classification In general, the two kinds of turbomachines encountered in practice are open and closed turbomachines. Open machines such as propellers, windmills, and unshrouded fans act on an infinite extent of fluid, whereas closed machines operate on a finite quantity of fluid as it passes through a housing or casing. Turbomachines are also categorized according to the type of flow. When the flow is parallel to the axis of rotation, they are called axial flow machines, and when flow is perpendicular to the axis of rotation, they are referred to as radial (or centrifugal) flow machines. There is also a third category, called mixed flow machines, where both radial and axial flow velocity components are present. Turbomachines may be further classified into two additional categories: those that absorb energy to increase the fluid pressure, i.e. pumps, fans, and compressors, and those that produce energy, such as turbines, by expanding flow to lower pressures. Of particular interest are applications which contain pumps, fans, compressors and turbines. These components are essential in almost all mechanical equipment systems, such as power and refrigeration cycles.
Turbomachines Definition Any device that extracts energy from or imparts energy to a continuously moving stream of fluid can be called a turbomachine. Elaborating, a turbomachine is a power or heat generating machine which employs the dynamic action of a rotating element, the rotor; the action of the rotor changes the energy level of the continuously flowing fluid through the machine. Turbines, compressors and fans are all members of this family of machines. In contrast to positive displacement machines (particularly of the reciprocating type which are low speed machines based on the mechanical and volumetric efficiency considerations), the majority of turbomachines run at comparatively higher speeds without any mechanical problems and volumetric efficiency close to one hundred percent. Categorization Energy conversion Turbomachines can be categorized on the basis of the direction of energy conversion: Absorb power to increase the fluid pressure or head (ducted fans, compressors and pumps). Produce power by expanding fluid to a lower pressure or head (hydraulic, steam and gas turbines). Fluid flow Turbomachines can be categorized on the basis of the nature of the flow path through the passage of the rotor: Axial flow turbomachines - When the path of the through-flow is wholly or mainly parallel to the axis of rotation, the device is termed an axial flow turbomachine. The radial component of the fluid velocity is negligible. Since there is no change in the direction of the fluid, several axial stages can be used to increase power output. A Kaplan turbine is an example of an axial flow turbine. In the figure: U = Blade velocity, Vf = Flow velocity, V = Absolute velocity, Vr = Relative velocity, Vw = Tangential or Whirl component of velocity. Radial flow turbomachines - When the path of the throughflow is wholly or mainly in a plane perpendicular to the rotation axis, the device is termed a radial flow turbomachine. Therefore, the change of radius between the entry and the exit is finite. A radial turbomachine can be inward or outward flow type depending on the purpose that needs to be served. The outward flow type increases the energy level of the fluid and vice versa. Due to continuous change in direction, several radial stages are generally not used. A centrifugal pump is an example of a radial flow turbomachine. Mixed flow turbomachines – When axial and radial flow are both present and neither is negligible, the device is termed a mixed flow turbomachine. It combines flow and force components of both radial and axial types. A Francis turbine is an example of a mixed-flow turbine. Physical action Turbomachines can finally be classified on the relative magnitude of the pressure changes that take place across a stage: Impulse Turbomachines operate by accelerating and changing the flow direction of fluid through a stationary nozzle (the stator blade) onto the rotor blade. The nozzle serves to change the incoming pressure into velocity, the enthalpy of the fluid decreases as the velocity increases. Pressure and enthalpy drop over the rotor blades is minimal. Velocity will decrease over the rotor. Newton's second law describes the transfer of energy. Impulse turbomachines do not require a pressure casement around the rotor since the fluid jet is created by the nozzle prior to reaching the blading on the rotor. A Pelton wheel is an impulse design. Reaction Turbomachines operate by reacting to the flow of fluid through aerofoil shaped rotor and stator blades. 
The velocity of the fluid through the sets of blades increases slightly (as with a nozzle) as it passes from rotor to stator and vice versa. The velocity of the fluid then decreases again once it has passed between the gap. Pressure and enthalpy consistently decrease through the sets of blades. Newton's third law describes the transfer of energy for reaction turbines. A pressure casement is needed to contain the working fluid. For compressible working fluids, multiple turbine stages are usually used to harness the expanding gas efficiently. Most turbomachines use a combination of impulse and reaction in their design, often with impulse and reaction parts on the same blade. Dimensionless ratios to describe turbomachinery The following dimensionless ratios are often used for the characterisation of fluid machines. They allow a comparison of flow machines with different dimensions and boundary conditions. Pressure range ψ Flow coefficient φ (also called delivery or volume number) Performance numbers λ Run number σ Diameter number δ Applications Power Generation Hydro electric - Hydro-electric turbomachinery uses potential energy stored in water flowing over an open impeller to turn a generator, which creates electricity. Steam turbines - Steam turbines used in power generation come in many different variations. The overall principle is that high-pressure steam is forced over blades attached to a shaft, which turns a generator. As the steam travels through the turbine, it passes through smaller blades causing the shaft to spin faster, creating more electricity. Gas turbines - Gas turbines work much like steam turbines. Air is forced in through a series of blades that turn a shaft. Then fuel is mixed with the air and causes a combustion reaction, increasing the power. This then causes the shaft to spin faster, creating more electricity. Windmills - Also known as a wind turbine, windmills are increasing in popularity for their ability to efficiently use the wind to generate electricity. Although they come in many shapes and sizes, the most common is the large three-bladed design. The blades work on the same principle as an airplane wing. As wind passes over the blades, it creates an area of low and high pressure, causing the blade to move, spinning a shaft and creating electricity. It is most like a steam turbine, but works with an infinite supply of wind. Marine Steam turbine - Steam turbines in marine applications are very similar to those in power generation. The few differences between them are size and power output. Steam turbines on ships are much smaller because they don't need to power a whole town. They aren't very common because of their high initial cost, high specific fuel consumption, and the expensive machinery that goes with them. Gas turbines - Gas turbines in marine applications are becoming more popular due to their smaller size, increased efficiency, and ability to burn cleaner fuels. They run just like gas turbines for power generation, but are also much smaller and do require more machinery for propulsion. They are most popular in naval ships as they can be at a dead stop to full power in minutes (Kayadelen, 2013), and are much smaller for a given amount of power. Water jet - Essentially a water jet drive is like an aircraft turbojet with the difference that the operating fluid is water instead of air. Water jets are best suited to fast vessels and are thus used often by the military.
Water jet propulsion has many advantages over other forms of marine propulsion, such as stern drives, outboard motors, shafted propellers and surface drives. Auto Turbochargers - Turbochargers are one of the most popular turbomachines. They are used mainly for adding power to engines by adding more air. It combines both forms of turbomachines. Exhaust gases from the engine spin a bladed wheel, much like a turbine. That wheel then spins another bladed wheel, sucking and compressing outside air into the engine. Superchargers - Superchargers are used for engine-power enhancement as well, but only work off the principle of compression. They use the mechanical power from the engine to spin a screw or vane, some way to suck in and compress the air into the engine. General Pumps - Pumps are another very popular turbomachine. Although there are very many different types of pumps, they all do the same thing. Pumps are used to move fluids around using some sort of mechanical power, from electric motors to full size diesel engines. Pumps have thousands of uses, and are the true basis to turbomachinery (Škorpík, 2017). Air compressors - Air compressors are another very popular turbomachine. They work on the principle of compression by sucking in and compressing air into a holding tank. Air compressors are one of the most basic turbomachines. Fans - Fans are the most general type of turbomachines. Aerospace Gas turbines - Aerospace gas turbines, more commonly known as jet engines, are the most common gas turbines. Turbopumps - Rocket engines require very high propellant pressures and mass flow rates, meaning their pumps require a lot of power. One of the most common solutions to this issue is to use a turbopump that extracts energy from an energetic fluid flow. The source of this energetic fluid flow could be one or a combination of many things, including the decomposition of hydrogen peroxide, the combustion of a portion of the propellants, or even the heating of cryogenic propellants run through coolant jackets in the combustion chamber's walls. Partial list of turbomachine topics Many types of dynamic continuous flow turbomachinery exist. Below is a partial list of these types. What is notable about these turbomachines is that the same fundamentals apply to all. Certainly there are significant differences between these machines and between the types of analysis that are typically applied to specific cases. This does not negate the fact that they are unified by the same underlying physics of fluid dynamics, gas dynamics, aerodynamics, hydrodynamics, and thermodynamics. Axial compressor Axial fan Centrifugal compressor Centrifugal fan Centrifugal pump Centrifugal type supercharger Exoskeletal engine Francis turbine Gas turbine Industrial fans Jet engine Mechanical fan Mixed flow compressor Radial turbine Steam turbine Turbocharger Turboexpander Turbofans Turbojet Turboprop Turbopump Turboshaft Turbines Water turbine See also Blade solidity Secondary flow in turbomachinery Slip factor Three-dimensional losses and correlation in turbomachinery References Sources S. M. Yahya. "Turbines Compressors and Fans". 1987. McGraw Hill. Nagpurwala, Q. (n.d.). Steam Turbines. Retrieved April 10, 2017, from http://164.100.133.129:81/eCONTENT/Uploads/13-Steam%20Turbines%20%5BCompatibility%20Mode%5D.pdf Soares, C. M. (n.d.). GAS TURBINES IN SIMPLE CYCLE & COMBINED CYCLE APPLICATIONS. 1-72. 
Retrieved April 10, 2017, from https://www.netl.doe.gov/File%20Library/Research/Coal/energy%20systems/turbines/handbook/1-1.pdf Perlman, U. H. (2016, December 2). Hydroelectric power: How it works. Retrieved April 10, 2017, from https://water.usgs.gov/edu/hyhowworks.html Škorpík, J. (2017, January 1). Lopatkový stroj-English version. Retrieved April 9, 2017, from http://www.transformacni-technologie.cz/en_11.html Kayadelen, H. (2013). Marine Gas Turbines. 7th International Advanced Technologies Symposium. Retrieved April 15, 2017. External links Hydrodynamics of Pumps Ctrend website to calculate the head of centrifugal compressor online Mechanical engineering Gas technologies
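As a supplement to the article above, the sketch below turns Euler's pump and turbine equation and two of the dimensionless ratios listed earlier (the pressure/head coefficient ψ and the flow coefficient φ) into a small worked example for a centrifugal pump stage. The numbers are arbitrary placeholders, and the coefficient definitions follow one common textbook convention (ψ = gH/U², φ = Cm/U); other conventions differ by constant factors, so treat this only as an illustrative sketch.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def euler_specific_work(U1, Cu1, U2, Cu2):
    """Euler's pump/turbine equation: specific work exchanged with the fluid
    (J/kg), positive when the rotor does work on the fluid (pump/compressor)."""
    return U2 * Cu2 - U1 * Cu1

def head_coefficient(head_m, U2):
    """Head (pressure) coefficient psi = g*H / U2^2 (one common convention)."""
    return G * head_m / U2**2

def flow_coefficient(Cm2, U2):
    """Flow coefficient phi = Cm / U at the impeller exit."""
    return Cm2 / U2

if __name__ == "__main__":
    # Placeholder centrifugal pump stage: 0.30 m exit diameter at 1450 rpm.
    rpm, D2 = 1450.0, 0.30
    U2 = math.pi * D2 * rpm / 60.0            # blade speed at impeller exit, m/s
    Cu2, Cm2 = 0.6 * U2, 0.12 * U2            # assumed exit swirl and meridional velocity
    w = euler_specific_work(0.0, 0.0, U2, Cu2)  # no inlet swirl assumed
    H = w / G                                  # ideal (Euler) head, m
    print(f"U2 = {U2:.1f} m/s, Euler work = {w:.0f} J/kg, ideal head = {H:.1f} m")
    print(f"psi = {head_coefficient(H, U2):.2f}, phi = {flow_coefficient(Cm2, U2):.2f}")
```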
Turbomachinery
[ "Physics", "Chemistry", "Engineering" ]
3,012
[ "Chemical equipment", "Turbomachinery", "Applied and interdisciplinary physics", "Mechanical engineering" ]
1,999,679
https://en.wikipedia.org/wiki/Index%20of%20meteorology%20articles
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena) A advection aeroacoustics aerobiology aerography (meteorology) aerology air parcel (in meteorology) air quality index (AQI) airshed (in meteorology) American Geophysical Union (AGU) American Meteorological Society (AMS) anabatic wind anemometer annular hurricane anticyclone (in meteorology) apparent wind Atlantic Oceanographic and Meteorological Laboratory (AOML) Atlantic hurricane season atmometer atmosphere Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (atmospheric boundary layer [ABL]) planetary boundary layer (PBL) atmospheric chemistry atmospheric circulation atmospheric convection atmospheric dispersion modeling atmospheric electricity atmospheric icing atmospheric physics atmospheric pressure atmospheric sciences atmospheric stratification atmospheric thermodynamics atmospheric window (see under Threats) B baller lightning balloon (aircraft) baroclinity barotropity barometer ("to measure atmospheric pressure") berg wind biometeorology blizzard bomb (meteorology) buoyancy Bureau of Meteorology (in Australia) C Canada Weather Extremes Canadian Hurricane Centre (CHC) Cape Verde-type hurricane capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5) carbon cycle carbon fixation carbon flux carbon monoxide (see under Atmospheric presence) ceiling balloon ("to determine the height of the base of clouds above ground level") ceilometer ("to determine the height of a cloud base") celestial coordinate system celestial equator celestial horizon (rational horizon) celestial navigation (astronavigation) celestial pole Celsius Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US) Center for the Study of Carbon Dioxide and Global Change (based in Arizona in the US) (Central America Hurricane of 1857: see) SS Central America (Ship of Gold) Central Florida Tornado of February 2007 Certified Consulting Meteorologist chaos theory (see "butterfly effect" under Chaotic dynamics) (Chapman cycle: see) ozone-oxygen cycle chemtrail theory Chicago Climate Exchange (CCX) chinook wind (see "inversion smog" under Chinooks and health) Henry Helm Clayton clear-air turbulence (CAT) climate climate change Climate Diagnostics Center (in the US) climate engineering (climate forcing: see) radiative forcing Climate Group climate house climate model climate modeller Climate Monitoring and Diagnostics Laboratory (CMDL) (in the US) Climate Outreach and Information Network (COIN) (British charity) (climate parameters, forcings and feedbacks: see) parametrization (climate) Climate Prediction Center (CPC) (climate science: see) climatology climate sensitivity (climate simulation: see) climate model climate surprise (climate techno-fix: see) climate engineering (climate theory: see) Charles de Secondat, Baron de Montesquieu (see "climate theory" in paragraph 3 under Political views) (climate variability: see) climate change (climate warming: see) global warming (climate weapon: see) Weather modification (see under In the military) climateprediction.net (CPDN) (distributed computing project) climatic determinism (equatorial paradox) (see also environmental determinism) Climatic Regions of India Climatic Research Unit (at the University of East Anglia in the UK) (climatic zone: see) clime climatology clime (climatic zone) 
Clinton Foundation (see under Clinton Climate Initiative (CCI)) cloud cloud albedo ("a measure of the reflectivity of a cloud") cloud base ("the lowest altitude of the visible portion of a cloud") cloud chamber (Wilson chamber) ("for detecting ... ionizing radiation") cloud condensation nuclei (CCNs) (see under Phytoplankton role) cloud cover cloud feedback cloud forcing (see "greenhouse effect" in paragraph 2) cloud forest (cloud formation: see) nephology cloud physics cloud seeding cloud street cloud suck cloudburst (see "destruction" in paragraph 2 and see "Mumbai" in paragraph 3) CloudSat ("a NASA environmental satellite") coefficient of haze (in meteorology) cold-core low cold weather boot cold weather rule (cold weather law) (for public utility companies) (coldest place on earth: see) climate of Antarctica (see under Temperature) coldest temperature achieved on Earth Colorado low Community Climate System Model continental climate contrail controlled airspace controlled atmosphere (for agricultural storage) convection (see under Atmospheric convection) convective available potential energy (CAPE) (in meteorology) convective condensation level (CCL) convective inhibition (CIN) convective instability convective temperature (Tc) Cooperative Institute for Atmospheric Sciences and Terrestrial Applications (CIASTA) Cooperative Institute for Arctic Research Cooperative Institute for Climate and Ocean Research (CICOR) Cooperative Institute for Climate Applications and Research (CICAR) Cooperative Institute for Climate Science (CICS) Cooperative Institute for Limnology and Ecosystems Research (CILER) Cooperative Institute for Marine and Atmospheric Studies (CIMAS) Cooperative Institute for Mesoscale Meteorological Studies (CIMMS) Cooperative Institute for Meteorological Satellite Studies (CIMSS) Cooperative Institute for Precipitation Systems (CIPS) Cooperative Institute for Research in Environmental Sciences (CIRES) Cooperative Institute for Research in the Atmosphere (CIRA) corona (meteorology) COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) Cosmic Anisotropy Telescope (CAT) Cosmic Background Explorer (COBE) ("to investigate the cosmic background radiation" etc.) 
cosmic microwave background experiments cosmic microwave background radiation (CMB) (CMBR) (CBR) (MBR) cosmic noise cosmic ray (see under Lightning) Cosmochemical Periodic Table of the Elements in the Solar System cosmochemistry cumulonimbus cloud (see under Effects) (cumulonimbus with mammatus: see) mammatus cloud (cumulonimbus with pileus: see) pileus (meteorology) cumulus castellanus cloud cumulus cloud cumulus congestus cloud cumulus humilis cloud cumulus mediocris cloud (cup anemometer: see) anemometer (see under Cup anemometers) current solar income cyclogenesis cyclone cyclone furnace (a type of coal combustor) (cyclone preparedness: see) hurricane preparedness cyclonic separation (method of removing particles from an air or gas stream) D D region (in the atmosphere) Darrieus wind turbine dawn dBZ (meteorology) degree (temperature) deicing dendroclimatology ("extracting past climate information from information in trees") density altitude Denver Convergence Vorticity Zone (DCVZ) deposition (physics) (depression [meteorology]: see) low pressure area derecho (see also List of derecho events) dew dew point (dewpoint, Td) dew point depression disdrometer downwelling drizzle drought dry-bulb temperature dry line (dew point line) dry punch dry season dusk E Earth's atmosphere Earth's magnetic field Earth System Research Laboratories (ESRL) economics of global warming Emagram effect of Hurricane Katrina on New Orleans (see also Hurricane Katrina effects by region) effect of sun angle on climate Enhanced Fujita scale (EF scale) eolian processes equator (see under Equatorial seasons and climate) equilibrium level (EL) equivalent potential temperature equivalent temperature European Centre for Medium-Range Weather Forecasts (ECMWF) European Climate Change Programme (ECCP) European emission standards (for motor vehicles) European Severe Storms Laboratory (ESSL) European windstorm evaporation evaporative cooler evaporative cooling evaporite (a mineral sediment resulting from evaporation of saline water) evapotranspiration (ET) (sum of evaporation and plant transpiration) exhaust gas recirculation (EGR) (exhaust gas recycling) exosphere (layer of atmosphere) extratropical cyclone (mid-latitude cyclone) extreme weather extremes on Earth F fire whirl firestorm fog forensic meteorology free convective layer (FCL) freezing rain (front [meteorology]: see) surface weather analysis frontogenesis frontolysis frost frost creep (frost heave) frost flowers (frost castles) (ice castles) (ice ribbons) (ice blossoms) frost heaving (frost heave) frost law frost line (frost point: see) dew point (dewpoint) frostbite Fujita scale (F scale) (for measuring tornadoes) fulgurite (full lunar eclipse: see) lunar eclipse full-spectrum light funnel cloud (related to a tornado) G galactic cosmic ray (GCR) gale gale warning Galileo thermometer (Galilean thermometer) Galveston, Texas (see under Hurricane of 1900 and recovery) Galveston Hurricane of 1900 (in the US) gas balloon (see under History) gas flare (flare stack) (gas warfare: see) chemical warfare Geophysical Fluid Dynamics Laboratory (GFDL) glossary of climate change glossary of environmental science glossary of tornado terms glossary of tropical cyclone terms glossary of wildfire terms gustnado Ge-Gk geomagnetic storm (geomagnetism: see) Earth's magnetic field geospatial technology (Spatial Information Technology) Geostationary Operational Environmental Satellite (GOES) (a program of the US) geostatistics geostrophic wind Global Atmosphere Watch (GAW) Global 
Forecast System (GFS) global warming greenhouse effect greenhouse gas (GHG) growing degree day (GDD) growing season gust front H hail halo (optical phenomenon) haze heat (heat budget: see) radiation budget (heat equator: see) thermal equator (heat lightning: see) lightning heat wave heating degree day (HDD) (Heaviside layer) Kennelly–Heaviside layer (E region) (in the atmosphere) Heavy snow warning heliostat High Frequency Active Auroral Research Program (HAARP) high pressure area High Resolution Fly's Eye Cosmic Ray Detector high-altitude airship (HAA) hodograph humid continental climate humid subtropical climate (humidex) heat index (HI) humidity HurriQuake nail (for resisting hurricanes and earthquakes) (hydrologic cycle) water cycle hydrological phenomenon hydrology hydrosphere hygrometer (different from hydrometer) hypercane ("hypothetical class of hurricane") I ice Ice Accretion Indicator ice age ice storm Ice Storm Warning illuminance impact winter impluvium in situ (see under Earth and atmospheric sciences) incidental radiator India Meteorological Department Indian summer infrared (IR) radiation (see under Meteorology) insolation instrument meteorological conditions (IMG) instrumental temperature record intentional radiator International Meteorological Organization (IMO) International Temperature Scale of 1990 (ITS-90) International Terrestrial Reference System (ITRS) inversion Invest (meteorology) ion wind (ion wind) (coronal wind) ionosonde (chirpsounder) ionosphere ionospheric reflection ionospheric sounding iron cycle irradiance irradiation isobar isochore (in a thermodynamic diagram) isodrosotherm isogon (meteorology) (isogram) contour line (level set) (isarithm) isohel isohume isohyet isohypse (in topography) isotherm K katabatic wind L Laboratory for Atmospheric and Space Physics (LASP) lake effect snow (a snowsquall) (lake surge: see) storm surge land hemisphere land lighthouse landspout lapse rate Lemon technique lenticular cloud level of free convection (LFC) life zone lifted condensation level (LCL) lifted index (LI) lightning lightning detection lightning prediction system lightning rod (lightning protector) (lightning finial) lightning safety (lightning storm) thunderstorm (T-storm) (electrical storm) lightvessel (lightship) line echo wave pattern (LEWP) line source ("a source of air, noise, water contamination or electromagnetic radiation") (list of all-time high and low temperatures by state: see) U.S. 
state temperature extremes list of basic earth science topics list of Category 5 Atlantic hurricanes list of Category 5 Pacific hurricanes list of cloud types list of coastal weather stations of the United Kingdom list of countries by carbon dioxide emissions list of countries by carbon dioxide emissions per capita list of Earth observation satellites list of lighthouses and lightvessels list of meteorological phenomena list of most polluting power stations list of named tropical cyclones list of Northern Indian Ocean tropical cyclone seasons List of derecho events list of power outages list of scientific journals in earth and atmospheric sciences list of Solar Cycles (list of sunspot cycles) list of tornado-related deaths at schools list of weather instruments list of weather records Little Ice Age (LIA) Local storm report low pressure area (see same for "low-pressure cell") (lowest elevations: see) list of places on land with elevations below sea level (luminous pollution) light pollution (photopollution) lunar phase M Madden–Julian oscillation (MJO) magnetic storm (geomagnetic storm) magnetopause magnetosheath magnetosphere marine west coast climate (maritime climate) (oceanic climate) Mars Climate Orbiter Mars Radiation Environment Experiment (Martian Radiation Experiment) (MARIE) maximum parcel level (MPL) maximum sustained wind Max Planck Institute for Meteorology (MPI-M) mean radiant temperature (MRT) Mediterranean climate medium Earth orbit (MEO) (intermediate circular orbit) (ICO) megathermal (macrothermal) melting mercury (element) (see "Clean Air Act" under United States) mercury-in-glass thermometer mesopause mesoscale convective complex (MCC) mesoscale convective system (MCS) mesoscale convective vortex (MCV) mesoscale meteorology mesocyclone mesohigh mesolow mesonet mesosphere mesothermal (in climatology) mesovortex Met Office (previously Meteorological Office) (the UK's national weather service) meteorological history of Hurricane Katrina Meteorological Service of Canada (MSC) meteorology metrology Miami Tornado (of May 12, 1997) Miami tornadoes of 2003 microclimate microscale meteorology Mid-Atlantic United States flood of 2006 middle latitudes midnight millimeter cloud radar (millimeter wave cloud radar) (MMCR) misoscale meteorology mist mixed layer mixing ratio moisture molecular-scale temperature moonlight N NASA Clean Air Study NASA Earth Observatory NASA World Wind (virtual globe) National Ambient Air Quality Standards (NAAQS) (in the US) National Center for Atmospheric Research (NCAR) (in the US) National Centers for Environmental Prediction (NCEP) (in the US) National Climatic Data Center (NCDC) (in the US) National Emissions Standards for Hazardous Air Pollutants (NESHAPS) (in the US) (National Environmental Satellite, Data and Information Service: see) National Oceanic and Atmospheric Administration (NOAA) (in the US) National Geomagnetism Program (in the US) National Hurricane Center (NHC) (in the US) National Map (in the US) National Oceanic and Atmospheric Administration (NOAA) (in the US) (National Severe Storms Forecast Center [NSSFC]: renamed) Storm Prediction Center (SPC) (in the US) National Severe Storms Laboratory (NSSL) (in the US) National Snow and Ice Data Center (NSIDC) (in the US) National Solar Observatory (in the US) National Weather Association (NWA) (in the US) National Weather Center (NWC) (in the US) National Weather Service bulletin for New Orleans region (at 10:11 a.m., August 28, 2005) National Weather Service (NWS) nautical almanac 
nephology nephoscope night sky nimbus cloud nitrogen cycle (nitrogen pollution: see) eutrophication (see under Atmospheric deposition) NOAA Weather Radio All Hazards (NWR) (of the US) noctilucent cloud North Atlantic tropical cyclone North Pole numerical weather prediction O observational astronomy (see "light pollution" in places) observatory (see also list of observatories) ocean heat content (OHC) Ocean Prediction Center (OHC) occultation oceanic climate Office of Oceanic and Atmospheric Research (OAR) 1999 Oklahoma tornado outbreak orographic lift outflow boundary oxygen oxygen cycle ozone ozone depletion ozone depletion potential (ODP) ozone layer (ozonosphere layer) ozone-oxygen cycle P Pacific decadal oscillation paleoclimatology paleomagnetism paleotempestology parts-per notation photovore planetary boundary layer (PBL) pluvial lake pneumonia front polar circle polar climate polar easterlies polar high polar ice cap (polar light: see) aurora (astronomy) polar low (polar mesospheric cloud) noctilucent cloud polar mesospheric summer echoes (PMSE) polar night polar region (polar reversal) magnetic polarity reversal polar stratospheric cloud (PSC) (nacreous cloud) polar vortex Polarization (waves) (see under Polarization effects in everyday life) pole shift theory positive streamer post-glacial rebound potential evaporation potential temperature precipitation pressure gradient pressure gradient force (PGF) pyrocumulus Q Quantitative precipitation estimation Quantitative precipitation forecast Quasi-geostrophic equations R radiance radiant barrier radiant energy radiation radiation budget radiation hormesis radiation poisoning (radiation sickness) radiative cooling radiative forcing radiological weapon (radiological dispersion device [RDD]) radiosonde radius of outermost closed isobar rain rain fade (fading of signal by rain or snow) rain gauge rain sensor rain shadow rainbow rainforest rarefaction RealClimate (commentary site on climate science) RealSky (digital photographic sky atlas) relative humidity relative pressure (relief precipitation: see) orographic lift research balloon resistance thermometer (resistance temperature detector) (RTD) rime (frost) S Saffir-Simpson Hurricane Scale satellite temperature measurements (Sea Islands Hurricane) 1893 Sea Islands Hurricane sea level (sea level pressure) atmospheric pressure sea surface temperature (SST) severe weather severe weather terminology (United States) Skew-T log-P diagram sky skyglow smoke snow Solar and Heliospheric Observatory solar azimuth angle solar cell solar collector solar constant solar cycle solar eclipse solar flare (see under Hazards) solar furnace solar greenhouse (technical) solar heating solar maximum Solar Maximum Mission solar minimum solar mirror solar proton event solar radiation (solar irradiance) (solar storm) geomagnetic storm solar thermal collector solar thermal energy solar updraft tower solar variation solar wind solarium space geostrategy (astrostrategy) (geostrategy in space) Space Science and Engineering Center (SSEC) space weather (specific humidity: see) humidity (see under Specific Humidity) squall squall line (standard atmospheric pressure) atmospheric pressure (standard atmosphere) standard conditions for temperature and pressure storm storm cellar storm chasing storm drain (storm sewer) (stormwater drain) storm-scale storm surge storm tide storm track storm warning (see same for "storm watch") storm scale stormwater stratopause stratosphere Stüve diagram subarctic subarctic climate subtropical 
cyclone (see same for "subtropical depression" and for "subtropical storm") subtropics (see same for "subtropical" and for "subtropical climate") sudden ionospheric disturbance (SID) sudden stratospheric warming sun sun dog (sundog) (parhelion) sunlight sunshower sunspot (see under "Significant events") supercell surface temperature inversion surface weather analysis surface weather observation synoptic scale meteorology T teleconnection temperature temperature extremes (temperature inversion) inversion (meteorology) temperature record temperature record of the past 1000 years tephigram The Weather Channel (TWC) The Weather Network thermal equator thermodynamic temperature thermometer thunder thundersnow thunderstorm (electrical storm) TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics) TOR tornado tornado climatology tornado intensity tornado warning tornado watch tornado emergency tornadogenesis torr (symbol: Torr) (millimetre of mercury) (mmHg) Total Ozone Mapping Spectrometer (TOMS) tropical climate tropical cyclogenesis tropical cyclone (tropical storm) (typhoon) (hurricane) Tropical Cyclone Formation Alert (TCFA) tropical cyclone observation tropical cyclone prediction model tropical cyclone rainfall climatology tropical cyclone scales Tropical Ocean-Global Atmosphere program (TOGA) tropical rain belt Tropical Rainfall Measuring Mission (TRMM) Tropical Rainforest Heritage of Sumatra (in Indonesia) (Tropical Research Institute) Smithsonian Tropical Research Institute (STRI) (in Panama) Tropical upper tropospheric trough (TUTT) tropical wave (African easterly wave) tropopause troposphere Tropospheric Emission Spectrometer (TES) tropospheric ozone tsunami Tsunami PTSD Center (Tsunami Post Traumatic Stress Disorder Center) tsunami warning system typical meteorological year U U.S. state temperature extremes ultraviolet United States temperature extremes urban heat island (UHI) UV index V vapor pressure virtual temperature vorticity W waterspout water vapor weather weather forecasting weather front weather lore Weather Modification Operations and Research Board (US) Weather Prediction Center (WPC) weather radar weather satellite wet-bulb potential temperature wet-bulb temperature wind wind chill wind direction wind gradient wind profiler wind shear wind speed windcatcher Windscale fire winter storm Winter Storm Warning Winter Weather Advisory World Asthma Day World Climate Change Conference, Moscow World Climate Conference World Climate Programme World Climate Report World Climate Research Programme World Meteorological Organization (WMO) World Solar Challenge Z Zonal wavenumber Meteorology topics ca:Fenomen meteorològic de:Portal:Wetter und Klima/Themenliste fr:Glossaire de la météorologie id:Fenomena meteorologi nl:Weer en klimaat van A tot Z nn:Vêrfenomen
Index of meteorology articles
[ "Physics" ]
4,629
[ "Meteorology", "Applied and interdisciplinary physics" ]
32,822,023
https://en.wikipedia.org/wiki/Headworks
Headworks is a civil engineering term for any structure at the head or diversion point of a waterway. It is smaller than a barrage and is used to divert water from a river into a canal, or from a large canal into a smaller canal. An example is the Horseshoe Falls at the start of the Llangollen Canal. Historically, the phrase "headworks" derives from the traditional practice of diverting water at the start of an irrigation network, these diversion structures being located at the "head of the works". See also List of barrages and headworks in Pakistan References Civil engineering Irrigation
Headworks
[ "Engineering" ]
119
[ "Construction", "Civil engineering", "Civil engineering stubs" ]
32,822,562
https://en.wikipedia.org/wiki/Fukushima%20nuclear%20accident%20cleanup
The Fukushima disaster cleanup is an ongoing attempt to limit radioactive contamination from the three nuclear reactors involved in the Fukushima Daiichi nuclear disaster that followed the earthquake and tsunami on 11 March 2011. The affected reactors were adjacent to one another, and accident management was made much more difficult because of the number of simultaneous hazards concentrated in a small area. Failure of emergency power following the tsunami resulted in loss of coolant from each reactor, hydrogen explosions damaging the reactor buildings, and water draining from open-air spent fuel pools. Plant workers were put in the position of trying to cope simultaneously with core meltdowns at three reactors and exposed fuel pools at three units. Automated cooling systems were installed within three months of the accident. A fabric cover was built to protect the buildings from storms and heavy rainfall. New detectors were installed at the plant to track emissions of xenon gas, which can be a sign of nuclear fission. Filters were installed to keep contaminants from escaping the plant into the surrounding area or atmosphere. Cement has been laid on the seabed near the plant to keep contaminants from accidentally entering the ocean. Michio Aoyama, a scientist at Fukushima University's Institute of Environmental Radioactivity, estimated that the meltdowns and explosions released 18,000 terabecquerels (TBq) of caesium-137, mostly into the Pacific Ocean. He also estimated that two years after the accident, the stricken plant was still releasing 30 gigabecquerels (30 GBq, or approximately 0.8 curies) of caesium-137 and the same amount (in terms of activity, not mass) of strontium-90 into the ocean daily. For comparison, the LD50 of caesium-137 in mice (through acute radiation syndrome) has been reported at 245 μg/kg body weight, whereas experiments in the 1970s yielded a lethal dose in dogs of 44 μg/kg body weight; scaled to an adult human's body weight, these per-kilogram figures would imply correspondingly larger absolute doses. In September 2013, it was reported that the level of strontium-90 detected in a drainage ditch located near a water storage tank, from which around 300 tons of water was found to have leaked, was believed to have exceeded the threshold set by the government. Efforts to control the flow of contaminated water have included trying to isolate the plant behind an ice wall of frozen soil, which has had limited success. Decommissioning the plant was estimated to cost tens of billions of dollars in 2013/2014 and last 30 to 40 years. In November 2016, Japan's trade ministry put the cost of the cleanup of radioactive contamination and compensation for victims at 20 trillion yen. Tokyo Electric Power Company (TEPCO) plans to remove the remaining nuclear fuel material from the plant. TEPCO completed the removal of 1,535 fuel assemblies from the unit 4 spent fuel pool in December 2014 and 566 fuel assemblies from the unit 3 spent fuel pool in February 2021. While radioactive particles were found to have contaminated rice harvested near Fukushima City in the autumn of 2011, fears of contamination in the soil have receded as government measures to protect the food supply have appeared to be successful. Studies have shown that soil contamination in most areas of Fukushima was not serious. In 2018, it was reported that contaminated water was still flowing into the Pacific Ocean, but at a diminished rate of 2 GBq per day.
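As a rough cross-check on the activity figures quoted above, becquerels can be converted to curies using the fixed definition 1 Ci = 3.7 × 10^10 Bq. The short Python sketch below is an illustrative aside rather than part of the cited estimates; it reproduces the "approximately 0.8 curie" equivalent for the 30 GBq daily release and gives the corresponding figure for the 18,000 TBq total release.

```python
# Cross-check of the activity figures quoted above.
# The curie is defined as exactly 3.7e10 becquerels (disintegrations per second).
BQ_PER_CURIE = 3.7e10

def bq_to_curies(activity_bq: float) -> float:
    """Convert an activity in becquerels to curies."""
    return activity_bq / BQ_PER_CURIE

daily_release_bq = 30e9        # 30 GBq of caesium-137 per day, two years after the accident
total_release_bq = 18_000e12   # 18,000 TBq of caesium-137 from the meltdowns and explosions

print(f"30 GBq     ~ {bq_to_curies(daily_release_bq):.2f} Ci")   # ~0.81 Ci, matching the ~0.8 Ci above
print(f"18,000 TBq ~ {bq_to_curies(total_release_bq):,.0f} Ci")  # ~486,486 Ci
```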
Overview At the time of the initial event, 50 TEPCO employees remained onsite in the immediate aftermath to work to stabilize the plant and begin cleanup. Initially, TEPCO did not put forward a strategy to regain control of the situation in the reactors. Helmut Hirsch, a German physicist and nuclear expert, said "they are improvising with tools that were not intended for this type of situation". On 17 April 2011, however, TEPCO appeared to put forward the broad basis of a plan that included: (1) reaching "cold shutdown in about six to nine months;" (2) "restoring stable cooling to the reactors and spent fuel pools in about three months;" (3) putting "special covers" on Units 1, 3, and 4 starting in June; (4) installing "additional storage containers for the radioactive water that has been pooling in the turbine basements and outside trenches;" (5) using radio-controlled equipment to clean up the site; and (6) using silt fences to limit ocean contamination. Previously, TEPCO had publicly committed to installing new emergency generators 20 m above sea level, twice the height of the generators destroyed by the tsunami on 11 March. Toshiba and Hitachi had both proposed plans for shuttering the facility. "Cold shutdown" was accomplished on 11 December 2011. From that point active cooling was no longer needed, but water injection was still required due to large water leaks. Long-term plans for Units 5 and 6 have not been announced, "but they too may need to be decommissioned". On 5 May 2011, workers entered reactor buildings for the first time since the accident. The workers began to install air filtration systems to clean the air of radioactive materials, to allow additional workers to install water cooling systems. In 2017, TEPCO announced that remote-controlled robots sent into the destroyed Unit 3 reactor building had found the reactor's melted uranium fuel, which had burned through the floor of the reactor vessel and settled in clumps on the concrete floor below. Scope of cleanup Japanese reactor maker Toshiba said it could decommission the earthquake-damaged Fukushima nuclear power plant in about 10 years, a third quicker than the American Three Mile Island plant. As a comparison, at Three Mile Island the vessel of the partially melted core was first opened 11 years after the accident, with cleanup activities taking several more years. TEPCO announced it restored the automated cooling systems in the damaged reactors in about three months, and had the reactors put into cold shutdown status in six months. An early estimate of the cost was cited by the Japanese Prime Minister at the time, Yoshihiko Noda (野田 佳彦); this estimate was made before the scope of the problem was known, however. It seems that the contamination was less than feared. No strontium is detectable in the soil, and though the crops of the year of the disaster were contaminated, the crops produced by the area now are safe for human consumption. In 2016, Japan's Ministry of Economy, Trade and Industry estimated the total cost of dealing with the Fukushima disaster at 20 trillion yen, almost twice its previous estimate. Compensation for victims of the disaster was expected to rise, as were the costs of decontamination, of interim storage of radioactive material, and of decommissioning the reactors. Working conditions at the plant There has been concern that the plant would be dangerous for workers.
Two workers suffered skin burns from radiation, but no serious injuries or fatalities have been documented to have been caused by radiation at Fukushima Dai-ichi. Workers in dorms exposed to radiation Two shelters for people working at the Fukushima site were not listed as part of the radiation management zones although radiation levels in the shelters exceeded the legal limits. The consequence was that the workers did not get paid the extra "danger allowance" that was paid to workers in these "radiation management zones". The shelters were constructed by Toshiba Corporation and the Kajima Corporation at a place some 2 kilometers west of the damaged reactors, just outside the plant compound, but near reactors 1 to 4. The shelters were built after the shelters at the plant compound became overcrowded. At 7 October 2011, radiation levels were between 2 and 16 microsieverts per hour in the Toshiba building, and 2 to 8.5 in the Kajima dorm. The Industrial Safety and Health Law on the prevention of health damage through ionizing radiation had set the limit for accumulated radiation dosage in radiation management zones at 1.3 millisieverts over three months, so the maximum level is 2.6 microsieverts/hour. In both dorms, the radiation levels were higher. These doses are, however, well below the level to affect human health. According to the law, the "business operator" is responsible for "managing radiation dosage and the prevention of contamination", Toshiba and Kajima said that TEPCO was responsible, but a TEPCO official commented, "From the perspective of protecting workers from radiation, the business operators (that constructed the shelters) are managing radiation dosage and the prevention of contamination", in this way suggesting that Toshiba and Kajima were to take the care of the zone management. Preventing hydrogen explosions On 26 September 2011, after the discovery of hydrogen in a pipe leading to the containment vessel of reactor no.1, NISA instructed TEPCO to check whether hydrogen was building up in reactor no. 2 and 3 as well. TEPCO announced that measurements of hydrogen would be taken in reactor no. 1, before any nitrogen was injected to prevent explosions. When hydrogen would be detected at the other reactors, nitrogen injections would follow. After the discovery of hydrogen concentrations between 61 and 63 percent in pipes of the containment of reactor no. 1, nitrogen injections were started on 8 October. On 10 October, TEPCO announced that the concentrations were, at that moment, low enough to prevent explosions, and even if the concentration would rise again, it would not exceed 4 percent, the lowest level that would pose the risk of an explosion. On the evening of 9 October, two holes were drilled into a pipe to install a filter for radioactive substances inside the containment vessel; this was 2 weeks behind the schedule TEPCO had set for itself. Investigations inside the reactors On 19 January 2012, the interior of the primary containment vessel of reactor 2 was inspected with an industrial endoscope. This device, 8.5 millimeters in diameter, was equipped with a 360 degrees-view camera and a thermometer to measure the temperature and the cooling water inside, in an attempt to calibrate the existing temperature measurements that could have an error margin of 20 degrees. The device was brought in by a hole at 2.5 meters above the floor where the vessel is located. The procedure lasted 70 minutes. The photos showed parts of the walls and pipes inside the containment vessel. 
But they were unclear and blurred, most likely due to water vapor and the radiation inside. According to TEPCO the photos showed no serious damage. The temperature measured inside was 44.7 degrees Celsius, and did not differ much from the 42.6 degrees measured outside the vessel. Inspections of the suppression chambers of reactors no. 2 and 3 On 14 March 2012, for the first time since the accident, six workers were sent into the basements of reactors no. 2 and 3 to examine the suppression chambers. Behind the door of the suppression chamber in the no. 2 reactor building, 160 millisieverts/hour was measured. The door to the suppression chamber in the no. 3 reactor building was damaged and could not be opened. In front of this door, the radiation level measurement was 75 millisieverts/hour. For reactors to be decommissioned, access to the suppression chambers is vital for conducting repairs to the containment structures. According to TEPCO, this work should be done with robots, because these places with high levels of radiation could be hostile to humans. TEPCO released some video footage of the work at the suppression chambers of the No. 2 and 3 reactors. On 26 and 27 March 2012, the inside of the containment vessel of reactor 2 was inspected with a 20-meter-long endoscope. With this, a dosimeter was brought into the vessel to measure the radiation levels inside. At the bottom of the primary containment structure, 60 centimeters of water was found, instead of the 3 meters expected. The radiation level measured was 72.9 sieverts per hour. Because of this, the endoscope could only function for a few hours. For reactors number 1 and 3, no endoscopic survey was planned at that time, because the actual radiation levels were too high for humans. Management of contaminated water Continued cooling of the melted reactor cores is required in order to remove excess heat. Due to damage to the integrity of the reactor vessels, radioactive water accumulates inside the reactor and turbine buildings. To decontaminate the contaminated water, TEPCO installed radioactive-water treatment systems. The Japanese government had initially requested the assistance of the Russian floating water decontamination plant Landysh to process the radioactive water from the damaged reactors, but negotiations with the Russian government were an extremely slow process and it is unclear if the plant was ever sent to Fukushima. Landysh was built by Russia with funding from Japan to process liquid wastes produced during the decommissioning of nuclear submarines. As of early September 2011, the operating rate of the filtering system exceeded the target of 90 percent for the first time. 85,000 tons of water were decontaminated by 11 September, with over 100,000 tons of waste water remaining to be treated at the time. The nuclear waste generated by the filters had already filled almost 70 percent of the 800 cubic meters of storage space available at the time. TEPCO had to figure out how to cool the reactors with less than 15 tons of water per day in order to reduce the growth of waste water and nuclear waste to more manageable levels. Installation of circulating water cooling system In order to remove the decay heat of the severely damaged cores of Units 1–3, TEPCO injected cooling water into the reactors.
As the reactors appear to have holes around the bottom, the water dissolved the water-soluble fission products, which then accumulated in the basement of the turbine building (see the adjacent diagram) through any leaks from the water-injected reactor buildings. Since the accumulated radioactive water was a risk, TEPCO tried to transfer it. As the accumulated water in the basement of the turbine building of Units 2 and 3 was radioactive, TEPCO needed to remove it. They had initially planned to pump the water to the condenser (see diagram). TEPCO abandoned that plan after discovering the condensers on both units were already full of water. Pumps capable of processing 10–25 tons of water per hour were used to transfer condenser water into other storage tanks, freeing up condenser storage for the water in the basements. Since both the storage tanks and the condensers were nearly full, TEPCO also considered using floating tanker ships as a temporary storage location for the radioactive water. Regardless of the availability of offshore storage for radioactive-contaminated water, TEPCO discharged 11,500 tons of its least contaminated water (which was still approximately 100 times the legal limit for radioactivity) into the sea on 5 April in order to free up storage space. At the same time, on 5 April, TEPCO began pumping water from the condensers of units 1–3 to their respective condensation storage tanks to free room for the trench water (see below). Removal of accumulated water in seawater piping trench The Fukushima Daiichi NPS has several seawater piping trenches that were originally designed to house pipes and cables running from the Units 2–4 turbine buildings to their seaside, which do not directly connect to the sea. Inside the trench, radioactive contaminated water has been accumulating since the accident. Due to the risk of soil or ocean contamination from these trenches, TEPCO has been trying to remove the accumulated water in the trenches by pumping it back into the turbine buildings, as well as backfilling the trenches to reduce or prevent further incursion of contaminated water. Groundwater contamination On 5 July 2013, TEPCO found 9 kBq/L of 134Cs and 18 kBq/L of 137Cs in a sample taken from a monitoring well close to the coastline. Compared with samples taken three days earlier, the levels were 90 times higher. The cause was unknown. The monitoring well is situated close to another monitoring well that had previously leaked radioactive water into the sea in April 2011. A sample of groundwater from another well situated about 100 meters south of the first well showed that the radioactivity had risen by 18 times over the course of 4 days, with 1.7 kBq/L of strontium and other radioactive substances. A day later the readings in the first well were 11 kBq/L of 134Cs and 22 kBq/L of 137Cs, 111 times and 105 times greater than the samples of 5 July. TEPCO did not know the reasons for the higher readings, but the monitoring was to be intensified. More than a month after the groundwater contamination was discovered, TEPCO started to contain the radioactive groundwater. They assumed that the radioactivity had escaped early in the beginning of the disaster in 2011, but NRA experts had serious doubts about their assumption. According to them, other sources could not be excluded. Numerous pipes were running everywhere on the reactor grounds to cool the reactors and decontaminate the water used, and leaks could be anywhere. 
TEPCO's solution resulted in redirection of the groundwater flows, which could have spread the radioactive contamination further. TEPCO also had plans for pumping groundwater. At that time the turbine buildings of units 2 and 3 contained 5000 and 6000 cubic meters of radioactive water respectively. With wells in contact with the turbine buildings, this could spread the radioactivity into the ground. The NRA announced that it would form a task force to find the leaks and to block the flow of the groundwater to the coastline, because the NRA suspected that the groundwater was leaking into the sea. Timeline of contaminated water treatment 2011 On 27 March TEPCO announced that radioactive water had accumulated in the basement of the Unit 2 turbine building. On 28 March The Japanese Nuclear Safety Commission advised TEPCO to take all possible measures to avoid the accumulated water in the Unit 2 turbine building leaking into the ground and the sea (hereinafter called "the JNSC advice"). On 2 April TEPCO announced the outflow of fluid containing radioactive materials to the ocean from areas near the intake channel of Unit 2. The fluid source was a 20 cm crack on the concrete lateral of the pit that appeared to have been created by the earthquake. TEPCO attempted to inject fresh concrete, polymeric water absorbent, sawdust, and shredded newspapers into the crack; this approach failed to slow the leak. After an investigation of the water flow, TEPCO began to inject sodium silicate on 5 April, and the outflow was stopped on 6 April. The total amount and radioactivity of the outflow from the crack was estimated to be approximately 520 m3 and approximately 4.7 PBq respectively. On 17 April TEPCO announced the Roadmap towards Restoration from the Accident at Fukushima Daiichi Nuclear Power Station. On 27 April In order to prevent the outflow of the highly radioactive water at the turbine building of Unit 2, the water was transferred to the Centralized Radiation Waste Treatment Facility since 19 April. TEPCO planned to install facilities for processing the stored water and reusing treated water to inject it into the reactors. On 11 May TEPCO investigated possible leakage of radioactive water to the outside from around the intake canal of Unit 3 in response to employees' report of water flowing into the pit via power cable pipe lines. On 23 May Nuclear and Industrial Safety Agency began to use the term "Contaminated Water" as the water with high concentration of radioactive materials. On 17 June TEPCO began the operation of the cesium adsorption apparatus (Kurion) and the decontamination apparatus (AREVA). On 17 August TEPCO began the (test) operation of SARRY, which is the second cesium adsorption apparatus (TOSHIBA). On 28 August 2 TEPCO workers at the plant were exposed to radiation by mistake while they were replacing parts of the contaminated water processing system. The next Wednesday, 31 August, two other workers were sprayed with highly contaminated water when the water splashed from a container with a leaking valve that did not close. It was found that they were exposed to 0.16 and 0.14 millisieverts. The last man wore a raincoat. No immediate symptoms were found. On 21 December TEPCO announced Mid-and-long-Term Roadmap towards the Decommissioning of Fukushima Daiichi Nuclear Power Units 1–4. 2012 On 5 April A leaking pipe was found at 1.00 AM. The leakage stopped an hour after the valves were closed. 12,000 liters of water with high levels of radioactive strontium were lost. 
According to TEPCO, much of this water escaped through a nearby sewer system into the ocean. Investigations were expected to reveal how much water was lost into the ocean, and how the joint could fail. A similar leakage at the same facility happened on 26 March 2012. On 19 September the Nuclear Regulation Authority (NRA) was established. 2013 On 30 March TEPCO began the operation of ALPS, the multi-nuclide removal equipment. On 22 July, in announcing the monitoring results for seawater and groundwater, TEPCO admitted that contaminated groundwater had been leaking into the ocean since March 2011. On 27 July TEPCO announced that extremely high levels of tritium and cesium were found in a pit containing about 5000 cubic meters of water on the sea side of the Unit 2 reactor building. 8.7 MBq/liter of tritium and 2.35 GBq/liter of cesium were measured. The NRA was concerned that leaks from the pit could release high tritium levels into the sea and that there was still water flowing from the reactor into the turbine building and into the pit. TEPCO believed that this pollution had been there since the first days of the accident in 2011 and had stayed there. Nevertheless, TEPCO would monitor the site for leaks and seal the soil around the pit. On 30 May the Government of Japan decided on a policy to prevent groundwater from flowing into the reactor buildings. A frozen soil wall (Land-side Impermeable Wall) was scheduled for introduction to block the flow of groundwater and prevent its mixing with contaminated water. On 19 August contaminated water leakage from a flange-type tank was found in the H4 area. The incident was given a provisional rating of Level 3 on the eight-level INES by the NRA. In response to this incident, the NRA recommended that TEPCO should replace the flange-type tank, which was prone to leaking water, with a welded-type tank. On 28 August a subcontractor employee was contaminated on his face, head and chest while transferring water from the damaged tank. After decontamination, 5,000 cpm were still measured on his head; the readings from prior to decontamination were not released. The man was released, but ordered to have a whole-body radiation count later. On 2 September it was reported that radiation near another tank was measured at 1.8 Sv/h, 18 times higher than previously thought. TEPCO had initially recorded radiation at about 100 mSv/h, but later admitted that that was because the equipment they were using could only read measurements up to that level. The latest reading came from a more advanced device capable of measuring higher levels. The buildup of water at the site was reported to be close to becoming unmanageable, and experts said that TEPCO would soon be left with no choice but to release the water into the ocean or evaporate it. On 3 September the Nuclear Emergency Response Headquarters published "the Government's Decision on Addressing the Contaminated Water Issue at TEPCO's Fukushima Daiichi NPS". On 12 September contaminated water leakage from storage tanks was found in the H4 area. Suggestions of dumping cooling water In September 2019, the contaminated cooling water had almost reached storage capacity. Japan's environment minister Yoshiaki Harada suggested that there was only one recourse: "release it into the ocean and dilute it... there are no other options." A day later, Yoshiaki Harada was removed from his post, after protests. His successor Shinjiro Koizumi apologized to the fishermen in Fukushima at a meeting in Iwaki City.
The new minister promised to take a close look at the facts and to push for reconstruction. In 2020, the volume of stored contaminated water exceeded a million tons, held in large tanks on the grounds of the plant. It was predicted that in 2022 the storage capacity could be exceeded. Therefore, a proposal was made in spring 2020 to start discharging the cooling water into the ocean. Hiroshi Kishi, the president of JF Zengyoren, an umbrella body of fisheries cooperatives, strongly opposed this proposal at a meeting with Japanese government representatives. According to Kishi, any release of cooling water could prompt other countries to reinforce restrictions on imports of Japanese fishery products, reversing a recent trend toward easing. Radioactive waste Cooling the reactors with recirculated and decontaminated water from the basements proved successful, but as a consequence, radioactive waste was accumulating in the temporary storage facility at the plant. TEPCO decided in the first week of October to use the "Sally" decontamination system built by Toshiba Corporation and keep the Kurion/Areva system as backup. On 27 September, after three months of operation, some 4,700 drums with radioactive waste had accumulated at the plant. The Kurion and Sally systems both used zeolites to concentrate cesium. After the zeolite was saturated, the vessels with the zeolite were designated as nuclear waste. By that time, 210 Kurion-made vessels with a total volume of 307 cubic meters, each vessel measuring 0.9 meters in diameter and 2.3 meters in height, had accumulated at the plant. The Areva filters used sand to absorb radioactive materials, and chemicals were used to reactivate the filters. In this way, 581 cubic meters of highly contaminated sludge were produced. According to Professor Akio Koyama of the Kyoto University Research Reactor Institute, the highly contaminated water fed into the treatment systems was believed to contain about 10 gigabecquerels per liter, and if this activity is concentrated into the sludge and zeolites, the concentration could increase 10,000-fold. Such concentrations could not be dealt with using conventional systems. Spent fuel pools On 16 August 2011, TEPCO announced the installation of desalination equipment in the spent fuel pools of reactors 2, 3, and 4. These pools had been cooled with seawater for some time, and TEPCO feared the salt would corrode the stainless steel pipes and pool wall liners. The Unit 4 spent fuel pool was the first to have the equipment installed. The spent fuel pools of reactors 2 and 3 came next. TEPCO expected to achieve removal of 96% of the salt in the spent fuel pools within two months. Unit 4 spent fuel removal On 22 December 2014, TEPCO crews completed the removal of all fuel assemblies from the spent fuel pool of reactor 4. 1331 spent fuel assemblies were moved to the ground-level common spent fuel pool, and 204 unused fuel assemblies were moved to the spent fuel pool of reactor 6 (Unit 4 was out of service for refueling at the time of the 2011 accident, so the spent fuel pool contained a number of unused new fuel assemblies). Unit 3 spent fuel removal On 15 April 2019, TEPCO began the process of removing the fuel assemblies from the pool of Unit 3. On 28 February 2021, the removal of all spent fuel from the fuel pool of reactor 3 was completed. On top of the reactor building a fuel-handling crane had been built, which was used to remove the 566 fuel assemblies from the pool. Unit 2 spent fuel removal 615 fuel assemblies remain in the spent fuel pool.
Removal operations have yet to begin; operations may start in the fiscal year of 2025 and end in the fiscal year 2027. Unit 1 spent fuel removal 392 fuel assemblies lay in the spent fuel pool. Removal operations have yet to begin. The operations may start in 2027. Debris removal On 10 April 2011, TEPCO began using remote-controlled, unmanned heavy equipment to remove debris from around reactors 1–4. The debris and rubble, caused by hydrogen explosions at reactors 1 and 3, was impeding recovery operations both by being in the way and emitting high radioactivity. The debris will be placed into containers and kept at the plant. Proposed building protections Because the monsoon season begins in June in Japan, it became urgent to protect the damaged reactor buildings from storms, typhoons, and heavy rainfall. As a short-term solution, TEPCO envisaged to apply a light cover on the remaining structures above the damaged reactors. As of mid-June 2011, TEPCO released its plan to use automated cranes to move structures into place over the reactor. This strategy is an attempt to keep as many people away from the reactors as possible, while still covering the damaged reactors. Proposed sarcophagus On 18 March 2011, Reuters reported that Hidehiko Nishiyama, Japan's nuclear agency spokesman when asked about burying the reactors in sand and concrete, said: "That solution is in the back of our minds, but we are focused on cooling the reactors down." Considered a last-ditch effort since it would not provide cooling, such a plan would require massive reinforcement under the floor, as in the Chernobyl Nuclear Power Plant sarcophagus. Scrapping reactors Daiichi 1–4 On 7 September 2011, TEPCO president Toshio Nishizawa said that the 4 damaged reactors will be scrapped. This announcement came at a session of the Fukushima Prefectural Assembly, which was investigating the accident at the plant. Whether the six other remaining reactors (Daiichi 5, 6, Daini 1, 2, 3, 4) should be abolished too would be decided based on the opinions of local municipalities. On 28 October 2011, the Japanese Atomic Energy Commission presented a timetable in a draft report, titled "how to scrap the Fukushima reactors". It stated that within 10 years, a start should be made with the retrieval of the melted fuel within the reactors. First, the containment vessels of reactors 1, 2 and 3 should be repaired to prevent radiation releases, then all should be filled with water. Decommissioning would take more than 30 years, because the pressure vessels of the reactor vessels are damaged. After the accident at Three Mile Island in 1979, some 70 percent of the fuel rods had melted. There, the retrieval of the fuel was started in 1985, and completed in 1990. The work at Fukushima was expected to take significantly longer because of the far greater damage and the fact that 4 reactors would need to be decommissioned all at the same time. After discussions were started in August 2011, on 9 November 2011, a panel of experts of Japan's Atomic Energy Commission completed a schedule for scrapping the damaged reactors. The panel's conclusions were: The scrapping will take 30 years or longer. First, the containment vessels needed to be repaired, then filled with water to block radiation. The reactors should be in a state of stable cold shutdown. Three years later, a start would be made to take all spent fuel from the 4 damaged reactors to a pool within the compound. Within 10 years, the removal of the melted fuel inside the reactors could begin. 
This scheme was partly based on the experience gained from the 1979 Three Mile Island accident. In Fukushima, however, with three meltdowns at one site, the damage was much more extensive. It could take 30 years or more to remove the nuclear fuel, dismantle the reactors, and remove all the buildings. Research institutions all over the world were asked to participate in the construction of a research site to examine the removal of fuel and other nuclear wastes. The official publication of the report was planned for the end of 2011. Protection systems installed Since the disaster, TEPCO has installed sensors, a fabric cover over the reactors and additional filters to reduce the emission of contaminants. Sensors for xenon and temperature changes to detect critical reactions After the detection of radioactive xenon gas in the containment vessel of the No. 2 reactor on 1 and 2 November 2011, TEPCO was not able to determine whether this was a sustained fission process or only spontaneous fission. Therefore, TEPCO installed detection devices for radioactive xenon to single out any occurrence of nuclear criticality. Next to this TEPCO installed temperature sensors to detect temperature changes in the reactors, another indicator of possible critical fission reactions. New filters On 20 September 2011, the Japanese government and TEPCO announced the installation of new filters to reduce the amount of radioactive substances released into the air. In the last week of September 2011 these filters were to be installed at reactors 1, 2 and 3. Gases out of the reactors would be decontaminated before they would be released into the air. By mid October, the construction of the polyester shield over the No.1 reactor should be completed. In the first half of September, the amount of radioactive substances released from the plant was about 200 megabecquerel per hour, according to TEPCO, that was about one four-millionths of the level of the initial stages of the accident in March 2011. Fabric cover over Unit 1 An effort has been undertaken to fit the three damaged reactor buildings with fabric covers and filters to limit radioactive contamination release. On 6 April 2011, sources told Kyodo News that a major construction firm was studying the idea, and that construction wouldn't "start until June". The plan had been criticized for potentially having "limited effects in blocking the release of radioactive substances into the environment". On 14 May 2011, TEPCO announced that it had begun to clear debris to create a space to install a cover over the building of reactor 1. By 13 October 2011, the roof had been completed. Metal cover over Unit 3 In June 2016, preparation work began to install a metal cover over the Unit 3 reactor building. In conjunction with this, a crane was to be installed to assist with the removal of the fuel rods from the storage pool. After inspection and cleaning, the removed fuel is expected to be stored in the site's communal storage facility. By February 2018 the dome-shaped roof had been completed in preparation of the removal of the fuel rods. Cleanup of neighboring areas Significant efforts are being taken to clean up radioactive material that escaped the plant. This effort combines washing down buildings and scraping away topsoil. It has been hampered by the volume of material to be removed and the lack of adequate storage facilities. There is also a concern that washing surfaces will merely move the radioactive material without eliminating it. 
After an earlier decontamination plan to clean all areas with radiation levels above 5 millisieverts per year had raised protests, the Japanese government revealed on 10 October 2011, in a meeting with experts, a revised decontamination plan. This plan included: all areas with radiation levels above 1 millisievert per year would be cleaned; no-entry zones and evacuation zones designated by the government would be the responsibility of the government; the rest of the areas would be cleaned by local authorities; in areas with radiation levels above 20 millisieverts per year, decontamination would be done step by step; within two years, radiation levels between 5 and 20 millisieverts should be cut down to 60%; and the Japanese government would help local authorities with disposing of the enormous amount of radioactive waste. On 19 December 2011, the Japanese Ministry of Environment published more details about these plans for decontamination: the work would be subsidized in 102 villages and towns. Opposition against the plan came from cattle farmers in Iwate Prefecture and the tourist industry in the city of Aizuwakamatsu, because of fears that cattle sales might drop or tourism would be hurt if the areas were labeled as contaminated. Areas with lower readings complained that their decontamination would not be funded. In a Reuters story from August 2013, it was noted "[m]any have given up hope of ever returning to live in the shadow of the Fukushima nuclear plant. A survey in June showed that a third of the former residents of Iitate, a lush village famed for its fresh produce before the disaster, never want to move back. Half of those said they would prefer to be compensated enough to move elsewhere in Japan to farm." In addition, despite being allowed to return home, some residents say the lack of an economy continues to make the area de facto unlivable. Compensation payments to those who have been evacuated are stopped when they are allowed to return home, but decontamination of the area has progressed more slowly than expected. There have also been revelations of additional leaks (see above: storage tanks leaking contaminated water). Cementing the seabed near the water intake On 22 February 2012, TEPCO started cementing the seabed near the plant to prevent the spread of radioactive materials into the sea. Some 70,000 square meters of seabed around the intake of cooling water would be covered with a 60-centimeter-thick layer of cement. The work was expected to take four months, and to prevent the spread of contaminated mud and sand for at least 50 years. New definition of the no-entry zones introduced On 18 December 2011, Fukushima Governor Yuhei Sato and representatives of 11 other municipal governments near the plant were notified, at a meeting in the city of Fukushima with the three ministers in charge of handling the crisis (Yukio Edano, minister of Economy, Trade and Industry; Goshi Hosono, nuclear disaster minister; and Tatsuo Hirano, minister in charge of reconstruction), that the government planned to revise the classification of the no-entry zones around the Fukushima nuclear plant.
From 1 April 2012, a three-level system would be introduced by the Japanese government: no-entry zones, with an annual radiation exposure of 50 millisieverts or more, where habitation would be prohibited; zones with annual radiation exposures between 20 and 50 millisieverts, where former residents could return but with restrictions; and zones with exposures of less than 20 millisieverts per year, where residents would be allowed to return to their houses. Decontamination efforts were planned in line with this classification, to help people return to places where the radiation levels would be relatively low. Costs of the cleanup operations In mid December 2011, the local authorities in Fukushima had spent around 1.7 billion yen ($21 million) on decontamination work in the cities of Fukushima and Date and the village of Kawauchi. In 2017, the Japan Center for Economic Research estimated the total cleanup costs to be between 50 and 70 trillion yen ($470 to $660 billion), with a 2019 estimate of between 35 and 80 trillion yen; in both estimates the upper end included removing tritium from contaminated water. For the cleanup, only 184.3 billion yen was reserved in the September supplementary budget of Fukushima Prefecture, along with some funds in the central government's third supplementary budget of 2011. Whenever needed, the central government would be asked for extra funding. In 2016, University of Oxford researcher and author Peter Wynn Kirby wrote that the government had allocated the equivalent of US$15 billion for the regional cleanup and described the josen (decontamination) process, with "provisional storage areas (kari-kari-okiba) ... [and] more secure, though still temporary, storage depots (kari-okiba)". Kirby opined that the effort would be better called "transcontamination" because it was moving the contaminated material around without long-term safe storage planned or executed. He also saw little progress on handling the more intense radioactive waste of the destroyed power plant site itself, or on handling the larger issue of the national nuclear program's waste, particularly given Japan's earthquake risk relative to secure long-term storage. Lessons learned to date The Fukushima Daiichi nuclear disaster revealed the dangers of building multiple nuclear reactor units close to one another. This proximity triggered the parallel, chain-reaction accidents that led to hydrogen explosions blowing the roofs off reactor buildings and water evaporating from open-air spent fuel pools, a situation that was potentially more dangerous than the loss of reactor cooling itself. Because of the proximity of the reactors, Plant Director Masao Yoshida "was put in the position of trying to cope simultaneously with core meltdowns at three reactors and exposed fuel pools at three units".
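The zone thresholds in the classification described earlier in this section are annual doses, while measurements reported elsewhere in this article (for example, the 2 to 16 microsieverts per hour recorded in the workers' dorms) are hourly dose rates. A minimal sketch of the conversion is given below; it assumes continuous exposure for every hour of the year, which is a simplification rather than the official assessment method.

```python
# Convert the annual dose thresholds used in the zoning (mSv/year) into
# equivalent continuous dose rates (uSv/hour).
# ASSUMPTION: exposure for all hours of the year; official assessments weight
# time spent indoors and outdoors differently.
HOURS_PER_YEAR = 8766  # average year length, including the leap-year fraction

def annual_msv_to_hourly_usv(annual_msv: float) -> float:
    """Annual dose in millisieverts -> continuous dose rate in microsieverts per hour."""
    return annual_msv * 1000.0 / HOURS_PER_YEAR  # 1 mSv = 1000 uSv

for threshold in (1, 20, 50):  # thresholds quoted in the decontamination and zoning plans
    print(f"{threshold:>2} mSv/year ~ {annual_msv_to_hourly_usv(threshold):.2f} uSv/hour")
# Output: 1 mSv/year ~ 0.11 uSv/hour, 20 mSv/year ~ 2.28 uSv/hour, 50 mSv/year ~ 5.70 uSv/hour
```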
See also Tritiated water Human decontamination Discharge of radioactive water of the Fukushima Daiichi Nuclear Power Plant Notes References Sources (in Japanese) Management of contaminated water (in Japanese) External links PM Information on contaminated water leakage at TEPCO's Fukushima Daiichi Nuclear Power Station, Prime Minister of Japan and His Cabinet MOFA Information on contaminated water leakage at TEPCO's Fukushima Daiichi Nuclear Power Station, Ministry of Foreign Affairs TEPCO News Releases, Tokyo Electric Power Company NRA, Japan, Nuclear Regulation Authority NISA, Nuclear and Industrial Safety Agency, former organization Fukushima Diary News site of a concerned Japanese man in Europe Decommissioning plan of Fukushima Daiichi Nuclear Power Station Mid-and-Long-Term Roadmap towards the Decommissioning of TEPCO's Fukushima Daiichi Nuclear Power Station Units 1–4 Fukushima Daiichi nuclear disaster Ecological restoration Radioactively contaminated areas
Fukushima nuclear accident cleanup
[ "Chemistry", "Technology", "Engineering" ]
8,615
[ "Radioactively contaminated areas", "Radioactive contamination", "Ecological restoration", "Soil contamination", "Environmental engineering" ]
32,823,089
https://en.wikipedia.org/wiki/Surround%20optical-fiber%20immunoassay
Surround optical-fiber immunoassay (SOFIA) is an ultrasensitive, in vitro diagnostic platform incorporating a surround optical-fiber assembly that captures fluorescence emissions from an entire sample. The technology's defining characteristics are its extremely high limit of detection, sensitivity, and dynamic range. SOFIA's sensitivity is measured at the attogram level (10−18 g), making it about one billion times more sensitive than conventional diagnostic techniques. Based on its enhanced dynamic range, SOFIA is able to discriminate levels of analyte in a sample over 10 orders of magnitude, facilitating accurate titering. As a diagnostic platform, SOFIA has a broad range of applications. Several studies have already demonstrated SOFIA's unprecedented ability to detect naturally occurring prions in the blood and urine of disease carriers. This is expected to lead to the first reliable ante mortem screening test for vCJD, BSE, scrapie, CWD, and other transmissible spongiform encephalopathies. Given the technology's extreme sensitivity, additional unique applications are anticipated, including in vitro tests for other neurodegenerative diseases, such as Alzheimer's and Parkinson's disease. SOFIA was developed as a result of a joint-collaborative research project between Los Alamos National Laboratory and State University of New York, and was supported by the Department of Defense's National Prion Research Program. Background The conventional method of performing laser-induced fluorescence, as well as other types of spectroscopic measurements, such as infrared, ultraviolet-visible spectroscopy, phosphorescence, etc., is to use a small transparent laboratory vessel, a cuvette, to contain the sample to be analyzed. To perform a measurement, the cuvette is filled with the liquid to be investigated and then illuminated with a laser focused through one of the cuvette's faces. A lens is placed in line with one of the faces of the cuvette located at 90° from the input window to collect the laser-induced fluorescent light. Only a small volume of the cuvette is actually illuminated by the laser and produces a detectable spectroscopic emission. The output signal is significantly reduced because the lens picks up only about 10% of the spectroscopic emission due to solid angle considerations. This technique has been used for at least 75 years; even before the laser existed, when conventional light sources were used to excite the fluorescence. SOFIA solves the problem of low collection efficiency, as it collects nearly all of the fluorescent light produced from the sample being analyzed, increasing the amount of fluorescence signal by around a factor of 10 over conventional apparatus. Technological advantages SOFIA is an apparatus and method for improved optical geometry for enhancement of spectroscopic detection of analytes in a sample. The invention has already demonstrated its proof-of-concept functionality as an apparatus and method for ultrasensitive detection of prions and other low-level analytes. SOFIA combines the specificity inherent in monoclonal antibodies for antigen capture with the sensitivity of surround optical detection technology. To detect extremely low signal levels, a low-noise, photovoltaic diode is used as the detector for the system. SOFIA uses a laser to illuminate a microcapillary tube holding the sample. Then, the light collected from the sample is directed to transfer optics from optical fibers. 
Next, the light is optically filtered for detection, which is performed as a current measurement amplified against noise by a digital signal processing lock-in amplifier. The results are displayed on a computer running software designed for data acquisition. The advantages of such a detection array are numerous. Primarily, it permits very small samples at low concentration to be optimally interrogated using the laser-induced fluorescence technique. This fiber-based detection system is adaptable to existing short-pulsed detection hardware that was originally developed for sequencing single DNA molecules. The geometry is also amenable to deployment for short-pulse laser, single-molecule detection schemes. The multiport geometry of the system allows efficient electronic processing of the signals from each arm of the device. Finally, and perhaps most importantly, fiberoptic cables are essentially 100% efficient in optical transmission, having an attenuation of less than 10 dB/km. Thus, once deployed for use in a facility, the fluorescence information can be fiberoptically transmitted to a remote location, where data processing and analysis can be performed. Components of SOFIA SOFIA comprises a multiwell plate sample container, an automated means for successively transporting samples from the multiwell plate sample container to a transparent capillary contained within a sample holder, an excitation source in optical communication with the sample, wherein radiation from the excitation source is directed along the length of the capillary, and wherein the radiation induces a signal which is emitted from the sample, and at least one linear array. Steps in SOFIA Assay preparation After amplifying and then concentrating the target analyte, the samples are labeled with a fluorescent dye using an antibody for specificity and then finally loaded into a microcapillary tube. This tube is placed in a specially constructed apparatus so it is totally surrounded by optical fibers to capture all light emitted once the dye is excited using a laser. Instrumentation processing This equipment is a spectroscopic (light-gathering) apparatus and corresponding method for rapidly detecting and analyzing analytes in a sample. The sample is irradiated by an excitation source in optical communication with the sample. The excitation source may include, but is not limited to, a laser, a flash lamp, an arc lamp, a light-emitting diode, or the like. Figure 1 depicts the current version of the SOFIA system. Four linear arrays (101) extend from a sample holder (102), which houses an elongated, transparent sample container which is open at both ends, to an end port (103). The distal end of the end port (104) is inserted into an end port assembly (200). The linear arrays (101) comprise a plurality of optical fibers having a first end and a second end, the plurality of optical fibers optionally surrounded by a protective and/or insulating sheath. The optical fibers are linearly arranged, meaning that they are substantially coplanar with respect to one another so as to form an elongated row of fibers. Applications The analyte of interest may be biological or chemical in nature and, by way of example only, may include chemical moieties (toxins, metabolites, drugs and drug residues), peptides, proteins, cellular components, viruses, and combinations thereof. The analyte of interest may be in either a fluid or a supporting medium, such as a gel. SOFIA has demonstrated its potential as a device with a wide range of applications.
These include clinical applications, such as detecting diseases, discovering predispositions to pathologies, establishing a diagnosis and tracking the effectiveness of prescribed treatments, and nonclinical applications, such as preventing the entry of toxins and other pathogenic agents into products intended for human consumption: Clinical applications – SOFIA may be used to conduct both qualitative tests (either positive or negative results) to detect or identify bacteria or viruses, and quantitative tests (measuring substances) to detect or quantify biological constants or markers, which are substances produced by the body in the presence of, for example, an infectious disease (to allow determination of viral load, for instance, in AIDS therapy, or the level of toxicity in drugs of abuse detection). Nonclinical applications - As an immunoassay, SOFIA can potentially be used on a wider scale to monitor the quality of food, pharmaceuticals, cosmetics, or water, as well as general environmental parameters and agricultural products. The ability to detect and screen bacteria and toxins for a wide range of products is a growing and more complex requirement as may be evidenced by the increase incidence of food- and animal-borne diseases, such as E. coli, Salmonella, BSE, avian influenza, etc. Ante mortem test for prion diseases SOFIA has been used to rapidly detect the abnormal form of the prion protein (PrPSc) in samples of bodily fluids, such as blood or urine. PrPSc is the marker protein used in diagnostics for transmissible spongiform encephalopathies (TSEs), examples of which include bovine spongiform encephalopathy in cattle (i.e. “mad cow” disease), scrapie in sheep, and Creutzfeldt–Jakob disease in humans. Currently, no rapid means exists for the ante mortem detection of PrPSc in the dilute quantities in which it usually appears in bodily fluids. SOFIA has the advantages of requiring little sample preparation, and allowing for electronic diagnostic equipment to be placed outside the containment area. Background TSEs, or prion diseases, are infectious neurodegenerative diseases of mammals that include bovine spongiform encephalopathy, chronic wasting disease of deer and elk, scrapie in sheep, and Creutzfeldt–Jakob disease (CJD) in humans. TSEs may be passed from host to host by ingestion of infected tissues or blood transfusions. Clinical symptoms of TSEs include loss of movement and coordination and dementia in humans. They have incubation periods of months to years, but after the appearance of clinical signs, they progress rapidly, are untreatable and invariably are fatal. Attempts at TSE risk-reduction have led to significant changes in the production and trade of agricultural goods, medicines, cosmetics, blood and tissue donations, and biotechnology products. Post mortem neuropathological examination of brain tissue from an animal or human has remained the ‘gold standard’ of TSE diagnosis and is very specific, but not as sensitive as other techniques. To improve food safety, it would be beneficial to screen all the animals for prion diseases using ante mortem, preclinical testing, i.e., testing prior to presentation of symptoms. However, PrPSc levels are very low in presymptomatic hosts. In addition, PrPScs are generally unevenly distributed in body tissues, with highest concentration consistently found in nervous system tissues and very low concentrations in easily accessible body fluids such as blood or urine. 
Therefore, any such test would be required to detect extremely small amounts of PrP and would have to differentiate between PrPC and PrPSc. Current PrPSc detection methods are time-consuming and employ post mortem analysis after suspicious animals manifest one or more symptoms of the disease. Current diagnostic methods are based mainly on detection of physiochemical differences between PrPC and PrPSc which, to date, are the only reliable markers for TSEs. For example, the most widely used diagnostic tests exploit the relative protease resistance of PrPSc in brain samples to discriminate between PrPC and PrPSc, in combination with antibody-based detection of the PK-resistant portion of PrPSc. It has not yet been possible to detect prion diseases by using conventional methods, such as polymerase chain reaction, serology, or cell culture assays. An agent-specific nucleic acid has not yet been identified, and the infected host does not elicit an antibody response. The conformationally altered form of PrPC is PrPSc. Some groups believe PrPSc is the infectious agent (prion agent) in TSEs, while other groups do not. PrPSc could be a neuropathological product of the disease process, a component of the infectious agent, the infectious agent itself, or something else altogether. Regardless of what its actual function in the disease state is, PrPSc is clearly specifically associated with the disease process, and detection of it indicates infection with the agent causing prion diseases. SOFIA as an ante mortem test for prion diseases SOFIA provides, among other things, methods to diagnose prion diseases by detection of PrPSc in biological samples. Samples can be brain tissue, nerve tissue, blood, urine, lymphatic fluid, cerebrospinal fluid, or a combination thereof. Absence of PrPSc indicates no infection with the infectious agent up to the detection limits of the methods. Detection of the presence of PrPSc indicates infection with the infectious agent associated with prion disease. Infection with the prion agent may be detected in both presymptomatic and symptomatic stages of disease progression. These and other improvements have been achieved with SOFIA. SOFIA's sensitivity and specificity eliminate the need for PK digestion to distinguish between the normal and abnormal PrP isoforms. Further detection of PrPSc in blood plasma has been addressed by limited protein misfolding cyclic amplification (PMCA) followed by SOFIA. Because of the sensitivity of SOFIA, PMCA cycles can be reduced, thus decreasing the chances of spontaneous PrPSc formation and the detection of false-positive samples. SOFIA meets the need for increased sensitivity in the detection of prion diseases in both presymptomatic and symptomatic TSE-infected animals, including humans, by providing methods of analysis using highly sensitive instrumentation, which requires less sample preparation than previously described methods, in combination with recently developed Mabs against PrP. The current version of SOFIA provides sensitivity levels sufficient to detect PrPSc in brain tissue. When coupled with limited sPMCA, the method provides sensitivity levels sufficient to detect PrPSc in blood plasma, tissue and other fluids collected ante mortem. The methods combine the specificity of the Mabs for antigen capture and concentration with the sensitivity of a surround optical fiber detection technology.
In contrast to previously described methods for detection of PrPSc in brain homogenates, these techniques, when used to study brain homogenates, do not use seeded polymerization, amplification, or enzymatic digestion (for example, by proteinase K, or “PK”). This is important in that previous reports have indicated the existence of PrPSc isoforms with varied PK sensitivity, which decreases reliability of the assay. The sensitivity of this assay makes it suitable as a platform for a rapid prion detection assay in biological fluids. In addition to prion diseases, the method may provide a means for rapid, high-throughput testing for a wide spectrum of infections and disorders. While about 40 cycles of sPMCA combined with immunoprecipitation were found to be inadequate for PrPSc detection in plasma by ELISA or western blotting, PrPSc has nevertheless been found to be readily measured by SOFIA methods. The limited number of cycles necessary for the present assay platform virtually eliminates the possibility of obtaining PMCA-related false-positive results such as those previously reported (Thorne and Terry, 2008). Other clinical applications With rapid developments in the field of biomarker research, many infections and disorders that could not previously be diagnosed via in vitro testing are becoming increasingly diagnosable. SOFIA is predicted to be of broader use in diagnostic assay development for infections and disorders beyond the scope of prion diseases. A major potential application is for other protein misfolding diseases, in particular Alzheimer's. Published research A 2011 study reported the detection of prions in urine from naturally and orally infected sheep with clinical scrapie and orally infected preclinical and infected white-tailed deer with clinical chronic wasting disease (CWD). This is the first report of the detection of PrPSc in the urine of naturally or preclinically prion-diseased ovines or cervids. A 2010 study demonstrated that a moderate amount of protein misfolding cyclic amplification (PMCA), coupled to a novel SOFIA detection scheme, can be used to detect PrPSc in protease-untreated plasma from preclinical and clinical scrapie sheep, and white-tailed deer with chronic wasting disease, following natural and experimental infection. The disease-associated form of the prion protein (PrPSc), resulting from a conformational change of the normal (cellular) form of prion protein (PrPC), is considered central to neuropathogenesis and serves as the only reliable molecular marker for prion disease diagnosis. While the highest levels of PrPSc are present in the CNS, the development of a reasonable diagnostic assay requires the use of body fluids, which characteristically contain extremely low levels of PrPSc. PrPSc has been detected in the blood of sick animals by means of PMCA technology. However, repeated cycling over several days, which is necessary for PMCA of blood material, has been reported to result in decreased specificity (false positives). To generate an assay for PrPSc in blood that is both highly sensitive and specific, the researchers used limited serial PMCA (sPMCA) with SOFIA. They did not find any enhancement of sPMCA with the addition of polyadenylic acid, nor was it necessary to match the genotypes of the PrPC and PrPSc sources for efficient amplification. A 2009 study found that SOFIA, in its current format, is capable of detecting less than 10 attograms (ag) of hamster, sheep and deer recombinant PrP.
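To put this detection limit in perspective, a rough back-of-the-envelope conversion can be made (a sketch only; the ~25 kDa molecular mass assumed below for recombinant PrP is an illustrative approximation, not a figure taken from the studies):

```python
# Rough conversion of a 10-attogram detection limit into a number of PrP molecules.
# Assumption (illustrative): recombinant PrP mass ~25 kDa = 25,000 g/mol.
AVOGADRO = 6.022e23          # molecules per mole
mass_g = 10e-18              # 10 attograms expressed in grams
molar_mass = 25_000          # g/mol, assumed molecular mass of recombinant PrP

molecules = mass_g / molar_mass * AVOGADRO
print(f"{molecules:.0f} molecules")   # on the order of a few hundred molecules
```

In other words, under this assumption the reported limit corresponds to only a few hundred PrP molecules.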
About 10 ag of PrPSc from 263K-infected hamster brains can be detected with similar lower limits of PrPSc detection from the brains of scrapie-infected sheep and deer infected with chronic wasting disease. These detection limits allow protease-treated and untreated material to be diluted beyond the point where PrPC, nonspecific proteins or other extraneous material may interfere with PrPSc signal detection and/or specificity. This not only eliminates the issue of specificity of PrPSc detection, but also increases sensitivity, since the possibility of partial PrPSc proteolysis is no longer a concern. SOFIA will likely lead to early ante mortem detection of transmissible encephalopathies and is also amenable for use with additional target amplification protocols. SOFIA represents a sensitive means for detecting specific proteins involved in disease pathogenesis and/or diagnosis that extends beyond the scope of the transmissible spongiform encephalopathies. See also Immunoassay References External links Bionosis SOFIA Video Introduction Immunologic tests Protein methods Molecular biology Laboratory techniques Molecular biology techniques Prions
Surround optical-fiber immunoassay
[ "Chemistry", "Biology" ]
3,800
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Immunologic tests", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry" ]
24,218,245
https://en.wikipedia.org/wiki/C20H23N7O7
The molecular formula C20H23N7O7 (molar mass: 473.44 g/mol, exact mass: 473.1659 u) may refer to: Folinic acid 10-Formyltetrahydrofolate (10-CHO-THF) Molecular formulas
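As a quick arithmetic check (a sketch only; one common set of standard IUPAC atomic weights is assumed), the quoted molar mass can be recomputed directly from the formula:

```python
# Recompute the molar mass of C20H23N7O7 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
composition = {"C": 20, "H": 23, "N": 7, "O": 7}

molar_mass = sum(count * atomic_weight[element] for element, count in composition.items())
print(f"{molar_mass:.2f} g/mol")  # 473.44 g/mol, matching the value quoted above
```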
C20H23N7O7
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,219,329
https://en.wikipedia.org/wiki/Neurogenomics
Neurogenomics is the study of how the genome of an organism influences the development and function of its nervous system. This field intends to unite functional genomics and neurobiology in order to understand the nervous system as a whole from a genomic perspective. The nervous system in vertebrates is made up of two major types of cells – neuroglial cells and neurons. Hundreds of different types of neurons exist in humans, with varying functions – some of them process external stimuli; others generate a response to stimuli; others organize in centralized structures (brain, spinal ganglia) that are responsible for cognition, perception, and regulation of motor functions. Neurons in these centralized locations tend to organize in giant networks and communicate extensively with each other. Prior to the availability of expression arrays and DNA sequencing methodologies, researchers sought to understand the cellular behaviour of neurons (including synapse formation and neuronal development and regionalization in the human nervous system) in terms of the underlying molecular biology and biochemistry, without any understanding of the influence of a neuron's genome on its development and behaviour. As our understanding of the genome has expanded, the role of networks of gene interactions in the maintenance of neuronal function and behaviour has garnered interest in the neuroscience research community. Neurogenomics allows scientists to study the nervous system of organisms in the context of these underlying regulatory and transcriptional networks. This approach is distinct from neurogenetics, which emphasizes the role of single genes without a network-interaction context when studying the nervous system. Approaches Advent of high-throughput biology In 1999, Cirelli & Tononi first reported the association of genome-wide brain gene expression profiling (using microarrays) with a behavioural phenotype in mice. Since then, global brain gene expression data, derived from microarrays, has been aligned to various behavioural quantitative trait loci (QTLs) and reported in several publications. However, microarray based approaches have their own problems that confound analysis – probe saturation can result in very small measurable variance of gene expression between genetically unique individuals, and the presence of single nucleotide polymorphisms (SNPs) can result in hybridization artifacts. Furthermore, due to their probe-based nature, microarrays can miss out on many types of transcripts (ncRNAs, miRNAs, and mRNA isoforms). Probes can also have species-specific binding affinities that can confound comparative analysis. Notably, the association between behavioural patterns and high penetrance single gene loci falls under the purview of neurogenetics research, wherein the focus is to identify a simple causative relationship between a single, high penetrance gene and an observed function/behaviour. However, it has been shown that several neurological diseases tend to be polygenic, being influenced by multiple different genes and regulatory regions instead of one gene alone. There has hence been a shift from single gene approaches to network approaches for studying neurological development and diseases, a shift that has been greatly propelled by the advent of next generation sequencing methodologies. 
Next-generation sequencing approaches Twin studies have revealed that schizophrenia, bipolar disorder, autism spectrum disorder (ASD), and attention deficit hyperactivity disorder (ADHD) are highly heritable, genetically complex psychiatric disorders. However, linkage studies have largely failed to identify causative variants for psychiatric disorders such as these, primarily because of their complex genetic architecture. Multiple low penetrance risk variants can be aggregated in affected individuals and families, and sets of causative variants could vary across families. Studies along these lines have determined a polygenic basis for several psychiatric disorders. Several independently occurring de novo mutations in patients with Alzheimer's disease have been found to disrupt a shared set of functional pathways involved with neuronal signalling, for example. The quest to understand the causative biology of psychiatric disorders is hence greatly assisted by the ability to analyse entire genomes of affected and unaffected individuals in an unbiased manner. With the availability of massively parallel next generation sequencing methodologies, scientists have been able to look beyond probe-based capture of expressed genes. RNA-seq, for example, identifies 25-60% more expressed genes than microarrays do. In the upcoming field of neurogenomics, it is hoped that by understanding the genomic profiles of different parts of the brain, we might be able to improve our understanding of how the interactions between genes and pathways influence cellular function and development. This approach is expected to be able to identify the secondary gene networks that are disrupted in neurological disorders, subsequently assisting drug development strategies for brain diseases. The BRAIN Initiative, launched in 2013, for example, seeks to "inform the development of future treatments for brain disorders, including Alzheimer's disease, epilepsy, and traumatic brain injury". Rare variant association studies (RVAS) have highlighted the role of de novo mutations in several congenital and early-childhood-onset disorders like autism. Several of these protein-disrupting mutations could be identified only with the aid of whole genome sequencing efforts, and validated with RNA-Seq. Additionally, these mutations are not statistically enriched in individual genes, but rather, exhibit patterns of statistical enrichment in groups of genes associated with networks regulating neurological development and maintenance. Such a discovery would have been impossible with prior gene-centric approaches (neurogenetics, behavioural neuroscience). Neurogenomics allows for a high-throughput system-based approach for understanding the polygenic basis of neuropsychiatric disorders. Imaging studies and optical mapping When autism was identified as a distinct biological disorder in the 1980s, researchers found that autistic individuals showed a brain growth abnormality in the cerebellum in their early developmental years. Subsequent research has indicated that 90% of autistic children have a larger brain volume than their peers by 2 to 4 years of age, and show an expansion in the white and gray matter content in the cerebrum. The white and gray matter in the cerebrum are associated with learning and cognition, respectively, and the formation of amyloid plaques in the white matter has been associated with Alzheimer's disease.
These findings highlighted the influence of structural variance in the brain on psychiatric disorders, and have motivated the use of imaging technologies to map regions of divergence between healthy and diseased brains. Furthermore, while it may not always be possible to retrieve biological specimens from different areas of live human brains, neuroimaging techniques offer a noninvasive means of understanding the biological basis of neurological disorders. It is hoped that an understanding of localization patterns of different psychiatric diseases could in turn inform network analysis studies in neurogenomics. MRI Structural Magnetic Resonance Imaging (MRI) can be used to identify the structural composition of the brain. Particularly in the context of neurogenomics, MRI has played an extensive role in the study of Alzheimer's disease over the past four decades. It was initially used to rule out other causes of dementia, but recent studies indicated the presence of characteristic changes in patients with Alzheimer's disease. As a result, MRI scans are currently being used as a neuroimaging tool to help identify the temporal and spatial pathophysiology of Alzheimer's disease, such as specific cerebral alterations and amyloid imaging. The ease and non-invasive nature of MRI scans have motivated research projects that trace the development and onset of psychiatric diseases in the brain. Alzheimer's disease has become a key candidate in this topographical approach to psychiatric diseases. For example, MRI scans are currently being used to track the resting and task-dependent functional profiles of brains in children with autosomal dominant Alzheimer's disease. These studies have found indications of early onset brain alterations in individuals at risk for Alzheimer's disease. The Autism Center of Excellence at University of California, San Diego, is also conducting MRI studies with children between 12 and 42 months of age, in the hopes of characterizing brain development abnormalities in children who present behavioural symptoms of autism. Additional research has indicated that there are specific patterns of atrophy in the cerebrum (as a repercussion of neurodegeneration) in different neurological disorders and diseases. These disease-specific patterns of progression of atrophy can be identified with MRI scans, and provide a clinical phenotype context to neurogenomic research. The temporal information about disease progression provided by this approach can also potentially inform the interpretation of gene network-level perturbations in psychiatric diseases. Optical mapping One prohibitive feature of 2nd generation sequencing methodologies is the upper limit on the genomic range accessible by mate-pairing. Optical mapping is an emerging methodology used to span large-scale variants that cannot usually be detected using paired end reads. This approach has been successfully applied to detect structural variants in oligodendroglioma, a type of brain cancer. Recent work has also highlighted the versatility of optical maps in improving existing genome assemblies. Chromosomal rearrangements, microdeletions, and large-scale translocations have been associated with impaired neurological and cognitive function, for example in hereditary neuropathy and neurofibromatosis. Optical mapping can significantly improve variant detection and inform gene interaction network models for the diseased state in neurological disorders.
Studying other brain diseases Apart from neurological disorders, there are additional diseases that manifest in the brain and have formed exemplar use-case scenarios for the application of brain imaging in network analysis. In a classic example of imaging-genomic analyses, a research study in 2012 compared MRI scans and gene expression profiles of 104 glioma patients in order to distinguish treatment outcomes and identify novel targetable genomic pathways in Glioblastoma Multiforme (GBM). Researchers found two distinct groups of patients with significantly different organization of white matter (invasive vs non-invasive). Subsequent pathway analysis of the gene expression data indicated mitochondrial dysfunction as the top canonical pathway in an aggressive, low-mortality GBM phenotype. Expansion of brain imaging approaches to other diseases can be used to rule out other medical illnesses while diagnosing psychiatric disorders, but cannot be used to inform the presence or absence of a psychiatric disorder. Research developmental models In humans The current approaches to collecting gene expression data in human brains are to use either microarrays or RNA-seq. Currently, it is rare to gather "live" brain tissue – only when treatments involve brain surgery is there a chance that brain tissue is collected during the procedure. This is the case with epilepsy. Currently, gene expression data is usually collected from post mortem brains, and this is often a barrier to neurogenomics research in humans. The amount of time between death and when the post mortem brain sample is collected is known as the post mortem interval (PMI). Since RNA degrades after death, a fresh brain is optimal – but not always available. This in turn can influence a variety of downstream analyses. Consideration should be given to the following factors when working with 'omics data collected from post-mortem brains: Ideally, human brains should be controlled for PMIs for a given study. The cause of death is also an important variable to consider in the collection of human brain samples for the purposes of neurogenomics research. For example, brain samples of individuals with clinical depression are often collected after suicide. Certain conditions of death, such as drug overdose or self-inflicted gunshot, will alter gene expression in the brain. Another issue with studying gene expression in brains is the cellular heterogeneity of brain tissue samples. Bulk brain samples may vary in proportions of specific cell populations from case to case. This can impact the gene expression signatures and may significantly change differential expression analysis. One approach to address this issue is to use single cell RNA-seq. This would control for a specific cell type. However, this solution is only applicable where studies are not cell-type specific. Differential diagnosis also remains a critical pre-analytical confounder of cohort-wide studies of spectrum neurological disorders. Specifically, this has been noted to be a problem for Alzheimer's disease and autism spectrum disorder studies. Furthermore, as our understanding of the diverse symptoms and genomic underpinnings of various neurogenomic disorders improves, the diagnostic criteria themselves undergo rearrangement and review.
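As an illustration of how the post-mortem confounders listed above are commonly handled in practice, PMI and estimated cell-type proportions can be included as covariates when testing a gene for differential expression. This is a generic sketch, not a protocol from any particular study; the variable names, the made-up numbers, and the use of a plain least-squares fit are all illustrative assumptions:

```python
import numpy as np

# Toy per-gene model: expression ~ diagnosis + PMI + estimated neuron fraction.
# All numbers below are invented for illustration.
diagnosis = np.array([0, 0, 0, 1, 1, 1])                 # 0 = control, 1 = case
pmi_hours = np.array([6.0, 12.0, 24.0, 8.0, 18.0, 30.0])  # post mortem interval
neuron_frac = np.array([0.42, 0.38, 0.35, 0.40, 0.33, 0.30])
expression = np.array([5.1, 4.8, 4.2, 6.0, 5.2, 4.9])      # log2 expression of one gene

# Design matrix with an intercept; the diagnosis coefficient is the effect of
# interest, estimated while adjusting for PMI and cellular composition.
X = np.column_stack([np.ones_like(pmi_hours), diagnosis, pmi_hours, neuron_frac])
coef, *_ = np.linalg.lstsq(X, expression, rcond=None)
print("adjusted diagnosis effect:", round(coef[1], 3))
```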
Animal models Ongoing genomics research in neurological disorders tends to use animal models (and corresponding gene homologs) to understand the network interactions underlying a particular disorder due to ethical issues surrounding the retrieval of biological specimens from live human brains. This, too, is not without its roadblocks. Neurogenomic research with a model organism is contingent on the availability of a fully sequenced and annotated reference genome. Additionally, the RNA profiles (miRNA, ncRNA, mRNA) of the model organism need to be well catalogued, and any inferences applied from them to humans must have a basis in functional/sequence homology. Zebrafish Zebrafish development relies on gene networks that are highly conserved among all vertebrates. Additionally, with an extremely well annotated set of 12,000 genes and 1,000 early development mutants that are actually visible in the optically clear zebrafish embryos and larvae, zebrafish offer a sophisticated system for mutagenesis and real-time imaging of developing pathologies. This early development model has been employed to study the nervous system at cellular resolution. The zebrafish model system has already been used to study neuroregeneration and severe polygenic human diseases like cancer and heart disease. Several zebrafish mutants with behavioural variations in response to cocaine and alcohol dosage have been isolated and can also form a basis for studying the pathogenesis of behavioural disorders. Rodent Rodent models have been preeminent in studying human disorders. These models have been extensively annotated with gene homologs of several monogenic disorders in humans. Knockout studies of these homologs have led to expansion of our understanding of network interactions of genes in human tissues. For example, the FMR1 gene has been implicated with autism from a number of network studies. Using a knockout of FMR1 in mice creates the model for Fragile X Syndrome, one of the disorders in the Autism spectrum. Mice xenografts are particularly useful for drug discovery, and were extremely important in the discovery of early anti-psychotic drugs. The development of animal models for complex psychiatric diseases has also improved over the last few years. Rodent models have demonstrated behavioural phenotype changes resembling a positive schizophrenia state, either after genetic manipulation or after treatment with drugs that target the areas of the brain suspected to influence hyperactivity or neurodevelopment. Interest has been generated in identifying the network disruptions mediated by these laboratory manipulations, and collection of genomic data from rodent studies has contributed significantly to a better understanding of the genomics of psychiatric diseases. The first mouse brain transcriptome was generated in 2008. Since then, extensive work has been done with building social-stress mice models to study the pathway level expression signatures of various psychiatric diseases. A recent paper simulated features of Post Traumatic Stress Disorder (PTSD) in mice, and profiled the entire transcriptome of these mice. The authors found differential regulation in many biological pathways, some of which were implicated in anxiety disorders (hyperactivity, fear response), mood disorders, and impaired cognition. 
These findings are backed by extensive transcriptomic analyses of anxiety disorders, and expression level changes in biological pathways involved with fear learning and memory are thought to contribute to the behavioural manifestations of these disorders. It is thought that functional enrichment of genes involved in long term synaptic potentiation, depression, and plasticity has an important role to play in the acquisition, consolidation, and maintenance of traumatic memories underlying anxiety disorders. Experimental mouse models for psychiatric disorders A common approach to using a mouse model is to apply an experimental treatment to a pregnant mouse in order to affect a whole litter. However, a key issue in the field is the treatment of litters in a statistical analysis. Most studies consider the total number of offspring produced, as that may lead to an increase in statistical power. However, the correct way is to count by the number of litters and to normalize based on litter size. It was found that several autism studies incorrectly performed their statistical analyses based on total number of offspring instead of number of litters. Several anxiety disorders such as post-traumatic stress disorder (PTSD) involve heterogeneous changes in several different brain regions, such as the hippocampus, amygdala, and nucleus accumbens. The cellular encoding of traumatic events and the behavioural responses triggered by such events have been shown to lie primarily in changes in signaling molecules associated with synaptic transmission. Global gene expression profiling of the various brain regions implicated in fear and anxiety processing, using mouse models, has led to the identification of temporally and spatially distinct sets of differentially expressed genes. Pathway analysis of these genes has indicated possible roles in neurogenesis and anxiety-related behavioural responses, alongside other functional and phenotypic observations. Mouse models for brain research have contributed significantly to drug development and increased our understanding of the genomic underpinnings of several neurological diseases in the last generation. Chlorpromazine, the first antipsychotic drug (discovered in 1951), was identified as a viable treatment option after it was shown to suppress response to aversive stimuli in rats in a behavioural screen. Challenges The modelling and assessment of latent symptoms (thoughts, verbal learning, social interactions, cognitive behaviour) remains a challenge when using model organisms to study psychiatric disorders with a complex genetic pathology. For example, a given genotype and phenotype in a mouse model must imitate the genomic underpinnings of a phenotype observed in a human. This is a particularly crucial item of consideration in spectrum disorders such as autism. Autism is a disorder whose symptoms can be divided into two categories: (i) deficits in social interactions and (ii) repetitive behaviours and restricted interests. Since mice tend to be the most social creatures among the members of the order Rodentia currently being used as model organisms, mice are generally used to model human psychiatric disorders as closely as possible. Particularly for autism, the following work-arounds are currently in place to emulate human behavioural symptoms: For the first diagnostic category of impaired social behaviour, mice are subjected to a social assay intended to represent typical autistic social deficits.
Normal social behaviour for mice includes sniffing, following, physical contact and allogrooming. Vocal communication could be used as well. There are a number of ways the second diagnostic category can be observed in mice. Examples of repetitive behaviours can include excessive circling, self-grooming and excessive digging. Usually these behaviours would be performed consistently within a long measurement of time (e.g. self-grooming for 10 minutes). While repetitive behaviours are easily observable, it is difficult to characterize actual restricted interests of mice. One aspect of restricted interests of autistic individuals is the "insistence on sameness"—the concept that autistic individuals require their environment to remain consistent. If that environment should change, the individual would experience stress and anxiety. There has been reported success in confirming a mouse model of autism by changing the mouse's environment. In any of these experiments, the 'autistic' mice have a 'normal' socializing partner and the scientists observing the mice are blinded to the genotypes of the mice. Gene expression in the brain The gene expression profile of the central nervous system (CNS) is unique. Eighty percent of all human genes are expressed in the brain; 5,000 of these genes are solely expressed in the CNS. The human brain has the highest amount of gene expression of all studied mammalian brains. By contrast, tissues outside of the brain have expression levels that are more similar to those of their mammalian counterparts. One source of the increased expression levels in the human brain is the non-protein-coding region of the genome. Numerous studies have indicated that the human brain has a higher level of expression in regulatory regions in comparison to other mammalian brains. There is also a notable enrichment of alternative splicing events in the human brain. Spatial differences Gene expression profiles also vary within specific regions of the brain. A microarray study showed that the transcriptome profile of the CNS clusters together based on region. A different study characterized the regulation of gene expression across 10 different regions based on their eQTL signals. The cause of the varying expression profiles relates to function, neuron migration and cellular heterogeneity of the region. Even the three layers of the cerebral cortex have distinct expression profiles. A study completed at Harvard Medical School in 2014 was able to identify developmental lineages stemming from single-base neuronal mutations. The researchers sequenced 36 neurons from the cerebral cortex of three normal individuals, and found that highly expressed genes and neural-associated genes were significantly enriched for single-neuron SNVs. These SNVs, in turn, were found to be correlated with chromatin markers of transcription from fetal brain. Development patterns in humans Gene expression in the brain changes throughout the different phases of life. The most significant levels of expression are found during early development, with the rate of gene expression being highest during fetal development. This results from the rapid growth of neurons in the embryo. Neurons at this stage are undergoing neuronal differentiation, cell proliferation, migration events and dendritic and synaptic development. Gene expression patterns shift closer towards specialized functional profiles during embryonic development; however, certain developmental steps are still ongoing at parturition.
Consequently, gene expression profiles of the two brain hemispheres appear asymmetrical at birth. As development continues, the gene expression profiles become similar between the hemispheres. In a healthy adult, expression profiles stay relatively consistent from the late twenties into the late forties. From the fifties onwards, there is a significant decrease in the expression of genes important for regular function. Despite this, there is an increase in the diversity of genes being expressed across the brain. This age-related change in expression may be correlated with GC content. At later stages of life, there is an increase in the induction of low GC-content pivotal genes as well as an increase in the repression of high GC-content pivotal genes. Another cause of the shift in gene diversity is the accumulation of mutations and DNA damage. Gene expression studies show that genes that accrue these age-related mutations are consistent between individuals in the aging population. Genes that are highly expressed during development decrease significantly at late stages in life, whereas genes that are highly repressed during development increase significantly at the late stages. Evolution of the mammalian brain The evolution of Homo sapiens since the divergence from the primate common ancestor has shown a marked expansion in the size and complexity of the brain, especially in the cerebral cortex. In comparison to other primates, the human cerebral cortex has a larger surface area but differs only slightly in thickness. Many large-scale studies of the differences between the human brain and those of other species have indicated that expansion of gene families and changes in alternative splicing are responsible for the corresponding increase in cognitive capabilities and cooperative behaviour in humans. However, we are yet to determine the exact phenotypic consequences of all these changes. One difficulty is that only primates have developed subdivisions in their cerebral cortex, making human-specific neurological problems difficult to model in rodents. Sequence data is used to understand the evolutionary genetic changes which led to the development of the human CNS. We can then understand how the neurological phenotypes differ between species. Comparative genomics entails comparison of sequence data across a phylogeny to pinpoint the genotypic changes that occur within specific lineages, and understand how these changes might have arisen. The increase in high quality mammalian reference sequences generally makes comparative analysis better as it increases statistical power. However, the increase in number of species in a phylogeny does risk adding unnecessary noise as the alignments of the orthologous sequences usually decrease in quality. Furthermore, different classes of species will have significant differences in their phenotypes. Despite this, comparative genomics has allowed us to connect the genetic changes found in a phylogeny to specific pathways. In order to determine this, lineages are tested for the functional changes that accrue over time. This is often measured as a ratio of nonsynonymous substitutions to synonymous substitutions, or the dN/dS ratio (sometimes further abbreviated as ω). When the dN/dS ratio is greater than 1, this indicates positive selection. A dN/dS ratio equal to 1 is evidence of no selective pressures. A dN/dS ratio less than 1 indicates negative selection.
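A minimal sketch of how this classification is applied in practice follows (the per-gene substitution rates are assumed to have already been estimated, for example by a codon-model tool; the numbers are illustrative):

```python
def selection_class(dn: float, ds: float) -> str:
    """Classify selection from nonsynonymous (dN) and synonymous (dS) rates."""
    omega = dn / ds          # the dN/dS ratio, often written as omega
    if omega > 1:
        return f"positive selection (omega = {omega:.2f})"
    if omega < 1:
        return f"negative (purifying) selection (omega = {omega:.2f})"
    return "neutral evolution (omega = 1.00)"

print(selection_class(0.90, 0.30))  # omega = 3.00 -> positive selection
print(selection_class(0.02, 0.20))  # omega = 0.10 -> negative selection
```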
For example, the conserved regions of the genome will generally have a dN/dS ratio of less than 1 since any changes to those positions will likely be detrimental. Of the genes expressed in the human brain, it is estimated that 342 of them have a dN/dS ratio greater than 1 in the human lineage in comparison to other primate lineages. This indicates positive selection on the human lineage for brain phenotypes. Understanding the significance of the positive selection is generally the next step. For example, ASPM, CDK5RAP2 and NIN are genes that are positively selected for on the human lineage and have been directly correlated with brain size. This finding may help elucidate why human brains are larger than those of other mammals. Network level expression differences between species It is thought that gene expression changes, being the ultimate response to any genetic change, are a good proxy for understanding phenotypic differences within biological samples. Comparative studies have revealed a range of differences in the transcriptional controls between primates and rodents. For example, the gene CNTNAP2 is specifically enriched in the prefrontal cortex. The mouse homolog of CNTNAP2 is not expressed in the mouse brain. CNTNAP2 has been implicated in cognitive functions of language as well as neurodevelopmental disorders such as Autism Spectrum Disorder. This suggests that the control of expression plays a significant role in the development of unique human cognitive function. As a consequence, a number of studies have investigated brain-specific enhancers. Transcription factors such as SOX5 have been found to be positively selected for on the human lineage. Gene expression studies in humans, chimpanzees and rhesus macaques have identified human-specific co-expression networks, and an elevation in gene expression in the human cortex in comparison to the other primates. Disorders Neurogenomic disorders manifest themselves as neurological disorders with a complex genetic architecture and a non-Mendelian-like pattern of inheritance. Some examples of these disorders include bipolar disorder and schizophrenia. Several genes may be involved in the manifestation of the disorder, and mutations in such disorders are generally rare and de novo. Hence it is extremely unlikely that the same (potentially causative) variant will be observed in two unrelated individuals affected with the same neurogenomic disorder. Ongoing research has implicated several de novo exonic variations and structural variations in Autism Spectrum Disorder (ASD), for example. The allelic spectrum of the rare and common variants in neurogenomic disorders therefore necessitates large cohort studies in order to effectively exclude low effect variants and identify the overarching pathways frequently mutated in the different disorders, rather than specific genes and specific high penetrance mutations. Whole genome sequencing (WGS) and whole exome sequencing (WES) have been used in genome-wide association studies (GWAS) to characterize genetic variants associated with neurogenomic disorders. However, the impact of these variants cannot always be verified because of the non-Mendelian inheritance patterns observed in several of these disorders. Another prohibitive feature in network analysis is the lack of large-scale datasets for many psychiatric (neurogenomic) diseases.
Since several diseases with neurogenomic underpinnings tend to have a polygenic basis, several nonspecific, rare, and partially penetrant de novo mutations in different patients can contribute to the same observed range of phenotypes, as is the case with Autism Spectrum Disorder and schizophrenia. Extensive research in alcohol dependence has also highlighted the need for high-quality genomic profiling of large sample sets when studying polygenic, spectrum disorders. The 1000 Genomes Project was a successful demonstration of how a concerted effort to acquire representative genomic data from the broad spectrum of humans can result in identification of actionable biological insights for different diseases. However, a large-scale initiative like this is still lacking in the field of neurogenomic disorders specifically. Modelling psychiatric disorders in neurogenomics research – issues One major GWAS study identified 13 new risk loci for schizophrenia. Studying the impact of these candidates would ideally demonstrate a schizophrenia phenotype in animal models, which is usually difficult to observe due to its manifestation in latent personality traits. This approach is able to determine the molecular impact of the candidate gene. Ideally the candidate genes would have a neurological impact, which in turn would suggest that they play a role in the neurological disorder. For example, in the aforementioned schizophrenia GWAS study, Ripke and colleagues determined that these candidate genes were all involved in calcium signalling. Alternatively, one can study these variants in model organisms in the context of affected neurological function. It is important to note that the high penetrance variants of these disorders tend to be de novo mutations. A further complication to studying neurogenomic disorders is the heterogeneous nature of the disorder. In many of these disorders, the mutations observed from case to case do not stay consistent. In autism, an affected individual may carry a large number of deleterious mutations in gene X. A different affected individual may not have any significant mutations in gene X but have a large number of mutations in gene Y. The alternative is to determine if gene X and gene Y impact the same biochemical pathway—one that influences a neurological function. A bioinformatics network analysis is one approach to this problem. Network analysis methodologies provide a generalized, systems overview of a molecular pathway. One final complication to consider is the comorbidity of neurogenomic disorders. Several disorders, especially at the more severe ends of the spectrum, tend to be comorbid with each other. For example, more severe cases of ASD tend to be associated with intellectual disability (ID). This raises the question of whether or not there are true, unique ASD genes and unique ID genes, or just genes associated with neurological function that can be mutated into an abnormal phenotype. One confounding factor may be the diagnostic categories and methods used for the spectrum disorders, as symptoms of severe disorders may be similar. One study investigated the comorbid symptoms between groups of ID and ASD, and found no significant difference between the symptoms of ID children, ASD children with ID and ASD children without ID. Future research may help establish a more stringent genetic basis for the diagnoses of these disorders.
Network analysis The main goal of network analysis in neurogenomics is to identify statistically significant nonrandom associations between genes that contain risk variants. While several algorithmic implementations of this approach already exist, the general steps for network analysis remain the same. The analytical process starts out with the identification of a biological network based on experimental validation. This can be a gene co-expression network, or a protein-protein interaction (PPI) network. The nodes of the network will be clustered. Subsequently, a specific list of genes with known associations to a particular phenotype of interest is generated. This list could be determined by experimental data, agnostic of genetic studies in psychiatric disorders. This is referred to as a 'hit list'. Genes that belong to the hit list as well as the biological network selected in the first step are marked as such. This is followed by a guilt-by-association (GBA) step. This means that clusters within the biological network that have a significant number of genes from the hit list are investigated further using functional enrichment tools and database querying for the pathways in which these high-scoring cluster genes participate. Thus, the biological associations of the high-scoring, experimentally implicated cluster members are investigated, expanding the search area beyond the initial hit list to include gene members of additional pathways that may have significant association with the initial biological network under consideration. This results in a set of candidate genes. The underlying principle of this approach is that the genes that cluster together will also jointly affect the same molecular pathway. Again, they would ideally be part of a neurological function. The candidate genes can then be used to prioritize variants for wet lab validation. Neuropharmacology Historically, due to the behavioural stimulation manifested as a symptom in several of the neurogenomic disorders, the therapies would rely mostly on anti-psychotics or antidepressants. These classes of medications would suppress common symptoms of the disorders, but with questionable efficacy. The biggest barrier to neuropharmacogenomic research has been limited cohort sizes. Given newly available large-cohort sequencing data, there has been a recent push to expand therapeutic options. The heterogeneous nature of neurological diseases is the key motivation for personalized medicine approaches to their therapies. It is rare to find single high penetrance causative genes in neurological diseases. The genomic profiles understandably vary between cases, and logically, the therapies would need to vary between cases. Further complicating the issue is that many of these disorders are spectrum disorders. Their genetic etiology will vary within this spectrum. For example, severe ASD is associated with high penetrance de novo mutations. Milder forms of ASD are usually associated with a mixture of common variants. The key issue then is the translation of these newly identified genetic variants (from copy number variant studies, candidate gene sequencing and high-throughput sequencing technologies) into an intervention for patients with neurogenomic disorders. One aspect will be whether the neurological disorder is medically actionable (i.e., whether there is a simple metabolic pathway that a therapy can target). For example, specific cases of ASD have been associated with microdeletions in the TMLHE gene. This gene codes for an enzyme of carnitine biosynthesis.
Supplements to elevate carnitine levels appeared to alleviate certain ASD symptoms, but the study was confounded by many influencing factors. As mentioned earlier, using a gene network approach will help identify relevant pathways of interest. Many neuropharmacogenomic approaches have focused on targeting the downstream products of these pathways. Blood brain barrier Studies in animal models for several brain diseases have shown that the blood brain barrier (BBB) undergoes modification at many levels; for example, the surface glycoprotein composition can influence the types of HIV-1 strains transported by the BBB. The BBB has been found to be key in the onset of Alzheimer's disease. It is extremely difficult, however, to study this in humans due to obvious restrictions with accessing the brain and retrieving biological specimens for sequencing or morphological analysis. Mouse models of the BBB and models of disease states have served well in conceptualizing the BBB as a regulatory interface between disease and good health in the brain. Personalized neurobiology The heterogeneous nature of neurological diseases is the key motivation for personalized medicine approaches to their therapies. Genomic samples from individual patients could be used to identify predictive factors or to better understand the specific prognosis of a neurogenomic disease, and this information could then be used to guide treatment options. While there is a clear clinical utility to this approach, the adoption of this approach is still essentially nonexistent. There are various issues prohibiting the application of personalized genomics to the assessment, diagnosis, and treatment of psychiatric disorders. Firstly, the causative network biology of several spectrum disorders with neurogenomic underpinnings is not fully understood yet, in spite of extensive studies conducted with disorders like autism spectrum disorder and schizophrenia. Thus, the analytical validity of standing hypotheses concerning the etiology of neurogenomic disorders has still not been fully established and is subject to debate and controversy. The clinical validity of genetic variants that have been shown to be highly correlated with specific neurogenomic disorders is often a major cause of concern. The interpretation of these test results, and subsequent decision making, are a complicated undertaking given the polygenic nature of many of these disorders. Complicating things further, it has been shown that pre-emptive intervention in major psychiatric disorders does not always reduce the risk for the disorder. Such intervention might not even be available for at-risk offspring of affected adults, thereby limiting the 'medical actionability' of the data. Ethical concerns have also been raised regarding the safeguarding of personal genomic information, and how best to approach the burden of incidental findings and family risk assessment. Consanguinity and inbreeding can lead to selective enrichment of rare, otherwise low penetrance genetic mutations attributed to various symptoms of neurogenomic disorders. Thus, the interpretation of family-specific genetic mutations and/or network-level disruptions in the onset of a rare psychiatric disorder requires careful consideration of the motivations of participants included in the study. That said, these issues can be addressed by effective education and counseling, and collection of genomic data from patients with psychiatric disorders should not be disqualified solely on this basis.
The data itself serves as a dynamic health resource and can significantly further our understanding of the genomic basis of several psychiatric disorders. See also Neuroinformatics 1000 Genomes Project Genome-wide association study References Genetics Neuroinformatics
Neurogenomics
[ "Biology" ]
7,792
[ "Bioinformatics", "Genetics", "Neuroinformatics" ]
24,220,458
https://en.wikipedia.org/wiki/Super%20Tonks%E2%80%93Girardeau%20gas
In physics, the super Tonks–Girardeau gas represents an excited quantum gas phase with strong attractive interactions in a one-dimensional spatial geometry. Usually, strongly attractive quantum gases are expected to form dense particle clusters and lose all gas-like properties. But in 2005, it was proposed by Stefano Giorgini and co-workers that there is a many-body state of attractively interacting bosons that does not decay in one-dimensional systems. If prepared in a special way, this lowest gas-like state should be stable and show new quantum mechanical properties. Particles in a super-Tonks gas should be strongly correlated and show long-range order with a Luttinger liquid parameter K<1. Since each particle occupies a certain volume, the gas properties are similar to a classical gas of hard rods. Despite the mutual attraction, the single particle wave functions separate and the bosons behave similarly to fermions with repulsive, long-range interaction. To prepare the super-Tonks–Girardeau phase it is necessary to increase the repulsive interaction strength all the way through the Tonks–Girardeau regime up to infinity. Sudden switching from infinitely strong repulsive to infinitely attractive interactions stabilizes the gas against collapse and connects the ground state of the Tonks gas to the excited state of the super-Tonks gas. Experimental realization The super-Tonks–Girardeau gas was experimentally observed using an ultracold gas of cesium atoms. Reducing the magnitude of the attractive interactions caused the gas to become unstable to collapse into cluster-like bound states. When highly magnetic dysprosium atoms are used instead, repulsive dipolar interactions stabilize the gas. This enabled the creation of prethermal quantum many-body scar states via the topological pumping of these super-Tonks–Girardeau gases. References Condensed matter physics
Super Tonks–Girardeau gas
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
378
[ "Phases of matter", "Materials science", "Condensed matter physics", " molecular", "Atomic", "Matter", " and optical physics" ]
24,224,042
https://en.wikipedia.org/wiki/Dialdehyde%20starch
Dialdehyde starch is a polysaccharide derived from starch by chemical modification. It is prepared by periodate oxidation of starch. It has found use in the paper industry, where it has been shown to improve the wet strength of consumer products like toilet paper and paper towels. References Starch Organic polymers
Dialdehyde starch
[ "Chemistry" ]
68
[ "Organic compounds", "Polymer stubs", "Organic polymers", "Organic chemistry stubs" ]
24,224,338
https://en.wikipedia.org/wiki/Expocode
EXPOCODE, or the "expedition code", is a unique alphanumeric identifier defined by the National Oceanographic Data Center (NODC) of the US. The code defines a standard nomenclature for cruise labels of research vessels and intends to avoid confusion in oceanographic data management. The code was used by international projects (WOCE, CarboOcean) and is considered a de facto standard in the international hydrographic community beginning with the Climate Variability Program (CLIVAR) and the EU-Project Eurofleets. The format of an expocode for an oceanographic cruise is defined in the format NODCYYYYMMDD where: NODC is NOAA's National Oceanographic Data Center's 4-character research vessel identifier, consisting of country and ship code YYYYMMDD is the GMT date when the cruise left port. Example for a cruise of the US research vessel Nathaniel B. Palmer, starting on 2011-02-19: 320620110219 (Code of US = 32, code of Palmer = 06, date when cruise starts 2011-02-19) External links NODC country codes NODC ship codes CLIVAR and Carbon Hydrographic Data Office (CCHDO) Oceanography
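Because the layout is fixed-width, an expocode can be generated or decomposed mechanically. The sketch below simply splits the string according to the format described above; the function and field names are illustrative, not part of any NODC tooling:

```python
from datetime import date

def parse_expocode(expocode: str) -> dict:
    """Split an expocode into its NODC ship identifier and departure date."""
    nodc = expocode[:4]                 # 2-char country code + 2-char ship code
    start = date(int(expocode[4:8]), int(expocode[8:10]), int(expocode[10:12]))
    return {"country": nodc[:2], "ship": nodc[2:], "departure": start}

# The Nathaniel B. Palmer example from the text:
print(parse_expocode("320620110219"))
# {'country': '32', 'ship': '06', 'departure': datetime.date(2011, 2, 19)}
```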
Expocode
[ "Physics", "Environmental_science" ]
256
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
27,478,537
https://en.wikipedia.org/wiki/Leggett%20inequality
In physics, the Leggett inequalities, named for Anthony James Leggett, who derived them, are a related pair of mathematical expressions concerning the correlations of properties of entangled particles. (As published by Leggett, the inequalities were exemplified in terms of relative angles of elliptical and linear polarizations.) Inequalities They are fulfilled by a large class of physical theories based on particular non-local and realistic assumptions, that may be considered to be plausible or intuitive according to common physical reasoning. The Leggett inequalities are violated by quantum mechanical theory. The results of experimental tests in 2007 and 2010 have shown agreement with quantum mechanics rather than the Leggett inequalities. Given that experimental tests of Bell's inequalities have ruled out local realism in quantum mechanics, the violation of Leggett's inequalities is considered to have falsified realism in quantum mechanics. In quantum mechanics "realism" means "notion that physical systems possess complete sets of definite values for various parameters prior to, and independent of, measurement". See also CHSH inequality Leggett–Garg inequality References External links "The Reality Tests", Joshua Roebke, SEED, June 2008. "A quantum renaissance", Markus Aspelmeyer and Anton Zeilinger, Physics World, July 2008. "Quantum theory survives latest challenge", Kate McAlpine, Physics World, December 2010. Equations of physics Quantum information science Quantum measurement Physics theorems Inequalities
Leggett inequality
[ "Physics", "Mathematics" ]
315
[ "Mathematical theorems", "Equations of physics", "Mathematical objects", "Quantum mechanics", "Binary relations", "Equations", "Quantum measurement", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Physics theorems" ]
27,481,335
https://en.wikipedia.org/wiki/Plasmaron
In physics, the plasmaron was proposed by Lundqvist in 1967 as a quasiparticle arising in a system that has strong plasmon-electron interactions. In the original work, the plasmaron was proposed to describe a secondary peak (or satellite) in the photoemission spectral function of the electron gas. More precisely, it was defined as an additional zero of the quasi-particle equation. The same authors pointed out, in a subsequent work, that this extra solution might be an artifact of the approximations used; a more mathematical discussion is provided in that work. The plasmaron was also studied in more recent works in the literature. It was shown, also with the support of numerical simulations, that the plasmaron energy is an artifact of the approximation used to numerically compute the spectral function, e.g. solution of the Dyson equation for the many-body Green's function with a frequency-dependent GW self-energy. This approach gives rise to a spurious plasmaron peak instead of the plasmon satellite that can be measured experimentally. Despite this fact, experimental observation of a plasmaron was reported in 2010 for graphene, supported by earlier theoretical work. However, subsequent works argued that the theoretical interpretation of the experimental measurement was not correct, in agreement with the fact that the plasmaron is only an artifact of the GW self-energy used with the Dyson equation. The artificial nature of the plasmaron peak was also demonstrated via the comparison of experimental data and numerical simulations for the photoemission spectrum of bulk silicon. Other works on the plasmaron have been published in the literature. Observations of plasmaron peaks have also been reported in optical measurements of elemental bismuth and in other optical measurements. References Quasiparticles Plasmonics Physics
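To make the idea of an "additional zero of the quasi-particle equation" concrete, the following is a deliberately schematic, single-pole toy model rather than the actual GW self-energy of the electron gas; the coupling strength and pole position are invented for illustration. When Re Σ(ω) has a pole, the quasi-particle equation ω − ε − Re Σ(ω) = 0 acquires a second root, and it is such a satellite solution that was interpreted as the plasmaron:

```python
import numpy as np

# Toy quasi-particle equation: omega - eps - ReSigma(omega) = 0, with a
# single-pole model self-energy ReSigma(omega) = g**2 / (omega - omega_s).
eps = 0.0        # bare band energy (arbitrary units, illustrative)
omega_s = -1.0   # position of the self-energy pole (roughly eps minus a plasmon energy)
g = 0.3          # electron-plasmon coupling strength (invented)

# (omega - eps) * (omega - omega_s) = g**2  ->  a quadratic with two real roots.
roots = np.roots([1.0, -(eps + omega_s), eps * omega_s - g**2])
print(sorted(roots))
# One root stays near eps (the quasiparticle); the extra root near omega_s is the
# satellite ("plasmaron") solution that more careful treatments show to be spurious.
```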
Plasmaron
[ "Physics", "Chemistry", "Materials_science" ]
353
[ "Plasmonics", "Matter", "Surface science", "Nanotechnology", "Condensed matter physics", "Quasiparticles", "Solid state engineering", "Subatomic particles" ]
28,770,638
https://en.wikipedia.org/wiki/Irrigation%20informatics
Irrigation informatics is a newly emerging academic field: a cross-disciplinary science that uses informatics to study the information flows and data management related to irrigation. The field is one of many new informatics sub-specialities that use the science of information, the practice of information processing, and the engineering of information systems to advance a biophysical science or engineering field. Background Agricultural productivity increases are eagerly sought by governments and industry, spurred by the realisation that world food production must double in the 21st century to feed growing populations; because irrigation accounts for 36% of global food production while new land for irrigation growth is very limited, irrigation efficiency must increase. Since irrigation science is a mature and stable field, irrigation researchers are looking to cross-disciplinary science to bring about production gains, and informatics is one such science, along with others such as social science. Much of the driver for work in the area of irrigation informatics is the perceived success of other informatics fields such as health informatics. Current research Irrigation informatics is very much a part of the wider research into irrigation wherever information technology or data systems are used; however, the term informatics is not always used to describe research involving computer systems and data management, and information science or information technology may be used instead. As a result, a great number of irrigation informatics articles do not use the term irrigation informatics. There are currently no formal publications (journals) that focus on irrigation informatics; the publication most likely to present articles on the topic is Computers and Electronics in Agriculture, or one of the many irrigation science journals such as Irrigation Science. Recent work in the general area of irrigation informatics has mentioned the exact phrase "Irrigation Informatics", with at least one publication in scientific conference proceedings using it in its title. Current implementations Meteorological informatics, like all informatics, is increasingly being used to handle the growing volumes of data that are available from sensors, remote sensing and scientific models. The Australian Bureau of Meteorology has recently implemented an XML data format and standard, known as the Water Data Transfer Format (WDTF), to be used by Australian government agencies and meteorological data suppliers when delivering data to the Bureau. This format includes specifications for evapotranspiration and other weather parameters that are useful for irrigation and may be used through implementations of irrigation informatics. See also Environmental informatics References Informatics Land management Water management Computational science
Irrigation informatics
[ "Mathematics" ]
482
[ "Computational science", "Applied mathematics" ]
28,777,533
https://en.wikipedia.org/wiki/DNA%20and%20RNA%20codon%20tables
A codon table can be used to translate a genetic code into a sequence of amino acids. The standard genetic code is traditionally represented as an RNA codon table, because when proteins are made in a cell by ribosomes, it is messenger RNA (mRNA) that directs protein synthesis. The mRNA sequence is determined by the sequence of genomic DNA. In this context, the standard genetic code is referred to as translation table 1. It can also be represented in a DNA codon table. The DNA codons in such tables occur on the sense DNA strand and are arranged in a 5-to-3 direction. Different tables with alternate codons are used depending on the source of the genetic code, such as from a cell nucleus, mitochondrion, plastid, or hydrogenosome. There are 64 different codons in the genetic code and the below tables; most specify an amino acid. Three sequences, UAG, UGA, and UAA, known as stop codons, do not code for an amino acid but instead signal the release of the nascent polypeptide from the ribosome. In the standard code, the sequence AUG—read as methionine—can serve as a start codon and, along with sequences such as an initiation factor, initiates translation. In rare instances, start codons in the standard code may also include GUG or UUG; these codons normally represent valine and leucine, respectively, but as start codons they are translated as methionine or formylmethionine. The classical table/wheel of the standard genetic code is arbitrarily organized based on codon position 1. Saier, following observations from, showed that reorganizing the wheel based instead on codon position 2 (and reordering from UCAG to UCGA) better arranges the codons by the hydrophobicity of their encoded amino acids. This suggests that early ribosomes read the second codon position most carefully, to control hydrophobicity patterns in protein sequences. The first table—the standard table—can be used to translate nucleotide triplets into the corresponding amino acid or appropriate signal if it is a start or stop codon. The second table, appropriately called the inverse, does the opposite: it can be used to deduce a possible triplet code if the amino acid is known. As multiple codons can code for the same amino acid, the International Union of Pure and Applied Chemistry's (IUPAC) nucleic acid notation is given in some instances. Translation table 1 Standard RNA codon table As shown in the above table, NCBI table 1 includes the less-canonical start codons GUG and UUG. Inverse RNA codon table Standard DNA codon table Inverse DNA codon table Alternative codons in other translation tables The genetic code was once believed to be universal: a codon would code for the same amino acid regardless of the organism or source. However, it is now agreed that the genetic code evolves, resulting in discrepancies in how a codon is translated depending on the genetic source. For example, in 1981, it was discovered that the use of codons AUA, UGA, AGA and AGG by the coding system in mammalian mitochondria differed from the universal code. Stop codons can also be affected: in ciliated protozoa, the universal stop codons UAA and UAG code for glutamine. Four novel alternative genetic codes (numbered here 34–37) were discovered in bacterial genomes by Shulgina and Eddy, revealing the first sense codon changes in bacteria. The following table displays these alternative codons. 
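Because the standard code is a fixed mapping from nucleotide triplets to amino acids, it can be written down directly as a lookup table. The following Python sketch is an illustration only: it hard-codes a handful of entries from translation table 1 (not all 64 codons) and translates an mRNA reading frame codon by codon, stopping at the first stop codon.

# Minimal sketch of standard-code (table 1) translation; only a few codons are
# listed here for brevity -- a complete implementation would include all 64.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe", "GGC": "Gly",
    "GAA": "Glu", "AAA": "Lys", "UGG": "Trp",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Translate an mRNA string read 5'-to-3', one codon (three bases) at a time."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")  # unknown codons are flagged
        if amino_acid == "Stop":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']

The same structure, keyed on DNA sense-strand triplets (T in place of U), gives the corresponding DNA codon table.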
See also Bioinformatics List of genetic codes Notes References Further reading External links DNA codon chart organized in a wheel Gene expression Molecular genetics Protein biosynthesis Bioinformatics
DNA and RNA codon tables
[ "Chemistry", "Engineering", "Biology" ]
799
[ "Protein biosynthesis", "Biological engineering", "Gene expression", "Bioinformatics", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
28,779,877
https://en.wikipedia.org/wiki/Atmospheric%20optics
Atmospheric optics is "the study of the optical characteristics of the atmosphere or products of atmospheric processes .... [including] temporal and spatial resolutions beyond those discernible with the naked eye". Meteorological optics is "that part of atmospheric optics concerned with the study of patterns observable with the naked eye". Nevertheless, the two terms are sometimes used interchangeably. Meteorological optical phenomena, as described in this article, are concerned with how the optical properties of Earth's atmosphere cause a wide range of optical phenomena and visual perception phenomena. Examples of meteorological phenomena include: The blue color of the sky. This is from Rayleigh scattering, which sends more higher frequency/shorter wavelength (blue) sunlight into the eye of an observer than other frequencies/wavelength. The reddish color of the Sun when it is observed through a thick atmosphere, as during a sunrise or sunset. This is because long-wavelength (red) light is scattered less than blue light. The red light reaches the observer's eye, whereas the blue light is scattered out of the line of sight. Other colours in the sky, such as glowing skies at dusk and dawn. These are from additional particulate matter in the sky that scatter different colors at different angles. Halos, afterglows, coronas, polar stratospheric clouds, and sun dogs. These are from scattering, or refraction, by ice crystals and from other particles in the atmosphere. They depend on different particle sizes and geometries. Mirages. These are optical phenomena in which light rays are bent due to thermal variations in the refractive index of air, producing displaced or heavily distorted images of distant objects. Other optical phenomena associated with this include the Novaya Zemlya effect, in which the Sun has a distorted shape and rises earlier or sets later than predicted. A spectacular form of refraction, called the Fata Morgana, occurs with a temperature inversion, in which objects on the horizon or even beyond the horizon (e.g. islands, cliffs, ships, and icebergs) appear elongated and elevated, like "fairy tale castles". Rainbows. These result from a combination of internal reflection and dispersive refraction of light in raindrops. Because rainbows are seen on the opposite side of the sky from the Sun, rainbows are more visible the closer the Sun is to the horizon. For example, if the Sun is overhead, any possible rainbow appears near an observer's feet, making it hard to see, and involves very few raindrops between the observer's eyes and the ground, making any rainbow very sparse. Other phenomena that are remarkable because they are forms of visual illusions include: Crepuscular rays, Anticrepuscular rays, and The apparent size of celestial objects such as the Sun and Moon. History A book on meteorological optics was published in the sixteenth century, but there have been numerous books on the subject since about 1950. The topic was popularised by the wide circulation of a book by Marcel Minnaert, Light and Color in the Open Air, in 1954. Sun and Moon size In the Book of Optics (1011–22 AD), Ibn al-Haytham argued that vision occurs in the brain, and that personal experience has an effect on what people see and how they see, and that vision and perception are subjective. 
Arguing against Ptolemy's refraction theory for why people perceive the Sun and Moon larger at the horizon than when they are higher in the sky, he redefined the problem in terms of perceived, rather than real, enlargement. He said that judging the distance of an object depends on there being an uninterrupted sequence of intervening bodies between the object and the observer. Critically, Ibn al-Haytham said that judging the size of an object depends on its judged distance: an object that appears near appears smaller than an object having the same image size on the retina that appears far. With the overhead Moon, there is no uninterrupted sequence of intervening bodies. Hence it appears near and small. With a horizon Moon, there is an uninterrupted sequence of intervening bodies: all the objects between the observer and the horizon, so the Moon appears far and large. Through works by Roger Bacon, John Pecham, and Witelo based on Ibn al-Haytham's explanation, the Moon illusion gradually came to be accepted as a psychological phenomenon, with Ptolemy's theory being rejected in the 17th century. For over 100 years, research on the Moon illusion has been conducted by vision scientists who invariably have been psychologists specializing in human perception. After reviewing the many different explanations in their 2002 book The Mystery of the Moon Illusion, Ross and Plug concluded "No single theory has emerged victorious". Sky coloration The color of light from the sky is a result of Rayleigh scattering of sunlight, which results in a perceived blue color. On a sunny day, Rayleigh scattering gives the sky a blue gradient, darkest around the zenith and brightest near the horizon. Light rays coming from the zenith take the shortest-possible path through the air mass, yielding less scattering. Light rays coming from the horizon take the longest-possible path through the air, yielding more scattering. The blueness is at the horizon because the blue light coming from great distances is also preferentially scattered. This results in a red shift of the distant light sources that is compensated by the blue hue of the scattered light in the line of sight. In other words, the red light scatters also; if it does so at a point a great distance from the observer it has a much higher chance of reaching the observer than blue light. At distances nearing infinity, the scattered light is therefore white. Distant clouds or snowy mountaintops will seem yellow for that reason; that effect is not obvious on clear days, but very pronounced when clouds cover the line of sight, reducing the blue hue from scattered sunlight. The scattering due to molecule-sized particles (as in air) is greater in the forward and backward directions than it is in the lateral direction. Individual water droplets exposed to white light will create a set of colored rings. If a cloud is thick enough, scattering from multiple water droplets will wash out the set of colored rings and create a washed-out white color. Dust from the Sahara, moving around the southern periphery of the subtropical ridge, moves into the southeastern United States during the summer, which changes the sky from a blue to a white appearance and leads to an increase in red sunsets. Its presence negatively affects air quality during the summer since it adds to the count of airborne particulates. The sky can turn a multitude of colors such as red, orange, pink and yellow (especially near sunset or sunrise) and black at night.
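The wavelength dependence behind this coloration can be stated explicitly. The proportionality below is the standard Rayleigh scattering law, given here as background; the prefactor, which depends on particle size and refractive index, is omitted.

% Rayleigh scattering of sunlight by air molecules: scattered intensity versus wavelength
I_{\text{scattered}}(\lambda) \propto \frac{1}{\lambda^{4}}
% Worked example: comparing blue light (about 450 nm) with red light (about 700 nm),
%   (700/450)^{4} \approx 5.8,
% so blue light is scattered roughly six times more strongly than red. This is why the
% clear daytime sky appears blue, and why direct sunlight near the horizon is reddened.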
Scattering effects also partially polarize light from the sky, most pronounced at an angle 90° from the Sun. Sky luminance distribution models have been recommended by the International Commission on Illumination (CIE) for the design of daylighting schemes. Recent developments relate to “all sky models” for modelling sky luminance under weather conditions ranging from clear sky to overcast. Cloud coloration The color of a cloud, as seen from the Earth, tells much about what is going on inside the cloud. Dense deep tropospheric clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top. Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the gases. As a result, the cloud base can vary from a very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. Thin clouds may look white or appear to have acquired the color of their environment or background. High tropospheric and non-tropospheric clouds appear mostly white if composed entirely of ice crystals and/or supercooled water droplets. As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets, which may combine to form droplets large enough to fall as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, it may be that a percentage of the light which enters the cloud is not reflected back out before it is absorbed. A simple example of this is being able to see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black. Other colors occur naturally in clouds. Bluish-grey is the result of light scattering within the cloud. In the visible spectrum, blue and green are at the short end of light's visible wavelengths, while red and yellow are at the long end. The short rays are more easily scattered by water droplets, and the long rays are more likely to be absorbed. The bluish color is evidence that such scattering is being produced by rain-sized droplets in the cloud. A cumulonimbus cloud emitting green is a sign that it is a severe thunderstorm, capable of heavy rain, hail, strong winds and possible tornadoes. The exact cause of green thunderstorms is still unknown, but it could be due to the combination of reddened sunlight passing through very optically thick clouds. Yellowish clouds may occur in the late spring through early fall months during forest fire season. The yellow color is due to the presence of pollutants in the smoke. Yellowish clouds caused by the presence of nitrogen dioxide are sometimes seen in urban areas with high air pollution levels. Red, orange and pink clouds occur almost entirely at sunrise and sunset and are the result of the scattering of sunlight by the atmosphere. When the angle between the Sun and the horizon is less than 10 percent, as it is just after sunrise or just prior to sunset, sunlight becomes too red due to refraction for any colors other than those with a reddish hue to be seen. 
The clouds do not become that color; they are reflecting long and unscattered rays of sunlight, which are predominant at those hours. The effect is much like if a person were to shine a red spotlight on a white sheet. In combination with large, mature thunderheads this can produce blood-red clouds. Clouds look darker in the near-infrared because water absorbs solar radiation at those wavelengths. Halos A halo (ἅλως; also known as a nimbus, icebow or gloriole) is an optical phenomenon produced by the interaction of light from the Sun or Moon with ice crystals in the atmosphere, resulting in colored or white arcs, rings or spots in the sky. Many halos are positioned near the Sun or Moon, but others are elsewhere and even in the opposite part of the sky. They can also form around artificial lights in very cold weather when ice crystals called diamond dust are floating in the nearby air. There are many types of ice halos. They are produced by the ice crystals in cirrus or cirrostratus clouds high in the upper troposphere, at an altitude of to , or, during very cold weather, by ice crystals called diamond dust drifting in the air at low levels. The particular shape and orientation of the crystals are responsible for the types of halo observed. Light is reflected and refracted by the ice crystals and may split into colors because of dispersion. The crystals behave like prisms and mirrors, refracting and reflecting sunlight between their faces, sending shafts of light in particular directions. For circular halos, the preferred angular distance are 22 and 46 degrees from the ice crystals which create them. Atmospheric phenomena such as halos have been used as part of weather lore as an empirical means of weather forecasting, with their presence indicating an approach of a warm front and its associated rain. Sun dogs Sun dogs are a common type of halo, with the appearance of two subtly-colored bright spots to the left and right of the Sun, at a distance of about 22° and at the same elevation above the horizon. They are commonly caused by plate-shaped hexagonal ice crystals. These crystals tend to become horizontally aligned as they sink through the air, causing them to refract the sunlight to the left and right, resulting in the two sun dogs. As the Sun rises higher, the rays passing through the crystals are increasingly skewed from the horizontal plane. Their angle of deviation increases and the sundogs move further from the Sun. However, they always stay at the same elevation as the Sun. Sun dogs are red-colored at the side nearest the Sun. Farther out the colors grade to blue or violet. However, the colors overlap considerably and so are muted, rarely pure or saturated. The colors of the sun dog finally merge into the white of the parhelic circle (if the latter is visible). It is theoretically possible to predict the forms of sun dogs as would be seen on other planets and moons. Mars might have sundogs formed by both water-ice and CO2-ice. On the giant gas planets — Jupiter, Saturn, Uranus and Neptune — other crystals form the clouds of ammonia, methane, and other substances that can produce halos with four or more sundogs. Glory A common optical phenomenon involving water droplets is the glory. A glory is an optical phenomenon, appearing much like an iconic Saint's halo about the head of the observer, produced by light backscattered (a combination of diffraction, reflection and refraction) towards its source by a cloud of uniformly sized water droplets. 
A glory has multiple colored rings, with red colors on the outermost ring and blue/violet colors on the innermost ring. The angular distance is much smaller than that of a rainbow, ranging between 5° and 20°, depending on the size of the droplets. The glory can only be seen when the observer is directly between the Sun and the cloud of refracting water droplets. Hence, it is commonly observed while airborne, with the glory surrounding the airplane's shadow on clouds (this is often called The Glory of the Pilot). Glories can also be seen from mountains and tall buildings, when there are clouds or fog below the level of the observer, or on days with ground fog. The glory is related to the optical phenomenon anthelion. Rainbow A rainbow is a narrow, multicoloured semicircular arc due to dispersion of white light by a multitude of drops of water, usually in the form of rain, when they are illuminated by sunlight. Hence, when conditions are right, a rainbow always appears in the section of sky directly opposite the Sun. For an observer on the ground, the amount of the arc that is visible depends on the height of the sun above the horizon. It is a full semicircle with an angular radius of 42° when the sun is at the horizon. But as the sun rises in the sky, the arc grows smaller and ceases to be visible when the sun is more than 42° above the horizon. To see more than a semicircular bow, an observer would have to be able to look down on the drops, say from an airplane or a mountaintop. Rainbows are most common during afternoon rain showers in summer. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size that ranges from 40° to 42° with red on the outside and blue/violet on the inside. This is known as the primary bow. A fainter secondary bow is often visible some 10° outside the primary bow. It is due to two internal reflections within a drop. The resulting secondary arc is some 3° wide and the colours are reversed, with blue/violet on the outside. Two internal reflections produce a bow with angular size of 50.5° to 54° with blue/violet on the outside. The region between the two bows of a double rainbow is often noticeably darker than the sky within the primary bow and beyond the secondary bow. It is known as Alexander's Dark Band. The reason for this apparent reduction in sky brightness is that, while light from the sky enclosed within the primary bow comes from droplet reflection, and light beyond the secondary bow also comes from droplet reflection, there is no mechanism for the region between the bows to reflect light in the direction of the observer. Generally speaking, the larger the droplets, the brighter the bows. A rainbow spans a continuous spectrum of colors; the distinct bands (including the number of bands) are an artifact of human color vision, and no banding of any type is seen in a black-and-white photograph of a rainbow (only a smooth gradation of intensity to a maximum, then fading to a minimum at the other side of the arc). For colors seen by a normal human eye, the most commonly cited and remembered sequence, in English, is Isaac Newton's sevenfold red, orange, yellow, green, blue, indigo and violet (popularly memorized by mnemonics like Roy G. Biv). Mirage A mirage is a naturally occurring optical phenomenon in which light rays are bent to produce a displaced image of distant objects or the sky. The word comes to English via the French mirage, from the Latin mirare, meaning "to look at, to wonder at". This is the same root as for "mirror" and "to admire".
Also, it has its roots in the Arabic mirage. In contrast to a hallucination, a mirage is a real optical phenomenon which can be captured on camera, since light rays actually are refracted to form the false image at the observer's location. What the image appears to represent, however, is determined by the interpretive faculties of the human mind. For example, inferior images on land are very easily mistaken for the reflections from a small body of water. Mirages can be categorized as "inferior" (meaning lower), "superior" (meaning higher) and "Fata Morgana", one kind of superior mirage consisting of a series of unusually elaborate, vertically stacked images, which form one rapidly changing mirage. Green flashes and green rays are optical phenomena that occur shortly after sunset or before sunrise, when a green spot is visible, usually for no more than a second or two, above the Sun, or a green ray shoots up from the sunset point. Green flashes are actually a group of phenomena stemming from different causes, and some are more common than others. Green flashes can be observed from any altitude (even from an aircraft). They are usually seen at an unobstructed horizon, such as over the ocean, but are possible over cloud tops and mountain tops as well. A green flash from the Moon and bright planets at the horizon, including Venus and Jupiter, can also be observed. Fata Morgana This optical phenomenon occurs because rays of light are strongly bent when they pass through air layers of different temperatures in a steep thermal inversion where an atmospheric duct has formed. A thermal inversion is an atmospheric condition where warmer air exists in a well-defined layer above a layer of significantly cooler air. This temperature inversion is the opposite of what is normally the case; air is usually warmer close to the surface, and cooler higher up. In calm weather, a layer of significantly warmer air can rest over colder dense air, forming an atmospheric duct which acts like a refracting lens, producing a series of both inverted and erect images. A Fata Morgana is an unusual and very complex form of mirage, a form of superior mirage, which, like many other kinds of superior mirages, is seen in a narrow band right above the horizon. It is an Italian phrase derived from the vulgar Latin for "fairy" and the Arthurian sorcerer Morgan le Fay, from a belief that the mirage, often seen in the Strait of Messina, were fairy castles in the air, or false land designed to lure sailors to their death created by her witchcraft. Although the term Fata Morgana is sometimes incorrectly applied to other, more common kinds of mirages, the true Fata Morgana is not the same as an ordinary superior mirage, and is certainly not the same as an inferior mirage. Fata Morgana mirages tremendously distort the object or objects which they are based on, such that the object often appears to be very unusual, and may even be transformed in such a way that it is completely unrecognizable. A Fata Morgana can be seen on land or at sea, in polar regions or in deserts. This kind of mirage can involve almost any kind of distant object, including such things as boats, islands, and coastline. A Fata Morgana is not only complex, but also rapidly changing. The mirage comprises several inverted (upside down) and erect (right side up) images that are stacked on top of one another. Fata Morgana mirages also show alternating compressed and stretched zones. 
Novaya Zemlya effect The Novaya Zemlya effect is a polar mirage caused by high refraction of sunlight between atmospheric thermoclines. The Novaya Zemlya effect will give the impression that the sun is rising earlier or setting later than it actually should (astronomically speaking). Depending on the meteorological situation the effect will present the Sun as a line or a square (which is sometimes referred to as the "rectangular sun"), made up of flattened hourglass shapes. The mirage requires the rays of sunlight to pass through an inversion layer for hundreds of kilometres, and depends on the inversion layer's temperature gradient. The sunlight must bend along the Earth's curvature enough to raise the apparent elevation of the solar disk by at least 5 degrees for the disk to be seen. The first person to record the phenomenon was Gerrit de Veer, a member of Willem Barentsz' ill-fated third expedition into the polar region. Novaya Zemlya, the archipelago where de Veer first observed the phenomenon, lends its name to the effect. Crepuscular rays Crepuscular rays are near-parallel rays of sunlight moving through the Earth's atmosphere, but appear to diverge because of linear perspective. They often occur when objects such as mountain peaks or clouds partially shadow the Sun's rays, as a cloud cover does. Various airborne compounds scatter the sunlight and make these rays visible, due to diffraction, reflection, and scattering. Crepuscular rays can also occasionally be viewed underwater, particularly in arctic areas, appearing from ice shelves or cracks in the ice. They can also be seen on days when the sun strikes the clouds at just the right angle. There are three primary forms of crepuscular rays: Rays of light penetrating holes in low clouds (also called "Jacob's Ladder"). Beams of light diverging from behind a cloud. Pale, pinkish or reddish rays that radiate from below the horizon. These are often mistaken for light pillars. They are commonly seen near sunrise and sunset, when tall clouds such as cumulonimbus and mountains can be most effective at creating these rays. Anticrepuscular rays Anticrepuscular rays, while parallel in reality, are sometimes visible in the sky in the direction opposite the sun. They appear to converge again at the distant horizon. Atmospheric refraction Atmospheric refraction influences the apparent position of astronomical and terrestrial objects, usually causing them to appear higher than they actually are. For this reason navigators, astronomers, and surveyors observe positions when these effects are minimal. Sailors will only shoot a star when it is 20° or more above the horizon, astronomers try to schedule observations when an object is highest in the sky, and surveyors try to observe in the afternoon when refraction is minimum. Atmospheric diffraction Atmospheric diffraction is a visual effect caused when sunlight is bent by particles suspended in the air. See also Alpenglow Nacreous cloud Noctilucent cloud Sunset#Colors Sunrise#Colors References Applied and interdisciplinary physics Atmospheric optical phenomena Optics Scattering, absorption and radiative transfer (optics)
Atmospheric optics
[ "Physics", "Chemistry" ]
4,929
[ "Physical phenomena", "Earth phenomena", "Applied and interdisciplinary physics", "Optics", " absorption and radiative transfer (optics)", "Optical phenomena", "Scattering", " molecular", "Atomic", "Atmospheric optical phenomena", " and optical physics" ]
28,783,032
https://en.wikipedia.org/wiki/Critical%20Path%20Institute
Critical Path Institute (C-Path) is a non-profit organization created to improve the drug development process; its consortia include more than 1,600 scientists from government regulatory and research agencies, academia, patient organizations, and bio-pharmaceutical companies. Background The U.S. Food and Drug Administration (FDA) launched the Critical Path Initiative in 2004 to transform the way FDA-regulated medical products are developed, evaluated, and manufactured. C-Path was created as an independent organization to respond to the needs outlined in the FDA's initiative and with support and funding from the FDA, Science Foundation Arizona, and the Tucson, Arizona community. It operates as a neutral third party to enable scientists from the regulated industry and international regulatory agencies to work together with scientists from academia and patient groups to improve the drug development process. Approach In the interest of national and global public health, C-Path develops large databases of aggregated clinical trial data that can be used to study disease progression. These data are also used to develop and qualify biomarkers and clinical outcome assessment instruments that are shared with the greater community for use in drug development. C-Path also develops quantitative models to facilitate the design of efficient clinical trials. C-Path Programs C-Path programs are focused on reducing the time, cost, and risk of drug development and regulatory review. Where appropriate, C-Path forms consortia to improve the drug development process. The Predictive Safety Testing Consortium (PSTC) works to find improved safety biomarkers to detect drug induced toxicity. The Patient-Reported Outcome (PRO) Consortium develops, evaluates, and qualifies PRO instruments (e.g., questionnaires) for use in clinical trials designed to assess the safety and effectiveness of medical products. The Critical Path to TB Drug Regimens (CPTR) aims to accelerate the development of new, safe, and highly effective tuberculosis treatment regimens with shortened durations of therapy. The Polycystic Kidney Disease (PKD) Consortium evaluates the evidence supporting total kidney volume (TKV) as a biomarker for assessing the progression of autosomal dominant PKD. The Critical Path for Alzheimer's Disease (CPAD) aims to increase the efficiency of the development process of new treatments for Alzheimer disease (AD) and related neurodegenerative disorders with impaired cognition and function. The Critical Path for Parkinson's (CPP) works to improve the clinical trial process. The Data Collaboration Center (DCC) develops data solutions for scientific research. The Duchenne Regulatory Science Consortium (D-RSC) supports collaborative research through shared data access and drug development tools. The Electronic Patient-Reported Outcome Consortium (e-PRO) supports the collection of patient-focused outcomes data in clinical trials. The Huntington's Disease Regulatory Science Consortium (HD-RSC) aims to accelerate the regulatory approval of Huntington's disease therapies. The International Neonatal Consortium (INC) seeks to forge a predictable regulatory path for evaluating the safety and effectiveness of therapies for neonates. The Multiple Sclerosis Outcome Assessments Consortium (MSOAC) works to qualify a new measure of disability as a primary or secondary endpoint for future trials of MS therapies. The Type 1 Diabetes Consortium (T1D) works to qualify islet autoimmunity antibodies as prognostic biomarkers. 
The goal of the Transplant Therapeutics Consortium (TTC) is to accelerate the medical product development process for transplantation. The TB-Platform for Aggregation of Clinical TB Studies (TB-PACTS) curates and standardizes Phase III tuberculosis (TB) clinical trial data. The successfully completed Pediatric Trials Consortium worked toward the efficient evaluation of innovative drugs, biologics, and devices for children. Location C-Path is headquartered in Tucson, Arizona. Raymond L. Woosley, M.D., Ph.D. founded C-Path in 2005 and is President Emeritus. Kristen Swingle is currently C-Path's Interim President and Chief Operating Officer. The Board of Directors includes Robert M. Califf, Wainwright Fishburn, Timothy R Franson, Kay Holcombe, Jeffrey E Jacob, former Pfizer CFO Alan Levin and biochemist Paula J. Olsiewski. References External links Critical Path Institute U.S. Food and Drug Administration National Institutes of Health European Medicines Agency Pharmaceutical research institutes Pharmaceutical industry Drug discovery
Critical Path Institute
[ "Chemistry", "Biology" ]
893
[ "Pharmacology", "Life sciences industry", "Drug discovery", "Pharmaceutical industry", "Medicinal chemistry" ]
34,321,107
https://en.wikipedia.org/wiki/Unevenly%20spaced%20time%20series
In statistics, signal processing, and econometrics, an unevenly (or unequally or irregularly) spaced time series is a sequence of observation time and value pairs (tn, Xn) in which the spacing of observation times is not constant. Unevenly spaced time series naturally occur in many industrial and scientific domains: natural disasters such as earthquakes, floods, or volcanic eruptions typically occur at irregular time intervals. In observational astronomy, measurements such as spectra of celestial objects are taken at times determined by weather conditions, availability of observation time slots, and suitable planetary configurations. In clinical trials (or more generally, longitudinal studies), a patient's state of health may be observed only at irregular time intervals, and different patients are usually observed at different points in time. Wireless sensors in the Internet of things often transmit information only when a state changes to conserve battery life. There are many more examples in climatology, ecology, high-frequency finance, geology, and signal processing. Analysis A common approach to analyzing unevenly spaced time series is to transform the data into equally spaced observations using some form of interpolation - most often linear - and then to apply existing methods for equally spaced data. However, transforming data in such a way can introduce a number of significant and hard to quantify biases, especially if the spacing of observations is highly irregular. Ideally, unevenly spaced time series are analyzed in their unaltered form. However, most of the basic theory for time series analysis was developed at a time when limitations in computing resources favored an analysis of equally spaced data, since in this case efficient linear algebra routines can be used and many problems have an explicit solution. As a result, fewer methods currently exist specifically for analyzing unevenly spaced time series data. The least-squares spectral analysis methods are commonly used for computing a frequency spectrum from such time series without any data alterations. Software Traces is a Python library for analysis of unevenly spaced time series in their unaltered form. CRAN Task View: Time Series Analysis is a list describing many R (programming language) packages dealing with both unevenly (or irregularly) and evenly spaced time series and many related aspects, including uncertainty. MessyTimeSeries and MessyTimeSeriesOptim are Julia packages dedicated to incomplete time series. See also Least-squares spectral analysis Non-uniform discrete Fourier transform References Statistical signal processing Time series
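As a concrete illustration of the interpolation-based approach described in the Analysis section, the sketch below resamples an unevenly spaced series onto a regular grid with linear interpolation using NumPy. The observation times and values are made up for the example, and the caveat from the article applies: this transformation can bias subsequent analysis when the original spacing is highly irregular.

import numpy as np

# Unevenly spaced observations: (t_n, X_n) pairs with irregular gaps between times.
t = np.array([0.0, 0.4, 1.1, 1.3, 2.9, 3.0, 4.7])
x = np.array([1.0, 1.2, 0.8, 0.9, 1.5, 1.4, 2.0])

# Resample onto an equally spaced grid using linear interpolation.
t_regular = np.arange(t.min(), t.max(), 0.5)
x_regular = np.interp(t_regular, t, x)

print(list(zip(t_regular.round(2), x_regular.round(3))))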
Unevenly spaced time series
[ "Engineering" ]
488
[ "Statistical signal processing", "Engineering statistics" ]
34,324,580
https://en.wikipedia.org/wiki/Regional%20Centre%20for%20Biotechnology
The Regional Centre for Biotechnology (RCB) is an autonomous institution of education, training and research established under the auspices of United Nations Educational, Scientific and Cultural Organization (UNESCO) and Department of Biotechnology (DBT, India). The Parliament has passed the Regional Centre for Biotechnology Bill, 2016 to provide statutory status to the existing institution. Dr. Arvind Sahu is the executive director of RCB. Background The Government of India and UNESCO signed a Memorandum of Understanding (MoU) on 14 July 2006 to establish RCB. The centre is now recognized as a "Category II Centre" by "the principles and guidelines for the establishment and functioning of UNESCO Institutes and Centres". Following approval from the Union Cabinet, the centre became operational from its interim campus at Gurgaon, Haryana from 20 April 2009. Research areas RCB engages in contemporary research at the interface of disciplines constituting biotechnology in its broadest definition. Research programmes aim to integrate science, engineering, medicine and agriculture in biotechnology and emphasize on their relevance to the regional societies. A broad range of research areas planned include: Biomedical Sciences Molecular and Cellular Biology Bioengineering and Devices Biophysics, Biochemistry and Structural Biology Climate science, Agriculture and Environment Biotechnology Regulatory Affairs, IPR and Policy Academics and training Multidisciplinary doctoral programme Has been instituted for students who have completed masters in any relevant discipline of natural sciences, medicine, engineering and other related sciences. RCB recruits Junior Research Fellowships (JRFs) twice during an academic year and already mentors 31 Research Fellows. Young Investigator (YI) Post-Doctoral Programme (RCB-YI award) has been initiated to nurture outstanding recent PhDs with innovative ideas and the drive to pursue novel discoveries under the mentorship of RCB faculty. RCB-YI award has been instituted for both Indian and Foreign nationals on the competitive basis with initial appointment for three years which is extendable on rigorous review for additional two years. Short-term training programmes These are conducted at RCB by inducting post-graduate students of science from various universities/institutions/colleges to carry out their project/ dissertation work towards partial fulfillment of their postgraduate degrees. Advanced workshops/training Courses are arranged by RCB periodically throughout the year, covering various frontier areas that could be broadly classified under biotechnology keeping in view the multi-disciplinary nature of the subject. During the week-long workshops, expert in-house and invited faculty deliver lectures and provide hands-on training to expose the participants to contemporary science/technology and explore their utility for addressing research problems in specific scientific areas. Research facilities RCB has established facilities in its interim campus at Gurgaon where it is functioning. Centre is expected to expand further when it moves to its permanent campus in Faridabad, within the NCR Biotech Science Cluster, later this year. 
RCB has established major specialized facilities that include: high resolution optical imaging (Atomic Force Microscopy, Confocal Microscopy, Fluorescence Microscopy), synthesis chemistry facilities, Protein sequencer, Protein purification systems, biophysical (Isothermal Titration Calorimetry, Differential Scanning Calorimetry, Circular Dichroism, SPR, NMR, FTIR, Dynamic Light Scattering), structural biology (Crystallization Robotics, X-ray Diffraction), proteomics (ABSciEx Triple TOF 5600), flow cytometry, plant, bacterial and animal cell/ tissue culture facilities, tissue sectioning and insect culture facilities. In addition, researchers at RCB have access to the Advanced Technology Platform Center (ATPC) of the Biotech Science Cluster Faridabad. The ATPC already houses an operational flow cytometry and proteomics facilities. Other high-end facilities planned to be operational in near future include complete optical imaging, electron microscopy and next-generation sequencing. Indian Biological Data Centre Department of Biotechnology announces the launch of first Indian Biological Data Centre (IBDC) at Regional Centre for Biotechnology, Faridabad. A national facility to store, manage, archive and distribute all kind of biological data. Partnerships Towards fulfilling its mandate, RCB is collaborating with various national and international institutions of repute. The partnerships are meant for exchange of ideas, information sharing, training, networking, conducting scientific colloquia, workshops, academic exchange programmes and student study visits within (and outside) India and for students of the Asia-Pacific region. RCB and National Institute of Advanced Industrial Science and Technology (AIST), Japan announced a partnership to further capacity building initiatives in bio-imaging and biotechnology. The agreement offers an excellent opportunity for both the institutions in capacity building, training and research collaborations, benefitting young scientists not only in India and Japan, but also from the UNESCO member countries in the Asia-Pacific and SAARC regions. In its continuing effort to fulfill the core mandate, RCB is actively engaged in a range of research and related activities in partnership with other academic institutions, which form part of the NCR Biotech Science Cluster, Faridabad. Shared facilities such as Advanced Technology Platform Centre (ATPC), and Bioincubators (supported by Biotechnology Industry Research Assistance Council (BIRAC)), which is meant to support the budding biotechnology entrepreneurs, are being established. Governance RCB is an institution of international importance in biotechnology, education, training and research. The Board of Governors (BoG), composed of eminent scientists and specialists in the field of biotechnology, representing Government of India and UNESCO are responsible for the governance of the Centre. The Programme Advisory Committee (PAC), composed of experts within India and abroad, provide support and guidance for the centre's education, training and research programmes. On behalf of the Governing body, the Executive Director executes policies and functions of the Centre with the guidance of a duly constituted Executive Committee. References www.mrcindia.org/ National Institute of Malaria Research under Indian Council of Medical Research New Delhi. 
External links Regional Centre for Biotechnology Website RCB - Profile IBDC Website Research institutes in Delhi Universities and colleges in Delhi Biotechnology in India 2009 establishments in Haryana
Regional Centre for Biotechnology
[ "Biology" ]
1,229
[ "Biotechnology in India", "Biotechnology by country" ]
34,327,576
https://en.wikipedia.org/wiki/Low-rank%20approximation
In mathematics, low-rank approximation refers to the process of approximating a given matrix by a matrix of lower rank. More precisely, it is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, often there are other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure. Low-rank approximation is closely related to numerous other techniques, including principal component analysis, factor analysis, total least squares, latent semantic analysis, orthogonal regression, and dynamic mode decomposition. Definition Given structure specification , vector of structure parameters , norm , and desired rank , Applications Linear system identification, in which case the approximating matrix is Hankel structured. Machine learning, in which case the approximating matrix is nonlinearly structured. Recommender systems, in which cases the data matrix has missing values and the approximation is categorical. Distance matrix completion, in which case there is a positive definiteness constraint. Natural language processing, in which case the approximation is nonnegative. Computer algebra, in which case the approximation is Sylvester structured. Basic low-rank approximation problem The unstructured problem with fit measured by the Frobenius norm, i.e., has an analytic solution in terms of the singular value decomposition of the data matrix. The result is referred to as the matrix approximation lemma or Eckart–Young–Mirsky theorem. This problem was originally solved by Erhard Schmidt in the infinite dimensional context of integral operators (although his methods easily generalize to arbitrary compact operators on Hilbert spaces) and later rediscovered by C. Eckart and G. Young. L. Mirsky generalized the result to arbitrary unitarily invariant norms. Let be the singular value decomposition of , where is the rectangular diagonal matrix with the singular values . For a given , partition , , and as follows: where is , is , and is . Then the rank- matrix, obtained from the truncated singular value decomposition is such that The minimizer is unique if and only if . Proof of Eckart–Young–Mirsky theorem (for spectral norm) Let be a real (possibly rectangular) matrix with . Suppose that is the singular value decomposition of . Recall that and are orthogonal matrices, and is an diagonal matrix with entries such that . We claim that the best rank- approximation to in the spectral norm, denoted by , is given by where and denote the th column of and , respectively. First, note that we have Therefore, we need to show that if where and have columns then . Since has columns, then there must be a nontrivial linear combination of the first columns of , i.e., such that . Without loss of generality, we can scale so that or (equivalently) . Therefore, The result follows by taking the square root of both sides of the above inequality. Proof of Eckart–Young–Mirsky theorem (for Frobenius norm) Let be a real (possibly rectangular) matrix with . Suppose that is the singular value decomposition of . 
We claim that the best rank approximation to in the Frobenius norm, denoted by , is given by where and denote the th column of and , respectively. First, note that we have Therefore, we need to show that if where and have columns then By the triangle inequality with the spectral norm, if then . Suppose and respectively denote the rank approximation to and by the SVD method described above. Then, for any Since , when and we conclude that for Therefore, as required. Weighted low-rank approximation problems The Frobenius norm weights uniformly all elements of the approximation error . Prior knowledge about the distribution of the errors can be taken into account by considering the weighted low-rank approximation problem where vectorizes the matrix column-wise and is a given positive (semi)definite weight matrix. The general weighted low-rank approximation problem does not admit an analytic solution in terms of the singular value decomposition and is solved by local optimization methods, which provide no guarantee that a globally optimal solution is found. In the case of uncorrelated weights, the weighted low-rank approximation problem can also be formulated in this way: for a non-negative matrix and a matrix we want to minimize over matrices, , of rank at most . Entry-wise Lp low-rank approximation problems Let . For , the fastest algorithm runs in time. One of the important ideas used is called Oblivious Subspace Embedding (OSE); it was first proposed by Sarlós. For , it is known that this entry-wise L1 norm is more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. It is natural to seek to minimize . For and , there are some algorithms with provable guarantees. Distance low-rank approximation problem Let and be two point sets in an arbitrary metric space. Let represent the matrix where . Such distance matrices are commonly computed in software packages and have applications to learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In an attempt to reduce their description size, one can study low rank approximation of such matrices. Distributed/Streaming low-rank approximation problem Low-rank approximation problems in the distributed and streaming setting have been considered in the literature. Image and kernel representations of the rank constraints Using the equivalences and the weighted low-rank approximation problem becomes equivalent to the parameter optimization problems and where is the identity matrix of size . Alternating projections algorithm The image representation of the rank constraint suggests a parameter optimization method in which the cost function is minimized alternately over one of the variables ( or ) with the other one fixed. Although simultaneous minimization over both and is a difficult biconvex optimization problem, minimization over one of the variables alone is a linear least squares problem and can be solved globally and efficiently. The resulting optimization algorithm (called alternating projections) is globally convergent with a linear convergence rate to a locally optimal solution of the weighted low-rank approximation problem. A starting value for the (or ) parameter should be given. The iteration is stopped when a user-defined convergence condition is satisfied.
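Before the Matlab implementations below, the closed-form solution from the Eckart–Young–Mirsky theorem discussed earlier can be illustrated numerically. The following NumPy sketch (an illustration with an arbitrary random matrix and target rank, not part of the original article) builds the best rank-k approximation from the truncated singular value decomposition and checks that the spectral-norm error equals the (k+1)-th singular value.

import numpy as np

def truncated_svd_approx(a, k):
    """Best rank-k approximation in the spectral and Frobenius norms
    (Eckart-Young-Mirsky): keep only the k largest singular triplets."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

rng = np.random.default_rng(0)
d = rng.standard_normal((8, 6))       # data matrix
dh = truncated_svd_approx(d, k=2)     # rank-2 approximation

print(np.linalg.matrix_rank(dh))                 # 2
print(np.linalg.norm(d - dh, 2))                 # spectral-norm error
print(np.linalg.svd(d, compute_uv=False)[2])     # equals the 3rd singular value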
Matlab implementation of the alternating projections algorithm for weighted low-rank approximation:

function [dh, f] = wlra_ap(d, w, p, tol, maxiter)
[m, n] = size(d);
r = size(p, 2);
f = inf;
for i = 2:maxiter
    % minimization over L
    bp = kron(eye(n), p);
    vl = (bp' * w * bp) \ bp' * w * d(:);
    l = reshape(vl, r, n);
    % minimization over P
    bl = kron(l', eye(m));
    vp = (bl' * w * bl) \ bl' * w * d(:);
    p = reshape(vp, m, r);
    % check exit condition
    dh = p * l;
    dd = d - dh;
    f(i) = dd(:)' * w * dd(:);
    if abs(f(i - 1) - f(i)) < tol, break, end
endfor

Variable projections algorithm The alternating projections algorithm exploits the fact that the low rank approximation problem, parameterized in the image form, is bilinear in the variables or . The bilinear nature of the problem is effectively used in an alternative approach, called variable projections. Consider again the weighted low rank approximation problem, parameterized in the image form. Minimization with respect to the variable (a linear least squares problem) leads to the closed form expression of the approximation error as a function of The original problem is therefore equivalent to the nonlinear least squares problem of minimizing with respect to . For this purpose, standard optimization methods, e.g. the Levenberg–Marquardt algorithm, can be used. Matlab implementation of the variable projections algorithm for weighted low-rank approximation:

function [dh, f] = wlra_varpro(d, w, p, tol, maxiter)
prob = optimset();
prob.solver = 'lsqnonlin';
prob.options = optimset('MaxIter', maxiter, 'TolFun', tol);
prob.x0 = p;
prob.objective = @(p) cost_fun(p, d, w);
[p, f] = lsqnonlin(prob);
[f, vl] = cost_fun(p, d, w);
dh = p * reshape(vl, size(p, 2), size(d, 2));

function [f, vl] = cost_fun(p, d, w)
bp = kron(eye(size(d, 2)), p);
vl = (bp' * w * bp) \ bp' * w * d(:);
f = d(:)' * w * (d(:) - bp * vl);

The variable projections approach can also be applied to low rank approximation problems parameterized in the kernel form. The method is effective when the number of eliminated variables is much larger than the number of optimization variables left at the stage of the nonlinear least squares minimization. Such problems occur in system identification, parameterized in the kernel form, where the eliminated variables are the approximating trajectory and the remaining variables are the model parameters. In the context of linear time-invariant systems, the elimination step is equivalent to Kalman smoothing. A Variant: convex-restricted low rank approximation Usually, we want our new solution not only to be of low rank, but also to satisfy other convex constraints due to application requirements. The problem of interest would be as follows, This problem has many real-world applications, including recovering a good solution from an inexact (semidefinite programming) relaxation. If the additional constraint is linear, for example requiring all elements to be nonnegative, the problem is called structured low rank approximation. The more general form is named convex-restricted low rank approximation. This formulation is helpful in solving many problems. However, it is challenging due to the combination of the convex and nonconvex (low-rank) constraints. Different techniques were developed based on different realizations of . However, the Alternating Direction Method of Multipliers (ADMM) can be applied to solve the nonconvex problem with convex objective function, rank constraints and other convex constraints, and is thus suitable to solve our above problem.
Moreover, unlike the general nonconvex problems, ADMM will guarantee to converge a feasible solution as long as its dual variable converges in the iterations. See also CUR matrix approximation is made from the rows and columns of the original matrix References M. T. Chu, R. E. Funderlic, R. J. Plemmons, Structured low-rank approximation, Linear Algebra and its Applications, Volume 366, 1 June 2003, Pages 157–172 External links C++ package for structured-low rank approximation Numerical linear algebra Dimension reduction Mathematical optimization
Low-rank approximation
[ "Mathematics" ]
2,370
[ "Mathematical optimization", "Mathematical analysis" ]
34,328,568
https://en.wikipedia.org/wiki/SAMtools
SAMtools is a set of utilities for interacting with and post-processing short DNA sequence read alignments in the SAM (Sequence Alignment/Map), BAM (Binary Alignment/Map) and CRAM formats, written by Heng Li. These files are generated as output by short read aligners like BWA. Both simple and advanced tools are provided, supporting complex tasks like variant calling and alignment viewing as well as sorting, indexing, data extraction and format conversion. SAM files can be very large (tens of gigabytes is common), so compression is used to save space. SAM files are human-readable text files, and BAM files are simply their binary equivalent, whilst CRAM files are a restructured column-oriented binary container format. BAM files are typically compressed and more efficient for software to work with than SAM. SAMtools makes it possible to work directly with a compressed BAM file, without having to uncompress the whole file. Additionally, since the format for a SAM/BAM file is somewhat complex - containing reads, references, alignments, quality information, and user-specified annotations - SAMtools reduces the effort needed to use SAM/BAM files by hiding low-level details. As third-party projects were trying to use code from SAMtools despite it not being designed to be embedded in that way, the decision was taken in August 2014 to split the SAMtools package into a stand-alone software library with a well-defined API (HTSlib), a project for variant calling and manipulation of variant data (BCFtools), and the stand-alone SAMtools package for working with sequence alignment data. Usage and commands Like many Unix commands, SAMtools commands follow a stream model, where data runs through each command as if carried on a conveyor belt. This allows combining multiple commands into a data processing pipeline. Although the final output can be very complex, only a limited number of simple commands are needed to produce it. If not specified, the standard streams (stdin, stdout, and stderr) are assumed. Data sent to stdout are printed to the screen by default but are easily redirected to another file using the normal Unix redirectors (> and >>), or to another command via a pipe (|). SAMtools commands SAMtools provides the following commands, each invoked as "samtools" followed by the command name. view The view command filters SAM or BAM formatted data. Using options and arguments it understands what data to select (possibly all of it) and passes only that data through. Input is usually a sam or bam file specified as an argument, but could be sam or bam data piped from any other command. Possible uses include extracting a subset of data into a new file, converting between BAM and SAM formats, and just looking at the raw file contents. The order of extracted reads is preserved. sort The sort command sorts a BAM file based on its position in the reference, as determined by its alignment. The element + coordinate in the reference that the first matched base in the read aligns to is used as the key to order it by. [TODO: verify]. The sorted output is dumped to a new file by default, although it can be directed to stdout (using the -o option). As sorting is memory intensive and BAM files can be large, this command supports a sectioning mode (with the -m option) to use at most a given amount of memory and generate multiple output files. These files can then be merged to produce a complete sorted BAM file [TODO - investigate the details of this more carefully]. index The index command creates a new index file that allows fast look-up of data in a (sorted) SAM or BAM.
Like an index on a database, the generated index file (for example sorted.bam.bai for a BAM file) allows programs that can read it to work more efficiently with the data in the associated files. tview The tview command starts an interactive ascii-based viewer that can be used to visualize how reads are aligned to specified small regions of the reference genome. Compared to a graphics-based viewer like IGV, it has few features. Within the view, it is possible to jump to different positions along reference elements (using 'g') and to display help information ('?'). mpileup The mpileup command produces a pileup format (or BCF) file giving, for each genomic coordinate, the overlapping read bases and indels at that position in the input BAM file(s). This can be used for SNP calling, for example. flagstat Examples view samtools view sample.bam > sample.sam Convert a bam file into a sam file. samtools view -bS sample.sam > sample.bam Convert a sam file into a bam file. The -b option requests compressed BAM output and the -S option indicates that the input is SAM. samtools view sample_sorted.bam "chr1:10-13" Extract all the reads aligned to the range specified, which are those that are aligned to the reference element named chr1 and cover its 10th, 11th, 12th or 13th base. The matching records are written to standard output. An index of the input file is required for extracting reads according to their mapping position in the reference genome, as created by samtools index. samtools view -h -b sample_sorted.bam "chr1:10-13" > tiny_sorted.bam Extract the same reads as above, but instead of displaying them, write them to a new bam file, tiny_sorted.bam. The -b option makes the output compressed and the -h option causes the SAM headers to be output also. These headers include a description of the reference that the reads in sample_sorted.bam were aligned to and will be needed if the tiny_sorted.bam file is to be used with some of the more advanced SAMtools commands. The order of extracted reads is preserved. tview samtools tview sample_sorted.bam Start an interactive viewer to visualize a small region of the reference, the reads aligned, and mismatches. Within the view, one can jump to a new location by typing g: followed by a location. If the reference element name and the following colon are left off, the current reference element is used, i.e. if only a position is typed after the previous "goto" command, the viewer jumps to the region 200 base pairs down on chr1. Typing ? brings up help information for scroll movement, colors, views, ... samtools tview -p chrM:1 sample_chrM.bam UCSC_hg38.fa Set the start position and supply a reference sequence for comparison. samtools tview -d T -p chrY:10,000,000 sample_chrY.bam UCSC_hg38.fa >> save.txt samtools tview -d H -p chrY:10,000,000 sample_chrY.bam UCSC_hg38.fa >> save.html Save the screen output to .txt or .html. sort samtools sort -o sorted_out unsorted_in.bam Read the specified unsorted_in.bam as input, sort it by aligned read position, and write it out to sorted_out. The type of output can be either sam, bam, or cram, and will be determined automatically from sorted_out's file extension. samtools sort -m 5000000 unsorted_in.bam sorted_out Read the specified unsorted_in.bam as input, sort it in blocks limited by the given amount of memory, and write output to a series of bam files named sorted_out.0000.bam, sorted_out.0001.bam, etc., where all bam 0 reads come before any bam 1 read, etc. index samtools index sorted.bam Creates an index file, sorted.bam.bai, for the sorted.bam file.
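Because the commands compose well in sequence, they are also straightforward to drive from scripts. The following is a minimal, hedged Python sketch (not part of SAMtools; the file names, region and helper function are hypothetical) that chains the sort, index and view commands shown above using the standard subprocess module.

# Minimal sketch: run the samtools commands shown above from Python.
# Assumes samtools is on the PATH; input/output names are illustrative only.
import subprocess

def sort_index_and_slice(bam_in, bam_sorted, region, sam_out):
    # coordinate-sort the alignments
    subprocess.run(["samtools", "sort", "-o", bam_sorted, bam_in], check=True)
    # build the index (bam_sorted + ".bai") needed for region queries
    subprocess.run(["samtools", "index", bam_sorted], check=True)
    # extract reads overlapping the region, keeping headers, as SAM text
    with open(sam_out, "w") as out:
        subprocess.run(["samtools", "view", "-h", bam_sorted, region],
                       check=True, stdout=out)

sort_index_and_slice("example.bam", "example_sorted.bam", "chr1:10-13", "slice.sam")

The same result can be obtained on the command line by running the three commands in sequence.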
See also DNA sequencing Pileup format References External links Home page for the SAMtools project SAMtools commands Wiki page at SeqAnswers for the SAMtools software (stub as of 2012-02-26.) broken link Mathematical notes on SAMtools algorithms from its primary author Short, somewhat specialized tutorial on SAMtools from EMBL. broken link Bioinformatics algorithms Bioinformatics software DNA sequencing Public-domain software
SAMtools
[ "Chemistry", "Biology" ]
1,760
[ "Bioinformatics algorithms", "Bioinformatics software", "Bioinformatics", "Molecular biology techniques", "DNA sequencing" ]
34,329,151
https://en.wikipedia.org/wiki/Fundamental%20ephemeris
A fundamental ephemeris of the Solar System is a model of the objects of the system in space, with all of their positions and motions accurately represented. It is intended to be a high-precision primary reference for prediction and observation of those positions and motions, and which provides a basis for further refinement of the model. It is generally not intended to cover the entire life of the Solar System; usually a short-duration time span, perhaps a few centuries, is represented to high accuracy. Some long ephemerides cover several millennia to medium accuracy. They are published by the Jet Propulsion Laboratory as Development Ephemeris. The latest releases include DE430 which covers planetary and lunar ephemeris from Dec 21, 1549 to Jan 25, 2650 with high precision and is intended for general use for modern time periods . DE431 was created to cover a longer time period Aug 15, -13200 to March 15, 17191 with slightly less precision for use with historic observations and far reaching forecasted positions. DE432 was released as a minor update to DE430 with improvements to the Pluto barycenter in support of the New Horizons mission. Description The set of physical laws and numerical constants used in the calculation of the ephemeris must be self-consistent and precisely specified. The ephemeris must be calculated strictly in accordance with this set, which represents the most current knowledge of all relevant physical forces and effects. Current fundamental ephemerides are typically released with exact descriptions of all mathematical models, methods of computation, observational data, and adjustment to the observations at the time of their announcement. This may not have been the case in the past, as fundamental ephemerides were then computed from a collection of methods derived over a span of decades by many researchers. The independent variable of the ephemeris is always time. In the case of the most current ephemerides, it is a relativistic coordinate time scale equivalent to the IAU definition of TCB. In the past, mean solar time (before the discovery of the non-uniform rotation of the Earth) and ephemeris time (before the implementation of relativistic gravitational equations) were used. The remainder of the ephemeris can consist of either the mathematical equations and initial conditions which describe the motions of the bodies of the Solar System, of tabulated data calculated from those equations and conditions, or of condensed mathematical representations of the tabulated data. A fundamental ephemeris is the basis from which apparent ephemerides, phenomena, and orbital elements are computed for astronomical, nautical, and surveyors' almanacs. Apparent ephemerides give positions and motions of Solar System bodies as seen by observers from the surface of Earth, and are useful for astronomers, navigators, and surveyors in planning observations and in reducing the data acquired, although much of the work of latter two has been supplanted by GPS technology. Phenomena are events related to the configurations of Solar System bodies, for instance rise and set times, phases, eclipses and occultations, and have many civil and scientific applications. Orbital elements are descriptions of the motion of a body at a particular instant, used for further short-time-span calculation of the body's position when high accuracy is not required. History Astronomers have been tasked with computing accurate ephemerides, originally for purposes of sea navigation, from at least the 18th century. 
In England, Charles II founded the Royal Observatory in 1675, which began publishing The Nautical Almanac in 1766. In France, the Bureau des Longitudes was founded in 1795 to publish the Connaissance des Temps. The early fundamental ephemerides of these publications came from many different sources and authors as the science of celestial mechanics matured. At the end of the 19th century, the analytical methods of general perturbations reached the probable limits of what could be accomplished by hand calculation. The planetary "theories" of Newcomb and Hill formed the fundamental ephemerides of the Nautical Almanac at that time. For the Sun, Mercury, Venus, and Mars, the tabulations of the Astronomical Almanac continued to be derived from the work of Newcomb and Ross through 1983. In France, the works of LeVerrier and Gaillot formed the fundamental ephemeris of the Connaissance des Temps. From the mid 20th century, work began on numerical integration of the equations of motion on early computing machines for purposes of producing fundamental ephemerides for the Astronomical Almanac. Jupiter, Saturn, Uranus, Neptune, and Pluto were based on the work of Eckert, et al. and Clemence through 1983. The fundamental ephemeris of the Moon, always a difficult problem in celestial mechanics, remained a work-in-progress through the early 1980s. It was based originally on the work of Brown, with updates and corrections by Clemence, et al. and Eckert, et al. Starting in 1984, a revolution in the methods of producing fundamental ephemerides began. From 1984 through 2002, the fundamental ephemeris of the Astronomical Almanac was the Jet Propulsion Laboratory's DE200/LE200, a fully numerically-integrated ephemeris fitted to modern position and velocity observations of the Sun, Moon, and planets. From 2003 onward (as of Feb 2012), JPL's DE405/LE405, an integrated ephemeris referred to the International Celestial Reference Frame, has been used. In France, the Bureau des Longitudes began using their machine-generated semi-analytical theory VSOP82 in 1984, and their work continued with the founding of the Institut de mécanique céleste et de calcul des éphémérides in 1998 and the INPOP series of numerical ephemerides. DE405/LE405 were superseded by DE421/LE421 in 2008. See also American Ephemeris and Nautical Almanac Astronomical Almanac Jet Propulsion Laboratory Development Ephemeris Newcomb's Tables of the Sun The Nautical Almanac References and notes Sources Nautical Almanac Office, U.S. Naval Observatory and H.M. Nautical Almanac Office, Royal Greenwich Observatory, Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac. London: Her Majesty's Stationery Office, 1961 (reprint 1974) Nautical Almanac Office, U.S. Naval Observatory and H.M. Nautical Almanac Office, Royal Greenwich Observatory, P.K. Seidelmann, editor, Explanatory Supplement to the Astronomical Almanac. Mill Valley, California: University Science Books, 1992 (reprint 2005) Celestial mechanics Dynamical systems Dynamics of the Solar System
Fundamental ephemeris
[ "Physics", "Astronomy", "Mathematics" ]
1,394
[ "Dynamics of the Solar System", "Classical mechanics", "Astrophysics", "Mechanics", "Celestial mechanics", "Solar System", "Dynamical systems" ]
3,736,715
https://en.wikipedia.org/wiki/Apoptosome
The apoptosome is a large quaternary protein structure formed in the process of apoptosis. Its formation is triggered by the release of cytochrome c from the mitochondria in response to an internal (intrinsic) or external (extrinsic) cell death stimulus. Stimuli can vary from DNA damage and viral infection to developmental cues such as those leading to the degradation of a tadpole's tail. In mammalian cells, once cytochrome c is released, it binds to the cytosolic protein Apaf-1 to facilitate the formation of an apoptosome. An early biochemical study suggests a two-to-one ratio of cytochrome c to apaf-1 for apoptosome formation. However, recent structural studies suggest the cytochrome c to apaf-1 ratio is one-to-one. It has also been shown that the nucleotide dATP as third component binds to apaf-1, however its exact role is still debated. The mammalian apoptosome had never been crystallized, but a human APAF-1/cytochrome-c apoptosome has been imaged at lower (2 nm) resolution by cryogenic transmission electron microscopy in 2002, revealing a heptameric wheel-like particle with 7-fold symmetry. Recently, a medium resolution (9.5 Ångström) structure of human apoptosome was also solved by cryo-electron microscopy, which allows unambiguous inference for positions of all the APAF-1 domains (CARD, NBARC and WD40) and cytochrome c. There is also now a crystal structure of the monomeric, inactive Apaf-1 subunit (PDB 3SFZ). Once formed, the apoptosome can then recruit and activate the inactive pro-caspase-9. Once activated, this initiator caspase can then activate effector caspases and trigger a cascade of events leading to apoptosis. History The term Apoptosome was first introduced in Yoshihide Tsujimoto's 1998 paper "Role of Bcl-2 family proteins in apoptosis: apoptosomes or mitochondria?". However, the Apoptosome was known before this time as a ternary complex. This complex involved caspase-9 and Bcl-XL which each bound a specific Apaf-1 domain. The formation of this complex was then believed to play a regulatory role in mammalian cell death. In December of the same year, a further article was released in The Journal of Biological Chemistry stating that Apaf-1 is the regulator of apoptosis, through activation of procaspase-9. The criteria for an apoptosome were laid out in 1999. Firstly, it must be a large complex (greater than 1.3 million daltons). Secondly its formation requires the hydrolysis of a high energy bond of ATP or dATP. And lastly it must activate procaspase-9 in its functional form. The formation of this complex is the point of no return, and apoptosis will occur. The stable APAF-1 and cytochrome multimeric complex fit this description, and is now called the apoptosome. The apoptosome was thought to be a multimeric complex for two reasons. Firstly, to bring multiple procaspase-9 molecules close together for cleavage. And secondly, to raise the threshold for apoptosis, therefore nonspecific leakage of cytochrome c would not result in apoptosis. Once the apoptosome was established as the procaspase-9 activator, mutations within this pathway became an important research area. Some examples include human leukemia cells, ovarian cancer and viral infections. Current research areas for this pathway will be discussed in further detail. There are hidden routes for cell death as well, which are independent of APAF-1 and therefore the apoptosome. These routes are also independent of caspase-3 and 9. These hidden pathways for apoptosis are slower, but may prove useful with further research. 
Structure The apoptosome is a multimolecular holoenzyme complex assembled around the adaptor protein Apaf1 (apoptotic protease activating factor 1) upon mitochondria-mediated apoptosis which must be stimulated by some type of stress signal. The formation of the apoptosome requires the presence of ATP/dATP and cytochrome c in the cytosol. A stress stimulus can trigger the release of cytochrome c into the cytoplasm which will then bind to the C-terminus of Apaf-1 within a region containing multiple WD-40 repeats. The oligomerization of Apaf-1 appears to be accompanied by synchronized recruitment of procaspase-9 to the CARD motif at the Apaf-1 N-terminus. The apoptosome triggers the activation of caspases in the intrinsic pathway of apoptosis. The wheel-shaped heptameric complex with sevenfold symmetry structure of the apoptosome was first revealed at 27 Å resolution by electron cryomicroscopy techniques and has a calculated mass of about 1 MDa (Acehan et al. 2002). This wheel-like particle has seven spokes and a central hub. The distal region of the spoke has a pronounced Y shape. The hub domain is connected to the Y domain by a bent arm. Each Y domain comprises two lobes (a large one and a small one) between which cytochrome c binding sites. Because the resolution of the apoptosome structure was relatively low, two controversial models for apoptosome assembly were proposed. One model suggests NOD domains form the central hub and the CARD domains form a freer ring at the top of the NOD region. Another model proposes that Apaf-1 is organized in an extended fashion such that both the N-terminal CARD and the nucleotide binding region form the central hub of the apoptosome, whereas the 13 WD-40 repeats constitute the two lobes. The large lobe is formed by seven repeats and the small lobe is formed by six repeats. Each caspase- 9 molecule binds a CARD domain at the central hub, forming a dome-shaped structure. This controversy has been resolved by a recent high resolution structure of the human apoptosome-procaspase-9 CARD complex. This structure clearly demonstrated that only the NOD regions form the central hub of the apoptosome (see pictures), while CARD is flexibly linked to the platform of apoptosome and becomes disordered in the ground state apoptosome. Once apoptosome binds to procaspase-9, the Apaf-1 CARDs and procaspase-9 CARDs form a flexible disk-like structure sitting above the platform. The number of WD-40 repeats has also been proved to be 15 instead of 13, and it is composed of a 7-bladed beta-propeller and an 8-bladed beta-propeller. Evidence from Wang and colleagues indicates that the stoichiometric ratio of procaspase-9 to Apaf-1 within the complex is approximately 1:1 . This was further proved by quantitative mass spectrometry analysis. The stoichiometry of cytochrome c to Apaf-1 within the complex is proved to be 1:1. There are some debates about whether stable incorporation of cytochrome c into the apoptosome is required following oligomerization, but recent structural data favor the idea that cytochrome c stabilizes the oligomeric human apoptosome. However, cytochrome c may be not required for the assembly of apoptosome in non-mammalian species, such as worms and fruit flies. In addition, several other molecules, most notably caspase-3, have been reported to co-purify with the apoptosome and caspase-3 has been proved to be able to bind the apoptosome-procaspase-9 complex. Apaf-1 forms the backbone of the apoptosome. 
It has three distinct regions: the N-terminal caspase-recruitment domain (CARD, residues 1–90), a central nucleotide-binding and oligomerization region (NB-ARC/NOD, 128–586) and a C-terminal WD40 region (613–1248), making up a protein of about 140 kDa. The CARD domain of Apaf-1 interacts with procaspase-9 and is involved in recruitment within the apoptosome. The NB-ARC/NOD region exhibits significant sequence similarity to the C. elegans Ced-4 protein. The C-terminal WD40 region of Apaf1 contains 15 WD-40 repeats structured into two beta-propeller-shaped domains. WD-40 repeats are sequences around 40 amino acids long which end in Trp-Asp and are typically involved in protein–protein interaction. A short linker and a nucleotide-binding alpha/beta domain (NBD) that contains conserved Walker boxes A (p-loop 155-161) and B (239-243) follow the N-terminal CARD domain. The Walker boxes A/B are critical for dATP/ATP and Mg2+ binding. Following the NBD is a small helical domain (HD1), a second linker and a conserved winged helix domain (WHD). The NOD region comprises NBD, HD1 and WHD, creating an ATPase domain that is part of the AAA+ family of ATPases. There is a super helical domain (HD2) present in the junction between the NOD and the WD-40 repeats. The WD40 repeats are in groups of eight and seven with linkers connecting them. Apoptosomes in other organisms The above descriptions are for the human apoptosome. Apoptosome complex structures from other organisms have many similarities, but are of quite different sizes and numbers of subunits, as shown in the figure. The fruit-fly system, called Dark, has a ring of 8 subunits (PDB 4V4L). The nematode apoptosome, called CED-4, is octameric but much smaller (PDB 3LQQ), and it does not include the regions that would bind cytochrome c. Mechanism of Action Initiation The initiation of apoptosome action corresponds with the first steps in the programmed cell death (PCD) pathway. In animals, apoptosis can be catalyzed in one of two ways; the extrinsic pathway involves binding of extracellular ligands to transmembrane receptors, while the intrinsic pathway takes place in the mitochondria. This intrinsic pathway involves the release of cytochrome c from the mitochondria and subsequent binding to the cytosolic protein Apaf-1. Cytochrome c release is thus necessary for the initiation of apoptosome action; this release is regulated in a number of ways, most importantly by detection of calcium ion levels. Cytochrome c Release Cytochrome c release is proposed to take place in one of two ways. Firstly, the permeability transition pore (PTP) opens when the mitochondrion receives a death-inducing signal, and releases intermembrane space proteins (12). The PTP is composed of the voltage-dependent anion channel (VDAC), the inner membrane protein adenine nucleotide translocator (AdNT) and the matrix protein cyclophilin D (CyD) (12). This pore causes the mitochondria to swell and the outer mitochondrial membrane to rupture (Diamond & McCabe, 2007). With this change in permeability, proteins such as cytochrome c are released into the cytosol (12). This change likely causes the mitochondrial permeability transition (MPT), where the mitochondrial transmembrane potential collapses and ATP production ceases (12). The inhibition of this method by the pharmaceutical agent cyclosporine A (CsA) led to the discovery of the second pathway (13). The second method of cytochrome c release is independent of the PTP and involves only the VDAC.
Members of the Bcl-2 family of pro-apoptotic proteins can induce the opening of the VDAC (12). This will cause the same release of intermembrane space proteins, including cytochrome c, and the subsequent MPT to occur (12). Apaf-1 a. Absence of Cytochrome c In the absence of cytochrome c, Apaf-1 exists in its monomeric form; it is thought that the WD-40 domain remain folded back onto the protein, keeping Apaf-1 in an auto inhibited state. In addition, several areas are so tightly bound that the protein is unable to bind to anything else. It has been determined through mass spectrometry that in the autoinhibited, or "locked" state, ADP is bound to the ATPase domain of Apaf-1. In this state, this protein is singular, and incapable of activating any caspases. b. Presence of Cytochrome c Cytochrome c binds to the WD-40 domain of Apaf-1. This allows for the "lock" to be released, meaning this domain is no longer autoinhibited. However, the CARD and NB-ARC domains remain in autoinhibited state. The CARD domain will only be released from this lock when Apaf-1 is bound to (d) ATP/ATP; when ATP binds, the CARD domain will then be allowed to bind to Caspase-9. When ADP is in the ATPase domain, oligomerization is inhibited. Thus, the binding of ATP also allows for the oligomerization of Apaf-1 into the heptagonal structure necessary for downstream caspase activation. Mutations in the ATPase domain render the protein inactive; however, the method of controlling this ADP-ATP exchange is unclear. Oligomerization can thus only occur in the presence of 7 cytochrome c molecules, 7 Apaf-1 proteins and sufficient (d) ATP/ATP . The ATPase domain belongs to the AAA+ family of ATPases; this family is known for its ability to link to other ATPase domains and form hexa- or heptamers. The apoptosome is then considered active when there are seven Apaf-1 molecules arranged in a wheel structure, oriented such that the NB-ARC domains rest in the centre. Active Apoptosome Action This functional apoptosome then can provide a platform activation of caspase 9. Caspase 9 exists as a zymogen in the cytosol and is thought to be found at 20 nM in cells. Though it is known that the zymogen does not need to be cleaved in order to become active, the activity of procaspase-9 may increase significantly once cleaved. The first hypothesis is that the apoptosome provides a location for the dimerization of two caspase 9 molecules before cleavage; this hypothesis was favoured by Reidl & Salvasen in 2007. The second is that cleavage takes place while caspase 9 is still in its monomeric form. In each case, caspase 9 activation leads to the activation of a full caspase cascade and subsequent cell death. It has been suggested that the evolutionary reason for the multimeric protein complex activating the caspase cascade is to ensure trace amounts of cytochrome c do not accidentally cause apoptosis. Research Areas What happens when mutations occur? While apoptosis is required for natural body function, mutations of the apoptosome pathway cause catastrophic effects and changes in the body. Mutations of the cell pathway can either promote cell death or disallow cell death creating a huge amount of disease in the body. Mutated apoptosis pathways causing disease are plentiful and have a wide range from cancer, due to lack of apoptosome activity, Alzheimer's disease due to too much apoptosome activity, and many other neurodegenerative diseases such as Parkinson's disease and Huntington's disease. 
Neurodegenerative diseases such as Alzheimer's, Parkinson's, and Huntington's are all age-related diseases and involve increased apoptosis where cells die that are still able to function or that contribute to support function of tissue. Apaf-1-ALT is an Apaf-1 mutant found in prostate cancer, which does not have residues 339-1248. Recent structural studies of apoptosome prove that Apaf-1-ALT cannot form apoptosome as it misses key structural components for assembly. Repression of Apoptosis causing cancer Genetic and biochemical abnormalities within a cell normally trigger programmed cell death to rid the body of irregular cell function and development; however, cancer cells have acquired mutations that allow them to repress apoptosis and survive. Chemotherapies like ionizing radiation have been developed to activate these repressed PCD pathways by hyper-stimulation to promote normal PCD. P53 mutations in Apoptosis P53 functions as a tumor suppressor that is involved in preventing cancer and occurs naturally in apoptotic pathways. P53 causes cells to enter apoptosis and disrupt further cell division therefore preventing that cell from becoming cancerous (16). In the majority of cancers it is the p53 pathway that has become mutated resulting in lack of ability to terminate dysfunctional cells. P53 function can also be responsible for a limited life span where mutations of the p53 gene causes expression of dominant-negative forms producing long lived animals. For example in an experiment using C. elegans, the increased life span of p53 mutants was found to depend on increased autophagy. In another experiment using Drosophila the p53 mutation had both positive and negative effects on the adult life span, which concluded a link between sexual differentiation, PCD and aging. Determining how p53 are affecting life span will be an important area for future research. Targeting the Apoptosome for Cancer therapy The inhibition of apoptosis is one of the key features of cancer so finding ways to manipulate and overcome this inhibition to form the apoptosome and activate caspases are important in the development of new cancer treatments. The ability to directly cause apoptosome activation is valuable in cancer therapies because the infected cancerous genes are unable to be destroyed causing a continuation of the cancer to form. By activating the apoptosome by an outside stimulus apoptosis can occur and get rid of the mutated cells. Numerous approaches to achieve this are currently being pursued including recombinant biomolecules, antisense strategies, gene therapy and classic organic combinatorial chemistry to target specific apoptotic regulators in the approach to correct excessive or deficient cell death in human diseases. In general the up regulation of anti-apoptotic proteins leads to the prevention of apoptosis which can be solved by inhibitors and the down regulation of anti-apoptotic proteins leads to the induction of apoptosis which is reversed by activators that are able to bind and modify their activity. An important target molecule in apoptosis based therapies is Bcl-2 for drug design. Bcl-2 was the first oncogene found to cause cancer-inhibiting apoptosis. It is over expressed in tumors and is resistant to chemotherapy. Scientists have found that binding depressors to Bcl-2 anti-apoptotic proteins inhibits them and leaves direct activators free to interact with Bax and Bak. Another targeted molecule for cancer therapy involves the caspase family and their regulators. 
The inhibition of caspase activity blocks cell death in human disease including neurodegenerative disorders, stroke, heart attack and liver injury. Therefore, caspase inhibitors are a promising pharmacological tool providing treatments for stroke and other human diseases. There are several caspase inhibitors that are currently in the preclinical stage that have shown promising evidence of reversing effects of some neurodegenerative diseases. In a recent study researchers developed a reversible caspase-3 inhibitor called M-826 and tested it in a mouse model where it blocked brain tissue damage. Furthermore, it was tested on a mouse with Huntington's disease and the inhibitor prevented striated neuron death revealing promising effects for further study of this caspase inhibitor. Apoptosome complex has revealed new potential targets for molecular therapy The Apaf1/caspase-9 apoptosome formation is a crucial event in the apoptotic cascade. The identification of new potential drugs that prevent or stabilize the formation of active apoptosome complex is the ideal strategy for the treatment of disease characterized by excessive or insufficient apoptosis. Recently taurine has been found to prevent ischemia-induced apoptosis in cardiomyocytes through its ability to inhibit Apaf1/caspase-9 apoptosome formation without preventing mitochondrial dysfunction. The possible mechanism by which taurine inhibits the apoptosome formation was identified as being capable of reducing the expression of caspase-9, a fundamental component of apoptosome. However, there are studies that show Apaf1 and caspase-9 have independent roles other than the apoptosome so altering their levels could alter cell function as well. So despite encouraging experimental data several problems remain unsolved and limit the use of experimental drugs in clinical practice. The discovery of apoptosome inhibitors will provide a new therapeutical tool for the treatment of apoptosis mediated disease. Of particular importance are those new compounds able to inhibit apoptosome stability and activity, by acting on intracellular protein–protein interactions without altering the transcriptional levels of the apoptosome components. Recent structural studies of apoptosome may provide valuable tools for designing apoptosome-based therapies. See also The Proteolysis Map References Organelles Apoptosis Programmed cell death
Apoptosome
[ "Chemistry", "Biology" ]
4,649
[ "Senescence", "Programmed cell death", "Apoptosis", "Signal transduction" ]
3,738,637
https://en.wikipedia.org/wiki/Failure%20mode%2C%20effects%2C%20and%20criticality%20analysis
Failure mode effects and criticality analysis (FMECA) is an extension of failure mode and effects analysis (FMEA). FMEA is a bottom-up, inductive analytical method which may be performed at either the functional or piece-part level. FMECA extends FMEA by including a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value. FMECA tends to be preferred over FMEA in space and NATO military applications, while various forms of FMEA predominate in other industries. History FMECA was originally developed in the 1940s by the U.S military, which published MIL–P–1629 in 1949. By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA under a variety of names. In 1966 NASA released its FMECA procedure for use on the Apollo program. FMECA was subsequently used on other NASA programs including Viking, Voyager, Magellan, and Galileo. Possibly because MIL–P–1629 was replaced by MIL–STD–1629 (SHIPS) in 1974, development of FMECA is sometimes incorrectly attributed to NASA. At the same time as the space program developments, use of FMEA and FMECA was already spreading to civil aviation. In 1967 the Society for Automotive Engineers released the first civil publication to address FMECA. The civil aviation industry now tends to use a combination of FMEA and Fault Tree Analysis in accordance with SAE ARP4761 instead of FMECA, though some helicopter manufacturers continue to use FMECA for civil rotorcraft. Ford Motor Company began using FMEA in the 1970s after problems experienced with its Pinto model, and by the 1980s FMEA was gaining broad use in the automotive industry. In Europe, the International Electrotechnical Commission published IEC 812 (now IEC 60812) in 1985, addressing both FMEA and FMECA for general use. The British Standards Institute published BS 5760–5 in 1991 for the same purpose. In 1980, MIL–STD–1629A replaced both MIL–STD–1629 and the 1977 aeronautical FMECA standard MIL–STD–2070. MIL–STD–1629A was canceled without replacement in 1998, but nonetheless remains in wide use for military and space applications today. Methodology Slight differences are found between the various FMECA standards. By RAC CRTA–FMECA, the FMECA analysis procedure typically consists of the following logical steps: Define the system Define ground rules and assumptions in order to help drive the design Construct system block diagrams Identify failure modes (piece-part level or functional) Analyze failure effects/causes Feed results back into design process Classify the failure effects by severity Perform criticality calculations Rank failure mode criticality Determine critical items Feed results back into design process Identify the means of failure detection, isolation and compensation Perform maintainability analysis Document the analysis, summarize uncorrectable design areas, identify special controls necessary to reduce failure risk Make recommendations Follow up on corrective action implementation/effectiveness FMECA may be performed at the functional or piece-part level. Functional FMECA considers the effects of failure at the functional block level, such as a power supply or an amplifier. 
Piece-part FMECA considers the effects of individual component failures, such as resistors, transistors, microcircuits, or valves. A piece-part FMECA requires far more effort, but provides the benefit of better estimates of probabilities of occurrence. However, Functional FMEAs can be performed much earlier, may help to better structure the complete risk assessment and provide other type of insight in mitigation options. The analyses are complementary. The criticality analysis may be quantitative or qualitative, depending on the availability of supporting part failure data. System definition In this step, the major system to be analyzed is defined and partitioned into an indented hierarchy such as systems, subsystems or equipment, units or subassemblies, and piece-parts. Functional descriptions are created for the systems and allocated to the subsystems, covering all operational modes and mission phases. Ground rules and assumptions Before detailed analysis takes place, ground rules and assumptions are usually defined and agreed to. This might include, for example: Standardized mission profile with specific fixed duration mission phases Sources for failure rate and failure mode data Fault detection coverage that system built-in test will realize Whether the analysis will be functional or piece-part Criteria to be considered (mission abort, safety, maintenance, etc.) System for uniquely identifying parts or functions Severity category definitions Block diagrams Next, the systems and subsystems are depicted in functional block diagrams. Reliability block diagrams or fault trees are usually constructed at the same time. These diagrams are used to trace information flow at different levels of system hierarchy, identify critical paths and interfaces, and identify the higher level effects of lower level failures. Failure mode identification For each piece-part or each function covered by the analysis, a complete list of failure modes is developed. For functional FMECA, typical failure modes include: Untimely operation Failure to operate when required Loss of output Intermittent output Erroneous output (given the current condition) Invalid output (for any condition) For piece-part FMECA, failure mode data may be obtained from databases such as RAC FMD–91 or RAC FMD–97. These databases provide not only the failure modes, but also the failure mode ratios. For example: Each function or piece-part is then listed in matrix form with one row for each failure mode. Because FMECA usually involves very large data sets, a unique identifier must be assigned to each item (function or piece-part), and to each failure mode of each item. Failure effects analysis Failure effects are determined and entered for each row of the FMECA matrix, considering the criteria identified in the ground rules. Effects are separately described for the local, next higher, and end (system) levels. System level effects may include: System failure Degraded operation System status failure No immediate effect The failure effect categories used at various hierarchical levels are tailored by the analyst using engineering judgment. Severity classification Severity classification is assigned for each failure mode of each unique item and entered on the FMECA matrix, based upon system level consequences. A small set of classifications, usually having 3 to 10 severity levels, is used. For example, When prepared using MIL–STD–1629A, failure or mishap severity classification normally follows MIL–STD–882. 
Current FMECA severity categories for U.S. Federal Aviation Administration (FAA), NASA and European Space Agency space applications are derived from MIL–STD–882. Failure detection methods For each component and failure mode, the ability of the system to detect and report the failure in question is analyzed. One of the following will be entered on each row of the FMECA matrix: Normal: the system correctly indicates a safe condition to the crew Abnormal: the system correctly indicates a malfunction requiring crew action Incorrect: the system erroneously indicates a safe condition in the event of malfunction, or alerts the crew to a malfunction that does not exist (false alarm) Criticality ranking Failure mode criticality assessment may be qualitative or quantitative. For qualitative assessment, a mishap probability code or number is assigned and entered on the matrix. For example, MIL–STD–882 uses five probability levels: The failure mode may then be charted on a criticality matrix using severity code as one axis and probability level code as the other. For quantitative assessment, a modal criticality number $C_m$ is calculated for each failure mode of each item, and an item criticality number $C_r$ is calculated for each item. The criticality numbers are computed using the following values: basic failure rate $\lambda_p$, failure mode ratio $\alpha$, conditional probability $\beta$, and mission phase duration $t$. The criticality numbers are computed as $C_m = \beta \alpha \lambda_p t$ for each failure mode and $C_r = \sum C_m$, the sum of the modal criticality numbers over all failure modes of the item. The basic failure rate is usually fed into the FMECA from a failure rate prediction based on MIL–HDBK–217, PRISM, RIAC 217Plus, or a similar model. The failure mode ratio may be taken from a database source such as RAC FMD–97. For functional level FMECA, engineering judgment may be required to assign the failure mode ratio. The conditional probability number represents the conditional probability that the failure effect will result in the identified severity classification, given that the failure mode occurs. It represents the analyst's best judgment as to the likelihood that the loss will occur. For graphical analysis, a criticality matrix may be charted using either $C_m$ or $C_r$ on one axis and severity code on the other. Critical item/failure mode list Once the criticality assessment is completed for each failure mode of each item, the FMECA matrix may be sorted by severity and qualitative probability level or quantitative criticality number. This enables the analysis to identify critical items and critical failure modes for which design mitigation is desired. Recommendations After performing FMECA, recommendations are made to the design to reduce the consequences of critical failures. This may include selecting components with higher reliability, reducing the stress level at which a critical item operates, or adding redundancy or monitoring to the system. Maintainability analysis FMECA usually feeds into both Maintainability Analysis and Logistics Support Analysis, which both require data from the FMECA. FMECA is the most popular tool for failure and criticality analysis of systems for performance enhancement. In the present era of Industry 4.0, industries are implementing predictive maintenance strategies for their mechanical systems. The FMECA is widely used for failure mode identification and prioritization of mechanical systems and their subsystems for predictive maintenance.
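As a concrete illustration of the quantitative criticality calculation above, the following is a minimal Python sketch. The failure modes and numerical values are made-up illustration data (not from any standard's tables), and the formulas follow the $C_m = \beta \alpha \lambda_p t$ convention described in this section.

# Hedged sketch of the quantitative criticality calculation (illustrative data only).
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    alpha: float     # failure mode ratio
    beta: float      # conditional probability of the stated severity
    lambda_p: float  # basic failure rate (failures per hour)
    t: float         # mission phase duration (hours)

    def modal_criticality(self) -> float:
        # Cm = beta * alpha * lambda_p * t
        return self.beta * self.alpha * self.lambda_p * self.t

def item_criticality(modes) -> float:
    # Cr is the sum of the modal criticality numbers of the item's failure modes
    return sum(m.modal_criticality() for m in modes)

modes = [
    FailureMode("loss of output",      alpha=0.6, beta=0.5, lambda_p=2e-6, t=10.0),
    FailureMode("intermittent output", alpha=0.3, beta=0.1, lambda_p=2e-6, t=10.0),
    FailureMode("erroneous output",    alpha=0.1, beta=1.0, lambda_p=2e-6, t=10.0),
]

for m in sorted(modes, key=lambda fm: fm.modal_criticality(), reverse=True):
    print(f"{m.name}: Cm = {m.modal_criticality():.2e}")
print(f"item criticality Cr = {item_criticality(modes):.2e}")

Sorting the failure modes by $C_m$ reproduces the critical item/failure mode list described above, which is then used to direct design mitigation.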
FMECA report A FMECA report consists of system description, ground rules and assumptions, conclusions and recommendations, corrective actions to be tracked, and the attached FMECA matrix which may be in spreadsheet, worksheet, or database form. Risk priority calculation RAC CRTA–FMECA and MIL–HDBK–338 both identify Risk Priority Number (RPN) calculation as an alternate method to criticality analysis. The RPN is a result of a multiplication of detectability (D) x severity (S) x occurrence (O). With each on a scale from 1 to 10, the highest RPN is 10x10x10 = 1000. This means that this failure is not detectable by inspection, very severe and the occurrence is almost sure. If the occurrence is very sparse, this would be 1 and the RPN would decrease to 100. So, criticality analysis enables to focus on the highest risks. Advantages and disadvantages The Strengths of FMECA include its comprehensiveness, the systematic establishment of relationships between failure causes and effects, and its ability to point out individual failure modes for corrective action in design. Weaknesses include the extensive labor required, the large number of trivial cases considered, and inability to deal with multiple-failure scenarios or unplanned cross-system effects such as sneak circuits. According to an FAA research report for commercial space transportation, Failure Modes, effects, and Criticality Analysis is an excellent hazard analysis and risk assessment tool, but it suffers from other limitations. This alternative does not consider combined failures or typically include software and human interaction considerations. It also usually provides an optimistic estimate of reliability. Therefore, FMECA should be used in conjunction with other analytical tools when developing reliability estimates. See also Failure mode and effects analysis Integrated logistics support Reliability engineering RAMS Risk assessment Safety engineering System safety References Impact assessment Maintenance Reliability engineering Safety engineering
Failure mode, effects, and criticality analysis
[ "Engineering" ]
2,358
[ "Systems engineering", "Reliability engineering", "Safety engineering", "Maintenance", "Mechanical engineering" ]
3,738,711
https://en.wikipedia.org/wiki/Anders%20Hallberg
Anders Hallberg (born 29 April 1945 in Vetlanda, Jönköping County (Småland)) is a Swedish pharmaceutical researcher, professor in medicinal chemistry and 2006–2011 Rector Magnificus and Vice Chancellor at Uppsala University. Biography Hallberg completed his basic education at Lund University, where he obtained a Master of Science (MSc) in chemistry and physics in 1969. The following year he attended the School of Education in Malmö obtained a BScEd and worked thereafter as a teacher in the junior high school from 1970 to 1973. Hallberg returned to Lund University and the Chemical Centre, to conduct research in 1973–1979. In January 1980, he received a Doctor of Philosophy (PhD) in organic chemistry with the thesis "Methoxythiophenes and Related Systems". After six months as a researcher at Nobel Chemistry in Karlskoga, he completed the post-doctoral period at the University of Arizona, Tucson, Arizona, where he then was promoted to a position as assistant professor at the College of Pharmacy in 1981–1982. On his return to the Chemical Centre in Lund in 1983, he was appointed associate professor (docent). He received grants from the Swedish Research Council and stayed at Lund University until 1986, when he took up a managerial position within the pharmaceutical company AstraDraco in Lund. Eventually he became Director and Head Medicinal Chemistry and only in 1990 did he leave the company to be installed as a professor of medicinal chemistry at Uppsala University. During the twenty years that followed, he worked at the Uppsala Biomedical Center (BMC), but kept in touch with his old company, now Astra Zeneca, through an assignment as a research advisor. One year after arriving at Uppsala, in 1991, Hallberg became Chairman of the Department of Organic Pharmaceutical Chemistry and from 1992 to 1996 he acted as Dean for Research at the Faculty of Pharmacy. He served for many years as Chairman of the Evaluation Panel for Chemistry at The Swedish Research Council in Stockholm and as Chairman of the Medicinal Chemistry Section at The Swedish Academy of Pharmaceutical Sciences. During the period 2002–2005, he was Deputy Vice President (Medicine/Pharmacy), before taking over as Dean of the Faculty of Pharmacy in 2005. Hallberg was then elected Rector Magnificus (Vice Chancellor) for Uppsala University from 1 July 2006. As Vice Chancellor, Hallberg initiated Quality and Renewal (KoF07) in 2007, a comprehensive international evaluation of the university's research that was followed four years later by KoF11. In 2008, he took the initiative to Uppsala University's collaboration with the Royal Institute of Technology, Stockholm University and the Karolinska Institute with the aim to build the biomedical center formation Science for Life Laboratory (SciLifeLab). Hallberg was also one of the initiators to the collaborative organisation U4 Network, founded in 2008 and currently uniting a number of European universities, as well as the international Matariki Network of Universities (MNU), founded in 2010. In 2011, Hallberg and Jörgen Tholin, then Vice Chancellor of the University of Gotland, signed a declaration of intent on the merger of Uppsala University and the University of Gotland. He was succeeded as Vice Chancellor on 1 January 2012 by Eva Åkesson. Research Hallberg's research interests encompass a range of protein targets of pharmaceutical relevance, including proteases and G-protein coupled receptors (GPCRs). 
One of the primary themes has been to identify novel and selective low molecular weight ligands for these targets. Compounds are optimized using computer-aided techniques and are preferentially synthesized using high-speed chemistry and robust palladium-catalyzed C-C bond forming reactions partly developed in his laboratory. Major indications that have been addressed are malaria and viral infections caused by HIV and HCV (Hepatitis C Virus). More recently, the main focus of the Drug Discovery program was to identify novel ligands that interfere with protein targets in the renin/angiotensin system (RAS). The first reported drug-like selective and potent angiotensin II, type II receptor (AT2R) agonist (C21) was discovered by Hallberg’s group. Compound C21, (buloxibutid) now owned by Vicore Pharma AB (founded by AH et al.) has been extensively studied and is currently undergoing clinical evaluations (Phase II) with first indication idiopathic pulmonary fibrosis. Hallberg has founded biotech companies, has > 40 patents and has authored >290 articles published in international scientific journals (number of citations >10 000). Hallberg has been the main supervisor for 29 doctoral students up to the doctoral degree. Member of several foundations, pharmaceutical company and university boards. Accolades Hallberg is a member of the Royal Society of Sciences in Uppsala (1994), Royal Society of Arts and Sciences in Uppsala (2004), the Royal Physiographic Society in Lund (2005), the Royal Swedish Academy of Sciences (2006) and the Royal Academy of Engineering Sciences (2007). In 2009 he was promoted to honorary doctor (Doctor honoris causa) of medicine at the Université de Sherbrooke, Canada and in 2014 to honorary doctor of pharmacy at Uppsala University. Since 2018, he is an honorary doctor of science and technology at Åbo Akademi University and in 2019 he became an honorary doctor of medicine at Hallym University, South Korea . Family Hallberg is the son of forestry consultant Rudolf Hallberg and Anna-Lisa Hallberg, née Jonsson, and married to dentist Gunilla Hallberg, née Sartor. The son, Mathias Hallberg, is Professor of Molecular research on drug dependence and Dean of the Faculty of Pharmacy at Uppsala University. Honours and awards Anders Hallberg has received several awards and prizes. The Fabian Gyllenberg Award from the Royal Physiographic Society in Lund, for best PhD thesis in chemistry (over a three-year period) at Lund University (1981) Senior Individual Grant Award to Outstanding Senior Scientist from Swedish Foundation for Strategic Research (SSF) (1998) First recipient of the National Swedish Prize in Organic Chemistry (The Holmquist Prize) (2004) The Oscar Carlsson Medal for Excellence from the Swedish Chemical Society (2005) The Best Teacher Prize 2006 from the Pharmacy Student Union, Uppsala University (2006) H. M. 
The King's Medal, 12th size with the ribbon of the Order of the Seraphim, for Distinguished Achievements in Education and Research (2008) The Order of the Cross of Terra Mariana from the President of Estonia (2011) The Gustav Adolf Medal (of the year 1924) from Uppsala University (2011) The Honorary Medal of the Uppsala County from the Governor (2011) The Rudbeck Medal for Outstanding Achievements in Science from Uppsala University (2013) Honorary doctorates Doctor honoris causa (Medicine), Université de Sherbrooke, Canada (2009) Doctor honoris causa (Pharmacy), Uppsala University (2014) Doctor honoris causa (Science and Technology), Åbo Akademi University, Finland (2018) Doctor honoris causa (Medicine), Hallym University, South Korea (2019) Membership of Royal Academies and Societies Member of the Royal Society of Sciences in Uppsala (Preses 2014–15) (1994–) Member of the Royal Society of Arts and Sciences in Uppsala (2004–) Member of the Royal Physiographic Society in Lund (2005–) Member of the Royal Swedish Academy of Sciences (KVA) Class IV, chemistry (2006–) Member of the Royal Swedish Academy of Engineering Sciences (IVA) Class IV, chemistry (2007–) Member of the Royal Patriotic Society (2009–) Honorary memberships Honorary Member of the Småland Student Nation in Uppsala   Honorary Member of the Upland Student Nation in Uppsala Honorary Member of the Gotland Student Nation in Uppsala Honorary Member of Allmänna Sången Honorary Member of Uppsala University Jazz Orchestra Honorary Member of Orphei Drängar (OD) Honorary Member of the Royal Academic Orchestra Honorary Member Rotary International Literature The good university. Rector's period 2006–2011. Letter of Appreciation to Anders Hallberg. (Acta Universitatis Upsaliensis. Writings concerning Uppsala University. C:93.) Uppsala 2011. Fred Nyberg, "Anders Hallberg as a scientist" published in The good university. Kerstin Sahlin, "A rectorship with quality as a guiding light." Published in The good university. External links Professor Emeritus Anders Hallberg, CV and list of publications References 1945 births Living people Swedish chemists Rectors of Uppsala University Members of the Royal Swedish Academy of Sciences Members of the Royal Swedish Academy of Engineering Sciences Members of the Royal Physiographic Society in Lund Recipients of the Order of the Cross of Terra Mariana, 2nd Class Computational chemists Lund University alumni University of Arizona faculty Members of the Royal Society of Sciences in Uppsala
Anders Hallberg
[ "Chemistry" ]
1,817
[ "Computational chemistry", "Theoretical chemists", "Computational chemists" ]
3,739,358
https://en.wikipedia.org/wiki/RX-250-LPN
The RX-250-LPN is an Indonesian sounding rocket, part of the RX rocket family. It was launched six times between 1987 and 2007. Technical data Specifications come from the rocket's summary datasheet published by Indonesian space agency LAPAN. Apogee: 70 kilometres Liftoff thrust: 53 kilonewtons Burning time: 6 seconds Specific impulse: 220 seconds Propellant: HTPB Total mass: 300 kilograms Core diameter: 0.25 metres Total length: 5.30 metres Payload: 30–60 kg References Sounding rockets of Indonesia
RX-250-LPN
[ "Astronomy" ]
116
[ "Rocketry stubs", "Astronomy stubs" ]
3,739,769
https://en.wikipedia.org/wiki/Janka%20hardness%20test
The Janka hardness test, created by Austrian-born American researcher Gabriel Janka (1864–1932), measures the resistance of a sample of wood to denting and wear. It measures the force required to embed a steel ball halfway into a sample of wood. (The diameter was chosen to produce a circle with an area of 100 square millimeters, or one square centimeter.) A common use of Janka hardness ratings is to determine whether a species is suitable for use as flooring. For hardwood flooring, the test usually requires a sample with a thickness of at least 6–8 mm, and the most commonly used test is the ASTM D1037. When testing wood in lumber form, the Janka test is always carried out on wood from the tree trunk (known as the heartwood), and the standard sample (according to ASTM D143) is at 12% moisture content and clear of knots. The hardness of wood varies with the direction of the wood grain. Testing on the surface of a plank, perpendicular to the grain, is said to be of "side hardness". Testing the cut surface of a stump is called a test of "end hardness". Side hardness may be further divided into "radial hardness" and "tangential hardness", although the differences are minor and often neglected. The results are stated in various ways, leading to confusion, especially when the actual units employed are often not attached. The resulting measure is always one of force. In the United States, the measurement is in pounds-force (lbf). In Sweden, it is in kilograms-force (kgf), and in Australia, either in newtons (N) or kilonewtons (kN). This confusion is greatest when the results are treated as units, for example "660 Janka". The Janka hardness test results tabulated below followed ASTM D 1037-12 testing methods. Lumber stocks tested range from 1" to 2" (25–50 mm) thick. The tabulated Janka hardness numbers are an average. There is a standard deviation associated with each species, but these values are not given. No testing was done on actual flooring. Other factors affect how flooring performs: the type of core for engineered floorings, such as pine, HDF, poplar, oak, or birch; grain direction and thickness; floor or top wear surface, etc. The chart is not to be considered an absolute; it is meant to help people understand which woods are harder than others. Typical Janka hardness values References External links Janka Hardness Scale For Wood – Side Hardness Chart of Some Woods USDA – Wood Handbook – Wood as an Engineering Material USDA – Janka Hardness Using Nonstandard Specimens Woodworking Woodcarving Wood Hardness tests
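Because published Janka figures appear variously in pounds-force, kilograms-force, and newtons, a small conversion helper can make values from different sources comparable. The sketch below is illustrative only; the 660 lbf reading is a hypothetical example rather than a measured species value, and the ball diameter is simply back-calculated from the 100 mm² indentation circle described above.

```python
import math

# Exact force-unit definitions.
LBF_TO_N = 4.4482216152605   # newtons per pound-force
KGF_TO_N = 9.80665           # newtons per kilogram-force

def janka_in_all_units(value, unit):
    """Express one Janka hardness reading in N, lbf and kgf."""
    to_newtons = {"n": 1.0, "lbf": LBF_TO_N, "kgf": KGF_TO_N}[unit.lower()]
    newtons = value * to_newtons
    return {"N": newtons, "lbf": newtons / LBF_TO_N, "kgf": newtons / KGF_TO_N}

# Ball diameter implied by the required 100 mm^2 indentation circle.
ball_diameter_mm = 2.0 * math.sqrt(100.0 / math.pi)

print(f"ball diameter: {ball_diameter_mm:.2f} mm")   # ~11.28 mm
print(janka_in_all_units(660, "lbf"))                # a hypothetical "660 Janka" figure
```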
Janka hardness test
[ "Materials_science" ]
573
[ "Hardness tests", "Materials testing" ]
3,740,178
https://en.wikipedia.org/wiki/Euler%20number%20%28physics%29
The Euler number (Eu) is a dimensionless number used in fluid flow calculations. It expresses the relationship between a local pressure drop caused by a restriction and the kinetic energy per volume of the flow, and is used to characterize energy losses in the flow, where a perfect frictionless flow corresponds to an Euler number of 0. The inverse of the Euler number is referred to as the Ruark Number with the symbol Ru. The Euler number is defined as Eu = (p_u − p_d) / (ρ v²), where ρ is the density of the fluid, p_u is the upstream pressure, p_d is the downstream pressure, and v is a characteristic velocity of the flow. An alternative definition of the Euler number is given by Shah and Sekulic: Eu = Δp / (½ ρ v²), where Δp = p_u − p_d is the pressure drop. See also Darcy–Weisbach equation is a different way of interpreting the Euler number Reynolds number for use in flow analysis and similarity of flows Cavitation number a similarly formulated number with different meaning References Further reading Dimensionless numbers of fluid mechanics Fluid dynamics Leonhard Euler
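A small sketch of the definition just given may help when comparing values computed with the two conventions; the function below is illustrative only, and the water-flow numbers are made up for the example.

```python
def euler_number(p_upstream, p_downstream, density, velocity, shah_sekulic=False):
    """Eu = (p_u - p_d) / (rho * v^2); with shah_sekulic=True the
    alternative form Eu = dp / (0.5 * rho * v^2) is used instead."""
    dp = p_upstream - p_downstream
    dynamic = density * velocity ** 2
    return dp / (0.5 * dynamic) if shah_sekulic else dp / dynamic

# Illustrative numbers: water (1000 kg/m^3) at 2 m/s with a 3 kPa pressure drop.
print(euler_number(103_000, 100_000, 1000.0, 2.0))        # 0.75
print(euler_number(103_000, 100_000, 1000.0, 2.0, True))  # 1.5
```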
Euler number (physics)
[ "Chemistry", "Engineering" ]
199
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
3,740,698
https://en.wikipedia.org/wiki/Direct%20numerical%20simulation
A direct numerical simulation (DNS) is a simulation in computational fluid dynamics (CFD) in which the Navier–Stokes equations are numerically solved without any turbulence model. This means that the whole range of spatial and temporal scales of the turbulence must be resolved. All the spatial scales of the turbulence must be resolved in the computational mesh, from the smallest dissipative scales (Kolmogorov microscales), up to the integral scale L, associated with the motions containing most of the kinetic energy. The Kolmogorov scale, η, is given by η = (ν³/ε)^(1/4), where ν is the kinematic viscosity and ε is the rate of kinetic energy dissipation. On the other hand, the integral scale depends usually on the spatial scale of the boundary conditions. To satisfy these resolution requirements, the number of points N along a given mesh direction with increments h, must be N h > L, so that the integral scale is contained within the computational domain, and also h ≤ η, so that the Kolmogorov scale can be resolved. Since ε ≈ u'³/L, where u' is the root mean square (RMS) of the velocity, the previous relations imply that a three-dimensional DNS requires a number of mesh points N³ satisfying N³ ≥ Re^(9/4), where Re is the turbulent Reynolds number: Re = u' L / ν. Hence, the memory storage requirement in a DNS grows very fast with the Reynolds number. In addition, given the very large memory necessary, the integration of the solution in time must be done by an explicit method. This means that in order to be accurate, the integration, for most discretization methods, must be done with a time step, Δt, small enough such that a fluid particle moves only a fraction of the mesh spacing h in each step. That is, C = u' Δt / h < 1 (C is here the Courant number). The total time interval simulated is generally proportional to the turbulence time scale τ given by τ = L / u'. Combining these relations, and the fact that h must be of the order of η, the number of time-integration steps must be proportional to L / (C η). On the other hand, from the definitions for Re, η and L given above, it follows that L / η ~ Re^(3/4), and consequently, the number of time steps grows also as a power law of the Reynolds number. One can estimate that the number of floating-point operations required to complete the simulation is proportional to the number of mesh points and the number of time steps, and in conclusion, the number of operations grows as Re³. Therefore, the computational cost of DNS is very high, even at low Reynolds numbers. For the Reynolds numbers encountered in most industrial applications, the computational resources required by a DNS would exceed the capacity of the most powerful computers currently available. However, direct numerical simulation is a useful tool in fundamental research in turbulence. Using DNS it is possible to perform "numerical experiments", and extract from them information difficult or impossible to obtain in the laboratory, allowing a better understanding of the physics of turbulence. Also, direct numerical simulations are useful in the development of turbulence models for practical applications, such as sub-grid scale models for large eddy simulation (LES) and models for methods that solve the Reynolds-averaged Navier–Stokes equations (RANS). This is done by means of "a priori" tests, in which the input data for the model is taken from a DNS simulation, or by "a posteriori" tests, in which the results produced by the model are compared with those obtained by DNS. References External links DNS page at CFD-Wiki Fluid dynamics Turbulence Turbulence models
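The scaling relations above can be turned into a rough, order-of-magnitude cost estimate. The sketch below assumes the isotropic-turbulence estimates quoted in the text (ε ≈ u'³/L, h ≈ η) and uses arbitrary illustrative flow parameters; it is not a substitute for an actual grid-resolution study.

```python
def dns_cost_estimate(u_rms, integral_scale, kinematic_viscosity, courant=0.5):
    """Order-of-magnitude DNS resolution and cost from the scaling relations above."""
    re = u_rms * integral_scale / kinematic_viscosity      # turbulent Reynolds number
    eps = u_rms ** 3 / integral_scale                      # dissipation-rate estimate
    eta = (kinematic_viscosity ** 3 / eps) ** 0.25         # Kolmogorov scale
    n_per_direction = integral_scale / eta                 # ~ Re^(3/4)
    mesh_points = n_per_direction ** 3                     # ~ Re^(9/4)
    time_steps = integral_scale / (courant * eta)          # ~ Re^(3/4) / C
    return {"Re": re, "eta [m]": eta, "mesh points": mesh_points,
            "time steps": time_steps, "operations ~": mesh_points * time_steps}

# Illustrative example: u' = 1 m/s, L = 0.1 m, nu = 1.5e-5 m^2/s (air-like).
print(dns_cost_estimate(1.0, 0.1, 1.5e-5))
```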
Direct numerical simulation
[ "Chemistry", "Engineering" ]
691
[ "Piping", "Chemical engineering", "Turbulence", "Fluid dynamics" ]
3,741,212
https://en.wikipedia.org/wiki/Close-ratio%20transmission
A close-ratio transmission describes a motor vehicle transmission with a smaller than average difference between the gear ratios. They are most often used on sports cars in order to keep the engine in the power band. Overview A close-ratio transmission is one which is described relative to another transmission for the same vehicle model. The comparison applies only to the transmissions offered for a single make and model; that is, there is no specific threshold value or accepted industry standard that determines whether the steps between gears constitute a normal or close-ratio transmission. What one manufacturer describes as a close-ratio transmission is not necessarily closer in ratios than another manufacturer's normal manual transmission. Often, manufacturers use the term "close-ratio" when offering one or more alternatives to the transmission fitted as standard equipment: for example, an optional, sportier transmission which offers closer ratios than the standard, such as Porsche offered with the three transmissions listed below for the 911 from 1967 to 1971. Mathematically, the "closeness" of a transmission can be characterized by the cumulative average spacing between, or geometric average of, gears. This is defined as the (n−1)th root of the product of the ratios between successive gears, which simplifies to an expression in the overall range of gear ratios and the number of (forward) speeds: average spacing = (rn / r1)^(1/(n−1)), where r1 is the gear ratio of the lowest (first) gear, rn is the gear ratio of the highest (top) gear, and n is the number of forward speeds. In general, most transmissions have approximately the same total range between the highest and lowest gears, so the more gears a transmission has, the closer they are together. This is apparent from the expression above: as n increases, the average spacing will increase toward 1. A continuously variable transmission (CVT) has a nearly infinite "number" of gear ratios between its highest and lowest ratios, which means the CVT has infinitesimally small steps between gear ratios. However, because CVTs do not have specific (fixed) gear ratios unless programmed as such, it would not be considered a close-ratioed transmission. Engine power band considerations Internal combustion engines found in passenger automobiles are capable of operating over a relatively wide range of speeds: idle to redline for petrol engines is approximately 700 to 6500 RPM or more; however, the power band, which is the optimum range of engine speeds considering fuel consumption, torque, and power output, is usually smaller. The automotive transmission is used to maintain the engine speed within the power band while operating the vehicle over a wide range of legal speeds. During acceleration, when the vehicle's speed increases to the point that the engine speed exceeds the speed at which maximum power is developed, the driver or transmission shifts to a higher gear (numerically lower ratio), which reduces engine speed, keeping it in its optimum power band, and allows continued acceleration. It is possible for the next higher gear ratio to be so much lower than the preceding ratio that upshifting lowers the engine speed excessively, resulting in the engine speed falling outside its "power band"; for maximum acceleration, the engine speed of an automobile should be kept in this power band. A wide-ratio transmission requires the engine to operate over a greater range of engine speeds, but requires less shifting and allows a wider range of output (vehicle) speeds. High-performance engines often are tuned for maximum power in an even more narrow range of operating speeds. 
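As a rough numerical illustration of the two ideas above (the geometric average spacing and the engine-speed drop on an upshift), the sketch below uses made-up gear ratios rather than figures for any particular car.

```python
def average_spacing(ratios):
    """Geometric average step between successive gears, (rn / r1)^(1/(n-1));
    a value closer to 1.0 means a closer-ratio gear set."""
    n = len(ratios)
    return (ratios[-1] / ratios[0]) ** (1 / (n - 1))

def rpm_after_upshift(rpm, current_ratio, next_ratio):
    """Engine speed after an upshift at the same road speed."""
    return rpm * next_ratio / current_ratio

# Hypothetical 5-speed gear sets, listed from first gear to top gear.
wide_ratio  = [3.40, 2.10, 1.40, 1.00, 0.78]
close_ratio = [2.80, 1.95, 1.45, 1.15, 0.90]

print(round(average_spacing(wide_ratio), 2))       # ~0.69 (wider steps)
print(round(average_spacing(close_ratio), 2))      # ~0.75 (closer steps)
print(round(rpm_after_upshift(6600, 1.40, 1.00)))  # engine speed after a 3rd-to-4th shift
```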
A close-ratio type of transmission is designed to allow an engine to remain in this relatively narrow range of operating speeds and generally is offered in sports cars, in which the driver can be expected to enjoy shifting often to keep the engine in its power band. Example (Porsche 911) This table compares the ratios of three transmissions offered for Porsche 911 vehicles from 1967 to 1971, the first being the standard 901/75 transmission, the second being the 901/76 transmission denoted "For hill climbs", and the third being the 901/79 transmission denoted "Nürburgring ratios". It includes the step between successive ratios; for example, the step from 3rd to 4th gear for the 901/76 transmission is 85%, meaning the engine speed in 4th gear is 85% of that in 3rd gear at equivalent vehicle speeds. For the standard Porsche transmission (901/75) described here, each successive gear's ratio is, on average, 75% of that of the preceding gear. By similar calculations, the Hill Climb transmission has successive gear ratios which are 81% of the preceding gear, on average, and the Nürburgring transmission has successive gear ratios which are 77% of the preceding gear. Thus, the Hill Climb transmission's gears are "closer" in numerical ratio to the preceding gear than that of the standard or Nürburgring transmission, making it a close-ratio transmission. Note the step from 1st to 2nd for the Nürburgring transmission is the largest single change for any of the three transmissions, but the successive step changes from 2nd through 5th are relatively small; this transmission is intended for sustained high-speed operation instead of acceleration from a stop. Power band The 1967 Porsche 911 S was equipped with a 2.0 L flat-six engine which produced maximum power at 6600/min and maximum torque at 5200/min. Using the Standard transmission gear ratios above, when the driver shifts from 2nd to 3rd gear at 6,600/min, the engine speed would fall to approximately 4,894/min (which is 6600 × 1.32 / 1.78). In this case, shifting up to 3rd gear causes the engine speed to be slightly below the speed at which maximum torque is produced. By using a close-ratio gearbox, such as the Hill Climb example above, shifting to 3rd gear would drop engine speed to approximately 5,115/min (6600 × 1.55 / 2.00), which almost coincides with the maximum torque output of the engine. Likewise, the Nürburgring gearset above is also slightly numerically closer than the Standard gearset, making it more useful for sporting applications. However, the Nürburgring specification has a "taller" (numerically lower) 5th gear ratio than the Hill Climb gearbox, allowing for higher top speeds necessary for this faster racing circuit. The Standard gearset, with its numerically lower 5th gear, will allow even lower engine speeds at highway speeds, thereby reducing engine noise and fuel consumption, but compromises acceleration performance at very high speeds. Historical evolution In the 1960s, cars equipped with manual transmissions typically had four forward speeds and a top gear offering a 1:1 ratio. The designation of wide versus close ratio affected the lowest gear ratio; for example, the four-speed Muncie transmissions offered in General Motors performance vehicles included the M20 "wide ratio" transmission, which had a first gear ratio of 2.52 or 2.56:1, while the M21 and M22 "close ratio" transmissions had a first gear ratio of 2.20:1. 
At that time, fuel efficiency was not a primary concern and the "close ratio" transmissions generally were paired with a low (numerically high) final drive ratio of 3.5:1 or higher to compensate for the relatively high (numerically lower) first gear ratio, resulting in a large rate of fuel consumption. Following the oil crises of the 1970s, final drive ratios went to 3:1 or lower to improve fuel economy, and to accommodate this, vehicle manufacturers began adding more forward speeds into the gearbox, typically pairing an overdrive fifth with an even lower first gear, resulting in what would have been considered a very wide ratio transmission. "Close ratio" transmissions now had low gear ratios of 2.64:1 while "wide ratio" transmissions were 3:1 or higher, meaning that "close ratio" transmissions produced in the 1970s often had a larger range of ratios than "wide ratio" transmissions from the 1960s. Adding gear ratios One way to create a close-ratio transmission is to install more gears into the transmission without altering the lowest and highest gear ratios. In this manner, some six-speed transmissions available in consumer vehicles are labelled as "close-ratio". Again, the defining issue is the overall spacing of gears between 1st and in this case 6th gear. As an example, consider three manual transmissions fitted to Honda cars, each with an overall change in ratios (rn / r1) of 0.2 to 0.3, but with a different number of gears; the six-speed transmission has closer ratios than the four-speed transmission. Whether a six-speed transmission can be legitimately called "close-ratio" depends on whether it keeps the top gear unchanged relative to that of a comparable 5-speed model, thus causing the change in ratios from low to high gear to occur in smaller steps (i.e. closer ratios) between gears. Alternatively, some six-speed transmissions have ratios essentially the same as a 5-speed transmission, and add an even higher (numerically lower) 6th gear that allows even lower engine speeds at highway speeds. In this case, the transmission would be considered a "double" overdrive transmission, depending upon the 5th and 6th gear ratios. By extension, an automatic transmission could also be called close-ratioed. With the advent of 6-, 7-, and 8-speed automatic transmissions, the ratios become closer and closer together, which meets the mathematical conception of what constitutes a close-ratio transmission. Continuously Variable Transmissions (CVT) Prior to the 1970s, manufacturers' manual transmissions generally had three or four gears. To meet requirements to maximize fuel economy, manufacturers began offering 5- and, in the 1990s, 6-speed manual transmissions. Likewise, 3-speed automatic transmissions were the norm until fairly recently, but now 6-, 7-, and 8-speed automatic transmissions are being offered. By reducing the spacing between ratios allowed by having more gears, a vehicle's engine speed can be kept in a narrow band. With a 5-speed transmission, the power range must be relatively wide, which requires compromising the engine's efficiency. With an 8-speed transmission, the power range can be kept relatively narrow, which allows the engineer to optimize engine efficiency at a particular engine speed, and the transmission attempts to keep the engine operating at that speed. (Engine efficiency improves greatly when the load on the engine is maximized; hence, automatic transmissions also upshift whenever possible in an attempt to lower the engine's speed as much as possible, which increases load and efficiency.) 
The recent introduction of continuously variable transmissions (CVTs) attempts to push this strategy to its logical conclusion. This allows a near infinite "number" of gear ratios, which implies an infinitely close-ratioed transmission. However, given that there are no gears or specific gear ratios, one would not really consider such a transmission close-ratioed. References Automotive transmission technologies Motorcycle transmissions Engineering ratios
Close-ratio transmission
[ "Mathematics", "Engineering" ]
2,155
[ "Quantity", "Metrics", "Engineering ratios" ]
3,741,389
https://en.wikipedia.org/wiki/Hylotheism
Hylotheism (from Gk. hyle, 'matter' and theos, 'God') is the belief that matter and God are the same; in other words, it defines God as matter. The Lutheran Church–Missouri Synod, an American denomination, defines hylotheism as a "Theory equating matter with God or merging one into the other", which it states is a "Synonym for pantheism* and materialism.*". See also Pantheism References Lutheran theology Conceptions of God Matter Lutheran Church – Missouri Synod
Hylotheism
[ "Physics" ]
112
[ "Matter" ]
6,674,542
https://en.wikipedia.org/wiki/Kolmogorov%20extension%20theorem
In mathematics, the Kolmogorov extension theorem (also known as Kolmogorov existence theorem, the Kolmogorov consistency theorem or the Daniell-Kolmogorov theorem) is a theorem that guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process. It is credited to the English mathematician Percy John Daniell and the Russian mathematician Andrey Nikolaevich Kolmogorov. Statement of the theorem Let T denote some interval (thought of as "time"), and let n ∈ ℕ. For each k ∈ ℕ and finite sequence of distinct times t1, ..., tk ∈ T, let ν_{t1...tk} be a probability measure on (ℝⁿ)^k. Suppose that these measures satisfy two consistency conditions: 1. for all permutations π of {1, ..., k} and measurable sets Fi ⊆ ℝⁿ, ν_{tπ(1)...tπ(k)}(Fπ(1) × ... × Fπ(k)) = ν_{t1...tk}(F1 × ... × Fk); 2. for all measurable sets Fi ⊆ ℝⁿ and m ∈ ℕ, ν_{t1...tk}(F1 × ... × Fk) = ν_{t1...tk tk+1...tk+m}(F1 × ... × Fk × ℝⁿ × ... × ℝⁿ). Then there exists a probability space (Ω, F, P) and a stochastic process X : T × Ω → ℝⁿ such that P(X_{t1} ∈ F1, ..., X_{tk} ∈ Fk) = ν_{t1...tk}(F1 × ... × Fk) for all ti ∈ T, k ∈ ℕ and measurable sets Fi ⊆ ℝⁿ, i.e. X has ν_{t1...tk} as its finite-dimensional distributions relative to times t1, ..., tk. In fact, it is always possible to take as the underlying probability space Ω = (ℝⁿ)^T and to take for X the canonical process X : (t, Y) ↦ Y_t. Therefore, an alternative way of stating Kolmogorov's extension theorem is that, provided that the above consistency conditions hold, there exists a (unique) measure ν on (ℝⁿ)^T with marginals ν_{t1...tk} for any finite collection of times t1, ..., tk. Kolmogorov's extension theorem applies when T is uncountable, but the price to pay for this level of generality is that the measure ν is only defined on the product σ-algebra of (ℝⁿ)^T, which is not very rich. Explanation of the conditions The two conditions required by the theorem are trivially satisfied by any stochastic process. For example, consider a real-valued discrete-time stochastic process X. Then the probability P(X1 ∈ A, X2 ∈ B) can be computed either as ν_{1,2}(A × B) or as ν_{2,1}(B × A). Hence, for the finite-dimensional distributions to be consistent, it must hold that ν_{1,2}(A × B) = ν_{2,1}(B × A). The first condition generalizes this statement to hold for any number of time points ti, and any control sets Fi. Continuing the example, the second condition implies that P(X1 ∈ A) = P(X1 ∈ A, X2 ∈ ℝ), that is, ν_1(A) = ν_{1,2}(A × ℝ). Also this is a trivial condition that will be satisfied by any consistent family of finite-dimensional distributions. Implications of the theorem Since the two conditions are trivially satisfied for any stochastic process, the power of the theorem is that no other conditions are required: For any reasonable (i.e., consistent) family of finite-dimensional distributions, there exists a stochastic process with these distributions. The measure-theoretic approach to stochastic processes starts with a probability space and defines a stochastic process as a family of functions on this probability space. However, in many applications the starting point is really the finite-dimensional distributions of the stochastic process. The theorem says that provided the finite-dimensional distributions satisfy the obvious consistency requirements, one can always identify a probability space to match the purpose. In many situations, this means that one does not have to be explicit about what the probability space is. Many texts on stochastic processes do, indeed, assume a probability space but never state explicitly what it is. The theorem is used in one of the standard proofs of existence of a Brownian motion, by specifying the finite dimensional distributions to be Gaussian random variables, satisfying the consistency conditions above. 
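To make the consistency conditions concrete for the Brownian-motion construction just mentioned, the sketch below (illustrative only, using numpy and arbitrarily chosen times) samples from a two-time Gaussian finite-dimensional distribution and checks numerically that marginalizing out one time reproduces the one-time distribution, while permuting the times merely permutes the coordinates.

```python
import numpy as np

def bm_fdd_cov(times):
    """Covariance of (W_{t1}, ..., W_{tk}) for standard Brownian motion,
    Cov(W_s, W_t) = min(s, t)."""
    t = np.asarray(times, dtype=float)
    return np.minimum.outer(t, t)

rng = np.random.default_rng(0)
t1, t2 = 0.7, 1.9

# Sample from nu_{t1,t2} and integrate out the second coordinate.
samples = rng.multivariate_normal(np.zeros(2), bm_fdd_cov([t1, t2]), size=200_000)
print(samples[:, 0].var())     # close to 0.7, i.e. the marginal nu_{t1} = N(0, t1)

# Permutation consistency: nu_{t2,t1} is the same law with coordinates swapped.
print(bm_fdd_cov([t2, t1]))    # [[1.9, 0.7], [0.7, 0.7]]
```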
As in most of the definitions of Brownian motion it is required that the sample paths are continuous almost surely, and one then uses the Kolmogorov continuity theorem to construct a continuous modification of the process constructed by the Kolmogorov extension theorem. General form of the theorem The Kolmogorov extension theorem gives us conditions for a collection of measures on Euclidean spaces to be the finite-dimensional distributions of some ℝⁿ-valued stochastic process, but the assumption that the state space be ℝⁿ is unnecessary. In fact, any collection of measurable spaces together with a collection of inner regular measures defined on the finite products of these spaces would suffice, provided that these measures satisfy a certain compatibility relation. The formal statement of the general theorem is as follows. Let T be any set. Let {(Ω_t, F_t)}_{t ∈ T} be some collection of measurable spaces, and for each t ∈ T, let τ_t be a Hausdorff topology on Ω_t. For each finite subset J ⊆ T, define Ω_J := ∏_{t ∈ J} Ω_t. For subsets I ⊆ J ⊆ T, let π^J_I : Ω_J → Ω_I denote the canonical projection map ω ↦ ω|_I. For each finite subset F ⊆ T, suppose we have a probability measure μ_F on Ω_F which is inner regular with respect to the product topology (induced by the τ_t) on Ω_F. Suppose also that this collection of measures satisfies the following compatibility relation: for finite subsets F ⊆ G ⊆ T, we have that μ_F = (π^G_F)_* μ_G, where (π^G_F)_* μ_G denotes the pushforward measure of μ_G induced by the canonical projection map π^G_F. Then there exists a unique probability measure μ on Ω_T such that μ_F = (π^T_F)_* μ for every finite subset F ⊆ T. As a remark, all of the measures μ_F are defined on the product sigma algebra on their respective spaces, which (as mentioned before) is rather coarse. The measure μ may sometimes be extended appropriately to a larger sigma algebra, if there is additional structure involved. Note that the original statement of the theorem is just a special case of this theorem with Ω_t = ℝⁿ for all t ∈ T, and μ_{{t1, ..., tk}} = ν_{t1...tk} for t1, ..., tk ∈ T. The stochastic process would simply be the canonical process (π_t)_{t ∈ T}, defined on Ω = (ℝⁿ)^T with probability measure P = μ. The reason that the original statement of the theorem does not mention inner regularity of the measures ν_{t1...tk} is that this would automatically follow, since Borel probability measures on Polish spaces are automatically Radon. This theorem has many far-reaching consequences; for example it can be used to prove the existence of the following, among others: Brownian motion, i.e., the Wiener process, a Markov chain taking values in a given state space with a given transition matrix, infinite products of (inner-regular) probability spaces. History According to John Aldrich, the theorem was independently discovered by British mathematician Percy John Daniell in the slightly different setting of integration theory. References External links Aldrich, J. (2007) "But you have to remember P.J.Daniell of Sheffield" Electronic Journ@l for History of Probability and Statistics December 2007. Theorems regarding stochastic processes
Kolmogorov extension theorem
[ "Mathematics" ]
1,202
[ "Theorems about stochastic processes", "Theorems in probability theory" ]
6,682,293
https://en.wikipedia.org/wiki/Salt%20Waste%20Processing%20Facility
The Salt Waste Processing Facility (SWPF) is a nuclear waste treatment facility for the United States Department of Energy's Nuclear Reservation Savannah River Site in Aiken, South Carolina. It was designed, constructed and commissioned by the Parsons Corporation for treatment of nuclear salt waste and became operational in 2021. Background The Savannah River Site (SRS) presently contains legacy nuclear waste from the production of nuclear materials between 1951 and 2002. The nuclear waste is stored in large (typically nominal capacity) underground double-walled storage tanks located in the F-Area and H-Area tank farms. Upon completion, the Salt Waste Processing Facility (SWPF) will be the cornerstone of the Savannah River Site (SRS) salt processing strategy. It is designed to be capable of processing of salt solution per year. The waste currently in storage at SRS includes approximately of salt solution that must be processed, of which are projected to be processed through SWPF. SWPF will use processes developed at Oak Ridge National Laboratory and Argonne National Laboratory, based on annular centrifugal contactors, as state-of-the-art methods for removing cesium-137, strontium-90, and actinides from SRS salt wastes. SWPF will remove approximately 99.998% of the cesium-137/barium-137 (metastable) activity while also removing strontium and actinides (Ref 1). Planned Deployment of SWPF Treatment Facility About of salt waste are currently stored in underground waste storage tanks at SRS. This waste, along with future salt waste forecast to be sent to the tank farms, will be processed through DDA, ARP/MCU, and the SWPF. DOE estimated in preparing the Section 3116 Determination that an additional 41.3 Mgal of unconcentrated salt waste would have been received by the Tank Farms between December 1, 2004, and the completion of salt waste processing. After both liquid removal by processing through the Tank Farm evaporator systems and later additions of liquid for saltcake dissolution and chemistry adjustments required for processing, approximately 84 Mgal (5.9 Mgal existing salt waste through the DDA process, 1.0 Mgal future salt waste through the DDA process, 2.1 Mgal existing and future salt waste through ARP/MCU, 69.1 Mgal existing salt waste through SWPF, and 5.9 Mgal future salt waste through SWPF) of salt solution will be processed by Interim Salt Processing and High Capacity Salt Processing, resulting in approximately 168 Mgal of grout output from the Saltstone Production Facility to be disposed of in the Saltstone Disposal Facility. (DOE Amended Decision) Planned Start Date Delayed The start date for SWPF operations has been delayed to allow for modification of the SWPF preliminary design to incorporate a higher degree of performance category (PC) in the confinement barriers necessary for worker protection during natural phenomena hazard events. The Defense Nuclear Facilities Safety Board initially identified concerns related to the PC designations of the SWPF in August 2004. DOE agreed in November 2005 to modify the SWPF design after extensive analysis and review, resulting in an approximate two-year delay in the planned startup of SWPF. DOE anticipates that it will continue to explore possible ways to improve the schedule for design and construction of the SWPF. It remains DOE's goal to complete processing of salt waste through the SWPF by 2019, although this date may need to be modified in the future. 
Despite this projected delay, DOE will not increase the quantity of waste (total curies) to be disposed of in the Saltstone Disposal Facility, nor increase the quantities (curies) processed with interim processes or SWPF from those described here and in the Draft Section 3116 Determination for Salt Waste Disposal at the Savannah River Site and the Section 3116 Determination for Salt Waste Disposal at the Savannah River Site. Therefore, the date change does not affect the analyses in the Section 3116 Determination for Salt Waste Disposal at the Savannah River Site, its supporting documents, or the Nuclear Regulatory Commission (NRC) consultation. The modified schedule is reflected in the Section 3116 Determination for Salt Waste Disposal at the Savannah River Site. However, the technical and programmatic documents that are referenced by the Section 3116 Determination for Salt Waste Disposal at the Savannah River Site have not been updated to reflect this new date because the schedule change did not occur until after those documents were completed. (DOE Amended Decision). See also Nuclear fuel cycle MOX fuel References Section 3116 Determination for Salt Waste Disposal at the Savannah River Site (February 28, 2005) Salt Waste Processing Facility Independent Technical Review (February 26, 2007) Independent Oversight Review of the Savannah River Site Salt Waste Processing Facility Safety Basis and Design Development (August 2013) Integrated Salt Waste Processing at the Savannah River Site, WM2015 Conference, March 15-19, 2015, Phoenix, Arizona USA (March 15, 2015) Salt Waste Processing Facility – Phase II – Aiken, SC (July 2017) Salt Waste Processing Facility Operations Underway (November 9, 2020) Savannah River’s Salt Waste Processing Facility begins full operations (January 25, 2021) External links Official website of the Savannah River Site Official website of Parsons Corporation Official website of the Department of Energy Website detailing geotechnical testing of the SWPF site. Savannah River Site Radioactive waste Nuclear technology in the United States
Salt Waste Processing Facility
[ "Chemistry", "Technology" ]
1,102
[ "Radioactive waste", "Environmental impact of nuclear power", "Radioactivity", "Hazardous waste" ]