| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
17,192,773 | https://en.wikipedia.org/wiki/Kink%20instability | A kink instability (also known as a kink oscillation or kink mode) is a current-driven plasma instability characterized by transverse displacements of a plasma column's cross-section from its center of mass without any change in the characteristics of the plasma. It typically develops in a thin plasma column carrying a strong axial current which exceeds the Kruskal–Shafranov limit and is sometimes known as the Kruskal–Shafranov (kink) instability, named after Martin David Kruskal and Vitaly Shafranov.
The kink instability was first widely explored in fusion power machines with Z-pinch configurations in the 1950s. It is one of the common magnetohydrodynamic instability modes which can develop in a pinch plasma and is sometimes referred to as the m = 1 mode. (The other is the m = 0 mode, known as the sausage instability.)
If a "kink" begins to develop in a column the magnetic forces on the inside of the kink become larger than those on the outside, which leads to growth of the perturbation. As it develops at fixed areas in the plasma, kinks belong to the class of "absolute plasma instabilities", as opposed to convective processes.
References
Plasma instabilities | Kink instability | [
"Physics"
] | 256 | [
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Plasma instabilities",
"Plasma physics stubs"
] |
7,995,248 | https://en.wikipedia.org/wiki/Coenzyme%20M | Coenzyme M is a coenzyme required for methyl-transfer reactions in the metabolism of archaeal methanogens, and in the metabolism of other substrates in bacteria. It is also a necessary cofactor in the metabolic pathway of alkene-oxidizing bacteria. CoM helps eliminate the toxic epoxides formed from the oxidation of alkenes such as propylene. The structure of this coenzyme was discovered by C. D. Taylor and R. S. Wolfe in 1974 while they were studying methanogenesis, the process by which carbon dioxide is transformed into methane in some archaea. The coenzyme is an anion with the formula HSCH2CH2SO3−. It is named 2-mercaptoethanesulfonate and abbreviated HS–CoM. The cation is unimportant, but the sodium salt is the most available. Mercaptoethanesulfonate contains both a thiol, which is the main site of reactivity, and a sulfonate group, which confers solubility in aqueous media.
Biochemical role
Methanogenesis
The coenzyme is the C1 donor in methanogenesis. It is converted to methyl-coenzyme M, the thioether CH3–S–CH2CH2SO3−, in the penultimate step to methane formation. Methyl-coenzyme M reacts with coenzyme B, 7-thioheptanoylthreoninephosphate, to give a heterodisulfide, releasing methane:
CH3–S–CoM + HS–CoB → CH4 + CoB–S–S–CoM
This reaction is catalyzed by the enzyme methyl-coenzyme M reductase, which contains cofactor F430 as its prosthetic group.
CH3-S-CoM is produced by the MtaA-catalyzed reaction between a methylated version of monomethylamine corrinoid protein MtmC and HS-CoM. The methylated version of MtmC is in turn produced by a cobamide-dependent methyltransferase that uses trimethylamine (TMA), dimethylamine (DMA), or monomethylamine (MMA) as the methyl donor.
Alkene metabolism
Coenzyme M is also used to make acetoacetate from CO2 and propylene or ethylene in aerobic bacteria, specifically in bacteria that oxidize alkenes into epoxides. After the propylene (or another alkene) undergoes epoxidation and becomes epoxypropane, it is electrophilic and toxic. These epoxides react with DNA and proteins, impairing cell function. Alkene-oxidizing bacteria like Xanthobacter autotrophicus use a metabolic pathway in which CoM is conjugated with the aliphatic epoxide. This step creates a nucleophilic compound which can react with CO2. The eventual carboxylation produces acetoacetate, completing the breakdown of the propylene.
Biosynthesis
Bacteria and archaea use different synthetic routes, albeit both starting with phosphoenolpyruvate.
See also
Mesna – a cancer chemotherapy adjuvant with the same structure
References
Coenzymes
Thiols
Sulfonates | Coenzyme M | [
"Chemistry"
] | 682 | [
"Organic compounds",
"Coenzymes",
"Thiols"
] |
7,995,309 | https://en.wikipedia.org/wiki/Subsurface%20flow | Subsurface flow, in hydrology, is the flow of water beneath Earth's surface as part of the water cycle.
In the water cycle, when precipitation falls on the Earth's land, some of the water flows on the surface, forming streams and rivers. The remaining water, through infiltration, penetrates the soil and travels underground, hydrating the vadose zone soil and recharging aquifers, with the excess flowing as subsurface runoff. In hydrogeology, subsurface flow is described by the groundwater flow equation.
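The groundwater flow equation referenced here builds on Darcy's law. As a minimal, hedged sketch (the hydraulic conductivity and head values below are illustrative assumptions, not values from the text), the specific discharge of subsurface flow can be estimated as:

```python
def darcy_flux(hydraulic_conductivity_m_s, head_drop_m, flow_length_m):
    """Specific discharge q = K * (dh / dL) from Darcy's law, in m/s."""
    return hydraulic_conductivity_m_s * head_drop_m / flow_length_m

# Illustrative: a sandy aquifer (K ~ 1e-4 m/s) losing 2 m of head over 500 m of flow path
q = darcy_flux(1e-4, 2.0, 500.0)
print(f"Darcy flux: {q:.1e} m/s")  # ~4.0e-07 m/s
```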
Runoff
Water flows from areas where the water table is higher to areas where it is lower. This flow can be either surface runoff in rivers and streams, or subsurface runoff infiltrating rocks and soil. The amount of runoff reaching surface and groundwater can vary significantly, depending on rainfall, soil moisture, permeability, groundwater storage, evaporation, upstream use, and whether or not the ground is frozen. The movement of subsurface water is determined largely by the water gradient, type of substrate, and any barriers to flow. The groundwater flow may be through either confined or phreatic aquifers, with smaller flow systems overlying or within. The residence time generally ranges from several decades to many centuries, implying the establishment of a complete chemical equilibrium with the aquifer. Mapping scales are between 1:250,000 and 1:2,000,000. (see, for example, Engelen et al. 1988).
Surface return
Subsurface water may return to the surface in groundwater flow, such as from a spring, seep, or a water well, or by subsurface return to streams, rivers, and oceans. Water returns to the land surface at a lower elevation than where infiltration occurred, under the force of gravity or gravity-induced pressures. Groundwater tends to move slowly and is replenished slowly, so it can remain in aquifers for thousands of years. Ultimately, most of this water flows through the ground to the ocean, where the cycle begins again.
Subsurface flow
Flow within the soil body may take place under unsaturated conditions, but faster subsurface flow is associated with localized soil saturation.
See also
Artesian aquifer
Ecohydrology
Groundwater
Groundwater energy balance
Groundwater flow
Groundwater recharge
Vadose zone
Water cycle
References
enchantedlearning.com
tutor.com
Huggett, J. (2005) Fundamentals of Geomorphology, Routledge, Oxon.
Aquifers
Hydrology | Subsurface flow | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 519 | [
"Hydrology",
"Hydrology stubs",
"Aquifers",
"Environmental engineering"
] |
7,997,903 | https://en.wikipedia.org/wiki/Polyester%20resin | Polyester resins are synthetic resins formed by the reaction of dibasic organic acids and polyhydric alcohols. Maleic anhydride is a commonly used raw material with diacid functionality in unsaturated polyester resins. Unsaturated polyester resins are used in sheet moulding compound, bulk moulding compound and the toner of laser printers. Wall panels fabricated from polyester resins reinforced with fiberglass, so-called fiberglass reinforced plastic (FRP), are typically used in restaurants, kitchens, restrooms and other areas that require washable low-maintenance walls. They are also used extensively in cured-in-place pipe applications. Departments of Transportation in the USA also specify them for use as overlays on roads and bridges. In this application they are known as polyester concrete overlays (PCO). These are usually based on isophthalic acid and cut with styrene at high levels, usually up to 50%. Polyesters are also used in anchor bolt adhesives, though epoxy-based materials are also used. Many companies have introduced, and continue to introduce, styrene-free systems, mainly due to odor issues, but also over concerns that styrene is a potential carcinogen. Styrene-free systems are also preferred for drinking water applications. Most polyester resins are viscous, pale coloured liquids consisting of a solution of a polyester in a reactive diluent, which is usually styrene, but can also include vinyl toluene and various acrylates.
Unsaturated polyester
Unsaturated polyesters are condensation polymers formed by the reaction of polyols (also known as polyhydric alcohols), organic compounds with multiple alcohol or hydroxy functional groups, with unsaturated and in some cases saturated dibasic acids. Typical polyols used are glycols including ethylene glycol, propylene glycol, and diethylene glycol; typical acids used are phthalic acid, isophthalic acid, terephthalic acid, and maleic anhydride. Water, a condensation by-product of esterification reactions, is continuously removed by distillation, driving the reaction to completion via Le Chatelier's principle. Unsaturated polyesters are generally sold to parts manufacturers as a solution of resin in reactive diluent; styrene is the most common diluent and the industry standard. The diluent allows control over the viscosity of the resin, and is also a participant in the curing reaction. The initially liquid resin is converted to a solid by cross-linking chains. This is done by creating free radicals at unsaturated bonds, which propagate in a chain reaction to other unsaturated bonds in adjacent molecules, linking them in the process. Unsaturation is generally in the form of maleate and fumarate species along the polymer chain. Maleate/fumarate generally does not self-polymerize via radical reactions, but readily reacts with styrene. Maleic anhydride and styrene are known to form alternating copolymers, and are in fact the textbook case of this phenomenon. This is one reason that styrene has been so hard to displace in the market as the industry standard reactive diluent for unsaturated polyester resins, despite increasing efforts to displace the material such as California's Proposition 65. The initial free radicals are induced by adding a compound that easily decomposes into free radicals. This compound is known as the catalyst within the industry, but initiator is a more appropriate term. Transition metal salts are usually added as a catalyst for the chain-growth crosslinking reaction, and in the industry this type of additive is known as a promoter; the promoter is generally understood to lower the bond dissociation energy of the radical initiator. Cobalt salts are the most common type of promoter used. Common radical initiators used are organic peroxides such as benzoyl peroxide or methyl ethyl ketone peroxide.
Polyester resins are thermosetting and, as with other resins, cure exothermically. The use of excessive initiator, especially with a catalyst present, can therefore cause charring or even ignition during the curing process. Excessive catalyst may also cause the product to fracture or form a rubbery material.
Unsaturated polyesters (UPR) are utilized in many different industrially relevant markets, but in general are used as the matrix material for various types of composites. Glass fiber-reinforced composites comprise the largest segment in which UPRs are used and can be processed via SMC, BMC, pultrusion, cured-in-place pipe (known as relining in Europe), filament winding, vacuum molding, spray-up molding, and resin transfer molding (RTM). Wind turbine blades are a further major application, among many other processes. UPRs are also used in non-reinforced applications, with common examples being gel coats, shirt buttons, mine-bolts, bowling ball cores, polymer concrete, and engineered stone/cultured marble.
Chemistry
In organic chemistry, an ester is formed as the condensation product of a carboxylic acid and an alcohol, with water formed as the condensate by-product. An ester can also be produced with an acyl halide and an alcohol, in which case the condensate by-product is a hydrogen halide.
Polyesters are a category of polymers in which ester functionality repeats within the main chain. Polyesters are a classic example of step-growth polymer, in which a difunctional (or higher order) acid or acyl halide is reacted with a difunctional (or higher order) alcohol. Polyesters are produced commercially both as saturated and unsaturated resins. The most common and highest volume produced polyester is Polyethylene terephthalate (PET), which is an example of a saturated polyester and finds utilization in such applications as fibers for clothing and carpet, food and liquid containers (such as a water/soda bottles), as well as films.
In unsaturated polyester (UPR) chemistry, unsaturation sites are present along the chain, usually by incorporation of maleic anhydride, but maleic acid and fumaric acid are also used. Maleic acid and fumaric acid are isomers where maleic is the cis-isomer and fumaric is the trans-isomer. The ester forms of these two molecules are maleate and fumarate, respectively. When curing a UPR, the fumarate form is known to react more rapidly with the styrene radical, so isomerization catalysts, such as N,N-dimethylacetoacetamide (DMAA), are often employed in the synthesis process which converts the maleates into fumarates; the isomerization can also be encouraged with increased reaction time and temperature.
Within the UPR industry, the classification of the resins is generally based on the primary saturated acid. For example, a resin containing primarily terephthalic acid is known as a Tere resin, a resin containing primarily phthalic anhydride is known as an Ortho resin, and a resin containing primarily isophthalic acid is known as an Iso resin. Dicyclopentadiene (DCPD) is also a common UPR raw material, and can be incorporated two different ways. In one process, the DCPD is cracked in situ to form cyclopentadiene which can then be reacted with maleate/fumarate groups along the polymer chain via a Diels-alder reaction. This type of resin is known as a Nadic resin and is referred to as a poor man's Ortho, due to sharing many similar properties of an Ortho resin along with the extremely low cost of DCPD raw material. In another process, maleic anhydride is first opened with water or another alcohol to form maleic acid and is then reacted with DCPD where an alcohol from the maleic acid reacts across one of the double bonds of the DCPD. This product is then used to end-cap the UPR resin which yields a product with unsaturation on the end-groups. This type of resin is referred to as a DCPD resin.
Ortho resins comprise the most common type of UPR, and many are known as general purpose resins. FRP composites utilizing ortho resins are found in such application as boat hulls, bath ware, and bowling ball cores.
Iso resins are generally on the higher end of UPR products, both because of the relatively higher cost of the isophthalic acid as well as the superior properties they possess. Iso resins are the primary type of resin used in gel coat applications, which is similar to a paint, but is sprayed into a mold before the FRP is molded leaving a coating on the part. Gel coat resins must have lower color (almost clear) so as to not impart additional color to the part or so that they can be dyed properly. Gel coats must also have strong resistance to UV-weathering and water blistering.
Tere resins are often used when high modulus and strength are desired, but the low color properties of an Iso resin is not necessary. Terephthalic acid is generally lower cost than isophthalic acid, but both give similar strength characteristics to a UPR product. There exists a special sub-set of Tere resins, known as PET UPR resins, which are produced by catalytically cracking PET resin in the reactor to yield a mixture of terephthalic acid and ethylene glycol. Additional acids and glycols are then added along with maleic anhydride and a new polymer is produced. The end product is functionally the same as a Tere resin, but can often be lower cost to manufacture as scrap PET can be sourced cheaply. If a glycol-modified PET (PET-G) is used, exceptional properties can be imparted to the resin due to some of the exotic materials used in PET-G production. Tere and PET-UPR resins are used in many applications including cured-in-place pipe.
Biodegradation
Lichens have been shown to deteriorate polyester resins, as can be seen in archaeological sites in the Roman city of Baelo Claudia, Spain.
Advantages
Polyester resin offers the following advantages:
Adequate resistance to water and a variety of chemicals.
Adequate resistance to weathering and ageing.
Low cost.
Polyesters can withstand a temperature up to 80 °C.
Polyesters have good wetting to glass fibres.
Relatively low shrinkage of 4–8% during curing.
Linear thermal expansion coefficient in the range 100–200 × 10−6 K−1.
Disadvantages
Polyester resin has the following disadvantages:
Strong styrene odour
More difficult to mix than other resins, such as a two-part epoxy
The toxic nature of its fumes, and especially of its catalyst, MEKP, poses a safety risk if proper protection isn't used
Not appropriate for bonding many substrates
The finished cure is most likely weaker than an equal amount of an epoxy resin
See also
Polyester
Styrene
Thermoset polymer matrix
Thermosetting polymer
Vinyl ester
References
Polyesters
Synthetic resins
Thermosetting plastics
Polymer chemistry | Polyester resin | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,429 | [
"Synthetic materials",
"Polymer chemistry",
"Materials science",
"Synthetic resins"
] |
7,999,492 | https://en.wikipedia.org/wiki/Sediment%20transport | Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Environments
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion of fields of sand.
Wind-blown very fine-grained dust is capable of entering the upper atmosphere and moving across the globe. Dust from the Sahara deposits on the Canary Islands and islands in the Caribbean, and dust from the Gobi Desert has deposited on the western United States. This sediment is important to the soil budget and ecology of several islands.
Deposits of fine-grained wind-blown glacial sediment are called loess.
Fluvial
Coastal
Coastal sediment transport takes place in near-shore environments due to the motions of waves and currents. At the mouths of rivers, coastal sediment and fluvial sediment transport processes mesh to create river deltas.
Coastal sediment transport results in the formation of characteristic coastal landforms such as beaches, barrier islands, and capes.
Glacial
As glaciers move over their beds, they entrain and move material of all sizes. Glaciers can carry the largest sediment, and areas of glacial deposition often contain a large number of glacial erratics, many of which are several metres in diameter. Glaciers also pulverize rock into "glacial flour", which is so fine that it is often carried away by winds to create loess deposits thousands of kilometres afield. Sediment entrained in glaciers often moves approximately along the glacial flowlines, causing it to appear at the surface in the ablation zone.
Hillslope
In hillslope sediment transport, a variety of processes move regolith downslope. These include:
Soil creep
Tree throw
Movement of soil by burrowing animals
Slumping and landsliding of the hillslope
These processes generally combine to give the hillslope a profile that looks like a solution to the diffusion equation, where the diffusivity is a parameter that relates to the ease of sediment transport on the particular hillslope. For this reason, the tops of hills generally have a parabolic convex-up profile, which grades into a concave-up profile around valleys.
As hillslopes steepen, however, they become more prone to episodic landslides and other mass wasting events. Therefore, hillslope processes are better described by a nonlinear diffusion equation in which classic diffusion dominates for shallow slopes and erosion rates go to infinity as the hillslope reaches a critical angle of repose.
Debris flow
Large masses of material are moved in debris flows, hyperconcentrated mixtures of mud, clasts that range up to boulder-size, and water. Debris flows move as granular flows down steep mountain valleys and washes. Because they transport sediment as a granular mixture, their transport mechanisms and capacities scale differently from those of fluvial systems.
Applications
Sediment transport is applied to solve many environmental, geotechnical, and geological problems. Measuring or quantifying sediment transport or erosion is therefore important for coastal engineering. Several sediment erosion devices have been designed in order to quantify sediment erosion (e.g., Particle Erosion Simulator (PES)). One such device, also referred to as the BEAST (Benthic Environmental Assessment Sediment Tool) has been calibrated in order to quantify rates of sediment erosion.
Movement of sediment is important in providing habitat for fish and other organisms in rivers. Therefore, managers of highly regulated rivers, which are often sediment-starved due to dams, are often advised to stage short floods to refresh the bed material and rebuild bars. This is also important, for example, in the Grand Canyon of the Colorado River, to rebuild shoreline habitats also used as campsites.
Sediment discharge into a reservoir formed by a dam forms a reservoir delta. This delta will fill the basin, and eventually, either the reservoir will need to be dredged or the dam will need to be removed. Knowledge of sediment transport can be used to properly plan to extend the life of a dam.
Geologists can use inverse solutions of transport relationships to understand flow depth, velocity, and direction, from sedimentary rocks and young deposits of alluvial materials.
Flow in culverts, over dams, and around bridge piers can cause erosion of the bed. This erosion can damage the environment and expose or unsettle the foundations of the structure. Therefore, good knowledge of the mechanics of sediment transport in a built environment are important for civil and hydraulic engineers.
When suspended sediment transport is increased due to human activities, causing environmental problems including the filling of channels, it is called siltation after the grain-size fraction dominating the process.
Initiation of motion
Stress balance
For a fluid to begin transporting sediment that is currently at rest on a surface, the boundary (or bed) shear stress $\tau_b$ exerted by the fluid must exceed the critical shear stress $\tau_c$ for the initiation of motion of grains at the bed. This basic criterion for the initiation of motion can be written as:
$$\tau_b \ge \tau_c.$$
This is typically represented by a comparison between a dimensionless shear stress $\tau^*_b$ and a dimensionless critical shear stress $\tau^*_c$. The nondimensionalization is in order to compare the driving forces of particle motion (shear stress) to the resisting forces that would make it stationary (particle density and size). This dimensionless shear stress, $\tau^*$, is called the Shields parameter and is defined as:
$$\tau^* = \frac{\tau_b}{(\rho_s - \rho)\, g\, D}.$$
And the new equation to solve becomes:
$$\tau^*_b \ge \tau^*_c.$$
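A minimal sketch of the criterion above, assuming quartz-density sediment in water and the classic uniform-bed critical Shields value of 0.06 discussed later in this article; the shear stress and grain size in the example call are illustrative assumptions.

```python
RHO_S = 2650.0  # sediment (quartz) density, kg/m^3
RHO = 1000.0    # water density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def shields_parameter(tau_b, grain_diameter):
    """Dimensionless (Shields) shear stress for a bed shear stress (Pa) and grain diameter (m)."""
    return tau_b / ((RHO_S - RHO) * G * grain_diameter)

def motion_initiated(tau_b, grain_diameter, tau_star_crit=0.06):
    """True if the dimensionless bed shear stress meets or exceeds the critical value."""
    return shields_parameter(tau_b, grain_diameter) >= tau_star_crit

# Example: 10 Pa of bed shear stress acting on 5 mm gravel
print(shields_parameter(10.0, 0.005))  # ~0.12
print(motion_initiated(10.0, 0.005))   # True
```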
The equations included here describe sediment transport for clastic, or granular sediment. They do not work for clays and muds because these types of floccular sediments do not fit the geometric simplifications in these equations, and also interact through electrostatic forces. The equations were also designed for fluvial sediment transport of particles carried along in a liquid flow, such as that in a river, canal, or other open channel.
Only one size of particle is considered in this equation. However, river beds are often formed by a mixture of sediment of various sizes. In the case of partial motion, where only a part of the sediment mixture moves, the river bed becomes enriched in large gravel as the smaller sediments are washed away. The smaller sediments present under this layer of large gravel have a lower probability of movement, and total sediment transport decreases. This is called the armouring effect. Other forms of armouring of sediment or decreasing rates of sediment erosion can be caused by carpets of microbial mats, under conditions of high organic loading.
Critical shear stress
The Shields diagram empirically shows how the dimensionless critical shear stress (i.e. the dimensionless shear stress required for the initiation of motion) is a function of a particular form of the particle Reynolds number, or Reynolds number related to the particle. This allows the criterion for the initiation of motion to be rewritten in terms of a solution for a specific version of the particle Reynolds number, called $\mathrm{Re}_*$.
This can then be solved by using the empirically derived Shields curve to find $\tau^*_c$ as a function of a specific form of the particle Reynolds number called the boundary Reynolds number. The mathematical solution of the equation was given by Dey.
Particle Reynolds number
In general, a particle Reynolds number has the form:
$$\mathrm{Re}_p = \frac{U_p D}{\nu}$$
where $U_p$ is a characteristic particle velocity, $D$ is the grain diameter (a characteristic particle size), and $\nu$ is the kinematic viscosity, which is given by the dynamic viscosity, $\mu$, divided by the fluid density, $\rho$.
The specific particle Reynolds number of interest is called the boundary Reynolds number, and it is formed by replacing the velocity term in the particle Reynolds number by the shear velocity, $u_*$, which is a way of rewriting shear stress in terms of velocity:
$$u_* = \sqrt{\frac{\tau_b}{\rho}} = \kappa z \frac{\partial u}{\partial z},$$
where $\tau_b$ is the bed shear stress (described below), and $\kappa$ is the von Kármán constant, where
$$\kappa = 0.4.$$
The particle Reynolds number is therefore given by:
$$\mathrm{Re}_* = \frac{u_* D}{\nu}.$$
Bed shear stress
The boundary Reynolds number can be used with the Shields diagram to empirically solve the equation
$$\tau^*_c = f(\mathrm{Re}_*),$$
which solves the right-hand side of the equation
$$\tau^*_b \ge \tau^*_c.$$
In order to solve the left-hand side, expanded as
$$\tau^*_b = \frac{\tau_b}{(\rho_s - \rho)\, g\, D},$$
the bed shear stress, $\tau_b$, needs to be found. There are several ways to solve for the bed shear stress. The simplest approach is to assume the flow is steady and uniform, using the reach-averaged depth and slope. Because it is difficult to measure shear stress in situ, this method is also one of the most commonly used. The method is known as the depth-slope product.
Depth-slope product
For a river undergoing approximately steady, uniform equilibrium flow, of approximately constant depth h and slope angle θ over the reach of interest, and whose width is much greater than its depth, the bed shear stress is given by some momentum considerations stating that the gravity force component in the flow direction equals exactly the friction force. For a wide channel, it yields:
$$\tau_b = \rho g h \sin\theta.$$
For shallow slope angles, which are found in almost all natural lowland streams, the small-angle formula shows that $\sin\theta$ is approximately equal to $\tan\theta$, which is given by $S$, the slope. Rewritten with this:
$$\tau_b = \rho g h S.$$
Shear velocity, velocity, and friction factor
For the steady case, by extrapolating the depth-slope product and the equation for shear velocity:
$$u_* = \sqrt{\frac{\tau_b}{\rho}} = \sqrt{g h S},$$
The depth-slope product can be rewritten as:
$$\tau_b = \rho u_*^2.$$
$u_*$ is related to the mean flow velocity, $\bar{u}$, through the generalized Darcy–Weisbach friction factor, $C_f$, which is equal to the Darcy–Weisbach friction factor divided by 8 (for mathematical convenience). Inserting this friction factor,
$$\tau_b = \rho C_f \bar{u}^2.$$
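A short sketch of the depth-slope product and the resulting shear velocity, under the steady, uniform, wide-channel assumptions stated above; the depth and slope values are illustrative.

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def bed_shear_stress(depth_m, slope):
    """Reach-averaged bed shear stress from the depth-slope product, in Pa."""
    return RHO * G * depth_m * slope

def shear_velocity(depth_m, slope):
    """Shear velocity u* = sqrt(tau_b / rho) = sqrt(g h S), in m/s."""
    return (G * depth_m * slope) ** 0.5

h, S = 2.0, 0.001  # a 2 m deep river with a slope of 0.001
print(bed_shear_stress(h, S))  # ~19.6 Pa
print(shear_velocity(h, S))    # ~0.14 m/s
```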
Unsteady flow
For all flows that cannot be simplified as a single-slope infinite channel (as in the depth-slope product, above), the bed shear stress can be locally found by applying the Saint-Venant equations for continuity, which consider accelerations within the flow.
Example
Set-up
The criterion for the initiation of motion, established earlier, states that
$$\tau^*_b \ge \tau^*_c.$$
In this equation,
$$\tau^*_b = \frac{\tau_b}{(\rho_s - \rho)\, g\, D},$$ and therefore
$$\frac{\tau_b}{(\rho_s - \rho)\, g\, D} \ge \tau^*_c.$$
$\tau^*_c$ is a function of the boundary Reynolds number, a specific type of particle Reynolds number:
$$\mathrm{Re}_* = \frac{u_* D}{\nu}.$$
For a particular particle Reynolds number, $\tau^*_c$ will be an empirical constant given by the Shields Curve or by another set of empirical data (depending on whether or not the grain size is uniform).
Therefore, the final equation to solve is:
$$\frac{\tau_b}{(\rho_s - \rho)\, g\, D} \ge \tau^*_c.$$
Solution
Some assumptions allow the solution of the above equation.
The first assumption is that a good approximation of reach-averaged shear stress is given by the depth-slope product. The equation then can be rewritten as:
$$\frac{\rho g h S}{(\rho_s - \rho)\, g\, D} \ge \tau^*_c.$$
Moving and re-combining the terms produces:
$$h S \ge \frac{\rho_s - \rho}{\rho}\, D\, \tau^*_c = R\, D\, \tau^*_c,$$
where R is the submerged specific gravity of the sediment.
The second assumption is that the particle Reynolds number is high. This typically applies to particles of gravel-size or larger in a stream, and means the critical shear stress is constant. The Shields curve shows that for a bed with a uniform grain size,
$$\tau^*_c = 0.06.$$
Later researchers have shown this value is closer to
$$\tau^*_c = 0.03$$
for more uniformly sorted beds. Therefore the symbol $\tau^*_c$ is retained in the expression, and both values are inserted at the end.
The equation now reads:
$$h S = \tau^*_c\, R\, D.$$
This final expression shows that the product of the channel depth and slope is equal to the Shields criterion times the submerged specific gravity of the particles times the particle diameter.
For a typical situation, such as quartz-rich sediment in water, the submerged specific gravity is equal to 1.65.
Plugging this into the equation above,
$$h S = 1.65\, \tau^*_c\, D.$$
For the Shields criterion of $\tau^*_c = 0.06$, $0.06 \cdot 1.65 = 0.099$, which is well within standard margins of error of 0.1. Therefore, for a uniform bed,
$$h S = 0.1\, D.$$
For these situations, the product of the depth and slope of the flow should be 10% of the grain diameter.
The mixed-grain-size bed value is $\tau^*_c = 0.03$, which is supported by more recent research as being more broadly applicable because most natural streams have mixed grain sizes. If this value is used, and D is changed to $D_{50}$ ("50" for the 50th percentile, or the median grain size, as an appropriate value for a mixed-grain-size bed), the equation becomes:
$$h S = 0.05\, D_{50},$$
which means that the depth times the slope should be about 5% of the median grain diameter in the case of a mixed-grain-size bed.
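The worked example above reduces to a one-line calculation; the sketch below simply reproduces it for the uniform-bed (0.06) and mixed-bed (0.03) critical values, with an illustrative 2 cm grain size.

```python
def required_depth_slope_product(d50_m, shields_crit=0.06, R=1.65):
    """Depth-slope product h*S needed to move grains of size d50, from h*S = tau*_c * R * d50."""
    return shields_crit * R * d50_m

d50 = 0.02  # 2 cm gravel
print(required_depth_slope_product(d50, shields_crit=0.06))  # ~0.0020, i.e. ~10% of d50 (uniform bed)
print(required_depth_slope_product(d50, shields_crit=0.03))  # ~0.0010, i.e. ~5% of d50 (mixed bed)
```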
Modes of entrainment
The sediments entrained in a flow can be transported along the bed as bed load in the form of sliding and rolling grains, or in suspension as suspended load advected by the main flow. Some sediment materials may also come from the upstream reaches and be carried downstream in the form of wash load.
Rouse number
The location in the flow in which a particle is entrained is determined by the Rouse number. The density ρs and diameter d of the sediment particle, and the density ρ and kinematic viscosity ν of the fluid, determine in which part of the flow the sediment particle will be carried.
Here, the Rouse number is given by P:
$$P = \frac{w_s}{\kappa u_*}.$$
The term in the numerator is the (downwards) sediment settling velocity ws, which is discussed below. The upwards velocity on the grain is given as a product of the von Kármán constant, κ = 0.4, and the shear velocity, u∗.
The following table gives the approximate required Rouse numbers for transport as bed load, suspended load, and wash load.
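The table itself is not reproduced in this excerpt. As a hedged illustration of how the Rouse number is used, the sketch below only evaluates P = ws/(κu∗); qualitatively, large P means settling dominates (bed-load end of the spectrum) and small P means turbulent mixing dominates (wash-load end). The exact threshold values should be taken from the original table.

```python
def rouse_number(settling_velocity, shear_velocity, kappa=0.4):
    """Rouse number P = w_s / (kappa * u*): settling strength versus turbulent mixing."""
    return settling_velocity / (kappa * shear_velocity)

# Example: fine sand settling at 0.02 m/s in a flow with shear velocity u* = 0.1 m/s
P = rouse_number(0.02, 0.1)
print(P)  # 0.5 -> mixing dominates, so the grain is carried high in the flow
```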
Settling velocity
The settling velocity (also called the "fall velocity" or "terminal velocity") is a function of the particle Reynolds number. Generally, for small particles (laminar approximation), it can be calculated with Stokes' Law. For larger particles (turbulent particle Reynolds numbers), fall velocity is calculated with the turbulent drag law. Dietrich (1982) compiled a large amount of published data to which he empirically fit settling velocity curves. Ferguson and Church (2006) analytically combined the expressions for Stokes flow and a turbulent drag law into a single equation that works for all sizes of sediment, and successfully tested it against the data of Dietrich. Their equation is
$$w_s = \frac{R g D^2}{C_1 \nu + \sqrt{0.75\, C_2\, R\, g\, D^3}}.$$
In this equation ws is the sediment settling velocity, g is acceleration due to gravity, D is mean sediment diameter, and R is the submerged specific gravity of the sediment. $\nu$ is the kinematic viscosity of water, which is approximately 1.0 × 10−6 m2/s for water at 20 °C.
$C_1$ and $C_2$ are constants related to the shape and smoothness of the grains.
The expression for fall velocity can be simplified so that it can be solved only in terms of D. We use the sieve diameters for natural grains ($C_1 = 18$, $C_2 = 1.0$), $g = 9.8$ m/s2, and the values given above for $\nu$ and $R$. From these parameters, the fall velocity is given by the expression:
$$w_s = \frac{16.17\, D^2}{1.8 \times 10^{-5} + \sqrt{12.1275\, D^3}}.$$
Alternatively, the settling velocity for a particle of sediment can be derived using Stokes' law, assuming a quiescent (or still) fluid in steady state. The resulting formulation for settling velocity is
$$w_s = \frac{(\rho_s - \rho)\, g\, D^2}{18\, \mu},$$
where $g$ is the gravitational constant; $\rho_s$ is the density of the sediment; $\rho$ is the density of water; $D$ is the sediment particle diameter (commonly assumed to be the median particle diameter, often referred to as $D_{50}$ in field studies); and $\mu$ is the molecular viscosity of water. The Stokes settling velocity can be thought of as the terminal velocity resulting from balancing the fluid drag on the particle with its submerged weight. Small particles will have a slower settling velocity than larger particles, as seen in the figure. This has implications for many aspects of sediment transport, for example, how far downstream a particle might be advected in a river.
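A hedged sketch of both settling-velocity estimates discussed above. The Stokes form follows directly from the formula just given; the Ferguson–Church form uses the constants C1 = 18 and C2 = 1.0 assumed above for natural sieve-diameter grains.

```python
G = 9.81        # m/s^2
RHO_S = 2650.0  # sediment density, kg/m^3
RHO = 1000.0    # water density, kg/m^3
MU = 1.0e-3     # dynamic viscosity of water at ~20 degC, Pa*s
NU = 1.0e-6     # kinematic viscosity of water at ~20 degC, m^2/s

def stokes_settling_velocity(d):
    """Stokes' law settling velocity (m/s); valid for small (laminar) particles."""
    return (RHO_S - RHO) * G * d ** 2 / (18.0 * MU)

def ferguson_church_settling_velocity(d, c1=18.0, c2=1.0):
    """Ferguson & Church settling velocity (m/s), bridging the Stokes and turbulent-drag regimes."""
    R = (RHO_S - RHO) / RHO  # submerged specific gravity, ~1.65 for quartz
    return R * G * d ** 2 / (c1 * NU + (0.75 * c2 * R * G * d ** 3) ** 0.5)

for d in (1e-4, 1e-3):  # 0.1 mm (very fine sand) and 1 mm (coarse sand)
    print(d, stokes_settling_velocity(d), ferguson_church_settling_velocity(d))
```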
Hjulström–Sundborg diagram
In 1935, Filip Hjulström created the Hjulström curve, a graph which shows the relationship between the size of sediment and the velocity required to erode (lift it), transport it, or deposit it. The graph is logarithmic.
Åke Sundborg later modified the Hjulström curve to show separate curves for the movement threshold corresponding to several water depths, as is necessary if the flow velocity rather than the boundary shear stress (as in the Shields diagram) is used for the flow strength.
This curve has little more than historical value nowadays, although its simplicity is still attractive. Among the drawbacks of this curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity deceleration and erosion is caused by flow acceleration. The dimensionless Shields diagram is now unanimously accepted for the initiation of sediment motion in rivers.
Transport rate
Formulas to calculate sediment transport rate exist for sediment moving in several different parts of the flow. These formulas are often segregated into bed load, suspended load, and wash load. They may sometimes also be segregated into bed material load and wash load.
Bed load
Bed load moves by rolling, sliding, and hopping (or saltating) over the bed, and moves at a small fraction of the fluid flow velocity. Bed load is generally thought to constitute 5–10% of the total sediment load in a stream, making it less important in terms of mass balance. However, the bed material load (the bed load plus the portion of the suspended load which comprises material derived from the bed) is often dominated by bed load, especially in gravel-bed rivers. This bed material load is the only part of the sediment load that actively interacts with the bed. As the bed load is an important component of that, it plays a major role in controlling the morphology of the channel.
Bed load transport rates are usually expressed as being related to excess dimensionless shear stress raised to some power. Excess dimensionless shear stress is a nondimensional measure of bed shear stress about the threshold for motion.
$$\left(\tau^*_b - \tau^*_c\right),$$
Bed load transport rates may also be given by a ratio of bed shear stress to critical shear stress, which is equivalent in both the dimensional and nondimensional cases. This ratio is called the "transport stage" and is important in that it shows the bed shear stress as a multiple of the value of the criterion for the initiation of motion.
When used for sediment transport formulae, this ratio is typically raised to a power.
The majority of the published relations for bedload transport are given in dry sediment weight per unit channel width, $b$ ("breadth"):
$$q_s = \frac{Q_s}{b}.$$
Due to the difficulty of estimating bed load transport rates, these equations are typically only suitable for the situations for which they were designed.
Notable bed load transport formulae
Meyer-Peter Müller and derivatives
The transport formula of Meyer-Peter and Müller, originally developed in 1948, was designed for well-sorted fine gravel at a transport stage of about 8. The formula uses the above nondimensionalization for shear stress,
$$\tau^* = \frac{\tau_b}{(\rho_s - \rho)\, g\, D},$$
and Hans Einstein's nondimensionalization for sediment volumetric discharge per unit width,
$$q_s^* = \frac{q_s}{D \sqrt{\frac{\rho_s - \rho}{\rho} g D}} = \frac{q_s}{D \sqrt{R g D}}.$$
Their formula reads:
$$q_s^* = 8\left(\tau^* - \tau^*_c\right)^{3/2}.$$
Their experimentally determined value for $\tau^*_c$ is 0.047, and is the third commonly used value for this (in addition to Parker's 0.03 and Shields' 0.06).
Because of its broad use, some revisions to the formula have taken place over the years that show that the coefficient on the left ("8" above) is a function of the transport stage:
The variations in the coefficient were later generalized as a function of dimensionless shear stress:
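(The revised coefficients referred to just above are not reproduced in this excerpt.) A minimal sketch of the original Meyer-Peter and Müller relation as reconstructed above (coefficient 8, critical Shields stress 0.047), re-dimensionalized with the Einstein scaling; the grain size and shear stress in the example are illustrative assumptions.

```python
G = 9.81
RHO_S = 2650.0
RHO = 1000.0
R = (RHO_S - RHO) / RHO  # submerged specific gravity

def mpm_bedload(tau_b, d, tau_star_crit=0.047):
    """Meyer-Peter and Muller bed load transport per unit width.

    tau_b: bed shear stress (Pa); d: grain diameter (m).
    Returns the volumetric transport rate q_s (m^2/s); zero below threshold.
    """
    tau_star = tau_b / ((RHO_S - RHO) * G * d)   # Shields stress
    excess = max(tau_star - tau_star_crit, 0.0)  # excess Shields stress
    q_star = 8.0 * excess ** 1.5                 # dimensionless transport rate
    return q_star * d * (R * G * d) ** 0.5       # back to dimensional form (Einstein scaling)

print(mpm_bedload(tau_b=15.0, d=0.005))  # fine gravel under 15 Pa of bed shear stress
```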
Wilcock and Crowe
In 2003, Peter Wilcock and Joanna Crowe (now Joanna Curran) published a sediment transport formula that works with multiple grain sizes across the sand and gravel range. Their formula works with surface grain size distributions, as opposed to older models which use subsurface grain size distributions (and thereby implicitly infer a surface grain sorting).
Their expression is more complicated than the basic sediment transport rules (such as that of Meyer-Peter and Müller) because it takes into account multiple grain sizes: this requires consideration of reference shear stresses for each grain size, the fraction of the total sediment supply that falls into each grain size class, and a "hiding function".
The "hiding function" takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with several grain sizes simultaneously, they define the critical shear stress for each grain size class, , to be equal to a "reference shear stress", .
They express their equations in terms of a dimensionless transport parameter, (with the "" indicating nondimensionality and the "" indicating that it is a function of grain size):
is the volumetric bed load transport rate of size class per unit channel width . is the proportion of size class that is present on the bed.
They came up with two equations, depending on the transport stage, . For :
and for :
.
This equation asymptotically reaches a constant value of as becomes large.
Wilcock and Kenworthy
In 2002, Peter Wilcock and T. A. Kenworthy, following Peter Wilcock (1998), published a sediment bed-load transport formula that works with only two sediment fractions, i.e. sand and gravel fractions. A mixed-sized sediment bed-load transport model using only two fractions offers practical advantages in terms of both computational and conceptual modeling by taking into account the nonlinear effects of sand presence in gravel beds on the bed-load transport rate of both fractions. In fact, in the two-fraction bed load formula a new ingredient appears with respect to that of Meyer-Peter and Müller, namely the proportion of each fraction on the bed surface, where the subscript represents either the sand (s) or gravel (g) fraction. The proportion, as a function of sand content, physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to matrix-supported gravel bed. Moreover, since the sand content spans between 0 and 1, phenomena that vary with it include the relative size effects producing "hiding" of fine grains and "exposure" of coarse grains.
The "hiding" effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size, which the Meyer-Peter and Müller formula refers to. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, i.e. , or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with only two fractions simultaneously, they define the critical shear stress for each of the two grain size classes, , where represents either the sand (s) or gravel (g) fraction. The critical shear stress that represents the incipient motion for each of the two fractions is consistent with established values in the limit of pure sand and gravel beds and shows a sharp change with increasing sand content over the transition from a clast- to matrix-supported bed.
They express their equations in terms of a dimensionless transport parameter, (with the "" indicating nondimensionality and the "" indicating that it is a function of grain size):
is the volumetric bed load transport rate of size class per unit channel width . is the proportion of size class that is present on the bed.
They came up with two equations, depending on the transport stage, . For :
and for :
.
This equation asymptotically reaches a constant value of as becomes large and the symbols have the following values:
In order to apply the above formulation, it is necessary to specify the characteristic grain sizes for the sand portion and for the gravel portion of the surface layer, the fractions and of sand and gravel, respectively in the surface layer, the submerged specific gravity of the sediment R and shear velocity associated with skin friction .
Kuhnle et al.
For the case in which sand fraction is transported by the current over and through an immobile gravel bed, Kuhnle et al.(2013), following the theoretical analysis done by Pellachini (2011), provides a new relationship for the bed load transport of the sand fraction when gravel particles remain at rest. It is worth mentioning that Kuhnle et al. (2013) applied the Wilcock and Kenworthy (2002) formula to their experimental data and found out that predicted bed load rates of sand fraction were about 10 times greater than measured and approached 1 as the sand elevation became near the top of the gravel layer. They, also, hypothesized that the mismatch between predicted and measured sand bed load rates is due to the fact that the bed shear stress used for the Wilcock and Kenworthy (2002) formula was larger than that available for transport within the gravel bed because of the sheltering effect of the gravel particles.
To overcome this mismatch, following Pellachini (2011), they assumed that the variability of the bed shear stress available for the sand to be transported by the current would be some function of the so-called "Roughness Geometry Function" (RGF), which represents the gravel bed elevations distribution. Therefore, the sand bed load formula follows as:
where
the subscript refers to the sand fraction, s represents the ratio where is the sand fraction density, is the RGF as a function of the sand level within the gravel bed, is the bed shear stress available for sand transport and is the critical shear stress for incipient motion of the sand fraction, which was calculated graphically using the updated Shields-type relation of Miller et al.(1977).
Suspended load
Suspended load is carried in the lower to middle parts of the flow, and moves at a large fraction of the mean flow velocity in the stream.
A common characterization of suspended sediment concentration in a flow is given by the Rouse Profile. This characterization works for the situation in which the sediment concentration $c_a$ at one particular elevation above the bed, $z = a$, can be quantified. It is given by the expression:
$$\frac{c_s(z)}{c_a} = \left[\frac{(h - z)/z}{(h - a)/a}\right]^{P/\alpha}$$
Here, $z$ is the elevation above the bed, $c_s$ is the concentration of suspended sediment at that elevation, $h$ is the flow depth, $P$ is the Rouse number, and $\alpha$ relates the eddy viscosity for momentum to the eddy diffusivity for sediment, which is approximately equal to one.
Experimental work has shown that $\alpha$ ranges from 0.93 to 1.10 for sands and silts.
The Rouse profile characterizes sediment concentrations because the Rouse number includes both turbulent mixing and settling under the weight of the particles. Turbulent mixing results in the net motion of particles from regions of high concentrations to low concentrations. Because particles settle downward, for all cases where the particles are not neutrally buoyant or sufficiently light that this settling velocity is negligible, there is a net negative concentration gradient as one goes upward in the flow. The Rouse Profile therefore gives the concentration profile that provides a balance between turbulent mixing (net upwards) of sediment and the downwards settling velocity of each particle.
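A small sketch evaluating the Rouse profile in the form given above, with an illustrative reference concentration near the bed; all numbers are assumptions for demonstration.

```python
def rouse_profile(z, h, a, c_a, P, alpha=1.0):
    """Suspended sediment concentration at height z above the bed.

    h: flow depth; a: reference elevation where the concentration c_a is known;
    P: Rouse number; alpha: eddy viscosity / eddy diffusivity ratio (~1).
    """
    return c_a * (((h - z) / z) * (a / (h - a))) ** (P / alpha)

# Concentration profile for a 2 m deep flow, reference concentration 1.0 at z = 0.05 m
for z in (0.1, 0.5, 1.0, 1.5):
    print(z, rouse_profile(z, h=2.0, a=0.05, c_a=1.0, P=1.0))  # decreases upward
```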
Bed material load
Bed material load comprises the bed load and the portion of the suspended load that is sourced from the bed.
Three common bed material transport relations are the "Ackers–White", "Engelund–Hansen", and "Yang" formulae. The first is for sand to granule-size gravel, and the second and third are for sand, though Yang later expanded his formula to include fine gravel. That all of these formulae cover the sand-size range, and that two of them are exclusively for sand, reflects the fact that the sediment in sand-bed rivers is commonly moved simultaneously as bed and suspended load.
Engelund–Hansen
The bed material load formula of Engelund and Hansen is the only one to not include some kind of critical value for the initiation of sediment transport. It reads:
$$q_s^* = \frac{0.05}{c_f}\, \tau^{*\,5/2},$$
where $q_s^*$ is the Einstein nondimensionalization for sediment volumetric discharge per unit width, $c_f$ is a friction factor, and $\tau^*$ is the Shields stress. The Engelund–Hansen formula is one of the few sediment transport formulae in which a threshold "critical shear stress" is absent.
Wash load
Wash load is carried within the water column as part of the flow, and therefore moves with the mean velocity of the main stream. Wash load concentrations are approximately uniform in the water column. This is described by the endmember case in which the Rouse number is equal to 0 (i.e. the settling velocity is far less than the turbulent mixing velocity), which leads to a prediction of a perfectly uniform vertical concentration profile of material.
Total load
Some authors have attempted formulations for the total sediment load carried in water. These formulas are designed largely for sand, as (depending on flow conditions) sand often can be carried as both bed load and suspended load in the same stream or shoreface.
Bed load sediment mitigation at intake structures
Riverside intake structures used in water supply, canal diversions, and water cooling can experience entrainment of bed load (sand-size) sediments. These entrained sediments produce multiple deleterious effects such as reduction or blockage of intake capacity, feedwater pump impeller damage or vibration, and result in sediment deposition in downstream pipelines and canals. Structures that modify local near-field secondary currents are useful to mitigate these effects and limit or prevent bed load sediment entry.
See also
References
External links
Liu, Z. (2001), Sediment Transport.
Moore, A. Fluvial sediment transport lecture notes, Kent State.
Wilcock, P. Sediment Transport Seminar, January 26–28, 2004, University of California at Berkeley
Southard, J. B. (2007), Sediment Transport and Sedimentary Structures
Linwood, J. G., Suspended Sediment Concentration and Discharge in a West London River.
Fluid mechanics
Sedimentology
Environmental engineering
Hydrology
Physical geography
Geological processes
Transport phenomena | Sediment transport | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,501 | [
"Transport phenomena",
"Physical phenomena",
"Hydrology",
"Chemical engineering",
"Civil engineering",
"Environmental engineering",
"Fluid mechanics"
] |
7,999,626 | https://en.wikipedia.org/wiki/G%C3%A5rding%27s%20inequality | In mathematics, Gårding's inequality is a result that gives a lower bound for the bilinear form induced by a real linear elliptic partial differential operator. The inequality is named after Lars Gårding.
Statement of the inequality
Let $\Omega$ be a bounded, open domain in $n$-dimensional Euclidean space and let $H^k(\Omega)$ denote the Sobolev space of $k$-times weakly differentiable functions with weak derivatives in $L^2(\Omega)$. Assume that $\Omega$ satisfies the $k$-extension property, i.e., that there exists a bounded linear operator $E : H^k(\Omega) \to H^k(\mathbb{R}^n)$ such that $(Eu)|_\Omega = u$ for all $u \in H^k(\Omega)$.
Let L be a linear partial differential operator of even order 2k, written in divergence form
$$(Lu)(x) = \sum_{0 \le |\alpha|, |\beta| \le k} (-1)^{|\alpha|} \mathrm{D}^\alpha \left( A_{\alpha\beta}(x)\, \mathrm{D}^\beta u(x) \right),$$
and suppose that L is uniformly elliptic, i.e., there exists a constant θ > 0 such that
$$\sum_{|\alpha| = |\beta| = k} \xi^\alpha A_{\alpha\beta}(x)\, \xi^\beta > \theta\, |\xi|^{2k} \quad \text{for all } x \in \Omega,\ \xi \in \mathbb{R}^n \setminus \{0\}.$$
Finally, suppose that the coefficients Aαβ are bounded, continuous functions on the closure of Ω for |α| = |β| = k and that
$$A_{\alpha\beta} \in L^\infty(\Omega) \quad \text{for all } |\alpha|, |\beta| \le k.$$
Then Gårding's inequality holds: there exist constants C > 0 and G ≥ 0 such that
$$B[u, u] \ge C\, \| u \|_{H^k(\Omega)}^2 - G\, \| u \|_{L^2(\Omega)}^2 \quad \text{for all } u \in H_0^k(\Omega),$$
where
$$B[v, u] = \sum_{0 \le |\alpha|, |\beta| \le k} \int_\Omega A_{\alpha\beta}(x)\, \mathrm{D}^\alpha u(x)\, \mathrm{D}^\beta v(x)\, \mathrm{d}x$$
is the bilinear form associated to the operator L.
Application: the Laplace operator and the Poisson problem
Note that in this application Gårding's inequality is not strictly needed: the final result is also a direct consequence of the Poincaré inequality (or Friedrichs' inequality).
As a simple example, consider the Laplace operator Δ. More specifically, suppose that one wishes to solve, for f ∈ L2(Ω), the Poisson equation
$$\begin{cases} -\Delta u(x) = f(x), & x \in \Omega; \\ u(x) = 0, & x \in \partial\Omega; \end{cases}$$
where Ω is a bounded Lipschitz domain in Rn. The corresponding weak form of the problem is to find u in the Sobolev space H01(Ω) such that
$$B[u, v] = \langle f, v \rangle \quad \text{for all } v \in H_0^1(\Omega),$$
where
$$B[u, v] = \int_\Omega \nabla u(x) \cdot \nabla v(x)\, \mathrm{d}x, \qquad \langle f, v \rangle = \int_\Omega f(x)\, v(x)\, \mathrm{d}x.$$
The Lax–Milgram lemma ensures that if the bilinear form B is both continuous and elliptic with respect to the norm on H01(Ω), then, for each f ∈ L2(Ω), a unique solution u must exist in H01(Ω). The hypotheses of Gårding's inequality are easy to verify for the Laplace operator Δ, so there exist constants C > 0 and G ≥ 0 such that
$$B[u, u] \ge C\, \| u \|_{H^1(\Omega)}^2 - G\, \| u \|_{L^2(\Omega)}^2 \quad \text{for all } u \in H_0^1(\Omega).$$
Applying the Poincaré inequality allows the two terms on the right-hand side to be combined, yielding a new constant K > 0 with
$$B[u, u] \ge K\, \| u \|_{H^1(\Omega)}^2 \quad \text{for all } u \in H_0^1(\Omega),$$
which is precisely the statement that B is elliptic. The continuity of B is even easier to see: simply apply the Cauchy–Schwarz inequality and the fact that the Sobolev norm is controlled by the L2 norm of the gradient.
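A short worked sketch, consistent with the note at the top of this section, of how ellipticity of B for the Laplacian follows from the Poincaré inequality alone ($C_P$ denotes the Poincaré constant of Ω; this is an illustration, not the article's own derivation):

```latex
% For L = -\Delta, B[u,u] = \int_\Omega |\nabla u|^2 \, \mathrm{d}x, and the
% Poincare inequality gives \| u \|_{L^2(\Omega)}^2 \le C_P \| \nabla u \|_{L^2(\Omega)}^2
% for u \in H_0^1(\Omega). Hence
\begin{aligned}
\| u \|_{H^1(\Omega)}^2 &= \| u \|_{L^2(\Omega)}^2 + \| \nabla u \|_{L^2(\Omega)}^2
  \le (1 + C_P)\, \| \nabla u \|_{L^2(\Omega)}^2 = (1 + C_P)\, B[u,u], \\
B[u,u] &\ge \frac{1}{1 + C_P}\, \| u \|_{H^1(\Omega)}^2 =: K\, \| u \|_{H^1(\Omega)}^2 .
\end{aligned}
```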
References
(Theorem 9.17)
Theorems in functional analysis
Inequalities
Partial differential equations
Sobolev spaces | Gårding's inequality | [
"Mathematics"
] | 530 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Binary relations",
"Theorems in functional analysis",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems"
] |
8,000,600 | https://en.wikipedia.org/wiki/Hubbard%E2%80%93Stratonovich%20transformation | The Hubbard–Stratonovich (HS) transformation is an exact mathematical transformation invented by Russian physicist Ruslan L. Stratonovich and popularized by British physicist John Hubbard. It is used to convert a particle theory into its respective field theory by linearizing the density operator in the many-body interaction term of the Hamiltonian and introducing an auxiliary scalar field. It is defined via the integral identity
$$e^{\frac{a}{2} x^2} = \sqrt{\frac{1}{2\pi a}} \int_{-\infty}^{\infty} e^{-\frac{y^2}{2a} - x y}\, \mathrm{d}y,$$
where the real constant $a > 0$. The basic idea of the HS transformation is to reformulate a system of particles interacting through two-body potentials into a system of independent particles interacting with a fluctuating field. The procedure is widely used in polymer physics, classical particle physics, spin glass theory, and electronic structure theory.
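A one-line verification of the scalar identity above (the displayed form is the standard one assumed in the reconstruction): completing the square in the auxiliary variable y shows that the quadratic term in x is traded for a linear coupling to the fluctuating field.

```latex
\sqrt{\tfrac{1}{2\pi a}} \int_{-\infty}^{\infty} e^{-\frac{y^{2}}{2a} - x y}\, \mathrm{d}y
  = \sqrt{\tfrac{1}{2\pi a}}\, e^{\frac{a x^{2}}{2}} \int_{-\infty}^{\infty}
      e^{-\frac{(y + a x)^{2}}{2a}}\, \mathrm{d}y
  = e^{\frac{a x^{2}}{2}}, \qquad a > 0 .
```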
Calculation of resulting field theories
The resulting field theories are well-suited for the application of effective approximation techniques, like the mean field approximation. A major difficulty arising in the simulation with such field theories is their highly oscillatory nature in case of strong interactions, which leads to the well-known numerical sign problem. The problem originates from the repulsive part of the interaction potential, which implicates the introduction of the complex factor via the HS transformation.
References
Functions and mappings
Transforms | Hubbard–Stratonovich transformation | [
"Physics",
"Materials_science",
"Mathematics"
] | 243 | [
"Materials science stubs",
"Functions and mappings",
"Mathematical analysis",
"Mathematical objects",
"Mathematical relations",
"Condensed matter physics",
"Transforms",
"Condensed matter stubs"
] |
8,000,781 | https://en.wikipedia.org/wiki/Displacement%20field%20%28mechanics%29 | In mechanics, a displacement field is the assignment of displacement vectors for all points in a region or body that are displaced from one state to another. A displacement vector specifies the position of a point or a particle in reference to an origin or to a previous position. For example, a displacement field may be used to describe the effects of deformation on a solid body.
Formulation
Before considering displacement, the state before deformation must be defined. It is a state in which the coordinates of all points are known and described by the function:
$$\vec{R}_0 : \Omega \to P$$
where
$\vec{R}_0$ is a placement vector
$\Omega$ are all the points of the body
$P$ are all the points in the space in which the body is present
Most often it is a state of the body in which no forces are applied.
Then given any other state of this body in which the coordinates of all its points are described as $\vec{R}_1$, the displacement field is the difference between the two body states:
$$\vec{u} = \vec{R}_1 - \vec{R}_0$$
where $\vec{u}$ is the displacement field, which for each point of the body specifies a displacement vector.
Decomposition
The displacement of a body has two components: a rigid-body displacement and a deformation.
A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size.
Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 1).
A change in the configuration of a continuum body can be described by a displacement field. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. The distance between any two particles changes if and only if deformation has occurred. If displacement occurs without deformation, then it is a rigid-body displacement.
Displacement gradient tensor
Two types of displacement gradient tensor may be defined, following the Lagrangian and Eulerian specifications.
The displacement of particles indexed by variable $i$ may be expressed as follows. The vector joining the positions of a particle in the undeformed configuration and deformed configuration is called the displacement vector, denoted $u_i \mathbf{e}_i$ or $U_J \mathbf{E}_J$ below.
Material coordinates (Lagrangian description)
Using in place of and in place of , both of which are vectors from the origin of the coordinate system to each respective point, we have the Lagrangian description of the displacement vector:
where are the orthonormal unit vectors that define the basis of the spatial (lab frame) coordinate system.
Expressed in terms of the material coordinates, i.e. as a function of , the displacement field is:
where is the displacement vector representing rigid-body translation.
The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor . Thus we have,
where is the material deformation gradient tensor and is a rotation.
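Under the notational assumptions above (with F the material deformation gradient and I the second-order identity tensor), and for a constant rigid-body translation, differentiating u = x(X) − X gives the commonly quoted relation

```latex
\[
  \nabla_{\mathbf{X}}\mathbf{u}
    \;=\; \frac{\partial \mathbf{u}}{\partial \mathbf{X}}
    \;=\; \mathbf{F} - \mathbf{I} .
\]
```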
Spatial coordinates (Eulerian description)
In the Eulerian description, the vector extending from a particle in the undeformed configuration to its location in the deformed configuration is called the displacement vector:
where are the unit vectors that define the basis of the material (body-frame) coordinate system.
Expressed in terms of spatial coordinates, i.e. as a function of , the displacement field is:
The spatial derivative, i.e., the partial derivative of the displacement vector with respect to the spatial coordinates, yields the spatial displacement gradient tensor . Thus we have,
where is the spatial deformation gradient tensor.
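The analogous relation in the spatial description, under the same notational assumptions (with U(x) = x − X(x) and F the deformation gradient), is

```latex
\[
  \nabla_{\mathbf{x}}\mathbf{U}
    \;=\; \frac{\partial \mathbf{U}}{\partial \mathbf{x}}
    \;=\; \mathbf{I} - \mathbf{F}^{-1} .
\]
```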
Relationship between the material and spatial coordinate systems
are the direction cosines between the material and spatial coordinate systems with unit vectors and , respectively. Thus
The relationship between and is then given by
Knowing that
then
Combining the coordinate systems of deformed and undeformed configurations
It is common to superimpose the coordinate systems for the deformed and undeformed configurations, which results in , and the direction cosines become Kronecker deltas, i.e.,
Thus in material (undeformed) coordinates, the displacement may be expressed as:
And in spatial (deformed) coordinates, the displacement may be expressed as:
See also
Stress
Strain
References
Continuum mechanics
Materials science
Vector physical quantities | Displacement field (mechanics) | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 831 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Classical mechanics",
"Materials science",
"Vector physical quantities",
"nan"
] |
8,001,517 | https://en.wikipedia.org/wiki/Yield%20surface | A yield surface is a five-dimensional surface in the six-dimensional space of stresses. The yield surface is usually convex, and the state of stress inside the yield surface is elastic. When the stress state lies on the surface the material is said to have reached its yield point and the material is said to have become plastic. Further deformation of the material causes the stress state to remain on the yield surface, even though the shape and size of the surface may change as the plastic deformation evolves. This is because stress states that lie outside the yield surface are non-permissible in rate-independent plasticity, though not in some models of viscoplasticity.
The yield surface is usually expressed in terms of (and visualized in) a three-dimensional principal stress space (), a two- or three-dimensional space spanned by stress invariants () or a version of the three-dimensional Haigh–Westergaard stress space. Thus we may write the equation of the yield surface (that is, the yield function) in the forms:
where are the principal stresses.
where is the first principal invariant of the Cauchy stress and are the second and third principal invariants of the deviatoric part of the Cauchy stress.
where are scaled versions of and and is a function of .
where are scaled versions of and , and is the stress angle or Lode angle
Invariants used to describe yield surfaces
The first principal invariant () of the Cauchy stress (), and the second and third principal invariants () of the deviatoric part () of the Cauchy stress are defined as:
where () are the principal values of , () are the principal values of , and
where is the identity matrix.
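For concreteness, the standard definitions of these invariants in terms of the Cauchy stress tensor and its deviator (notation assumed here) are usually written as

```latex
\[
  I_1 = \operatorname{tr}(\boldsymbol{\sigma}), \qquad
  \boldsymbol{s} = \boldsymbol{\sigma} - \tfrac{1}{3} I_1 \,\boldsymbol{I}, \qquad
  J_2 = \tfrac{1}{2}\operatorname{tr}\!\left(\boldsymbol{s}^{2}\right), \qquad
  J_3 = \det(\boldsymbol{s}) .
\]
```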
A related set of quantities, (), are usually used to describe yield surfaces for cohesive frictional materials such as rocks, soils, and ceramics. These are defined as
where is the equivalent stress. However, the possibility of negative values of and the resulting imaginary makes the use of these quantities problematic in practice.
Another related set of widely used invariants is () which describe a cylindrical coordinate system (the Haigh–Westergaard coordinates). These are defined as:
The plane is also called the Rendulic plane. The angle is called the stress angle, the value is sometimes called the Lode parameter, and the relation between and was first given by V. V. Novozhilov in 1951.
The principal stresses and the Haigh–Westergaard coordinates are related by
A different definition of the Lode angle can also be found in the literature:
in which case the ordered principal stresses (where ) are related by
Examples of yield surfaces
There are several different yield surfaces known in engineering, and those most popular are listed below.
Tresca yield surface
The Tresca yield criterion is taken to be the work of Henri Tresca. It is also known as the maximum shear stress theory (MSST) and the Tresca–Guest (TG) criterion. In terms of the principal stresses the Tresca criterion is expressed as
where is the yield strength in shear, and is the tensile yield strength.
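In explicit form (with σ1, σ2, σ3 the principal stresses, σ_y the tensile yield strength and k the shear yield strength; notation assumed here), the criterion is commonly written as

```latex
\[
  \max\bigl(\,|\sigma_1-\sigma_2|,\;|\sigma_2-\sigma_3|,\;|\sigma_3-\sigma_1|\,\bigr)
    \;=\; \sigma_y \;=\; 2k .
\]
```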
Figure 1 shows the Tresca–Guest yield surface in the three-dimensional space of principal stresses. It is a six-sided prism of infinite length. This means that the material remains elastic when all three principal stresses are roughly equivalent (a hydrostatic pressure), no matter how much it is compressed or stretched. However, when one of the principal stresses becomes smaller (or larger) than the others the material is subject to shearing. In such situations, if the shear stress reaches the yield limit then the material enters the plastic domain. Figure 2 shows the Tresca–Guest yield surface in two-dimensional stress space; it is a cross section of the prism along the plane.
von Mises yield surface
The von Mises yield criterion is expressed in the principal stresses as
where is the yield strength in uniaxial tension.
Figure 3 shows the von Mises yield surface in the three-dimensional space of principal stresses. It is a circular cylinder of infinite length with its axis inclined at equal angles to the three principal stresses. Figure 4 shows the von Mises yield surface in two-dimensional space compared with Tresca–Guest criterion. A cross section of the von Mises cylinder on the plane of produces the elliptical shape of the yield surface.
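As an illustration only (not part of the original article), the following NumPy sketch evaluates the von Mises and Tresca equivalent stresses for a given Cauchy stress tensor; both reduce to the applied stress for uniaxial tension.

```python
import numpy as np

def principal_stresses(sigma):
    """Principal stresses (eigenvalues, descending) of a symmetric 3x3 Cauchy stress tensor."""
    return np.sort(np.linalg.eigvalsh(sigma))[::-1]

def von_mises(sigma):
    """Equivalent (von Mises) stress, sqrt(3 J2), from the deviatoric part of the stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric stress
    return np.sqrt(1.5 * np.tensordot(s, s))        # 1.5 * (s : s) = 3 J2

def tresca(sigma):
    """Tresca equivalent stress: largest difference of principal stresses."""
    p = principal_stresses(sigma)
    return p[0] - p[2]

# Example: uniaxial tension of 200 MPa -- both measures give 200 MPa.
sigma = np.diag([200.0, 0.0, 0.0])
print(von_mises(sigma), tresca(sigma))
```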
Burzyński-Yagn criterion
This criterion, reformulated as a function of the hydrostatic nodes with the coordinates and ,
represents the general equation of a second-order surface of revolution about the hydrostatic axis. Some special cases are:
cylinder (Maxwell (1865), Huber (1904), von Mises (1913), Hencky (1924)),
cone (Botkin (1940), Drucker-Prager (1952), Mirolyubov (1953)),
paraboloid (Burzyński (1928), Balandin (1937), Torre (1947)),
ellipsoid centered on the symmetry plane , (Beltrami (1885)),
ellipsoid centered on the symmetry plane with (Schleicher (1926)),
hyperboloid of two sheets (Burzynski (1928), Yagn (1931)),
hyperboloid of one sheet centered on the symmetry plane , , (Kuhn (1980))
hyperboloid of one sheet , (Filonenko-Boroditsch (1960), Gol’denblat-Kopnov (1968), Filin (1975)).
The relations compression-tension and torsion-tension can be computed to
The Poisson's ratios at tension and compression are obtained using
For ductile materials the restriction
is important. The application of rotationally symmetric criteria for brittle failure with
has not been studied sufficiently.
The Burzyński-Yagn criterion is well suited for academic purposes. For practical applications, the third invariant of the deviator in the odd and even power should be introduced in the equation, e.g.:
Huber criterion
The Huber criterion consists of the Beltrami ellipsoid and a scaled von Mises cylinder in the principal stress space.
with . The transition between the surfaces in the cross section is continuously differentiable.
The criterion represents the "classical view" with respect to inelastic material behavior:
pressure-sensitive material behavior for with and
pressure-insensitive material behavior for with
The Huber criterion can be used as a yield surface with an empirical restriction for Poisson's ratio at tension , which leads to .
The modified Huber criterion
consists of the Schleicher ellipsoid with the restriction of Poisson's ratio at compression
and a cylinder with the -transition in the cross section .
The second setting for the parameters and follows with the compression / tension relation
The modified Huber criterion can be fitted to the measured data better than the Huber criterion. For setting it follows and .
The Huber criterion and the modified Huber criterion should be preferred to the von Mises criterion since one obtains safer results in the region .
For practical applications the third invariant of the deviator should be considered in these criteria.
Mohr–Coulomb yield surface
The Mohr–Coulomb yield (failure) criterion is similar to the Tresca criterion, with additional provisions for materials with different tensile and compressive yield strengths. This model is often used to model concrete, soil or granular materials. The Mohr–Coulomb yield criterion may be expressed as:
where
and the parameters and are the yield (failure) stresses of the material in uniaxial compression and tension, respectively. The formula reduces to the Tresca criterion if .
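For ordered principal stresses σ1 ≥ σ2 ≥ σ3 (tension taken positive), one common explicit form of the criterion, in the notation assumed here, is

```latex
\[
  \frac{\sigma_1}{\sigma_t} \;-\; \frac{\sigma_3}{\sigma_c} \;=\; 1 ,
\]
```

which recovers the Tresca criterion when the uniaxial tensile and compressive yield stresses coincide, σ_t = σ_c = σ_y.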
Figure 5 shows the Mohr–Coulomb yield surface in the three-dimensional space of principal stresses. It is a conical prism, and determines the inclination angle of the conical surface. Figure 6 shows the Mohr–Coulomb yield surface in two-dimensional stress space; it is a cross section of this conical prism on the plane of . In Figure 6, Rr and Rc are used for Syc and Syt, respectively, in the formula.
Drucker–Prager yield surface
The Drucker–Prager yield criterion is similar to the von Mises yield criterion, with provisions for handling materials with differing tensile and compressive yield strengths. This criterion is most often used for concrete where both normal and shear stresses can determine failure. The Drucker–Prager yield criterion may be expressed as
where
and , are the uniaxial yield stresses in compression and tension respectively. The formula reduces to the von Mises equation if .
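One common explicit form, with tension taken positive and σ_c, σ_t the uniaxial compressive and tensile yield stresses (notation assumed here), is

```latex
\[
  \sqrt{J_2} \;=\; A + B\,I_1 ,
  \qquad
  A = \frac{2\,\sigma_c\,\sigma_t}{\sqrt{3}\,(\sigma_c+\sigma_t)} ,
  \qquad
  B = \frac{\sigma_t-\sigma_c}{\sqrt{3}\,(\sigma_c+\sigma_t)} ,
\]
```

which reduces to the von Mises form, sqrt(3 J2) = σ_y, when σ_c = σ_t = σ_y.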
Figure 7 shows the Drucker–Prager yield surface in the three-dimensional space of principal stresses. It is a regular cone. Figure 8 shows the Drucker–Prager yield surface in two-dimensional space. The elliptical elastic domain is a cross section of the cone on the plane of ; it can be chosen to intersect the Mohr–Coulomb yield surface at a different number of vertices. One choice is to intersect the Mohr–Coulomb yield surface at three vertices on either side of the line, usually selected by convention to be those in the compression regime. Another choice is to intersect the Mohr–Coulomb yield surface at four vertices on both axes (uniaxial fit) or at two vertices on the diagonal (biaxial fit). The Drucker–Prager yield criterion is also commonly expressed in terms of the material cohesion and friction angle.
Bresler–Pister yield surface
The Bresler–Pister yield criterion is an extension of the Drucker–Prager yield criterion that uses three parameters, and has additional terms for materials that yield under hydrostatic compression.
In terms of the principal stresses, this yield criterion may be expressed as
where are material constants. The additional parameter gives the yield surface an ellipsoidal cross section when viewed from a direction perpendicular to its axis. If is the yield stress in uniaxial compression, is the yield stress in uniaxial tension, and is the yield stress in biaxial compression, the parameters can be expressed as
Willam–Warnke yield surface
The Willam–Warnke yield criterion is a three-parameter smoothed version of the Mohr–Coulomb yield criterion that has similarities in form to the Drucker–Prager and Bresler–Pister yield criteria.
The yield criterion has the functional form
However, it is more commonly expressed in Haigh–Westergaard coordinates as
The cross-section of the surface when viewed along its axis is a smoothed triangle (unlike Mohr–Coulomb). The Willam–Warnke yield surface is convex and has unique and well defined first and second derivatives on every point of its surface. Therefore, the Willam–Warnke model is computationally robust and has been used for a variety of cohesive-frictional materials.
Podgórski and Rosendahl trigonometric yield surfaces
Normalized with respect to the uniaxial tensile stress , the Podgórski criterion as function of the stress angle reads
with the shape function of trigonal symmetry in the -plane
It contains the criteria of von Mises (circle in the -plane, , ), Tresca (regular hexagon, , ), Mariotte (regular triangle, , ), Ivlev (regular triangle, , ) and also the cubic criterion of Sayir (the Ottosen criterion ) with and the isotoxal (equilateral) hexagons of the Capurso criterion with . The von Mises – Tresca transition follows with , . The isogonal (equiangular) hexagons of the Haythornthwaite criterion containing the Schmidt-Ishlinsky criterion (regular hexagon) cannot be described with the Podgórski criterion.
The Rosendahl criterion reads
with the shape function of hexagonal symmetry in the -plane
It contains the criteria of von Mises (circle, , ), Tresca (regular hexagon, , ), Schmidt-Ishlinsky (regular hexagon, , ), Sokolovsky (regular dodecagon, , ), and also the bicubic criterion with or equally with and the isotoxal dodecagons of the unified yield criterion of Yu with . The isogonal dodecagons of the multiplicative ansatz criterion of hexagonal symmetry containing the Ishlinsky-Ivlev criterion (regular dodecagon) cannot be described by the Rosendahl criterion.
The criteria of Podgórski and Rosendahl describe single surfaces in principal stress space without any additional outer contours and plane intersections. Note that in order to avoid numerical issues the real part function can be introduced to the shape function: and . The generalization in the form is relevant for theoretical investigations.
A pressure-sensitive extension of the criteria can be obtained with the linear -substitution
which is sufficient for many applications, e.g. metals, cast iron, alloys, concrete, unreinforced polymers, etc.
Bigoni–Piccolroaz yield surface
The Bigoni–Piccolroaz yield criterion is a seven-parameter surface defined by
where is the "meridian" function
describing the pressure-sensitivity and is the "deviatoric" function
describing the Lode-dependence of yielding. The seven, non-negative material parameters:
define the shape of the meridian and deviatoric sections.
This criterion represents a smooth and convex surface, which is closed both in hydrostatic tension and compression and has a
drop-like shape, particularly suited to describe frictional and granular materials. This criterion has also been generalized to the case of surfaces with corners.
Cosine Ansatz (Altenbach-Bolchoun-Kolupaev)
For the formulation of the strength criteria the stress angle
can be used.
The following criterion of isotropic material behavior
contains a number of other well-known less general criteria, provided suitable parameter values are chosen.
Parameters and describe the geometry of the surface in the -plane. They are subject to the constraints
which follow from the convexity condition. A more precise formulation of the third constraint has been proposed in the literature.
Parameters and describe the position of the intersection points of the yield surface with hydrostatic axis (space diagonal in the principal stress space). These intersections points are called hydrostatic nodes.
In the case of materials which do not fail at hydrostatic pressure (steel, brass, etc.) one gets . Otherwise for materials which fail at hydrostatic pressure (hard foams, ceramics, sintered materials, etc.) it follows .
The integer powers and , describe the curvature of the meridian. The meridian with is a straight line and with – a parabola.
Barlat's Yield Surface
For anisotropic materials, the mechanical properties vary with the direction of the forming process (e.g., rolling), and therefore an anisotropic yield function is needed. Since 1989 Frederic Barlat has developed a family of yield functions for constitutive modelling of plastic anisotropy. Among them, the Yld2000-2D yield criterion has been applied to a wide range of sheet metals (e.g., aluminum alloys and advanced high-strength steels). The Yld2000-2D model is a non-quadratic yield function based on two linear transformations of the stress tensor:
:
where is the effective stress, and and are the transformed matrices (obtained by the linear transformations C or L):
where s is the deviatoric stress tensor.
For the principal values of X′ and X″, the model can be expressed as:
and:
where are the eight parameters of Barlat's Yld2000-2D model, to be identified from a set of experiments.
See also
Yield (engineering)
Plasticity (physics)
Stress
Henri Tresca
von Mises stress
Mohr–Coulomb theory
Hill yield criterion
Hosford yield criterion
Strain
Strain tensor
Stress–energy tensor
Stress concentration
3-D elasticity
Frederic Barlat
References
Plasticity (physics)
Solid mechanics
Continuum mechanics
Materials science
Structural analysis | Yield surface | [
"Physics",
"Materials_science",
"Engineering"
] | 3,332 | [
"Structural engineering",
"Solid mechanics",
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Deformation (mechanics)",
"Aerospace engineering",
"Structural analysis",
"Materials science",
"Classical mechanics",
"Plasticity (physics)",
"Mechanics",
"nan",
"Mechanical engineer... |
10,330,610 | https://en.wikipedia.org/wiki/Crystal%20base | A crystal base for a representation of a quantum group on a -vector space
is not a base of that vector space but rather a -base of where is a -lattice in that vector space. Crystal bases appeared in the work of and also in the work of . They can be viewed as specializations as of the canonical basis defined by .
Definition
As a consequence of its defining relations, the quantum group can be regarded as a Hopf algebra over the field of all rational functions of an indeterminate q over , denoted .
For simple root and non-negative integer , define
In an integrable module , and for weight , a vector (i.e. a vector in with weight ) can be uniquely decomposed into the sums
where , , only if , and only if .
Linear mappings can be defined on by
Let be the integral domain of all rational functions in which are regular at (i.e. a rational function is an element of if and only if there exist polynomials and in the polynomial ring such that , and ).
A crystal base for is an ordered pair , such that
is a free -submodule of such that
is a -basis of the vector space over
and , where and
and
and
To put this into a more informal setting, the actions of and are generally singular at on an integrable module . The linear mappings and on the module are introduced so that the actions of and are regular at on the module. There exists a -basis of weight vectors for , with respect to which the actions of and are regular at for all i. The module is then restricted to the free -module generated by the basis, and the basis vectors, the -submodule and the actions of and are evaluated at . Furthermore, the basis can be chosen such that at , for all , and are represented by mutual transposes, and map basis vectors to basis vectors or 0.
A crystal base can be represented by a directed graph with labelled edges. Each vertex of the graph represents an element of the -basis of , and a directed edge, labelled by i, and directed from vertex to vertex , represents that (and, equivalently, that ), where is the basis element represented by , and is the basis element represented by . The graph completely determines the actions of and at . If an integrable module has a crystal base, then the module is irreducible if and only if the graph representing the crystal base is connected (a graph is called "connected" if the set of vertices cannot be partitioned into the union of nontrivial disjoint subsets and such that there are no edges joining any vertex in to any vertex in ).
For any integrable module with a crystal base, the weight spectrum for the crystal base is the same as the weight spectrum for the module, and therefore the weight spectrum for the crystal base is the same as the weight spectrum for the corresponding module of the appropriate Kac–Moody algebra. The multiplicities of the weights in the crystal base are also the same as their multiplicities in the corresponding module of the appropriate Kac–Moody algebra.
It is a theorem of Kashiwara that every integrable highest weight module has a crystal base. Similarly, every integrable lowest weight module has a crystal base.
Tensor products of crystal bases
Let be an integrable module with crystal base and be an integrable module with crystal base . For crystal bases, the coproduct , given by
is adopted. The integrable module has crystal base , where . For a basis vector , define
The actions of and on are given by
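In one common convention (Kashiwara's; the convention is not fixed by the text above), writing ε_i(b) = max{n : ẽ_i^n b ≠ 0} and φ_i(b) = max{n : f̃_i^n b ≠ 0}, the actions on a tensor product of crystals read

```latex
\[
  \tilde e_i (b_1 \otimes b_2) =
  \begin{cases}
    \tilde e_i b_1 \otimes b_2 , & \varphi_i(b_1) \ge \varepsilon_i(b_2) ,\\
    b_1 \otimes \tilde e_i b_2 , & \varphi_i(b_1) <   \varepsilon_i(b_2) ,
  \end{cases}
  \qquad
  \tilde f_i (b_1 \otimes b_2) =
  \begin{cases}
    \tilde f_i b_1 \otimes b_2 , & \varphi_i(b_1) >   \varepsilon_i(b_2) ,\\
    b_1 \otimes \tilde f_i b_2 , & \varphi_i(b_1) \le \varepsilon_i(b_2) ,
  \end{cases}
\]
```

with the understanding that any term involving the zero vector is set to 0.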
The decomposition of the tensor product of two integrable highest weight modules into irreducible submodules is determined by the decomposition of the graph of the crystal base into its connected components (i.e. the highest weights of the submodules are determined, and the multiplicity of each highest weight is determined).
References
External links
Lie algebras
Representation theory
Quantum groups | Crystal base | [
"Mathematics"
] | 816 | [
"Representation theory",
"Fields of abstract algebra"
] |
10,334,720 | https://en.wikipedia.org/wiki/Network%20for%20Earthquake%20Engineering%20Simulation | The George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) was created by the National Science Foundation (NSF) to improve infrastructure design and construction practices to prevent or minimize damage during an earthquake or tsunami. Its headquarters were at Purdue University in West Lafayette, Indiana, as part of cooperative agreement #CMMI-0927178, and it ran from 2009 until 2014. The mission of NEES was to accelerate improvements in seismic design and performance by serving as a collaboratory for discovery and innovation.
Description
The NEES network features 14 geographically distributed, shared-use laboratories that support several types of experimental work: geotechnical centrifuge research, shake table tests, large-scale structural testing, tsunami wave basin experiments, and field site research. Participating universities include: Cornell University; Lehigh University; Oregon State University; Rensselaer Polytechnic Institute; University at Buffalo, SUNY; University of California, Berkeley; University of California, Davis; University of California, Los Angeles; University of California, San Diego; University of California, Santa Barbara; University of Illinois at Urbana-Champaign; University of Minnesota; University of Nevada, Reno; and the University of Texas, Austin.
The equipment sites (labs) and a central data repository are connected to the global earthquake engineering community via the NEEShub, which is powered by the HUBzero software developed at Purdue University specifically to help the scientific community share resources and collaborate. The cyberinfrastructure, connected via Internet2, provides interactive simulation tools, a simulation tool development area, a curated central data repository, user-developed databases, animated presentations, user support, telepresence, mechanism for uploading and sharing resources and statistics about users, and usage patterns.
This allows researchers to: securely store, organize and share data within a standardized framework in a central location, remotely observe and participate in experiments through the use of synchronized real-time data and video, collaborate with colleagues to facilitate the planning, performance, analysis, and publication of research experiments and conduct computational and hybrid simulations that may combine the results of multiple distributed experiments and link physical experiments with computer simulations to enable the investigation of overall system performance. The cyberinfrastructure supports analytical simulations using the OpenSees software.
These resources jointly provide the means for collaboration and discovery to improve the seismic design and performance of civil and mechanical infrastructure systems.
Cyberinfrastructure
Cyberinfrastructure is an infrastructure based on computer networks and application-specific software, tools, and data repositories that support research in a particular discipline. The term "cyberinfrastructure" was coined by the National Science Foundation.
Projects
NEES Research covers a wide range of topics including performance of existing and new construction, energy dissipation and base isolation systems, innovative materials, lifeline systems such as pipelines, piping, and bridges, and nonstructural systems such as ceilings and cladding. Researchers are also investigating soil remediation technologies for liquefiable soils, and collecting information about tsunami impacts and building performance after recent earthquakes. The permanently instrumented field sites operated by NEES@UCSB support field observations of ground motions, ground deformations, pore pressure response, and soil-foundation-structure interaction.
The NEESwood project investigated the design of low and mid-rise wood-frame construction in seismic regions. The NEES@UCLA mobile field laboratory, consisting of large mobile shakers, field-deployable monitoring instrumentation systems, was utilized to collect forced and ambient vibration data from a four-story reinforced concrete (RC) building damaged in the 1994 Northridge earthquake. Shake table tests on pipe systems anchored in a full-scale, seven-story building performed on the Large High-Performance Outdoor Shake Table at NEES@UCSD investigated seismic design methods for anchors fastening nonstructural components.
Education, outreach, and training
The NEES collaboratory includes educational programs to meet learning goals and technology transfer for various stakeholders. Programs include a geographically distributed Research Experience for Undergraduates (REU) program, museum exhibits, an ambassador program, curriculum modules, and a Research to Practice webinar series aimed at informing practicing engineers of the outcomes of NEES research.
Companion cyberinfrastructure provides a framework for helping educators to enrich their curriculum with these resources. NEESacademy, a portal within NEEShub, is designed to support effective organization, assessment, implementation, and dissemination of learning experiences related to earthquake science and engineering. One source of content is the education and outreach products developed by NEES researchers, but anyone can contribute resources.
Soil liquefaction research
The George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) hosts two geotechnical centrifuges for studying soil behavior. The NEES centrifuge at the University of California, Davis has a radius of 9.1 m (to the bucket floor), a maximum payload mass of 4500 kg, and an available bucket area of 4.0 m2. The centrifuge is capable of producing 75 g of centrifugal acceleration at its effective radius of 8.5 m. The centrifuge capacity in terms of the maximum acceleration multiplied by the maximum payload is 53 g x 4500 kg = 240 g-tonnes. The NEES centrifuge at the Center for Earthquake Engineering Simulation (CEES) at Rensselaer Polytechnic Institute has a nominal radius of 2.7 m, which is the distance between the center of the payload and the centrifuge axis. The space available for the payload is a depth of 1,000 mm, width of 1,000 mm, height of 800 mm, and a maximum height of 1,200 mm. The performance envelope is 160 g, 1.5 tons, and 150 g-tons (product of payload weight times g).
References
External links
NEES Official Website
NEEScomm IT informational webpage
NEES YouTube Channel
National Science Foundation
MUST-SIM University of Illinois
NEES at Cornell University
NEES at Lehigh University
NEES at Oregon State University
NEES at Rensselaer Polytechnic Institute
NEES at University at Buffalo, SUNY
NEES at NEES University of California, Berkeley
NEES at University of California, Davis
NEES at University of California, Los Angeles
NEES at University of California, San Diego
NEES at University of California, Santa Barbara
NEES at University of Minnesota
NEES at University of Nevada, Reno
NEES at University of Texas, Austin
E-Science
Cyberinfrastructure
Earthquake engineering | Network for Earthquake Engineering Simulation | [
"Technology",
"Engineering"
] | 1,331 | [
"Information and communications technology",
"Structural engineering",
"IT infrastructure",
"Cyberinfrastructure",
"Civil engineering",
"Earthquake engineering"
] |
10,346,175 | https://en.wikipedia.org/wiki/MEMO%20model%20%28wind-flow%20simulation%29 | The MEMO model (version 6.2) is a Eulerian non-hydrostatic prognostic mesoscale model for wind-flow simulation. It was developed by the Aristotle University of Thessaloniki in collaboration with the Universität Karlsruhe. The MEMO Model together with the photochemical dispersion model MARS are the two core models of the European zooming model (EZM). This model belongs to the family of models designed for describing atmospheric transport phenomena in the local-to-regional scale, frequently referred to as mesoscale air pollution models.
History
Initially, EZM was developed for modelling the transport and chemical transformation of pollutants in selected European regions in the frame of the EUROTRAC sub-project EUMAC and therefore it was formerly called the EUMAC Zooming Model (EUROTRAC, 1992). EZM has evolved to be one of the most frequently applied mesoscale air pollution model systems in Europe. It has already been successfully applied for various European airsheds including the Upper Rhine valley and the areas of Basel, Graz, Barcelona, Lisbon, Madrid, Milano, London, Cologne, Lyon, The Hague, Athens (Moussiopoulos, 1994; Moussiopoulos, 1995) and Thessaloniki. More details are to be found elsewhere (Moussiopoulos 1989), (Flassak 1990), (Moussiopoulos et al. 1993).
Model equations
The prognostic mesoscale model MEMO describes the dynamics of the atmospheric boundary layer. In the present model version, air is assumed to be unsaturated. The model solves the continuity equation, the momentum equations and several transport equations for scalars (including the thermal energy equation and, as options, transport equations for water vapour, the turbulent kinetic energy and pollutant concentrations).
Transformation to terrain-following coordinates
The lower boundary of the model domain coincides with the ground. Because of the inhomogeneity of the terrain, it is not possible to impose boundary conditions at that boundary with respect to Cartesian coordinates. Therefore, a transformation of the vertical coordinate to a terrain-following one is performed. Hence, the originally irregularly bounded physical domain is mapped onto one consisting of unit cubes.
Numerical solution of the equation system
The discretized equations are solved numerically on a staggered grid, i.e. the scalar quantities , and are defined at the cell centre while the velocity components , and are defined at the centre of the appropriate interfaces.
Temporal discretization of the prognostic equations is based on the explicit second order Adams-Bashforth scheme. There are two deviations from the Adams-Bashforth scheme: The first refers to the implicit treatment of the nonhydrostatic part of the mesoscale pressure perturbation . To ensure non-divergence of the flow field, an elliptic equation is solved. The elliptic equation is derived from the continuity equation wherein velocity components are expressed in terms of . Since the elliptic equation is derived from the discrete form of the continuity equation and the discrete form of the pressure gradient, conservativity is guaranteed (Flassak and Moussiopoulos, 1988). The discrete pressure equation is solved numerically with a fast elliptic solver in conjunction with a generalized conjugate gradient method. The fast elliptic solver is based on fast Fourier analysis in both horizontal directions and Gaussian elimination in the vertical direction (Moussiopoulos and Flassak, 1989).
The second deviation from the explicit treatment is related to the turbulent diffusion in vertical direction. In case of an explicit treatment of this term, the stability requirement may necessitate an unacceptable abridgement of the time increment. To avoid this, vertical turbulent diffusion is treated using the second order Crank–Nicolson method.
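As a purely illustrative sketch (not the MEMO code, and with simplified zero-flux boundaries), the Crank–Nicolson treatment of a 1D vertical diffusion equation can be written as follows:

```python
import numpy as np

def crank_nicolson_diffusion(c, K, dz, dt, nsteps):
    """Advance dc/dt = K * d2c/dz2 with the Crank-Nicolson scheme
    (zero-flux boundaries). Illustrative only; the actual MEMO
    discretization and boundary treatment differ."""
    n = len(c)
    r = K * dt / (2.0 * dz**2)
    # Discrete 1D Laplacian L with Neumann (zero-flux) boundaries.
    L = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            L[i, i - 1] = 1.0
            L[i, i] -= 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
            L[i, i] -= 1.0
    A = np.eye(n) - r * L   # implicit half-step
    B = np.eye(n) + r * L   # explicit half-step
    for _ in range(nsteps):
        c = np.linalg.solve(A, B @ c)
    return c

# Example: diffuse an initial spike; the total amount is conserved.
profile = np.zeros(50)
profile[25] = 1.0
print(crank_nicolson_diffusion(profile, K=1.0, dz=10.0, dt=5.0, nsteps=100).sum())
```

The scheme is unconditionally stable, which is exactly the property that motivates its use for the vertical diffusion term.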
In principle, advective terms can be computed using any suitable advection scheme. In the present version of MEMO, a 3D second-order total-variation-diminishing (TVD) scheme is implemented which is based on the 1D scheme proposed by Harten (1986). It achieves a fair (though not complete) reduction of numerical diffusion, the solution being independent of the magnitude of the scalar (preserving transportivity).
Parameterizations
Turbulence and radiative transfer are the most important physical processes that have to be parameterized in a prognostic mesoscale model. In the MEMO model, radiative transfer is calculated with an efficient scheme based on the emissivity method for longwave radiation and an implicit multilayer method for shortwave radiation (Moussiopoulos 1987).
The diffusion terms may be represented as the divergence of the corresponding fluxes. For the turbulence parameterization, K-theory is applied. In MEMO, turbulence can be treated with either a zero-, one- or two-equation turbulence model. For most applications a one-equation model is used, in which a conservation equation for the turbulent kinetic energy is solved.
Initial and boundary conditions
In MEMO, initialization is performed with suitable diagnostic methods: a mass-consistent initial wind field is formulated using an objective analysis model and scalar fields are initialized using appropriate interpolating techniques (Kunz, R., 1991). Data needed to apply the diagnostic methods may be derived either from observations or from larger scale simulations.
Suitable boundary conditions have to be imposed for the wind velocity components , and , the potential temperature and pressure at all boundaries. At open boundaries, wave reflection and deformation may be minimized by the use of so-called radiation conditions (Orlanski 1976).
According to the experience gained so far with the model MEMO, neglecting large scale environmental information might result in instabilities in case of simulations over longer time periods.
For the nonhydrostatic part of the mesoscale pressure perturbation, homogeneous Neumann boundary conditions are used at lateral boundaries. With these conditions, the wind velocity component perpendicular to the boundary remains unaffected by the pressure change.
At the upper boundary, Neumann boundary conditions are imposed for the horizontal velocity components and the potential temperature. To ensure non-reflectivity, a radiative condition is used for the hydrostatic part of the mesoscale pressure perturbation
at that boundary. Hence, vertically propagating internal gravity waves are allowed to leave the computational domain (Klemp and Durran 1983). For the nonhydrostatic part of the mesoscale pressure perturbation, homogeneous staggered Dirichlet conditions are imposed. Being justified by the fact that nonhydrostatic effects are negligible at large heights, this condition is necessary, if singularity of the elliptic pressure equation is to be avoided in view of the Neumann boundary conditions at all other boundaries.
The lower boundary coincides with the ground (or, more precisely, a height above ground corresponding to its aerodynamic roughness). For the non-hydrostatic part of the mesoscale pressure perturbation, inhomogeneous Neumann conditions are imposed at that boundary. All other conditions at the lower boundary follow from the assumption that the Monin–Obukhov similarity theory is valid.
A one-way interactive nesting facility is available within MEMO. Thus, successive simulations on grids of increasing resolution are possible. During these simulations, the results of the application to a coarse grid are used as boundary conditions for the application to the finer grid (Kunz and Moussiopoulos, 1995).
Grid definition
The governing equations are solved numerically on a staggered grid. Scalar quantities such as the temperature, pressure, density and also the cell volume are defined at the centre of a grid cell, and the velocity components , and at the centre of the appropriate interface. Turbulent fluxes are defined at different locations: shear fluxes are defined at the centre of the appropriate edges of a grid cell and normal stress fluxes at scalar points. With this definition, the outgoing fluxes of momentum, mass, heat and also the turbulent fluxes of a grid cell are identical to the incoming fluxes of the adjacent grid cell. So the numerical method is conservative.
Topography and surface type
For calculations with MEMO, a file must be provided which contains the orography height and surface type for each grid location. The following surface types are distinguished and must be stored as percentages:
water (type: 1)
arid land (type: 2)
few vegetation (type: 3)
farmland (type: 4)
forest (type: 5)
suburban area (type: 6)
urban area (type: 7)
Only surface types 1–6 have to be stored. Type 7 is the difference between 100% and the sum of types 1–6. If the percentage of a surface type is 100%, then write the number 10 and for all other surface types the number 99.
The orography height is the mean height for each grid location above sea level in meter.
Meteorological data
The prognostic model MEMO is a set of partial differential equations in three spatial directions and in time. To solve these equations, information about the initial state in the whole domain and about the development of all relevant quantities at the lateral boundaries is required.
Initial state
To generate an initial state for the prognostic model, a diagnostic model (Kunz, R., 1991) is applied using measured temperature and wind data. Both data can be provided as:
surface measurements, i.e. single measurements directly above the surface (not necessary)
upper air soundings (i.e., soundings that consist of two or more measurements at different heights) at a constant geographical location; at least one sounding for temperature and wind velocity is required.
Time-dependent boundary conditions
Information about quantities at the lateral boundaries can be taken into account as surface measurements and upper air soundings. Therefore, a key word and the time when boundary data is given must occur in front of a set of boundary information.
Nesting facility
In MEMO, a one-way interactive nesting scheme is implemented. With this nesting scheme a coarse grid and a fine grid simulation can be nested. During the coarse grid simulation, data is interpolated and written to a file. A consecutive fine grid simulation uses this data as lateral boundary values.
See also
Bibliography of atmospheric dispersion modeling
Atmospheric dispersion modeling
List of atmospheric dispersion models
Air pollution dispersion terminology
Useful conversions and formulas for air dispersion modeling
References
EUROTRAC (1992), Annual Report 1991, Part 5.
Flassak, Th. and Moussiopoulos, N. (1988), Direct solution of the Helmholtz equation using Fourier analysis on the CYBER 205, Environmental Software 3, 12–16.
Harten, A. (1986), On a large time-step high resolution scheme, Math. Comp. 46, 379–399.
Klemp, J.B. and Durran, D.R. (1983), An upper boundary condition permitting internal gravity wave radiation in numerical mesoscale models, Mon. Weather Rev.111, 430–444.
Kunz, R. (1991), Entwicklung eines diagnostischen Windmodells zur Berechnung des Anfangszustandes für das dynamische Grenzschichtmodell MEMO, Diplomarbeit, Universität Karlsruhe.
Kunz R. and Moussiopoulos N. (1995), Simulation of the wind field in Athens using refined boundary conditions, Atmos. Environ. 29, 3575–3591.
Moussiopoulos, N. (1987), An efficient scheme to calculate radiative transfer in mesoscale models, Environmental Software 2, 172–191.
Moussiopoulos, N. (1989), Mathematische Modellierung mesoskaliger Ausbreitung in der Atmosphäre, Fortschr.-Ber. VDI, Reihe 15, Nr. 64, pp. 307.
Moussiopoulos N., ed. (1994), The EUMAC Zooming Model (EZM): Model Structure and Applications, EUROTRAC Report, 266 pp.
Moussiopoulos N. (1995), The EUMAC Zooming Model, a tool for local-to-regional air quality studies, Meteorol. Atmos. Phys. 57, 115–133.
Moussiopoulos, N. and Flassak, Th. (1989), A fully vectorized fast direct solver of the Helmholtz equation in Applications of supercomputers in engineering: Algorithms, computer systems and user experience, Brebbia, C.A. and Peters, A. (editors), Elsevier, Amsterdam 67–77.
Moussiopoulos, N., Flassak, Th., Berlowitz, D., Sahm, P. (1993), Simulations of the Wind Field in Athens With the Nonhydrostatic Mesoscale Model MEMO, Environmental Software 8, 29–42.
Orlanski, J. (1976), A simple boundary condition for unbounded hyperbolic flows, J. Comput. Phys. 21, 251–269.
External links
Model Documentation System
European Topic Centre on Air and Climate Change (ETC/ACC)
Atmospheric dispersion modeling
Environmental engineering
Environmental science software
Numerical climate and weather models
Wind | MEMO model (wind-flow simulation) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,727 | [
"Chemical engineering",
"Environmental science software",
"Atmospheric dispersion modeling",
"Civil engineering",
"Environmental engineering",
"Environmental modelling"
] |
15,460,100 | https://en.wikipedia.org/wiki/Rabbit%20%28nuclear%20engineering%29 | In the field of nuclear engineering, a rabbit is a pneumatically controlled tool used to insert small samples of material inside the core of a nuclear reactor, usually for the purpose of studying the effect of irradiation on the material. Some rabbits have special linings to screen out certain types of neutrons. (For example, the Missouri University of Science and Technology research reactor uses a cadmium-lined rabbit to allow only high-energy neutrons through to samples in its core.)
References
Nuclear technology | Rabbit (nuclear engineering) | [
"Physics"
] | 104 | [
"Nuclear technology",
"Nuclear physics"
] |
15,462,488 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Ulam%20model | The Fermi–Ulam model (FUM) is a dynamical system that was introduced by Polish mathematician Stanislaw Ulam in 1961.
FUM is a variant of Enrico Fermi's primary work on acceleration of cosmic rays, namely Fermi acceleration. The system consists of a particle that bounces elastically between a fixed wall and a moving one, each of infinite mass. The walls represent the magnetic mirrors with which the cosmic particles collide.
A. J. Lichtenberg and M. A. Lieberman provided a simplified version of FUM (SFUM) that derives from the Poincaré surface of section and writes
where is the velocity of the particle after the -th collision with the fixed wall, is the corresponding phase of the moving wall, is the velocity law of the moving wall and is the stochasticity parameter of the system.
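For illustration only (the exact form of the map and its parameters are not fixed by the text above), a widely used version of the simplified map with a sinusoidal wall velocity can be iterated as follows:

```python
import numpy as np

def simplified_fermi_ulam(v0, phi0, eps, nsteps):
    """Iterate one common textbook form of the simplified Fermi-Ulam map
    (sinusoidal wall velocity, static-wall approximation):
        v_{n+1}   = |v_n + eps * sin(phi_n)|
        phi_{n+1} = (phi_n + 2 / v_{n+1}) mod 2*pi
    This is a sketch of one convention, not a unique definition."""
    v, phi = v0, phi0
    vs = np.empty(nsteps)
    for n in range(nsteps):
        v = abs(v + eps * np.sin(phi))
        phi = (phi + 2.0 / v) % (2.0 * np.pi)
        vs[n] = v
    return vs

# With a smooth (sinusoidal) wall motion the velocity stays bounded and its
# mean saturates instead of growing without limit, consistent with the KAM picture.
velocities = simplified_fermi_ulam(v0=0.2, phi0=1.0, eps=0.01, nsteps=10000)
print(velocities.mean(), velocities.max())
```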
If the velocity law of the moving wall is sufficiently differentiable, then according to the KAM theorem invariant curves exist in the phase space. These invariant curves act as barriers that do not allow a particle to accelerate further, and the average velocity of a population of particles saturates after a finite number of iterations of the map. For instance, such curves exist for a sinusoidal velocity law of the moving wall, but not for a sawtooth velocity law, which is discontinuous. Consequently, in the first case particles cannot accelerate indefinitely, in contrast to what happens in the second.
Over the years FUM became a prototype model for studying non-linear dynamics and coupled mappings.
The rigorous solution of the Fermi–Ulam problem (the velocity and energy of the particle are bounded) was first given by L. D. Pustyl'nikov (see also the references therein).
In spite of these negative results, if one considers the Fermi–Ulam model in the framework of the special theory of relativity, then under some general conditions the energy of the particle tends to infinity for an open set of initial data.
2D generalization
Though the 1D Fermi–Ulam model does not lead to acceleration for smooth oscillations, unbounded energy growth has been observed in 2D billiards with oscillating boundaries.
The growth rate of energy in chaotic billiards is found to be much larger than that in billiards that are integrable in the static limit.
Strongly chaotic billiard with oscillating boundary can serve as a paradigm for driven chaotic systems. In the experimental arena this topic arises in the theory of nuclear friction, and more recently in the studies of cold atoms that are trapped in optical billiards. The driving induces diffusion in energy, and consequently the absorption coefficient is determined by the Kubo formula.
References
External links
Regular and Chaotic Dynamics: A widely acknowledged scientific book that treats FUM, written by A. J. Lichtenberg and M. A. Lieberman (Appl. Math. Sci. vol 38) (New York: Springer).
Dynamical systems | Fermi–Ulam model | [
"Physics",
"Mathematics"
] | 613 | [
"Mechanics",
"Dynamical systems"
] |
15,464,235 | https://en.wikipedia.org/wiki/Coulomb%20gap | First introduced by M. Pollak, the Coulomb gap is a soft gap in the single-particle density of states (DOS) of a system of interacting localized electrons.
Due to the long-range Coulomb interactions, the single-particle DOS vanishes at the chemical potential, at low enough temperatures, such that thermal excitations do not wash out the gap.
Theory
At zero temperature, a classical treatment of a system gives an upper bound for the DOS near the Fermi energy, first suggested by Efros and Shklovskii. The argument is as follows:
Let us look at the ground state configuration of the system. Defining as the energy of an electron at site , due to the disorder and the Coulomb interaction with all other electrons (we define this both for occupied and unoccupied sites), it is easy to see that the energy needed to move an electron from an occupied site to an unoccupied site is given by the expression:
.
The subtraction of the last term accounts for the fact that contains a term due to the interaction with the electron present at site , but after moving the electron this term should not be considered. It is easy to see from this that there exists an energy such that all sites with energies above it are empty, and below it are full (this is the Fermi energy, but since we are dealing with a system with interactions it is not obvious a-priori that it is still well-defined).
Assume we have a finite single-particle DOS at the Fermi energy, . For every possible transfer of an electron from an occupied site to an unoccupied site , the energy invested should be positive, since we are assuming we are in the ground state of the system, i.e., .
Assuming we have a large system, consider all the sites with energies in the interval The number of these, by assumption, is As explained, of these would be occupied, and the others unoccupied. Of all pairs of occupied and unoccupied sites, let us choose the one where the two are closest to each other. If we assume the sites are randomly distributed in space, we find that the distance between these two sites is of order:
, where is the dimension of space.
Plugging the expression for into the previous equation, we obtain the inequality:
where is a coefficient of order unity. Since , this inequality will necessarily be violated for small enough . Hence, assuming a finite DOS at led to a contradiction. Repeating the above calculation under the assumption that the DOS near is proportional to shows that . This is an upper bound for the Coulomb gap. Efros considered single electron excitations, and obtained an integro-differential equation for the DOS, showing the Coulomb gap in fact follows the above equation (i.e., the upper bound is a tight bound).
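The resulting Efros–Shklovskii bound is usually summarized (up to a constant that depends on the electron charge and the dielectric constant of the host; notation assumed here) as

```latex
\[
  g(E) \;\propto\; |E-\mu|^{\,d-1} ,
\]
```

i.e. a gap that is linear in two dimensions and parabolic in three dimensions.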
Other treatments of the problem include a mean-field numerical approach, as well as more recent treatments, also verifying that the upper bound suggested above is a tight bound. Many Monte Carlo simulations have also been performed, some of them in disagreement with the result quoted above. Few works deal with the quantum aspect of the problem. The classical Coulomb gap in a clean system without disorder is well captured within Extended Dynamical Mean-Field Theory (EDMFT), supported by Metropolis Monte Carlo simulations.
Experimental observations
Direct experimental confirmation of the gap has been done via tunneling experiments, which probed the single-particle DOS in two and three dimensions. The experiments clearly showed a linear gap in two dimensions, and a parabolic gap in three dimensions.
Another experimental consequence of the Coulomb gap is found in the conductivity of samples in the localized regime.
The existence of a gap in the spectrum of excitations would result in a lower conductivity than that predicted by Mott variable-range hopping. If one uses the analytical expression of the single-particle DOS in the Mott derivation, a universal dependence is obtained, for any dimension. The observation of this is expected to occur below a certain temperature, such that the optimal hopping energy would be smaller than the width of the Coulomb gap. The transition from Mott to so-called Efros–Shklovskii variable-range hopping has been observed experimentally for various systems. Nevertheless, no rigorous derivation of the Efros–Shklovskii conductivity formula has been put forth, and in some experiments behavior is observed, with a value of that fits neither the Mott nor the Efros–Shklovskii theories.
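For comparison, the hopping-conductivity laws usually quoted are (with T_M and T_ES material-dependent characteristic temperatures; notation assumed here)

```latex
\[
  \sigma_{\text{Mott}} \;\propto\; \exp\!\left[-\left(\frac{T_M}{T}\right)^{\frac{1}{d+1}}\right],
  \qquad
  \sigma_{\text{ES}} \;\propto\; \exp\!\left[-\left(\frac{T_{ES}}{T}\right)^{\frac{1}{2}}\right] .
\]
```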
See also
Coulomb's law
References
Electronic band structures
Statistical mechanics
Physical quantities | Coulomb gap | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 956 | [
"Electron",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electronic band structures",
"Condensed matter physics",
"Statistical mechanics",
"Physical properties"
] |
439,454 | https://en.wikipedia.org/wiki/Willard%20Harrison%20Bennett | Willard Harrison Bennett (June 13, 1903 – September 28, 1987) was an American scientist and inventor, born in Findlay, Ohio. Bennett conducted research into plasma physics, astrophysics, geophysics, surface physics, and physical chemistry. The Bennett pinch is named after him.
Biography
Born in Findlay, Ohio, Bennett attended Carnegie Institute of Technology from 1920 to 1922 and Ohio State University; the University of Wisconsin, Sc.M. in physical chemistry, 1926; and the University of Michigan, Ph.D. in physics, 1928. Bennett was elected to a National Research Fellowship in Physics and in 1928 and 1929 studied at the California Institute of Technology. In 1930 he joined the Physics faculty at Ohio State. During the World War II era, he served as an officer in the United States Army and developed aircraft equipment. Following military service, Bennett worked at the National Bureau of Standards, the University of Arkansas, and the United States Naval Research Laboratory. In 1961, he was appointed Burlington Professor of Physics at North Carolina State University (emeritus in 1976). Bennett held 67 patents.
Bennett made scientific history in the 1930s with pioneering studies in plasma physics, the study of gases ionized by high-voltage electricity. In 1955 Bennett invented radio-frequency mass spectrometry; his radio-frequency mass spectrometer measured the masses of atoms and was later the first such experiment carried into space. His research on gases ionized by high-voltage electricity was used in later thermonuclear fusion research.
Invention impact
These studies and later research have been used throughout the world in controlled thermonuclear fusion research. In the 1950s, Bennett's experimental tube called the Stormertron predicted and modeled the Van Allen radiation belts surrounding the Earth six years before they were discovered by satellite. It also reproduced intricate impact patterns found on the Earth's surface which explained many features of the polar aurora. Sputnik 3 carried the first radio frequency mass spectrometer into space.
References
External links
https://www.invent.org/inductees/willard-h-bennett
1903 births
1987 deaths
20th-century American inventors
20th-century American physicists
University of Wisconsin–Madison alumni
People from Findlay, Ohio
Carnegie Mellon University alumni
Mass spectrometrists
American physical chemists
University of Michigan alumni
Plasma physicists
United States Army officers
Fellows of the American Physical Society
20th-century American chemists | Willard Harrison Bennett | [
"Physics",
"Chemistry"
] | 494 | [
"Spectrum (physical sciences)",
"Plasma physics",
"Mass spectrometrists",
"Plasma physicists",
"Mass spectrometry",
"Biochemists"
] |
439,497 | https://en.wikipedia.org/wiki/Classical%20limit | The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior.
Quantum theory
A heuristic postulate called the correspondence principle was introduced to quantum theory by Niels Bohr: in effect it states that some kind of continuity argument should apply to the classical limit of quantum systems as the value of the Planck constant normalized by the action of these systems becomes very small. Often, this is approached through "quasi-classical" techniques (cf. WKB approximation).
More rigorously, the mathematical operation involved in classical limits is a group contraction, approximating physical systems where the relevant action is much larger than the reduced Planck constant , so the "deformation parameter" / can be effectively taken to be zero (cf. Weyl quantization.) Thus typically, quantum commutators (equivalently, Moyal brackets) reduce to Poisson brackets, in a group contraction.
In quantum mechanics, due to Heisenberg's uncertainty principle, an electron can never be at rest; it must always have a non-zero kinetic energy, a result not found in classical mechanics. For example, if we consider something very large relative to an electron, like a baseball, the uncertainty principle predicts that it cannot really have zero kinetic energy, but the uncertainty in kinetic energy is so small that the baseball can effectively appear to be at rest, and hence it appears to obey classical mechanics. In general, if large energies and large objects (relative to the size and energy levels of an electron) are considered in quantum mechanics, the result will appear to obey classical mechanics. The typical occupation numbers involved are huge: a macroscopic harmonic oscillator with = 2 Hz, = 10 g, and maximum amplitude 0 = 10 cm, has = , so that ≃ 10^30. Further see coherent states. It is less clear, however, how the classical limit applies to chaotic systems, a field known as quantum chaos.
Quantum mechanics and classical mechanics are usually treated with entirely different formalisms: quantum theory using Hilbert space, and classical mechanics using a representation in phase space. One can bring the two into a common mathematical framework in various ways. In the phase space formulation of quantum mechanics, which is statistical in nature, logical connections between quantum mechanics and classical statistical mechanics are made, enabling natural comparisons between them, including the violations of Liouville's theorem (Hamiltonian) upon quantization.
In a crucial paper (1933), Dirac explained how classical mechanics is an emergent phenomenon of quantum mechanics: destructive interference among paths with non-extremal macroscopic actions ≫ ħ obliterates amplitude contributions in the path integral he introduced, leaving the extremal action class, thus the classical action path as the dominant contribution, an observation further elaborated by Feynman in his 1942 PhD dissertation. (Further see quantum decoherence.)
Time-evolution of expectation values
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle of mass m moving in a potential V(x), the Ehrenfest theorem says
$$m\frac{d}{dt}\langle x\rangle = \langle p\rangle, \qquad \frac{d}{dt}\langle p\rangle = -\left\langle V'(x)\right\rangle.$$
Although the first of these equations is consistent with classical mechanics, the second is not: if the pair $(\langle x\rangle, \langle p\rangle)$ were to satisfy Newton's second law, the right-hand side of the second equation would have read
$$-V'\!\left(\langle x\rangle\right).$$
But in most cases,
$$\left\langle V'(x)\right\rangle \neq V'\!\left(\langle x\rangle\right).$$
If, for example, the potential V is cubic, then V' is quadratic, in which case we are talking about the distinction between $\langle x^2\rangle$ and $\langle x\rangle^2$, which differ by $(\Delta x)^2$.
An exception occurs when the classical equations of motion are linear, that is, when V is quadratic and V' is linear. In that special case, $\left\langle V'(x)\right\rangle$ and $V'\!\left(\langle x\rangle\right)$ do agree. In particular, for a free particle or a quantum harmonic oscillator, the expected position and expected momentum exactly follow solutions of Newton's equations.
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x₀, then $\left\langle V'(x)\right\rangle$ and $V'\!\left(\langle x\rangle\right)$ will be almost the same, since both will be approximately equal to $V'(x_0)$. In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
Now, if the initial state is very localized in position, it will be very spread out in momentum, and thus we expect that the wave function will rapidly spread out, and the connection with the classical trajectories will be lost. When the Planck constant is small, however, it is possible to have a state that is well localized in both position and momentum. The small uncertainty in momentum ensures that the particle remains well localized in position for a long time, so that expected position and momentum continue to closely track the classical trajectories for a long time.
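A small numerical sketch of the cubic-potential example above: for a Gaussian probability density the gap between ⟨V′(x)⟩ and V′(⟨x⟩) equals 3a(Δx)². The coefficient, packet centre and width below are arbitrary illustrative choices, not values from the article.

```python
import numpy as np

# Cubic potential V(x) = a*x**3, so V'(x) = 3*a*x**2.
a = 1.0
x0, sigma = 2.0, 0.5          # packet centre and width (illustrative)

x = np.linspace(x0 - 10*sigma, x0 + 10*sigma, 200001)
dx = x[1] - x[0]
rho = np.exp(-(x - x0)**2 / (2*sigma**2))
rho /= rho.sum() * dx          # normalised probability density |psi|^2

mean_Vp = np.sum(3*a*x**2 * rho) * dx    # <V'(x)>
Vp_mean = 3*a*x0**2                      # V'(<x>)
print(mean_Vp - Vp_mean, 3*a*sigma**2)   # both ~ 0.75 = 3*a*(Delta x)**2
```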
Relativity and other deformations
Other familiar deformations in physics involve:
The deformation of classical Newtonian into relativistic mechanics (special relativity), with deformation parameter v/c; the classical limit involves small speeds, so v/c → 0, and the systems appear to obey Newtonian mechanics.
Similarly for the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild-radius/characteristic-dimension, we find that objects once again appear to obey classical mechanics (flat space), when the mass of an object times the square of the Planck length is much smaller than its size and the sizes of the problem addressed. See Newtonian limit.
Wave optics might also be regarded as a deformation of ray optics for deformation parameter λ/a.
Likewise, thermodynamics deforms to statistical mechanics with deformation parameter 1/N.
See also
Classical probability density
Ehrenfest theorem
Madelung equations
Fresnel integral
Mathematical formulation of quantum mechanics
Quantum chaos
Quantum decoherence
Quantum limit
Semiclassical physics
Wigner–Weyl transform
WKB approximation
References
Concepts in physics
Quantum mechanics
Theory of relativity
Philosophy of science
Emergence | Classical limit | [
"Physics"
] | 1,252 | [
"Theoretical physics",
"Quantum mechanics",
"nan",
"Theory of relativity"
] |
440,425 | https://en.wikipedia.org/wiki/Penrose%20stairs | The Penrose stairs or Penrose steps, also dubbed the impossible staircase, is an impossible object created by Oscar Reutersvärd in 1937 and later independently discovered and made popular by Lionel Penrose and his son Roger Penrose. A variation on the Penrose triangle, it is a two-dimensional depiction of a staircase in which the stairs make four 90-degree turns as they ascend or descend yet form a continuous loop, so that a person could climb them forever and never get any higher. This is clearly impossible in three-dimensional Euclidean geometry but possible in some non-Euclidean geometry like in nil geometry.
The "continuous staircase" was first presented in an article that the Penroses wrote in 1959, based on the so-called "triangle of Penrose" published by Roger Penrose in the British Journal of Psychology in 1958. M.C. Escher then discovered the Penrose stairs in the following year and made his now famous lithograph Klimmen en dalen (Ascending and Descending) in March 1960. Penrose and Escher were informed of each other's work that same year. Escher developed the theme further in his print Waterval (Waterfall), which appeared in 1961.
In their original article the Penroses noted that "each part of the structure is acceptable as representing a flight of steps but the connections are such that the picture, as a whole, is inconsistent: the steps continually descend in a clockwise direction."
History of discovery
The Penroses
Escher, in the 1950s, had not yet drawn any impossible stairs and was not aware of their existence. Roger Penrose had been introduced to Escher's work at the International Congress of Mathematicians in Amsterdam in 1954. He was "absolutely spellbound" by Escher's work, and on his journey back to England he decided to produce something "impossible" on his own. After experimenting with various designs of bars overlying each other he finally arrived at the impossible triangle. Roger showed his drawings to his father, who immediately produced several variants, including the impossible flight of stairs. They wanted to publish their findings but did not know in what field the subject belonged. Because Lionel Penrose knew the editor of the British Journal of Psychology and convinced him to publish their short manuscript, the finding was finally presented as a psychological subject. After the publication in 1958 the Penroses sent a copy of the article to Escher as a token of their esteem.
While the Penroses credited Escher in their article, Escher noted in a letter to his son in January 1960 that he was:
Escher was captivated by the endless stairs and subsequently wrote a letter to the Penroses in April 1960:
At a conference in Rome in 1985, Roger Penrose said that he had been greatly inspired by Escher's work when he and his father discovered both the Penrose tribar structure (that is, the Penrose triangle) and the continuous steps.
Oscar Reutersvärd
The staircase design had been discovered previously by the Swedish artist Oscar Reutersvärd, but neither Penrose nor Escher was aware of his designs. Inspired by a radio programme on Mozart's method of composition—described as "creative automatism"; that is, each creative idea written down inspired a new idea—Reutersvärd started to draw a series of impossible objects on a journey from Stockholm to Paris in 1950 in the same "unconscious, automatic" way. He did not realize that his figure was a continuous flight of stairs while drawing, but the process enabled him to trace his increasingly complex designs step by step. When M.C. Escher's Ascending and Descending was sent to Reutersvärd in 1961, he was impressed but did not like the irregularities of the stairs. Throughout the 1960s, Reutersvärd sent several letters to Escher to express his admiration for his work, but the Dutch artist failed to respond. Roger Penrose only discovered Reutersvärd's work in 1984.
Escherian Stairwell
The Escherian Stairwell is a viral video based on the Penrose stairs illusion. The video, filmed at Rochester Institute of Technology by Michael Lacanilao, was edited to create a seemingly cyclic stairwell such that if someone walks in either direction, they will end up where they started. The video claims that the stairwell, whose name evokes M.C. Escher's impossible objects, was built in the 1960s by the fictitious architect Rafael Nelson Aboganda. The video was revealed to be an Internet hoax, as individuals have travelled to Rochester Institute of Technology to view the staircase.
In popular culture
People walking along Penrose stairs feature in the music video for Cliff Richard's song "Some People".
The Penrose stairs appeared twice in the movie Inception. This paradoxical illusion can only be realized in the dream worlds of the film. In the film, the hero descends the stairs fleeing from a guard. In the real world, the hero should always be in front of the villain throughout this chase. However, in the case of the Penrose stairs the hero descends another flight of stairs to catch up to the antagonist and catch him unaware.
The cover of the 2011 album Angles by American rock band The Strokes depicts a complex set of Penrose stairs.
In their 2015 single "Greek Tragedy", English rock band The Wombats mention the Penrose steps.
See also
Mathematics and art
Shepard tone
Strange loop
Infinite descent
Notes
References
Optical illusions
Stairways
Impossible objects | Penrose stairs | [
"Physics"
] | 1,103 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
440,704 | https://en.wikipedia.org/wiki/Colonnade | In classical architecture, a colonnade is a long sequence of columns joined by their entablature, often free-standing, or part of a building. Paired or multiple pairs of columns are normally employed in a colonnade which can be straight or curved. The space enclosed may be covered or open. In St. Peter's Square in Rome, Bernini's great colonnade encloses a vast open elliptical space.
When in front of a building, screening the door (Latin porta), it is called a portico. When enclosing an open court, a peristyle. A portico may be more than one rank of columns deep, as at the Pantheon in Rome or the stoae of Ancient Greece.
When the intercolumniation is alternately wide and narrow, a colonnade may be termed "araeosystyle" (Gr. αραιος, "widely spaced", and συστυλος, "with columns set close together"), as in the case of the western porch of St Paul's Cathedral and the east front of the Louvre.
History
Colonnades (formerly spelled colonade) have been built since ancient times, and interpretations of the classical model have continued through to modern times; Neoclassical styles remained popular for centuries. At the British Museum, for example, porticos are continued along the front as a colonnade. The porch of columns that surrounds the Lincoln Memorial in Washington, D.C., (in style a peripteral classical temple) can be termed a colonnade. As well as the traditional use in buildings and monuments, colonnades are used in sports stadiums such as the Harvard Stadium in Boston, where the entire horseshoe-shaped stadium is topped by a colonnade. The longest colonnade in the United States, with 36 Corinthian columns, is the New York State Education Building in Albany, New York.
Notable colonnades
Ancient world
Renaissance and Baroque periods
Neoclassical
Modern interpretations
See also
Arcade
Cloister
Engaged column
References
Columns and entablature
Architectural elements | Colonnade | [
"Technology",
"Engineering"
] | 432 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
440,789 | https://en.wikipedia.org/wiki/Gyrator | A gyrator is a passive, linear, lossless, two-port electrical network element proposed in 1948 by Bernard D. H. Tellegen as a hypothetical fifth linear element after the resistor, capacitor, inductor and ideal transformer. Unlike the four conventional elements, the gyrator is non-reciprocal. Gyrators permit network realizations of two-(or-more)-port devices which cannot be realized with just the four conventional elements. In particular, gyrators make possible network realizations of isolators and circulators. Gyrators do not however change the range of one-port devices that can be realized. Although the gyrator was conceived as a fifth linear element, its adoption makes both the ideal transformer and either the capacitor or inductor redundant. Thus the number of necessary linear elements is in fact reduced to three. Circuits that function as gyrators can be built with transistors and op-amps using feedback.
Tellegen invented a circuit symbol for the gyrator and suggested a number of ways in which a practical gyrator might be built.
An important property of a gyrator is that it inverts the current–voltage characteristic of an electrical component or network. In the case of linear elements, the impedance is also inverted. In other words, a gyrator can make a capacitive circuit behave inductively, a series LC circuit behave like a parallel LC circuit, and so on. It is primarily used in active filter design and miniaturization.
Behaviour
An ideal gyrator is a linear two-port device which couples the current on one port to the voltage on the other and conversely. The instantaneous currents and instantaneous voltages are related by
$$v_2 = R\,i_1, \qquad v_1 = -R\,i_2,$$
where R is the gyration resistance of the gyrator.
The gyration resistance (or equivalently its reciprocal the gyration conductance) has an associated direction indicated by an arrow on the schematic diagram. By convention, the given gyration resistance or conductance relates the voltage on the port at the head of the arrow to the current at its tail. The voltage at the tail of the arrow is related to the current at its head by minus the stated resistance. Reversing the arrow is equivalent to negating the gyration resistance, or to reversing the polarity of either port.
Although a gyrator is characterized by its resistance value, it is a lossless component. From the governing equations, the instantaneous power into the gyrator is identically zero:
$$P = v_1 i_1 + v_2 i_2 = (-R\,i_2)\,i_1 + (R\,i_1)\,i_2 = 0.$$
A gyrator is an entirely non-reciprocal device, and hence is represented by antisymmetric impedance and admittance matrices:
$$Z = \begin{pmatrix} 0 & -R \\ R & 0 \end{pmatrix}, \qquad Y = \begin{pmatrix} 0 & \tfrac{1}{R} \\ -\tfrac{1}{R} & 0 \end{pmatrix}.$$
If the gyration resistance is chosen to be equal to the characteristic impedance of the two ports (or to their geometric mean if these are not the same), then the scattering matrix for the gyrator is
$$S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$
which is likewise antisymmetric. This leads to an alternative definition of a gyrator: a device which transmits a signal unchanged in the forward (arrow) direction, but reverses the polarity of the signal travelling in the backward direction (or equivalently, 180° phase-shifts the backward-travelling signal). The symbol used to represent a gyrator in one-line diagrams (where a waveguide or transmission line is shown as a single line rather than as a pair of conductors), reflects this one-way phase shift.
As with a quarter-wave transformer, if one port of a gyrator is terminated with a linear load, then the other port presents an impedance inversely proportional to the impedance of that load:
$$Z_{\text{in}} = \frac{R^2}{Z_\text{load}}.$$
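A minimal numeric illustration of this impedance inversion (component values are arbitrary examples, not from the article): terminating one port with a capacitor makes the other port look like an inductor of value R²C.

```python
import numpy as np

R = 1e3          # gyration resistance, ohms
C = 100e-9       # load capacitance, farads
f = np.logspace(2, 5, 4)          # a few test frequencies, Hz
w = 2 * np.pi * f

Z_load = 1 / (1j * w * C)         # capacitor impedance
Z_in = R**2 / Z_load              # impedance seen at the other port

L_equiv = R**2 * C                # 0.1 H in this example
print(np.allclose(Z_in, 1j * w * L_equiv))   # True: purely inductive
```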
A generalization of the gyrator is conceivable, in which the forward and backward gyration conductances have different magnitudes, so that the admittance matrix is
However, this no longer represents a passive device.
Name
Tellegen named the element gyrator as a blend of gyroscope and the common device suffix -tor (as in resistor, capacitor, transistor etc.) The -tor ending is even more suggestive in Tellegen's native Dutch, where the related element transformer is called transformator. The gyrator is related to the gyroscope by an analogy in its behaviour.
The analogy with the gyroscope is due to the relationship between the torque and angular velocity of the gyroscope on the two axes of rotation. A torque on one axis will produce a proportional change in angular velocity on the other axis and conversely. A mechanical–electrical analogy of the gyroscope making torque and angular velocity the analogs of voltage and current results in the electrical gyrator.
Relationship to the ideal transformer
An ideal gyrator is similar to an ideal transformer in being a linear, lossless, passive, memoryless two-port device. However, whereas a transformer couples the voltage on port 1 to the voltage on port 2, and the current on port 1 to the current on port 2, the gyrator cross-couples voltage to current and current to voltage. Cascading two gyrators achieves a voltage-to-voltage coupling identical to that of an ideal transformer.
Cascaded gyrators of gyration resistances R₁ and R₂ are equivalent to a transformer of turns ratio R₁ : R₂. Cascading a transformer and a gyrator, or equivalently cascading three gyrators, produces a single gyrator of gyration resistance R₁R₃/R₂.
From the point of view of network theory, transformers are redundant when gyrators are available. Anything that can be built from resistors, capacitors, inductors, transformers and gyrators, can also be built using just resistors, gyrators and inductors (or capacitors).
Magnetic circuit analogy
In the two-gyrator equivalent circuit for a transformer, described above, the gyrators may be identified with the transformer windings, and the loop connecting the gyrators with the transformer magnetic core. The electric current around the loop then corresponds to the rate-of-change of magnetic flux through the core, and the electromotive force (EMF) in the loop due to each gyrator corresponds to the magnetomotive force (MMF) in the core due to each winding.
The gyration resistances are in the same ratio as the winding turn-counts, but collectively of no particular magnitude. So, choosing an arbitrary conversion factor of ohms per turn, a loop EMF is related to a core MMF by
and the loop current is related to the core flux-rate by
The core of a real, non-ideal, transformer has finite permeance (non-zero reluctance ), such that the flux and total MMF satisfy
which means that in the gyrator loop
corresponding to the introduction of a series capacitor
in the loop. This is Buntenbach's capacitance–permeance analogy, or the gyrator–capacitor model of magnetic circuits.
Application
Simulated inductor
A gyrator can be used to transform a load capacitance into an inductance. At low frequencies and low powers, the behaviour of the gyrator can be reproduced by a small op-amp circuit. This supplies a means of providing an inductive element in a small electronic circuit or integrated circuit. Before the invention of the transistor, coils of wire with large inductance might be used in electronic filters. An inductor can be replaced by a much smaller assembly containing a capacitor, operational amplifiers or transistors, and resistors. This is especially useful in integrated circuit technology.
Operation
In the circuit shown, one port of the gyrator is between the input terminal and ground, while the other port is terminated with the capacitor. The circuit works by inverting and multiplying the effect of the capacitor in an RC differentiating circuit, where the voltage across the resistor R behaves through time in the same manner as the voltage across an inductor. The op-amp follower buffers this voltage and applies it back to the input through the resistor RL. The desired effect is an impedance of the form of an ideal inductor L with a series resistance RL:
$$Z = R_L + j\omega L.$$
From the diagram, the input impedance of the op-amp circuit is
$$Z_\text{in} = \frac{R_L\,(1 + j\omega R C)}{1 + j\omega R_L C} = \left(R_L + j\omega R_L R C\right) \parallel \left(R + \frac{1}{j\omega C}\right).$$
With RLRC = L, it can be seen that the impedance of the simulated inductor is the desired impedance in parallel with the impedance of the RC circuit. In typical designs, R is chosen to be sufficiently large such that the first term dominates; thus, the RC circuit's effect on input impedance is negligible:
$$Z_\text{in} \approx R_L + j\omega L.$$
This is the same as a resistance RL in series with an inductance L = RLRC. There is a practical limit on the minimum value that RL can take, determined by the current output capability of the op-amp.
The impedance cannot increase indefinitely with frequency, and eventually the second term limits the impedance to the value of R.
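A short sketch with example component values (chosen here for illustration, not taken from the article) showing the simulated inductance L = RL·R·C and that, when R is large, the input impedance is close to that of the ideal RL + jωL branch:

```python
import numpy as np

RL = 100.0       # ohms
R = 10e3         # ohms
C = 100e-9       # farads
L = RL * R * C   # simulated inductance, 0.1 H here

f = 1e3                                   # test frequency, Hz
w = 2 * np.pi * f
Z_ideal = RL + 1j * w * L                 # RL in series with L
Z_rc = R + 1 / (1j * w * C)               # the RC branch
Z_in = Z_ideal * Z_rc / (Z_ideal + Z_rc)  # parallel combination
print(abs(Z_in), abs(Z_ideal))            # nearly equal when R is large
```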
Comparison with actual inductors
Simulated elements are electronic circuits that imitate actual elements. Simulated elements cannot replace physical inductors in all the possible applications as they do not possess all the unique properties of physical inductors.
Magnitudes. In typical applications, both the inductance and the resistance of the gyrator are much greater than that of a physical inductor. Gyrators can be used to create inductors from the microhenry range up to the megahenry range. Physical inductors are typically limited to tens of henries, and have parasitic series resistances from hundreds of microhms through the low kilohm range. The parasitic resistance of a gyrator depends on the topology, but with the topology shown, series resistances will typically range from tens of ohms through hundreds of kilohms.
Quality. Physical capacitors are often much closer to "ideal capacitors" than physical inductors are to "ideal inductors". Because of this, a synthesized inductor realized with a gyrator and a capacitor may, for certain applications, be closer to an "ideal inductor" than any (practical) physical inductor can be. Thus, use of capacitors and gyrators may improve the quality of filter networks that would otherwise be built using inductors. Also, the Q factor of a synthesized inductor can be selected with ease. The Q of an LC filter can be either lower or higher than that of an actual LC filter – for the same frequency, the inductance is much higher, the capacitance much lower, but the resistance also higher. Gyrator inductors typically have higher accuracy than physical inductors, due to the lower cost of precision capacitors than inductors.
Energy storage. Simulated inductors do not have the inherent energy storing properties of the real inductors and this limits the possible power applications. The circuit cannot respond like a real inductor to sudden input changes (it does not produce a high-voltage back EMF); its voltage response is limited by the power supply. Since gyrators use active circuits, they only function as a gyrator within the power supply range of the active element. Hence gyrators are usually not very useful for situations requiring simulation of the 'flyback' property of inductors, where a large voltage spike is caused when current is interrupted. A gyrator's transient response is limited by the bandwidth of the active device in the circuit and by the power supply.
Externalities. Simulated inductors do not react to external magnetic fields and permeable materials the same way that real inductors do. They also don't create magnetic fields (and induce currents in external conductors) the same way that real inductors do. This limits their use in applications such as sensors, detectors and transducers.
Grounding. The fact that one side of the simulated inductor is grounded restricts the possible applications (real inductors are floating). This limitation may preclude its use in some low-pass and notch filters. However the gyrator can be used in a floating configuration with another gyrator so long as the floating "grounds" are tied together. This allows for a floating gyrator, but the inductance simulated across the input terminals of the gyrator pair must be cut in half for each gyrator to ensure that the desired inductance is met (the impedance of inductors in series adds together). This is not typically done as it requires even more components than in a standard configuration and the resulting inductance is a result of two simulated inductors, each with half of the desired inductance.
Applications
The primary application for a gyrator is to reduce the size and cost of a system by removing the need for bulky, heavy and expensive inductors. For example, RLC bandpass filter characteristics can be realized with capacitors, resistors and operational amplifiers without using inductors; graphic equalizers can likewise be built without inductors.
Gyrator circuits are extensively used in telephony devices that connect to a POTS system. This has allowed telephones to be much smaller, as the gyrator circuit carries the DC part of the line loop current, allowing the transformer carrying the AC voice signal to be much smaller due to the elimination of DC current through it.
Gyrators are used in most DAAs (data access arrangements).
Circuitry in telephone exchanges has also been affected, with gyrators being used in line cards. Gyrators are also widely used in hi-fi for graphic equalizers, parametric equalizers, discrete bandstop and bandpass filters (such as rumble filters), and FM pilot tone filters.
There are many applications where it is not possible to use a gyrator to replace an inductor:
High voltage systems utilizing flyback (beyond working voltage of transistors/amplifiers)
RF systems commonly use real inductors as they are quite small at these frequencies and integrated circuits to build an active gyrator are either expensive or non-existent. However, passive gyrators are possible.
Power conversion, where a coil is used as energy storage.
Impedance inversion
In microwave circuits, impedance inversion can be achieved using a quarter-wave impedance transformer instead of a gyrator. The quarter-wave transformer is a passive device and is far simpler to build than a gyrator. Unlike the gyrator, the transformer is a reciprocal component. The transformer is an example of a distributed-element circuit.
In other energy domains
Analogs of the gyrator exist in other energy domains. The analogy with the mechanical gyroscope has already been pointed out in the name section. Also, when systems involving multiple energy domains are being analysed as a unified system through analogies, such as mechanical-electrical analogies, the transducers between domains are considered either transformers or gyrators depending on which variables they are translating. Electromagnetic transducers translate current into force and velocity into voltage. In the impedance analogy however, force is the analog of voltage and velocity is the analog of current, thus electromagnetic transducers are gyrators in this analogy. On the other hand, piezoelectric transducers are transformers (in the same analogy).
Thus another possible way to make an electrical passive gyrator is to use transducers to translate into the mechanical domain and back again, much as is done with mechanical filters. Such a gyrator can be made with a single mechanical element by using a multiferroic material using its magnetoelectric effect. For instance, a current carrying coil wound around a multiferroic material will cause vibration through the multiferroic's magnetostrictive property. This vibration will induce a voltage between electrodes embedded in the material through the multiferroic's piezoelectric property. The overall effect is to translate a current into a voltage resulting in gyrator action.
See also
Sallen–Key topology
Frequency-dependent negative resistor
References
Analog circuits
Dutch inventions
Linear filters | Gyrator | [
"Engineering"
] | 3,400 | [
"Analog circuits",
"Electronic engineering"
] |
440,906 | https://en.wikipedia.org/wiki/Seiche | A seiche ( ) is a standing wave in an enclosed or partially enclosed body of water. Seiches and seiche-related phenomena have been observed on lakes, reservoirs, swimming pools, bays, harbors, caves, and seas. The key requirement for formation of a seiche is that the body of water be at least partially bounded, allowing the formation of the standing wave.
The term was promoted by the Swiss hydrologist François-Alphonse Forel in 1890, who was the first to make scientific observations of the effect in Lake Geneva. The word had apparently long been used in the region to describe oscillations in alpine lakes. According to Wilson (1972), this Swiss French dialect word comes from the Latin word siccus, meaning "dry", i.e., as the water recedes, the beach dries. The French word sec or sèche (dry) descends from the Latin.
Seiches in harbours can be caused by long-period or infragravity waves, which are due to subharmonic nonlinear wave interaction with the wind waves, having periods longer than the accompanying wind-generated waves.
Causes and nature
Seiches are often imperceptible to the naked eye, and observers in boats on the surface may not notice that a seiche is occurring due to the extremely long periods.
The effect is caused by resonances in a body of water that has been disturbed by one or more factors, most often meteorological effects (wind and atmospheric pressure variations), seismic activity, or tsunamis. Gravity always seeks to restore the horizontal surface of a body of liquid water, as this represents the configuration in which the water is in hydrostatic equilibrium.
Vertical harmonic motion results, producing an impulse that travels the length of the basin at a velocity that depends on the depth of the water. The impulse is reflected back from the end of the basin, generating interference. Repeated reflections produce standing waves with one or more nodes, or points, that experience no vertical motion. The frequency of the oscillation is determined by the size of the basin, its depth and contours, and the water temperature.
The longest natural period of a seiche is the period associated with the fundamental resonance for the body of water—corresponding to the longest standing wave. For a surface seiche in an enclosed rectangular body of water this can be estimated using Merian's formula:
$$T = \frac{2L}{\sqrt{gh}}$$
where T is the longest natural period, L and h are the length and average depth of the body of water, and g the acceleration of gravity.
Higher-order harmonics are also observed. The period of the second harmonic will be half the natural period, the period of the third harmonic will be a third of the natural period, and so forth.
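A quick worked example of Merian's formula; the basin length and depth below are round illustrative numbers, not measurements of any particular lake:

```python
import math

g = 9.81          # m/s^2
L_basin = 70e3    # basin length, m (illustrative)
h = 150.0         # average depth, m (illustrative)

T = 2 * L_basin / math.sqrt(g * h)      # fundamental period, seconds
print(f"fundamental period ~ {T/60:.0f} minutes")
# Higher harmonics have periods T/2, T/3, ...
```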
Occurrence
Seiches have been observed on both lakes and seas. The key requirement is that the body of water be partially constrained to allow formation of standing waves. Regularity of geometry is not required; even harbours with exceedingly irregular shapes are routinely observed to oscillate with very stable frequencies.
Lake seiches
Low rhythmic seiches are almost always present on larger lakes. They are usually unnoticeable among the common wave patterns, except during periods of unusual calm. Harbours, bays, and estuaries are often prone to small seiches with amplitudes of a few centimetres and periods of a few minutes.
The original studies in Lake Geneva by François-Alphonse Forel found the longitudinal period to have a 73-minute cycle, and the transversal seiche to have a period of around 10 minutes. Another lake well known for its regular seiches is New Zealand's Lake Wakatipu, which varies its surface height at Queenstown by 20 centimetres in a 27-minute cycle. Seiches can also form in semi-enclosed seas; the North Sea often experiences a lengthwise seiche with a period of about 36 hours.
The National Weather Service issues low water advisories for portions of the Great Lakes when seiches of or greater are likely to occur. Lake Erie is particularly prone to wind-caused seiches because of its shallowness and its elongation on a northeast–southwest axis, which frequently matches the direction of prevailing winds and therefore maximises the fetch of those winds. These can lead to extreme seiches of up to between the ends of the lake.
The effect is similar to a storm surge like that caused by hurricanes along ocean coasts, but the seiche effect can cause oscillation back and forth across the lake for some time. In 1954, the remnants of Hurricane Hazel piled up water along the northwestern Lake Ontario shoreline near Toronto, causing extensive flooding, and established a seiche that subsequently caused flooding along the south shore.
Lake seiches can occur very quickly: on July 13, 1995, a large seiche on Lake Superior caused the water level to fall and then rise again by one metre (three feet) within fifteen minutes, leaving some boats hanging from the docks on their mooring lines when the water retreated. The same storm system that caused the 1995 seiche on Lake Superior produced a similar effect in Lake Huron, in which the water level at Port Huron changed by over two hours. On June 26, 1954, on Lake Michigan in Chicago, eight fishermen were swept away from piers at Montrose and North Avenue Beaches and drowned when a seiche hit the Chicago waterfront.
Lakes in seismically active areas, such as Lake Tahoe in California/Nevada, are significantly at risk from seiches. Geological evidence indicates that the shores of Lake Tahoe may have been hit by seiches and tsunamis as much as high in prehistoric times, and local researchers have called for the risk to be factored into emergency plans for the region.
Earthquake-generated seiches can be observed thousands of miles away from the epicentre of a quake. Swimming pools are especially prone to seiches caused by earthquakes, as the ground tremors often match the resonant frequencies of small bodies of water. The 1994 Northridge earthquake in California caused swimming pools to overflow across southern California. The massive Good Friday earthquake that hit Alaska in 1964 caused seiches in swimming pools as far away as Puerto Rico. The earthquake that hit Lisbon, Portugal in 1755 also caused seiches farther north in Loch Lomond, Loch Long, Loch Katrine and Loch Ness in Scotland, and in canals in Sweden. The 2004 Indian Ocean earthquake caused seiches in standing water bodies in many Indian states as well as in Bangladesh, Nepal, and northern Thailand. Seiches were again observed in Uttar Pradesh, Tamil Nadu and West Bengal in India as well as in many locations in Bangladesh during the 2005 Kashmir earthquake.
The 1950 Assam–Tibet earthquake is known to have generated seiches as far away as Norway and southern England. Other earthquakes in the Indian sub-continent known to have generated seiches include the 1803 Kumaon-Barahat, 1819 Allah Bund, 1842 Central Bengal, 1905 Kangra, 1930 Dhubri, 1934 Nepal-Bihar, 2001 Bhuj, 2005 Nias, 2005 Teresa Island earthquakes. The February 27, 2010 Chile earthquake produced a seiche on Lake Pontchartrain, Louisiana, with a height of around . The 2010 Baja California earthquake produced large seiches that quickly became an internet phenomenon.
Seiches up to at least 1.8 m (6 feet) were observed in Sognefjorden, Norway, during the 2011 Tōhoku earthquake in Japan.
Sea and bay seiches
Seiches have been observed in seas such as the Adriatic Sea and the Baltic Sea. This results in the flooding of Venice and Saint Petersburg, respectively, as both cities are constructed on former marshland. In St. Petersburg, seiche-induced flooding is common along the Neva River in the autumn. The seiche is driven by a low-pressure region in the North Atlantic moving onshore, giving rise to cyclonic lows on the Baltic Sea. The low pressure of the cyclone draws greater-than-normal quantities of water into the virtually landlocked Baltic. As the cyclone continues inland, long, low-frequency seiche waves with wavelengths up to several hundred kilometres are established in the Baltic. When the waves reach the narrow and shallow Neva Bay, they become much higher—ultimately flooding the Neva embankments. Similar phenomena are observed at Venice, resulting in the MOSE Project, a system of 79 mobile barriers designed to protect the three entrances to the Venetian Lagoon.
In Japan, seiches have been observed in Nagasaki Bay, most often in the spring. During a seiche event on 31 March 1979, a water-level displacement of was recorded at Nagasaki tide station; the maximum displacement in the whole bay is thought to have reached as much as . Seiches in Western Kyushu—including Nagasaki Bay—are often induced by a low in the atmospheric pressure passing South of Kyushu island. Seiches in Nagasaki Bay have a period of about 30 to 40 minutes. Locally, seiches have caused floods, destroyed port facilities and damaged the fishery: hence the local word for seiche, abiki, from ami hiki, meaning 'the dragging-away of a fishing net'.
On occasion, tsunamis can produce seiches as a result of local geographic peculiarities. For instance, the tsunami that hit Hawaii in 1946 had a fifteen-minute interval between wave fronts. The natural resonant period of Hilo Bay is about thirty minutes. That meant that every second wave was in phase with the bay, creating a seiche. As a result, Hilo suffered worse damage than any other place in Hawaii, with the combined tsunami and seiche reaching a height of along the Bayfront, killing 96 people in the city alone. Seiche waves may continue for several days after a tsunami.
Tide-generated internal solitary waves (solitons) can excite coastal seiches at the following locations: Magueyes Island in Puerto Rico,
Puerto Princesa in Palawan Island,
Trincomalee Bay in Sri Lanka,
and in the Bay of Fundy in eastern Canada, where seiches cause some of the highest recorded tidal fluctuations in the world.
A dynamical mechanism exists for the generation of coastal seiches by deep-sea internal waves. These waves can generate a sufficient current at the shelf break to excite coastal seiches.
In September 2023, an enormous landslide resulting from a melting glacier near Dickson Fjord in Greenland triggered a megatsunami about high. This was followed by a seiche with waves up to high oscillating within the fjord. This seiche lasted nine days, reflecting the avalanche's large size and the fjord's long, narrow shape. During that period, it generated unusual seismic reverberations detected around the world, puzzling seismologists for some time before they could identify their source.
Underwater (internal) waves
Seiches are also observed beneath the surface of constrained bodies of water, acting along the thermocline.
In analogy with the Merian formula, the expected period of the internal wave can be expressed as:
$$T = \frac{2L}{\sqrt{g'\,\dfrac{h_1 h_2}{h_1 + h_2}}}$$
with
$$g' = g\,\frac{\rho_2 - \rho_1}{\rho_2}$$
where T is the natural period, L is the length of the water body, h₁ and h₂ the average thicknesses of the two layers separated by stratification (e.g. epilimnion and hypolimnion), ρ₁ and ρ₂ the densities of these two same layers and g the acceleration of gravity.
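A short sketch evaluating this two-layer formula; the basin length, layer thicknesses and densities below are illustrative guesses for a stratified mid-size lake, not data from the article:

```python
import math

g = 9.81
L_basin = 10e3               # m
h1, h2 = 10.0, 30.0          # epilimnion / hypolimnion thicknesses, m
rho1, rho2 = 998.0, 1000.0   # layer densities, kg/m^3

g_prime = g * (rho2 - rho1) / rho2             # reduced gravity
c = math.sqrt(g_prime * h1 * h2 / (h1 + h2))   # internal wave speed, m/s
T = 2 * L_basin / c
print(f"internal seiche period ~ {T/3600:.1f} hours")
```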
As the thermocline moves up and down a sloping lake bed, it creates a 'swash zone', where temperatures can vary rapidly, potentially affecting fish habitat. As the thermocline rises up a sloping lake bed, it can also cause benthic turbulence by convective overturning, whereas the falling thermocline experiences greater stratification and low turbulence at the lake bed. Internal waves can also degenerate into non-linear internal waves on sloping lake-beds. When such non-linear waves break on the lake bed, they can be an important source of turbulence and have the potential for sediment resuspension.
Cave seiches
On September 19, 2022, a seiche reaching occurred at Devils Hole at Death Valley National Park in the U.S. after a 7.6-magnitude earthquake hit western Mexico, about away. Seiches were also observed in the cave after powerful earthquakes in 2012, 2018 and 2019.
Engineering for seiche protection
Engineers consider seiche phenomena in the design of flood protection works (e.g., Saint Petersburg Dam), reservoirs and dams (e.g., Grand Coulee Dam), potable water storage basins, harbours, and even spent nuclear fuel storage basins. Structures and beach-dune systems are particularly vulnerable to damage from high water levels. Wetlands may be severely affected even by small fluctuations in water levels, and therefore historical and predicted water level fluctuations are crucial data for any coastal design. Information on seiches, along with storm surges, and tidal fluctuations is essential.
The period of a seiche depends on the size and depth of the basin in which it occurs. If an incoming wave train has a period similar to the natural frequency of the harbour, each wave will amplify the seiche's intensity, resulting in rougher waters within the harbour compared to the surrounding sea, which can create problems for shipping. The levels of high water in Venice for example, are the result of a combination of storm surge, barometric surge and seiches.
See also
Vajont Dam, Disused gravity arch dam in Italy, overtopped by a seiche in 1963
Notes
Further reading
External links
General
What is a seiche?
Seiche. Encyclopædia Britannica. Retrieved January 24, 2004, from Encyclopædia Britannica Premium Service.
Seiche calculator
Bonanza for Lake Superior: Seiches Do More Than Move Water
Great Lakes Storms Photo Gallery Seiches, Storm Surges, and Edge Waves from NOAA
Shelf Response for an identical pair of incident KdV solitons
Relationship to aquatic "monsters"
The Unmuseum
"The Legend of the Lake Champlain Monster" in The Skeptical Inquirer
Geological page
Limnology
Flood
Water waves
Water | Seiche | [
"Physics",
"Chemistry",
"Environmental_science"
] | 2,896 | [
"Physical phenomena",
"Hydrology",
"Water waves",
"Flood",
"Waves",
"Water",
"Fluid dynamics"
] |
440,959 | https://en.wikipedia.org/wiki/Mpemba%20effect | The Mpemba effect is the name given to the observation that a liquid (typically water) that is initially hot can freeze faster than the same liquid which begins cold, under otherwise similar conditions. There is disagreement about its theoretical basis and the parameters required to produce the effect.
The Mpemba effect is named after Tanzanian Erasto Bartholomeo Mpemba, who described it in 1963 as a secondary school student. The initial discovery and observations of the effect originate in ancient times; Aristotle said that it was common knowledge.
Definition
The phenomenon, when taken to mean "hot water freezes faster than cold", is difficult to reproduce or confirm because it is ill-defined. Monwhea Jeng proposed a more precise wording: "There exists a set of initial parameters, and a pair of temperatures, such that given two bodies of water identical in these parameters, and differing only in initial uniform temperatures, the hot one will freeze sooner."
Even with Jeng's definition, it is not clear whether "freezing" refers to the point at which water forms a visible surface layer of ice, the point at which the entire volume of water becomes a solid block of ice, or when the water reaches 0 °C. Jeng's definition suggests simple ways in which the effect might be observed, such as if a warmer temperature melts the frost on a cooling surface, thereby increasing thermal conductivity between the cooling surface and the water container. Alternatively, the Mpemba effect may not be evident in situations and under circumstances that at first seem to qualify.
Observations
Historical context
Various effects of heat on the freezing of water were described by ancient scientists, including Aristotle: "The fact that the water has previously been warmed contributes to its freezing quickly: for so it cools sooner. Hence many people, when they want to cool water quickly, begin by putting it in the sun." Aristotle's explanation involved antiperistasis: "...the supposed increase in the intensity of a quality as a result of being surrounded by its contrary quality."
Francis Bacon noted that "slightly tepid water freezes more easily than that which is utterly cold." René Descartes wrote in his Discourse on the Method, relating the phenomenon to his vortex theory: "One can see by experience that water that has been kept on a fire for a long time freezes faster than other, the reason being that those of its particles that are least able to stop bending evaporate while the water is being heated."
Scottish scientist Joseph Black investigated a special case of the phenomenon by comparing previously boiled with unboiled water; he found that the previously boiled water froze more quickly. Evaporation was controlled for. He discussed the influence of stirring on the results of the experiment, noting that stirring the unboiled water led to it freezing at the same time as the previously boiled water, and also noted that stirring the very-cold unboiled water led to immediate freezing. Joseph Black then discussed Daniel Gabriel Fahrenheit's description of supercooling of water, arguing that the previously boiled water could not be as readily supercooled.
Mpemba's observation
The effect is named after Tanzanian scientist Erasto Mpemba. He described it in 1963 in Form 3 of Magamba Secondary School, Tanganyika; when freezing a hot ice cream mixture in a cookery class, he noticed that it froze before a cold mixture. He later became a student at Mkwawa Secondary (formerly High) School in Iringa. The headmaster invited Dr. Denis Osborne from the University College in Dar es Salaam to give a lecture on physics. After the lecture, Mpemba asked him, "If you take two similar containers with equal volumes of water, one at 35 °C and the other at 100 °C, and put them into a freezer, the one that started at 100 °C freezes first. Why?" Mpemba was at first ridiculed by both his classmates and his teacher. After initial consternation, however, Osborne experimented on the issue back at his workplace and confirmed Mpemba's finding. They published the results together in 1969, while Mpemba was studying at the College of African Wildlife Management.
Mpemba and Osborne described placing samples of water in beakers in the icebox of a domestic refrigerator on a sheet of polystyrene foam. They showed the time for freezing to start was longest with an initial temperature of and that it was much less at around . They ruled out loss of liquid volume by evaporation and the effect of dissolved air as significant factors. In their setup, most heat loss was found to be from the liquid surface.
Modern experimental work
David Auerbach has described an effect that he observed in samples in glass beakers placed into a liquid cooling bath. In all cases the water supercooled, reaching a temperature of typically before spontaneously freezing. Considerable random variation was observed in the time required for spontaneous freezing to start and in some cases this resulted in the water which started off hotter (partially) freezing first.
In 2016, Burridge and Linden defined the criterion as the time to reach 0 °C, carried out experiments, and reviewed published work to date. They noted that the large difference originally claimed had not been replicated, and that studies showing a small effect could be influenced by variations in the positioning of thermometers: "We conclude, somewhat sadly, that there is no evidence to support meaningful observations of the Mpemba effect."
In controlled experiments, the effect can be entirely explained by undercooling, and the time of freezing was determined by the container used. Experimental results confirming the Mpemba effect have been criticized for being flawed, for not accounting for dissolved solids and gases, and for other confounding factors.
Philip Ball, a reviewer for Physics World wrote: "Even if the Mpemba effect is real — if hot water can sometimes freeze more quickly than cold — it is not clear whether the explanation would be trivial or illuminating." Ball wrote that investigations of the phenomenon need to control a large number of initial parameters (including type and initial temperature of the water, dissolved gas and other impurities, and size, shape and material of the container, and temperature of the refrigerator) and need to settle on a particular method of establishing the time of freezing, all of which might affect the presence or absence of the Mpemba effect. The required vast multidimensional array of experiments might explain why the effect is not yet understood.
New Scientist recommends starting the experiment with containers at 35 °C and 5 °C, respectively, to maximize the effect.
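For context, here is a minimal lumped-body sketch using Newton's law of cooling (an assumption of this sketch, not a model from the article): with a single fixed rate constant, the initially hotter sample always takes longer to reach the 0 °C criterion, which is why the explanations below appeal to mechanisms beyond simple uniform cooling. The freezer temperature and rate constant are arbitrary illustrative values.

```python
import math

T_env = -18.0        # freezer temperature, C (illustrative)
k = 1.0 / 1800.0     # cooling rate constant, 1/s (illustrative)

def time_to_zero(T0):
    """Time for a lumped body at T0 to cool to 0 C under Newton's law of cooling."""
    return math.log((T0 - T_env) / (0.0 - T_env)) / k

for T0 in (5.0, 35.0, 100.0):
    print(f"start at {T0:5.1f} C -> reaches 0 C after {time_to_zero(T0)/60:5.1f} min")
```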
Suggested explanations
While the actual occurrence of the Mpemba effect is disputed, several theoretical explanations could explain its occurrence.
In 2017, two research groups independently and simultaneously found a theoretical Mpemba effect and also predicted a new "inverse" Mpemba effect in which heating a cooled, far-from-equilibrium system takes less time than another system that is initially closer to equilibrium. Zhiyue Lu and Oren Raz derived a general criterion based on Markovian statistical mechanics, predicting the appearance of the inverse Mpemba effect in the Ising model and diffusion dynamics. Antonio Lasanta and co-authors also predicted the direct and inverse Mpemba effects for a granular gas in a far-from-equilibrium initial state. Lasanta's paper also suggested that a very generic mechanism leading to both Mpemba effects is a particle velocity distribution function that deviates significantly from the Maxwell–Boltzmann distribution.
James Brownridge, a physicist at Binghamton University, has said that supercooling is involved. Several molecular dynamics simulations have also supported that changes in hydrogen bonding during supercooling take a major role in the process. In 2017, Yunwen Tao and co-authors suggested that the vast diversity and peculiar occurrence of different hydrogen bonds could contribute to the effect. They argued that the number of strong hydrogen bonds increases as temperature is elevated, and that the existence of the small strongly bonded clusters facilitates in turn the nucleation of hexagonal ice when warm water is rapidly cooled down. The authors used vibrational spectroscopy and modelling with density functional theory-optimized water clusters.
The following explanations have also been proposed:
Microbubble-induced heat transfer: Boiling induces microbubbles in the water that remain stably suspended as it cools and then act, by convection, to transfer heat more quickly.
Evaporation: The evaporation of the warmer water reduces the mass of the water to be frozen. Evaporation is endothermic, meaning that the water mass is cooled by vapor carrying away the heat, but this alone probably does not account for the entirety of the effect.
Convection, accelerating heat transfers: Reduction of water density below 4 °C tends to suppress the convection currents that cool the lower part of the liquid mass; the lower density of hot water would reduce this effect, perhaps sustaining the more rapid initial cooling. Higher convection in the warmer water may also spread ice crystals around faster.
Frost: Frost has insulating effects. The lower temperature water will tend to freeze from the top, reducing further heat loss by radiation and air convection, while the warmer water will tend to freeze from the bottom and sides because of water convection. This is disputed as there are experiments that account for this factor.
Solutes: Calcium carbonate, magnesium carbonate, and other mineral salts dissolved in water can precipitate out when water is boiled, leading to an increase in the freezing point compared to non-boiled water that contains all the dissolved minerals.
Thermal conductivity:
The container of hotter liquid may melt through a layer of frost that is acting as an insulator under the container (frost is an insulator, as mentioned above), allowing the container to come into direct contact with a much colder lower layer that the frost formed on (ice, refrigeration coils, etc.) The container now rests on a much colder surface (or one better at removing heat, such as refrigeration coils) than the originally colder water, and so cools far faster from this point on.
Conduction through the bottom becomes dominant when the bottom of a hot beaker wets a layer of melted ice and then freezes to it. In the context of the Mpemba effect, it is a mistake to think that this bottom ice insulates; compared with the poor cooling properties of air, it transfers heat well.
Dissolved gases: Cold water can contain more dissolved gases than hot water, which may somehow change the properties of the water with respect to convection currents, a proposition that has some experimental support but no theoretical explanation.
Hydrogen bonding: In warm water, hydrogen bonding is weaker.
Crystallization: Another explanation suggests that the relatively higher population of water hexamer states in warm water might be responsible for the faster crystallization.
Distribution function: Strong deviations from the Maxwell–Boltzmann distribution result in potential Mpemba effect showing up in gases.
Similar effects
Other phenomena in which large effects may be achieved faster than small effects are:
Latent heat: Turning ice to water takes roughly the same amount of energy as heating water from 0 °C to 80 °C (see the arithmetic sketch after this list).
Leidenfrost effect: Lower temperature boilers can sometimes vaporize water faster than higher temperature boilers.
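A one-line arithmetic check of the latent-heat comparison above, using handbook values (these numbers are not taken from the article):

```python
latent_fusion = 334.0    # J per gram to melt ice at 0 C
specific_heat = 4.18     # J per gram per kelvin for liquid water
print(latent_fusion / specific_heat)   # ~80 K, i.e. heating from 0 C to about 80 C
```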
Strong Mpemba effect
The possibility of a "strong Mpemba effect" where exponentially faster cooling can occur in a system at particular initial temperatures was predicted in 2017 by Klich, Raz, Hirschberg and Vucelja. In 2020 the strong Mpemba effect was demonstrated experimentally by Avinash Kumar and John Boechhoefer in a colloidal system.
Quantum Mpemba effect
In 2024, Goold and coworkers described their quantum-mechanical analysis of the abstract problem wherein "an initially hot system is quenched into a cold bath and reaches equilibrium faster than an initially cooler system." In addition to their theoretical work, which used non-equilibrium quantum dynamics, their paper includes computational studies of spin systems which exhibit the effect. They concluded that certain initial conditions of a quantum-dynamical system can lead to a simultaneous increase in the thermalization rate and the free energy.
See also
Density of water
Heat capacity
Water cluster
Newton's law of cooling
References
Notes
Bibliography
Auerbach attributes the Mpemba effect to differences in the behaviour of supercooled formerly hot water and formerly cold water.
An extensive study of freezing experiments.
External links
A possible explanation of the Mpemba Effect
An analysis of the Mpemba effect London South Bank University
– History and analysis of the Mpemba effect
An historical interview with Erasto B. Mpemba, Dr Denis G. Osborne and Ray deSouza
High school experiment description, with link to experimental results
in the University of California Usenet Physics FAQ
Mpemba Competition - Royal Society of Chemistry
Physical paradoxes
Thermodynamics
Phase transitions
Unsolved problems in physics
Water physics
Physical phenomena
Hysteresis
1969 in Tanzania
Science and technology in Tanzania | Mpemba effect | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,632 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Unsolved problems in physics",
"Materials science",
"Condensed matter physics",
"Thermodynamics",
"Statistical mechanics",
"Water physics",
"Hysteresis",
"Matter",
"Dynamical systems"
] |
441,197 | https://en.wikipedia.org/wiki/Blood%20smear | A blood smear, peripheral blood smear or blood film is a thin layer of blood smeared on a glass microscope slide and then stained in such a way as to allow the various blood cells to be examined microscopically. Blood smears are examined in the investigation of hematological (blood) disorders and are routinely employed to look for blood parasites, such as those of malaria and filariasis.
Preparation
A blood smear is made by placing a drop of blood on one end of a slide, and using a spreader slide to disperse the blood over the slide's length. The aim is to get a region, called a monolayer, where the cells are spaced far enough apart to be counted and differentiated. The monolayer is found in the "feathered edge" created by the spreader slide as it draws the blood forward.
The slide is left to air dry, after which the blood is fixed to the slide by immersing it briefly in methanol. The fixative is essential for good staining and presentation of cellular detail. After fixation, the slide is stained to distinguish the cells from each other.
Routine analysis of blood in medical laboratories is usually performed on blood films stained with Romanowsky stains such as Wright's stain, Giemsa stain, or Diff-Quik. Wright-Giemsa combination stain is also a popular choice. These stains allow for the detection of white blood cell, red blood cell, and platelet abnormalities. Hematopathologists often use other specialized stains to aid in the differential diagnosis of blood disorders.
After staining, the monolayer is viewed under a microscope using magnification up to 1000 times. Individual cells are examined and their morphology is characterized and recorded.
Clinical significance
Blood smear examination is usually performed in conjunction with a complete blood count in order to investigate abnormal results or confirm results that the automated analyzer has flagged as unreliable.
Microscopic examination of the shape, size, and coloration of red blood cells is useful for determining the cause of anemia. Disorders such as iron deficiency anemia, sickle cell anemia, megaloblastic anemia and microangiopathic hemolytic anemia result in characteristic abnormalities on the blood film.
The proportions of different types of white blood cells can be determined from the blood smear. This is known as a manual white blood cell differential. The white blood cell differential can reveal abnormalities in the proportions of white blood cell types, such as neutrophilia and eosinophilia, as well as the presence of abnormal cells such as the circulating blast cells seen in acute leukemia. Qualitative abnormalities of white blood cells, like toxic granulation, are also visible on the blood smear. Modern complete blood count analyzers can provide an automated white blood cell differential, but they have a limited ability to differentiate immature and abnormal cells, so manual examination of the blood smear is frequently indicated.
Blood smear examination is the preferred diagnostic method for certain parasitic infections, such as malaria and babesiosis. Rarely, bacteria may be visible on the blood smear in patients with severe sepsis.
Malaria
The preferred and most reliable diagnosis of malaria is microscopic examination of blood smears, because each of the four major parasite species has distinguishing characteristics. Two sorts of blood smear are traditionally used.
Thin smears are similar to usual blood films and allow species identification, because the parasite's appearance is best preserved in this preparation.
Thick smears allow the microscopist to screen a larger volume of blood and are about eleven times more sensitive than the thin film, so picking up low levels of infection is easier on the thick film, but the appearance of the parasite is much more distorted and therefore distinguishing between the different species can be much more difficult.
From the thick smear, an experienced microscopist can detect all parasites they encounter. Microscopic diagnosis can be difficult because the early trophozoites ("ring form") of all four species look identical and it is never possible to diagnose species on the basis of a single ring form; species identification is always based on several trophozoites.
The biggest pitfall in most laboratories in developed countries is leaving too great a delay between taking the blood sample and making the blood smears. As blood cools to room temperature, male gametocytes will divide and release microgametes: these are long sinuous filamentous structures that can be mistaken for organisms such as Borrelia. If the blood is kept at warmer temperatures, schizonts will rupture and merozoites invading erythrocytes will mistakenly give the appearance of the accolé form of P. falciparum. If P. vivax or P. ovale is left for several hours in EDTA, the buildup of acid in the sample will cause the parasitised erythrocytes to shrink and the parasite will roll up, simulating the appearance of P. malariae. This problem is made worse if anticoagulants such as heparin or citrate are used. The anticoagulant that causes the least problems is EDTA. Romanowsky stain or a variant stain is usually used. Some laboratories mistakenly use the same staining pH as they do for routine haematology blood films (pH 6.8): malaria blood films must be stained at pH 7.2, or Schüffner's dots and James' dots will not be seen.
Immunochromatographic capture procedures (rapid diagnostic tests such as the malaria antigen detection tests) are nonmicroscopic diagnostic options for the laboratory that may not have appropriate microscopy expertise available.
References
External links
Blood photomicrographs
Blood tests
Pathology | Blood smear | [
"Chemistry",
"Biology"
] | 1,169 | [
"Blood tests",
"Chemical pathology",
"Pathology"
] |
441,228 | https://en.wikipedia.org/wiki/Electrical%20synapse | An electrical synapse, or gap junction, is a mechanical and electrically conductive synapse, a functional junction between two neighboring neurons. The synapse is formed at a narrow gap between the pre- and postsynaptic neurons known as a gap junction. At gap junctions, such cells approach within about 3.8 nm of each other, a much shorter distance than the 20- to 40-nanometer distance that separates cells at a chemical synapse. In many animals, electrical synapse-based systems co-exist with chemical synapses.
Compared to chemical synapses, electrical synapses conduct nerve impulses faster and provide continuous-time bidirectional coupling via linked cytoplasm. As such, the notion of signal directionality across these synapses is not always defined. They are known to produce synchronization of network activity in the brain and can create chaotic network-level dynamics. In situations where a signal direction can be defined, they lack gain (unlike chemical synapses): the signal in the postsynaptic neuron is the same as or smaller than that of the originating neuron. The fundamental basis of electrical synapses is the connexons located in the gap junction between the two neurons. Electrical synapses are often found in neural systems that require the fastest possible response, such as defensive reflexes. An important characteristic of electrical synapses is that they are mostly bidirectional, allowing impulse transmission in either direction.
Structure
Each gap junction (sometimes called a nexus) contains numerous gap junction channels that cross the plasma membranes of both cells. With a lumen diameter of about 1.2 to 2.0 nm, the pore of a gap junction channel is wide enough to allow ions and even medium-size molecules like signaling molecules to flow from one cell to the next, thereby connecting the two cells' cytoplasm. Thus when the membrane potential of one cell changes, ions may move through from one cell to the next, carrying positive charge with them and depolarizing the postsynaptic cell.
Gap junction channels are composed of two hemichannels called connexons in vertebrates, one contributed by each cell at the synapse. Connexons are formed by six 7.5 nm long, four-pass membrane-spanning protein subunits called connexins, which may be identical or slightly different from one another.
An autapse is an electrical (or chemical) synapse formed when the axon of one neuron synapses with its own dendrites.
Effects
Electrical synapses are found in many regions of the animal and human body. The simplicity of electrical synapses results in synapses that are fast, but more importantly the bidirectional coupling can produce very complex behaviors at the network level.
Without the need for receptors to recognize chemical messengers, signal transmission at electrical synapses is more rapid than that which occurs across chemical synapses, the predominant kind of junctions between neurons. Chemical transmission exhibits synaptic delay—recordings from squid synapses and neuromuscular junctions of the frog reveal a delay of 0.5 to 4.0 milliseconds—whereas electrical transmission takes place with almost no delay. However, the difference in speed between chemical and electrical synapses is not as marked in mammals as it is in cold-blooded animals.
Because electrical synapses do not involve neurotransmitters, electrical neurotransmission is less modifiable than chemical neurotransmission.
The response always has the same sign as the source. For example, depolarization of the pre-synaptic membrane will always induce a depolarization in the post-synaptic membrane, and vice versa for hyperpolarization.
The response in the postsynaptic neuron is in general smaller in amplitude than the source. The amount of attenuation of the signal is due to the membrane resistance of the presynaptic and postsynaptic neurons.
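In the simplest steady-state picture, the attenuation described above can be summarised by a coupling coefficient: the ratio of post- to presynaptic voltage change, set by the gap-junction resistance and the input resistance of the postsynaptic cell. The sketch below uses that textbook resistive-divider approximation; the resistance values are illustrative assumptions, not measurements.

```python
# Steady-state coupling coefficient for a gap-junction-coupled pair,
# modelled as a simple resistive divider (an approximation that ignores
# membrane capacitance and frequency dependence).
def coupling_coefficient(r_gap_junction_mohm: float, r_input_post_mohm: float) -> float:
    """Return V_post / V_pre for a steady-state presynaptic voltage step."""
    return r_input_post_mohm / (r_gap_junction_mohm + r_input_post_mohm)

# Illustrative values (assumed, in megaohms): a 100 MΩ junctional resistance
# and a 50 MΩ postsynaptic input resistance give roughly one-third coupling.
k = coupling_coefficient(r_gap_junction_mohm=100.0, r_input_post_mohm=50.0)
print(f"coupling coefficient ≈ {k:.2f}")   # ≈ 0.33, i.e. the response is attenuated
```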
Long-term changes can be seen in electrical synapses. For example, changes in electrical synapses in the retina are seen during light and dark adaptations of the retina.
The relative speed of electrical synapses also allows for many neurons to fire synchronously. Because of the speed of transmission, electrical synapses are found in escape mechanisms and other processes that require quick responses, such as the response to danger of the sea hare Aplysia, which quickly releases large quantities of ink to obscure enemies' vision.
Normally, current carried by ions could travel in either direction through this type of synapse. However, sometimes the junctions are rectifying synapses, containing voltage-gated ion channels that open in response to depolarization of an axon's plasma membrane, and prevent current from traveling in one of the two directions. Some channels may also close in response to increased calcium (Ca2+) or hydrogen (H+) ion concentration, so as not to spread damage from one cell to another.
There is also evidence of synaptic plasticity where the electrical connection established can either be strengthened or weakened as a result of activity, or during changes in the intracellular concentration of magnesium.
Electrical synapses are present throughout the central nervous system and have been studied specifically in the neocortex, hippocampus, thalamic reticular nucleus, locus coeruleus, inferior olivary nucleus, mesencephalic nucleus of the trigeminal nerve, olfactory bulb, retina, and spinal cord of vertebrates. Other examples of functional gap junctions detected in vivo are in the striatum, cerebellum, and suprachiasmatic nucleus.
History
The model of a reticular network of directly interconnected cells was one of the early hypotheses for the organization of the nervous system at the beginning of the 20th century. This reticular hypothesis was considered to conflict directly with the now predominant neuron doctrine, a model in which isolated, individual neurons signal to each other chemically across synaptic gaps. These two models came into sharp contrast at the award ceremony for the 1906 Nobel Prize in Physiology or Medicine, in which the award went jointly to Camillo Golgi, a reticularist and widely recognized cell biologist, and Santiago Ramón y Cajal, the champion of the neuron doctrine and the father of modern neuroscience. Golgi delivered his Nobel lecture first, in part detailing evidence for a reticular model of the nervous system. Ramón y Cajal then took the podium and refuted Golgi's conclusions in his lecture. Modern understanding of the coexistence of chemical and electrical synapses, however, suggests that both models are physiologically significant; it could be said that the Nobel committee acted with great foresight in awarding the Prize jointly.
There was substantial debate on whether the transmission of information between neurons was chemical or electrical in the first decades of the twentieth century, but chemical synaptic transmission was seen as the only answer after Otto Loewi's demonstration of chemical communication between neurons and heart muscle. Thus, the discovery of electrical communication was surprising.
Electrical synapses were first demonstrated between escape-related giant neurons in crayfish in the late 1950s, and were later found in vertebrates.
See also
Junctional complex
Cardiac muscle
References
Further reading
Cell communication
Electrophysiology
Neural synapse | Electrical synapse | [
"Biology"
] | 1,534 | [
"Cell communication",
"Cellular processes"
] |
441,529 | https://en.wikipedia.org/wiki/Transducin | Transducin (Gt) is a protein naturally expressed in vertebrate retina rods and cones and it is very important in vertebrate phototransduction. It is a type of heterotrimeric G-protein with different α subunits in rod and cone photoreceptors.
Light leads to conformational changes in rhodopsin, which in turn lead to the activation of transducin. Transducin activates phosphodiesterase, which results in the breakdown of cyclic guanosine monophosphate (cGMP). The intensity of the flash response is directly proportional to the number of transducin molecules activated.
Function in phototransduction
Transducin is activated by metarhodopsin II, a conformational state of rhodopsin produced when its chromophore, retinal, absorbs a photon. Light isomerizes retinal from 11-cis to all-trans, and this isomerization drives the conformational change of the opsin into metarhodopsin II. When metarhodopsin II activates transducin, the guanosine diphosphate (GDP) bound to the α subunit (Tα) is exchanged for guanosine triphosphate (GTP) from the cytoplasm, and the α subunit dissociates from the βγ subunits (Tβγ). The activated transducin α subunit activates cGMP phosphodiesterase, which hydrolyzes cGMP, an intracellular second messenger that opens cGMP-gated cation channels, to 5'-GMP. The decrease in cGMP concentration leads to decreased opening of the cation channels and subsequently to hyperpolarization of the membrane potential.
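The cascade just described is essentially a chain of amplification stages: one photoisomerized rhodopsin activates many transducins, each activated phosphodiesterase hydrolyses many cGMP molecules, and the drop in cGMP closes cation channels. The sketch below is a toy bookkeeping model of that chain; every gain factor in it is an assumed, order-of-magnitude illustration rather than a measured value.

```python
# Toy amplification chain for the rod phototransduction cascade described above.
# Every numeric gain here is an assumed, order-of-magnitude illustration only.
def flash_response(photons_absorbed: int,
                   transducins_per_rhodopsin: int = 100,
                   cgmp_hydrolysed_per_pde: int = 1000) -> dict:
    activated_rhodopsin = photons_absorbed            # one R* per absorbed photon
    activated_transducin = activated_rhodopsin * transducins_per_rhodopsin
    activated_pde = activated_transducin              # roughly one PDE per Tα-GTP
    cgmp_hydrolysed = activated_pde * cgmp_hydrolysed_per_pde
    return {
        "R*": activated_rhodopsin,
        "Tα-GTP": activated_transducin,
        "PDE*": activated_pde,
        "cGMP hydrolysed": cgmp_hydrolysed,
    }

# Doubling the light doubles each stage, consistent with a flash response
# proportional to the number of transducin molecules activated.
print(flash_response(photons_absorbed=1))
print(flash_response(photons_absorbed=2))
```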
Transducin is deactivated when the α-subunit-bound GTP is hydrolyzed to GDP. This process is accelerated by a complex containing an RGS (Regulator of G-protein Signaling)-protein and the gamma-subunit of the effector, cyclic GMP phosphodiesterase.
Mechanism of activation
The Tα subunit of transducin contains three functional domains: one for rhodopsin/Tβγ interaction, one for GTP binding, and the last for activation of cGMP phosphodiesterase.
Different isoforms of Tα are found in rod and cone cells. However, the isoforms are functionally interchangeable in the phototransduction cascade, so they cannot by themselves account for differences in light sensitivity. Although the focus in phototransduction is on Tα, Tβγ is crucial for rhodopsin to bind to transducin. The rhodopsin/Tβγ binding domain comprises the amino and carboxyl termini of Tα: the amino terminus is the site of interaction for rhodopsin, while the carboxyl terminus is that for Tβγ binding. The amino terminus might be anchored to, or in close proximity to, the carboxyl terminus for activation of the transducin molecule by rhodopsin.
Interaction with photolyzed rhodopsin opens up the GTP-binding site to allow for rapid exchange of GDP for GTP. The binding site is in the closed conformation in the absence of photolyzed rhodopsin. Normally in the closed conformation, an α-helix located near the binding site is in a position which hinders the GTP/GDP exchange. A conformational change of the Tα by photolyzed rhodopsin causes the tilting of the helix, opening the GTP-binding site.
Once GTP has been exchanged for GDP, the GTP-Tα complex undergoes two major changes: dissociation from photolyzed rhodopsin and the Tβγ subunit and exposure of the phosphodiesterase (PDE) binding site for interaction with latent PDE. The conformational changes initiated in the transducin by binding of GTP are transmitted to the PDE binding site and cause it to be exposed for binding to PDE. The GTP-induced conformational changes could also disrupt the rhodopsin/Tβγ binding site and lead to dissociation from the GTP-Tα complex.
The Tβγ complex
An underlying assumption for G-proteins is that the α, β, and γ subunits are present in the same concentration. However, there is evidence that there are more Tβ and Tγ than Tα in rod outer segments (ROS). The excess Tβ and Tγ are thought to float freely in the ROS, since they cannot all be associated with Tα at any given time. One possible explanation for the excess Tβγ is increased availability for Tα to rebind. Since Tβγ is crucial for the binding of transducin, reacquisition of the heterotrimeric conformation could lead to more rapid binding to another GTP molecule and thus faster phototransduction.
Though Tβγ has been mentioned as crucial for Tα binding to rhodopsin, there is also evidence that Tβγ may play a more direct role in nucleotide exchange than previously thought. Rhodopsin was found to specifically cause a conformational switch in the carboxyl terminus of the Tγ subunit. This change ultimately regulates the allosteric nucleotide exchange on Tα. This domain could serve as a major area for interactions with rhodopsin and for rhodopsin to regulate nucleotide exchange on Tα. Activation of the G protein transducin by rhodopsin was thought to proceed by a lever mechanism: rhodopsin binding causes helix formation at the carboxyl terminus of Tγ and brings the Tγ and Tα carboxyl termini closer together, facilitating nucleotide exchange. Tα can accelerate the rate of activation of light-off induced protein kinase A owing to its binding to rhodopsin. In addition, transducin achieves full functional activation upon binding to activated rhodopsin.
Mutations in this domain abolish rhodopsin-transducin interaction. This conformational switch in the Tγ may be preserved in the G protein γ subunit family.
Interaction with cGMP phosphodiesterase and deactivation
Transducin activation ultimately results in stimulation of the biological effector molecule cGMP phosphodiesterase, an oligomer with α, β and two inhibitory γ subunits. The α and β subunits are the larger molecular weight subunits and make up the catalytic moiety of PDE.
In the phototransduction system, GTP-bound-Tα binds to the γ subunit of PDE. There are two proposed mechanisms for the activation of PDE. The first proposes that the GTP-bound-Tα releases the PDE γ subunit from the catalytic subunits in order to activate hydrolysis. The second more likely mechanism proposes that binding causes a positional shift of the γ subunit, allowing better accessibility of the catalytic subunit for cGMP hydrolysis. The GTPase activity of Tα hydrolyzes GTP to GDP and changes the conformation of the Tα subunit, increasing its affinity to bind to the α and β subunits on the PDE. The binding of Tα to these larger subunits results in another conformational change in PDE and inhibits the hydrolysis ability of the catalytic subunit. This binding site on the larger molecular subunit may be immediately adjacent to the Tα binding site on the γ subunit.
Although the traditional mechanism involves activation of PDE by GTP-bound Tα, GDP-bound Tα has also been demonstrated to have the ability to activate PDE. Experiments of PDE activation in the dark (without the presence of GTP) show small but reproducible PDE activation. This can be explained by the activation of PDE by free GDP-bound Tα. PDE γ subunit affinity for GDP-bound Tα, however, seems to be about 100-fold smaller than for GTP-bound Tα. The mechanism by which GDP-bound Tα activates PDE remains unknown however, it is speculated to be similar to the activation of PDE by GTP-bound Tα.
In order to prevent activation of PDE in the dark, the concentration of GDP-bound Tα should be kept to a minimum. This job seems to fall to the Tβγ to keep the GDP-bound Tα bound in the form of holotransducin.
For deactivation, hydrolysis of the bound GTP by Tα is necessary to deactivate Tα and return transducin to its basal form. However, simple hydrolysis of GTP may not necessarily be enough to deactivate PDE. Tβγ comes into play here again with an important role in PDE deactivation. The addition of Tβγ facilitates inhibition of the PDE catalytic moiety because it binds with the Tα-GTP complex. The reassociated form of transducin is not able to bind to PDE any longer. This frees PDE to recouple to photolyzed rhodopsin and return PDE to its initial state to await activation by another GTP-bound Tα.
Genes
Rods: GNAT1, GNB1, GNGT1; Cones: GNAT2, GNB3, GNGT2
References
External links
G proteins | Transducin | [
"Chemistry"
] | 2,027 | [
"G proteins",
"Signal transduction"
] |
441,790 | https://en.wikipedia.org/wiki/Shannon%20hydroelectric%20scheme | The Shannon hydroelectric Scheme was a major development by the Irish Free State in the 1920s to harness the power of the River Shannon. Its product, the Ardnacrusha power plant, is a hydroelectric power station located near Ardnacrusha within County Clare approximately from the Limerick border. It is Ireland's largest river hydroelectric scheme and is operated on a purpose built headrace connected to the River Shannon. The plant includes fish ladders so that returning fish, such as salmon, can climb the river safely past the power station.
Completed within 7 years of Irish independence in 1922 at a cost which was equivalent to one fifth of the Irish state's annual budget, the plant enabled an enormous surge in demand for electricity across the country and demonstrated the ability of the new government to develop during a difficult financial period. The plant was constructed by the German company Siemens-Schuckert, although much of the design was done by Irish engineers and Ireland provided most of the labour force. The scheme involved changes to the flow of the whole river, multiple dams and bridges and the construction of a national power grid.
The generating plant at Ardnacrusha is composed of three vertical-shaft Francis turbine generators (commissioned in 1929) and one vertical-shaft Kaplan turbine generator (commissioned in 1934) operating under an average head of 28.5 metres. The scheme was originally designed for six turbines, of which four were fitted. The 85 MW of generating plant in Ardnacrusha was adequate to meet the electricity demand of the entire country in the early years. Annual generation amounts to about 332,000 MWh. Ardnacrusha generates at 10.5 kilovolts (kV), but this is transformed to 38 kV for local distribution and to 110 kV for long-distance transmission.
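The rated output can be sanity-checked with the standard hydropower relation P = ρ·g·Q·H·η, using the 28.5 m average head quoted above and the roughly 400 m³/s maximum station flow mentioned later in the article; the overall efficiency in the sketch below is an assumed figure chosen only to make the arithmetic concrete, not a published value.

```python
# Rough sanity check of Ardnacrusha's rated output with P = rho * g * Q * H * eta.
# The efficiency figure is an assumption for illustration, not a published value.
rho = 1000.0      # kg/m^3, density of water
g = 9.81          # m/s^2
Q = 400.0         # m^3/s, approximate maximum flow through the station
H = 28.5          # m, average head quoted in the article
eta = 0.76        # assumed overall turbine/generator efficiency

power_watts = rho * g * Q * H * eta
print(f"approximate output: {power_watts / 1e6:.0f} MW")   # ~85 MW
```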
Background
The first plan to harness the Shannon's power between Lough Derg and Limerick was published in 1844 by Sir Robert Kane. Inspired by Nikola Tesla's 1896 project at Niagara Falls, "Frazer's Scheme" proposed a head-race canal ending at Doonass, and was sanctioned by the 1901 "Shannon Water and Electric Power Act". This envisaged a seasonal scheme with a back-up steam turbine to generate electricity in the summer, but the overall cost was considered too great and the Act was shelved. In 1902 SF Dick proposed a sharper fall at Doonass. The British Board of Trade appointed a committee in 1918 which approved proposals by Theodore Stevens and published a report in 1922. This envisaged altering upper lake levels to create extra storage of 10,000 million cubic feet, at a cost of £2.6m.
At the end of 1923, the engineer Thomas McLaughlin approached the new Irish Free State's Minister for Industry and Commerce Patrick McGilligan with a proposal for a much more ambitious project. McLaughlin had started working for Siemens-Schuckert, a large German engineering firm, in late 1922, and his scheme would exploit the full height difference between Lough Allen and the sea. He drew on the analysis of 25 years of flow at the weir at Killaloe published by John Chaloner Smith, an engineer with the Commissioners of Public Works. McGilligan was enthusiastic although the President of the Executive Council, W. T. Cosgrave, was more cautious. The scheme was published by Siemens in September 1924 and the government appointed a team of experts from Norway and Switzerland to check its viability. It caused considerable political controversy as the cost of £5.2m was a large part of the new state's entire budget in 1925 of £25m and interests in Dublin preferred a more localised solution. But the experts supported the centralised solution which would require a distribution grid all over the country but recommended a two-stage implementation of the power generators. The government accepted this and by April 1925 had introduced the Shannon Electricity Act, 1925 in the Dáil.
Construction
In 1925 Siemens started the works with Dr. McLaughlin as managing director and Professor Frank Sharman Rishworth, who took a leave of absence from University College Galway, as chief civil engineer. A completion time limit of three and a half years, with penalty clauses for failure of adherence to this limit, was written into the contract. Around 150 of the skilled workers and engineers on the power station were Germans. A camp was set up for the workers that included living quarters for 750 men and a dining room that seated 600. Initially employment for 700 was provided, whilst at its peak there were 5,200 employed during the construction phase, with this dropping back to 2,500 near completion.
Siemens had to import a vast array of machinery from Bremen and Hamburg and built a 96 km narrow gauge railway to transport workers and supplies around the site, which included 76 steam locomotives. The government made good the local roads which were in an appalling state. The headrace involved building embankments up to 25m high over a distance of 10 km and many unforeseen geological problems were encountered. of soil had to be moved and of rock. Four major bridges were built and nine rivers diverted as well as numerous streams. Three large Parsons turbines were installed at the base of the dam which could generate 35MW, more than the entire public supply of the time. In addition a supply network of 110kV power lines was installed to Dublin, Cork and other centres.
The construction project was not without controversy. Unskilled labourers were only paid agricultural wages producing strikes, national and governmental debate over wages, conditions, and spending over-runs. Despite this, there was a final cost overrun for Siemens of £150,000.
The site attracted a huge number of sightseers, transported by excursion trains from all parts of Ireland. By 1929 it reckoned that 250,000 spectators had been guided over the works.
Electrifying Ireland
In 1927, the Electricity Supply Board (ESB) was established and took control of the scheme and electricity supply and generation generally. McLaughlin became the managing director.
The Shannon Scheme was officially opened at Parteen Weir on 22 July 1929. One of the largest engineering projects of its day, it served as a model for large-scale electrification projects worldwide. Operated by the ESB, it had an immediate impact on the social, economic and industrial development of Ireland. By 1935, it was producing 80 per cent of Ireland's electricity. It continues to generate power in the 21st century, although its contribution had fallen to about 2% of national output as of 2017. At the time of its completion, it was the largest hydroelectric station in the world, though it was soon superseded by the Hoover Dam, which commenced construction in 1930.
The London Financial Times was highly impressed with the result, commenting:
They have thrown on their shoulders the not easy task of breaking what is in reality an enormous inferiority complex and the Shannon Scheme is one - and probably the most vital - of their methods of doing it.
Within three years the demand for electricity in Ireland had expanded so much that stage 2 was initiated. Instead of the planned three extra penstocks, only one was added; it was fitted with a new 30 MW Kaplan turbine with seven blades, which achieves high efficiency on the relatively small head and increased the capacity of the station to 75 MW by 1933. In 1937, the Poulaphouca Reservoir hydroelectric plant on the River Liffey in County Wicklow was constructed, adding another 35 MW.
In 2002, on the 75th anniversary of the plant, its historic status was recognised by the Institute of Electrical and Electronics Engineers, in partnership with the American Society of Civil Engineers, who marked the facility as an Engineering Milestone of the 20th century.
Environmental consequences
The opening of the scheme had, and continues to have, a significant environmental effect on the part of the Shannon bypassed by the head-race canal, from Parteen Villa north of O'Briens Bridge to about a mile north of Limerick city. This length of river, especially that running past Castleconnell and the Falls of Doonass, was in the nineteenth and early twentieth centuries world-famous for fishing, particularly salmon fishing. The diverting of water to the power station had a disastrous effect on this, for two main reasons: Initially, there was no fish pass at Ardnacrusha to allow the salmon to migrate further up the river; this was later rectified. Secondly, the reduction in water flow down the natural channel encouraged more fish to either migrate towards the head-race canal, or to the Mulkear river instead. The problem continues to this day, and the salmon fishing is no longer comparable with the period up to the 1920s, with stocks reduced by about 90%. The conservation status of other species of native fish, such as the critically endangered eel, has also been harmed by the low water levels.
Effects on the bypassed river channel
Reduction in water flow
Once opened, the great majority of the Shannon's water was diverted via the head-race canal to the power station. The ESB are required by law to allow 10 cubic meters per second (10 m3/s) to flow down the natural channel. This is roughly what the natural flow would be during dry summer periods prior to the weir being built. All surplus water can be diverted for power generation. The maximum capacity of Ardnacrusha is approximately 400 m3/s, 40 times that which is required to flow down the natural channel (although the power station does not necessarily run at this capacity at all times). For the first few years after the opening of the scheme, water was diverted to the power station only as necessary for the electricity demand at the time, and thus the impact on the river was not initially severe. However, as demand increased, more and more water was diverted, until eventually a situation was reached where, at all times, all available water was diverted for power generation, and the natural channel was permanently reduced to the minimum water flow allowed (except during extreme conditions).

In exceptionally wet periods, the flow of water out of Lough Derg is greater than 400 m3/s, and it is then necessary for the surplus to be released down the natural channel through Castleconnell. During these brief periods, the Falls of Doonass are temporarily restored to their former glory. How often this occurs depends on seasonal weather patterns: some years there is no increase above the minimum flow at all. This has led to a substantially dried-up riverbed. The most obvious result of the river south of Parteen Villa always being kept at summer levels is the silting of many of the old salmon pools, and the growth of trees and bushes in many parts of the former riverbed, thus significantly altering both the appearance and ecosystem of the river.
When built, Ardnacrusha had the capacity to supply power for the entire country. Currently, it accounts for around 2-3% of the ESB's overall power output. Given the small overall amount of power produced per cubic meter, there is a substantial case for increasing water flow to the natural channel, now that Ardnacrusha is producing so small a proportion of ESB's power. For example, increasing the flow of the river to 50 m3/s would reduce Ardnacrusha's capacity by 1/10 (the flow available to the station reduced by 40 m3/s), or about 8 megawatts (less than 0.3% of ESB's national capacity), whilst increasing the water flow to the natural channel 5-fold. This would have a major beneficial effect on the condition of the river south of O'Briens Bridge.
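The arithmetic in the previous paragraph can be reproduced directly; the figures in the sketch below simply restate the article's own numbers (400 m³/s station capacity, 10 m³/s statutory flow, roughly 85 MW rated output) rather than adding new data.

```python
# Reproducing the trade-off arithmetic above: diverting an extra 40 m^3/s to the
# natural channel costs roughly a tenth of Ardnacrusha's rated output.
station_flow = 400.0        # m^3/s, approximate maximum through the power station
statutory_flow = 10.0       # m^3/s, minimum required down the natural channel
proposed_flow = 50.0        # m^3/s, the flow discussed in the text
rated_output_mw = 85.0      # MW

extra_flow = proposed_flow - statutory_flow                 # 40 m^3/s
lost_fraction = extra_flow / station_flow                   # 0.10
lost_output_mw = lost_fraction * rated_output_mw            # ~8.5 MW
flow_increase_factor = proposed_flow / statutory_flow       # 5-fold

print(f"output forgone: ~{lost_output_mw:.1f} MW ({lost_fraction:.0%} of rated output)")
print(f"natural-channel flow increased {flow_increase_factor:.0f}-fold")
```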
Navigation
The navigable section of water from the southern end of the Killaloe Canal to World's End, Castleconnell, was linked to Limerick via the lateral Plassey-Errina canal which had six locks. This became redundant with the construction of the new canal to Ardnacrusha, was dewatered and subsequently became derelict. Recently, several sections have been cleared, and it is now possible to walk from O'Briens Bridge to Errina lock along the old tow path.
As there is no lock at Parteen weir linking the natural channel to Lough Derg, it is no longer possible for any watercraft to enter, by water, this part of the Shannon.
Navigation of the Shannon is now by the head-race canal which is wide. When all turbines are operating, the speed of the water is which can be challenging in both directions. This leads to the double lock at Ardnacrusha, which will take boats long and wide. The two locks have a combined drop of up to .
Shannon Eel Management Programme
A trap and transport scheme is in force on the Shannon as part of an eel management programme following the discovery of a reduced eel population. This scheme ensures safe passage for young eels past Ardnacrusha.
Effects upriver from the scheme
Flooding
The maximum capacity of Ardnacrusha is approximately 400 m3/s. As this is much greater than is available during summer months, during the early years of operation water was stored in the major lakes on the Shannon, Lough Derg, Lough Ree and Lough Allen because Ardnacrusha provided a significant contribution to meeting the nation's electricity requirements during that period. By holding these lakes at a higher than natural level, by means of weirs, water accumulated during the wet winter months could be released during much drier periods to maintain supply to the power station. Weirs already existed at Killaloe and Athlone to control lake levels in Lough Derg and Lough Ree respectively. Upon completion of Ardnacrusha, the weir at Athlone was modified and brought under ESB control, and a new weir built at the mouth of Lough Allen to further regulate water levels (the weir at Killaloe was removed, as Lough Derg's water level is now controlled by Parteen weir itself). In more recent decades Ardnacrusha's significance for electricity production has decreased, and water is no longer stored on the Shannon lakes for electricity generation.
Navigation
The scheme simplified navigation between Killaloe and Limerick, as watercraft need to traverse just one double lock at Ardnacrusha. The majority of the Killaloe canal was submerged under the new lake (the 'flooded section') south of Killaloe, allowing direct access to the head-race canal. The ESB is responsible for maintaining water levels for navigation throughout the Shannon to between predetermined limits, but they have the right to prioritise levels for electricity generation, should water shortages arise.
See also
River Shannon to Dublin pipeline
References
External links
Electricity Supply Board - background on establishment
Clare Library - Ardnacrusha
Buildings and structures in County Clare
Hydroelectric power stations in the Republic of Ireland
Historic Civil Engineering Landmarks
River Shannon | Shannon hydroelectric scheme | [
"Engineering"
] | 2,993 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
441,935 | https://en.wikipedia.org/wiki/Visual%20cryptography | Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image.
One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret sharing scheme, where a binary image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme, including k-out-of-n visual cryptography and schemes that use opaque sheets illuminated by multiple sets of identical illumination patterns and recorded with one single-pixel detector.
Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%.
Some antecedents of visual cryptography are in patents from the 1960s. Other antecedents are in the work on perception and secure communication.
Visual cryptography can be used to protect biometric templates in which decryption does not require any complex computations.
Example
In this example, the binary image has been split into two component images. Each component image has a pair of pixels for every pixel in the original image. These pixel pairs are shaded black or white according to the following rule: if the original image pixel was black, the pixel pairs in the component images must be complementary; randomly shade one ■□, and the other □■. When these complementary pairs are overlapped, they will appear dark gray. On the other hand, if the original image pixel was white, the pixel pairs in the component images must match: both ■□ or both □■. When these matching pairs are overlapped, they will appear light gray.
So, when the two component images are superimposed, the original image appears. However, without the other component, a component image reveals no information about the original image; it is indistinguishable from a random pattern of ■□ / □■ pairs. Moreover, if you have one component image, you can use the shading rules above to produce a counterfeit component image that combines with it to produce any image at all.
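A minimal sketch of the pixel-pair construction just described: each secret pixel becomes a matching pair (for white) or a complementary pair (for black) across the two component images, and stacking transparencies behaves like a per-subpixel OR. This is an illustration of the idea, not a reference implementation, and the tiny example image is an invented one.

```python
import random

# Encode a binary secret image (1 = black, 0 = white) into two component images,
# following the pixel-pair rule described above: matching pairs for white pixels,
# complementary pairs for black pixels.
def split_into_shares(secret):
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            pattern = random.choice([(1, 0), (0, 1)])      # ■□ or □■
            r1.extend(pattern)
            if pixel == 0:                                  # white: identical pair
                r2.extend(pattern)
            else:                                           # black: complementary pair
                r2.extend((1 - pattern[0], 1 - pattern[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    # Overlaying transparencies: a subpixel is dark if it is dark in either share.
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]

secret = [[1, 0],
          [0, 1]]
s1, s2 = split_into_shares(secret)
overlay = stack(s1, s2)
# Black secret pixels become fully dark pairs (1, 1); white ones stay half dark.
print(overlay)
```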
(2, n) visual cryptography sharing case
Sharing a secret with an arbitrary number of people, n, such that at least 2 of them are required to decode the secret is one form of the visual secret sharing scheme presented by Moni Naor and Adi Shamir in 1994. In this scheme we have a secret image which is encoded into n shares printed on transparencies. The shares appear random and contain no decipherable information about the underlying secret image, however if any 2 of the shares are stacked on top of one another the secret image becomes decipherable by the human eye.
Every pixel from the secret image is encoded into multiple subpixels in each share image using a matrix to determine the color of the pixels.
In the (2, n) case, a white pixel in the secret image is encoded using a matrix from the following set, where each row gives the subpixel pattern for one of the components:
{all permutations of the columns of the n × n matrix in which every row is [1 0 0 … 0]}
While a black pixel in the secret image is encoded using a matrix from the following set:
{all permutations of the columns of the n × n identity matrix}
For instance, in the (2, 2) sharing case (the secret is split into 2 shares and both shares are required to decode the secret), we use complementary matrices to share a black pixel and identical matrices to share a white pixel. When the shares are stacked, all the subpixels associated with a black pixel are black, while 50% of the subpixels associated with a white pixel remain white.
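The matrix description above can be made concrete with a small sketch: for a white pixel every share receives a row of a randomly column-permuted matrix whose rows are identical, while for a black pixel the shares receive distinct rows of a randomly column-permuted identity matrix, so any two stacked shares show more dark subpixels exactly where the secret pixel is black. This follows the construction stated above; it is an illustration, not the authors' original code.

```python
import random

# (2, n) visual secret sharing for a single pixel: each share gets one row of a
# randomly column-permuted basis matrix (all rows identical for white, the
# identity matrix for black), as described above.
def share_pixel(pixel_is_black, n):
    if pixel_is_black:
        basis = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    else:
        basis = [[1] + [0] * (n - 1) for _ in range(n)]                     # identical rows
    permutation = list(range(n))
    random.shuffle(permutation)                      # random column permutation
    return [[row[j] for j in permutation] for row in basis]

def stack(row_a, row_b):
    return [a | b for a, b in zip(row_a, row_b)]

n = 4
for colour, is_black in (("white", False), ("black", True)):
    shares = share_pixel(is_black, n)
    stacked = stack(shares[0], shares[1])
    # A black pixel shows two dark subpixels when any two shares are stacked,
    # a white pixel only one, which is what makes the secret visible.
    print(colour, "->", stacked, "dark subpixels:", sum(stacked))
```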
Cheating the (2, n) visual secret sharing scheme
Horng et al. proposed a method that allows colluding parties to cheat an honest party in visual cryptography. They take advantage of knowing the underlying distribution of the pixels in the shares to create new shares that combine with existing shares to form a new secret message of the cheaters' choosing.
We know that 2 shares are enough to decode the secret image using the human visual system. But examining two shares also gives some information about the 3rd share. For instance, colluding participants may examine their shares to determine when they both have black pixels and use that information to determine that another participant will also have a black pixel in that location. Knowing where black pixels exist in another party's share allows them to create a new share that will combine with the predicted share to form a new secret message. In this way a set of colluding parties that have enough shares to access the secret code can cheat other honest parties.
Visual steganography
In a related scheme, blocks of 2×2 subpixels can encode a meaningful binary image in each component image. Each white pixel of each component image is represented by two black subpixels, while each black pixel is represented by three black subpixels.
When overlaid, each white pixel of the secret image is represented by three black subpixels, while each black pixel is represented by all four subpixels black. Each corresponding pixel in the component images is randomly rotated to avoid orientation leaking information about the secret image.
In popular culture
In "Do Not Forsake Me Oh My Darling", a 1967 episode of TV series The Prisoner, the protagonist uses a visual cryptography overlay of multiple transparencies to reveal a secret message – the location of a scientist friend who had gone into hiding.
See also
Grille (cryptography)
Steganography
References
External links
Java implementation and illustrations of Visual Cryptography
Python implementation of Visual Cryptography
Visual Cryptography on Cipher Machines & Cryptology
Doug Stinson's visual cryptography page
Liu, Feng; Yan, Wei Qi (2014) Visual Cryptography for Image Processing and Security: Theory, Methods, and Applications, Springer
Cryptography | Visual cryptography | [
"Mathematics",
"Engineering"
] | 1,315 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
442,079 | https://en.wikipedia.org/wiki/Taguchi%20methods | Taguchi methods () are statistical methods, sometimes called robust design methods, developed by Genichi Taguchi to improve the quality of manufactured goods, and more recently also applied to engineering, biotechnology, marketing and advertising. Professional statisticians have welcomed the goals and improvements brought about by Taguchi methods, particularly by Taguchi's development of designs for studying variation, but have criticized the inefficiency of some of Taguchi's proposals.
Taguchi's work includes three principal contributions to statistics:
A specific loss function
The philosophy of off-line quality control; and
Innovations in the design of experiments.
Loss functions
Loss functions in the statistical theory
Traditionally, statistical methods have relied on mean-unbiased estimators of treatment effects: Under the conditions of the Gauss–Markov theorem, least squares estimators have minimum variance among all mean-unbiased linear estimators. The emphasis on comparisons of means also draws (limiting) comfort from the law of large numbers, according to which the sample means converge to the true mean. Fisher's textbook on the design of experiments emphasized comparisons of treatment means.
However, loss functions were avoided by Ronald A. Fisher.
Taguchi's use of loss functions
Taguchi knew statistical theory mainly from the followers of Ronald A. Fisher, who also avoided loss functions.
Reacting to Fisher's methods in the design of experiments, Taguchi interpreted Fisher's methods as being adapted for seeking to improve the mean outcome of a process. Indeed, Fisher's work had been largely motivated by programmes to compare agricultural yields under different treatments and blocks, and such experiments were done as part of a long-term programme to improve harvests.
However, Taguchi realised that in much industrial production, there is a need to produce an outcome on target, for example, to machine a hole to a specified diameter, or to manufacture a cell to produce a given voltage. He also realised, as had Walter A. Shewhart and others before him, that excessive variation lay at the root of poor manufactured quality and that reacting to individual items inside and outside specification was counterproductive.
He therefore argued that quality engineering should start with an understanding of quality costs in various situations. In much conventional industrial engineering, the quality costs are simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to consider cost to society. Though the short-term costs may simply be those of non-conformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses are externalities and are usually ignored by manufacturers, which are more interested in their private costs than social costs. Such externalities prevent markets from operating efficiently, according to analyses of public economics. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to the tragedy of the commons), and that by working to minimise them, manufacturers would enhance brand reputation, win markets and generate profits.
Such losses are, of course, very small when an item is close to nominal. Donald J. Wheeler characterised the region within specification limits as the region where we deny that losses exist. As we diverge from nominal, losses grow until the point where losses are too great to deny and the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them statistically. Taguchi specified three situations:
Larger the better (for example, agricultural yield);
Smaller the better (for example, carbon dioxide emissions); and
On-target, minimum-variation (for example, a mating part in an assembly).
The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function for several reasons:
It is the first "symmetric" term in the Taylor series expansion of real analytic loss-functions.
Total loss is measured by the variance; for uncorrelated random variables, variance is additive, so the total loss is an additive measurement of cost.
The squared-error loss function is widely used in statistics, following Gauss's use of the squared-error loss function in justifying the method of least squares.
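For the on-target case, the quadratic loss is conventionally written L(y) = k·(y − T)², where T is the target (nominal) value and k is a cost constant, often fixed by the loss incurred at the specification limit. The sketch below illustrates that formula; the target, specification limit and cost figures are assumed example values.

```python
# Taguchi's quadratic (on-target) loss, L(y) = k * (y - target)^2.
# All numbers below are assumed example values used only to illustrate the formula.
target = 10.0            # nominal dimension, e.g. a hole diameter in mm
spec_limit = 10.5        # specification limit in mm
loss_at_limit = 4.0      # cost (e.g. euros) of a part exactly at the limit

k = loss_at_limit / (spec_limit - target) ** 2   # calibrate the cost constant

def taguchi_loss(y: float) -> float:
    return k * (y - target) ** 2

for y in (10.0, 10.1, 10.25, 10.5):
    # Loss grows with the square of the deviation, even inside the specification.
    print(f"y = {y:5.2f} mm -> loss = {taguchi_loss(y):.2f}")
```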
Reception of Taguchi's ideas by statisticians
Though many of Taguchi's concerns and conclusions are welcomed by statisticians and economists, some ideas have been especially criticized. For example, Taguchi's recommendation that industrial experiments maximise some signal-to-noise ratio (representing the magnitude of the mean of a process compared to its variation) has been criticized.
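The signal-to-noise ratios referred to in this criticism are conventionally computed in one of three forms, matching the three loss situations listed earlier. The sketch below shows those commonly quoted formulas; the observations are invented example data, not results from any real experiment.

```python
import math

# The three signal-to-noise ratios commonly associated with Taguchi methods.
# The observations below are invented example data.
def sn_larger_the_better(values):
    return -10 * math.log10(sum(1 / v**2 for v in values) / len(values))

def sn_smaller_the_better(values):
    return -10 * math.log10(sum(v**2 for v in values) / len(values))

def sn_nominal_the_best(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 10 * math.log10(mean**2 / var)

yields = [52.0, 55.0, 50.0]          # larger the better, e.g. agricultural yield
emissions = [1.2, 0.9, 1.1]          # smaller the better
diameters = [10.02, 9.98, 10.01]     # on-target, minimum variation

print(f"larger-the-better  SN = {sn_larger_the_better(yields):.1f} dB")
print(f"smaller-the-better SN = {sn_smaller_the_better(emissions):.1f} dB")
print(f"nominal-the-best   SN = {sn_nominal_the_best(diameters):.1f} dB")
```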
Off-line quality control
Taguchi's rule for manufacturing
Taguchi realized that the best opportunity to eliminate variation of the final product quality is during the design of a product and its manufacturing process. Consequently, he developed a strategy for quality engineering that can be used in both contexts. The process has three stages:
System design
Parameter (measure) design
Tolerance design
System design
This is design at the conceptual level, involving creativity and innovation.
Parameter design
Once the concept is established, the nominal values of the various dimensions and design parameters need to be set, the detail design phase of conventional engineering. Taguchi's radical insight was that the exact choice of values required is under-specified by the performance requirements of the system. In many circumstances, this allows the parameters to be chosen so as to minimize the effects on performance arising from variation in manufacture, environment and cumulative damage. This is sometimes called robustification.
Robust parameter designs consider controllable and uncontrollable noise variables; they seek to exploit relationships and optimize settings that minimize the effects of the noise variables.
Tolerance design
With a successfully completed parameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.
Design of experiments
Taguchi developed his experimental theories independently. Taguchi read works following R. A. Fisher only in 1954.
Outer arrays
Taguchi's designs aimed to allow greater understanding of variation than did many of the traditional designs from the analysis of variance (following Fisher). Taguchi contended that conventional sampling is inadequate here as there is no way of obtaining a random sample of future conditions. In Fisher's design of experiments and analysis of variance, experiments aim to reduce the influence of nuisance factors to allow comparisons of the mean treatment-effects. Variation becomes even more central in Taguchi's thinking.
Taguchi proposed extending each experiment with an "outer array" (possibly an orthogonal array); the "outer array" should simulate the random environment in which the product would function. This is an example of judgmental sampling. Many quality specialists have been using "outer arrays".
Later innovations in outer arrays resulted in "compounded noise." This involves combining a few noise factors to create two levels in the outer array: First, noise factors that drive output lower, and second, noise factors that drive output higher. "Compounded noise" simulates the extremes of noise variation but uses fewer experimental runs than would previous Taguchi designs.
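Structurally, an outer array crosses every inner-array (control factor) run with the same set of noise conditions, so each control setting is evaluated across the simulated environment. The short sketch below builds such a crossed design; the factor names and levels are assumptions chosen only for illustration.

```python
from itertools import product

# Crossing an inner array of control-factor runs with an outer array of noise
# conditions, as in Taguchi's parameter-design experiments.
# Factor names and levels here are illustrative assumptions.
inner_array = [                      # control factors: (temperature, pressure)
    ("low temp", "low pressure"),
    ("low temp", "high pressure"),
    ("high temp", "low pressure"),
    ("high temp", "high pressure"),
]
outer_array = [                      # noise factors: (ambient humidity, raw-material batch)
    ("dry", "batch A"),
    ("humid", "batch B"),
]

# Each control run is tested under every noise condition; responses from the
# replicates along the outer array are then summarised (e.g. as an SN ratio).
crossed_design = list(product(inner_array, outer_array))
for control, noise in crossed_design:
    print(control, "x", noise)
print(f"{len(inner_array)} control runs x {len(outer_array)} noise conditions "
      f"= {len(crossed_design)} experimental runs")
```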
Management of interactions
Interactions, as treated by Taguchi
Many of the orthogonal arrays that Taguchi has advocated are saturated arrays, allowing no scope for estimation of interactions. This is a continuing topic of controversy. However, this is only true for "control factors" or factors in the "inner array". By combining an inner array of control factors with an outer array of "noise factors", Taguchi's approach provides "full information" on control-by-noise interactions, it is claimed. Taguchi argues that such interactions have the greatest importance in achieving a design that is robust to noise factor variation. The Taguchi approach provides more complete interaction information than typical fractional factorial designs, its adherents claim.
Followers of Taguchi argue that the designs offer rapid results and that interactions can be eliminated by proper choice of quality characteristics. That notwithstanding, a "confirmation experiment" offers protection against any residual interactions. If the quality characteristic represents the energy transformation of the system, then the "likelihood" of control factor-by-control factor interactions is greatly reduced, since "energy" is "additive".
Inefficiencies of Taguchi's designs
Interactions are part of the real world. In Taguchi's arrays, interactions are confounded and difficult to resolve.
Statisticians in response surface methodology (RSM) advocate the "sequential assembly" of designs: In the RSM approach, a screening design is followed by a "follow-up design" that resolves only the confounded interactions judged worth resolution. A second follow-up design may be added (time and resources allowing) to explore possible high-order univariate effects of the remaining variables, as high-order univariate effects are less likely in variables already eliminated for having no linear effect. With the economy of screening designs and the flexibility of follow-up designs, sequential designs have great statistical efficiency. The sequential designs of response surface methodology require far fewer experimental runs than would a sequence of Taguchi's designs.
Assessment
Genichi Taguchi has made valuable contributions to statistics and engineering. His emphasis on loss to society, techniques for investigating variation in experiments, and his overall strategy of system, parameter and tolerance design have been influential in improving manufactured quality worldwide.
See also
References
Bibliography
Box, G. E. P. and Draper, Norman. 2007. Response Surfaces, Mixtures, and Ridge Analyses, Second Edition [of Empirical Model-Building and Response Surfaces, 1987], Wiley.
R. H. Hardin and N. J. A. Sloane, "A New Approach to the Construction of Optimal Designs", Journal of Statistical Planning and Inference, vol. 37, 1993, pp. 339-369
R. H. Hardin and N. J. A. Sloane, "Computer-Generated Minimal (and Larger) Response Surface Designs: (I) The Sphere"
R. H. Hardin and N. J. A. Sloane, "Computer-Generated Minimal (and Larger) Response Surface Designs: (II) The Cube"
Moen, R D; Nolan, T W & Provost, L P (1991) Improving Quality Through Planned Experimentation
Bagchi Tapan P and Madhuranjan Kumar (1992) Multiple Criteria Robust Design of Electronic Devices, Journal of Electronic Manufacturing, vol 3(1), pp. 31–38
Montgomery, D. C. (2005). Design and Analysis of Experiments, 6th Edition, Ch. 9, Wiley.
Manufacturing
Quality control tools
Systems engineering
Design of experiments
Japanese inventions | Taguchi methods | [
"Engineering"
] | 2,200 | [
"Systems engineering",
"Manufacturing",
"Mechanical engineering"
] |
442,136 | https://en.wikipedia.org/wiki/Computability | Computability is the ability to solve a problem in an effective manner. It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem.
The most widely studied models of computability are the Turing-computable and μ-recursive functions, and the lambda calculus, all of which have computationally equivalent power. Other forms of computability are studied as well: computability notions weaker than Turing machines are studied in automata theory, while computability notions stronger than Turing machines are studied in the field of hypercomputation.
Problems
A central idea in computability is that of a (computational) problem, which is a task whose computability can be explored.
There are two key types of problems:
A decision problem fixes a set S, which may be a set of strings, natural numbers, or other objects taken from some larger set U. A particular instance of the problem is to decide, given an element u of U, whether u is in S. For example, let U be the set of natural numbers and S the set of prime numbers. The corresponding decision problem corresponds to primality testing.
A function problem consists of a function f from a set U to a set V. An instance of the problem is to compute, given an element u in U, the corresponding element f(u) in V. For example, U and V may be the set of all finite binary strings, and f may take a string and return the string obtained by reversing the digits of the input (so f(0101) = 1010).
Other types of problems include search problems and optimization problems.
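A minimal sketch of the two key problem types described above, using the examples already given: primality testing as a decision problem and string reversal as a function problem. The concrete function names are chosen only for illustration.

```python
# The two problem types described above, instantiated with the article's examples.

def is_prime(u: int) -> bool:
    """Decision problem: is the natural number u in the set S of primes?"""
    if u < 2:
        return False
    return all(u % d for d in range(2, int(u ** 0.5) + 1))

def reverse_string(u: str) -> str:
    """Function problem: compute f(u), the digit-reversed binary string."""
    return u[::-1]

print(is_prime(7), is_prime(8))        # True False
print(reverse_string("0101"))          # 1010
```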
One goal of computability theory is to determine which problems, or classes of problems, can be solved in each model of computation.
Formal models of computation
A model of computation is a formal description of a particular type of computational process. The description often takes the form of an abstract machine that is meant to perform the task at hand. General models of computation equivalent to a Turing machine (see Church–Turing thesis) include:
Lambda calculus: A computation consists of an initial lambda expression (or two if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction.
Combinatory logic
A concept with many similarities to λ-calculus, but with important differences (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in λ-calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics more economical (conceptually), and eliminating the notion of variables (thus clarifying their role in mathematics).
μ-recursive functions: A computation consists of a μ-recursive function, i.e. its defining sequence, any input value(s) and a sequence of recursive functions appearing in the defining sequence with inputs and outputs. Thus, if in the defining sequence of a recursive function f(x) the functions g(x) and h(x, y) appear, then terms of the form g(5) = 7 or h(3, 2) = 10 might appear. Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or μ-recursion. For instance, if f(x) = h(x, g(x)), then for f(5) = 3 to appear, terms like g(5) = 6 and h(5, 6) = 3 must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs.
String rewriting systems: Includes Markov algorithms, which use grammar-like rules to operate on strings of symbols; also the Post canonical system.
Register machine
A theoretical idealization of a computer. There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number), e.g. only decrementation (combined with conditional jump) and incrementation exist (and halting). The lack of the infinite (or dynamically growing) external store (seen at Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number allows the possibility of representing a complicated thing (e.g. a sequence, or a matrix etc.) by an appropriate huge natural number — unambiguity of both representation and interpretation can be established by number theoretical foundations of these techniques.
Turing machine: Also similar to the finite state machine, except that the input is provided on an execution "tape", which the Turing machine can read from, write to, or move back and forth past its read/write "head". The tape is allowed to grow to arbitrary size. The Turing machine is capable of performing complex calculations which can have arbitrary duration. This model is perhaps the most important model of computation in computer science, as it simulates computation in the absence of predefined resource limits.
Multitape Turing machine: Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape.
P′′
Like Turing machines, P′′ uses an infinite tape of symbols (without random access), and a rather minimalistic set of instructions. But these instructions are very different; thus, unlike Turing machines, P′′ does not need to maintain a distinct state, because all "memory-like" functionality can be provided only by the tape. Instead of rewriting the current symbol, it can perform a modular arithmetic incrementation on it. P′′ also has a pair of instructions for a cycle, inspecting the blank symbol. Despite its minimalistic nature, it became the formal ancestor of Brainfuck, an implemented programming language used for entertainment.
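The Turing machine described in the list above is simple enough to simulate directly. The following is a minimal illustrative sketch in Python; the particular machine, a one-state "bit flipper", is an invented example and not part of any standard definition:

```python
# Minimal Turing machine simulator (illustrative sketch).
# transitions maps (state, symbol) -> (new_state, symbol_to_write, head_move).
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse, unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return state, "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halt at the first blank cell.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_turing_machine(flipper, "10110"))   # ('halt', '01001_')
```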
In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Finite automata, a formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars.
Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in such a way the Chomsky hierarchy of languages is obtained.
Other restricted models of computation include:
Deterministic finite automaton (DFA) Also called a finite-state machine. All real computing devices in existence today can be modeled as a finite-state machine, as all real computers operate on finite resources. Such a machine has a set of states, and a set of state transitions which are affected by the input stream. Certain states are defined to be accepting states. An input stream is fed into the machine one character at a time, and the state transitions for the current state are compared to the input stream, and if there is a matching transition the machine may enter a new state. If at the end of the input stream the machine is in an accepting state, then the whole input stream is accepted.
Nondeterministic finite automaton (NFA) Another simple model of computation, although its processing sequence is not uniquely determined. It can be interpreted as taking multiple paths of computation simultaneously through a finite number of states. However, it is possible to prove that any NFA is reducible to an equivalent DFA.
Pushdown automaton Similar to the finite state machine, except that it has available an execution stack, which is allowed to grow to arbitrary size. The state transitions additionally specify whether to add a symbol to the stack, or to remove a symbol from the stack. It is more powerful than a DFA due to its infinite-memory stack, although only the top element of the stack is accessible at any time.
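The deterministic finite automaton described above amounts to nothing more than a finite transition table. A minimal illustrative sketch in Python; the particular machine, which accepts strings over {a, b} containing an even number of 'a's, is an invented example:

```python
# Deterministic finite automaton as a transition table (illustrative sketch).
def dfa_accepts(transitions, start, accepting, string):
    state = start
    for ch in string:
        state = transitions[(state, ch)]   # exactly one transition per (state, symbol)
    return state in accepting

# Two-state DFA accepting strings over {a, b} with an even number of 'a's.
even_as = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}
print(dfa_accepts(even_as, "even", {"even"}, "abab"))  # True  (two 'a's)
print(dfa_accepts(even_as, "even", {"even"}, "ab"))    # False (one 'a')
```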
Power of automata
With these computational models in hand, we can determine what their limits are. That is, what classes of languages can they accept?
Power of finite-state machines
Computer scientists call any language that can be accepted by a finite-state machine a regular language. Because of the restriction that the number of possible states in a finite state machine is finite, we can see that to find a language that is not regular, we must construct a language that would require an infinite number of states.
An example of such a language is the set of all strings consisting of the letters 'a' and 'b' which contain an equal number of the letter 'a' and 'b'. To see why this language cannot be correctly recognized by a finite state machine, assume first that such a machine M exists. M must have some number of states n. Now consider the string x consisting of n 'a's followed by n 'b's.
As M reads in x, there must be some state in the machine that is repeated as it reads in the first series of 'a's, since there are n 'a's and only n states by the pigeonhole principle. Call this state S, and further let d be the number of 'a's that our machine read in order to get from the first occurrence of S to some subsequent occurrence during the 'a' sequence. We know, then, that at that second occurrence of S, we can add in an additional d (where d > 0) 'a's and we will be again at state S. This means that we know that a string of n+d 'a's must end up in the same state as the string of n 'a's. This implies that if our machine accepts x, it must also accept the string of n+d 'a's followed by n 'b's, which is not in the language of strings containing an equal number of 'a's and 'b's. In other words, M cannot correctly distinguish between a string of equal number of 'a's and 'b's and a string with n+d 'a's and n 'b's.
We know, therefore, that this language cannot be accepted correctly by any finite-state machine, and is thus not a regular language. A more general form of this result is called the Pumping lemma for regular languages, which can be used to show that broad classes of languages cannot be recognized by a finite state machine.
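The pigeonhole argument can be made concrete: feed any deterministic finite automaton more 'a's than it has states, locate the first repeated state, and repeat ("pump") that loop; the automaton cannot tell the pumped string from the original. A small illustrative sketch in Python, where the three-state machine is an arbitrary invented example:

```python
# Concrete illustration of the pumping argument (a sketch, not a proof).
def run(transitions, start, string):
    state, visited = start, [start]
    for ch in string:
        state = transitions[(state, ch)]
        visited.append(state)
    return state, visited

# An arbitrary 3-state DFA over {a, b}.
T = {("q0", "a"): "q1", ("q1", "a"): "q2", ("q2", "a"): "q0",
     ("q0", "b"): "q0", ("q1", "b"): "q1", ("q2", "b"): "q2"}
n = 3                                       # number of states
_, visited = run(T, "q0", "a" * n)          # reading n 'a's visits n + 1 states
seen = {}
for i, s in enumerate(visited):             # pigeonhole: some state must repeat
    if s in seen:
        d = i - seen[s]                     # length of the pumpable loop of 'a's
        break
    seen[s] = i

original = "a" * n + "b" * n
pumped = "a" * (n + d) + "b" * n
# The DFA ends in the same state on both strings, so it cannot accept exactly one of them.
print(run(T, "q0", original)[0] == run(T, "q0", pumped)[0])   # True
```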
Power of pushdown automata
Computer scientists define a language that can be accepted by a pushdown automaton as a Context-free language, which can be specified as a Context-free grammar. The language consisting of strings with equal numbers of 'a's and 'b's, which we showed was not a regular language, can be decided by a push-down automaton. Also, in general, a push-down automaton can behave just like a finite-state machine, so it can decide any language which is regular. This model of computation is thus strictly more powerful than finite state machines.
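A pushdown automaton deciding this language only ever needs its stack as a counter of the current surplus of one letter over the other. A minimal illustrative sketch in Python, with the stack kept as an explicit list:

```python
# Pushdown-automaton-style recognizer for strings with equally many 'a's and 'b's
# (illustrative sketch; the stack holds the current surplus of one letter).
def equal_ab(string):
    stack = []
    for ch in string:
        if ch not in "ab":
            return False
        if stack and stack[-1] != ch:
            stack.pop()             # cancel one surplus symbol of the other letter
        else:
            stack.append(ch)        # the surplus of this letter grows
    return not stack                # accept iff no surplus remains

print(equal_ab("abba"), equal_ab("aabbb"))   # True False
```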
However, it turns out there are languages that cannot be decided by a push-down automaton either. The result is similar to that for regular languages, and won't be detailed here. There exists a Pumping lemma for context-free languages. An example of such a language is the set of prime numbers.
Power of Turing machines
Turing machines can decide any context-free language, in addition to languages not decidable by a push-down automaton, such as the language consisting of prime numbers. It is therefore a strictly more powerful model of computation.
Because Turing machines have the ability to "back up" in their input tape, it is possible for a Turing machine to run for a long time in a way that is not possible with the other computation models previously described. It is possible to construct a Turing machine that will never finish running (halt) on some inputs. We say that a Turing machine can decide a language if it eventually will halt on all inputs and give an answer. A language that can be so decided is called a recursive language. We can further describe Turing machines that will eventually halt and give an answer for any input in a language, but which may run forever for input strings which are not in the language. Such Turing machines could tell us that a given string is in the language, but we may never be sure based on its behavior that a given string is not in a language, since it may run forever in such a case. A language which is accepted by such a Turing machine is called a recursively enumerable language.
The Turing machine, it turns out, is an exceedingly powerful model of automata. Attempts to amend the definition of a Turing machine to produce a more powerful machine have surprisingly met with failure. For example, adding an extra tape to the Turing machine, or giving it a two-dimensional (or three- or any-dimensional) infinite surface to work with, can all be simulated by a Turing machine with the basic one-dimensional tape. These models are thus not more powerful. In fact, a consequence of the Church–Turing thesis is that there is no reasonable model of computation which can decide languages that cannot be decided by a Turing machine.
The question to ask then is: do there exist languages which are recursively enumerable, but not recursive? And, furthermore, are there languages which are not even recursively enumerable?
The halting problem
The halting problem is one of the most famous problems in computer science, because it has profound implications on the theory of computability and on how we use computers in everyday practice. The problem can be phrased:
Given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.
Here we are asking not a simple question about a prime number or a palindrome, but we are instead turning the tables and asking a Turing machine to answer a question about another Turing machine. It can be shown (See main article: Halting problem) that it is not possible to construct a Turing machine that can answer this question in all cases.
That is, the only general way to know for sure if a given program will halt on a particular input in all cases is simply to run it and see if it halts. If it does halt, then you know it halts. If it doesn't halt, however, you may never know if it will eventually halt. The language consisting of all Turing machine descriptions paired with all possible input streams on which those Turing machines will eventually halt, is not recursive. The halting problem is therefore called non-computable or undecidable.
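The impossibility proof is a diagonal argument, which can be phrased as Python-flavoured pseudocode. The function halts below is hypothetical; it is exactly the decider that the argument shows cannot exist:

```python
def halts(program, data):
    """Hypothetical halting decider: True iff program(data) eventually halts."""
    raise NotImplementedError   # assumed to exist only for the sake of contradiction

def paradox(program):
    if halts(program, program):
        while True:             # loop forever exactly when program(program) would halt
            pass
    return                      # halt exactly when program(program) would run forever

# paradox(paradox) halts if and only if it does not halt, so no total, always-correct
# implementation of halts() can exist.
```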
An extension of the halting problem is called Rice's theorem, which states that it is undecidable (in general) whether a given language possesses any specific nontrivial property.
Beyond recursively enumerable languages
The halting problem is easy to solve, however, if we allow that the Turing machine that decides it may run forever when given input which is a representation of a Turing machine that does not itself halt. The halting language is therefore recursively enumerable. It is possible to construct languages which are not even recursively enumerable, however.
A simple example of such a language is the complement of the halting language; that is the language consisting of all Turing machines paired with input strings where the Turing machines do not halt on their input. To see that this language is not recursively enumerable, imagine that we construct a Turing machine M which is able to give a definite answer for all such Turing machines, but that it may run forever on any Turing machine that does eventually halt. We can then construct another Turing machine M′ that simulates the operation of this machine, along with simulating directly the execution of the machine given in the input as well, by interleaving the execution of the two programs. Since the direct simulation will eventually halt if the program it is simulating halts, and since by assumption the simulation of M will eventually halt if the input program would never halt, we know that M′ will eventually have one of its parallel versions halt. M′ is thus a decider for the halting problem. We have previously shown, however, that the halting problem is undecidable. We have a contradiction, and we have thus shown that our assumption that M exists is incorrect. The complement of the halting language is therefore not recursively enumerable.
Concurrency-based models
A number of computational models based on concurrency have been developed, including the parallel random-access machine and the Petri net. These models of concurrent computation still do not implement any mathematical functions that cannot be implemented by Turing machines.
Stronger models of computation
The Church–Turing thesis conjectures that there is no effective model of computing that can compute more mathematical functions than a Turing machine. Computer scientists have imagined many varieties of hypercomputers, models of computation that go beyond Turing computability.
Infinite execution
Imagine a machine where each step of the computation requires half the time of the previous step (and hopefully half the energy of the previous step...). If we normalize to 1/2 time unit the amount of time required for the first step (and to 1/2 energy unit the amount of energy required for the first step...), the execution would require
1/2 + 1/4 + 1/8 + ... = 1 time unit (and 1 energy unit...) to run. This infinite series converges to 1, which means that this Zeno machine can execute a countably infinite number of steps in 1 time unit (using 1 energy unit...). This machine is capable of deciding the halting problem by directly simulating the execution of the machine in question. By extension, any convergent infinite [must be provably infinite] series would work. Assuming that the infinite series converges to a value n, the Zeno machine would complete a countably infinite execution in n time units.
Oracle machines
So-called Oracle machines have access to various "oracles" which provide the solution to specific undecidable problems. For example, the Turing machine may have a "halting oracle" which answers immediately whether a given Turing machine will ever halt on a given input. These machines are a central topic of study in recursion theory.
Limits of hyper-computation
Even these machines, which seemingly represent the limit of automata that we could imagine, run into their own limitations. While each of them can solve the halting problem for a Turing machine, they cannot solve their own version of the halting problem. For example, an Oracle machine cannot answer the question of whether a given Oracle machine will ever halt.
See also
Automata theory
Abstract machine
List of undecidable problems
Computational complexity theory
Computability logic
References
Part Two: Computability Theory, Chapters 3–6, pp. 123–222.
Chapter 3: Computability, pp. 57–70. | Computability | [
"Mathematics"
] | 3,934 | [
"Computability theory",
"Mathematical logic"
] |
442,291 | https://en.wikipedia.org/wiki/John%20Harsanyi | John Charles Harsanyi (; May 29, 1920 – August 9, 2000) was a Hungarian-American economist who spent most of his career at the University of California, Berkeley. He was the recipient of the Nobel Memorial Prize in Economic Sciences in 1994.
Harsanyi is best known for his contributions to the study of game theory and its application to economics, specifically for his developing the highly innovative analysis of games of incomplete information, so-called Bayesian games. He also made important contributions to the use of game theory and economic reasoning in political and moral philosophy (specifically utilitarian ethics) as well as contributing to the study of equilibrium selection. For his work, he was a co-recipient along with John Nash and Reinhard Selten of the 1994 Nobel Memorial Prize in Economic Sciences.
He moved to the United States in 1956, and spent most of his life there. According to György Marx, he was one of The Martians.
Early life
Harsanyi was born on May 29, 1920, in Budapest, Hungary, the son of Alice Harsányi (née Gombos) and Károly Harsányi, a pharmacy owner. His parents converted from Judaism to Catholicism a year before he was born. He attended high school at the Lutheran Gymnasium in Budapest. In high school, he became one of the best problem solvers of the KöMaL, the Mathematical and Physical Monthly for Secondary Schools. Founded in 1893, this periodical is generally credited with a large share of Hungarian students' success in mathematics. He also won the first prize in the Eötvös mathematics competition for high school students.
Although he wanted to study mathematics and philosophy, his father sent him to France in 1939 to enroll in chemical engineering at the University of Lyon. However, because of the start of World War II, Harsanyi returned to Hungary to study pharmacology at the University of Budapest (today: Eötvös Loránd University), earning a diploma in 1944. As a pharmacology student, Harsanyi escaped conscription into the Royal Hungarian Army which, as a person of Jewish descent, would have meant forced labor.
However, in 1944 (after the fall of the Horthy regime and the seizure of power by the Arrow Cross Party) his military deferment was cancelled and he was compelled to join a forced labor unit on the Eastern Front. After seven months of forced labor, when the German authorities decided to deport his unit to a concentration camp in Austria, John Harsanyi managed to escape and found sanctuary for the rest of the war in a Jesuit house.
Postwar
After the end of the war, Harsanyi returned to the University of Budapest for graduate studies in philosophy and sociology, earning his PhD in both subjects in 1947. Then a devout Catholic, he simultaneously studied theology, also joining lay ranks of the Dominican Order. He later abandoned Catholicism, becoming an atheist for the rest of his life. Harsanyi spent the academic year 1947–1948 on the faculty of the Institute of Sociology of the University of Budapest, where he met Anne Klauber, his future wife. He was forced to resign the faculty because of openly expressing his anti-Marxist opinions, while Anne faced increasing peer pressure to leave him for the same reason.
Harsanyi remained in Hungary for the following two years attempting to sell his family's pharmacy without losing it to the authorities. After it became apparent that the communist party would confiscate the pharmacy in 1950, he fled with Anne and her parents by illegally crossing the border into Austria and then going to Australia where Klauber's parents had some friends.
Australia
The two did not marry until they arrived in Australia because Klauber's immigration papers would need to be changed to reflect her married name. The two arrived with her parents on December 30, 1950, and they looked to marry immediately. Harsanyi and Klauber were married on January 2, 1951. Neither spoke much English and understood little of what they were told to say to each other. Harsanyi later explained to his new wife that she had promised to cook better food than she usually did.
Harsanyi's Hungarian degrees were not recognized in Australia, but they earned him credit at the University of Sydney for a master's degree. Harsanyi worked in a factory during the day and studied economics in the evening at the University of Sydney, finishing with a M.A. in 1953. While studying in Sydney, he started publishing research papers in economic journals, including the Journal of Political Economy and the Review of Economic Studies. The degree allowed him to take a teaching position in 1954 at the University of Queensland in Brisbane. While in Brisbane, Harsanyi's wife became a fashion designer for a small factory.
Later years
In 1956, Harsanyi received a Rockefeller scholarship that enabled him and Anne to spend the next two years in the United States, at Stanford University and, for a semester, at the Cowles Foundation. At Stanford Harsanyi wrote a dissertation in game theory under the supervision of Kenneth Arrow, earning a second PhD in economics in 1959, while Anne earned an MA in psychology. Harsanyi's student visa expired in 1958 and the two returned to Australia.
After working for a short time as a researcher at the Australian National University in Canberra, Harsanyi became frustrated with the lack of interest in game theory in Australia. With the help of Kenneth Arrow and James Tobin, he was able to move to the United States, taking a position as professor of economics at the Wayne State University in Detroit between 1961 and 1963. In 1964, he moved to Berkeley, California; he remained at the University of California, Berkeley, until retiring in 1990. Shortly after arriving in Berkeley, he and Anne had a child, Tom.
While teaching at Berkeley, Harsanyi did extensive research in game theory. Harold Kuhn, who had been John von Neumann's student in Princeton and already had game theory publications, encouraged him in this. The work for which he won the 1994 Nobel Prize in economics was a series of articles published in 1967 and 1968 which established what has become the standard framework for analyzing "games of incomplete information", situations in which the various strategic decisionmakers have different information about the parameters of the game. He resolved the problem of how players could make decisions while not knowing what each other knows by modelling the situation with initial moves by Nature using known probabilities to choose the parameters, with some players observing Nature's move but other players just knowing the probabilities and the fact that some players have observed the actual realized values. This relies on assuming that all players know the structure of the game, which means they all have "common priors", knowing the probabilities Nature uses in selecting parameter values, an assumption known as the Harsanyi Doctrine.
From 1966 to 1968, Harsanyi was part of a team of game theorists tasked with advising the United States Arms Control and Disarmament Agency in collaboration with Mathematica, a consulting group from Princeton University led by Harold Kuhn and Oskar Morgenstern.
John Harsanyi died on August 9, 2000, from a heart attack in Berkeley, California, after developing Alzheimer's disease.
Tribute
On 29 May 2010, Google celebrated John Harsanyi's 90th birthday with a doodle.
Publications
Harsanyi began researching utilitarian ethics in the mid-fifties at the University of Queensland in Brisbane. This led to two publications explaining why, before moral problems can be understood, people's personal preferences must be distinguished from their moral preferences. As he says at the beginning of his essay included in the book edited by A. Sen and B. Williams (see below), he tries to reconcile three traditions of Western moral thinking, those of Adam Smith, Immanuel Kant and the utilitarians (Bentham, Mill, Sidgwick and Edgeworth). He is considered one of the most important exponents of rule utilitarianism.
After moving to the US on a Rockefeller Fellowship where he was supervised by Kenneth Arrow, Harsanyi was influenced by Nash's publications on game theory and became increasingly interested in the topic.
See also
List of economists
The Martians (scientists)
Veil of ignorance
List of Jewish Nobel laureates
References
External links
IDEAS/RePEc
News article remembering Harsanyi's life and career
Obituary in The Independent (London)
Kenneth J. Arrow, "John C. Harsanyi", Biographical Memoirs of the National Academy of Sciences (2001)
1920 births
2000 deaths
20th-century Hungarian economists
American atheists
American Nobel laureates
American people of Hungarian-Jewish descent
Consequentialists
Distinguished fellows of the American Economic Association
Fasori Gimnázium alumni
Fellows of the Econometric Society
Former Roman Catholics
Game theorists
Hungarian atheists
Hungarian emigrants to Australia
Hungarian Nobel laureates
Nobel laureates from Austria-Hungary
Jewish American atheists
Jewish American economists
Nobel laureates in Economics
Stanford University alumni
University of California, Berkeley College of Letters and Science faculty
University of Sydney alumni
Utilitarians
Wayne State University faculty
Writers from Budapest
20th-century American economists
20th-century American philosophers | John Harsanyi | [
"Mathematics"
] | 1,891 | [
"Game theorists",
"Game theory"
] |
442,370 | https://en.wikipedia.org/wiki/List%20of%20prime%20numbers | This is a list of articles about prime numbers. A prime number (or prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself. By Euclid's theorem, there are an infinite number of prime numbers. Subsets of the prime numbers may be generated with various formulas for primes. The first 1000 primes are listed below, followed by lists of notable types of prime numbers in alphabetical order, giving their respective first terms. 1 is neither prime nor composite.
The first 1000 prime numbers
The following table lists the first 1000 primes, with 20 columns of consecutive primes in each of the 50 rows.
The Goldbach conjecture verification project reports that it has computed all primes smaller than 4×10^18. That means 95,676,260,903,887,607 primes (nearly 10^17), but they were not stored. There are known formulae to evaluate the prime-counting function (the number of primes smaller than a given value) faster than computing the primes. This has been used to compute that there are 1,925,320,391,606,803,968,923 primes (roughly 2^71) smaller than 10^23. A different computation found that there are 18,435,599,767,349,200,867,866 primes (roughly 2^74) smaller than 10^24, if the Riemann hypothesis is true.
Lists of primes by type
Below are listed the first prime numbers of many named forms and types. More details are in the article for the name. n is a natural number (including 0) in the definitions.
Balanced primes
Primes with equal-sized prime gaps after and before them, so that they are equal to the arithmetic mean of the nearest primes after and before.
5, 53, 157, 173, 211, 257, 263, 373, 563, 593, 607, 653, 733, 947, 977, 1103, 1123, 1187, 1223, 1367, 1511, 1747, 1753, 1907, 2287, 2417, 2677, 2903, 2963, 3307, 3313, 3637, 3733, 4013, 4409, 4457, 4597, 4657, 4691, 4993, 5107, 5113, 5303, 5387, 5393 ().
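The definition translates directly into a short search. An illustrative sketch in Python, relying on the sympy library for prime navigation:

```python
# Generate the first few balanced primes (illustrative sketch using sympy).
from sympy import nextprime, prevprime

def balanced_primes(limit):
    p, found = 5, []            # start at 5, the smallest prime with two prime neighbours
    while len(found) < limit:
        if p - prevprime(p) == nextprime(p) - p:   # equal gaps before and after
            found.append(p)
        p = nextprime(p)
    return found

print(balanced_primes(5))   # [5, 53, 157, 173, 211]
```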
Bell primes
Primes that are the number of partitions of a set with n members.
2, 5, 877, 27644437, 35742549198872617291353508656626642567, 359334085968622831041960188598043661065388726959079837.
The next term has 6,539 digits. ()
Chen primes
Where p is prime and p+2 is either a prime or semiprime.
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 47, 53, 59, 67, 71, 83, 89, 101, 107, 109, 113, 127, 131, 137, 139, 149, 157, 167, 179, 181, 191, 197, 199, 211, 227, 233, 239, 251, 257, 263, 269, 281, 293, 307, 311, 317, 337, 347, 353, 359, 379, 389, 401, 409 ()
Circular primes
A circular prime number is a number that remains prime on any cyclic rotation of its digits (in base 10).
2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 197, 199, 311, 337, 373, 719, 733, 919, 971, 991, 1193, 1931, 3119, 3779, 7793, 7937, 9311, 9377, 11939, 19391, 19937, 37199, 39119, 71993, 91193, 93719, 93911, 99371, 193939, 199933, 319993, 331999, 391939, 393919, 919393, 933199, 939193, 939391, 993319, 999331 ()
Some sources only list the smallest prime in each cycle, for example, listing 13, but omitting 31 (OEIS really calls this sequence circular primes, but not the above sequence):
2, 3, 5, 7, 11, 13, 17, 37, 79, 113, 197, 199, 337, 1193, 3779, 11939, 19937, 193939, 199933, 1111111111111111111, 11111111111111111111111 ()
All repunit primes are circular.
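Circular primality can be checked by testing every rotation of the digits. An illustrative sketch in Python using sympy's primality test (rotations with leading zeros are simply read as smaller integers here, which is adequate for this range):

```python
# Check circular primality by testing every cyclic rotation of the decimal digits.
from sympy import isprime

def is_circular_prime(n):
    s = str(n)
    return all(isprime(int(s[i:] + s[:i])) for i in range(len(s)))

print([n for n in range(2, 200) if is_circular_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 197, 199]
```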
Cluster primes
A cluster prime is a prime p such that every even natural number k ≤ p − 3 is the difference of two primes not exceeding p.
3, 5, 7, 11, 13, 17, 19, 23, ... ()
All odd primes between 3 and 89, inclusive, are cluster primes. The first 10 primes that are not cluster primes are:
2, 97, 127, 149, 191, 211, 223, 227, 229, 251.
Cousin primes
Where (p, p + 4) are both prime.
(3, 7), (7, 11), (13, 17), (19, 23), (37, 41), (43, 47), (67, 71), (79, 83), (97, 101), (103, 107), (109, 113), (127, 131), (163, 167), (193, 197), (223, 227), (229, 233), (277, 281) (, )
Cuban primes
Of the form where x = y + 1.
7, 19, 37, 61, 127, 271, 331, 397, 547, 631, 919, 1657, 1801, 1951, 2269, 2437, 2791, 3169, 3571, 4219, 4447, 5167, 5419, 6211, 7057, 7351, 8269, 9241, 10267, 11719, 12097, 13267, 13669, 16651, 19441, 19927, 22447, 23497, 24571, 25117, 26227, 27361, 33391, 35317 ()
Of the form where x = y + 2.
13, 109, 193, 433, 769, 1201, 1453, 2029, 3469, 3889, 4801, 10093, 12289, 13873, 18253, 20173, 21169, 22189, 28813, 37633, 43201, 47629, 60493, 63949, 65713, 69313, 73009, 76801, 84673, 106033, 108301, 112909, 115249 ()
Cullen primes
Of the form n×2^n + 1.
3, 393050634124102232869567034555427371542904833 ()
Delicate primes
Primes for which changing any one of their (base 10) digits to any other value always results in a composite number.
294001, 505447, 584141, 604171, 971767, 1062599, 1282529, 1524181, 2017963, 2474431, 2690201, 3085553, 3326489, 4393139 ()
Dihedral primes
Primes that remain prime when read upside down or mirrored in a seven-segment display.
2, 5, 11, 101, 181, 1181, 1811, 18181, 108881, 110881, 118081, 120121,
121021, 121151, 150151, 151051, 151121, 180181, 180811, 181081 ()
Eisenstein primes without imaginary part
Eisenstein integers that are irreducible and real numbers (primes of the form 3n − 1).
2, 5, 11, 17, 23, 29, 41, 47, 53, 59, 71, 83, 89, 101, 107, 113, 131, 137, 149, 167, 173, 179, 191, 197, 227, 233, 239, 251, 257, 263, 269, 281, 293, 311, 317, 347, 353, 359, 383, 389, 401 ()
Emirps
Primes that become a different prime when their decimal digits are reversed. The name "emirp" is the reverse of the word "prime".
13, 17, 31, 37, 71, 73, 79, 97, 107, 113, 149, 157, 167, 179, 199, 311, 337, 347, 359, 389, 701, 709, 733, 739, 743, 751, 761, 769, 907, 937, 941, 953, 967, 971, 983, 991 ()
Euclid primes
Of the form p# + 1 (a subset of primorial primes).
3, 7, 31, 211, 2311, 200560490131 ()
Euler irregular primes
A prime p that divides the Euler number E_{2n} for some 0 ≤ 2n ≤ p − 3.
19, 31, 43, 47, 61, 67, 71, 79, 101, 137, 139, 149, 193, 223, 241, 251, 263, 277, 307, 311, 349, 353, 359, 373, 379, 419, 433, 461, 463, 491, 509, 541, 563, 571, 577, 587 ()
Euler (p, p − 3) irregular primes
Primes p such that (p, p − 3) is an Euler irregular pair.
149, 241, 2946901 ()
Factorial primes
Of the form n! − 1 or n! + 1.
2, 3, 5, 7, 23, 719, 5039, 39916801, 479001599, 87178291199, 10888869450418352160768000001, 265252859812191058636308479999999, 263130836933693530167218012159999999, 8683317618811886495518194401279999999 ()
Fermat primes
Of the form 2^(2^n) + 1.
3, 5, 17, 257, 65537 ()
These are the only known Fermat primes, and conjecturally the only Fermat primes. The probability of the existence of another Fermat prime is less than one in a billion.
Generalized Fermat primes
Of the form a^(2^n) + 1 for fixed integer a.
a = 2: 3, 5, 17, 257, 65537 ()
a = 4: 5, 17, 257, 65537
a = 6: 7, 37, 1297
a = 8: (none exist)
a = 10: 11, 101
a = 12: 13
a = 14: 197
a = 16: 17, 257, 65537
a = 18: 19
a = 20: 401, 160001
a = 22: 23
a = 24: 577, 331777
Fibonacci primes
Primes in the Fibonacci sequence F_0 = 0, F_1 = 1, F_n = F_{n−1} + F_{n−2}.
2, 3, 5, 13, 89, 233, 1597, 28657, 514229, 433494437, 2971215073, 99194853094755497, 1066340417491710595814572169, 19134702400093278081449423917 ()
Fortunate primes
Fortunate numbers that are prime (it has been conjectured they all are).
3, 5, 7, 13, 17, 19, 23, 37, 47, 59, 61, 67, 71, 79, 89, 101, 103, 107, 109, 127, 151, 157, 163, 167, 191, 197, 199, 223, 229, 233, 239, 271, 277, 283, 293, 307, 311, 313, 331, 353, 373, 379, 383, 397 ()
Gaussian primes
Prime elements of the Gaussian integers; equivalently, primes of the form 4n + 3.
3, 7, 11, 19, 23, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107, 127, 131, 139, 151, 163, 167, 179, 191, 199, 211, 223, 227, 239, 251, 263, 271, 283, 307, 311, 331, 347, 359, 367, 379, 383, 419, 431, 439, 443, 463, 467, 479, 487, 491, 499, 503 ()
Good primes
Primes p_n for which p_n^2 > p_{n−i} · p_{n+i} for all 1 ≤ i ≤ n−1, where p_n is the nth prime.
5, 11, 17, 29, 37, 41, 53, 59, 67, 71, 97, 101, 127, 149, 179, 191, 223, 227, 251, 257, 269, 307 ()
Happy primes
Happy numbers that are prime.
7, 13, 19, 23, 31, 79, 97, 103, 109, 139, 167, 193, 239, 263, 293, 313, 331, 367, 379, 383, 397, 409, 487, 563, 617, 653, 673, 683, 709, 739, 761, 863, 881, 907, 937, 1009, 1033, 1039, 1093 ()
Harmonic primes
Primes p for which there are no solutions to H_k ≡ 0 (mod p) and H_k ≡ −ω_p (mod p) for 1 ≤ k ≤ p−2, where H_k denotes the k-th harmonic number and ω_p denotes the Wolstenholme quotient.
5, 13, 17, 23, 41, 67, 73, 79, 107, 113, 139, 149, 157, 179, 191, 193, 223, 239, 241, 251, 263, 277, 281, 293, 307, 311, 317, 331, 337, 349 ()
Higgs primes for squares
Primes p for which p − 1 divides the square of the product of all earlier terms.
2, 3, 5, 7, 11, 13, 19, 23, 29, 31, 37, 43, 47, 53, 59, 61, 67, 71, 79, 101, 107, 127, 131, 139, 149, 151, 157, 173, 181, 191, 197, 199, 211, 223, 229, 263, 269, 277, 283, 311, 317, 331, 347, 349 ()
Highly cototient primes
Primes that are a cototient more often than any integer below it except 1.
2, 23, 47, 59, 83, 89, 113, 167, 269, 389, 419, 509, 659, 839, 1049, 1259, 1889 ()
Home primes
For n ≥ 2, write the prime factorization of n in base 10 and concatenate the factors; iterate until a prime is reached.
2, 3, 211, 5, 23, 7, 3331113965338635107, 311, 773, 11, 223, 13, 13367, 1129, 31636373, 17, 233, 19, 3318308475676071413, 37, 211, 23, 331319, 773, 3251, 13367, 227, 29, 547, 31, 241271, 311, 31397, 1129, 71129, 37, 373, 313, 3314192745739, 41, 379, 43, 22815088913, 3411949, 223, 47, 6161791591356884791277 ()
Irregular primes
Odd primes p that divide the class number of the p-th cyclotomic field.
37, 59, 67, 101, 103, 131, 149, 157, 233, 257, 263, 271, 283, 293, 307, 311, 347, 353, 379, 389, 401, 409, 421, 433, 461, 463, 467, 491, 523, 541, 547, 557, 577, 587, 593, 607, 613 ()
(p, p − 3) irregular primes
(See Wolstenholme prime)
(p, p − 5) irregular primes
Primes p such that (p, p−5) is an irregular pair.
37
(p, p − 9) irregular primes
Primes p such that (p, p − 9) is an irregular pair.
67, 877 ()
Isolated primes
Primes p such that neither p − 2 nor p + 2 is prime.
2, 23, 37, 47, 53, 67, 79, 83, 89, 97, 113, 127, 131, 157, 163, 167, 173, 211, 223, 233, 251, 257, 263, 277, 293, 307, 317, 331, 337, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 439, 443, 449, 457, 467, 479, 487, 491, 499, 503, 509, 541, 547, 557, 563, 577, 587, 593, 607, 613, 631, 647, 653, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 839, 853, 863, 877, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997 ()
Leyland primes
Of the form x^y + y^x, with 1 < x < y.
17, 593, 32993, 2097593, 8589935681, 59604644783353249, 523347633027360537213687137, 43143988327398957279342419750374600193 ()
Long primes
Primes p for which, in a given base b, (b^(p−1) − 1)/p gives a cyclic number. They are also called full reptend primes. Primes p for base 10:
7, 17, 19, 23, 29, 47, 59, 61, 97, 109, 113, 131, 149, 167, 179, 181, 193, 223, 229, 233, 257, 263, 269, 313, 337, 367, 379, 383, 389, 419, 433, 461, 487, 491, 499, 503, 509, 541, 571, 577, 593 ()
Lucas primes
Primes in the Lucas number sequence L_0 = 2, L_1 = 1, L_n = L_{n−1} + L_{n−2}.
2, 3, 7, 11, 29, 47, 199, 521, 2207, 3571, 9349, 3010349, 54018521, 370248451, 6643838879, 119218851371, 5600748293801, 688846502588399, 32361122672259149 ()
Lucky primes
Lucky numbers that are prime.
3, 7, 13, 31, 37, 43, 67, 73, 79, 127, 151, 163, 193, 211, 223, 241, 283, 307, 331, 349, 367, 409, 421, 433, 463, 487, 541, 577, 601, 613, 619, 631, 643, 673, 727, 739, 769, 787, 823, 883, 937, 991, 997 ()
Mersenne primes
Of the form 2^n − 1.
3, 7, 31, 127, 8191, 131071, 524287, 2147483647, 2305843009213693951, 618970019642690137449562111, 162259276829213363391578010288127, 170141183460469231731687303715884105727 ()
There are 52 known Mersenne primes. The 13th, 14th, and 52nd have respectively 157, 183, and 41,024,320 digits. This includes the largest known prime, 2^136,279,841 − 1, which is the 52nd Mersenne prime.
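Mersenne numbers admit a specialised primality test, the Lucas–Lehmer test, which is what makes record Mersenne primes findable at all. A small illustrative sketch in Python (the trial-division check for the exponent is only adequate for these tiny exponents):

```python
# Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime iff s_{p-2} == 0,
# where s_0 = 4 and s_{k+1} = s_k^2 - 2 (mod M_p).
def lucas_lehmer(p):
    if p == 2:
        return True
    m = (1 << p) - 1            # 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

mersenne_exponents = [p for p in range(2, 130)
                      if all(p % d for d in range(2, p))   # p itself must be prime
                      and lucas_lehmer(p)]
print(mersenne_exponents)       # [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```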
Mersenne divisors
Primes p that divide 2^n − 1, for some prime number n.
3, 7, 23, 31, 47, 89, 127, 167, 223, 233, 263, 359, 383, 431, 439, 479, 503, 719, 839, 863, 887, 983, 1103, 1319, 1367, 1399, 1433, 1439, 1487, 1823, 1913, 2039, 2063, 2089, 2207, 2351, 2383, 2447, 2687, 2767, 2879, 2903, 2999, 3023, 3119, 3167, 3343 ()
All Mersenne primes are, by definition, members of this sequence.
Mersenne prime exponents
Primes p such that 2^p − 1 is prime.
2, 3, 5, 7, 13, 17, 19, 31, 61, 89,
107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423,
9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049,
216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011,
24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609, 57885161 ()
Four more are known to be in the sequence, but it is not known whether they are the next:
74207281, 77232917, 82589933, 136279841
Double Mersenne primes
A subset of Mersenne primes of the form 2^(2^p − 1) − 1 for prime p.
7, 127, 2147483647, 170141183460469231731687303715884105727 (primes in )
Generalized repunit primes
Of the form (a^n − 1) / (a − 1) for fixed integer a.
For a = 2, these are the Mersenne primes, while for a = 10 they are the repunit primes. For other small a, they are given below:
a = 3: 13, 1093, 797161, 3754733257489862401973357979128773, 6957596529882152968992225251835887181478451547013 ()
a = 4: 5 (the only prime for a = 4)
a = 5: 31, 19531, 12207031, 305175781, 177635683940025046467781066894531, 14693679385278593849609206715278070972733319459651094018859396328480215743184089660644531 ()
a = 6: 7, 43, 55987, 7369130657357778596659, 3546245297457217493590449191748546458005595187661976371 ()
a = 7: 2801, 16148168401, 85053461164796801949539541639542805770666392330682673302530819774105141531698707146930307290253537320447270457
a = 8: 73 (the only prime for a = 8)
a = 9: none exist
Other generalizations and variations
Many generalizations of Mersenne primes have been defined. These include the following:
Primes of the form (x^n − y^n)/(x − y), including the Mersenne primes and the cuban primes as special cases
Williams primes, of the form (b − 1)·b^n − 1
Mills primes
Of the form ⌊θ^(3^n)⌋, where θ is Mills' constant. This form is prime for all positive integers n.
2, 11, 1361, 2521008887, 16022236204009818131831320183 ()
Minimal primes
Primes for which there is no shorter sub-sequence of the decimal digits that form a prime. There are exactly 26 minimal primes:
2, 3, 5, 7, 11, 19, 41, 61, 89, 409, 449, 499, 881, 991, 6469, 6949, 9001, 9049, 9649, 9949, 60649, 666649, 946669, 60000049, 66000049, 66600049 ()
Newman–Shanks–Williams primes
Newman–Shanks–Williams numbers that are prime.
7, 41, 239, 9369319, 63018038201, 489133282872437279, 19175002942688032928599 ()
Non-generous primes
Primes p for which the least positive primitive root is not a primitive root of p^2. Three such primes are known; it is not known whether there are more.
2, 40487, 6692367337 ()
Palindromic primes
Primes that remain the same when their decimal digits are read backwards.
2, 3, 5, 7, 11, 101, 131, 151, 181, 191, 313, 353, 373, 383, 727, 757, 787, 797, 919, 929, 10301, 10501, 10601, 11311, 11411, 12421, 12721, 12821, 13331, 13831, 13931, 14341, 14741 ()
Palindromic wing primes
Primes in which all decimal digits are equal except the single middle digit.
101, 131, 151, 181, 191, 313, 353, 373, 383, 727, 757, 787, 797, 919, 929, 11311, 11411, 33533, 77377, 77477, 77977, 1114111, 1117111, 3331333, 3337333, 7772777, 7774777, 7778777, 111181111, 111191111, 777767777, 77777677777, 99999199999 ()
Partition primes
Partition function values that are prime.
2, 3, 5, 7, 11, 101, 17977, 10619863, 6620830889, 80630964769, 228204732751, 1171432692373, 1398341745571, 10963707205259, 15285151248481, 10657331232548839, 790738119649411319, 18987964267331664557 ()
Pell primes
Primes in the Pell number sequence P_0 = 0, P_1 = 1, P_n = 2P_{n−1} + P_{n−2}.
2, 5, 29, 5741, 33461, 44560482149, 1746860020068409, 68480406462161287469, 13558774610046711780701, 4125636888562548868221559797461449 ()
Permutable primes
Any permutation of the decimal digits is a prime.
2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373, 733, 919, 991, 1111111111111111111, 11111111111111111111111 ()
Perrin primes
Primes in the Perrin number sequence P(0) = 3, P(1) = 0, P(2) = 2,
P(n) = P(n−2) + P(n−3).
2, 3, 5, 7, 17, 29, 277, 367, 853, 14197, 43721, 1442968193, 792606555396977, 187278659180417234321, 66241160488780141071579864797 ()
Pierpont primes
Of the form 2^u·3^v + 1 for some integers u,v ≥ 0.
These are also class 1- primes.
2, 3, 5, 7, 13, 17, 19, 37, 73, 97, 109, 163, 193, 257, 433, 487, 577, 769, 1153, 1297, 1459, 2593, 2917, 3457, 3889, 10369, 12289, 17497, 18433, 39367, 52489, 65537, 139969, 147457 ()
Pillai primes
Primes p for which there exist n > 0 such that p divides n! + 1 and n does not divide p − 1.
23, 29, 59, 61, 67, 71, 79, 83, 109, 137, 139, 149, 193, 227, 233, 239, 251, 257, 269, 271, 277, 293, 307, 311, 317, 359, 379, 383, 389, 397, 401, 419, 431, 449, 461, 463, 467, 479, 499 ()
Primes of the form n^4 + 1
Of the form n^4 + 1.
2, 17, 257, 1297, 65537, 160001, 331777, 614657, 1336337, 4477457, 5308417, 8503057, 9834497, 29986577, 40960001, 45212177, 59969537, 65610001, 126247697, 193877777, 303595777, 384160001, 406586897, 562448657, 655360001 ()
Primeval primes
Primes for which there are more prime permutations of some or all the decimal digits than for any smaller number.
2, 13, 37, 107, 113, 137, 1013, 1237, 1367, 10079 ()
Primorial primes
Of the form p# ± 1.
3, 5, 7, 29, 31, 211, 2309, 2311, 30029, 200560490131, 304250263527209, 23768741896345550770650537601358309 (union of and )
Proth primes
Of the form k×2^n + 1, with odd k and k < 2^n.
3, 5, 13, 17, 41, 97, 113, 193, 241, 257, 353, 449, 577, 641, 673, 769, 929, 1153, 1217, 1409, 1601, 2113, 2689, 2753, 3137, 3329, 3457, 4481, 4993, 6529, 7297, 7681, 7937, 9473, 9601, 9857 ()
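Proth numbers admit a simple primality criterion (Proth's theorem): N = k×2^n + 1 with odd k < 2^n is prime exactly when some base a satisfies a^((N−1)/2) ≡ −1 (mod N). An illustrative sketch in Python that searches small bases, which suffices for the small cases below:

```python
# Proth's theorem test (sketch): a found witness proves primality; for the small
# primes generated here some base below 100 always works.
def is_proth_prime(k, n, bases=range(2, 100)):
    N = k * 2**n + 1
    return any(pow(a, (N - 1) // 2, N) == N - 1 for a in bases)

proth_primes = sorted(k * 2**n + 1
                      for n in range(1, 9) for k in range(1, 2**n, 2)
                      if is_proth_prime(k, n))
print(proth_primes[:10])   # [3, 5, 13, 17, 41, 97, 113, 193, 241, 257]
```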
Pythagorean primes
Of the form 4n + 1.
5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, 101, 109, 113, 137, 149, 157, 173, 181, 193, 197, 229, 233, 241, 257, 269, 277, 281, 293, 313, 317, 337, 349, 353, 373, 389, 397, 401, 409, 421, 433, 449 ()
Prime quadruplets
Where (p, p+2, p+6, p+8) are all prime.
(5, 7, 11, 13), (11, 13, 17, 19), (101, 103, 107, 109), (191, 193, 197, 199), (821, 823, 827, 829), (1481, 1483, 1487, 1489), (1871, 1873, 1877, 1879), (2081, 2083, 2087, 2089), (3251, 3253, 3257, 3259), (3461, 3463, 3467, 3469), (5651, 5653, 5657, 5659), (9431, 9433, 9437, 9439) (, , , )
Quartan primes
Of the form x^4 + y^4, where x,y > 0.
2, 17, 97, 257, 337, 641, 881 ()
Ramanujan primes
Integers R_n that are the smallest to give at least n primes from x/2 to x for all x ≥ R_n (all such integers are primes).
2, 11, 17, 29, 41, 47, 59, 67, 71, 97, 101, 107, 127, 149, 151, 167, 179, 181, 227, 229, 233, 239, 241, 263, 269, 281, 307, 311, 347, 349, 367, 373, 401, 409, 419, 431, 433, 439, 461, 487, 491 ()
Regular primes
Primes p that do not divide the class number of the p-th cyclotomic field.
3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 43, 47, 53, 61, 71, 73, 79, 83, 89, 97, 107, 109, 113, 127, 137, 139, 151, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 239, 241, 251, 269, 277, 281 ()
Repunit primes
Primes containing only the decimal digit 1.
11, 1111111111111111111 (19 digits), 11111111111111111111111 (23 digits) ()
The next have 317, 1031, 49081, 86453, 109297, 270343 digits ()
Residue classes of primes
Of the form an + d for fixed integers a and d. Also called primes congruent to d modulo a.
The primes of the form 2n+1 are the odd primes, including all primes other than 2. Some sequences have alternate names: 4n+1 are Pythagorean primes, 4n+3 are the integer Gaussian primes, and 6n+5 are the Eisenstein primes (with 2 omitted). The classes 10n+d (d = 1, 3, 7, 9) are primes ending in the decimal digit d.
2n+1: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53 ()
4n+1: 5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, 101, 109, 113, 137 ()
4n+3: 3, 7, 11, 19, 23, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107 ()
6n+1: 7, 13, 19, 31, 37, 43, 61, 67, 73, 79, 97, 103, 109, 127, 139 ()
6n+5: 5, 11, 17, 23, 29, 41, 47, 53, 59, 71, 83, 89, 101, 107, 113 ()
8n+1: 17, 41, 73, 89, 97, 113, 137, 193, 233, 241, 257, 281, 313, 337, 353 ()
8n+3: 3, 11, 19, 43, 59, 67, 83, 107, 131, 139, 163, 179, 211, 227, 251 ()
8n+5: 5, 13, 29, 37, 53, 61, 101, 109, 149, 157, 173, 181, 197, 229, 269 ()
8n+7: 7, 23, 31, 47, 71, 79, 103, 127, 151, 167, 191, 199, 223, 239, 263 ()
10n+1: 11, 31, 41, 61, 71, 101, 131, 151, 181, 191, 211, 241, 251, 271, 281 ()
10n+3: 3, 13, 23, 43, 53, 73, 83, 103, 113, 163, 173, 193, 223, 233, 263 ()
10n+7: 7, 17, 37, 47, 67, 97, 107, 127, 137, 157, 167, 197, 227, 257, 277 ()
10n+9: 19, 29, 59, 79, 89, 109, 139, 149, 179, 199, 229, 239, 269, 349, 359 ()
12n+1: 13, 37, 61, 73, 97, 109, 157, 181, 193, 229, 241, 277, 313, 337, 349 ()
12n+5: 5, 17, 29, 41, 53, 89, 101, 113, 137, 149, 173, 197, 233, 257, 269 ()
12n+7: 7, 19, 31, 43, 67, 79, 103, 127, 139, 151, 163, 199, 211, 223, 271 ()
12n+11: 11, 23, 47, 59, 71, 83, 107, 131, 167, 179, 191, 227, 239, 251, 263 ()
Safe primes
Where p and (p−1) / 2 are both prime.
5, 7, 11, 23, 47, 59, 83, 107, 167, 179, 227, 263, 347, 359, 383, 467, 479, 503, 563, 587, 719, 839, 863, 887, 983, 1019, 1187, 1283, 1307, 1319, 1367, 1439, 1487, 1523, 1619, 1823, 1907 ()
Self primes in base 10
Primes that cannot be generated by any integer added to the sum of its decimal digits.
3, 5, 7, 31, 53, 97, 211, 233, 277, 367, 389, 457, 479, 547, 569, 613, 659, 727, 839, 883, 929, 1021, 1087, 1109, 1223, 1289, 1447, 1559, 1627, 1693, 1783, 1873 ()
Sexy primes
Where (p, p + 6) are both prime.
(5, 11), (7, 13), (11, 17), (13, 19), (17, 23), (23, 29), (31, 37), (37, 43), (41, 47), (47, 53), (53, 59), (61, 67), (67, 73), (73, 79), (83, 89), (97, 103), (101, 107), (103, 109), (107, 113), (131, 137), (151, 157), (157, 163), (167, 173), (173, 179), (191, 197), (193, 199) (, )
Smarandache–Wellin primes
Primes that are the concatenation of the first n primes written in decimal.
2, 23, 2357 ()
The fourth Smarandache-Wellin prime is the 355-digit concatenation of the first 128 primes that end with 719.
Solinas primes
Of the form 2^a ± 2^b ± 1, where 0 < b < a.
3, 5, 7, 11, 13 ()
Sophie Germain primes
Where p and 2p + 1 are both prime. A Sophie Germain prime has a corresponding safe prime.
2, 3, 5, 11, 23, 29, 41, 53, 83, 89, 113, 131, 173, 179, 191, 233, 239, 251, 281, 293, 359, 419, 431, 443, 491, 509, 593, 641, 653, 659, 683, 719, 743, 761, 809, 911, 953 ()
Stern primes
Primes that are not the sum of a smaller prime and twice the square of a nonzero integer.
2, 3, 17, 137, 227, 977, 1187, 1493 ()
These are the only known Stern primes, and possibly the only ones that exist.
Super-primes
Primes with prime-numbered indexes in the sequence of prime numbers (the 2nd, 3rd, 5th, ... prime).
3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127, 157, 179, 191, 211, 241, 277, 283, 331, 353, 367, 401, 431, 461, 509, 547, 563, 587, 599, 617, 709, 739, 773, 797, 859, 877, 919, 967, 991 ()
Supersingular primes
There are exactly fifteen supersingular primes:
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59, 71 ()
Thabit primes
Of the form 3×2^n − 1.
2, 5, 11, 23, 47, 191, 383, 6143, 786431, 51539607551, 824633720831, 26388279066623, 108086391056891903, 55340232221128654847, 226673591177742970257407 ()
The primes of the form 3×2^n + 1 are related.
7, 13, 97, 193, 769, 12289, 786433, 3221225473, 206158430209, 6597069766657 ()
Prime triplets
Where (p, p+2, p+6) or (p, p+4, p+6) are all prime.
(5, 7, 11), (7, 11, 13), (11, 13, 17), (13, 17, 19), (17, 19, 23), (37, 41, 43), (41, 43, 47), (67, 71, 73), (97, 101, 103), (101, 103, 107), (103, 107, 109), (107, 109, 113), (191, 193, 197), (193, 197, 199), (223, 227, 229), (227, 229, 233), (277, 281, 283), (307, 311, 313), (311, 313, 317), (347, 349, 353) (, , )
Truncatable prime
Left-truncatable
Primes that remain prime when the leading decimal digit is successively removed.
2, 3, 5, 7, 13, 17, 23, 37, 43, 47, 53, 67, 73, 83, 97, 113, 137, 167, 173, 197, 223, 283, 313, 317, 337, 347, 353, 367, 373, 383, 397, 443, 467, 523, 547, 613, 617, 643, 647, 653, 673, 683 ()
Right-truncatable
Primes that remain prime when the least significant decimal digit is successively removed.
2, 3, 5, 7, 23, 29, 31, 37, 53, 59, 71, 73, 79, 233, 239, 293, 311, 313, 317, 373, 379, 593, 599, 719, 733, 739, 797, 2333, 2339, 2393, 2399, 2939, 3119, 3137, 3733, 3739, 3793, 3797 ()
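Because every prefix of a right-truncatable prime must itself be prime, the whole (finite) set can be generated by repeatedly appending a digit to already-found primes. An illustrative sketch in Python using sympy:

```python
# Generate all right-truncatable primes by digit extension (illustrative sketch).
from sympy import isprime

def right_truncatable_primes():
    layer, found = [2, 3, 5, 7], [2, 3, 5, 7]
    while layer:
        # any longer prime must end in 1, 3, 7 or 9
        layer = [10 * p + d for p in layer for d in (1, 3, 7, 9) if isprime(10 * p + d)]
        found.extend(layer)
    return found

rt = right_truncatable_primes()
print(len(rt), rt[:13])   # 83 such primes exist; starts 2, 3, 5, 7, 23, 29, 31, 37, 53, ...
```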
Two-sided
Primes that are both left-truncatable and right-truncatable. There are exactly fifteen two-sided primes:
2, 3, 5, 7, 23, 37, 53, 73, 313, 317, 373, 797, 3137, 3797, 739397 ()
Twin primes
Where (p, p+2) are both prime.
(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), (149, 151), (179, 181), (191, 193), (197, 199), (227, 229), (239, 241), (269, 271), (281, 283), (311, 313), (347, 349), (419, 421), (431, 433), (461, 463) (, )
Unique primes
The list of primes p for which the period length of the decimal expansion of 1/p is unique (no other prime gives the same period).
3, 11, 37, 101, 9091, 9901, 333667, 909091, 99990001, 999999000001, 9999999900000001, 909090909090909091, 1111111111111111111, 11111111111111111111111, 900900900900990990990991 ()
Wagstaff primes
Of the form (2^n + 1) / 3.
3, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, 201487636602438195784363, 845100400152152934331135470251, 56713727820156410577229101238628035243 ()
Values of n:
3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, 10501, 10691, 11279, 12391, 14479, 42737, 83339, 95369, 117239, 127031, 138937, 141079, 267017, 269987, 374321 ()
Wall–Sun–Sun primes
A prime p > 5 is a Wall–Sun–Sun prime if p^2 divides the Fibonacci number F_{p − (p/5)}, where the Legendre symbol (p/5) is +1 if p ≡ ±1 (mod 5) and −1 if p ≡ ±2 (mod 5).
No Wall–Sun–Sun primes are known.
Wieferich primes
Primes p such that a^(p−1) ≡ 1 (mod p^2) for fixed integer a > 1.
2^(p−1) ≡ 1 (mod p^2): 1093, 3511 ()
3^(p−1) ≡ 1 (mod p^2): 11, 1006003 ()
4^(p−1) ≡ 1 (mod p^2): 1093, 3511
5^(p−1) ≡ 1 (mod p^2): 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801 ()
6^(p−1) ≡ 1 (mod p^2): 66161, 534851, 3152573 ()
7^(p−1) ≡ 1 (mod p^2): 5, 491531 ()
8^(p−1) ≡ 1 (mod p^2): 3, 1093, 3511
9^(p−1) ≡ 1 (mod p^2): 2, 11, 1006003
10^(p−1) ≡ 1 (mod p^2): 3, 487, 56598313 ()
11^(p−1) ≡ 1 (mod p^2): 71
12^(p−1) ≡ 1 (mod p^2): 2693, 123653 ()
13^(p−1) ≡ 1 (mod p^2): 2, 863, 1747591 ()
14^(p−1) ≡ 1 (mod p^2): 29, 353, 7596952219 ()
15^(p−1) ≡ 1 (mod p^2): 29131, 119327070011 ()
16^(p−1) ≡ 1 (mod p^2): 1093, 3511
17^(p−1) ≡ 1 (mod p^2): 2, 3, 46021, 48947 ()
18^(p−1) ≡ 1 (mod p^2): 5, 7, 37, 331, 33923, 1284043 ()
19^(p−1) ≡ 1 (mod p^2): 3, 7, 13, 43, 137, 63061489 ()
20^(p−1) ≡ 1 (mod p^2): 281, 46457, 9377747, 122959073 ()
21^(p−1) ≡ 1 (mod p^2): 2
22^(p−1) ≡ 1 (mod p^2): 13, 673, 1595813, 492366587, 9809862296159 ()
23^(p−1) ≡ 1 (mod p^2): 13, 2481757, 13703077, 15546404183, 2549536629329 ()
24^(p−1) ≡ 1 (mod p^2): 5, 25633
25^(p−1) ≡ 1 (mod p^2): 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801
These are all the known Wieferich primes with a ≤ 25.
Wilson primes
Primes p for which p^2 divides (p−1)! + 1.
5, 13, 563 ()
These are the only known Wilson primes.
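By Wilson's theorem every prime p divides (p−1)! + 1; a Wilson prime is one where p^2 does as well, which can be checked directly by accumulating the factorial modulo p^2. An illustrative sketch in Python:

```python
# Direct check for Wilson primes: (p-1)! ≡ -1 (mod p^2).
from sympy import primerange

def is_wilson_prime(p):
    m, f = p * p, 1
    for k in range(2, p):
        f = f * k % m           # keep the running factorial reduced mod p^2
    return f == m - 1

print([p for p in primerange(2, 600) if is_wilson_prime(p)])   # [5, 13, 563]
```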
Wolstenholme primes
Primes p for which the central binomial coefficient satisfies C(2p, p) ≡ 2 (mod p^4).
16843, 2124679 ()
These are the only known Wolstenholme primes.
Woodall primes
Of the form n×2^n − 1.
7, 23, 383, 32212254719, 2833419889721787128217599, 195845982777569926302400511, 4776913109852041418248056622882488319 ()
See also
References
External links
All prime numbers from 31 to 6,469,693,189 for free download.
Lists of Primes at the Prime Pages.
The Nth Prime Page Nth prime through n=10^12, pi(x) through x=3*10^13, Random primes in same range.
Interface to a list of the first 98 million primes (primes less than 2,000,000,000)
Selected prime related sequences in OEIS.
Fischer, R. Thema: Fermatquotient B^(P−1) == 1 (mod P^2) (Lists Wieferich primes in all bases up to 1052)
Prime numbers
Prime | List of prime numbers | [
"Mathematics"
] | 11,679 | [
"Prime numbers",
"Mathematical objects",
"Number theory",
"Numbers",
"Number-related lists"
] |
442,505 | https://en.wikipedia.org/wiki/Particle%20in%20a%20spherically%20symmetric%20potential | In quantum mechanics, a spherically symmetric potential is a system of which the potential only depends on the radial distance from the spherical center and a location in space. A particle in a spherically symmetric potential will behave accordingly to said potential and can therefore be used as an approximation, for example, of the electron in a hydrogen atom or of the formation of chemical bonds.
In the general time-independent case, the dynamics of a particle in a spherically symmetric potential are governed by a Hamiltonian of the form H = p^2/(2 m_0) + V(r).
Here, m_0 is the mass of the particle, p is the momentum operator, and the potential V(r) depends only on r, the vector magnitude of the position vector, that is, the radial distance from the origin (hence the spherical symmetry of the problem).
To describe a particle in a spherically symmetric system, it is convenient to use spherical coordinates, denoted by r, θ and φ. The time-independent Schrödinger equation for the system is then a separable, partial differential equation. This means solutions to the angular dimensions of the equation can be found independently of the radial dimension. This leaves an ordinary differential equation in terms only of the radius, r, which determines the eigenstates for the particular potential, V(r).
Structure of the eigenfunctions
If solved by separation of variables, the eigenstates of the system will have the form ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ),
in which the spherical angles θ and φ represent the polar and azimuthal angle, respectively. Those two factors of ψ are often grouped together as spherical harmonics, so that the eigenfunctions take the form ψ(r, θ, φ) = R(r) Y_{ℓm}(θ, φ).
The differential equation which characterises the function R(r) is called the radial equation.
Derivation of the radial equation
The kinetic energy operator in spherical polar coordinates is T = p^2/(2 m_0) = −(ħ^2/(2 m_0 r^2)) [ ∂/∂r ( r^2 ∂/∂r ) − L^2/ħ^2 ], where L^2 is the squared orbital angular momentum operator. The spherical harmonics satisfy L^2 Y_{ℓm}(θ, φ) = ħ^2 ℓ(ℓ + 1) Y_{ℓm}(θ, φ).
Substituting this into the Schrödinger equation we get a one-dimensional eigenvalue equation for R(r). This equation can be reduced to an equivalent 1-D Schrödinger equation by substituting u(r) = r R(r), where u(r) satisfies −(ħ^2/(2 m_0)) u''(r) + V_eff(r) u(r) = E u(r), which is precisely the one-dimensional Schrödinger equation with an effective potential given by V_eff(r) = V(r) + ħ^2 ℓ(ℓ + 1)/(2 m_0 r^2), where r ∈ [0, ∞). The correction to the potential V(r) is called the centrifugal barrier term.
If , then near the origin, .
Spherically symmetric Hamiltonians
Since the Hamiltonian is spherically symmetric, it is said to be invariant under rotation, i.e.: U†(R) H U(R) = H for every rotation operator U(R).
Since angular momentum operators are generators of rotation, writing U(R) = exp(−i φ n·L/ħ) and applying the Baker–Campbell–Hausdorff lemma we get an expansion of U†(R) H U(R) in nested commutators of n·L with H.
Since this equation holds for all values of φ and every rotation axis n, we get that [L_i, H] = 0, or that every angular momentum component commutes with the Hamiltonian.
Since L^2 and L_z are such mutually commuting operators that also commute with the Hamiltonian, the wavefunctions can be expressed as |α; ℓ, m⟩ or ψ_{αℓm}(r, θ, φ), where α is used to label different wavefunctions.
Since the ladder operators L_± also commute with the Hamiltonian, the energy eigenvalues in such cases are always independent of m.
Combined with the fact that the angular momentum operators only act on the functions of θ and φ, it shows that if the solutions are assumed to be separable as ψ(r, θ, φ) = R(r) Y(θ, φ), the radial wavefunction R(r) can always be chosen independent of the m values. Thus the wavefunction is expressed as: ψ(r, θ, φ) = R(r) Y_{ℓm}(θ, φ).
Solutions for potentials of interest
There are five cases of special importance:
V(r) = 0, or solving the vacuum in the basis of spherical harmonics, which serves as the basis for other cases.
V(r) = V_0 (finite) for r < r_0 and zero elsewhere.
V(r) = 0 for r < r_0 and infinite elsewhere, the spherical equivalent of the square well, useful to describe bound states in a nucleus or quantum dot.
V(r) = (1/2) m_0 ω^2 r^2 for the three-dimensional isotropic harmonic oscillator.
A Coulomb potential V(r) ∝ −1/r, to describe bound states of hydrogen-like atoms.
The solutions are outlined in these cases, which should be compared to their counterparts in cartesian coordinates, cf. particle in a box.
Vacuum case states
Let us now consider V(r) = 0. Introducing the dimensionless variable ρ = kr, with k = √(2 m_0 E)/ħ, the equation becomes a Bessel equation for the function √ρ R(r): its regular solutions for positive energies are given by the Bessel functions of the first kind J_{ℓ+1/2}(ρ), so that the solutions written for R(r) are the so-called spherical Bessel functions
R(r) = j_ℓ(kr) = √(π/(2kr)) J_{ℓ+1/2}(kr).
The solutions of the Schrödinger equation in polar coordinates in vacuum are thus labelled by three quantum numbers: discrete indices ℓ and m, and k varying continuously in [0, ∞): ψ(r, θ, φ) = j_ℓ(kr) Y_{ℓm}(θ, φ). These solutions represent states of definite angular momentum, rather than of definite (linear) momentum, which are provided by plane waves exp(i k·r).
Sphere with finite "square" potential
Consider the potential V(r) = V_0 for r < r_0 and V(r) = 0 elsewhere - that is, inside a sphere of radius r_0 the potential is equal to V_0 and it is zero outside the sphere. A potential with such a finite discontinuity is called a square potential.
We first consider bound states, i.e. states which display the particle mostly inside the box (confined states). Those have an energy less than the potential outside the sphere, i.e., they have negative energy. Also worth noticing is that unlike Coulomb potential, featuring an infinite number of discrete bound states, the spherical square well has only a finite (if any) number because of its finite range.
The resolution essentially follows that of the vacuum case above with normalization of the total wavefunction added, solving two Schrödinger equations — inside and outside the sphere — of the previous kind, i.e., with constant potential. The following constraints must hold for a normalizable, physical wavefunction:
The wavefunction must be regular at the origin.
The wavefunction and its derivative must be continuous at the potential discontinuity.
The wavefunction must converge at infinity.
The first constraint comes from the fact that Neumann and Hankel functions are singular at the origin. The physical requirement that $\psi$ must be defined everywhere selected the Bessel function of the first kind over the other possibilities in the vacuum case. For the same reason, the solution will be of this kind inside the sphere. Note that for bound states, $V_0 < E < 0$. Bound states bring the novelty as compared to the vacuum case that the energy is now negative, so that the wavenumber outside the sphere is imaginary. This, along with the third constraint, selects the Hankel function of the first kind as the only converging solution at infinity (the singularity at the origin of these functions does not matter since we are now outside the sphere). The second constraint, continuity of $\psi$ at $r = r_0$, along with normalization allows the determination of the two constants of the inside and outside solutions. Continuity of the derivative (or logarithmic derivative for convenience) requires quantization of energy.
Sphere with infinite "square" potential
In the case where the potential well is infinitely deep, so that we can take $V = 0$ inside the sphere and $V = \infty$ outside, the problem becomes that of matching the wavefunction inside the sphere (the spherical Bessel functions) with identically zero wavefunction outside the sphere. Allowed energies are those for which the radial wavefunction vanishes at the boundary. Thus, we use the zeros of the spherical Bessel functions to find the energy spectrum and wavefunctions. Calling $u_{\ell,k}$ the kth zero of $j_\ell$, we have
\[
E_{\ell,k} = \frac{\hbar^2\,u_{\ell,k}^2}{2m_0 r_0^2},
\]
so that the problem is reduced to the computation of these zeros $u_{\ell,k}$, typically by using a table or calculator, as these zeros are not solvable for the general case.
In the special case $\ell = 0$ (spherically symmetric orbitals), the spherical Bessel function is $j_0(x) = \sin(x)/x$, whose zeros can be easily given as $u_{0,k} = k\pi$. Their energy eigenvalues are thus
\[
E_{0,k} = \frac{\hbar^2\,k^2\pi^2}{2m_0 r_0^2}, \qquad k = 1, 2, \ldots
\]
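As a numerical illustration (an addition here, not part of the original text), the zeros of the spherical Bessel functions, and hence the infinite-well spectrum, can be located by simple root bracketing; the scanning step, the well radius $r_0 = 1$, and the units $\hbar = m_0 = 1$ are arbitrary assumptions.

import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def spherical_bessel_zeros(ell, count, step=0.1):
    """First `count` positive zeros of j_ell, located by scanning for sign changes."""
    zeros = []
    x = step
    f_prev = spherical_jn(ell, x)
    while len(zeros) < count:
        x_next = x + step
        f_next = spherical_jn(ell, x_next)
        if f_prev * f_next < 0:                      # a sign change brackets a zero
            zeros.append(brentq(lambda t: spherical_jn(ell, t), x, x_next))
        x, f_prev = x_next, f_next
    return np.array(zeros)

# Infinite spherical well of radius r0: E = u_{l,k}^2 / (2 r0^2) in units hbar = m0 = 1.
r0 = 1.0
for ell in (0, 1, 2):
    u = spherical_bessel_zeros(ell, 3)
    print(ell, np.round(u, 4), np.round(u**2 / (2 * r0**2), 4))
# For ell = 0 the zeros are exactly k*pi, reproducing E = (k*pi)^2 / (2 r0^2) above.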
3D isotropic harmonic oscillator
The potential of a 3D isotropic harmonic oscillator is
\[
V(r) = \tfrac{1}{2} m_0 \omega^2 r^2.
\]
An N-dimensional isotropic harmonic oscillator has the energies
\[
E_n = \hbar\omega\left(n + \tfrac{N}{2}\right), \qquad n = 0, 1, 2, \ldots,
\]
i.e., $n$ is a non-negative integral number; $\omega$ is the (same) fundamental frequency of the $N$ modes of the oscillator. In this case $N = 3$ and $V(r) = \tfrac{1}{2} m_0 \omega^2 r^2$, so that the radial Schrödinger equation becomes,
\[
\left[-\frac{\hbar^2}{2m_0}\frac{d^2}{dr^2} + \frac{\hbar^2\,\ell(\ell+1)}{2m_0 r^2} + \tfrac{1}{2} m_0\omega^2 r^2 - E\right]u(r) = 0.
\]
Introducing
and recalling that , we will show that the radial Schrödinger equation has the normalized solution,
where the function is a generalized Laguerre polynomial in of order .
The normalization constant is,
The eigenfunction is associated with energy , where
This is the same result as the quantum harmonic oscillator, with .
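The energy expression itself is elided just above; as a reconstruction of the standard result (not quoted verbatim from this article), the eigenfunction with radial quantum number $k$ and angular momentum $\ell$ has energy
\[
E_{k,\ell} = \hbar\omega\left(2k + \ell + \tfrac{3}{2}\right) = \hbar\omega\left(n + \tfrac{3}{2}\right), \qquad n = 2k + \ell,\; k = 0, 1, 2, \ldots,
\]
which coincides with the $N = 3$ case of the formula quoted earlier; each level $n$ is shared by all pairs $(k, \ell)$ with $2k + \ell = n$.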
Derivation
First we transform the radial equation by a few successive substitutions to the generalized Laguerre differential equation, which has known solutions: the generalized Laguerre functions. Then we normalize the generalized Laguerre functions to unity. This normalization is with the usual volume element $r^2\,dr$.
First we scale the radial coordinate
and then the equation becomes
with .
Consideration of the limiting behavior of at the origin and at infinity suggests the following substitution for ,
This substitution transforms the differential equation to
where we divided through with , which can be done so long as y is not zero.
Transformation to Laguerre polynomials
If the substitution is used, , and the differential operators become
and
The expression between the square brackets multiplying becomes the differential equation characterizing the generalized Laguerre equation (see also Kummer's equation):
with .
Provided is a non-negative integral number, the solutions of this equation are the generalized (associated) Laguerre polynomials
From the conditions on follows: (i) and (ii) and are either both odd or both even. This leads to the condition on given above.
Recovery of the normalized radial wavefunction
Remembering that , we get the normalized radial solution:
The normalization condition for the radial wave function is:
Substituting , gives and the equation becomes:
By making use of the orthogonality properties of the generalized Laguerre polynomials, this equation simplifies to:
Hence, the normalization constant can be expressed as:
Other forms of the normalization constant can be derived by using properties of the gamma function, while noting that and are both of the same parity. This means that is always even, so that the gamma function becomes:
where we used the definition of the double factorial. Hence, the normalization constant is also given by:
Hydrogen-like atoms
A hydrogenic (hydrogen-like) atom is a two-particle system consisting of a nucleus and an electron. The two particles interact through the potential given by Coulomb's law:
\[
V(r) = -\frac{Ze^2}{4\pi\varepsilon_0 r},
\]
where
ε0 is the permittivity of the vacuum,
Z is the atomic number (eZ is the charge of the nucleus),
e is the elementary charge (the magnitude of the charge of the electron),
r is the distance between the electron and the nucleus.
In order to simplify the Schrödinger equation, we introduce the following constants that define the atomic unit of energy and length:
where is the reduced mass in the limit. Substitute and into the radial Schrödinger equation given above. This gives an equation in which all natural constants are hidden,
Two classes of solutions of this equation exist:
(i) $W$ is negative; the corresponding eigenfunctions are square-integrable and the values of $W$ are quantized (discrete spectrum).
(ii) $W$ is non-negative; every real non-negative value of $W$ is physically allowed (continuous spectrum), and the corresponding eigenfunctions are non-square-integrable. Considering only class (i) solutions restricts the solutions to wavefunctions which are bound states, in contrast to the class (ii) solutions that are known as scattering states.
For class (i) solutions with negative W the quantity is real and positive. The scaling of , i.e., substitution of gives the Schrödinger equation:
For $x \to \infty$ the inverse powers of $x$ are negligible and the normalizable (and therefore physical) solution for large $x$ is $e^{-x/2}$. Similarly, for $x \to 0$ the inverse square power dominates and the physical solution for small $x$ is $x^{\ell+1}$.
Hence, to obtain a full range solution we substitute
The equation for becomes,
Provided is a non-negative integer, this equation has polynomial solutions written as
which are generalized Laguerre polynomials of order . The energy becomes
The principal quantum number satisfies . Since , the total radial wavefunction is
with normalization which absorbs extra terms from
via
The corresponding energy is
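The formula itself is elided above; as a hedged reconstruction using the standard hydrogenic result in the atomic units built from the reduced mass $\mu$, the bound-state energy depends only on the principal quantum number $n$:
\[
E_n = -\frac{Z^2}{2n^2}\,\frac{\mu e^4}{(4\pi\varepsilon_0)^2\hbar^2}, \qquad n = 1, 2, 3, \ldots,
\]
which for hydrogen ($Z = 1$, $\mu \approx m_e$) gives the familiar $E_1 \approx -13.6\ \mathrm{eV}$.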
References
Partial differential equations
Quantum models | Particle in a spherically symmetric potential | [
"Physics"
] | 2,407 | [
"Quantum models",
"Quantum mechanics"
] |
18,395,527 | https://en.wikipedia.org/wiki/Photomagnetism | Photomagnetism (photomagnetic effect) is the effect in which a material acquires (and in some cases loses) its ferromagnetic properties in response to light. The current model for this phenomenon is a light-induced electron transfer, accompanied by the reversal of the spin direction of an electron. This leads to an increase in spin concentration, causing the magnetic transition. At present the effect has only been observed to persist (for any significant time) at very low temperatures, but at temperatures such as 5 K it may persist for several days.
Mechanism
The magnetisation and demagnetisation (where not demagnetised thermally) occur through intermediate states. The magnetising and demagnetising wavelengths provide the energy for the system to reach the intermediate states, which then relax non-radiatively to one of the two states (the intermediate states for magnetisation and demagnetisation are different, so the photon flux is not wasted by relaxation back to the state from which the system was just excited). A direct transition from the ground state to the magnetic state and, more importantly, vice versa is a forbidden transition, and this leads to the magnetised state being metastable and persisting for a long period at low temperatures.
Prussian blue analogues
One of the most promising groups of molecular photomagnetic materials are Co-Fe Prussian blue analogues (i.e., compounds with the same structure and similar chemical make-up to Prussian blue). A Prussian blue analogue has a chemical formula M1-2xCo1+x[Fe(CN)6]·zH2O, where x and z are variables (z may be zero) and M is an alkali metal. Prussian blue analogues have a face-centred cubic structure.
It is essential that the structure be non-stoichiometric. In this case some of the iron sites are randomly replaced by water (six molecules of water per replaced iron). This non-stoichiometry is essential to the photomagnetism of Prussian blue analogues, as regions which contain an iron vacancy are more stable in the non-magnetic state and regions without a vacancy are more stable in the magnetic state. By illumination at the correct frequency, one or the other of these regions can be locally switched to its more stable state from the bulk state, triggering the phase change of the entire material. The reverse phase change can be accomplished by exciting the other type of region at the appropriate frequency.
See also
Photomagnetic effect
Photochromism
References
Further reading
Condensed matter physics
Ferromagnetism
Magneto-optic effects | Photomagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 532 | [
"Physical phenomena",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Optical phenomena",
"Magnetic ordering",
"Ferromagnetism",
"Condensed matter physics",
"Magneto-optic effects",
"Matter"
] |
18,397,882 | https://en.wikipedia.org/wiki/Holbrook%20Superconductor%20Project | The Holbrook Superconductor Project is the world's first production superconducting transmission power cable. The lines were commissioned in 2008. The suburban Long Island electrical substation is fed by a 600-meter-long tunnel containing approximately 155,000 meters of high-temperature superconductor wire manufactured by American Superconductor, installed underground and chilled to superconducting temperature with liquid nitrogen.
Project
The project was funded by the United States Department of Energy, and operates as part of the Long Island Power Authority (LIPA) power grid. The project team comprised American Superconductor, Nexans, Air Liquide and LIPA. It broke ground on July 4, 2006, was first energized April 22, 2008, and was commissioned on June 25, 2008. Between commissioning and March 2009 refrigeration events impacted normal operation.
The superconductor is bismuth strontium calcium copper oxide (BSCCO), which superconducts at liquid-nitrogen temperatures. Other parts of the system include a liquid nitrogen storage tank, a Brayton-cycle helium refrigerator, and a number of cryostats which manage the transition between cryogenic and ambient temperatures. The system capacity is 574 MVA at an operating voltage of 138 kV and a maximum current of 2400 A.
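These ratings are mutually consistent if the link is a three-phase AC circuit; as a rough check (the three-phase assumption is ours, not stated in the text above), the apparent power is
\[
S = \sqrt{3}\,V_{\mathrm{LL}}\,I \approx \sqrt{3} \times 138\ \mathrm{kV} \times 2400\ \mathrm{A} \approx 574\ \mathrm{MVA},
\]
matching the quoted system capacity.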
See also
Technological applications of superconductivity
References
Superconductivity
Electric power transmission systems in the United States
Electric power distribution
Energy infrastructure on Long Island, New York | Holbrook Superconductor Project | [
"Physics",
"Materials_science",
"Engineering"
] | 304 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
2,504,081 | https://en.wikipedia.org/wiki/Baeyer%E2%80%93Drewsen%20indigo%20synthesis | The Baeyer–Drewsen indigo synthesis (1882) is an organic reaction in which indigo is prepared from 2-nitrobenzaldehyde and acetone. The reaction was developed by Adolf von Baeyer and Viggo Drewsen in 1882 to produce the first synthetic indigo at laboratory scale. This procedure is not used at industrial scale.
The reaction is classified as an aldol condensation. As a practical route to indigo, this method was displaced by routes from aniline.
Mechanism
Note
In the English literature this reaction is sometimes called the Baeyer–Drewson reaction, although the name of the author of the original paper was spelled Drewsen.
References
External links
Lab Manual
Lab-synthesis of indigo
Nitrogen heterocycle forming reactions
Organic reactions
Name reactions | Baeyer–Drewsen indigo synthesis | [
"Chemistry"
] | 149 | [
"Name reactions",
"Organic reactions"
] |
2,504,412 | https://en.wikipedia.org/wiki/Hilbert%20manifold | In mathematics, a Hilbert manifold is a manifold modeled on Hilbert spaces. Thus it is a separable Hausdorff space in which each point has a neighbourhood homeomorphic to an infinite-dimensional Hilbert space. The concept of a Hilbert manifold provides a possibility of extending the theory of manifolds to the infinite-dimensional setting. Analogously to the finite-dimensional situation, one can define a differentiable Hilbert manifold by considering a maximal atlas in which the transition maps are differentiable.
Properties
Many basic constructions of manifold theory, such as the tangent space of a manifold and a tubular neighbourhood of a submanifold (of finite codimension) carry over from the finite dimensional situation to the Hilbert setting with little change. However, in statements involving maps between manifolds, one often has to restrict consideration to Fredholm maps, that is, maps whose differential at every point is Fredholm. The reason for this is that Sard's lemma holds for Fredholm maps, but not in general. Notwithstanding this difference, Hilbert manifolds have several very nice properties.
Kuiper's theorem: If X is a compact topological space or has the homotopy type of a CW complex, then every (real or complex) Hilbert space bundle over X is trivial. In particular, every Hilbert manifold is parallelizable.
Every smooth Hilbert manifold can be smoothly embedded onto an open subset of the model Hilbert space.
Every homotopy equivalence between two Hilbert manifolds is homotopic to a diffeomorphism. In particular every two homotopy equivalent Hilbert manifolds are already diffeomorphic. This stands in contrast to lens spaces and exotic spheres, which demonstrate that in the finite-dimensional situation, homotopy equivalence, homeomorphism, and diffeomorphism of manifolds are distinct properties.
Although Sard's theorem does not hold in general, every continuous map from a Hilbert manifold can be arbitrarily closely approximated by a smooth map which has no critical points.
Examples
Any Hilbert space H is a Hilbert manifold with a single global chart given by the identity function on H. Moreover, since H is a vector space, the tangent space to H at any point is canonically isomorphic to H itself, and so has a natural inner product, the "same" as the one on H. Thus H can be given the structure of a Riemannian manifold with metric g(u, v) = ⟨u, v⟩, where ⟨·, ·⟩ denotes the inner product in H.
Similarly, any open subset of a Hilbert space is a Hilbert manifold and a Riemannian manifold under the same construction as for the whole space.
There are several mapping spaces between manifolds which can be viewed as Hilbert manifolds by only considering maps of suitable Sobolev class. For example, we can consider the space of all maps from the unit circle into a manifold M. This can be topologized via the compact-open topology as a subspace of the space of all continuous mappings from the circle to M, that is, the free loop space of M. The Sobolev-type mapping space described above is homotopy equivalent to the free loop space. This makes it suited to the study of the algebraic topology of the free loop space, especially in the field of string topology. We can do an analogous Sobolev construction for the based loop space, making it a Hilbert submanifold of the free loop space whose codimension equals the dimension of M.
See also
Global analysis – which uses Hilbert manifolds and other kinds of infinite-dimensional manifolds
References
. Contains a general introduction to Hilbert manifolds and many details about the free loop space.
. Another introduction with more differential topology.
N. Kuiper, "The homotopy type of the unitary group of Hilbert spaces", Topology 3 (1965), 19–30.
J. Eells, K. D. Elworthy, "On the differential topology of Hilbert manifolds", Global analysis. Proceedings of Symposia in Pure Mathematics, Volume XV 1970, 41-44.
J. Eells, K. D. Elworthy, "Open embeddings of certain Banach manifolds", Annals of Mathematics 91 (1970), 465-485
D. Chataur, "A Bordism Approach to String Topology", preprint https://arxiv.org/abs/math.at/0306080
External links
Hilbert manifold at the Manifold Atlas
Differential geometry
General topology
Generalized manifolds
Manifolds
Nonlinear functional analysis
Riemannian geometry
Riemannian manifolds
Structures on manifolds | Hilbert manifold | [
"Mathematics"
] | 885 | [
"General topology",
"Space (mathematics)",
"Riemannian manifolds",
"Metric spaces",
"Topological spaces",
"Topology",
"Manifolds"
] |
2,505,125 | https://en.wikipedia.org/wiki/Rattleback | A rattleback is a semi-ellipsoidal top which will rotate on its axis in a preferred direction. If spun in the opposite direction, it becomes unstable, "rattles" to a stop and reverses its spin to the preferred direction.
For most rattlebacks the spin-reversal motion occurs when the rattleback is spun in one direction, but not when spun in the other. Some exceptional rattlebacks will reverse when spun in either direction.
This counterintuitive behavior makes the rattleback a physical curiosity that has excited human imagination since prehistoric times.
A rattleback may also be known as an "anagyre", "(rebellious) celt", "Celtic stone", "druid stone", "rattlerock", "Robinson Reverser", "spin bar", "wobble stone" (or "wobblestone"), and by product names including "ARK", "Bizzaro Swirl", "Space Pet" and "Space Toy".
History
Archeologists who investigated ancient Celtic and Egyptian sites in the 19th century found celts which exhibited the spin-reversal motion. The antiquarian word celt (the "c" is soft, pronounced as "s") describes lithic tools and weapons shaped like an adze, axe, chisel, or hoe.
The first modern descriptions of these celts were published in the 1890s when Gilbert Walker wrote his "On a curious dynamical property of celts" for the Proceedings of the Cambridge Philosophical Society in Cambridge, England, and "On a dynamical top" for the Quarterly Journal of Pure and Applied Mathematics in Somerville, Massachusetts, US.
Additional examinations of rattlebacks were published in 1909 and 1918, and by the 1950s and 1970s, several more examinations were made. But, the popular fascination with the objects has increased notably since the 1980s when no fewer than 28 examinations were published.
Size and materials
Rattleback artifacts are typically stone and come in various sizes. Modern ones sold as novelty puzzles and toys are generally made of plastic, wood, or glass, and come in a range of sizes from a few inches long upwards. A rattleback can also be made by bending a spoon.
Two rattleback design types exist: they have either an asymmetrical base with a skewed rolling axis, or a symmetrical base with offset weighting at the ends.
Physics
The spin-reversal motion follows from the growth of instabilities on the other rotation axes, that are rolling (on the main axis) and pitching (on the crosswise axis).
When there is an asymmetry in the mass distribution with respect to the plane formed by the pitching and the vertical axes, a coupling of these two instabilities arises; one can imagine how the asymmetry in mass will deviate the rattleback when pitching, which will create some rolling.
The amplified mode will differ depending on the spin direction, which explains the rattleback's asymmetrical behavior. Depending on whether it is rather a pitching or rolling instability that dominates, the growth rate will be very high or quite low.
This explains why, due to friction, most rattlebacks appear to exhibit spin-reversal motion only when spun in the pitching-unstable direction, also known as the strong reversal direction. When the rattleback is spun in the "stable direction", also known as the weak reversal direction, friction and damping often slow the rattleback to a stop before the rolling instability has time to fully build. Some rattlebacks, however, exhibit "unstable behavior" when spun in either direction, and incur several successive spin reversals per spin.
Other ways to add motion to a rattleback include tapping by pressing down momentarily on either of its ends, and rocking by pressing down repeatedly on either of its ends.
For a comprehensive analysis of the rattleback's motion, see V. Ph. Zhuravlev and D. M. Klimov (2008). Previous papers were based on simplified assumptions and limited to studying the local instability of its steady-state oscillation.
Realistic mathematical modelling of a rattleback is presented by G. Kudra and J. Awrejcewicz (2015). They focused on modelling of the contact forces and tested different versions of models of friction and rolling resistance, obtaining good agreement with the experimental results.
Numerical simulations predict that a rattleback situated on a harmonically oscillating base can exhibit rich bifurcation dynamics, including different types of periodic, quasi-periodic and chaotic motions.
See also
Tesla's Egg of Columbus
Tennis racket theorem
References
External links
Doherty, Paul. Scientific Explorations. Spoon Rattleback. 2000.
Sanderson, Jonathan. Activity of the Week: Rattleback.
Simon Fraser University: Celt. physics demonstration. Burnaby, British Columbia, Canada.
University of Cambridge Millennium Mathematics Project "Boomerangs and Gyroscopes."
Puzzles
Traditional toys
Wooden toys
Novelty items
Educational toys
Articles containing video clips
Spinning tops
Classical mechanics | Rattleback | [
"Physics"
] | 1,008 | [
"Mechanics",
"Classical mechanics"
] |
2,505,416 | https://en.wikipedia.org/wiki/Biocatalysis | Biocatalysis refers to the use of living (biological) systems or their parts to speed up (catalyze) chemical reactions. In biocatalytic processes, natural catalysts, such as enzymes, perform chemical transformations on organic compounds. Both enzymes that have been more or less isolated and enzymes still residing inside living cells are employed for this task. Modern biotechnology, specifically directed evolution, has made the production of modified or non-natural enzymes possible. This has enabled the development of enzymes that can catalyze novel small molecule transformations that may be difficult or impossible using classical synthetic organic chemistry. Utilizing natural or modified enzymes to perform organic synthesis is termed chemoenzymatic synthesis; the reactions performed by the enzyme are classified as chemoenzymatic reactions.
History
Biocatalysis underpins some of the oldest chemical transformations known to humans, for brewing predates recorded history. The oldest records of brewing are about 6000 years old and refer to the Sumerians.
The employment of enzymes and whole cells have been important for many industries for centuries. The most obvious uses have been in the food and drink businesses where the production of wine, beer, cheese etc. is dependent on the effects of the microorganisms.
More than one hundred years ago, biocatalysis was employed to do chemical transformations on non-natural man-made organic compounds, with the last 30 years seeing a substantial increase in the application of biocatalysis to produce fine chemicals, especially for the pharmaceutical industry.
Since biocatalysis deals with enzymes and microorganisms, it is historically classified separately from "homogeneous catalysis" and "heterogeneous catalysis". However, mechanistically speaking, biocatalysis is simply a special case of heterogeneous catalysis.
Advantages of chemoenzymatic synthesis
-Enzymes are environmentally benign, being completely degraded in the environment.
-Most enzymes typically function under mild or biological conditions, which minimizes problems of undesired side-reactions such as decomposition, isomerization, racemization and rearrangement, which often plague traditional methodology.
-Enzymes selected for chemoenzymatic synthesis can be immobilized on a solid support. These immobilized enzymes demonstrate improved stability and re-usability.
-Through the development of protein engineering, specifically site-directed mutagenesis and directed evolution, enzymes can be modified to enable non-natural reactivity. Modifications may also broaden the substrate range, enhance the reaction rate, or increase catalyst turnover.
-Enzymes exhibit extreme selectivity towards their substrates. Typically enzymes display three major types of selectivity:
Chemoselectivity: Since the purpose of an enzyme is to act on a single type of functional group, other sensitive functionalities, which would normally react to a certain extent under chemical catalysis, survive. As a result, biocatalytic reactions tend to be "cleaner" and laborious purification of product(s) from impurities emerging through side-reactions can largely be omitted.
Regioselectivity and diastereoselectivity: Due to their complex three-dimensional structure, enzymes may distinguish between functional groups which are chemically situated in different regions of the substrate molecule.
Enantioselectivity: Since almost all enzymes are made from L-amino acids, enzymes are chiral catalysts. As a consequence, any type of chirality present in the substrate molecule is "recognized" upon the formation of the enzyme-substrate complex. Thus a prochiral substrate may be transformed into an optically active product and both enantiomers of a racemic substrate may react at different rates.
These reasons, and especially the latter, are the major reasons why synthetic chemists have become interested in biocatalysis. This interest in turn is mainly due to the need to synthesize enantiopure compounds as chiral building blocks for pharmaceutical drugs and agrochemicals.
Asymmetric biocatalysis
The use of biocatalysis to obtain enantiopure compounds can be divided into two different methods:
Kinetic resolution of a racemic mixture
Biocatalyzed asymmetric synthesis
In kinetic resolution of a racemic mixture, the presence of a chiral object (the enzyme) converts one of the stereoisomers of the reactant into its product at a greater reaction rate than for the other reactant stereoisomer. The stereochemical mixture has now been transformed into a mixture of two different compounds, making them separable by normal methodology.
Biocatalyzed kinetic resolution is utilized extensively in the purification of racemic mixtures of synthetic amino acids. Many popular amino acid synthesis routes, such as the Strecker Synthesis, result in a mixture of R and S enantiomers. This mixture can be purified by (I) acylating the amine using an anhydride and then (II) selectively deacylating only the L enantiomer using hog kidney acylase. These enzymes are typically extremely selective for one enantiomer leading to very large differences in rate, allowing for selective deacylation. Finally the two products are now separable by classical techniques, such as chromatography.
The maximum yield in such kinetic resolutions is 50%, since a yield of more than 50% means that some of the wrong isomer has also reacted, giving a lower enantiomeric excess. Such reactions must therefore be terminated before equilibrium is reached. If it is possible to perform such resolutions under conditions where the two substrate enantiomers are racemizing continuously, all substrate may in theory be converted into enantiopure product. This is called dynamic resolution.
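To put a number on that ceiling (an illustration added here, not a procedure from the text), a kinetic resolution can be caricatured as two competing first-order reactions consuming the two enantiomers of a racemic substrate; the rate constants below are arbitrary assumptions.

import numpy as np

def kinetic_resolution(k_fast=1.0, k_slow=0.02, t=np.linspace(0.0, 5.0, 6)):
    """Yield and product enantiomeric excess for a racemic substrate (0.5 + 0.5)
    consumed by two competing first-order reactions."""
    fast = 0.5 * (1 - np.exp(-k_fast * t))   # product formed from the fast enantiomer
    slow = 0.5 * (1 - np.exp(-k_slow * t))   # product formed from the slow enantiomer
    total_yield = fast + slow
    ee = (fast - slow) / np.maximum(total_yield, 1e-12)
    return total_yield, ee

for y, ee in zip(*kinetic_resolution()):
    print(f"yield = {y:6.1%}   product ee = {ee:6.1%}")
# The product ee stays high only while the yield remains well below 50%;
# pushing the conversion past ~50% forces the slow enantiomer to react and erodes the ee.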
In biocatalyzed asymmetric synthesis, a non-chiral unit becomes chiral in such a way that the different possible stereoisomers are formed in different quantities. The chirality is introduced into the substrate by influence of enzyme, which is chiral. Yeast is a biocatalyst for the enantioselective reduction of ketones.
The Baeyer–Villiger oxidation is another example of a biocatalytic reaction. In one study a specially designed mutant of Candida antarctica was found to be an effective catalyst for the Michael addition of acrolein with acetylacetone at 20 °C in the absence of additional solvent.
Another study demonstrates how racemic nicotine (mixture of S and R-enantiomers 1 in scheme 3) can be deracemized in a one-pot procedure involving a monoamine oxidase isolated from Aspergillus niger which is able to oxidize only the amine S-enantiomer to the imine 2 and involving an ammonia–borane reducing couple which can reduce the imine 2 back to the amine 1. In this way the S-enantiomer will continuously be consumed by the enzyme while the R-enantiomer accumulates. It is even possible to stereoinvert pure S to pure R.
Photoredox enabled biocatalysis
Recently, photoredox catalysis has been applied to biocatalysis, enabling unique, previously inaccessible transformations. Photoredox chemistry relies upon light to generate free-radical intermediates. These radical intermediates are achiral, so racemic mixtures of product are obtained when no external chiral environment is provided. Enzymes can provide this chiral environment within the active site, stabilizing a particular conformation and favoring the formation of a single, enantiopure product. Photoredox enabled biocatalysis reactions fall into two categories:
Internal coenzyme/cofactor photocatalyst
External photocatalyst
Certain common hydrogen atom transfer (HAT) cofactors (NADPH and flavin) can operate as single electron transfer (SET) reagents. Although these species are capable of HAT without irradiation, their redox potentials are enhanced by nearly 2.0 V upon visible light irradiation. When paired with their respective enzymes (typically ene-reductases), this phenomenon has been utilized by chemists to develop enantioselective reduction methodologies. For example, medium-sized lactams can be synthesized in the chiral environment of an ene-reductase through a reductive, Baldwin-favored radical cyclization terminated by enantioselective HAT from NADPH.
The second category of photoredox enabled biocatalytic reactions uses an external photocatalyst (PC). Many types of PCs with a large range of redox potentials can be utilized, allowing for greater tunability of reactivity compared to using a cofactor. Rose bengal, an external PC, was utilized in tandem with an oxidoreductase to enantioselectively deacylate medium-sized alpha-acyl ketones.
Using an external PC has some downsides. For example, external PCs typically complicate reaction design because the PC may react with both the bound and unbound substrate. If a reaction occurs between the unbound substrate and the PC, enantioselectivity is lost and other side reactions may occur.
Agricultural uses
Bioenzymes are also biocatalysts. They are prepared by fermentation of organic waste, jaggery and water in a 3:1:10 ratio for three months. The preparation increases the soil microbe population and speeds up composting and decomposition, which is why it is counted among catalysts. It is regarded as an effective organic liquid fertilizer, said to improve soil health, and is applied diluted with water.
Further reading
Kim, Jinhyun; Lee, Sahng Ha; Tieves, Florian; Paul, Caroline E.; Hollmann, Frank; Park, Chan Beum (5 July 2019). "Nicotinamide adenine dinucleotide as a photocatalyst". Science Advances. 5 (7): eaax0501. doi:10.1126/sciadv.aax0501.
See also
List of enzymes
Industrial enzymes
References
External links
Austrian Centre of Industrial Biotechnology official website
The Centre of Excellence for Biocatalysis - CoEBio3
The University of Exeter - Biocatalysis Centre
Center for Biocatalysis and Bioprocessing - The University of Iowa
TU Delft - Biocatalysis & Organic Chemistry (BOC)
KTH Stockholm - Biocatalysis Research Group
Institute of Technical Biocatalysis at the Hamburg University of Technology (TUHH)
Biocascades Project
Enzymes
Organic chemistry
Catalysis | Biocatalysis | [
"Chemistry"
] | 2,156 | [
"Catalysis",
"Chemical kinetics",
"nan"
] |
2,505,643 | https://en.wikipedia.org/wiki/Triiodide | In chemistry, triiodide usually refers to the triiodide ion, . This anion, one of the polyhalogen ions, is composed of three iodine atoms. It is formed by combining aqueous solutions of iodide salts and iodine. Some salts of the anion have been isolated, including thallium(I) triiodide (Tl+[I3]−) and ammonium triiodide ([NH4]+[I3]−). Triiodide is observed to be a red colour in solution.
Nomenclature
Other chemical compounds with "triiodide" in their name may contain three iodide centers that are not bonded to each other as the triiodide ion, but exist instead as separate iodine atoms or iodide ions. Examples include nitrogen triiodide (NI3) and phosphorus triiodide (PI3), where individual iodine atoms are covalently bonded to a central atom. As some cations, such as ammonium, can in principle form compounds with both triiodide and iodide ions, compounds containing iodide anions in a 3:1 stoichiometric ratio should only be referred to as triiodides in cases where the triiodide anion is present. It may also be helpful to indicate the oxidation number of a metal cation, where appropriate. For example, the covalent molecule gallium triiodide (Ga2I6) is better referred to as gallium(III) iodide to emphasise that it is iodide anions that are present, and not triiodide.
Preparation
The following exergonic equilibrium gives rise to the triiodide ion:
I2 + I− ⇌ I3−
In this reaction, iodide is viewed as a Lewis base, and the iodine is a Lewis acid. The process is analogous to the reaction of S8 with sodium sulfide (which forms polysulfides) except that the higher polyiodides have branched structures.
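A rough numerical sketch of this equilibrium follows (an illustration added here, not from the original text); the formation constant K ≈ 7 × 10² L·mol⁻¹ is an approximate literature value for aqueous solution near room temperature, and the starting concentrations are arbitrary assumptions.

from scipy.optimize import brentq

K = 7.0e2          # assumed formation constant for I2 + I- <=> I3-  (L/mol, ~25 °C)
i2_total = 0.005   # mol/L of iodine added
i_total = 0.050    # mol/L of iodide added

def residual(x):
    """x = equilibrium [I3-]; mass balance fixes [I2] and [I-]."""
    i2 = i2_total - x
    iodide = i_total - x
    return K * i2 * iodide - x          # zero when K = [I3-] / ([I2][I-])

x = brentq(residual, 0.0, min(i2_total, i_total) - 1e-12)
print(f"[I3-] = {x:.4f} M, [I2] = {i2_total - x:.5f} M, [I-] = {i_total - x:.4f} M")
# With excess iodide, almost all of the dissolved I2 ends up as the triiodide ion.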
Structure and bonding
The ion is linear and symmetrical. According to valence shell electron pair repulsion theory, the central iodine atom has three equatorial lone pairs, and the terminal iodine atoms are bonded axially in a linear fashion, as a consequence of the three equatorial lone pairs on the central iodine atom. In the molecular orbital model, a common explanation for the hypervalent bonding on the central iodine involves a three-center four-electron bond. The I−I bond is longer than in diatomic iodine, I2.
In ionic compounds, the bond lengths and angles of triiodide vary depending on the nature of the cation. The triiodide anion is easily polarised and in many salts, one I−I bond becomes shorter than the other. Only in combination with large cations, e.g. a quaternary ammonium such as [N(CH3)4]+, may the triiodide remain roughly symmetrical.
In solution phase, the bond lengths and angles of triiodide vary depending on the nature of solvent. The protic solvents tend to localize the triiodide anion's excess charge, resulting in the triiodide anion's asymmetric structure. For example, the triiodide anion in methanol has an asymmetric bent structure with a charge localized on the longer end of the anion.
The dimensions of the triiodide [Ia−Ib−Ic]− bonds in a few sample compounds are shown below:
compound             Ia−Ib (pm)    Ib−Ic (pm)    angle (°)
TlI3                 306.3         282.6         177.9
RbI3                 305.1         283.3         178.11
CsI3                 303.8         284.2         178.00
NH4I3                311.4         279.7         178.55
I3− (in methanol)    309.0         296.0         152.0
Properties
The triiodide ion is the simplest polyiodide; several higher polyiodides exist. In solution, it appears yellow in low concentrations, and brown at higher concentrations. The triiodide ion is responsible for the well-known blue-black color which arises when iodine solutions interact with starch. Iodide does not react with starch; nor do solutions of iodine in nonpolar solvents.
Lugol's iodine contains potassium iodide and a stoichiometric amount of elemental iodine, so that significant amounts of triiodide ion exist in this solution. Tincture of iodine, although nominally a solution of elemental iodine in ethanol, also contains significant amounts of triiodide, due to its content of both iodide and water.
Photochemistry
Triiodide is a model system in photochemistry. Its reaction mechanism has been studied in gas phase, solution and the solid state. In gas phase, the reaction proceeds in multiple pathways that include iodine molecule, metastable ions and iodine radicals as photoproducts, which are formed by two-body and three-body dissociation. In condensed phases, due to confinement, geminate recombination is more common. In solution, only two-body dissociation of triiodide has been observed. In the protic solvents, an iodine atom at the shorter end of the triiodide anion dissociates upon photoexcitation showing two-body dissociation. In the solid state, the triiodide photochemistry has been studied in compounds involving quaternary ammonium cations, such as tetrabutylammonium triiodide. It has been shown that the solid state photoreaction mechanism depends on the light wavelength, yielding fast recovery in a few picoseconds or going through a two-stage process that involves the formation and break-up of a tetraiodide intermediate on longer timescales. Besides, triiodide photochemistry is an important contributor in the environmental cycle of iodine. Because of the presence of heavy iodine atoms and the well-calibrated chemical pathways, triiodide has also become a computational benchmark system for relativistic quantum chemistry.
Electrochemistry
The redox reactions of triiodide and iodide have been proposed as critical steps in dye-sensitized solar cells and rechargeable batteries.
See also
Polyiodide
Tribromide
Polyhalogen ions
Three-center four-electron bond
Iodine–starch test
Iodometry
Povidone-iodine
Lugol's iodine
Dye-sensitized solar cell
Organic superconductor
References
External links
Kinetic study of the iodine–persulfate reaction
Anions
Iodides
Hypervalent molecules
Homonuclear triatomic molecules
Polyhalides | Triiodide | [
"Physics",
"Chemistry"
] | 1,491 | [
"Matter",
"Anions",
"Molecules",
"Hypervalent molecules",
"Ions"
] |
2,505,822 | https://en.wikipedia.org/wiki/Helicopter%20flight%20controls | Helicopter flight controls are used to achieve and maintain controlled aerodynamic helicopter flight. Changes to the aircraft flight control system transmit mechanically to the rotor, producing aerodynamic effects on the rotor blades that make the helicopter move in a desired way. To tilt forward and back (pitch) or sideways (roll) requires that the controls alter the angle of attack of the main rotor blades cyclically during rotation, creating differing amounts of lift at different points in the cycle. To increase or decrease overall lift requires that the controls alter the angle of attack for all blades collectively by equal amounts at the same time, resulting in ascent, descent, acceleration and deceleration.
A typical helicopter has three flight control inputs: the cyclic stick, the collective lever, and the anti-torque pedals. Depending on the complexity of the helicopter, the cyclic and collective may be linked together by a mixing unit, a mechanical or hydraulic device that combines the inputs from both and then sends along the "mixed" input to the control surfaces to achieve the desired result. The manual throttle may also be considered a flight control because it is needed to maintain rotor speed on smaller helicopters without governors. Governors also help the pilot control the collective pitch on the helicopter's main rotors, keeping flight more stable and accurate.
Controls
Cyclic
The cyclic control, commonly called the cyclic stick or just cyclic, is similar in appearance on most helicopters to a control stick from a fixed-wing aircraft. The cyclic stick commonly rises up from beneath the front of each pilot's seat. The Robinson R22 has a "teetering" cyclic design connected to a central column located between the two seats. Helicopters with fly-by-wire systems allow a cyclic-style controller to be mounted to the side of the pilot seat.
The cyclic is used to control the main rotor in order to change the helicopter's direction of movement. In a hover, the cyclic controls the movement of the helicopter forward, back, and laterally. During forward flight, the cyclic control inputs cause flight path changes similar to fixed-wing aircraft flight; left or right inputs cause the helicopter to roll into a turn in the desired direction, and forward and back inputs change the pitch attitude of the helicopter resulting in altitude changes (climbing or descending flight).
The control is called the cyclic because it independently changes the mechanical pitch angle, or feathering angle, of each main rotor blade according to its position in the cycle. The pitch is changed so that each blade has the same angle of incidence as it passes the same point in the cycle: each blade rotates slightly along its long axis, in sequence, as it passes that point, changing the lift it generates there. If that point is dead ahead, the blade pitch increases briefly in that direction. Thus, if the pilot pushes the cyclic forward, the rotor disk tilts forward, and the helicopter is drawn straight ahead. If the pilot pushes the cyclic to the right, the rotor disk tilts to the right.
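How collective and cyclic inputs combine into the feathering angle of an individual blade can be sketched with the standard first-harmonic blade-pitch expression $\theta(\psi) = \theta_0 + \theta_{1c}\cos\psi + \theta_{1s}\sin\psi$; the toy mixer below is an illustrative assumption, not any particular helicopter's control law, and the sign conventions and input values are arbitrary.

import math

def blade_pitch(azimuth_deg, collective_deg, cyclic_cos_deg, cyclic_sin_deg):
    """Feathering angle of one blade at rotor azimuth psi (degrees).

    collective_deg  -- theta_0, applied equally to every blade
    cyclic_cos_deg  -- theta_1c, cosine (lateral) cyclic component
    cyclic_sin_deg  -- theta_1s, sine (longitudinal) cyclic component
    """
    psi = math.radians(azimuth_deg)
    return (collective_deg
            + cyclic_cos_deg * math.cos(psi)
            + cyclic_sin_deg * math.sin(psi))

# One revolution of a single blade with some collective and a cyclic input applied:
for azimuth in range(0, 360, 90):
    print(azimuth, round(blade_pitch(azimuth, collective_deg=8.0,
                                     cyclic_cos_deg=0.0, cyclic_sin_deg=-2.0), 2))
# Every blade carries the same collective offset, while the cyclic terms raise and
# lower its pitch once per revolution according to where it is in the cycle.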
Any rotor system has a delay between the point in rotation where the controls introduce a change in pitch and the point where the desired change in the rotor blade's flight occurs. This difference is caused by phase lag, often confused with gyroscopic precession. A rotor is an oscillatory system that obeys the laws that govern vibration—which, depending on the rotor system, may resemble the behaviour of a gyroscope.
Collective
The collective pitch control, or collective lever, is normally located on the left side of the pilot's seat with an adjustable friction control to prevent inadvertent movement. The collective changes the pitch angle of all the main rotor blades collectively (i.e., all at the same time) and is independent of their position in the rotational cycle. Therefore, if a collective input is made, all the blades change equally, and as a result, the helicopter increases or decreases its total lift derived from the rotor. In level flight this would cause a climb or descent, while with the helicopter pitched forward an increase in total lift would produce an acceleration together with a given amount of ascent.
If a helicopter suffers a power failure a pilot can adjust the collective pitch to keep the rotor spinning, generating enough lift to touch down and skid in a relatively soft landing.
The collective pitch control in a Boeing CH-47 Chinook is called a thrust control, but serves the same purpose, except that it controls two rotor systems, applying differential collective pitch.
Throttle
Helicopter rotors are designed to operate at a specific rotational speed. The throttle controls the power of the engine, which is connected to the rotor by a transmission. The throttle setting must maintain enough engine power to keep the rotor speed within the limits where the rotor produces enough lift for flight. In many helicopters, the throttle control is a single or dual motorcycle-style twist grip mounted on the collective control (rotation is opposite of a motorcycle throttle), while some multi-engine helicopters have power levers.
In many piston engine-powered helicopters, the pilot manipulates the throttle to maintain rotor speed. Turbine engine helicopters, and some piston helicopters, use governors or other electro-mechanical control systems to maintain rotor speed and relieve the pilot of routine responsibility for that task. (There is normally also a manual reversion available in the event of a governor failure.)
Anti-torque pedals
The anti-torque pedals are located in the same place as the rudder pedals in an airplane, and serve a similar purpose—they control the direction that the nose of the aircraft points. Applying the pedal in a given direction changes the tail rotor blade pitch, increasing or reducing tail rotor thrust and making the nose yaw in the direction of the applied pedal.
Later designs known as 'NOTAR' use an air stream to provide anti-torque control instead of a tail rotor. This air stream is generated in the fuselage by a small fan or turbine, and directed out of the rear of the tail-boom through vent holes. Internal control vanes can vary this flow, allowing the yaw axis to be controlled. NOTAR systems are safer than using a spinning tail rotor, and the absence of the rotor also removes its associated drag, potentially increasing efficiency.
Flight conditions
There are three basic flight conditions for a helicopter: hover, forward flight and autorotation.
Hover
Some pilots consider hovering the most challenging aspect of helicopter flight. Because helicopters are generally dynamically unstable, deviations from a given attitude are not corrected without pilot input. Thus, frequent control inputs and corrections must be made by the pilot to keep the helicopter at a desired location and altitude. The pilot's use of control inputs in a hover is as follows: the cyclic is used to eliminate drift in the horizontal plane (e.g., forward, aft, and side to side motion); the collective is used to maintain desired altitude; and the tail rotor (or anti-torque system) pedals are used to control nose direction or heading. It is the interaction of these controls that can make learning to hover difficult, since often an adjustment in any one control requires adjustment of the other two, necessitating pilot familiarity with the coupling of control inputs needed to produce smooth flight.
Forward flight
In forward flight, a helicopter's flight controls behave more like those in a fixed-wing aircraft. Moving the cyclic forward makes the nose pitch down, thus losing altitude and increasing airspeed. Moving the cyclic back makes the nose pitch up, slowing the helicopter and making it climb. Increasing collective (power) while maintaining a constant airspeed induces a climb, while decreasing collective (power) makes the helicopter descend. Coordinating these two inputs, down collective plus aft (back) cyclic or up collective plus forward cyclic causes airspeed changes while maintaining a constant altitude. The pedals serve the same function in both a helicopter and an airplane, to maintain balanced flight. This is done by applying a pedal input in the direction necessary to center the ball in the turn and bank indicator.
Forward flight in a helicopter has limitations different from a fixed-wing aircraft. In a fixed-wing aircraft the maximum airspeed is limited by the stress that the airframe can withstand; in a helicopter it is limited by the RPM of the rotor and the effective airspeed over each blade.
In a stationary hover, each rotor blade will experience the same airspeed at a constant RPM. In forward flight conditions, one rotor blade will be moving into the oncoming air stream while the other moves away from it. At certain airspeeds, this can create a dangerous condition in which the retreating rotor blade stalls, causing unstable flight.
Autorotation
Differential pitch control
For helicopters with two horizontally-mounted rotors, changes in attitude often require having the two rotors behave inversely in response to the standard control inputs from the pilot. Those with coaxial rotors (such as the Kamov Ka-50) have both rotors mounted on the same mast, one above the other on concentric drive shafts contra-rotating—spinning in opposite directions on a shared axis—and make yaw changes by increasing the collective pitch of the rotor spinning in the direction of the desired turn while simultaneously reducing the collective pitch of the other, creating dissymmetry of torque.
Tandem-rotor craft (such as in the Boeing CH-47 Chinook) also employ two rotors spinning in opposite directions—termed counter-rotation when it occurs from two separate points on the same airframe—but have the rotors on separate drive shafts through masts at the nose and tail. This configuration uses differential collective pitch to change the overall pitch attitude of the aircraft. When the pilot moves the cyclic forward to pitch the nose down and accelerate forward, the helicopter responds by decreasing collective pitch on the front rotor and increasing collective pitch on the rear rotor proportionally, pivoting the two ends around their common center of mass. Changes in yaw are made with differential cyclic pitch, the front rotor altering cyclic pitch in the direction desired and the opposite pitch applied to the rear, once again pivoting the craft around its center.
Conversely, the synchropter and transverse-mounted rotor counter rotating rotorcraft (such as the Bell/Boeing V-22 tilt rotor) have two large horizontal rotor assemblies mounted side by side, and use differential collective pitch to affect the roll of the aircraft. Like tandem rotors, differential cyclic pitch is used to control movement about the yaw axis.
See also
Aeronautical engineering
Autogyro
Helicopter rotor
References
Notes
Sources
Flight Standards Service. Rotorcraft Flying Handbook: FAA Manual H-8083-21. Washington, DC: Flight Standards Service, Federal Aviation Administration, U.S. Dept. of Transportation, 2001. .
AOPA: Aircraft Owners and Pilots Association http://www.aopa.org/News-and-Video/All-News/2013/November/27/rotocraft-rookie-helicopter-controls
Helicopter components
Aircraft controls
Helicopter | Helicopter flight controls | [
"Engineering"
] | 2,231 | [
"Systems engineering",
"Aircraft systems"
] |
2,506,847 | https://en.wikipedia.org/wiki/Hormonal%20sentience | Hormonal sentience, first described by Robert A. Freitas Jr., describes the information processing rate in plants, which is mostly based on hormones instead of neurons as in all major animals (except sponges). Plants can to some degree communicate with each other, and there are even examples of one-way communication with animals.
Acacia trees produce tannin to defend themselves when they are grazed upon by animals. The airborne scent of the tannin is picked up by other acacia trees, which then start to produce tannin themselves as a protection from the nearby animals.
When attacked by caterpillars, some plants can release chemical signals to attract parasitic wasps that attack the caterpillars.
A similar phenomenon can be found not only between plants and animals, but also between fungi and animals. There exists some sort of communication between a fungus garden and workers of the leaf-cutting ant Atta sexdens rubropilosa. If the garden is fed with plants that are poisonous for the fungus, it signals this to the ants, which then will avoid fertilizing the fungus garden with any more of the poisonous plant.
The Venus flytrap, during a 1- to 20-second sensitivity interval, counts two stimuli before snapping shut on its insect prey, a processing peak of 1 bit/s. Its mass is 10–100 grams, so the flytrap's sentience quotient (SQ) is about +1. Plants generally take hours to respond to stimuli, though, so vegetative SQs tend to cluster around −2.
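Spelling out the arithmetic behind these figures, and assuming Freitas' definition of the sentience quotient, $\mathrm{SQ} = \log_{10}(I/M)$ with the processing rate $I$ in bits per second and the mass $M$ in kilograms:
\[
\mathrm{SQ} \approx \log_{10}\!\left(\frac{1\ \mathrm{bit/s}}{0.01\text{--}0.1\ \mathrm{kg}}\right) \approx +1\ \text{to}\ +2,
\]
consistent with the quoted value of about +1 for the flytrap.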
See also
Biosemiotics
Phytosemiotics
Plant intelligence
Plant perception (physiology)
:Category:Plant intelligence
References
External links
Xenopsychology by Robert A. Freitas Jr.
Botany | Hormonal sentience | [
"Biology"
] | 362 | [
"Plants",
"Botany"
] |
2,507,104 | https://en.wikipedia.org/wiki/Aposematism | Aposematism is the advertising by an animal, whether terrestrial or marine, to potential predators that it is not worth attacking or eating. This unprofitability may consist of any defenses which make the prey difficult to kill and eat, such as toxicity, venom, foul taste or smell, sharp spines, or aggressive nature. These advertising signals may take the form of conspicuous coloration, sounds, odours, or other perceivable characteristics. Aposematic signals are beneficial for both predator and prey, since both avoid potential harm.
The term was coined in 1877 by Edward Bagnall Poulton for Alfred Russel Wallace's concept of warning coloration. Aposematism is exploited in Müllerian mimicry, where species with strong defences evolve to resemble one another. By mimicking similarly coloured species, the warning signal to predators is shared, causing them to learn more quickly at less of a cost.
A genuine aposematic signal that a species actually possesses chemical or physical defences is not the only way to deter predators. In Batesian mimicry, a mimicking species resembles an aposematic model closely enough to share the protection, while many species have bluffing deimatic displays which may startle a predator long enough to enable an otherwise undefended prey to escape.
Etymology
The term aposematism was coined by the English zoologist Edward Bagnall Poulton in his 1890 book The Colours of Animals. He based the term on the Ancient Greek words ἀπό apo 'away' and σῆμα sēma 'sign', referring to signs that warn other animals away.
Defence mechanism
The function of aposematism is to prevent attack, by warning potential predators that the prey animal has defenses such as being unpalatable or poisonous. The easily detected warning is a primary defense mechanism, and the non-visible defenses are secondary. Aposematic signals are primarily visual, using bright colours and high-contrast patterns such as stripes. Warning signals are honest indications of noxious prey, because conspicuousness evolves in tandem with noxiousness. Thus, the brighter and more conspicuous the organism, the more toxic it usually is. This is in contrast to deimatic displays, which attempt to startle a predator with a threatening appearance but which are bluffing, unsupported by any strong defences.
The most common and effective colours are red, yellow, black, and white. These colours provide strong contrast with green foliage, resist changes in shadow and lighting, are highly chromatic, and provide distance-dependent camouflage. Some forms of warning coloration provide this distance-dependent camouflage by having an effective pattern and color combination that does not allow for easy detection by a predator from a distance, but is warning-like from close proximity, allowing for an advantageous balance between camouflage and aposematism. Warning coloration evolves in response to background, light conditions, and predator vision. Visible signals may be accompanied by odors, sounds or behavior to provide a multi-modal signal which is more effectively detected by predators.
Unpalatability, broadly understood, can be created in a variety of ways. Some insects such as the ladybird or tiger moth contain bitter-tasting chemicals, while the skunk produces a noxious odor, and the poison glands of the poison dart frog, the sting of a velvet ant or neurotoxin in a black widow spider make them dangerous or painful to attack. Tiger moths advertise their unpalatability by either producing ultrasonic noises which warn bats to avoid them, or by warning postures which expose brightly coloured body parts (see Unkenreflex), or exposing eyespots. Velvet ants (actually parasitic wasps) such as Dasymutilla occidentalis both have bright colours and produce audible noises when grabbed (via stridulation), which serve to reinforce the warning. Among mammals, predators can be dissuaded when a smaller animal is aggressive and able to defend itself, as for example in honey badgers.
Prevalence
In terrestrial ecosystems
Aposematism is widespread in insects, but less so in vertebrates, being mostly confined to a smaller number of reptile, amphibian, and fish species, and some foul-smelling or aggressive mammals. Pitohuis, red and black birds whose toxic feathers and skin apparently owe their toxicity to the poisonous beetles they ingest, could be included. It has been proposed that aposematism played a role in human evolution, body odour carrying a warning to predators of large hominins able to defend themselves with weapons.
Perhaps the most numerous aposematic vertebrates are the poison dart frogs (family: Dendrobatidae). These neotropical anuran amphibians exhibit a wide spectrum of coloration and toxicity. Some species in this poison frog family (particularly Dendrobates, Epipedobates, and Phyllobates) are conspicuously coloured and sequester one of the most toxic alkaloids among all living species. Within the same family, there are also cryptic frogs (such as Colostethus and Mannophryne) that lack these toxic alkaloids. Although these frogs display an extensive array of coloration and toxicity, there is very little genetic difference between the species. Evolution of their conspicuous coloration is correlated to traits such as chemical defense, dietary specialization, acoustic diversification, and increased body mass.
Some plants are thought to employ aposematism to warn herbivores of unpalatable chemicals or physical defences such as prickled leaves or thorns. Many insects, such as cinnabar moth caterpillars, acquire toxic chemicals from their host plants. Among mammals, skunks and zorillas advertise their foul-smelling chemical defences with sharply contrasting black-and-white patterns on their fur, while the similarly-patterned badger and honey badger advertise their sharp claws, powerful jaws, and aggressive natures. Some brightly coloured birds such as passerines with contrasting patterns may also be aposematic, at least in females; but since male birds are often brightly coloured through sexual selection, and their coloration is not correlated with edibility, it is unclear whether aposematism is significant.
The sound-producing rattle of rattlesnakes is an acoustic form of aposematism. Sound production by the caterpillar of the Polyphemus moth, Antheraea polyphemus, may similarly be acoustic aposematism, connected to and preceded by chemical defences. Similar acoustic defences exist in a range of Bombycoidea caterpillars.
In marine ecosystems
The existence of aposematism in marine ecosystems has been debated. Many marine organisms, particularly those on coral reefs, are brightly coloured or patterned, including sponges, corals, molluscs, and fish, with little or no connection to chemical or physical defenses. Caribbean reef sponges are brightly coloured, and many species are full of toxic chemicals, but there is no statistical relationship between the two factors.
Nudibranch molluscs are the most commonly cited examples of aposematism in marine ecosystems, but the evidence for this has been contested, mostly because (1) there are few examples of mimicry among species, (2) many species are nocturnal or cryptic, and (3) bright colours at the red end of the colour spectrum are rapidly attenuated as a function of water depth. For example, the Spanish Dancer nudibranch (genus Hexabranchus), among the largest of tropical marine slugs, potently chemically defended, and brilliantly red and white, is nocturnal and has no known mimics.
Mimicry is to be expected as Batesian mimics with weak defences can gain a measure of protection from their resemblance to aposematic species. Other studies have concluded that nudibranchs such as the slugs of the family Phyllidiidae from Indo-Pacific coral reefs are aposematically coloured. Müllerian mimicry has been implicated in the coloration of some Mediterranean nudibranchs, all of which derive defensive chemicals from their sponge diet.
The crown-of-thorns starfish, like other starfish such as Metrodira subulata, has conspicuous coloration and conspicuous long, sharp spines, as well as cytolytic saponins, chemicals which could function as an effective defence; this evidence is argued to be sufficient for such species to be considered aposematic.
It has been proposed that aposematism and mimicry is less evident in marine invertebrates than terrestrial insects because predation is a more intense selective force for many insects, which disperse as adults rather than as larvae and have much shorter generation times. Further, there is evidence that fish predators such as blueheads may adapt to visual cues more rapidly than do birds, making aposematism less effective. However, there is experimental evidence that pink warty sea cucumbers are aposematic, and that the chromatic and achromatic signals that they provide to predators both independently reduce the rate of attack.
Blue-ringed octopuses are venomous. They spend much of their time hiding in crevices whilst displaying effective camouflage patterns with their dermal chromatophore cells. However, if they are provoked, they quickly change colour, becoming bright yellow with each of the 50-60 rings flashing bright iridescent blue within a third of a second. It is often stated this is an aposematic warning display, but the hypothesis has rarely if ever been tested.
Behaviour
The mechanism of defence relies on the memory of the would-be predator; a bird that has once experienced a foul-tasting grasshopper will endeavor to avoid a repetition of the experience. As a consequence, aposematic species are often gregarious. Before the memory of a bad experience attenuates, the predator may have the experience reinforced through repetition. Aposematic organisms are often slow-moving, as they have little need for speed and agility. Instead, their morphology is frequently tough and resistant to injury, thereby allowing them to escape once the predator is warned off.
Aposematic species do not need to hide or stay still as cryptic organisms do, so aposematic individuals benefit from more freedom in exposed areas and can spend more time foraging, allowing them to find more and better quality food. They may make use of conspicuous mating displays, including vocal signals, which may then develop through sexual selection.
Origins of the theory
Wallace, 1867
In a letter to Alfred Russel Wallace dated 23 February 1867, Charles Darwin wrote, "On Monday evening I called on Bates & put a difficulty before him, which he could not answer, & as on some former similar occasion, his first suggestion was, 'you had better ask Wallace'. My difficulty is, why are caterpillars sometimes so beautifully & artistically coloured?" Darwin was puzzled because his theory of sexual selection (where females choose their mates based on how attractive they are) could not apply to caterpillars since they are immature and hence not sexually active.
Wallace replied the next day with the suggestion that since some caterpillars "...are protected by a disagreeable taste or odour, it would be a positive advantage to them never to be mistaken for any of the palatable catterpillars [sic], because a slight wound such as would be caused by a peck of a bird's bill almost always I believe kills a growing . Any gaudy & conspicuous colour therefore, that would plainly distinguish them from the brown & green eatable , would enable birds to recognise them easily as at a kind not fit for food, & thus they would escape seizure which is as bad as being eaten."
Since Darwin was enthusiastic about the idea, Wallace asked the Entomological Society of London to test the hypothesis. In response, the entomologist John Jenner Weir conducted experiments with caterpillars and birds in his aviary, and in 1869 he provided the first experimental evidence for warning coloration in animals. The evolution of aposematism surprised 19th-century naturalists because the probability of its establishment in a population was presumed to be low, since a conspicuous signal suggested a higher chance of predation.
Poulton, 1890
Wallace coined the term "warning colours" in an article about animal coloration in 1877. In 1890 Edward Bagnall Poulton renamed the concept aposematism, from the Greek roots apo (away from) and sema (sign), in his book The Colours of Animals.
Evolution
Aposematism is paradoxical in evolutionary terms, as it makes individuals conspicuous to predators, so they may be killed and the trait eliminated before predators learn to avoid it. If warning coloration puts the first few individuals at such a strong disadvantage, it would never last in the species long enough to become beneficial.
Supported explanations
There is evidence for explanations involving dietary conservatism, in which predators avoid new prey because it is an unknown quantity; this is a long-lasting effect. Dietary conservatism has been demonstrated experimentally in some species of birds and fish.
Further, birds recall and avoid objects that are both conspicuous and foul-tasting longer than objects that are equally foul-tasting but cryptically coloured. This suggests that Wallace's original view, that warning coloration helped to teach predators to avoid prey thus coloured, was correct. However, some birds (inexperienced starlings and domestic chicks) also innately avoid conspicuously coloured objects, as demonstrated using mealworms painted yellow and black to resemble wasps, with dull green controls. This implies that warning coloration works at least in part by stimulating the evolution of predators to encode the meaning of the warning signal, rather than by requiring each new generation to learn the signal's meaning. All of these results contradict the idea that novel, brightly coloured individuals would be more likely to be eaten or attacked by predators.
Alternative hypotheses
Other explanations are possible. Predators might innately fear unfamiliar forms (neophobia) long enough for them to become established, but this is likely to be only temporary.
Alternatively, prey animals might be sufficiently gregarious to form clusters tight enough to enhance the warning signal. If the species was already unpalatable, predators might learn to avoid the cluster, protecting gregarious individuals with the new aposematic trait. Gregariousness would assist predators to learn to avoid unpalatable, gregarious prey. Aposematism could also be favoured in dense populations even if these are not gregarious.
Another possibility is that a gene for aposematism might be recessive and located on the X chromosome. If so, predators would learn to associate the colour with unpalatability from males with the trait, while heterozygous females carry the trait until it becomes common and predators understand the signal. Well-fed predators might also ignore aposematic morphs, preferring other prey species.
A further explanation is that females might prefer brighter males, so sexual selection could result in aposematic males having higher reproductive success than non-aposematic males if they can survive long enough to mate. Sexual selection is strong enough to allow seemingly maladaptive traits to persist despite other factors working against the trait.
Once aposematic individuals reach a certain threshold population, for whatever reason, the predator learning process would be spread out over a larger number of individuals and therefore is less likely to wipe out the trait for warning coloration completely. If the population of aposematic individuals all originated from the same few individuals, the predator learning process would result in a stronger warning signal for surviving kin, resulting in higher inclusive fitness for the dead or injured individuals through kin selection.
A theory for the evolution of aposematism posits that it arises by reciprocal selection between predators and prey, where distinctive features in prey, which could be visual or chemical, are selected by non-discriminating predators, and where, concurrently, avoidance of distinctive prey is selected by predators. Concurrent reciprocal selection (CRS) may entail learning by predators or it may give rise to unlearned avoidances by them. Aposematism arising by CRS operates without special conditions of the gregariousness or the relatedness of prey, and it is not contingent upon predator sampling of prey to learn that aposematic cues are associated with unpalatability or other unprofitable features.
Mimicry
Aposematism is a sufficiently successful strategy to have had significant effects on the evolution of both aposematic and non-aposematic species.
Non-aposematic species have often evolved to mimic the conspicuous markings of their aposematic counterparts. For example, the hornet moth is a deceptive mimic of the yellowjacket wasp; it resembles the wasp, but has no sting. A predator which avoids the wasp will to some degree also avoid the moth. This is known as Batesian mimicry, after Henry Walter Bates, a British naturalist who studied Amazonian butterflies in the second half of the 19th century. Batesian mimicry is frequency dependent: it is most effective when the ratio of mimic to model is low; otherwise, predators will encounter the mimic too often.
A second form of mimicry occurs when two aposematic organisms share the same anti-predator adaptation and non-deceptively mimic each other, to the benefit of both species, since fewer individuals of either species need to be attacked for predators to learn to avoid both of them. This form of mimicry is known as Müllerian mimicry, after Fritz Müller, a German naturalist who studied the phenomenon in the Amazon in the late 19th century.
Many species of bee and wasp that occur together are Müllerian mimics. Their similar coloration teaches predators that a striped pattern is associated with being stung. Therefore, a predator which has had a negative experience with any such species will likely avoid any that resemble it in the future. Müllerian mimicry is found in vertebrates such as the mimic poison frog (Ranitomeya imitator) which has several morphs throughout its natural geographical range, each of which looks very similar to a different species of poison frog which lives in that area.
See also
Handicap principle
References
Sources
External links
Signalling theory
Animal communication
Antipredator adaptations
Evolution by phenotype
Warning coloration
Ecology
Chemical ecology | Aposematism | [
"Chemistry",
"Biology"
] | 3,710 | [
"Chemical ecology",
"Ecology",
"Biological defense mechanisms",
"Antipredator adaptations",
"Biochemistry"
] |
2,507,339 | https://en.wikipedia.org/wiki/Pyrazinamide | Pyrazinamide is a medication used to treat tuberculosis. For active tuberculosis, it is often used with rifampicin, isoniazid, and either streptomycin or ethambutol. It is not generally recommended for the treatment of latent tuberculosis. It is taken by mouth.
Common side effects include nausea, loss of appetite, muscle and joint pains, and rash. More serious side effects include gout, liver toxicity, and sensitivity to sunlight. It is not recommended in those with significant liver disease or porphyria. It is unclear if use during pregnancy is safe but it is likely okay during breastfeeding. Pyrazinamide is in the antimycobacterial class of medications. How it works is not entirely clear.
Pyrazinamide was first made in 1936, but did not come into wide use until 1972. It is on the World Health Organization's List of Essential Medicines. Pyrazinamide is available as a generic medication.
Medical uses
Pyrazinamide is only used in combination with other drugs such as isoniazid and rifampicin in the treatment of Mycobacterium tuberculosis and as directly observed therapy (DOT). It is never used on its own. It has no other indicated medical uses. In particular, it is not used to treat other mycobacteria; Mycobacterium bovis and Mycobacterium leprae are innately resistant to pyrazinamide.
Pyrazinamide is used in the first 2 months of treatment to reduce the duration of treatment required. Regimens not containing pyrazinamide must be taken for 9 months or more.
Pyrazinamide is a potent antiuricosuric drug and consequently has an off-label use in the diagnosis of causes of hypouricemia and hyperuricosuria. It acts on URAT1.
Adverse effects
The most common (roughly 1%) side effect of pyrazinamide is joint pains (arthralgia), but this is not usually so severe that patients need to stop taking it. Pyrazinamide can precipitate gout flares by decreasing renal excretion of uric acid.
The most dangerous side effect of pyrazinamide is hepatotoxicity, which is dose-related. The old dose for pyrazinamide was 40–70 mg/kg daily and the incidence of drug-induced hepatitis has fallen significantly since the recommended dose has been reduced to 12–30 mg/kg daily. In the standard four-drug regimen (isoniazid, rifampicin, pyrazinamide, ethambutol), pyrazinamide is the most common cause of drug-induced hepatitis. It is not possible to clinically distinguish pyrazinamide-induced hepatitis from hepatitis caused by isoniazid or rifampicin; test dosing is required (this is discussed in detail in tuberculosis treatment).
Other side effects include nausea and vomiting, anorexia, sideroblastic anemia, skin rash, urticaria, pruritus, dysuria, interstitial nephritis, malaise, rarely porphyria, and fever.
Pharmacokinetics
Pyrazinamide is well absorbed orally. It crosses inflamed meninges and is an essential part of the treatment of tuberculous meningitis. It is metabolised by the liver and the metabolic products are excreted by the kidneys.
Pyrazinamide is routinely used in pregnancy in the UK and the rest of the world; the World Health Organization (WHO) recommends its use in pregnancy; and extensive clinical experience shows that it is safe. In the US, pyrazinamide is not used in pregnancy, citing insufficient evidence of safety. Pyrazinamide is removed by haemodialysis, so doses should always be given at the end of a dialysis session.
Mechanism of action
Pyrazinamide is a prodrug that stops the growth of M. tuberculosis.
Pyrazinamide diffuses into the granuloma of M. tuberculosis, where the tuberculosis enzyme pyrazinamidase converts pyrazinamide to the active form pyrazinoic acid. Under acidic conditions of pH 5 to 6, the pyrazinoic acid that slowly leaks out converts to the protonated conjugate acid, which is thought to diffuse easily back into the bacilli and accumulate. The net effect is that more pyrazinoic acid accumulates inside the bacillus at acid pH than at neutral pH.
Pyrazinoic acid was thought to inhibit the enzyme fatty acid synthase (FAS) I, which is required by the bacterium to synthesize fatty acids although this has been discounted. The accumulation of pyrazinoic acid was also suggested to disrupt membrane potential and interfere with energy production, necessary for survival of M. tuberculosis at an acidic site of infection. However, since an acidic environment is not essential for pyrazinamide susceptibility and pyrazinamide treatment does not lead to intrabacterial acidification nor rapid disruption of membrane potential, this model has also been discounted. Pyrazinoic acid was proposed to bind to the ribosomal protein S1 (RpsA) and inhibit trans-translation, but more detailed experiments have shown that it does not have this activity.
The current hypothesis is that pyrazinoic acid blocks synthesis of coenzyme A. Pyrazinoic acid binds weakly to the aspartate decarboxylase PanD, triggering its degradation. This is an unusual mechanism of action in that pyrazinamide does not directly block the action of its target, but indirectly triggers its destruction.
Resistance
Mutations in the pncA gene of M. tuberculosis, which encodes a pyrazinamidase and converts pyrazinamide to its active form pyrazinoic acid, are responsible for the majority of pyrazinamide resistance in M. tuberculosis strains. A few pyrazinamide-resistant strains with mutations in the rpsA gene have also been identified. However, a direct association between these rpsA mutations and pyrazinamide resistance has not been established. The pyrazinamide-resistant M. tuberculosis strain DHMH444, which harbors a mutation in the carboxy terminal coding region of rpsA, is fully susceptible to pyrazinoic acid and pyrazinamide resistance of this strain was previously associated with decreased pyrazinamidase activity. Further, this strain was found to be susceptible to pyrazinamide in a mouse model of tuberculosis. Thus, current data indicate that rpsA mutations are not likely to be associated with pyrazinamide resistance. Currently, three main methods of testing are used for pyrazinamide resistance: 1) phenotypic tests where a tuberculosis strain is grown in the presence of increasing concentrations of pyrazinamide, 2) measuring levels of pyrazinamidase enzyme produced by the tuberculosis strain, or 3) looking for mutations in the pncA gene of tuberculosis. Concerns exist that the most widely used method for phenotypic resistance testing may overestimate the number of resistant strains.
Global resistance of tuberculosis to pyrazinamide has been estimated to occur in 16% of all cases, and in 60% of people with multidrug-resistant tuberculosis.
Abbreviations
The abbreviations PZA and Z are standard, and used commonly in the medical literature, although best practice discourages the abbreviating of drug names to prevent mistakes.
Presentation
Pyrazinamide is a generic drug, and is available in a wide variety of presentations. Pyrazinamide tablets form the bulkiest part of the standard tuberculosis treatment regimen. Pyrazinamide tablets are so large that some people find them impossible to swallow; pyrazinamide syrup is an option.
Pyrazinamide is also available as part of fixed-dose combinations with other TB drugs such as isoniazid and rifampicin (Rifater is an example).
History
Pyrazinamide was first discovered and patented in 1936, but not used against tuberculosis until 1952. Its discovery as an antitubercular agent was remarkable since it has no activity against tuberculosis in vitro, due to not being active at a neutral pH, so would ordinarily not be expected to work in vivo. However, nicotinamide was known to have activity against tuberculosis and pyrazinamide was thought to have a similar effect. Experiments in mice at Lederle and Merck confirmed its ability to kill tuberculosis and it was rapidly used in humans.
References
Anti-tuberculosis drugs
Carboxamides
Enones
Prodrugs
Pyrazines
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Orphan drugs | Pyrazinamide | [
"Chemistry"
] | 1,858 | [
"Chemicals in medicine",
"Prodrugs"
] |
2,507,552 | https://en.wikipedia.org/wiki/Microchimerism | Microchimerism is the presence of a small number of cells in an individual that have originated from another individual and are therefore genetically distinct. This phenomenon may be related to certain types of autoimmune diseases although the responsible mechanisms are unclear. The term comes from the prefix "micro" + "chimerism" based on the hybrid Chimera of Greek mythology. The concept was first discovered in the 1960s with the term gaining usage in the 1970s.
Types
Human
In humans (and perhaps in all placental mammals), the most common form is fetomaternal microchimerism (also known as fetal cell microchimerism or fetal chimerism) whereby cells from a fetus pass through the placenta and establish cell lineages within the mother. Fetal cells have been documented to persist and multiply in the mother for several decades. The exact phenotype of these cells is unknown, although several different cell types have been identified, such as various immune lineages, mesenchymal stem cells, and placental-derived cells. A 2012 study at the Fred Hutchinson Cancer Research Center, Seattle, has detected cells with the Y chromosome in multiple areas of the brains of deceased women.
Fetomaternal microchimerism occurs during pregnancy and shortly after giving birth for most women. However, not all women who have had children contain fetal cells. Studies suggest that fetomaternal microchimerism could be influenced by killer-cell immunoglobulin-like (KIR) ligands. Lymphocytes also influence the development of persisting fetomaternal microchimerism since natural killer cells compose about 70% of lymphocytes in the first trimester of pregnancy. KIR patterns on maternal natural killer cells of the mother and KIR ligands on the fetal cells could have an effect on fetomaternal microchimerism. In one study, mothers with KIR2DS1 exhibited higher levels of fetomaternal microchimerism compared to mothers who were negative for this activating KIR.
The potential health consequences of these cells are unknown. One hypothesis is that these fetal cells might trigger a graft-versus-host reaction leading to autoimmune disease. This offers a potential explanation for why many autoimmune diseases are more prevalent in middle-aged women. Another hypothesis is that fetal cells home to injured or diseased maternal tissue where they act as stem cells and participate in repair. It is also possible that the fetal cells are merely innocent bystanders and have no effect on maternal health.
After giving birth, about 50–75% of women carry fetal immune cell lines. Maternal immune cells are also found in the offspring, resulting in maternal→fetal microchimerism, though this phenomenon is about half as frequent as the former.
Microchimerism has also been shown to exist after blood transfusions to a severely immunocompromised population of patients who suffered trauma.
Other possible sources of microchimerism include gestation, an individual's older sibling, twin sibling, or vanished twin, with the cells being received in utero. Fetal-maternal microchimerism is especially prevalent after abortion or miscarriage.
Animal
Microchimerism occurs in most pairs of twins in cattle. In cattle (and other bovines), the placentas of fraternal twins usually fuse and the twins share blood circulation, resulting in exchange of cell lines. If the twins are a male–female pair, then XX/XY microchimerism results, and male hormones partially masculinize the heifer (female), creating a martin heifer or freemartin. Freemartins appear female, but are infertile and so cannot be used for breeding or dairy production. Microchimerism provides a method of diagnosing the condition, because male genetic material can be detected in a blood sample.
Fetomaternal microchimerism in the brain
Several studies have identified male DNA in the brains of both humans and mice who have previously been pregnant with a male fetus. It has been suggested that the fetal-derived cells can differentiate into those capable of presenting immunomarkers on their surface. There has been no strong evidence to say microchimerism of the maternal brain leads to disease; however, Parkinson's disease correlates with a higher incidence of brain microchimeras. Alzheimer's disease studies support nearly the opposite correlation: the more fetal-derived cells present, the lower the chance of the patient having had Alzheimer's.
Maternal tolerance to paternal-fetal antigens
There are many mechanisms at the maternal-fetal interface to prevent immune rejection of fetal cells. Nevertheless, systemic immunological changes occur in pregnant women. For example, condition of women suffering from autoimmune disorders (e.g. rheumatoid arthritis, multiple sclerosis) improves during pregnancy. These changes in immune responses during pregnancy extend to maternal components specific to fetal antigens, because of feto-maternal cell transfer and their retention in mother tissues.
During pregnancy, numbers of fetal cells in maternal tissues increase and correlate with expansion of CD4+ regulatory T cells (Tregs). Decreased expansion and decidual accumulation of Treg cause pregnancy complications (preeclampsia, abortions).
In mouse models, most of the mother's fetal-specific CD8+ T cells undergo clonal deletion and express low levels of chemokine receptors and ligands – this prevents the remaining fetal-specific CD8+ T cells from entering the maternal-fetal interface. The mother's fetal-specific CD4+ T cells proliferate and, due to FOXP3 expression, differentiate into Treg cells. Mouse models show that fetal-specific Treg cells are necessary for successful pregnancy.
Fetal tolerance to noninherited maternal antigens
Fetal T cells accumulate during in utero development. Even though the fetus is exposed to noninherited maternal antigens (NIMAs), fetal CD4+ T cells are capable of alloantigen-induced proliferation, preferentially differentiating to Treg cells and preventing a fetal immune response to maternal antigens. This expanded immune tolerance persists in both mother and offspring after birth and allows microchimeric cells to be retained in tissues.
Postnatal tolerance to NIMAs
NIMA-specific tolerance causes some interesting immunological phenotypes: sensitization to erythrocyte Rhesus factor (Rh) antigens is reduced among Rh- women born to Rh+ women, long-term kidney allograft survival is improved in NIMA-matched donor-recipient sibling pairs, and the severity of graft-versus-host disease after bone marrow transplantation is reduced when recipients of donor stem cells are NIMA-matched.
Cross-fostering animal studies show that when postnatal NIMA exposure though breastfeeding is eliminated, survival of NIMA-matched allografts is reduced. This suggests that to maintain NIMA-specific tolerance in offspring, breastfeeding is essential, but ingestion of mother's cells alone does not prime NIMA-specific tolerance. Both prenatal and postnatal exposure to mother's cells is required to maintain NIMA-specific tolerance.
Benefits of microchimeric cells
The severity of preexisting autoimmune disorders is reduced during pregnancy and it is most apparent when fetal microchimeric cells levels are highest - during the last trimester. These cells can also replace injured maternal cells and recover tissue function (type I diabetes mouse model showed replacement of defective maternal islet cells by fetal-derived pancreatic cells). Fetal microchimeric cells can differentiate into cell types that infiltrate and replace injured cells in models of Parkinson's disease or myocardial infarction. They also help in wound healing by neoangiogenesis. Seeding of fetal microchimeric cells into maternal tissues has been proposed to promote care of offspring after birth (seeding of maternal breast tissue may promote lactation, and seeding of brain may enhance maternal attention).
Relationship with autoimmune diseases and breast cancer
Microchimerism has been implicated in autoimmune diseases. Independent studies repeatedly suggested that microchimeric cells of fetal origin may be involved in the pathogenesis of systemic sclerosis. Moreover, microchimeric cells of maternal origin may be involved in the pathogenesis of a group of autoimmune diseases found in children, i.e. juvenile idiopathic inflammatory myopathies (one example would be juvenile dermatomyositis). Microchimerism has now been further implicated in other autoimmune diseases, including systemic lupus erythematosus. Contrarily, an alternative hypothesis on the role of microchimeric cells in lesions is that they may be facilitating tissue repair of the damaged organ.
Moreover, fetal immune cells have also been frequently found in breast cancer stroma as compared to samples taken from healthy women. It is not clear, however, whether fetal cell lines promote the development of tumors or, contrarily, protect women from developing breast carcinoma.
Systemic lupus erythematosus
The presence of fetal cells in mothers can be associated with benefits when it comes to certain autoimmune diseases. In particular, male fetal cells are related to helping mothers with systemic lupus erythematosus. When kidney biopsies were taken from patients with lupus nephritis, DNA was extracted and run with PCR. The male fetal DNA was quantified and the presence of specific Y chromosome sequences were found. Women with lupus nephritis containing male fetal cells in their kidney biopsies exhibited better renal system functioning. Levels of serum creatinine, which is related to kidney failure, were low in mothers with high levels of male fetal cells. In contrast, women without male fetal cells who had lupus nephritis showed a more serious form of glomerulonephritis and higher levels of serum creatinine.
The specific role that fetal cells play in microchimerism related to certain autoimmune diseases is not fully understood. However, one hypothesis states that these cells supply antigens, causing inflammation and triggering the release of different foreign antigens. This would trigger autoimmune disease instead of serving as a therapeutic. A different hypothesis states that fetal microchimeric cells are involved in repairing tissues. When tissues get inflamed, fetal microchimeric cells go to the damaged site and aid in repair and regeneration of the tissue.
Thyroid disease
Fetal maternal microchimerism may be related to autoimmune thyroid diseases. There have been reports of fetal cells in the blood and in the thyroid glands of patients with autoimmune thyroid disease. These cells could become activated after delivery of the baby, once immune suppression in the mother is lost, suggesting a role of fetal cells in the pathogenesis of such diseases. Two types of thyroid disease, Hashimoto's thyroiditis (HT) and Graves' disease (GD), show similarities to graft vs host disease which occurs after hematopoietic stem cell transplants. Fetal cells colonize maternal tissues like the thyroid gland and are able to survive many years postpartum. These fetal microchimeric cells in the thyroid show up in the blood of women affected by thyroid diseases.
Sjögren syndrome
Sjögren syndrome (SS) is an autoimmune rheumatic disease of the exocrine glands. Increased incidence of SS after childbirth suggests a relationship between SS and pregnancy, and this led to the hypothesis that fetal microchimerism may be involved in SS pathogenesis. Studies showed the presence of Y-chromosome-positive fetal cells in minor salivary glands in 11 of 20 women with SS but in only one of eight normal controls. Fetal cells in salivary glands suggest that they may be involved in the development of SS.
Oral lichen planus
Lichen planus (LP) is a T-cell-mediated autoimmune chronic disease of unknown etiology. Females have a three times higher prevalence than men. LP is characterized by T lymphocytes infiltration of the lower levels of epithelium, where they damage basal cells and cause apoptosis. The fetal microchimerism may trigger a fetus versus host reaction and therefore may play a role in the pathogenesis of autoimmune diseases including LP.
Breast cancer
Pregnancy has a positive effect on the prognosis of breast cancer according to several studies and it apparently increases the chance of survival after diagnosis of breast cancer. Possible positive effects of pregnancy could be explained by the persistence of fetal cells in the blood and maternal tissues.
Fetal cells probably migrate actively from the peripheral blood into the tumor tissue, where they settle preferentially in the tumor stroma, and their concentration decreases closer to the healthy breast tissue. There are two suggested mechanisms by which the fetal cells could have a positive effect on the breast cancer prognosis. The first mechanism suggests that fetal cells merely monitor cancer cells and attract components of the immune system if needed. The second option is that the down-regulation of the immune system induced by the presence of fetal cells could ultimately lead to cancer prevention, because women in whom FMC is present produce lower concentrations of the inflammatory mediators that may lead to the development of neoplastic tissue.
The effect also depends on the level of microchimerism: hyperchimerism (a high rate of microchimerism) and hypochimerism (a low rate of microchimerism) can both be related to a negative effect of FMC and thus can promote a worse prognosis of breast cancer. Apparently, women with breast cancer may fail in the process of obtaining and maintaining allogeneic fetal cells. A low concentration and/or complete absence of fetal cells could indicate a predisposition to development of the malignant process.
Other cancers
A study by S. Hallum showed an association between fetal cells of male origin and ovarian cancer risk. The presence of the Y chromosome was used to detect foreign cells in women's blood; because of the women's health histories, the possibility that the foreign cells were of transfusion or transplantation origin was excluded, leaving pregnancy as their source. Women testing positive for male-origin microchimeric cells had lower hazard rates of ovarian cancer than women testing negative.
Pregnancy at older ages can reduce risk of ovarian cancer. Numbers of microchimeric cells declines after pregnancy, and ovarian cancer is most frequent in postmenopausal women. This suggests that fetal microchimerism may play a protective role in ovarian cancer as well.
Microchimeric cells also cluster several times more in lung tumors than in surrounding healthy lung tissue. Fetal cells from the bone marrow go to the tumor sites where they may have tissue repair functions.
Microchimerism of fetomaternal cell trafficking origin might be associated with the pathogenesis or progression of cervical cancer. Male cells were observed in patients with cervical cancer but not in positive controls. Microchimeric cells might induce the alteration of the woman's immune system and make the cervical tissue more susceptible to HPV infection or provide a suitable environment for tumor growth.
Role of microchimerism in wound healing
Microchimeric fetal cells expressed collagen I, III and TGF-β3, and they were identified in healed maternal cesarean section scars. This suggests that these cells migrate to the site of damage due to maternal skin injury signals, and help repair tissue.
Stem cells
Animal models
Fetomaternal microchimerism has been shown in experimental investigations of whether fetal cells can cross the blood brain barrier in mice. The properties of these cells allow them to cross the blood brain barrier and target injured brain tissue. This mechanism is possible because umbilical cord blood cells express some proteins similar to neurons. When these umbilical cord blood cells are injected in rats with brain injury or stroke, they enter the brain and express certain nerve cell markers. Due to this process, fetal cells could enter the brain during pregnancy and become differentiated into neural cells. Fetal microchimerism can occur in the maternal mouse brain, responding to certain cues in the maternal body.
Health implications
Fetal microchimerism could have an implication on maternal health. Isolating cells in cultures can alter the properties of the stem cells, but in pregnancy the effects of fetal stem cells can be investigated without in vitro cultures. Once characterized and isolated, fetal cells that are able to cross the blood brain barrier could impact certain procedures. For example, isolating stem cells can be accomplished through taking them from sources like the umbilical cord. These fetal stem cells can be used in intravenous infusion to repair the brain tissue. Hormonal changes in pregnancy alter neurogenesis, which could create favorable environments for fetal cells to respond to injury.
The true function of fetal cells in mothers is not fully known; however, there have been reports of positive and negative health effects. The sharing of genes between the fetus and mother may lead to benefits. Because not all genes are shared, health complications may arise as a result of resource allocation. During pregnancy, fetal cells are able to manipulate the maternal system to draw resources from the placenta, while the maternal system tries to limit it.
See also
Chimerism
Allotransplantation
Telegony
Epigenetics
Cell-free fetal DNA
References
Further reading
Autoimmune diseases
Reproduction
Mating
Evolutionary biology
Sexual selection
Chimerism | Microchimerism | [
"Biology"
] | 3,597 | [
"Evolutionary biology",
"Evolutionary processes",
"Behavior",
"Reproduction",
"Chimerism",
"Biological interactions",
"Ethology",
"Sexual selection",
"Mating"
] |
2,508,210 | https://en.wikipedia.org/wiki/High%20Precision%20Event%20Timer | The High Precision Event Timer (HPET) is a hardware timer available in modern x86-compatible personal computers. Compared to older types of timers available in the x86 architecture, HPET allows more efficient processing of highly timing-sensitive applications, such as multimedia playback and OS task switching. It was developed jointly by Intel and Microsoft and has been incorporated in PC chipsets since 2005. Formerly referred to by Intel as a Multimedia Timer, the term HPET was selected to avoid confusion with the software multimedia timers introduced in the MultiMedia Extensions to Windows 3.0.
Older operating systems that do not support a hardware HPET device can only use older timing facilities, such as the programmable interval timer (PIT) or the real-time clock (RTC). Windows XP, when fitted with the latest hardware abstraction layer (HAL), can also use the processor's Time Stamp Counter (TSC), or ACPI Power Management Timer (ACPI PMTIMER), together with the RTC to provide operating system features that would, in later Windows versions, be provided by the HPET hardware. Confusingly, such Windows XP systems quote "HPET" connectivity in the device driver manager even though the Intel HPET device is not being used.
Features
An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32- or 64-bit-wide. The HPET is programmed via a memory mapped I/O window that is discoverable via ACPI. The HPET circuit in modern PCs is integrated into the southbridge chip.
Each comparator can generate an interrupt when the least significant bits are equal to the corresponding bits of the 64-bit main counter value. The comparators can be put into one-shot mode or periodic mode, with at least one comparator supporting periodic mode and all of them supporting one-shot mode. In one-shot mode the comparator fires an interrupt once when the main counter reaches the value stored in the comparator's register, while in the periodic mode the interrupts are generated at specified intervals.
Comparators can be driven by the operating system, e.g. to provide one timer per CPU for scheduling, or by applications.
Applications
The HPET can produce periodic interrupts at a much higher resolution than the RTC and is often used to synchronize multimedia streams, providing smooth playback and reducing the need to use other timestamp calculations such as an x86-based CPU's RDTSC instruction. This provides improved efficiency, since the CPU does not need to waste cycles to make up for the low resolution of timers, and enables more aggressive use of sleep states, reducing power consumption. In addition to the application-level demand for high-precision clock, there are OS-level benefits in the scheduler and through the availability of a stable clock base for multi-processor systems.
Comparison to predecessors
HPET is meant to supplement and replace the 8254 programmable interval timer and the RTC's periodic interrupt function. Compared to these older timer circuits, the HPET has higher frequency and wider 64-bit counters (although they can be driven in 32-bit mode).
The HPET specification does not define the timer frequency, only requiring a minimum of 10 MHz; the actual frequency is provided to the operating system by a hardware register giving the number of femtoseconds per period (with an upper bound of 100,000,000 femtoseconds, i.e. 100 ns, corresponding to the 10 MHz minimum). A popular value is 14.3 MHz, 12 times the standard 8254 frequency of 1.193 MHz.
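The period-to-frequency conversion can be illustrated with a short calculation. The sketch below is an illustrative Python snippet, not part of any specification or driver; the example period of 69,841,279 fs is an assumed value of the kind commonly reported for roughly 14.3 MHz implementations.

```python
# Convert an HPET period register value (femtoseconds per counter tick)
# into a frequency in MHz. The specification requires the period to be
# non-zero and no larger than 100,000,000 fs (i.e. at least 10 MHz).
FS_PER_SECOND = 10**15

def hpet_period_to_mhz(period_fs: int) -> float:
    if not (0 < period_fs <= 100_000_000):
        raise ValueError("period outside the range permitted by the HPET spec")
    return FS_PER_SECOND / period_fs / 1e6

print(hpet_period_to_mhz(69_841_279))    # ~14.318 MHz, an assumed example value
print(hpet_period_to_mhz(100_000_000))   # 10.0 MHz, the slowest allowed rate
```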
While 8254 and RTC can be put into an HPET-like one-shot mode, the set-up process is so slow that their one-shot mode is not used in practice for tasks requiring precise scheduling. Instead, 8254 and RTC are typically used in periodic mode with a very small time interval. For example, if an application needs to perform several short (some milliseconds, perhaps) waits, it is better to have a periodic timer running constantly with a 1 ms period because of the high setup cost of an 8254 or RTC one-shot timer. This causes an interrupt at every millisecond even if the application needs to do actual work less frequently. With HPET, the extra interrupts can be avoided, because the set-up cost of a HPET one-shot timer is considerably smaller.
Use and compatibility
Operating systems designed before HPET existed cannot use HPET, so they use other timer facilities. Newer operating systems tend to be able to use either. Some hardware has both. Indeed, most current southbridge chips have legacy-supporting instances of PIT, PIC, Advanced Programmable Interrupt Controller (APIC) and RTC devices incorporated into their silicon whether or not they are used by the operating system, which helps very modern PCs run older operating systems.
The following operating systems are known not to be able to use HPET:
Windows XP SP1 and earlier Windows versions, and Linux kernels prior to 2.6.
The following operating systems are known to be able to use HPET:
Windows XP SP3, Windows Server 2003 SP2, Windows Server 2008, Windows Server 2008 R2, Windows Vista, Windows 7, x86-based versions of OS X, Linux operating systems using the 2.6 kernel (or later), FreeBSD and OpenSolaris.
The Linux kernel can also use HPET as its clock source. The documentation of Red Hat MRG version 2 states that TSC is the preferred clock source due to its much lower overhead, but it uses HPET as a fallback. A benchmark in that environment for 10 million event counts found that TSC took about 0.6 seconds, HPET took slightly over 12 seconds, and ACPI Power Management Timer took around 24 seconds.
In 2019 it was decided to blacklist HPET in newer Linux kernels when running on some Intel CPUs (Coffee Lake) because of its instability.
Problems
HPET is a continuously running timer that counts upward, not a one-shot device that counts down to zero, causes one interrupt and then stops. Since HPET compares the actual timer value and the programmed target value on equality rather than "greater or equal", interrupts can be missed if the target time has already passed when the comparator value is written into the chip's register. In such a case, not only is the intended interrupt missed, but it is actually set far into the future (about 2^32 or 2^64 counts). In the presence of non-maskable interrupts (such as a System Management Interrupt (SMI)) that do not have a hard upper bound on their execution time, this race condition requires time-consuming re-checks of the timer after setup and is hard to avoid completely. The difficulties are exacerbated if the comparator value is not synchronized with the timer immediately, but delayed by one or two ticks, as some chipsets do.
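A minimal sketch of the defensive re-check described above is shown below, written as plain Python rather than driver code: read_main_counter and write_comparator are hypothetical stand-ins for the memory-mapped register accesses, and the wrap-around arithmetic assumes a 64-bit counter. It illustrates only the general pattern, not any particular operating system's implementation.

```python
def arm_one_shot(read_main_counter, write_comparator, delay_ticks):
    """Arm a one-shot comparator; return False if the deadline may already
    have been missed, so the caller can handle the expiry in software.

    The hardware fires only on exact equality, so a target value the main
    counter has already passed will not match again for roughly 2**64 ticks.
    read_main_counter and write_comparator are hypothetical accessors for
    the memory-mapped registers, supplied by the caller.
    """
    width = 2**64
    target = (read_main_counter() + delay_ticks) % width
    write_comparator(target)
    # Re-read the counter after the write; if, modulo wrap-around, the
    # target is no longer ahead of the counter, assume the match was missed.
    remaining = (target - read_main_counter()) % width
    return 0 < remaining <= width // 2
```

Note that for very short delays the interrupt can still fire between the write and the re-check, so a caller using this pattern must tolerate the same expiry being handled twice.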
Besides mentioning the race condition discussed above, a VMware document also lists some other drawbacks: "The specification does not require the timer to be particularly fine grained, to have low drift, or to be fast to read. Some typical implementations run the counter at about 18 MHz and require about the same amount of time (1–2 μs) to read the HPET as with the ACPI timer. Implementations have been observed in which the period register is off by 800 parts per million or more."
Notes
References
Integrated circuits | High Precision Event Timer | [
"Technology",
"Engineering"
] | 1,554 | [
"Computer engineering",
"Integrated circuits"
] |
2,508,768 | https://en.wikipedia.org/wiki/Afwillite | Afwillite is a calcium hydroxide nesosilicate mineral with formula Ca3(SiO3OH)2·2H2O. It occurs as glassy, colorless to white prismatic monoclinic crystals. Its Mohs scale hardness is between 3 and 4. It occurs as an alteration mineral in contact metamorphism of limestone. It occurs in association with apophyllite, natrolite, thaumasite, merwinite, spurrite, gehlenite, ettringite,
portlandite, hillebrandite, foshagite, brucite and calcite.
It was first described in 1925 for an occurrence in the Dutoitspan Mine, Kimberley, South Africa and was named for Alpheus Fuller Williams (1874–1953), a past official of the De Beers diamond company.
Afwillite is typically found in veins of spurrite and it belongs to the nesosilicate sub-class. It is monoclinic, its space group is P2 and its point group is 2.
Formation
It is suggested that afwillite forms in fractured veins of the mineral spurrite. Jennite, afwillite, oyelite and calcite are all minerals that form in layers within spurrite veins. It appears that afwillite, as well as calcite, forms from precipitated fluids. Jennite is actually an alteration of afwillite, but both formed from calcium silicates through hydration. Laboratory studies determined that afwillite forms at relatively low temperatures, usually around 100 °C. Afwillite and spurrite are formed through contact metamorphism of limestone. Contact metamorphism is caused by the interaction of rock with heat and/or fluids from a nearby crystallizing silicate magma.
Structure and properties
Afwillite has a complex monoclinic structure, and the silicon tetrahedra in the crystal structure are held together by hydrogen bonds. It has perfect cleavage parallel to its (101) and poor cleavage parallel to its (100) faces. It is biaxial and its 2V angle, the measurement from one optical axis to the other optical axis, is 50 – 56 degrees. When viewed under crossed polarizers in a petrographic microscope, it displays first-order orange colors, giving a maximum birefringence of 0.0167 (determined by using the Michel–Levy chart). Afwillite is optically positive. Additionally, it has a prismatic crystal habit. Under a microscope afwillite looks like wollastonite, which is in the same family as afwillite.
Afwillite is composed of double chains that consist of calcium and silicon polyhedra connected to each other by sharing corners and edges. This causes continuous sheets to form parallel to its Miller index [01] faces. The sheets are bonded together by hydrogen bonds and are all connected by Ca-Si-O bonds (Malik & Jeffery, 1976). Each calcium atom is in 6-fold octahedral coordination with oxygen, and each silicon is in 4-fold tetrahedral coordination with oxygen. Around each silicon there is one OH group and there are three oxygens that neighbor them. The silicon tetrahedra are arranged so that they share an edge with calcium(1), and silicon(2) shares edges with the calcium(2) and calcium(3) polyhedra. The silicon tetrahedra are held together by the OH group and hydrogen bonding occurs between the hydrogen in the OH and the silicon tetrahedra. Hydrogen bonding is caused because the positive ion, hydrogen, is attracted to negatively charged ions which, in this case, are the silicon tetrahedra.
Occurrence in concrete
Afwillite is one of the calcium silicate hydrates (C-S-H) that form when Portland cement sets to form hardened cement paste (HCP) in concrete. The cement gets its strength from the hydration of tri- and di- calcium silicates (C3S and C2S) present in the clinker.
See also
List of minerals
Other calcium silicate hydrate (C-S-H) minerals:
Gyrolite
Jennite
Thaumasite
Tobermorite
Other calcium aluminium silicate hydrate (C-A-S-H) minerals:
Tacharanite
References
Calcium minerals
Cement
Concrete
Nesosilicates
Dihydrate minerals
Geology of Riverside County, California
Crestmore Heights, California
Monoclinic minerals
Minerals in space group 9
Minerals described in 1925 | Afwillite | [
"Engineering"
] | 937 | [
"Structural engineering",
"Concrete"
] |
22,072,718 | https://en.wikipedia.org/wiki/Biological%20network | A biological network is a method of representing systems as complex sets of binary interactions or relations between various biological entities. In general, networks or graphs are used to capture relationships between entities or objects. A typical graphing representation consists of a set of nodes connected by edges.
History of networks
As early as 1736 Leonhard Euler analyzed a real-world problem known as the Seven Bridges of Königsberg, which established the foundation of graph theory. From the 1930s to the 1950s the study of random graphs was developed. During the mid-1990s, it was discovered that many different types of "real" networks have structural properties quite different from random networks. In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine. In 2014, graph theoretical methods were used by Frank Emmert-Streib to analyze biological networks.
In the 1980s, researchers started viewing DNA or genomes as the dynamic storage of a language system with precise computable finite states represented as a finite-state machine. Recent complex systems research has also suggested some far-reaching commonality in the organization of information in problems from biology, computer science, and physics.
Networks in biology
Protein–protein interaction networks
Protein-protein interaction networks (PINs) represent the physical relationship among proteins present in a cell, where proteins are nodes, and their interactions are undirected edges. Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction. Protein–protein interactions (PPIs) are essential to the cellular processes and also the most intensely analyzed networks in biology. PPIs could be discovered by various experimental techniques, among which the yeast two-hybrid system is a commonly used technique for the study of binary interactions. Recently, high-throughput studies using mass spectrometry have identified large sets of protein interactions.
Many international efforts have resulted in databases that catalog experimentally determined protein-protein interactions. Some of them are the Human Protein Reference Database, Database of Interacting Proteins, the Molecular Interaction Database (MINT), IntAct, and BioGRID. At the same time, multiple computational approaches have been proposed to predict interactions. FunCoup and STRING are examples of such databases, where protein-protein interactions inferred from multiple evidences are gathered and made available for public usage.
Recent studies have indicated the conservation of molecular networks through deep evolutionary time. Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees. This observation suggests that the overall composition of the network (not simply interactions between protein pairs) is vital for an organism's overall functioning.
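The link between connectivity and essentiality can be illustrated with a toy calculation. The sketch below assumes the networkx Python library; the protein names and interactions are invented purely for demonstration and do not come from any real interaction database.

```python
import networkx as nx

# Toy undirected protein–protein interaction network: nodes are proteins,
# edges are physical interactions. All names here are placeholders.
ppi = nx.Graph()
ppi.add_edges_from([
    ("hubA", "p1"), ("hubA", "p2"), ("hubA", "p3"), ("hubA", "p4"),
    ("p1", "p2"), ("p3", "p5"), ("p4", "p6"), ("p6", "p7"),
])

# Rank proteins by degree; in many PIN studies, the high-degree "hub"
# proteins are the ones more likely to be essential for survival.
for protein, degree in sorted(ppi.degree, key=lambda kv: -kv[1]):
    print(protein, degree)
```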
Gene regulatory networks (DNA–protein interaction networks)
The genome encodes thousands of genes whose products (mRNAs, proteins) are crucial to the various processes of life, such as cell differentiation, cell survival, and metabolism. Genes produce such products through a process called transcription, which is regulated by a class of proteins called transcription factors. For instance, the human genome encodes almost 1,500 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes. The complete set of gene products and the interactions among them constitutes gene regulatory networks (GRN). GRNs regulate the levels of gene products within the cell and in-turn the cellular processes.
GRNs are represented with genes and transcriptional factors as nodes and the relationship between them as edges. These edges are directional, representing the regulatory relationship between the two ends of the edge. For example, the directed edge from gene A to gene B indicates that A regulates the expression of B. Thus, these directional edges can not only represent the promotion of gene regulation but also its inhibition.
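One simple way to encode such signed, directed relationships is a directed graph with edge attributes. The sketch below again assumes networkx; the genes and regulatory interactions are invented for illustration only.

```python
import networkx as nx

# Toy gene regulatory network: a directed edge A -> B means A regulates B,
# and the "effect" attribute records whether that regulation is activating
# or inhibiting. All genes and interactions are placeholders.
grn = nx.DiGraph()
grn.add_edge("TF1", "geneX", effect="activation")
grn.add_edge("TF1", "geneY", effect="inhibition")
grn.add_edge("TF2", "geneX", effect="inhibition")

# List the targets regulated by TF1, together with the sign of regulation.
for _, target, data in grn.out_edges("TF1", data=True):
    print(f"TF1 -> {target} ({data['effect']})")
```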
GRNs are usually constructed by utilizing the gene regulation knowledge available from databases such as Reactome and KEGG. High-throughput measurement technologies, such as microarray, RNA-Seq, ChIP-chip, and ChIP-seq, enabled the accumulation of large-scale transcriptomics data, which could help in understanding the complex gene regulation patterns.
Gene co-expression networks (transcript–transcript association networks)
Gene co-expression networks can be perceived as association networks between variables that measure transcript abundances. These networks have been used to provide a systems-biological analysis of DNA microarray data, RNA-seq data, miRNA data, etc. Weighted gene co-expression network analysis is extensively used to identify co-expression modules and intramodular hub genes. Co-expression modules may correspond to cell types or pathways, while highly connected intramodular hubs can be interpreted as representatives of their respective modules.
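A minimal sketch of how such an association network might be built from expression measurements is shown below. The expression matrix is random placeholder data, and the hard correlation threshold is a deliberate simplification; methods such as weighted gene co-expression network analysis use soft thresholding instead.

```python
import numpy as np

rng = np.random.default_rng(0)
genes = ["g1", "g2", "g3", "g4"]
# Rows are genes, columns are samples; random numbers stand in for measured
# transcript abundances from microarray or RNA-seq experiments.
expression = rng.normal(size=(len(genes), 10))

# Pairwise Pearson correlation between the gene expression profiles.
corr = np.corrcoef(expression)

# Connect two genes whenever |correlation| exceeds a chosen threshold.
threshold = 0.6
edges = [
    (genes[i], genes[j], round(float(corr[i, j]), 2))
    for i in range(len(genes))
    for j in range(i + 1, len(genes))
    if abs(corr[i, j]) >= threshold
]
print(edges)
```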
Metabolic networks
Cells break down the food and nutrients into small molecules necessary for cellular processing through a series of biochemical reactions. These biochemical reactions are catalyzed by enzymes. The complete set of all these biochemical reactions in all the pathways represents the metabolic network. Within the metabolic network, the small molecules take the roles of nodes, and they could be either carbohydrates, lipids, or amino acids. The reactions which convert these small molecules from one form to another are represented as edges. It is possible to use network analyses to infer how selection acts on metabolic pathways.
Signaling networks
Signals are transduced within cells or in between cells and thus form complex signaling networks, which play a key role in tissue structure. For instance, the MAPK/ERK pathway is transduced from the cell surface to the cell nucleus by a series of protein-protein interactions, phosphorylation reactions, and other events. Signaling networks typically integrate protein–protein interaction networks, gene regulatory networks, and metabolic networks. Single-cell sequencing technologies allow the extraction of inter-cellular signaling; an example is NicheNet, which models intercellular communication by linking ligands to target genes.
Neuronal networks
The complex interactions in the brain make it a perfect candidate to apply network theory. Neurons in the brain are deeply connected with one another, and this results in complex networks being present in the structural and functional aspects of the brain. For instance, small-world network properties have been demonstrated in connections between cortical regions of the primate brain or during swallowing in humans. This suggests that cortical areas of the brain are not directly interacting with each other, but most areas can be reached from all others through only a few interactions.
Food webs
All organisms are connected through feeding interactions. If a species eats or is eaten by another species, they are connected in an intricate food web of predator and prey interactions. The stability of these interactions has been a long-standing question in ecology. That is to say if certain individuals are removed, what happens to the network (i.e., does it collapse or adapt)? Network analysis can be used to explore food web stability and determine if certain network properties result in more stable networks. Moreover, network analysis can be used to determine how selective removals of species will influence the food web as a whole. This is especially important considering the potential species loss due to global climate change.
Between-species interaction networks
In biology, pairwise interactions have historically been the focus of intense study. With the recent advances in network science, it has become possible to scale up pairwise interactions to include individuals of many species involved in many sets of interactions to understand the structure and function of larger ecological networks. The use of network analysis can allow for both the discovery and understanding of how these complex interactions link together within the system's network, a property that has previously been overlooked. This powerful tool allows for the study of various types of interactions (from competitive to cooperative) using the same general framework. For example, plant-pollinator interactions are mutually beneficial and often involve many different species of pollinators as well as many different species of plants. These interactions are critical to plant reproduction and thus the accumulation of resources at the base of the food chain for primary consumers, yet these interaction networks are threatened by anthropogenic change. The use of network analysis can illuminate how pollination networks work and may, in turn, inform conservation efforts. Within pollination networks, nestedness (i.e., specialists interact with a subset of species that generalists interact with), redundancy (i.e., most plants are pollinated by many pollinators), and modularity play a large role in network stability. These network properties may actually work to slow the spread of disturbance effects through the system and potentially buffer the pollination network from anthropogenic changes somewhat. More generally, the structure of species interactions within an ecological network can tell us something about the diversity, richness, and robustness of the network. Researchers can even compare current constructions of species interactions networks with historical reconstructions of ancient networks to determine how networks have changed over time. Much research into these complex species interactions networks is highly concerned with understanding what factors (e.g., species richness, connectance, nature of the physical environment) lead to network stability.
Within-species interaction networks
Network analysis provides the ability to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level. One of the most attractive features of the network paradigm would be that it provides a single conceptual framework in which the social organization of animals at all levels (individual, dyad, group, population) and for all types of interaction (aggressive, cooperative, sexual, etc.) can be studied.
Researchers interested in ethology across many taxa, from insects to primates, are starting to incorporate network analysis into their research. Researchers interested in social insects (e.g., ants and bees) have used network analyses better to understand the division of labor, task allocation, and foraging optimization within colonies. Other researchers are interested in how specific network properties at the group and/or population level can explain individual-level behaviors. Studies have demonstrated how animal social network structure can be influenced by factors ranging from characteristics of the environment to characteristics of the individual, such as developmental experience and personality. At the level of the individual, the patterning of social connections can be an important determinant of fitness, predicting both survival and reproductive success. At the population level, network structure can influence the patterning of ecological and evolutionary processes, such as frequency-dependent selection and disease and information transmission. For instance, a study on wire-tailed manakins (a small passerine bird) found that a male's degree in the network largely predicted the ability of the male to rise in the social hierarchy (i.e., eventually obtain a territory and matings). In bottlenose dolphin groups, an individual's degree and betweenness centrality values may predict whether or not that individual will exhibit certain behaviors, like the use of side flopping and upside-down lobtailing to lead group traveling efforts; individuals with high betweenness values are more connected and can obtain more information, and thus are better suited to lead group travel and therefore tend to exhibit these signaling behaviors more than other group members.
Social network analysis can also be used to describe the social organization within a species more generally, which frequently reveals important proximate mechanisms promoting the use of certain behavioral strategies. These descriptions are frequently linked to ecological properties (e.g., resource distribution). For example, network analyses revealed subtle differences in the group dynamics of two related equid fission-fusion species, Grevy's zebra and onagers, living in variable environments; Grevy's zebras show distinct preferences in their association choices when they fission into smaller groups, whereas onagers do not. Similarly, researchers interested in primates have also utilized network analyses to compare social organizations across the diverse primate order, suggesting that using network measures (such as centrality, assortativity, modularity, and betweenness) may be useful in terms of explaining the types of social behaviors we see within certain groups and not others.
Finally, social network analysis can also reveal important fluctuations in animal behaviors across changing environments. For example, network analyses in female chacma baboons (Papio hamadryas ursinus) revealed important dynamic changes across seasons that were previously unknown; instead of creating stable, long-lasting social bonds with friends, baboons were found to exhibit more variable relationships which were dependent on short-term contingencies related to group-level dynamics as well as environmental variability. Changes in an individual's social network environment can also influence characteristics such as 'personality': for example, social spiders that huddle with bolder neighbors also tend to increase in boldness. This is a very small set of broad examples of how researchers can use network analysis to study animal behavior. Research in this area is currently expanding very rapidly, especially since the broader development of animal-borne tags and computer vision now allows the collection of social associations to be automated. Social network analysis is a valuable tool for studying animal behavior across all animal species and has the potential to uncover new information about animal behavior and social ecology that was previously poorly understood.
DNA-DNA chromatin networks
Within a nucleus, DNA is constantly in motion. Perpetual actions such as genome folding and Cohesin extrusion morph the shape of a genome in real time. The spatial location of strands of chromatin relative to each other plays an important role in the activation or suppression of certain genes. DNA-DNA Chromatin Networks help biologists to understand these interactions by analyzing commonalities amongst different loci. The size of a network can vary significantly, from a few genes to several thousand and thus network analysis can provide vital support in understanding relationships among different areas of the genome. As an example, analysis of spatially similar loci within the organization in a nucleus with Genome Architecture Mapping (GAM) can be used to construct a network of loci with edges representing highly linked genomic regions.
The first graphic showcases the Hist1 region of the mm9 mouse genome, with each node representing a genomic locus. Two nodes are connected by an edge if their linkage disequilibrium is greater than the average across all 81 genomic windows. The locations of the nodes within the graphic are randomly selected, and this method of choosing edges yields a simple but rudimentary graphical representation of the relationships in the dataset. The second visual presents the same information as the first; however, the network starts with every locus placed sequentially in a ring configuration. It then pulls nodes together, by linear interpolation, in proportion to their linkage. The figure illustrates strong connections between the central genomic windows as well as between the loci at the beginning and end of the Hist1 region.
Modelling biological networks
Introduction
To draw useful information from a biological network, an understanding of the statistical and mathematical techniques of identifying relationships within a network is vital. Procedures to identify association, communities, and centrality within nodes in a biological network can provide insight into the relationships of whatever the nodes represent whether they are genes, species, etc. Formulation of these methods transcends disciplines and relies heavily on graph theory, computer science, and bioinformatics.
Association
There are many different ways to measure the relationships of nodes when analyzing a network. In many cases, the measure used to find nodes that share similarity within a network is specific to the application in which it is used. One of the types of measures that biologists utilize is correlation, which specifically centers on the linear relationship between two variables. As an example, weighted gene co-expression network analysis uses Pearson correlation to analyze linked gene expression and understand genetics at a systems level. Another measure of correlation is linkage disequilibrium. Linkage disequilibrium describes the non-random association of genetic sequences among loci in a given chromosome. An example of its use is in detecting relationships in GAM data across genomic intervals based upon detection frequencies of certain loci.
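As a small illustration of the linkage disequilibrium measure mentioned above, the classical coefficient D and its squared-correlation form r² can be computed directly from haplotype and allele frequencies. The frequencies below are made-up values, not data from any study.

```python
# Linkage disequilibrium between two biallelic loci from haplotype frequencies.
# All frequencies are illustrative assumptions.
p_AB = 0.40          # frequency of the haplotype carrying allele A and allele B
p_A, p_B = 0.60, 0.55

D = p_AB - p_A * p_B                                   # classical LD coefficient
r2 = D**2 / (p_A * (1 - p_A) * p_B * (1 - p_B))        # squared-correlation form

print(f"D = {D:.3f}, r^2 = {r2:.3f}")
```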
Centrality
The concept of centrality can be extremely useful when analyzing biological network structures. There are many different methods to measure centrality, such as betweenness, degree, eigenvector, and Katz centrality. Every type of centrality technique can provide different insights into nodes in a particular network; however, they all measure the prominence of a node in a network.
In 2005, researchers at Harvard Medical School applied centrality measures to the yeast protein interaction network. They found that proteins exhibiting high betweenness centrality were more essential, and that betweenness centrality was closely related to a given protein's evolutionary age.
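A hedged sketch of how such centrality measures can be computed is given below, using networkx on a small toy graph; the node names and edges are arbitrary and do not represent the yeast network from the study above.

```python
import networkx as nx

# Small undirected toy network (e.g., proteins as nodes, interactions as edges).
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G)

# Print the three centrality scores for each node.
for node in G.nodes:
    print(node, round(degree[node], 2), round(betweenness[node], 2),
          round(eigenvector[node], 2))
```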
Communities
Studying the community structure of a network by subdividing groups of nodes into like-regions can be an integral tool for bioinformatics when exploring data as a network. A food web of the Secaucus High School Marsh exemplifies the benefits of grouping, as the relationships between nodes are far easier to analyze with well-made communities. While the first graphic is hard to visualize, the second provides a better view of the pockets of highly connected feeding relationships that would be expected in a food web. Community detection is still an active research problem. Scientists and graph theorists continuously discover new ways of subdividing networks, and thus a plethora of different algorithms exists for creating these relationships. Like many other tools that biologists utilize to understand data with network models, every algorithm can provide its own unique insight and may vary widely on aspects such as accuracy or time complexity of calculation.
In 2002, a food web of marine mammals in the Chesapeake Bay was divided into communities by biologists using a community detection algorithm based on neighbors of nodes with high degree centrality. The resulting communities displayed a sizable split between pelagic and benthic organisms. Two very common community detection algorithms for biological networks are the Louvain method and the Leiden algorithm.
The Louvain method is a greedy algorithm that attempts to maximize modularity, which favors heavy edges within communities and sparse edges between them, within a set of nodes. The algorithm starts with each node in its own community and iteratively moves each node into whichever neighboring community yields the greatest increase in modularity. Once no modularity increase can occur by joining nodes to a community, a new weighted network is constructed with communities as nodes, edges representing between-community edges, and loops representing edges within a community. The process continues until no further increase in modularity occurs. While the Louvain method provides good community detection, it has some limitations. Because it focuses mainly on maximizing a chosen modularity metric, it may produce badly connected communities for the sake of a higher modularity score; however, the Louvain method performs fairly well and is comparatively easy to understand relative to many other community detection algorithms.
The Leiden algorithm expands on the Louvain method by providing a number of improvements. When joining nodes to a community, only neighborhoods that have recently changed are considered, which greatly improves the speed of merging nodes. Another optimization is the refinement phase, in which the algorithm randomly chooses, for a node, a community to merge with from a set of candidate communities, rather than always taking the single choice that maximizes the chosen modularity, as Louvain does. This allows for greater depth in choosing communities. The Leiden algorithm, while more complex than Louvain, runs faster, gives better community detection, and can be a valuable tool for identifying groups.
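The sketch below illustrates modularity-based community detection on a toy graph. It assumes a recent version of networkx (2.8 or later) that ships a Louvain implementation as louvain_communities; it is not the reference implementation of either the Louvain method or the Leiden algorithm, and the graph is invented for illustration.

```python
import networkx as nx

# Two loosely connected triangles: a toy network with obvious community structure.
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3), (2, 3),        # community 1
                  (4, 5), (4, 6), (5, 6),        # community 2
                  (3, 4)])                       # single bridge edge

communities = nx.community.louvain_communities(G, seed=0)
print(communities)                                # e.g. [{1, 2, 3}, {4, 5, 6}]
print(nx.community.modularity(G, communities))    # modularity of the partition
```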
Network Motifs
Network motifs, or statistically significant recurring interaction patterns within a network, are a commonly used tool for understanding biological networks. A major use case of network motifs is in neurophysiology, where motif analysis is commonly used to understand interconnected neuronal functions at varying scales. As an example, in 2017, researchers at Beijing Normal University analyzed highly represented two- and three-node network motifs in directed functional brain networks constructed from resting-state fMRI data to study the basic mechanisms of information flow in the brain.
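As a generic illustration of motif counting (not the pipeline used in the cited fMRI study), networkx's triadic census tallies how often each of the sixteen possible three-node connection patterns occurs in a directed graph. The toy network below is invented.

```python
import networkx as nx

# Toy directed functional network; edges are illustrative only.
G = nx.DiGraph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 1), (2, 4)])

# Counts of all 16 possible 3-node connection patterns (triads).
census = nx.triadic_census(G)
for triad_type, count in census.items():
    if count:
        print(triad_type, count)
```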
See also
List of omics topics in biology
Biological network inference
Biostatistics
Computational biology
Systems biology
Weighted correlation network analysis
Interactome
Network medicine
Ecological network
References
Books
External links
Networkbio.org, The site of the series of Integrative Network Biology (INB) meetings. For the 2012 event also see www.networkbio.org
Network Tools and Applications in Biology (NETTAB) workshops.
Networkbiology.org, NetworkBiology wiki site.
Linding Lab, Technical University of Denmark (DTU) studies Network Biology and Cellular Information Processing, and is also organizing the Denmark branch of the annual "Integrative Network Biology and Cancer" symposium series.
NRNB.org, The National Resource for Network Biology. A US National Institute of Health (NIH) Biomedical Technology Research Center dedicated to the study of biological networks.
Network Repository The first interactive data and network data repository with real-time visual analytics.
Animal Social Network Repository (ASNR) The first multi-taxonomic repository that collates 790 social networks from more than 45 species, including those of mammals, reptiles, fish, birds, and insects
Biological techniques and tools
Bioinformatics
Systems biology
Networks | Biological network | [
"Engineering",
"Biology"
] | 4,291 | [
"Bioinformatics",
"Biological engineering",
"nan",
"Systems biology"
] |
22,074,819 | https://en.wikipedia.org/wiki/Journal%20of%20Applied%20Mathematics%20and%20Mechanics | The Journal of Applied Mathematics and Mechanics, also known as Zeitschrift für Angewandte Mathematik und Mechanik or ZAMM is a monthly peer-reviewed scientific journal dedicated to applied mathematics. It is published by Wiley-VCH on behalf of the Gesellschaft für Angewandte Mathematik und Mechanik. The editor-in-chief is Holm Altenbach (Otto von Guericke University Magdeburg). According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3.
Publication history
The journal's first issue appeared in 1921, published by the Verein Deutscher Ingenieure and edited by Richard von Mises.
References
External links
Applied mathematics journals
Monthly journals
Wiley-VCH academic journals
English-language journals
Mechanics journals | Journal of Applied Mathematics and Mechanics | [
"Physics",
"Mathematics"
] | 167 | [
"Applied mathematics",
"Applied mathematics journals",
"Mechanics",
"Mechanics journals"
] |
22,078,147 | https://en.wikipedia.org/wiki/Substrate%20analog | Substrate analogs (substrate state analogues), are chemical compounds with a chemical structure that resemble the substrate molecule in an enzyme-catalyzed chemical reaction. Substrate analogs can act as competitive inhibitors of an enzymatic reaction. An example is phosphoramidate to the Tetrahymena group I ribozyme. Other examples of substrate analogs include 5’-adenylyl-imidodiphosphate, a substrate analog of ATP, and 3-acetylpyridine adenine dinucleotide, a substrate analog of NADH.
As a competitive inhibitor, substrate analogs occupy the same binding site as its analog, and decrease the intended substrate’s efficiency. The maximum rate (Vmax) remains the same while the intended substrate’s affinity (measured by the Michaelis constant KM) is decreased. This means that less of the intended substrate will bind to the enzyme, resulting in less product being formed. In addition, the substrate analog may also be missing chemical components that allow the enzyme to go through with its reaction. This also causes the amount of product created to decrease.
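A minimal numerical sketch of this behavior, assuming simple Michaelis–Menten kinetics with a reversible competitive inhibitor, is shown below; all parameter values are illustrative rather than measured.

```python
# Michaelis-Menten rate with and without a competitive inhibitor (substrate analog).
# All numbers are illustrative assumptions, not measured values.
Vmax = 100.0     # maximum rate (unchanged by a competitive inhibitor)
Km = 2.0         # Michaelis constant of the intended substrate
Ki = 1.0         # dissociation constant of the substrate analog (inhibitor)
I = 3.0          # inhibitor concentration

def rate(S, inhibitor=0.0):
    Km_apparent = Km * (1.0 + inhibitor / Ki)   # apparent Km rises, Vmax stays the same
    return Vmax * S / (Km_apparent + S)

for S in (1.0, 10.0, 1000.0):
    print(S, round(rate(S), 1), round(rate(S, I), 1))
# At high substrate concentration the two rates converge, reflecting that excess
# substrate can outcompete a reversibly bound analog.
```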
Substrate analogs usually bind to the binding site reversibly. This means that the binding of the substrate analog to the enzyme’s binding site is non-permanent. The effect of the substrate analog can be nullified by increasing the concentration of the originally intended substrate. There are also substrate analogs that bind to the binding site of an enzyme irreversibly. If this is the case, the substrate analog is called an inhibitory substrate analog, a suicide substrate, or a Trojan horse substrate. An example of a substrate analog that is also a suicide substrate/Trojan horse substrate is penicillin, which is an inhibitory substrate analog of peptidoglycan.
Some substrate analogs can still allow the enzyme to synthesize a product despite the enzyme’s inability to metabolize the substrate analog. These substrate analogs are known as gratuitous inducers. An example of a substrate analog that is also a gratuitous inducer is IPTG (isopropyl β-D-1-thiogalactopyranoside), a substrate analog and gratuitous inducer of β-galactosidase activity.
See also
Enzyme
Enzyme inhibitor
Suicide inhibitor
Structural analog, compounds with similar chemical structure
References
Enzyme kinetics
Chemical nomenclature | Substrate analog | [
"Chemistry"
] | 492 | [
"Chemical kinetics",
"nan",
"Enzyme kinetics"
] |
22,079,616 | https://en.wikipedia.org/wiki/European%20Union%20climate%20and%20energy%20package | The European plan on climate change consists of a range of measures adopted by the members of the European Union to fight against climate change. The plan was launched in March 2007, and after months of tough negotiations between the member countries, it was adopted by the European Parliament in December 2008. The package focuses on emissions cuts, renewables and energy efficiency.
Timeline
10 January 2007: The European Commission presented a series of proposals setting ambitious targets for greenhouse gas reduction. It announced that, in international negotiations, the EU would commit itself to reducing the emissions of developed countries by 30% (compared to 1990 levels) by 2020. In addition, the Commission planned to commit to reducing the EU's domestic emissions by at least 20% by 2020.
8–9 March 2007: The European Council approved of the objectives of reducing emissions of greenhouse gases presented by the commission on 10 January 2007. As part of a plan of action on energy policy for the period 2007–2009, it also supported the 20-20-20 targets.
23 January 2008: The European Commission presented the definitive package, including proposals outlined by the European Council. The plan was to be discussed and adopted by the European Council in March 2008. The commission also proposed to extend the system of emissions trading, to impose reductions of GHG emissions to economic sectors that are not covered by the system, and to promote renewable energies.
13–14 March 2008: The European Council agreed on the guiding principles of the package and set an agenda.
11–18 December 2008: Discussion about the package during the European Council, and definitive adoption of the package by the European Parliament.
December 2009: World Climate Conference in Copenhagen to find an international agreement to succeed the Kyoto Protocol on Climate Change, which expires at the end of 2012.
Origin and adoption
After the Kyoto Protocol, signed in 1997 by most European countries but expiring in 2012, a new international agreement to reduce emissions of greenhouse gases was to be negotiated at Poznan (Poland) and in Copenhagen in 2009. To play a leading role in these negotiations, the European Union wanted to develop as quickly as possible a common position in the fight against climate change, and thus implemented its own measures to deal with climate change.
Initial propositions
Meeting on 8 and 9 March 2007, the European Council adopted new environmental targets even more ambitious than those of the Kyoto Protocol. The plan included the so-called "three 20 targets", but in reality it consisted of four proposals. These aims were:
To reduce emissions of greenhouse gases by 20% by 2020 taking 1990 emissions as the reference.
To increase energy efficiency to save 20% of EU energy consumption by 2020.
To reach 20% of renewable energy in the total energy consumption in the EU by 2020.
To reach 10% of biofuels in the total consumption of vehicles by 2020.
Propositions by the Commission
After having launched the negotiations on the package by proposing to implement measures to fight against climate change in January 2007, the European Commission proposed new measures a year later. The proposals include the three "20 targets" of the previous European Council.
The new guidelines set by the Commission proposed a limit on emissions by vehicles, to develop capture and storage of CO2, to invite each member state to reduce their greenhouse gas emissions, and to reform the European emission trading system. This last proposal was subject to much debate between the member states. The Commission proposed first to extend this system from 2013, and to extend it to all greenhouse gases instead of restricting it to CO2 emissions. It also proposed to extend emission ceilings to more sectors and industries. It finally planned to end free allocation and to switch to paying quotas in 2013 for all power producers, and by 2020 for other industries.
Final adoption
The plan was concluded rapidly: it was adopted at the European Council on 11 and 12 December 2008, and was voted by the European Parliament one week later. The initial deadline for the adoption of the package in the Parliament was March 2009. However, protests from some countries arose regarding the modalities for achieving these objectives, notably because of the Great Recession, which caused tough negotiations between countries.
The European Council of 11 and 12 December 2008 definitively adopted the package, but modified the initial measures. The 27 Heads of State and Government finally agreed to implement the 20-20-20 targets: by 2020, reduce emissions of greenhouse gases by 20% compared to 1990 levels, increase energy efficiency in the EU by 20%, and reach 20% of renewables in total energy consumption in the EU. As for the auctioning of greenhouse gas emission allowances, a gradual introduction is scheduled: companies will have to buy 20% of allowances from 2013, 70% in 2020 and 100% in 2027. However, if no international agreement is reached in the next years, the industrial companies most exposed to international competition will benefit from free allocation of quotas. Finally, in the electricity sector, exceptions are envisaged for the new member states until 2020, while the auctioning of all the allowances will be effective from 2013 for other EU members. The package was then submitted to the European Parliament from 15 to 18 December. The EU aims to lead the world towards climate neutrality by the year 2050.
Debates
During the negotiations, some member states expressed concerns about the increase in energy costs caused by the implementation of the package: the increase could be 10% to 15% by 2020. Above all, several countries were concerned about the supposed consequences of auctioning all greenhouse gas emission allowances, on the one hand for electricity prices, and on the other for the competitiveness of the most polluting industrial companies.
Poland and most new member states, whose electricity relies mainly on coal, fear that this reform, increasing electricity prices, could undermine their economic growth and their energy security. They wanted to benefit from a derogation allowing a progressive switch to paying quotas, starting at 20% in 2013 to reach 100% in 2020. Poland and the Baltic States also claimed that the package would force them to develop their gas imports from Russia to reduce their GHG emissions, limiting their energy independence. The member states responded by proposing to improve the electrical interconnections of these countries with the European market. In late October, the Prime Ministers of Poland, Sweden, Finland, Estonia, Latvia and Lithuania agreed to establish a plan of energy interconnection.
On the other hand, member states disputed on how to avoid the outsourcings of the most polluting industries, subject to competition of rivals from countries with little involvement in the fight against global warming. Germany proposed the allocation of free emission quotas to the most vulnerable (especially steel industry) companies.
See also
Climate and energy
Climate of Europe
Climate change in the European Union
Effort Sharing Regulation
Energy policy of the European Union
European Climate Change Programme
European Environment Agency
Third Energy Package
References
External links
European Commission website
Climate change in the European Union
Energy in the European Union
Energy policy
Climate change policy
Environmental policy in the EU | European Union climate and energy package | [
"Environmental_science"
] | 1,391 | [
"Environmental social science",
"Energy policy"
] |
22,081,483 | https://en.wikipedia.org/wiki/Cayley%27s%20%CE%A9%20process | In mathematics, Cayley's Ω process, introduced by , is a relatively invariant differential operator on the general linear group, that is used to construct invariants of a group action.
As a partial differential operator acting on functions of n² variables xij, the omega operator is given by the determinant

$$\Omega = \begin{vmatrix} \dfrac{\partial}{\partial x_{11}} & \cdots & \dfrac{\partial}{\partial x_{1n}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial}{\partial x_{n1}} & \cdots & \dfrac{\partial}{\partial x_{nn}} \end{vmatrix}$$
For binary forms f in x1, y1 and g in x2, y2 the Ω operator is

$$\Omega = \frac{\partial^2}{\partial x_1\,\partial y_2} - \frac{\partial^2}{\partial x_2\,\partial y_1}.$$

The r-fold Ω process Ωr(f, g) on two forms f and g in the variables x and y is then
Convert f to a form in x1, y1 and g to a form in x2, y2
Apply the Ω operator r times to the function fg, that is, f times g in these four variables
Substitute x for x1 and x2, y for y1 and y2 in the result
The result of the r-fold Ω process Ωr(f, g) on the two forms f and g is also called the r-th transvectant and is commonly written (f, g)r.
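A hedged symbolic sketch of the r-fold Ω process, following the three steps above, can be written with SymPy. As a check, the second transvectant of a binary quadratic with itself comes out proportional to its discriminant; the function and variable names are illustrative only.

```python
import sympy as sp

x1, y1, x2, y2, x, y = sp.symbols('x1 y1 x2 y2 x y')

def transvectant(f, g, r):
    # Step 1: write f in (x1, y1) and g in (x2, y2).
    F = f.subs({x: x1, y: y1})
    G = g.subs({x: x2, y: y2})
    h = F * G
    # Step 2: apply the Omega operator r times to the product.
    for _ in range(r):
        h = sp.diff(h, x1, y2) - sp.diff(h, x2, y1)
    # Step 3: identify the two sets of variables again.
    return sp.expand(h.subs({x1: x, x2: x, y1: y, y2: y}))

# Example: the second transvectant of a binary quadratic with itself.
a, b, c = sp.symbols('a b c')
f = a*x**2 + 2*b*x*y + c*y**2
print(transvectant(f, f, 2))   # prints 8*a*c - 8*b**2, proportional to the discriminant
```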
Applications
Cayley's Ω process appears in Capelli's identity, which has been used to find generators for the invariants of various classical groups acting on natural polynomial algebras.
Hilbert used Cayley's Ω process in his proof of finite generation of rings of invariants of the general linear group. His use of the Ω process gives an explicit formula for the Reynolds operator of the special linear group.
Cayley's Ω process is used to define transvectants.
References
Invariant theory | Cayley's Ω process | [
"Physics"
] | 325 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
743,441 | https://en.wikipedia.org/wiki/Bouguer%20anomaly | In geodesy and geophysics, the Bouguer anomaly (named after Pierre Bouguer) is a gravity anomaly, corrected for the height at which it is measured and the attraction of terrain. The height correction alone gives a free-air gravity anomaly.
Definition
The Bouguer anomaly $g_B$ is defined as:

$$g_B = g_F - \delta g_B + \delta g_T$$

Here,
$g_F$ is the free-air gravity anomaly;
$\delta g_B$ is the Bouguer correction, which allows for the gravitational attraction of rocks between the measurement point and sea level;
$\delta g_T$ is a terrain correction, which allows for deviations of the surface from an infinite horizontal plane.
The free-air anomaly $g_F$, in its turn, is related to the observed gravity $g_{obs}$ as follows:

$$g_F = g_{obs} - g_\lambda + \delta g_F$$

where:
$g_\lambda$ is the correction for latitude (because the Earth is not a perfect sphere; see normal gravity);
$\delta g_F$ is the free-air correction.
Reduction
A Bouguer reduction is called simple (or incomplete) if the terrain is approximated by an infinite flat plate called the Bouguer plate. A refined (or complete) Bouguer reduction removes the effects of terrain more precisely. The difference between the two is called the (residual) terrain effect (or (residual) terrain correction) and is due to the differential gravitational effect of the unevenness of the terrain; it is always negative.
Simple reduction
The gravitational acceleration g outside a Bouguer plate is perpendicular to the plate and towards it, with magnitude 2πG times the mass per unit area, where G is the gravitational constant. It is independent of the distance to the plate (as can be proven most simply with Gauss's law for gravity, but can also be proven directly with Newton's law of gravity). The value of G is 6.67 × 10−11 N m2 kg−2, so g is 4.19 × 10−10 N m2 kg−2 times the mass per unit area. Using 1 Gal = 0.01 m s−2, this is 4.19 × 10−8 Gal m2 kg−1 times the mass per unit area. For mean rock density (2.67 g cm−3) this gives 0.1119 mGal m−1.
The Bouguer reduction for a Bouguer plate of thickness H is

$$\delta g_B = 2\pi G \rho H$$

where ρ is the density of the material and G is the constant of gravitation. On Earth the effect on gravity of elevation is 0.3086 mGal m−1 decrease when going up, minus the gravity of the Bouguer plate, giving the Bouguer gradient of 0.1967 mGal m−1.
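A small worked example of the plate correction, assuming the commonly used reduction density of 2,670 kg m−3, is given below; the station height is an arbitrary illustrative value.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0         # assumed mean crustal rock density, kg m^-3
H = 100.0            # plate thickness (station height above the datum), m

delta_g = 2.0 * math.pi * G * rho * H      # Bouguer plate attraction, m s^-2
delta_g_mgal = delta_g / 1.0e-5            # 1 mGal = 1e-5 m s^-2

print(f"{delta_g_mgal:.2f} mGal")          # about 11.2 mGal for a 100 m plate
# Per metre this is ~0.112 mGal, consistent with the difference between the
# free-air (0.3086 mGal/m) and Bouguer (0.1967 mGal/m) gradients quoted above.
```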
More generally, for a mass distribution with the density depending on one Cartesian coordinate z only, gravity for any z is 2πG times the difference in mass per unit area on either side of this z value. A combination of two parallel infinite plates of equal mass per unit area does not produce any gravity between them.
See also
Gravity map
Notes
References
External links
Bouguer anomalies of Belgium. The blue regions are related to deficit masses in the subsurface
Bouguer gravity anomaly grid for the conterminous US by the United States Geological Survey.
Bouguer anomaly map of Grahamland F.J. Davey (et al.), British Antarctic Survey, BAS Bulletins 1963-1988
Bouguer anomaly map depicting south-eastern Uruguay's Merín Lagoon anomaly (amplitude greater than +100 mGal), and detail of site.
List of Magnetic and Gravity Maps by State by the United States Geological Survey.
Geophysics
Gravimetry | Bouguer anomaly | [
"Physics"
] | 640 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
744,171 | https://en.wikipedia.org/wiki/Strong%20perfect%20graph%20theorem | In graph theory, the strong perfect graph theorem is a forbidden graph characterization of the perfect graphs as being exactly the graphs that have neither odd holes (odd-length induced cycles of length at least 5) nor odd antiholes (complements of odd holes). It was conjectured by Claude Berge in 1961. A proof by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas was announced in 2002 and published by them in 2006.
The proof of the strong perfect graph theorem won for its authors a $10,000 prize offered by Gérard Cornuéjols of Carnegie Mellon University and the 2009 Fulkerson Prize.
Statement
A perfect graph is a graph in which, for every induced subgraph, the size of the maximum clique equals the minimum number of colors in a coloring of the graph; perfect graphs include many well-known graph classes including the bipartite graphs, chordal graphs, and comparability graphs. In his 1961 and 1963 works defining for the first time this class of graphs, Claude Berge observed that it is impossible for a perfect graph to contain an odd hole, an induced subgraph in the form of an odd-length cycle graph of length five or more, because odd holes have clique number two and chromatic number three. Similarly, he observed that perfect graphs cannot contain odd antiholes, induced subgraphs complementary to odd holes: an odd antihole with 2k + 1 vertices has clique number k and chromatic number k + 1, which is again impossible for perfect graphs. The graphs having neither odd holes nor odd antiholes became known as the Berge graphs.
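A small computational check of this observation is sketched below: for the 5-cycle (the smallest odd hole) and its complement, a brute-force search confirms clique number 2 and chromatic number 3, so neither graph is perfect. The helper functions are ad hoc and only suitable for tiny graphs.

```python
import itertools
import networkx as nx

def chromatic_number(G):
    """Smallest k such that G has a proper k-coloring (brute force; tiny graphs only)."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for colors in itertools.product(range(k), repeat=len(nodes)):
            assignment = dict(zip(nodes, colors))
            if all(assignment[u] != assignment[v] for u, v in G.edges):
                return k
    return len(nodes)

def clique_number(G):
    # Size of the largest maximal clique.
    return max(len(c) for c in nx.find_cliques(G))

C5 = nx.cycle_graph(5)          # the smallest odd hole
for name, G in [("C5", C5), ("complement of C5", nx.complement(C5))]:
    print(name, "clique number:", clique_number(G),
          "chromatic number:", chromatic_number(G))
# Both print clique number 2 and chromatic number 3.
```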
Berge conjectured that every Berge graph is perfect, or equivalently that the perfect graphs and the Berge graphs define the same class of graphs. This became known as the strong perfect graph conjecture, until its proof in 2002, when it was renamed the strong perfect graph theorem.
Relation to the weak perfect graph theorem
Another conjecture of Berge, proved in 1972 by László Lovász, is that the complement of every perfect graph is also perfect. This became known as the perfect graph theorem, or (to distinguish it from the strong perfect graph conjecture/theorem) the weak perfect graph theorem. Because Berge's forbidden graph characterization is self-complementary, the weak perfect graph theorem follows immediately from the strong perfect graph theorem.
Proof ideas
The proof of the strong perfect graph theorem by Chudnovsky et al. follows an outline conjectured in 2001 by Conforti, Cornuéjols, Robertson, Seymour, and Thomas, according to which every Berge graph either forms one of five types of basic building block (special classes of perfect graphs) or it has one of four different types of structural decomposition into simpler graphs. A minimally imperfect Berge graph cannot have any of these decompositions, from which it follows that no counterexample to the theorem can exist. This idea was based on previous conjectured structural decompositions of similar type that would have implied the strong perfect graph conjecture but turned out to be false.
The five basic classes of perfect graphs that form the base case of this structural decomposition are the bipartite graphs, line graphs of bipartite graphs, complementary graphs of bipartite graphs, complements of line graphs of bipartite graphs, and double split graphs. It is easy to see that bipartite graphs are perfect: in any nontrivial induced subgraph, the clique number and chromatic number are both two and therefore are equal. The perfection of complements of bipartite graphs, and of complements of line graphs of bipartite graphs, are both equivalent to Kőnig's theorem relating the sizes of maximum matchings, maximum independent sets, and minimum vertex covers in bipartite graphs. The perfection of line graphs of bipartite graphs can be stated equivalently as the fact that bipartite graphs have chromatic index equal to their maximum degree, proven by Kőnig. Thus, all four of these basic classes are perfect. The double split graphs are a relative of the split graphs that can also be shown to be perfect.
The four types of decompositions considered in this proof are 2-joins, complements of 2-joins, balanced skew partitions, and homogeneous pairs.
A 2-join is a partition of the vertices of a graph into two subsets, with the property that the edges spanning the cut between these two subsets form two vertex-disjoint complete bipartite graphs. When a graph has a 2-join, it may be decomposed into induced subgraphs called "blocks", by replacing one of the two subsets of vertices by a shortest path within that subset that connects one of the two complete bipartite graphs to the other; when no such path exists, the block is formed instead by replacing one of the two subsets of vertices by two vertices, one for each complete bipartite subgraph. A 2-join is perfect if and only if its two blocks are both perfect. Therefore, if a minimally imperfect graph has a 2-join, it must equal one of its blocks, from which it follows that it must be an odd cycle and not Berge. For the same reason, a minimally imperfect graph whose complement has a 2-join cannot be Berge.
A skew partition is a partition of a graph's vertices into two subsets, one of which induces a disconnected subgraph and the other of which has a disconnected complement; Chvátal had conjectured that no minimal counterexample to the strong perfect graph conjecture could have a skew partition. Chudnovsky et al. introduced some technical constraints on skew partitions, and were able to show that Chvátal's conjecture is true for the resulting "balanced skew partitions". The full conjecture is a corollary of the strong perfect graph theorem.
A homogeneous pair is related to a modular decomposition of a graph. It is a partition of the graph into three subsets V1, V2, and V3 such that V1 and V2 together contain at least three vertices, V3 contains at least two vertices, and for each vertex v in V3 and each i in {1,2} either v is adjacent to all vertices in Vi or to none of them. It is not possible for a minimally imperfect graph to have a homogeneous pair. Subsequent to the proof of the strong perfect graph conjecture, the proof was simplified by showing that homogeneous pairs could be eliminated from the set of decompositions used in the proof.
The proof that every Berge graph falls into one of the five basic classes or has one of the four types of decomposition follows a case analysis, according to whether certain configurations exist within the graph: a "stretcher", a subgraph that can be decomposed into three induced paths subject to certain additional constraints, the complement of a stretcher, and a "proper wheel", a configuration related to a wheel graph, consisting of an induced cycle together with a hub vertex adjacent to at least three cycle vertices and obeying several additional constraints. For each possible choice of whether a stretcher or its complement or a proper wheel exists within the given Berge graph, the graph can be shown to be in one of the basic classes or to be decomposable. This case analysis completes the proof.
Notes
References
External links
The Strong Perfect Graph Theorem, Václav Chvátal
Perfect graphs
Theorems in graph theory | Strong perfect graph theorem | [
"Mathematics"
] | 1,544 | [
"Theorems in graph theory",
"Theorems in discrete mathematics"
] |
746,495 | https://en.wikipedia.org/wiki/Bioreactor | A bioreactor is any manufactured device or system that supports a biologically active environment. In one case, a bioreactor is a vessel in which a chemical process is carried out which involves organisms or biochemically active substances derived from such organisms. This process can either be aerobic or anaerobic. These bioreactors are commonly cylindrical, ranging in size from litres to cubic metres, and are often made of stainless steel.
It may also refer to a device or system designed to grow cells or tissues in the context of cell culture. These devices are being developed for use in tissue engineering or biochemical/bioprocess engineering.
On the basis of mode of operation, a bioreactor may be classified as batch, fed batch or continuous (e.g. a continuous stirred-tank reactor model). An example of a continuous bioreactor is the chemostat.
Organisms or biochemically active substances growing in bioreactors may be submerged in liquid medium or may be anchored to the surface of a solid medium. Submerged cultures may be suspended or immobilized. Suspension bioreactors may support a wider variety of organisms, since special attachment surfaces are not needed, and can operate at a much larger scale than immobilized cultures. However, in a continuously operated process the organisms will be removed from the reactor with the effluent. Immobilization is a general term describing a wide variety of methods for cell or particle attachment or entrapment. It can be applied to basically all types of
biocatalysis including enzymes, cellular organelles, animal and plant cells and organs. Immobilization is useful for continuously operated processes, since the organisms will not be removed with the reactor effluent, but is limited in scale because the microbes are only present on the surfaces of the vessel.
Large scale immobilized cell bioreactors are:
moving media, also known as moving bed biofilm reactor (MBBR)
packed bed
fibrous bed
membrane
Design
Bioreactor design is a relatively complex engineering task, which is studied in the discipline of biochemical/bioprocess engineering. Under optimum conditions, the microorganisms or cells are able to perform their desired function with limited production of impurities. The environmental conditions inside the bioreactor, such as temperature, nutrient concentrations, pH, and dissolved gases (especially oxygen for aerobic fermentations) affect the growth and productivity of the organisms. The temperature of the fermentation medium is maintained by a cooling jacket, coils, or both. Particularly exothermic fermentations may require the use of external heat exchangers. Nutrients may be continuously added to the fermenter, as in a fed-batch system, or may be charged into the reactor at the beginning of fermentation. The pH of the medium is measured and adjusted with small amounts of acid or base, depending upon the fermentation. For aerobic (and some anaerobic) fermentations, reactant gases (especially oxygen) must be added to the fermentation. Since oxygen is relatively insoluble in water (the basis of nearly all fermentation media), air (or purified oxygen) must be added continuously. The action of the rising bubbles helps mix the fermentation medium and also "strips" out waste gases, such as carbon dioxide. In practice, bioreactors are often pressurized; this increases the solubility of oxygen in water. In an aerobic process, optimal oxygen transfer is sometimes the rate limiting step. Oxygen is poorly soluble in water—even less in warm fermentation broths—and is relatively scarce in air (20.95%). Oxygen transfer is usually helped by agitation, which is also needed to mix nutrients and to keep the fermentation homogeneous. Gas dispersing agitators are used to break up air bubbles and circulate them throughout the vessel.
Fouling can harm the overall efficiency of the bioreactor, especially the heat exchangers. To avoid it, the bioreactor must be easily cleaned. Interior surfaces are typically made of stainless steel for easy cleaning and sanitation. Typically bioreactors are cleaned between batches, or are designed to reduce fouling as much as possible when operated continuously. Heat transfer is an important part of bioreactor design; small vessels can be cooled with a cooling jacket, but larger vessels may require coils or an external heat exchanger.
Types
Photobioreactor
A photobioreactor (PBR) is a bioreactor which incorporates some type of light source (that may be natural sunlight or artificial illumination). Virtually any translucent container could be called a PBR, however the term is more commonly used to define a closed system, as opposed to an open storage tank or pond.
Photobioreactors are used to grow small phototrophic organisms such as cyanobacteria, algae, or moss plants. These organisms use light through photosynthesis as their energy source and do not require sugars or lipids as energy
source. Consequently, risk of contamination with other organisms like bacteria or fungi is lower in photobioreactors when compared to bioreactors for heterotroph organisms.
Sewage treatment
Conventional sewage treatment utilises bioreactors to undertake the main purification processes. In some of these systems, a chemically inert medium with very high surface area is provided as a substrate for the growth of biological film. Separation of excess biological film takes place in settling tanks or cyclones. In other systems aerators supply oxygen to the sewage and biota to create activated sludge in which the biological component is freely mixed in the liquor in "flocs". In these processes, the liquid's biochemical oxygen demand (BOD) is reduced sufficiently to render the contaminated water fit for reuse. The biosolids can be collected for further processing, or dried and used as fertilizer. An extremely simple version of a sewage bioreactor is a septic tank whereby the sewage is left in situ, with or without additional media to house bacteria. In this instance, the biosludge itself is the primary host for the bacteria.
Bioreactors for specialized tissues
Many cells and tissues, especially mammalian ones, must have a surface or other structural support in order to grow, and agitated environments are often destructive to these cell types and tissues. Higher organisms, being auxotrophic, also require highly specialized growth media. This poses a challenge when the goal is to culture larger quantities of cells for therapeutic production purposes, and a significantly different design is needed compared to industrial bioreactors used for growing protein expression systems such as yeast and bacteria.
Many research groups have developed novel bioreactors for growing specialized tissues and cells on a structural scaffold, in attempt to recreate organ-like tissue structures in-vitro. Among these include tissue bioreactors that can grow heart tissue, skeletal muscle tissue, ligaments, cancer tissue models, and others. Currently, scaling production of these specialized bioreactors for industrial use remains challenging and is an active area of research.
For more information on artificial tissue culture, see tissue engineering.
Modelling
Mathematical models act as an important tool in various bio-reactor applications including wastewater treatment. These models are useful for planning efficient process control strategies and predicting the future plant performance. Moreover, these models are beneficial in education and research areas.
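The article does not single out a specific model, but a common textbook starting point is Monod growth kinetics for a batch bioreactor. The sketch below integrates biomass and substrate balances with SciPy; all parameter values (mu_max, Ks, Yxs) and initial conditions are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not from the article).
mu_max = 0.4   # 1/h, maximum specific growth rate
Ks = 0.5       # g/L, half-saturation constant
Yxs = 0.5      # g biomass per g substrate, yield coefficient

def batch_bioreactor(t, state):
    X, S = state                           # biomass and substrate concentrations, g/L
    mu = mu_max * S / (Ks + S)             # Monod growth kinetics
    dXdt = mu * X                          # biomass balance
    dSdt = -mu * X / Yxs                   # substrate balance
    return [dXdt, dSdt]

sol = solve_ivp(batch_bioreactor, (0.0, 24.0), [0.1, 10.0])
print("final biomass = %.2f g/L, final substrate = %.2f g/L" % tuple(sol.y[:, -1]))
```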
Bioreactors are generally used in those industries which are concerned with food, beverages and pharmaceuticals. Biochemical engineering is a relatively recent discipline. The processing of biological materials using biological agents such as cells, enzymes or antibodies is a major pillar of biochemical engineering. Applications of biochemical engineering cover major fields of civilization such as agriculture, food and healthcare, resource recovery and fine chemicals.
Until now, the industries associated with biotechnology have lagged behind other industries in implementing control over the process and optimization strategies. A main drawback in biotechnological process control is the problem of measuring key physical and biochemical parameters.
Operational stages in a bio-process
A bioprocess is composed mainly of three stages—upstream processing, bioreaction, and downstream processing—to convert raw material to finished product.
The raw material can be of biological or non-biological origin. It is first converted to a more suitable form for processing. This is done in an upstream processing step which involves chemical hydrolysis, preparation of liquid medium, separation of particulate, air purification and many other preparatory operations.
After the upstream processing step, the resulting feed is transferred to one or more bioreaction stages. The biochemical reactors or bioreactors form the base of the bioreaction step. This step mainly consists of three operations, namely, production of biomass, metabolite biosynthesis and biotransformation.
Finally, the material produced in the bioreactor must be further processed in the downstream section to convert it into a more useful form. The downstream process mainly consists of physical separation operations which include solid liquid separation, adsorption, liquid-liquid extraction, distillation, drying etc.
Specifications
A typical bioreactor consists of following parts:
Agitator – Used for the mixing of the contents of the reactor which keeps the cells in the perfect homogenous condition for better transport of nutrients and oxygen to the desired product(s).
Baffle – Used to break the vortex formation in the vessel, which is usually highly undesirable as it changes the center of gravity of the system and consumes additional power.
Sparger – In aerobic cultivation process, the purpose of the sparger is to supply adequate oxygen to the growing cells.
Jacket – The jacket provides the annular area for circulation of constant temperature of water which keeps the temperature of the bioreactor at a constant value.
See also
ATP test
Biochemical engineering
Biofuel from algae
Biological hydrogen production (algae)
Bioprocessor
Bioreactor landfill
Biotechnology
Cell culture
Chemostat
Digester
Electro-biochemical reactor (EBR)
Hairy root culture
History of biotechnology
Hollow fiber bioreactor
Immobilized enzyme
Industrial biotechnology
Moving bed biofilm reactor
Septic tank
Single-use bioreactor
Tissue engineering
References
Further reading
Pauline M Doran, Bio-process Engineering Principles, Elsevier, 2nd ed., 2013
Biotechnology company
External links
Photo-bioreactor.
Biotechnology
Biological engineering
Biochemical engineering | Bioreactor | [
"Chemistry",
"Engineering",
"Biology"
] | 2,114 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Chemical engineering",
"Biochemical engineering",
"Biotechnology",
"Microbiology equipment",
"nan",
"Biochemistry"
] |
747,122 | https://en.wikipedia.org/wiki/Generalized%20linear%20model | In statistics, a generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.
Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression. They proposed an iteratively reweighted least squares method for maximum likelihood estimation (MLE) of the model parameters. MLE remains popular and is the default method on many statistical computing packages. Other approaches, including Bayesian regression and least squares fitting to variance stabilized responses, have been developed.
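A minimal sketch of iteratively reweighted least squares for the Bernoulli/logit case is shown below; it is a bare-bones illustration of the idea, not the implementation used by any particular statistical package, and the simulated data and coefficient values are assumptions.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit a logistic-regression GLM by iteratively reweighted least squares.
    X is an (n, p) design matrix (include a column of ones for an intercept);
    y is an (n,) vector of 0/1 responses."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                              # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))             # inverse logit link
        W = np.clip(mu * (1.0 - mu), 1e-10, None)   # Bernoulli variance function
        z = eta + (y - mu) / W                      # working response
        # Weighted least-squares update: beta = (X'WX)^{-1} X'Wz
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_beta = np.array([-0.5, 1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
print(irls_logistic(X, y))   # roughly [-0.5, 1.5]
```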
Intuition
Ordinary linear regression predicts the expected value of a given unknown quantity (the response variable, a random variable) as a linear combination of a set of observed values (predictors). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. a linear-response model). This is appropriate when the response variable can vary, to a good approximation, indefinitely in either direction, or more generally for any quantity that only varies by a relatively small amount compared to the variation in the predictive variables, e.g. human heights.
However, these assumptions are inappropriate for some types of response variables. For example, in cases where the response variable is expected to be always positive and varying over a wide range, constant input changes lead to geometrically (i.e. exponentially) varying, rather than constantly varying, output changes. As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of −950. Logically, a more realistic model would instead predict a constant rate of increased beach attendance (e.g. an increase of 10 degrees leads to a doubling in beach attendance, and a drop of 10 degrees leads to a halving in attendance). Such a model is termed an exponential-response model (or log-linear model, since the logarithm of the response is predicted to vary linearly).
Similarly, a model that predicts a probability of making a yes/no choice (a Bernoulli variable) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1). Imagine, for example, a model that predicts the likelihood of a given person going to the beach as a function of temperature. A reasonable model might predict, for example, that a change in 10 degrees makes a person two times more or less likely to go to the beach. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.). Rather, it is the odds that are doubling: from 2:1 odds, to 4:1 odds, to 8:1 odds, etc. Such a model is a log-odds or logistic model.
Generalized linear models cover all these situations by allowing for response variables that have arbitrary distributions (rather than simply normal distributions), and for an arbitrary function of the response variable (the link function) to vary linearly with the predictors (rather than assuming that the response itself must vary linearly). For example, the case above of predicted number of beach attendees would typically be modeled with a Poisson distribution and a log link, while the case of predicted probability of beach attendance would typically be modelled with a Bernoulli distribution (or binomial distribution, depending on exactly how the problem is phrased) and a log-odds (or logit) link function.
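The beach-attendance example above can be made concrete with a Poisson GLM and a log link. The sketch below uses the statsmodels library on simulated counts; the temperature effect of 0.08 per degree and the other numbers are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Simulated "beach attendance" counts driven by temperature (illustrative data).
rng = np.random.default_rng(1)
temperature = rng.uniform(10, 35, size=200)
expected = np.exp(1.0 + 0.08 * temperature)          # log link: multiplicative effect
attendance = rng.poisson(expected)

X = sm.add_constant(temperature)
poisson_fit = sm.GLM(attendance, X, family=sm.families.Poisson()).fit()
print(poisson_fit.params)    # roughly [1.0, 0.08]
# exp(10 * 0.08) is about 2.2, i.e. a 10-degree rise roughly doubles expected attendance.
```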
Overview
In a generalized linear model (GLM), each outcome Y of the dependent variables is assumed to be generated from a particular distribution in an exponential family, a large class of probability distributions that includes the normal, binomial, Poisson and gamma distributions, among others. The conditional mean μ of the distribution depends on the independent variables X through:
$\operatorname{E}(\mathbf{Y} \mid \mathbf{X}) = \boldsymbol\mu = g^{-1}(\mathbf{X}\boldsymbol\beta)$

where E(Y | X) is the expected value of Y conditional on X; Xβ is the linear predictor, a linear combination of unknown parameters β; g is the link function.
In this framework, the variance is typically a function, V, of the mean:

$\operatorname{Var}(\mathbf{Y} \mid \mathbf{X}) = \operatorname{V}(\boldsymbol\mu) = \operatorname{V}\!\left(g^{-1}(\mathbf{X}\boldsymbol\beta)\right)$
It is convenient if V follows from an exponential family of distributions, but it may simply be that the variance is a function of the predicted value.
The unknown parameters, β, are typically estimated with maximum likelihood, maximum quasi-likelihood, or Bayesian techniques.
Model components
The GLM consists of three elements:
1. A particular distribution for modeling from among those which are considered exponential families of probability distributions,
2. A linear predictor $\eta = \mathbf{X}\boldsymbol\beta$, and
3. A link function $g$ such that $\operatorname{E}(\mathbf{Y} \mid \mathbf{X}) = \boldsymbol\mu = g^{-1}(\eta)$.
Probability distribution
An overdispersed exponential family of distributions is a generalization of an exponential family and the exponential dispersion model of distributions and includes those families of probability distributions, parameterized by $\boldsymbol\theta$ and $\tau$, whose density functions f (or probability mass function, for the case of a discrete distribution) can be expressed in the form

$f_Y(\mathbf{y} \mid \boldsymbol\theta, \tau) = h(\mathbf{y}, \tau) \exp\!\left( \frac{\mathbf{b}(\boldsymbol\theta)^{\mathsf T} \mathbf{T}(\mathbf{y}) - A(\boldsymbol\theta)}{d(\tau)} \right)$
The dispersion parameter, $\tau$, typically is known and is usually related to the variance of the distribution. The functions $h(\mathbf{y}, \tau)$, $\mathbf{b}(\boldsymbol\theta)$, $\mathbf{T}(\mathbf{y})$, $A(\boldsymbol\theta)$, and $d(\tau)$ are known. Many common distributions are in this family, including the normal, exponential, gamma, Poisson, Bernoulli, and (for fixed number of trials) binomial, multinomial, and negative binomial.
For scalar Y and θ (denoted y and θ in this case), this reduces to

f_Y(y | θ, τ) = h(y, τ) exp( (b(θ) T(y) − A(θ)) / d(τ) ).
θ is related to the mean of the distribution. If b(θ) is the identity function, then the distribution is said to be in canonical form (or natural form). Note that any distribution can be converted to canonical form by rewriting θ as θ′ and then applying the transformation θ = b(θ′). It is always possible to express A(θ) in terms of the new parametrization, even if b(θ′) is not a one-to-one function; see comments in the page on exponential families.
If, in addition, T(y) and d(τ) are the identity, then θ is called the canonical parameter (or natural parameter) and is related to the mean through

μ = E(Y) = ∇A(θ).

For scalar Y and θ, this reduces to

μ = E(Y) = A′(θ).

Under this scenario, the variance of the distribution can be shown to be

Var(Y) = ∇∇ᵀA(θ) d(τ).

For scalar Y and θ, this reduces to

Var(Y) = A″(θ) d(τ).
Linear predictor
The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbol η (Greek "eta") denotes a linear predictor. It is related to the expected value of the data through the link function.
η is expressed as linear combinations (thus, "linear") of unknown parameters β. The coefficients of the linear combination are represented as the matrix of independent variables X. η can thus be expressed as

η = Xβ.
Link function
The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined canonical link function which is derived from the exponential of the response's density function. However, in some cases it makes sense to try to match the domain of the link function to the range of the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for example Bayesian probit regression.
When using a distribution function with a canonical parameter θ, the canonical link function is the function that expresses θ in terms of μ, i.e. θ = b(μ). For the most common distributions, the mean μ is one of the parameters in the standard form of the distribution's density function, and then b(μ) is the function as defined above that maps the density function into its canonical form. When using the canonical link function, θ = b(μ) = Xβ, which allows XᵀY to be a sufficient statistic for β.
Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here).
In the cases of the exponential and gamma distributions, the domain of the canonical link function is not the same as the permitted range of the mean. In particular, the linear predictor may be positive, which would give an impossible negative mean. When maximizing the likelihood, precautions must be taken to avoid this. An alternative is to use a noncanonical link function.
In the case of the Bernoulli, binomial, categorical and multinomial distributions, the support of the distributions is not the same type of data as the parameter being predicted. In all of these cases, the predicted parameter is one or more probabilities, i.e. real numbers in the range [0, 1]. The resulting model is known as logistic regression (or multinomial logistic regression in the case that K-way rather than binary values are being predicted).
For the Bernoulli and binomial distributions, the parameter is a single probability, indicating the likelihood of occurrence of a single event. The Bernoulli still satisfies the basic condition of the generalized linear model in that, even though a single outcome will always be either 0 or 1, the expected value will nonetheless be a real-valued probability, i.e. the probability of occurrence of a "yes" (or 1) outcome. Similarly, in a binomial distribution, the expected value is Np, i.e. the expected proportion of "yes" outcomes will be the probability to be predicted.
For categorical and multinomial distributions, the parameter to be predicted is a K-vector of probabilities, with the further restriction that all probabilities must add up to 1. Each probability indicates the likelihood of occurrence of one of the K possible values. For the multinomial distribution, and for the vector form of the categorical distribution, the expected values of the elements of the vector can be related to the predicted probabilities similarly to the binomial and Bernoulli distributions.
Fitting
Maximum likelihood
The maximum likelihood estimates can be found using an iteratively reweighted least squares algorithm or a Newton's method with updates of the form:

β^(t+1) = β^(t) + J⁻¹(β^(t)) u(β^(t)),

where J(β^(t)) is the observed information matrix (the negative of the Hessian matrix) and u(β^(t)) is the score function; or a Fisher's scoring method:

β^(t+1) = β^(t) + I⁻¹(β^(t)) u(β^(t)),

where I(β^(t)) is the Fisher information matrix. Note that if the canonical link function is used, then they are the same.
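As a rough illustration of Fisher scoring for a GLM with canonical link (where it coincides with Newton's method), here is a minimal NumPy sketch for a Bernoulli/logit model; the synthetic data, coefficients and iteration count are assumptions made only for demonstration:

```python
import numpy as np

def fit_logistic_glm(X, y, n_iter=25):
    """Fisher scoring for a Bernoulli GLM with the canonical logit link.
    X is assumed to include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))        # inverse logit link
        W = mu * (1.0 - mu)                    # variance function V(mu) for Bernoulli
        score = X.T @ (y - mu)                 # score function u(beta)
        fisher_info = X.T @ (X * W[:, None])   # expected information X^T W X
        beta = beta + np.linalg.solve(fisher_info, score)
    return beta

# Tiny synthetic example (assumed data, for illustration only)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_beta = np.array([-0.5, 1.2])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))
print(fit_logistic_glm(X, y))   # estimates should land near true_beta
```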
Bayesian methods
In general, the posterior distribution cannot be found in closed form and so must be approximated, usually using Laplace approximations or some type of Markov chain Monte Carlo method such as Gibbs sampling.
Examples
General linear models
A possible point of confusion has to do with the distinction between generalized linear models and general linear models, two broad statistical models. Co-originator John Nelder has expressed regret over this terminology.
The general linear model may be viewed as a special case of the generalized linear model with identity link and responses normally distributed. As most exact results of interest are obtained only for the general linear model, the general linear model has undergone a somewhat longer historical development. Results for the generalized linear model with non-identity link are asymptotic (tending to work well with large samples).
Linear regression
A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. In linear regression, the use of the least-squares estimator is justified by the Gauss–Markov theorem, which does not assume that the distribution is normal.
From the perspective of generalized linear models, however, it is useful to suppose that the distribution function is the normal distribution with constant variance and the link function is the identity, which is the canonical link if the variance is known. Under these assumptions, the least-squares estimator is obtained as the maximum-likelihood parameter estimate.
For the normal distribution, the generalized linear model has a closed form expression for the maximum-likelihood estimates, which is convenient. Most other GLMs lack closed form estimates.
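A minimal sketch of that closed form, assuming synthetic data generated purely for illustration: for the normal-distribution GLM with identity link, the maximum-likelihood estimate is the usual least-squares solution.

```python
import numpy as np

# Least squares (X^T X)^{-1} X^T y is also the ML estimate for this GLM.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([2.0, -3.0]) + rng.normal(scale=0.5, size=100)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to [2.0, -3.0]
```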
Binary data
When the response data, Y, are binary (taking on only values 0 and 1), the distribution function is generally chosen to be the Bernoulli distribution and the interpretation of μi is then the probability, p, of Yi taking on the value one.
There are several popular link functions for binomial data.
Logit link function
The most typical link function is the canonical logit link:

g(p) = ln( p / (1 − p) ).
GLMs with this setup are logistic regression models (or logit models).
Probit link function as popular choice of inverse cumulative distribution function
Alternatively, the inverse of any continuous cumulative distribution function (CDF) can be used for the link since the CDF's range is [0, 1], the range of the binomial mean. The normal CDF Φ is a popular choice and yields the probit model. Its link is

g(p) = Φ⁻¹(p).
The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.)
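That near-equivalence can be checked numerically. The sketch below assumes SciPy is available and uses the commonly quoted scaling constant of roughly 1.702; both the grid of values and the constant are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm

eta = np.linspace(-4, 4, 9)
logit_inverse = 1.0 / (1.0 + np.exp(-eta))        # inverse logit
# Rescaling the argument by about 1.702 makes the normal CDF track the
# inverse logit closely over the whole real line.
probit_inverse_scaled = norm.cdf(eta / 1.702)
print(np.max(np.abs(logit_inverse - probit_inverse_scaled)))  # small (below ~0.01)
```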
Complementary log-log (cloglog)
The complementary log-log function may also be used:

g(p) = log( −log(1 − p) ).
This link function is asymmetric and will often produce different results from the logit and probit link functions. The cloglog model corresponds to applications where we observe either zero events (e.g., defects) or one or more, where the number of events is assumed to follow the Poisson distribution. The Poisson assumption means that

Pr(0) = exp(−μ),

where μ is a positive number denoting the expected number of events. If p represents the proportion of observations with at least one event, its complement

1 − p = Pr(0) = exp(−μ),

and then

−log(1 − p) = μ.

A linear model requires the response variable to take values over the entire real line. Since μ must be positive, we can enforce that by taking the logarithm, and letting log(μ) be a linear model. This produces the "cloglog" transformation

log(−log(1 − p)) = log(μ).
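A quick numerical check of this derivation (the μ values below are arbitrary assumptions):

```python
import numpy as np

mu = np.array([0.05, 0.5, 2.0, 10.0])       # assumed expected event counts
p = 1.0 - np.exp(-mu)                        # P(at least one event) under Poisson
cloglog_p = np.log(-np.log(1.0 - p))         # complementary log-log of p
print(np.allclose(cloglog_p, np.log(mu)))    # True: cloglog(p) equals log(mu)
```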
Identity link
The identity link g(p) = p is also sometimes used for binomial data to yield a linear probability model. However, the identity link can predict nonsense "probabilities" less than zero or greater than one. This can be avoided by using a transformation like cloglog, probit or logit (or any inverse cumulative distribution function). A primary merit of the identity link is that it can be estimated using linear math; moreover, the other standard link functions are approximately linear near p = 0.5, where they match the identity link.
Variance function
The variance function for "quasibinomial" data is:

Var(Yᵢ) = τ μᵢ (1 − μᵢ),
where the dispersion parameter τ is exactly 1 for the binomial distribution. Indeed, the standard binomial likelihood omits τ. When it is present, the model is called "quasibinomial", and the modified likelihood is called a quasi-likelihood, since it is not generally the likelihood corresponding to any real family of probability distributions. If τ exceeds 1, the model is said to exhibit overdispersion.
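The following sketch illustrates overdispersion by comparing ordinary binomial counts with counts whose success probability varies between units (a beta-binomial mixture); all parameters are assumed for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, p = 20, 0.3

# Ordinary binomial counts: variance is about n*p*(1-p).
binom = rng.binomial(n_trials, p, size=20000)

# Overdispersed counts: each unit gets its own success probability drawn
# from a Beta(3, 7) distribution (mean 0.3), inflating the variance.
p_varying = rng.beta(3, 7, size=20000)
overdispersed = rng.binomial(n_trials, p_varying)

expected_var = n_trials * p * (1 - p)
print(binom.var() / expected_var)          # close to 1
print(overdispersed.var() / expected_var)  # noticeably greater than 1
```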
Multinomial regression
The binomial case may be easily extended to allow for a multinomial distribution as the response (also, a Generalized Linear Model for counts, with a constrained total). There are two ways in which this is usually done:
Ordered response
If the response variable is ordinal, then one may fit a model function of the form:
for m > 2. Different links g lead to ordinal regression models like proportional odds models or ordered probit models.
Unordered response
If the response variable is a nominal measurement, or the data do not satisfy the assumptions of an ordered model, one may fit a model of the following form:
for m > 2. Different links g lead to multinomial logit or multinomial probit models. These are more general than the ordered response models, and more parameters are estimated.
Count data
Another example of generalized linear models includes Poisson regression which models count data using the Poisson distribution. The link is typically the logarithm, the canonical link.
The variance function is proportional to the mean:

Var(Yᵢ) = τ μᵢ,
where the dispersion parameter τ is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion or quasi-Poisson.
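A minimal Poisson-regression example, assuming the Python statsmodels package is available and using synthetic data (the coefficients 0.3 and 0.9 are arbitrary):

```python
import numpy as np
import statsmodels.api as sm

# Assumed synthetic count data; the log link is the canonical (default) choice.
rng = np.random.default_rng(2)
x = rng.uniform(0, 2, size=300)
X = sm.add_constant(x)
y = rng.poisson(np.exp(0.3 + 0.9 * x))

model = sm.GLM(y, X, family=sm.families.Poisson())
result = model.fit()
print(result.params)   # estimates should be near [0.3, 0.9]
```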
Extensions
Correlated or clustered data
The standard GLM assumes that the observations are uncorrelated. Extensions have been developed to allow for correlation between observations, as occurs for example in longitudinal studies and clustered designs:
Generalized estimating equations (GEEs) allow for the correlation between observations without the use of an explicit probability model for the origin of the correlations, so there is no explicit likelihood. They are suitable when the random effects and their variances are not of inherent interest, as they allow for the correlation without explaining its origin. The focus is on estimating the average response over the population ("population-averaged" effects) rather than the regression parameters that would enable prediction of the effect of changing one or more components of X on a given individual. GEEs are usually used in conjunction with Huber–White standard errors.
Generalized linear mixed models (GLMMs) are an extension to GLMs that includes random effects in the linear predictor, giving an explicit probability model that explains the origin of the correlations. The resulting "subject-specific" parameter estimates are suitable when the focus is on estimating the effect of changing one or more components of X on a given individual. GLMMs are also referred to as multilevel models and as mixed model. In general, fitting GLMMs is more computationally complex and intensive than fitting GEEs.
Generalized additive models
Generalized additive models (GAMs) are another extension to GLMs in which the linear predictor η is not restricted to be linear in the covariates X but is the sum of smoothing functions applied to the xᵢ:

η = β₀ + f₁(x₁) + f₂(x₂) + ⋯
The smoothing functions fi are estimated from the data. In general this requires a large number of data points and is computationally intensive.
See also
Vector generalized linear model (VGLM)
Generalized estimating equation
References
Citations
Bibliography
Further reading
External links
Actuarial science
Regression models | Generalized linear model | [
"Mathematics"
] | 3,847 | [
"Applied mathematics",
"Actuarial science"
] |
747,222 | https://en.wikipedia.org/wiki/Silicon%20on%20sapphire | Silicon on sapphire (SOS) is a hetero-epitaxial process for metal–oxide–semiconductor (MOS) integrated circuit (IC) manufacturing that consists of a thin layer (typically thinner than 0.6 μm) of silicon grown on a sapphire () wafer. SOS is part of the silicon-on-insulator (SOI) family of CMOS (complementary MOS) technologies.
Typically, high-purity artificially grown sapphire crystals are used. The silicon is usually deposited by the decomposition of silane gas () on heated sapphire substrates. The advantage of sapphire is that it is an excellent electrical insulator, preventing stray currents caused by radiation from spreading to nearby circuit elements. SOS faced early challenges in commercial manufacturing because of difficulties in fabricating the very small transistors used in modern high-density applications. This is because the SOS process results in the formation of dislocations, twinning and stacking faults from crystal lattice disparities between the sapphire and silicon. Additionally, there is some aluminum, a p-type dopant, contamination from the substrate in the silicon closest to the interface.
History
In 1963, Harold M. Manasevit was the first to document epitaxial growth of silicon on sapphire while working at the Autonetics division of North American Aviation (now Boeing). In 1964, he published his findings with colleague William Simpson in the Journal of Applied Physics. In 1965, C.W. Mueller and P.H. Robinson fabricated a MOSFET (metal–oxide–semiconductor field-effect transistor) using the silicon-on-sapphire process at RCA Laboratories.
SOS was first used in aerospace and military applications because of its inherent resistance to radiation. More recently, patented advancements in SOS processing and design have been made by Peregrine Semiconductor, allowing SOS to be commercialized in high-volume for high-performance radio-frequency (RF) applications.
Circuits and systems
The advantages of the SOS technology allow research groups to fabricate a variety of SOS circuits and systems that benefit from the technology and advance the state-of-the-art in:
analog-to-digital converters (a nano-Watts prototype was produced by Yale e-Lab)
monolithic digital isolation buffers
SOS-CMOS image sensor arrays (one of the first standard CMOS image sensor arrays capable of transducing light simultaneously from both sides of the die was produced by Yale e-Lab)
patch-clamp amplifiers
energy harvesting devices
three-dimensional (3D) integration with no galvanic connections
charge pumps
temperature sensors
early microprocessors, such as the RCA 1802
Applications
Silicon on sapphire pressure transducer, pressure transmitter and temperature sensor diaphragms have been manufactured using a patented process by Armen Sahagen since 1985. Outstanding performance in high temperature environments helped propel this technology forward. This SOS technology has been licensed throughout the world. ESI Technology Ltd. in the UK have developed a wide range of pressure transducers and pressure transmitters that benefit from the outstanding features of silicon on sapphire.
Peregrine Semiconductor has used SOS technology to develop RF integrated circuits (RFICs) including RF switches, digital step attenuators (DSAs), phase locked-loop (PLL) frequency synthesizers, prescalers, mixers/upconverters, and variable-gain amplifiers. These RFICs are designed for commercial RF applications such as mobile handsets and cellular infrastructure, broadband consumer and DTV, test and measurement, and industrial public safety, as well as rad-hard aerospace and defense markets.
Hewlett-Packard used SOS in some of their CPU designs, particularly in the HP 3000 line of computers.
Silicon on sapphire chips produced in the 1970s proved superior in performance to their all silicon counterparts, but this came at the cost of lower yields of just 9%.
Substrate analysis: SOS structure
The application of epitaxial growth of silicon on sapphire substrates for fabricating MOS devices involves a silicon purification process that mitigates crystal defects which result from a mismatch between sapphire and silicon lattices. For example, Peregrine Semiconductor's SP4T switch is formed on an SOS substrate where the final thickness of silicon is approximately 95 nm. Silicon is recessed in regions outside the polysilicon gate stack by poly oxidation and further recessed by the sidewall spacer formation process to a thickness of approximately 78 nm.
See also
Silicon on insulator
Radiation hardening
References
Further reading
Thin film deposition
Semiconductor device fabrication
MOSFETs
Silicon | Silicon on sapphire | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 942 | [
"Microtechnology",
"Thin film deposition",
"Coatings",
"Thin films",
"Semiconductor device fabrication",
"Planes (geometry)",
"Solid state engineering"
] |
747,290 | https://en.wikipedia.org/wiki/Silicon%20on%20insulator | In semiconductor manufacturing, silicon on insulator (SOI) technology is fabrication of silicon semiconductor devices in a layered silicon–insulator–silicon substrate, to reduce parasitic capacitance within the device, thereby improving performance. SOI-based devices differ from conventional silicon-built devices in that the silicon junction is above an electrical insulator, typically silicon dioxide or sapphire (these types of devices are called silicon on sapphire, or SOS). The choice of insulator depends largely on intended application, with sapphire being used for high-performance radio frequency (RF) and radiation-sensitive applications, and silicon dioxide for diminished short-channel effects in other microelectronics devices. The insulating layer and topmost silicon layer also vary widely with application.
Industry need
SOI technology is one of several manufacturing strategies to allow the continued miniaturization of microelectronic devices, colloquially referred to as "extending Moore's Law" (or "More Moore", abbreviated "MM"). Reported benefits of SOI relative to conventional silicon (bulk CMOS) processing include:
Lower parasitic capacitance due to isolation from the bulk silicon, which improves power consumption at matched performance
Resistance to latchup due to complete isolation of the n- and p-well structures
Higher performance at equivalent VDD. Can work at low VDDs
Reduced temperature dependency due to no doping
Better yield due to high density, better wafer utilization
Reduced antenna issues
No body or well taps are needed
Lower leakage currents due to isolation thus higher power efficiency
Inherently radiation hardened (resistant to soft errors), reducing the need for redundancy
From a manufacturing perspective, SOI substrates are compatible with most conventional fabrication processes. In general, an SOI-based process may be implemented without special equipment or significant retooling of an existing factory. Among challenges unique to SOI are novel metrology requirements to account for the buried oxide layer and concerns about differential stress in the topmost silicon layer. The threshold voltage of the transistor depends on the history of operation and applied voltage to it, thus making modeling harder.
The primary barrier to SOI implementation is the drastic increase in substrate cost, which contributes an estimated 10–15% increase to total manufacturing costs. FD-SOI (Fully Depleted Silicon On Insulator) has been seen as a potential low cost alternative to FinFETs.
SOI transistors
An SOI MOSFET is a metal–oxide–semiconductor field-effect transistor (MOSFET) device in which a semiconductor layer such as silicon or germanium is formed on an insulator layer which may be a buried oxide (BOX) layer formed in a semiconductor substrate. SOI MOSFET devices are adapted for use by the computer industry. The buried oxide layer can be used in SRAM designs. There are two types of SOI devices: PDSOI (partially depleted SOI) and FDSOI (fully depleted SOI) MOSFETs. For an n-type PDSOI MOSFET the sandwiched n-type film between the gate oxide (GOX) and buried oxide (BOX) is large, so the depletion region can't cover the whole n region. So to some extent PDSOI behaves like bulk MOSFET. Obviously there are some advantages over the bulk MOSFETs. The film is very thin in FDSOI devices so that the depletion region covers the whole channel region. In FDSOI the front gate (GOX) supports fewer depletion charges than the bulk so an increase in inversion charges occurs resulting in higher switching speeds. The limitation of the depletion charge by the BOX induces a suppression of the depletion capacitance and therefore a substantial reduction of the subthreshold swing allowing FD SOI MOSFETs to work at lower gate bias resulting in lower power operation. The subthreshold swing can reach the minimum theoretical value for MOSFET at 300K, which is 60mV/decade. This ideal value was first demonstrated using numerical simulation. Other drawbacks in bulk MOSFETs, like threshold voltage roll off, etc. are reduced in FDSOI since the source and drain electric fields can't interfere due to the BOX. The main problem in PDSOI is the "floating body effect (FBE)" since the film is not connected to any of the supplies.
Manufacture of SOI wafers
SiO2-based SOI wafers can be produced by several methods:
SIMOX - Separation by IMplantation of OXygen – uses an oxygen ion beam implantation process followed by high temperature annealing to create a buried layer.
Wafer bonding – the insulating layer is formed by directly bonding oxidized silicon with a second substrate. The majority of the second substrate is subsequently removed, the remnants forming the topmost Si layer.
One prominent example of a wafer bonding process is the Smart Cut method developed by the French firm Soitec which uses ion implantation followed by controlled exfoliation to determine the thickness of the uppermost silicon layer.
NanoCleave is a technology developed by Silicon Genesis Corporation that separates the silicon via stress at the interface of silicon and silicon-germanium alloy.
ELTRAN is a technology developed by Canon which is based on porous silicon and water cut.
Seed methods - wherein the topmost Si layer is grown directly on the insulator. Seed methods require some sort of template for homoepitaxy, which may be achieved by chemical treatment of the insulator, an appropriately oriented crystalline insulator, or vias through the insulator from the underlying substrate.
An exhaustive review of these various manufacturing processes may be found in the references.
Use in the microelectronics industry
IBM began to use SOI in the high-end RS64-IV "Istar" PowerPC-AS microprocessor in 2000. Other examples of microprocessors built on SOI technology include AMD's 130 nm, 90 nm, 65 nm, 45 nm and 32 nm single, dual, quad, six and eight core processors since 2001. Freescale adopted SOI in their PowerPC 7455 CPU in late 2001, currently Freescale is shipping SOI products in 180 nm, 130 nm, 90 nm and 45 nm lines. The 90 nm PowerPC- and Power ISA-based processors used in the Xbox 360, PlayStation 3, and Wii use SOI technology as well. Competitive offerings from Intel however continue to use conventional bulk CMOS technology for each process node, instead focusing on other venues such as HKMG and tri-gate transistors to improve transistor performance. In January 2005, Intel researchers reported on an experimental single-chip silicon rib waveguide Raman laser built using SOI.
As for the traditional foundries, in July 2006 TSMC claimed no customer wanted SOI, but Chartered Semiconductor devoted a whole fab to SOI.
Use in high-performance radio frequency (RF) applications
In 1990, Peregrine Semiconductor began development of an SOI process technology utilizing a standard 0.5 μm CMOS node and an enhanced sapphire substrate. Its patented silicon on sapphire (SOS) process is widely used in high-performance RF applications. The intrinsic benefits of the insulating sapphire substrate allow for high isolation, high linearity and electro-static discharge (ESD) tolerance. Multiple other companies have also applied SOI technology to successful RF applications in smartphones and cellular radios.
Use in photonics
SOI wafers are widely used in silicon photonics. The crystalline silicon layer on insulator can be used to fabricate optical waveguides and other optical devices, either passive or active (e.g. through suitable implantations). The buried insulator enables propagation of infrared light in the silicon layer on the basis of total internal reflection. The top surface of the waveguides can be either left uncovered and exposed to air (e.g. for sensing applications), or covered with a cladding, typically made of silica.
Disadvantages
The major disadvantage of SOI technology when compared to conventional semiconductor industry is increased cost of manufacturing. As of 2012 only IBM and AMD used SOI as basis for high-performance processors and the other manufacturers (Intel, TSMC, Global Foundries etc.) used conventional silicon wafers to build their CMOS chips.
SOI market
As of 2020 the market utilizing the SOI process was projected to grow up by ~15% for the next 5 years according to Market Research Future group.
See also
Intel TeraHertz - similar technology from Intel
Strain engineering
Wafer (electronics)
Wafer bonding
References
External links
SOI Industry Consortium - a site with extensive information and education for SOI technology
SOI IP portal - A search engine for SOI IP
AMDboard - a site with extensive information regarding SOI technology
Advanced Substrate News - a newsletter about the SOI industry, produced by Soitec
MIGAS '04 - The 7th session of MIGAS International Summer School on Advanced Microelectronics, devoted to SOI technology and devices
MIGAS '09 - 12th session of the International Summer School on Advanced Microelectronics: "Silicon on Insulator (SOI) Nanodevices"
Semiconductor structures
Semiconductor technology
Microtechnology
MOSFETs
Nanoelectronics
Semiconductor device fabrication
Silicon | Silicon on insulator | [
"Materials_science",
"Engineering"
] | 1,903 | [
"Microtechnology",
"Materials science",
"Semiconductor device fabrication",
"Nanoelectronics",
"Nanotechnology",
"Semiconductor technology"
] |
12,661,414 | https://en.wikipedia.org/wiki/Field%20emitter%20array | A field emitter array (FEA) is a particular form of large-area field electron source. FEAs are prepared on a silicon substrate by lithographic techniques similar to those used in the fabrication of integrated circuits. Their structure consists of many individual, similar, small-field electron emitters, usually organized in a regular two-dimensional pattern. FEAs need to be distinguished from "film" or "mat" type large-area sources, where a thin film-like layer of material is deposited onto a substrate, using a uniform deposition process, in the hope or expectation that (as a result of statistical irregularities in the process) this film will contain a sufficiently large number of individual emission sites.
Spindt arrays
The original field emitter array was the Spindt array, in which the individual field emitters are small sharp molybdenum cones. Each is deposited inside a cylindrical void in an oxide film, with a counterelectrode deposited on the top of the film. The counterelectrode (called the "gate") contains a separate circular aperture for each conical emitter. The device is named after Charles A. Spindt, who developed this technology at SRI International, publishing the first article describing a single emitter tip microfabricated on a wafer in 1968.
Spindt, Shoulders and Heynick filed a U.S. Patent in 1970 for a vacuum device comprising an array of emitter tips.
Each individual cone is referred to as a Spindt tip. Because Spindt tips have sharp apices, they can generate a high local electric field using a relatively low gate voltage (less than 100 V). Using lithographic manufacturing techniques, individual emitters can be packed extremely close together, resulting in a high average (or "macroscopic") current density of up to 2×10^7 A/m^2. Spindt-type emitters have a higher emission intensity and a more narrow angular distribution than other FEA technologies.
nano-Spindt arrays
Nano-Spindt arrays represent an evolution of the traditional Spindt-type emitter. Each individual tip is several orders of magnitude smaller; as a result, gate voltages can be lower, since the distance from tip to gate is reduced. In addition, the current extracted from each individual tip is lower, which should result in improved reliability.
Carbon Nanotube (CNT) arrays
An alternative form of FEA is fabricated by creating voids in an oxide film (as for a Spindt array) and then using standard methods to grow one or more carbon nanotubes (CNTs) in each void.
It is also possible to grow "free-standing" CNT arrays.
Applications
Essentially very small electron beam generators, FEAs, have been applied in many different domains. FEAs have been used to create flat panel displays (where they are known as field emission displays (or "nano-emissive displays"). They may also be used in microwave generators, and in RF communications, where they could serve as the cathode in traveling wave tubes (TWTs).
Recently, there has been renewed interest in using field effect arrays as cold cathodes in X-ray tubes. FEAs offer a number of potential advantages over conventional thermionic cathodes, including low power consumption, instantaneous switching, and independence of current and voltage.
References
See also
Field emission display
Field electron emission
Vacuum tubes using field electron emitters
Cold cathode
Vacuum tubes | Field emitter array | [
"Physics"
] | 707 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
12,662,075 | https://en.wikipedia.org/wiki/Enrique%20Loedel%20Palumbo | Enrique Loedel Palumbo (Montevideo Uruguay, June 29, 1901 – La Plata Argentina, July 31, 1962) was an Uruguayan physicist.
Loedel Palumbo was born in Montevideo, Uruguay and studied at the University of La Plata in Argentina. His doctoral advisor was the German physicist of Jewish origin Richard Gans. Loedel wrote his Ph.D. thesis in December 1925 on optical and electrical constants of sugar cane. An extract of the thesis was published in German in Annalen der Physik in 1926. He then began his career as professor in La Plata.
During Einstein's visit to Argentina in 1925 they had a conversation about the differential equation of a point-source gravitational field, which resulted in a paper published by Loedel in Physikalische Zeitschrift. It is claimed that this is the first research paper on relativity ever published by a Latin American scientist.
Loedel Palumbo then spent some time in Germany working with Erwin Schrödinger and Max Planck. He returned to Argentina in 1930 and from there on concentrated on teaching. He published several scientific papers during his career in international journals and wrote several books (in Spanish).
Loedel diagram
Max Born (1920) and systematically Paul Gruner (1921) introduced symmetric Minkowski diagrams in German and French papers, where the ct'-axis is perpendicular to the x-axis, as well as the ct-axis perpendicular to the x'-axis (for sources and historical details, see Loedel diagram).
In 1948 and in subsequent papers, Loedel independently rediscovered such diagrams. They were again rediscovered in 1955 by Henri Amar, who subsequently wrote in 1957 in American Journal of Physics: "I regret my unfamiliarity with South American literature and wish to acknowledge the priority of Professor Loedel's work", along with a note by Loedel Palumbo citing his publications on the geometrical representation of Lorentz transformations. Those diagrams are therefore called "Loedel diagrams", and have been cited by some textbook authors on the subject.
Suppose there are two collinear velocities v and w. How does one find the frame of reference in which the velocities become equal speeds in opposite directions? One solution uses modern algebra to find it:
Suppose v = tanh a and w = tanh b (in units where c = 1), so that a and b are rapidities corresponding to velocities v and w. Let m = (a + b)/2, the midpoint rapidity. The transformation

z ↦ exp(−mj) z

of the split-complex number plane represents the required transformation, since a boost with rapidity a corresponds to multiplication by exp(aj) (where j² = +1) and

exp(−mj) exp(aj) = exp((a − m)j)

and

exp(−mj) exp(bj) = exp((b − m)j) = exp(−(a − m)j).

As the exponents are additive inverses of each other, the images represent equal speeds in opposite directions.
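A small numerical sketch of the midpoint-rapidity construction (units with c = 1; the example speeds are arbitrary assumptions):

```python
import numpy as np

def midpoint_frame(v, w):
    """Velocity of the frame in which collinear velocities v and w
    become equal speeds in opposite directions."""
    a, b = np.arctanh(v), np.arctanh(w)   # rapidities of the two velocities
    return np.tanh((a + b) / 2.0)         # velocity of the midpoint-rapidity frame

def boost(velocity, frame_velocity):
    """Relativistic velocity addition: velocity as seen from the moving frame."""
    return (velocity - frame_velocity) / (1.0 - velocity * frame_velocity)

v, w = 0.6, 0.9                           # assumed example speeds
u = midpoint_frame(v, w)
print(boost(v, u), boost(w, u))           # equal magnitudes, opposite signs
```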
Publications
Física Elemental, Estrada Editorial, Argentina (1941).
Cosmografía (o Elementos de Astronomía), Editorial Estrada, Argentina, 1941.
"Versos de un físico. Física y razón vital.", La Plata, 1934.
"El convencionalismo en el problema de las magnitudes físicas", Actas del Primer Congreso Nacional de Filosofía (Mendoza 1949), Universidad Nacional de Cuyo, Buenos Aires 1950, tomo III, págs. 1559-1564. (Sesiones: VIII. Epistemología y filosofía de la naturaleza.)
"Lógica y Metafísica", conference about causality given at the University of La Plata (date undocumented).
Enseñanza de la Física, Editorial Kapelusz, Buenos Aires, Argentina (1949).
Física relativista, Editorial Kapelusz, Buenos Aires, Argentina, 1955.
Notes and references
1901 births
1962 deaths
Scientists from Montevideo
Uruguayan expatriates in Argentina
Uruguayan expatriates in Germany
Uruguayan physicists
Uruguayan people of German descent
Uruguayan people of Italian descent
Relativity theorists | Enrique Loedel Palumbo | [
"Physics"
] | 797 | [
"Relativity theorists",
"Theory of relativity"
] |
12,666,251 | https://en.wikipedia.org/wiki/Mott%20scattering | In physics, Mott scattering, also referred to as spin-coupling inelastic Coulomb scattering, is the separation of the two spin states of an electron beam by scattering the beam off the Coulomb field of heavy atoms. It is named after Nevill Francis Mott, who first developed the theory. It is mostly used to measure the spin polarization of an electron beam.
In lay terms, Mott scattering is similar to Rutherford scattering but electrons are used instead of alpha particles as they do not interact via the strong interaction (only through weak interaction and electromagnetism), which enable electrons to penetrate the atomic nucleus, giving valuable insight into the nuclear structure.
Description
The electrons are often fired at gold foil because gold has a high atomic number (Z), is non-reactive (does not form an oxide layer), and can be easily made into a thin film (reducing multiple scattering). The presence of a spin-orbit term in the scattering potential introduces a spin dependence in the scattering cross section. Two detectors at exactly the same scattering angle to the left and right of the foil count the numbers of scattered electrons, N_L and N_R. The asymmetry A, given by:

A = (N_L − N_R) / (N_L + N_R),
is proportional to the degree of spin polarization P according to A = SP, where S is the Sherman function.
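A minimal sketch of how the polarization is extracted from the two detector counts; the counts and the Sherman function value below are assumed for illustration only:

```python
# Hypothetical detector counts and an assumed effective Sherman function S.
n_left, n_right = 10500, 9500
sherman_function = 0.25

asymmetry = (n_left - n_right) / (n_left + n_right)   # A = (N_L - N_R)/(N_L + N_R)
polarization = asymmetry / sherman_function           # P = A / S
print(asymmetry, polarization)                        # 0.05, 0.2
```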
The Mott cross section formula is the mathematical description of the scattering of a high energy electron beam from an atomic nucleus-sized positively charged point in space. The Mott scattering is the theoretical diffraction pattern produced by such a mathematical model. It is used as the beginning point in calculations in electron scattering diffraction studies.
The equation for the Mott cross section includes an inelastic scattering term to take into account the recoil of the target proton or nucleus. It also can be corrected for relativistic effects of high energy electrons, and for their magnetic moment.
When an experimentally found diffraction pattern deviates from the mathematically derived Mott scattering, it gives clues as to the size and shape of an atomic nucleus. The reason is that the Mott cross section assumes only point-particle Coulombic and magnetic interactions between the incoming electrons and the target. When the target is a charged sphere rather than a point, additions to the Mott cross section equation (form factor terms) can be used to probe the distribution of the charge inside the sphere.
The Born approximation of the diffraction of a beam of electrons by atomic nuclei is an extension of Mott scattering.
References
Electron beam
Foundational quantum physics
Scattering | Mott scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 513 | [
"Electron",
"Electron beam",
"Foundational quantum physics",
"Quantum mechanics",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics"
] |
3,440,178 | https://en.wikipedia.org/wiki/In-phase%20and%20quadrature%20components | A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are in quadrature phase, i.e., with a phase offset of one-quarter cycle (90 degrees or /2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, which describes their relationships with the amplitude- and phase-modulated carrier.
Or in other words, it is possible to create an arbitrarily phase-shifted sine wave, by mixing together two sine waves that are 90° out of phase in different proportions.
The implication is that the modulations in some signal can be treated separately from the carrier wave of the signal. This has extensive use in many radio and signal processing applications. I/Q data is used to represent the modulations of some carrier, independent of that carrier's frequency.
Orthogonality
In vector analysis, a vector with polar coordinates (A, φ) and Cartesian coordinates (x, y) = (A cos φ, A sin φ) can be represented as the sum of the orthogonal components (x, 0) + (0, y). Similarly in trigonometry, the angle sum identity expresses:

sin(x + φ) = sin(x) cos(φ) + sin(x + π/2) sin(φ).

And in functional analysis, when x is a linear function of some variable, such as time, these components are sinusoids, and they are orthogonal functions. A phase-shift of x → x + π/2 changes the identity to:

cos(x + φ) = cos(x) cos(φ) + cos(x + π/2) sin(φ),

in which case cos(x) cos(φ) is the in-phase component. In both conventions cos(φ) is the in-phase amplitude modulation, which explains why some authors refer to it as the actual in-phase component.
Narrowband signal model
In an angle modulation application, with carrier frequency f, φ is also a time-variant function, giving:

sin(2πft + φ(t)) = sin(2πft) cos(φ(t)) + sin(2πft + π/2) sin(φ(t)).

When all three terms above are multiplied by an optional amplitude function, A(t) > 0, the left-hand side of the equality is known as the amplitude/phase form, and the right-hand side is the quadrature-carrier or IQ form.

Because of the modulation, the components are no longer completely orthogonal functions. But when A(t) and φ(t) are slowly varying functions compared to 2πft, the assumption of orthogonality is a common one.
Authors often call it a narrowband assumption, or a narrowband signal model.
I/Q data
A stream of information about how to amplitude-modulate the I and Q phases of a sine wave is known as the I/Q data. By just amplitude-modulating these two 90°-out-of-phase sine waves and adding them, it is possible to produce the effect of arbitrarily modulating some carrier: amplitude and phase. And if the IQ data itself has some frequency (e.g. a phasor) then the carrier also can be frequency modulated. So I/Q data is a complete representation of how a carrier is modulated: amplitude, phase and frequency.
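A short NumPy sketch of this idea: fixed I and Q amplitudes applied to two quadrature carriers reproduce an arbitrarily phase-shifted sinusoid (the carrier frequency, amplitude and phase below are assumptions for illustration):

```python
import numpy as np

fc = 1000.0                                   # assumed carrier frequency in Hz
t = np.linspace(0.0, 0.01, 1000, endpoint=False)

A, phi = 2.0, np.pi / 3                       # desired amplitude and phase
I, Q = A * np.cos(phi), A * np.sin(phi)       # in-phase and quadrature amplitudes

# Mixing the two quadrature carriers in these proportions...
iq_form = I * np.cos(2 * np.pi * fc * t) - Q * np.sin(2 * np.pi * fc * t)
# ...reproduces the amplitude/phase form of the same sinusoid.
amp_phase_form = A * np.cos(2 * np.pi * fc * t + phi)

print(np.allclose(iq_form, amp_phase_form))   # True
```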
For received signals, by determining how much in-phase carrier and how much quadrature carrier is present in the signal it is possible to represent that signal using in-phase and quadrature components, so IQ data can get generated from a signal with reference to a carrier sine wave.
IQ data has extensive use in many signal processing contexts, including for radio modulation, software-defined radio, audio signal processing and electrical engineering.
I/Q data is a two-dimensional stream. Some sources treat I/Q as a complex number; with the I and Q components corresponding to the real and imaginary parts. Others treat it as distinct pairs of values, as a 2D vector, or as separate streams.
When called "I/Q data" the information is likely digital. However, I/Q may be represented as analog signals. The concepts are applicable to both the analog and digital representations of IQ.
This technique of using I/Q data to represent the modulations of a signal separate to the signal's frequency is known as an equivalent baseband signal. It is sometimes referred to as vector modulation.
The data rate of I/Q is largely independent to the frequency of the signal being modulated. I/Q data can be generated at a relatively slow rate (e.g. millions of bits per second), perhaps generated by software in part of the physical layer of a protocol stack. I/Q data is used to modulate a carrier frequency, which may be faster (e.g. Gigahertz, perhaps an intermediate frequency).
As well as within a transmitter, I/Q data is also a common means to represent the signal from some receiver. Designs such as the Digital down converter allow the input signal to be represented as streams of IQ data, likely for further processing and symbol extraction in a DSP. Analog systems may suffer from issues, such as IQ imbalance.
I/Q data may also be used as a means to capture and store data used in spectrum monitoring. Since I/Q allows the representation of the modulation separate to the actual carrier frequency, it is possible to represent a capture of all the radio traffic in some RF band or section thereof, with a reasonable amount of data, irrespective of the frequency being monitored. E.g. if there is a capture of 100 MHz of Wi-Fi channels within the 5 GHz U-NII band, that IQ capture can be sampled at 200 million samples per second (according to Nyquist) as opposed to the 10,000 million samples per second required to sample directly at 5 GHz.
A vector signal generator will typically use I/Q data alongside some programmed frequency to generate its signal. And similarly a vector signal analyser can provide a stream of I/Q data in its output. Many modulation schemes, e.g. quadrature amplitude modulation rely heavily on I/Q.
Alternating current (AC) circuits
The term alternating current applies to a voltage vs. time function that is sinusoidal with a frequency f. When it is applied to a typical (linear time-invariant) circuit or device, it causes a current that is also sinusoidal. In general there is a constant phase difference φ between any two sinusoids. The input sinusoidal voltage is usually defined to have zero phase, meaning that it is arbitrarily chosen as a convenient time reference. So the phase difference is attributed to the current function, e.g.

sin(2πft + φ),

whose orthogonal components are

sin(2πft) cos(φ)

and

sin(2πft + π/2) sin(φ),

as we have seen. When φ happens to be such that the in-phase component is zero, the current and voltage sinusoids are said to be in quadrature, which means they are orthogonal to each other. In that case, no average (active) electrical power is consumed. Rather power is temporarily stored by the device and given back, once every

1/(2f)
seconds. Note that the term in quadrature only implies that two sinusoids are orthogonal, not that they are components of another sinusoid.
See also
Analytic signal
IQ imbalance
Constellation diagram
Negative frequency
Phasor
Polar modulation
Quadrature amplitude modulation
Single-sideband modulation
Notes
References
Further reading
Steinmetz, Charles Proteus (1917). Theory and Calculations of Electrical Apparatus 6 (1 ed.). New York: McGraw-Hill Book Company. B004G3ZGTM.
External links
I/Q Data for Dummies
Signal processing
Radio electronics | In-phase and quadrature components | [
"Technology",
"Engineering"
] | 1,476 | [
"Radio electronics",
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
3,443,739 | https://en.wikipedia.org/wiki/Dosing | Dosing generally applies to feeding chemicals or medicines when used in small quantities.
For medicines the term dose is generally used. In the case of inanimate objects the word dosing is typical. The term dose titration, referring to stepwise adjustment of doses until a desired level of effect is reached, is common in medicine.
Engineering
The word dosing is very commonly used by engineers in thermal power stations, in water treatment, in any industry where steam is being generated, and in building services for heating and cooling water treatment. Dosing procedures are also in vogue in textile and similar industries where chemical treatment is involved.
Commercial swimming pools also require chemical dosing in order to control pH balance, chlorine level, and other such water quality criteria. Modern swimming pool plant will have bulk storage of chemicals held in separate dosing tanks, and will have automated controls and dosing pumps to top up the various chemicals as required to control the water quality.
In a power station treatment chemicals are injected or fed to boiler and also to feed and make up water under pressure, but in small dosages or rate of injection. The feeding at all places is done by means of small capacity dosing pumps specially designed for the duty demanded.
In building services the water quality of various pumped fluid systems, including for heating, cooling, and condensate water, will be regularly checked and topped up with chemicals manually as required to suit the required water quality. Most commonly inhibitors will be added to protect the pipework and components against corrosion, or a biocide will be added to stop the growth of bacteria in lower temperature systems. The required chemicals will be added to the fluid system by use of a dosing pot; a multi-valved chamber in which the chemical can be added, and then introduced to the fluid system in a controlled manner.
In food industries, the dosing of ingredients is particularly important in order to ensure the quality of the recipe as well as its food safety where the consumers' health may be directly implicated. Dosing is done in wet processes with dosing pumps but even more importantly in dry processes, just prior to the packaging of the product. Dosing of dry materials is commonly done through Gain-in-Weight dosing or Loss-in-Weight dosing using equipment on load cells.
Agriculture
See pesticide application
The feeding of chemicals in agriculture has also become common due to technology developments. However, agricultural dosing is done by means of hand-held pressure spray pumps.
Aerial spraying
Sometimes aerial spraying of chemicals by fixed quantities at intervals or dosing is also adopted for agricultural spraying or for atmospheric spraying for eliminating certain types of harmful insects.
References
Engineering concepts
Agricultural terminology
Pharmacodynamics | Dosing | [
"Chemistry",
"Engineering"
] | 544 | [
"Pharmacology",
"Pharmacodynamics",
"nan"
] |
3,443,916 | https://en.wikipedia.org/wiki/Design%20methods | Design methods are procedures, techniques, aids, or tools for designing. They offer a number of different kinds of activities that a designer might use within an overall design process. Conventional procedures of design, such as drawing, can be regarded as design methods, but since the 1950s new procedures have been developed that are more usually grouped under the name of "design methods". What design methods have in common is that they "are attempts to make public the hitherto private thinking of designers; to externalise the design process".
Design methodology is the broader study of method in design: the study of the principles, practices and procedures of designing.
Background
Design methods originated in new approaches to problem solving developed in the mid-20th Century, and also in response to industrialisation and mass-production, which changed the nature of designing. A "Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications", held in London in 1962 is regarded as a key event marking the beginning of what became known within design studies as the "design methods movement", leading to the founding of the Design Research Society and influencing design education and practice. Leading figures in this movement in the UK were J. Christopher Jones at the University of Manchester and L. Bruce Archer at the Royal College of Art.
The movement developed through further conferences on new design methods in the UK and USA in the 1960s. The first books on rational design methods, and on creative methods also appeared in this period.
New approaches to design were developing at the same time in Germany, notably at the Ulm School of Design (Hochschule für Gestaltung–HfG Ulm) (1953–1968) under the leadership of Tomás Maldonado. Design teaching at Ulm integrated design with science (including social sciences) and introduced new fields of study such as cybernetics, systems theory and semiotics into design education. Bruce Archer also taught at Ulm, and another influential teacher was Horst Rittel. In 1963 Rittel moved to the School of Architecture at the University of California, Berkeley, where he helped found the Design Methods Group, a society focused on developing and promoting new methods especially in architecture and planning.
At the end of the 1960s two influential, but quite different works were published: Herbert A. Simon's The Sciences of the Artificial and J. Christopher Jones's Design Methods. Simon proposed the "science of design" as "a body of intellectually tough, analytic, partly formalizable, partly empirical, teachable doctrine about the design process", whereas Jones catalogued a variety of approaches to design, both rational and creative, within a context of a broad, futures creating, systems view of design.
The 1970s saw some reaction against the rationality of design methods, notably from two of its pioneers, Christopher Alexander and J. Christopher Jones. Fundamental issues were also raised by Rittel, who characterised design and planning problems as wicked problems, un-amenable to the techniques of science and engineering, which deal with "tame" problems. The criticisms turned some in the movement away from rationalised approaches to design problem solving and towards "argumentative", participatory processes in which designers worked in partnership with the problem stakeholders (clients, customers, users, the community). This led to participatory design, user centered design and the role of design thinking as a creative process in problem solving and innovation.
However, interest in systematic and rational design methods continued to develop strongly in engineering design during the 1980s; for example, through the Conference on Engineering Design series of The Design Society and the work of the Verein Deutscher Ingenieure association in Germany, and also in Japan, where the Japanese Society for the Science of Design had been established as early as 1954. Books on systematic engineering design methods were published in Germany and the UK. In the USA the American Society of Mechanical Engineers Design Engineering Division began a stream on design theory and methodology within its annual conferences. The interest in systematic, rational approaches to design has led to design science and design science (methodology) in engineering and computer science.
Methods and processes
The development of design methods has been closely associated with prescriptions for a systematic process of designing. These process models usually comprise a number of phases or stages, beginning with a statement or recognition of a problem or a need for a new design and culminating in a finalised solution proposal. In his 'Systematic Method for Designers' L. Bruce Archer produced a very elaborate, 229 step model of a systematic design process for industrial design, but also a summary model consisting of three phases: Analytical phase (programming and data collection, analysis), Creative phase (synthesis, development), and Executive phase (communication). The UK's Design Council created the Double Diamond (design process model), which breaks the creative design process into four phases: Discover (insight into the problem), Define (the area to focus upon), Develop (potential solutions), and Deliver (solutions that work). A systematic model for engineering design by Pahl and Beitz has phases of Clarification of the task, Conceptual design, Embodiment design, and Detail design. A less prescriptive approach to designing a basic design process for oneself has been outlined by J. Christopher Jones.
In the engineering design process systematic models tend to be linear, in sequential steps, but acknowledging the necessity of iteration. In architectural design, process models tend to be cyclical and spiral, with iteration as essential to progression towards a final design. In industrial and product design, process models tend to comprise a sequence of stages of divergent and convergent thinking. The Dubberly Design Office has compiled examples of more than 80 design process models, but it is not an exhaustive list.
Within these process models, numerous design methods can be applied. In his book of 'Design Methods' J. C. Jones grouped 26 methods according to their purposes within a design process: Methods of exploring design situations (e.g. Stating Objectives, Investigating User Behaviour, Interviewing Users), Methods of searching for ideas (e.g. Brainstorming, Synectics, Morphological Charts), Methods of exploring problem structure (e.g. Interaction Matrix, Functional Innovation, Information Sorting), Methods of evaluation (e.g. Checklists, Ranking and Weighting).
Nigel Cross outlined eight stages in a process of engineering product design, each with an associated method: Identifying Opportunities - User Scenarios; Clarifying Objectives - Objectives Tree; Establishing Functions - Function Analysis; Setting Requirements - Performance Specification; Determining Characteristics - Quality Function Deployment; Generating Alternatives - Morphological Chart; Evaluating Alternatives - Weighted Objectives; Improving Details - Value Engineering.
Many design methods still currently in use originated in the design methods movement of the 1960s and 70s, adapted to modern design practices. Recent developments have seen the introduction of more qualitative techniques, including ethnographic methods such as cultural probes and situated methods.
Emergence of design research and design studies
The design methods movement had a profound influence on the development of academic interest in design and designing and the emergence of design research and design studies. Arising directly from the 1962 Conference on Design Methods, the Design Research Society (DRS) was founded in the UK in 1966. The purpose of the Society is to promote "the study of and research into the process of designing in all its many fields" and is an interdisciplinary group with many professions represented.
In the USA, a similar Design Methods Group (DMG) was also established in 1966 by Horst Rittel and others at the University of California, Berkeley. The DMG held a conference at MIT in 1968 with a focus on environmental design and planning, and that led to the foundation of the Environmental Design Research Association (EDRA), which held its first conference in 1969. A group interested in design methods and theory in architecture and engineering formed at MIT in the early 1980s, including Donald Schön, who was studying the working practices of architects, engineers and other professionals and developing his theory of reflective practice. In 1984 the National Science Foundation created a Design Theory and Methodology Program to promote methods and process research in engineering design.
Meanwhile, in Europe, Vladimir Hubka established the Workshop Design-Konstruktion (WDK), which led to a series of International Conferences on Engineering Design (ICED) beginning in 1981 and later became the Design Society.
Academic research journals in design also began publication. DRS initiated Design Studies in 1979, Design Issues appeared in 1984, and Research in Engineering Design in 1989.
Influence on all professional design practice
Several pioneers of design methods developed their work in association with industry. The Ulm school established a significant partnership with the German consumer products company Braun through their designer Dieter Rams. J. Christopher Jones began his approach to systematic design as an ergonomist at the electrical engineering company AEI. L. Bruce Archer developed his systematic approach in projects for medical equipment for the UK National Health Service.
In the USA, designer Henry Dreyfuss had a profound impact on the practice of industrial design by developing systematic processes and promoting the use of anthropometrics, ergonomics and human factors in design, including through his 1955 book 'Designing for People'. Another successful designer, Jay Doblin, was also influential on the theory and practice of design as a systematic process.
Much of current design practice has been influenced and guided by design methods. For example, the influential IDEO consultancy uses design methods extensively in its 'Design Kit' and 'Method Cards'. Increasingly, the intersections of design methods with business and government through the application of design thinking have been championed by numerous consultancies within the design profession. Wide influence has also come through Christopher Alexander's pattern language method, originally developed for architectural and urban design, which has been adopted in software design, interaction design, pedagogical design and other domains.
See also
Design management
Design rationale
Design research
Design science
Design theory
Design thinking
References
Other sources (not cited above)
Ko, A. J. Design Methods. https://faculty.washington.edu/ajko/books/design-methods/index.html
Koberg, D. and J. Bagnall. (1972) The Universal Traveler: A Soft-Systems Guide to Creativity, Problem-Solving, and the Process of Design. Los Altos, CA: Kaufmann. 2nd edition (1981): The All New Universal Traveler: A Soft-Systems Guide to Creativity, Problem-Solving, and the Process of Reaching Goals.
Krippendorff, K. (2006). The Semantic Turn: A New Foundation for Design. Taylor & Francis, CRC Press, USA.
Plowright, P. (2014) Revealing Architectural Design: Methods, Frameworks and Tools. Routledge, UK.
Protzen, J-P. and D. J. Harris. (2010) The Universe of Design: Horst Rittel's Theories of Design and Planning. Routledge.
Pugh, S. (1991), Total Design: Integrated Methods for Successful Product Engineering. Addison-Wesley, UK.
Roozenburg, N. and J. Eekels. (1991) Product Design: Fundamentals and Methods. Wiley, UK.
Ulrich, K. and S. Eppinger. (2011) Product Design and Development. McGraw Hill, USA.
External links
Introductory Lecture on Design Methods by Rhodes Hileman
Abstract: Design Methods
Rethinking Wicked Problems: Unpacking Paradigms, Bridging Universes, Part 1 of 2. J. Conklin, M. Basadur, GK VanPatter; NextDesign Leadership Institute Journal, 2007
Rethinking Wicked Problems: Unpacking Paradigms, Bridging Universes, Part 2 of 2. J. Conklin, M. Basadur, GK VanPatter; NextDesign Leadership Institute Journal, 2007
Double Consciousness: Back to the Future with John Chris Jones. GK VanPatter, John Chris Jones; NextDesign Leadership Institute Journal, 2006
Design studies
Industrial design | Design methods | [
"Engineering"
] | 2,450 | [
"Industrial design",
"Design engineering",
"Design",
"Design studies"
] |
3,445,204 | https://en.wikipedia.org/wiki/St%C3%B8rmer%27s%20theorem | In number theory, Størmer's theorem, named after Carl Størmer, gives a finite bound on the number of consecutive pairs of smooth numbers that exist, for a given degree of smoothness, and provides a method for finding all such pairs using Pell equations. It follows from the Thue–Siegel–Roth theorem that there are only a finite number of pairs of this type, but Størmer gave a procedure for finding them all.
Statement
If one chooses a finite set P of prime numbers then the P-smooth numbers are defined as the set of integers that can be generated by products of numbers in P. Then Størmer's theorem states that, for every choice of P, there are only finitely many pairs of consecutive P-smooth numbers. Further, it gives a method of finding them all using Pell equations.
The procedure
Størmer's original procedure involves solving a large set of Pell equations, in each one finding only the smallest solution. A simplified version of the procedure, due to D. H. Lehmer, is described below; it solves fewer equations but finds more solutions in each equation.
Let P be the given set of primes, and define a number to be P-smooth if all its prime factors belong to P. Assume 2 ∈ P; otherwise there could be no consecutive P-smooth numbers, because all P-smooth numbers would be odd. Lehmer's method involves solving the Pell equation
x² − 2qy² = 1
for each P-smooth square-free number q other than 2. Each such number q is generated as a product of a subset of P, so there are 2^k − 1 Pell equations to solve (where k is the number of primes in P). For each such equation, let (x_i, y_i) be the generated solutions, for i in the range from 1 to max(3, (p_k + 1)/2) (inclusive), where p_k is the largest of the primes in P.
Then, as Lehmer shows, all consecutive pairs of P-smooth numbers are of the form (x_i − 1)/2, (x_i + 1)/2. Thus one can find all such pairs by testing the numbers of this form for P-smoothness.
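A minimal Python sketch of this procedure is given below. It is an illustration rather than an implementation from the source: the fundamental Pell solution is found by brute force, which is adequate only for very small prime sets, and all helper names are made up.

```python
from itertools import combinations
from math import isqrt

def is_smooth(n, primes):
    """Return True when every prime factor of n lies in `primes`."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def fundamental_pell(d, y_limit=10**6):
    """Brute-force the smallest solution (x, y), y >= 1, of x^2 - d*y^2 = 1."""
    for y in range(1, y_limit):
        x2 = d * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            return x, y
    raise ValueError("fundamental solution not found below y_limit")

def pell_solutions(d, count):
    """First `count` solutions of x^2 - d*y^2 = 1, generated from the
    fundamental solution by (x, y) -> (x1*x + d*y1*y, x1*y + y1*x)."""
    x1, y1 = fundamental_pell(d)
    x, y = x1, y1
    out = []
    for _ in range(count):
        out.append((x, y))
        x, y = x1 * x + d * y1 * y, x1 * y + y1 * x
    return out

def consecutive_smooth_pairs(primes):
    """Lehmer's procedure for consecutive P-smooth pairs (assumes 2 is in `primes`)."""
    count = max(3, (max(primes) + 1) // 2)
    # q ranges over the P-smooth square-free numbers other than 2
    qs = set()
    for r in range(len(primes) + 1):
        for combo in combinations(primes, r):
            q = 1
            for p in combo:
                q *= p
            if q != 2:
                qs.add(q)
    pairs = set()
    for q in sorted(qs):
        for x, _ in pell_solutions(2 * q, count):
            m = (x - 1) // 2
            if is_smooth(m, primes) and is_smooth(m + 1, primes):
                pairs.add((m, m + 1))
    return sorted(pairs)

print(consecutive_smooth_pairs([2, 3, 5]))
# [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (8, 9), (9, 10), (15, 16), (24, 25), (80, 81)]
```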
Lehmer's paper furthermore shows that applying a similar procedure to the equation
where q ranges over all P-smooth square-free numbers other than 1, yields those pairs of P-smooth numbers separated by 2: the smooth pairs are then x − 1 and x + 1, where x is one of the first solutions of that equation.
Example
To find the ten consecutive pairs of {2,3,5}-smooth numbers (in music theory, giving the superparticular ratios for just tuning) let P = {2,3,5}. There are seven P-smooth squarefree numbers q (omitting the eighth P-smooth squarefree number, 2): 1, 3, 5, 6, 10, 15, and 30, each of which leads to a Pell equation. The number of solutions per Pell equation required by Lehmer's method is max(3, (5 + 1)/2) = 3, so this method generates three solutions to each Pell equation, as follows.
For q = 1, the first three solutions to the Pell equation x² − 2y² = 1 are (3,2), (17,12), and (99,70). Thus, for each of the three values x_i = 3, 17, and 99, Lehmer's method tests the pair (x_i − 1)/2, (x_i + 1)/2 for smoothness; the three pairs to be tested are (1,2), (8,9), and (49,50). Both (1,2) and (8,9) are pairs of consecutive P-smooth numbers, but (49,50) is not, as 49 has 7 as a prime factor.
For q = 3, the first three solutions to the Pell equation x² − 6y² = 1 are (5,2), (49,20), and (485,198). From the three values x_i = 5, 49, and 485 Lehmer's method forms the three candidate pairs of consecutive numbers (x_i − 1)/2, (x_i + 1)/2: (2,3), (24,25), and (242,243). Of these, (2,3) and (24,25) are pairs of consecutive P-smooth numbers but (242,243) is not.
For q = 5, the first three solutions to the Pell equation x² − 10y² = 1 are (19,6), (721,228), and (27379,8658). The Pell solution (19,6) leads to the pair of consecutive P-smooth numbers (9,10); the other two solutions to the Pell equation do not lead to P-smooth pairs.
For q = 6, the first three solutions to the Pell equation x² − 12y² = 1 are (7,2), (97,28), and (1351,390). The Pell solution (7,2) leads to the pair of consecutive P-smooth numbers (3,4).
For q = 10, the first three solutions to the Pell equation x² − 20y² = 1 are (9,2), (161,36), and (2889,646). The Pell solution (9,2) leads to the pair of consecutive P-smooth numbers (4,5) and the Pell solution (161,36) leads to the pair of consecutive P-smooth numbers (80,81).
For q = 15, the first three solutions to the Pell equation x² − 30y² = 1 are (11,2), (241,44), and (5291,966). The Pell solution (11,2) leads to the pair of consecutive P-smooth numbers (5,6).
For q = 30, the first three solutions to the Pell equation x² − 60y² = 1 are (31,4), (1921,248), and (119071,15372). The Pell solution (31,4) leads to the pair of consecutive P-smooth numbers (15,16).
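For this small case the result can also be cross-checked by a direct brute-force search. The sketch below uses an arbitrary bound of 10^5; only Størmer's theorem itself guarantees that no further pairs exist beyond any finite bound.

```python
def is_smooth(n, primes=(2, 3, 5)):
    """True when every prime factor of n lies in `primes`."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

pairs = [(m, m + 1) for m in range(1, 10**5) if is_smooth(m) and is_smooth(m + 1)]
print(pairs)
# [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (8, 9), (9, 10), (15, 16), (24, 25), (80, 81)]
```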
Number and size of solutions
Størmer's original result can be used to show that the number of consecutive pairs of integers that are smooth with respect to a set of k primes is at most 3^k − 2^k. Lehmer's result produces a tighter bound for sets of small primes: (2^k − 1) × max(3, (p_k + 1)/2).
The number of consecutive pairs of integers that are smooth with respect to the first k primes are
1, 4, 10, 23, 40, 68, 108, 167, 241, 345, ... .
The largest integer from all these pairs, for each k, is
2, 9, 81, 4375, 9801, 123201, 336141, 11859211, ... .
OEIS also lists the number of pairs of this type where the larger of the two integers in the pair is square or triangular, as both types of pair arise frequently.
The size of the solutions can also be bounded: in the case where and are required to be -smooth, then
where and is the product of all elements of , and in the case where the smooth pair is , we have
Generalizations and applications
Louis Mordell wrote about this result, saying that it "is very pretty, and there are many applications of it."
In mathematics
used Størmer's method to prove Catalan's conjecture on the nonexistence of consecutive perfect powers (other than 8,9) in the case where one of the two powers is a square.
proved that every number x⁴ + 1, for x > 3, has a prime factor greater than or equal to 137. Størmer's theorem is an important part of his proof, in which he reduces the problem to the solution of 128 Pell equations.
Several authors have extended Størmer's work by providing methods for listing the solutions to more general diophantine equations, or by providing more general divisibility criteria for the solutions to Pell equations.
describe a computational procedure that, empirically, finds many but not all of the consecutive pairs of smooth numbers described by Størmer's theorem, and is much faster
than using Pell's equation to find all solutions.
In music theory
In the musical practice of just intonation, musical intervals can be described as ratios between positive integers. More specifically, they can be described as ratios between members of the harmonic series. Any musical tone can be broken into its fundamental frequency and harmonic frequencies, which are integer multiples of the fundamental. This series is conjectured to be the basis of natural harmony and melody. The tonal complexity of ratios between these harmonics is said to get more complex with higher prime factors. To limit this tonal complexity, an interval is said to be n-limit when both its numerator and denominator are n-smooth. Furthermore, superparticular ratios are very important in just tuning theory as they represent ratios between adjacent members of the harmonic series.
Størmer's theorem allows all possible superparticular ratios in a given limit to be found. For example, in the 3-limit (Pythagorean tuning), the only possible superparticular ratios are 2/1 (the octave), 3/2 (the perfect fifth), 4/3 (the perfect fourth), and 9/8 (the whole step). That is, the only pairs of consecutive integers that have only powers of two and three in their prime factorizations are (1,2), (2,3), (3,4), and (8,9). If this is extended to the 5-limit, six additional superparticular ratios are available: 5/4 (the major third), 6/5 (the minor third), 10/9 (the minor tone), 16/15 (the minor second), 25/24 (the minor semitone), and 81/80 (the syntonic comma). All are musically meaningful.
Notes
References
Mathematics of music
Theorems in number theory | Størmer's theorem | [
"Mathematics"
] | 2,131 | [
"Mathematics of music",
"Mathematical theorems",
"Applied mathematics",
"Theorems in number theory",
"Mathematical problems",
"Number theory"
] |
3,446,185 | https://en.wikipedia.org/wiki/Protection%20ring | In computer science, hierarchical protection domains, often called protection rings, are mechanisms to protect data and functionality from faults (by improving fault tolerance) and malicious behavior (by providing computer security).
Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally hardware-enforced by some CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware such as certain CPU functionality (e.g. the control registers) and I/O controllers.
Special mechanisms are provided to allow an outer ring to access an inner ring's resources in a predefined manner, as opposed to allowing arbitrary usage. Correctly gating access between rings can improve security by preventing programs from one ring or privilege level from misusing resources intended for programs in another. For example, spyware running as a user program in Ring 3 should be prevented from turning on a web camera without informing the user, since hardware access should be a Ring 1 function reserved for device drivers. Programs such as web browsers running in higher numbered rings must request access to the network, a resource restricted to a lower numbered ring.
X86S, a canceled Intel architecture published in 2024, has only ring 0 and ring 3. Ring 1 and 2 were to be removed under X86S since modern OSes never utilize them.
Implementations
Multiple rings of protection were among the most revolutionary concepts introduced by the Multics operating system, a highly secure predecessor of today's Unix family of operating systems. The GE 645 mainframe computer did have some hardware access control, including the same two modes that the other GE-600 series machines had, and segment-level permissions in its memory management unit ("Appending Unit"), but that was not sufficient to provide full support for rings in hardware, so Multics supported them by trapping ring transitions in software. Its successor, the Honeywell 6180, implemented them in hardware, with support for eight rings. Protection rings in Multics were separate from CPU modes; code in all rings other than ring 0, and some ring 0 code, ran in slave mode.
However, most general-purpose systems use only two rings, even if the hardware they run on provides more CPU modes than that. For example, Windows 7 and Windows Server 2008 (and their predecessors) use only two rings, with ring 0 corresponding to kernel mode and ring 3 to user mode, because earlier versions of Windows NT ran on processors that supported only two protection levels.
Many modern CPU architectures (including the popular Intel x86 architecture) include some form of ring protection, although the Windows NT operating system, like Unix, does not fully utilize this feature. OS/2 does, to some extent, use three rings: ring 0 for kernel code and device drivers, ring 2 for privileged code (user programs with I/O access permissions), and ring 3 for unprivileged code (nearly all user programs). Under DOS, the kernel, drivers and applications typically run on ring 3 (however, this is exclusive to the case where protected-mode drivers or DOS extenders are used; as a real-mode OS, the system runs with effectively no protection), whereas 386 memory managers such as EMM386 run at ring 0. In addition to this, DR-DOS' EMM386 3.xx can optionally run some modules (such as DPMS) on ring 1 instead. OpenVMS uses four modes called (in order of decreasing privileges) Kernel, Executive, Supervisor and User.
A renewed interest in this design structure came with the proliferation of the Xen VMM software, ongoing discussion on monolithic vs. micro-kernels (particularly in Usenet newsgroups and Web forums), Microsoft's Ring-1 design structure as part of their NGSCB initiative, and hypervisors based on x86 virtualization such as Intel VT-x (formerly Vanderpool).
The original Multics system had eight rings, but many modern systems have fewer. The hardware remains aware of the current ring of the executing instruction thread at all times, with the help of a special machine register. In some systems, areas of virtual memory are instead assigned ring numbers in hardware. One example is the Data General Eclipse MV/8000, in which the top three bits of the program counter (PC) served as the ring register. Thus code executing with the virtual PC set to 0xE200000, for example, would automatically be in ring 7, and calling a subroutine in a different section of memory would automatically cause a ring transfer.
The hardware severely restricts the ways in which control can be passed from one ring to another, and also enforces restrictions on the types of memory access that can be performed across rings. Using x86 as an example, there is a special gate structure which is referenced by the call instruction that transfers control in a secure way towards predefined entry points in lower-level (more trusted) rings; this functions as a supervisor call in many operating systems that use the ring architecture. The hardware restrictions are designed to limit opportunities for accidental or malicious breaches of security. In addition, the most privileged ring may be given special capabilities (such as real memory addressing that bypasses the virtual memory hardware).
ARM version 7 architecture implements three privilege levels: application (PL0), operating system (PL1), and hypervisor (PL2). Unusually, level 0 (PL0) is the least-privileged level, while level 2 is the most-privileged level. ARM version 8 implements four exception levels: application (EL0), operating system (EL1), hypervisor (EL2), and secure monitor / firmware (EL3), for AArch64 and AArch32.
Ring protection can be combined with processor modes (master/kernel/privileged/supervisor mode versus slave/unprivileged/user mode) in some systems. Operating systems running on hardware supporting both may use both forms of protection or only one.
Effective use of ring architecture requires close cooperation between hardware and the operating system. Operating systems designed to work on multiple hardware platforms may make only limited use of rings if they are not present on every supported platform. Often the security model is simplified to "kernel" and "user" even if hardware provides finer granularity through rings.
Modes
Supervisor mode
In computer terms, supervisor mode is a hardware-mediated flag that can be changed by code running in system-level software. System-level tasks or threads may have this flag set while they are running, whereas user-level applications will not. This flag determines whether it would be possible to execute machine code operations such as modifying registers for various descriptor tables, or performing operations such as disabling interrupts. The idea of having two different modes to operate in comes from "with more power comes more responsibility": a program in supervisor mode is trusted never to fail, since a failure may cause the whole computer system to crash.
Supervisor mode is "an execution mode on some processors which enables execution of all instructions, including privileged instructions. It may also give access to a different address space, to memory management hardware and to other peripherals. This is the mode in which the operating system usually runs."
In a monolithic kernel, the operating system runs in supervisor mode and the applications run in user mode. Other types of operating systems, like those with an exokernel or microkernel, do not necessarily share this behavior.
Some examples from the PC world:
Linux, macOS and Windows are three operating systems that use supervisor/user mode. To perform specialized functions, user mode code must perform a system call into supervisor mode or even to the kernel space where trusted code of the operating system will perform the needed task and return the execution back to the userspace. Additional code can be added into kernel space through the use of loadable kernel modules, but only by a user with the requisite permissions, as this code is not subject to the access control and safety limitations of user mode.
DOS (for as long as no 386 memory manager such as EMM386 is loaded), as well as other simple operating systems and many embedded devices run in supervisor mode permanently, meaning that drivers can be written directly as user programs.
Most processors have at least two different modes. The x86 processors have four different modes divided into four different rings. Programs that run in Ring 0 can do anything with the system, and code that runs in Ring 3 should be able to fail at any time without impact to the rest of the computer system. Ring 1 and Ring 2 are rarely used, but could be configured with different levels of access.
In most existing systems, switching from user mode to kernel mode has an associated high cost in performance. It has been measured, on the basic request getpid, to cost 1000–1500 cycles on most machines. Of these just around 100 are for the actual switch (70 from user to kernel space, and 40 back), the rest is "kernel overhead". In the L3 microkernel, the minimization of this overhead reduced the overall cost to around 150 cycles.
Maurice Wilkes wrote: ... it eventually became clear that the hierarchical protection that rings provided did not closely match the requirements of the system programmer and gave little or no improvement on the simple system of having two modes only. Rings of protection lent themselves to efficient implementation in hardware, but there was little else to be said for them. [...] The attractiveness of fine-grained protection remained, even after it was seen that rings of protection did not provide the answer... This again proved a blind alley...
To gain performance and determinism, some systems place functions that would likely be viewed as application logic, rather than as device drivers, in kernel mode; security applications (access control, firewalls, etc.) and operating system monitors are cited as examples. At least one embedded database management system, eXtremeDB Kernel Mode, has been developed specifically for kernel mode deployment, to provide a local database for kernel-based application functions, and to eliminate the context switches that would otherwise occur when kernel functions interact with a database system running in user mode.
Functions are also sometimes moved across rings in the other direction. The Linux kernel, for instance, injects into processes a vDSO section which contains functions that would normally require a system call, i.e. a ring transition. Instead of doing a syscall these functions use static data provided by the kernel. This avoids the need for a ring transition and so is more lightweight than a syscall. The function gettimeofday can be provided this way.
Hypervisor mode
Recent CPUs from Intel and AMD offer x86 virtualization instructions for a hypervisor to control Ring 0 hardware access. Although they are mutually incompatible, both Intel VT-x (codenamed "Vanderpool") and AMD-V (codenamed "Pacifica") allow a guest operating system to run Ring 0 operations natively without affecting other guests or the host OS.
Before hardware-assisted virtualization, guest operating systems ran under ring 1. Any attempt that requires a higher privilege level to perform (ring 0) will produce an interrupt and then be handled using software; this is called "Trap and Emulate".
To assist virtualization and reduce the overhead caused by the reason above, VT-x and AMD-V allow the guest to run under Ring 0. VT-x introduces VMX Root/Non-root Operation: the hypervisor runs in VMX Root Operation mode, possessing the highest privilege. The guest OS runs in VMX Non-Root Operation mode, which allows it to operate at ring 0 without having actual hardware privileges. VMX non-root operation and VMX transitions are controlled by a data structure called a virtual-machine control structure (VMCS). These hardware extensions allow classical "Trap and Emulate" virtualization to be performed on the x86 architecture, but now with hardware support.
Privilege level
A privilege level in the x86 instruction set controls the access of the program currently running on the processor to resources such as memory regions, I/O ports, and special instructions. There are 4 privilege levels ranging from 0 which is the most privileged, to 3 which is least privileged. Most modern operating systems use level 0 for the kernel/executive, and use level 3 for application programs. Any resource available to level n is also available to levels 0 to n, so the privilege levels are rings. When a lesser privileged process tries to access a higher privileged process, a general protection fault exception is reported to the OS.
It is not necessary to use all four privilege levels. Current operating systems with wide market share including Microsoft Windows, macOS, Linux, iOS and Android mostly use a paging mechanism with only one bit to specify the privilege level as either Supervisor or User (U/S Bit). Windows NT uses the two-level system.
Real mode programs in 8086 are executed at level 0 (highest privilege level), whereas virtual 8086 mode executes all programs at level 3.
Potential future uses for the multiple privilege levels supported by the x86 ISA family include containerization and virtual machines. A host operating system kernel could use instructions with full privilege access (kernel mode), whereas applications running on the guest OS in a virtual machine or container could use the lowest level of privileges in user mode. The virtual machine and guest OS kernel could themselves use an intermediate level of instruction privilege to invoke and virtualize kernel-mode operations such as system calls from the point of view of the guest operating system.
IOPL
The IOPL (I/O Privilege level) flag is a flag found on all IA-32 compatible x86 CPUs. It occupies bits 12 and 13 in the FLAGS register. In protected mode and long mode, it shows the I/O privilege level of the current program or task. The Current Privilege Level (CPL) (CPL0, CPL1, CPL2, CPL3) of the task or program must be less than or equal to the IOPL in order for the task or program to access I/O ports.
The IOPL can be changed using POPF(D) and IRET(D) only when the current privilege level is Ring 0.
Besides IOPL, the I/O Port Permissions in the TSS also take part in determining the ability of a task to access an I/O port.
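As a small illustration of the flag layout described above (a sketch only; the bit positions are those given in the text, and the example EFLAGS value is made up):

```python
def iopl(eflags: int) -> int:
    """Extract the 2-bit I/O privilege level from bits 12-13 of an (E)FLAGS value."""
    return (eflags >> 12) & 0b11

def can_access_io_ports(cpl: int, eflags: int) -> bool:
    """A task may use I/O instructions directly when CPL <= IOPL
    (ignoring the TSS I/O permission bitmap, which can grant access to extra ports)."""
    return cpl <= iopl(eflags)

example_eflags = 0x3202                         # hypothetical value with IOPL = 3
print(iopl(example_eflags))                     # 3
print(can_access_io_ports(0, example_eflags))   # True: ring 0 always qualifies
print(can_access_io_ports(3, example_eflags))   # True: IOPL = 3 permits ring 3 I/O
```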
Miscellaneous
In x86 systems, the x86 hardware virtualization (VT-x and SVM) is referred to as "ring −1", the System Management Mode is referred to as "ring −2", and the Intel Management Engine and AMD Platform Security Processor are sometimes referred to as "ring −3".
Use of hardware features
Many CPU hardware architectures provide far more flexibility than is exploited by the operating systems that they normally run. Proper use of complex CPU modes requires very close cooperation between the operating system and the CPU, and thus tends to tie the OS to the CPU architecture. When the OS and the CPU are specifically designed for each other, this is not a problem (although some hardware features may still be left unexploited), but when the OS is designed to be compatible with multiple, different CPU architectures, a large part of the CPU mode features may be ignored by the OS. For example, the reason Windows uses only two levels (ring 0 and ring 3) is that some hardware architectures that were supported in the past (such as PowerPC or MIPS) implemented only two privilege levels.
Multics was an operating system designed specifically for a special CPU architecture (which in turn was designed specifically for Multics), and it took full advantage of the CPU modes available to it. However, it was an exception to the rule. Today, this high degree of interoperation between the OS and the hardware is not often cost-effective, despite the potential advantages for security and stability.
Ultimately, the purpose of distinct operating modes for the CPU is to provide hardware protection against accidental or deliberate corruption of the system environment (and corresponding breaches of system security) by software. Only "trusted" portions of system software are allowed to execute in the unrestricted environment of kernel mode, and then, in paradigmatic designs, only when absolutely necessary. All other software executes in one or more user modes. If a processor generates a fault or exception condition in a user mode, in most cases system stability is unaffected; if a processor generates a fault or exception condition in kernel mode, most operating systems will halt the system with an unrecoverable error. When a hierarchy of modes exists (ring-based security), faults and exceptions at one privilege level may destabilize only the higher-numbered privilege levels. Thus, a fault in Ring 0 (the kernel mode with the highest privilege) will crash the entire system, but a fault in Ring 2 will only affect Rings 3 and beyond and Ring 2 itself, at most.
Transitions between modes are at the discretion of the executing thread when the transition is from a level of high privilege to one of low privilege (as from kernel to user modes), but transitions from lower to higher levels of privilege can take place only through secure, hardware-controlled "gates" that are traversed by executing special instructions or when external interrupts are received.
Microkernel operating systems attempt to minimize the amount of code running in privileged mode, for purposes of security and elegance, but ultimately sacrificing performance.
See also
Call gate (Intel)
Memory segmentation
Protected mode available on x86-compatible 80286 CPUs and newer
IOPL (CONFIG.SYS directive) – an OS/2 directive to run DLL code at ring 2 instead of at ring 3
Segment descriptor
Supervisor Call instruction
System Management Mode (SMM)
Principle of least privilege
Notes
References
Intel 80386 Programmer's Reference
Further reading
Central processing unit
Computer security models
Operating system technology | Protection ring | [
"Engineering"
] | 3,745 | [
"Cybersecurity engineering",
"Computer security models"
] |
1,202,098 | https://en.wikipedia.org/wiki/Voigt%20profile | The Voigt profile (named after Woldemar Voigt) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution and a Gaussian distribution. It is often used in analyzing data from spectroscopy or diffraction.
Definition
Without loss of generality, we can consider only centered profiles, which peak at zero. The Voigt profile is then the convolution
V(x; σ, γ) = ∫ G(x′; σ) L(x − x′; γ) dx′ (the integral taken over all real x′),
where x is the shift from the line center, G(x; σ) is the centered Gaussian profile:
G(x; σ) = exp(−x² / (2σ²)) / (σ√(2π)),
and L(x; γ) is the centered Lorentzian profile:
L(x; γ) = γ / (π(x² + γ²)).
The defining integral can be evaluated as:
V(x; σ, γ) = Re[w(z)] / (σ√(2π)),
where Re[w(z)] is the real part of the Faddeeva function evaluated for
z = (x + iγ) / (σ√2).
In the limiting cases of σ = 0 and γ = 0, V(x; σ, γ) simplifies to L(x; γ) and G(x; σ), respectively.
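The Faddeeva-function form translates directly into a few lines of Python. This sketch uses scipy.special.wofz; recent SciPy releases also ship scipy.special.voigt_profile, which should agree numerically.

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Centered Voigt profile V(x; sigma, gamma) = Re[w(z)] / (sigma * sqrt(2*pi)),
    with z = (x + i*gamma) / (sigma * sqrt(2))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt(x, sigma=1.0, gamma=0.5))

# Sanity checks against the limiting cases:
print(np.allclose(voigt(x, 1.0, 1e-12),
                  np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi), atol=1e-8))  # ~Gaussian
print(np.allclose(voigt(x, 1e-6, 0.5),
                  0.5 / (np.pi * (x**2 + 0.25)), atol=1e-4))               # ~Lorentzian
```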
History and applications
In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms, one of which alone would produce a Gaussian profile (usually, as a result of the Doppler broadening), and the other would produce a Lorentzian profile. Voigt profiles are common in many branches of spectroscopy and diffraction. Due to the expense of computing the Faddeeva function, the Voigt profile is sometimes approximated using a pseudo-Voigt profile.
Properties
The Voigt profile is normalized:
since it is a convolution of normalized profiles. The Lorentzian profile has no moments (other than the zeroth), and so the moment-generating function for the Cauchy distribution is not defined. It follows that the Voigt profile will not have a moment-generating function either, but the characteristic function for the Cauchy distribution is well defined, as is the characteristic function for the normal distribution. The characteristic function for the (centered) Voigt profile will then be the product of the two:
φ(t) = exp(−σ²t²/2 − γ|t|).
Since normal distributions and Cauchy distributions are stable distributions, they are each closed under convolution (up to change of scale), and it follows that the Voigt distributions are also closed under convolution.
Cumulative distribution function
Using the above definition for z , the cumulative distribution function (CDF) can be found as follows:
Substituting the definition of the Faddeeva function (scaled complex error function) yields for the indefinite integral:
which may be solved to yield
where is a hypergeometric function. In order for the function to approach zero as x approaches negative infinity (as the CDF must do), an integration constant of 1/2 must be added. This gives for the CDF of Voigt:
The uncentered Voigt profile
If the Gaussian profile is centered at and the Lorentzian profile is centered at , the convolution is centered at and the characteristic function is:
The probability density function is simply offset from the centered profile by :
where:
The mode and median are both located at .
Derivatives
Using the definition above for and , the first and second derivatives can be expressed in terms of the Faddeeva function as
and
respectively.
Often, one or multiple Voigt profiles and/or their respective derivatives need to be fitted to a measured signal by means of non-linear least squares, e.g., in spectroscopy. Then, further partial derivatives can be utilised to accelerate computations. Instead of approximating the Jacobian matrix with respect to the parameters , , and with the aid of finite differences, the corresponding analytical expressions can be applied. With and , these are given by:
for the original Voigt profile;
for the first order partial derivative ; and
for the second order partial derivative . Since and play a relatively similar role in the calculation of , their respective partial derivatives also look quite similar in terms of their structure, although they result in totally different derivative profiles. Indeed, the partial derivatives with respect to and show more similarity since both are width parameters. All these derivatives involve only simple operations (multiplications and additions) because the computationally expensive and are readily obtained when computing . Such a reuse of previous calculations allows for a derivation at minimum costs. This is not the case for finite difference gradient approximation as it requires the evaluation of for each gradient respectively.
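As one concrete illustration (not the source's notation): the first derivative with respect to x can be written via the Faddeeva-function identity w′(z) = −2 z w(z) + 2i/√π and checked against a finite difference. The sketch below re-derives the profile itself so that it is self-contained.

```python
import numpy as np
from scipy.special import wofz

SQRT2, SQRT2PI = np.sqrt(2.0), np.sqrt(2.0 * np.pi)

def voigt(x, sigma, gamma):
    z = (x + 1j * gamma) / (sigma * SQRT2)
    return np.real(wofz(z)) / (sigma * SQRT2PI)

def dvoigt_dx(x, sigma, gamma):
    """Analytic dV/dx using w'(z) = -2*z*w(z) + 2i/sqrt(pi) and dz/dx = 1/(sigma*sqrt(2))."""
    z = (x + 1j * gamma) / (sigma * SQRT2)
    dw = -2.0 * z * wofz(z) + 2.0j / np.sqrt(np.pi)
    return np.real(dw) / (sigma * SQRT2PI) / (sigma * SQRT2)

x = np.linspace(-3.0, 3.0, 7)
h = 1e-6
numeric = (voigt(x + h, 1.0, 0.5) - voigt(x - h, 1.0, 0.5)) / (2.0 * h)
print(np.allclose(dvoigt_dx(x, 1.0, 0.5), numeric, atol=1e-6))  # True
```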
Voigt functions
The Voigt functions U, V, and H (sometimes called the line broadening function) are defined by
where
erfc is the complementary error function, and w(z) is the Faddeeva function.
Relation to Voigt profile
with Gaussian sigma relative variables
and
Numeric approximations
Tepper-García Function
The Tepper-García function, named after German-Mexican Astrophysicist Thor Tepper-García, is a combination of an exponential function and rational functions that approximates the line broadening function over a wide range of its parameters.
It is obtained from a truncated power series expansion of the exact line broadening function.
In its most computationally efficient form, the Tepper-García function can be expressed as
where , , and .
Thus the line broadening function can be viewed, to first order, as a pure Gaussian function plus a correction factor that depends linearly on the microscopic properties of the absorbing medium (encoded in ); however, as a result of the early truncation in the series expansion, the error in the approximation is still of order , i.e. . This approximation has a relative accuracy of
over the full wavelength range of , provided that .
In addition to its high accuracy, the function is easy to implement as well as computationally fast. It is widely used in the field of quasar absorption line analysis.
Pseudo-Voigt approximation
The pseudo-Voigt profile (or pseudo-Voigt function) is an approximation of the Voigt profile V(x) using a linear combination of a Gaussian curve G(x) and a Lorentzian curve L(x) instead of their convolution.
The pseudo-Voigt function is often used for calculations of experimental spectral line shapes.
The mathematical definition of the normalized pseudo-Voigt profile is given by
V_p(x; f, η) = η · L(x; f) + (1 − η) · G(x; f),
with 0 ≤ η ≤ 1. Here η is a function of the full width at half maximum (FWHM) parameters.
There are several possible choices for the η parameter. A simple formula, accurate to 1%, expresses η in terms of the Lorentzian (f_L), Gaussian (f_G) and total (f) full width at half maximum (FWHM) parameters; the total FWHM parameter f is in turn described by a polynomial combination of f_G and f_L (see the sketch below for one common parameterization).
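A sketch of such a pseudo-Voigt evaluation is shown below. The specific polynomial fits used for the total width f and the mixing parameter η are the widely used Thompson–Cox–Hastings values; they are one common choice and are assumed here rather than taken from the text.

```python
import numpy as np

def pseudo_voigt(x, fwhm_g, fwhm_l):
    """Normalized pseudo-Voigt: eta * L(x; f) + (1 - eta) * G(x; f),
    with f and eta from the Thompson-Cox-Hastings polynomial fits."""
    fg, fl = fwhm_g, fwhm_l
    f = (fg**5 + 2.69269 * fg**4 * fl + 2.42843 * fg**3 * fl**2
         + 4.47163 * fg**2 * fl**3 + 0.07842 * fg * fl**4 + fl**5) ** 0.2
    r = fl / f
    eta = 1.36603 * r - 0.47719 * r**2 + 0.11116 * r**3
    sigma = f / (2.0 * np.sqrt(2.0 * np.log(2.0)))      # Gaussian with FWHM f
    gamma = f / 2.0                                      # Lorentzian with FWHM f
    gauss = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = gamma / (np.pi * (x**2 + gamma**2))
    return eta * lorentz + (1.0 - eta) * gauss

x = np.linspace(-5.0, 5.0, 11)
print(pseudo_voigt(x, fwhm_g=2.0, fwhm_l=1.0))
```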
The width of the Voigt profile
The full width at half maximum (FWHM) of the Voigt profile can be found from the widths of the associated Gaussian and Lorentzian profiles. The FWHM of the Gaussian profile is
f_G = 2σ√(2 ln 2).
The FWHM of the Lorentzian profile is
f_L = 2γ.
An approximate relation (accurate to within about 1.2%) between the widths of the Voigt, Gaussian, and Lorentzian profiles is:
f_V ≈ f_L/2 + √(f_L²/4 + f_G²).
By construction, this expression is exact for a pure Gaussian or Lorentzian.
A better approximation with an accuracy of 0.02% is given by (originally found by Kielkopf)
f_V ≈ 0.5346 f_L + √(0.2166 f_L² + f_G²).
Again, this expression is exact for a pure Gaussian or Lorentzian.
In the same publication, a slightly more precise (within 0.012%), yet significantly more complicated expression can be found.
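In code, the two width approximations read as follows (a sketch; the coefficients 0.5346 and 0.2166 are the commonly quoted values for the more accurate formula and are assumed here):

```python
import numpy as np

def fwhm_gauss(sigma):
    return 2.0 * sigma * np.sqrt(2.0 * np.log(2.0))

def fwhm_lorentz(gamma):
    return 2.0 * gamma

def fwhm_voigt_simple(fg, fl):
    """Approximation accurate to roughly 1.2%."""
    return 0.5 * fl + np.sqrt(0.25 * fl**2 + fg**2)

def fwhm_voigt_accurate(fg, fl):
    """Kielkopf-style approximation, roughly 0.02% accuracy."""
    return 0.5346 * fl + np.sqrt(0.2166 * fl**2 + fg**2)

fg, fl = fwhm_gauss(1.0), fwhm_lorentz(0.5)
print(fwhm_voigt_simple(fg, fl), fwhm_voigt_accurate(fg, fl))
```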
Asymmetric Pseudo-Voigt (Martinelli) function
The asymmetric pseudo-Voigt (Martinelli) function resembles a split normal distribution in that it has different widths on each side of the peak position. Mathematically this is expressed as:
with η being the weight of the Lorentzian and the width being a split function (taking one value below the peak position and another above it). In the symmetric limit, the Martinelli function reduces to a symmetric pseudo-Voigt function. The Martinelli function has been used to model elastic scattering on resonant inelastic X-ray scattering instruments.
References
External links
http://jugit.fz-juelich.de/mlz/libcerf, numeric C library for complex error functions, provides a function voigt(x, sigma, gamma) with approximately 13–14 digits precision.
The original article is: Voigt, Woldemar, 1912, "Das Gesetz der Intensitätsverteilung innerhalb der Linien eines Gasspektrums", Sitzungsbericht der Bayerischen Akademie der Wissenschaften, 25, 603 (see also: http://publikationen.badw.de/de/003395768)
Continuous distributions
Spectroscopy
Special functions
Probability distributions with non-finite variance | Voigt profile | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,790 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Special functions",
"Instrumental analysis",
"Combinatorics",
"Spectroscopy"
] |
1,202,852 | https://en.wikipedia.org/wiki/Constructed%20wetland | A constructed wetland is an artificial wetland to treat sewage, greywater, stormwater runoff or industrial wastewater. It may also be designed for land reclamation after mining, or as a mitigation step for natural areas lost to land development. Constructed wetlands are engineered systems that use the natural functions of vegetation, soil, and organisms to provide secondary treatment to wastewater. The design of the constructed wetland has to be adjusted according to the type of wastewater to be treated. Constructed wetlands have been used in both centralized and decentralized wastewater systems. Primary treatment is recommended when there is a large amount of suspended solids or soluble organic matter (measured as biochemical oxygen demand and chemical oxygen demand).
Similar to natural wetlands, constructed wetlands also act as a biofilter and/or can remove a range of pollutants (such as organic matter, nutrients, pathogens, heavy metals) from the water. Constructed wetlands are designed to remove water pollutants such as suspended solids, organic matter and nutrients (nitrogen and phosphorus). All types of pathogens (i.e., bacteria, viruses, protozoans and helminths) are expected to be removed to some extent in a constructed wetland. Subsurface wetlands provide greater pathogen removal than surface wetlands.
There are two main types of constructed wetlands: subsurface flow and surface flow. The planted vegetation plays an important role in contaminant removal. The filter bed, consisting usually of sand and gravel, has an equally important role to play. Some constructed wetlands may also serve as a habitat for native and migratory wildlife, although that is not their main purpose. Subsurface flow constructed wetlands are designed to have either horizontal flow or vertical flow of water through the gravel and sand bed. Vertical flow systems have a smaller space requirement than horizontal flow systems.
Terminology
Many terms are used to denote constructed wetlands, such as reed beds, soil infiltration beds, treatment wetlands, engineered wetlands, man-made or artificial wetlands. A biofilter has some similarities with a constructed wetland, but is usually without plants.
The term constructed wetland can also be used to describe land that is being restored and recultivated after having been destroyed in the past through drainage, conversion into farmland, or mining.
Overview
A constructed wetland is an engineered sequence of water bodies designed to treat wastewater or storm water runoff.
Vegetation in a wetland provides a substrate (roots, stems, and leaves) upon which microorganisms can grow as they break down organic materials. This community of microorganisms is known as the periphyton. The periphyton and natural chemical processes are responsible for approximately 90 percent of pollutant removal and waste breakdown. The plants remove about seven to ten percent of pollutants, and act as a carbon source for the microbes when they decay. Different species of aquatic plants have different rates of heavy metal uptake, a consideration for plant selection in a constructed wetland used for water treatment. Constructed wetlands are of two basic types: subsurface flow and surface flow wetlands.
Constructed wetlands are one example of nature-based solutions and of phytoremediation.
Constructed wetland systems are highly controlled environments that intend to mimic the occurrences of soil, flora, and microorganisms in natural wetlands to aid in treating wastewater. They are constructed with flow regimes, micro-biotic composition, and suitable plants in order to produce the most efficient treatment process.
Uses
Constructed wetlands can be used to treat raw sewage, storm water, agricultural and industrial effluent. Constructed wetlands mimic the functions of natural wetlands to capture stormwater, reduce nutrient loads, and create diverse wildlife habitat. Constructed wetlands are used for wastewater treatment or for greywater treatment.
Many regulatory agencies list treatment wetlands as one of their recommended "best management practices" for controlling urban runoff.
Removal of contaminants
Physical, chemical, and biological processes combine in wetlands to remove contaminants from wastewater. An understanding of these processes is fundamental not only to designing wetland systems but to understanding the fate of chemicals once they enter the wetland. Theoretically, wastewater treatment within a constructed wetland occurs as it passes through the wetland medium and the plant rhizosphere. A thin film around each root hair is aerobic due to the leakage of oxygen from the rhizomes, roots, and rootlets. Aerobic and anaerobic micro-organisms facilitate decomposition of organic matter. Microbial nitrification and subsequent denitrification releases nitrogen as gas to the atmosphere. Phosphorus is coprecipitated with iron, aluminium, and calcium compounds located in the root-bed medium. Suspended solids filter out as they settle in the water column in surface flow wetlands or are physically filtered out by the medium within subsurface flow wetlands. Harmful bacteria, fungi, and viruses are reduced by filtration and adsorption by biofilms on the gravel or sand media in subsurface flow and vertical flow systems.
Nitrogen removal
The dominant forms of nitrogen in wetlands that are of importance to wastewater treatment include organic nitrogen, ammonia, ammonium, nitrate and nitrite. Total nitrogen refers to all nitrogen species. Wastewater nitrogen removal is important because of ammonia's toxicity to fish if discharged into watercourses. Excessive nitrate in drinking water is thought to cause methemoglobinemia in infants, which decreases the blood's oxygen transport ability. Moreover, excess input of nitrogen from point and non-point sources to surface water promotes eutrophication in rivers, lakes, estuaries, and coastal oceans, which causes several problems in aquatic ecosystems, e.g. toxic algal blooms, oxygen depletion in water, fish mortality, and loss of aquatic biodiversity.
Ammonia removal occurs in constructed wetlands – if they are designed to achieve biological nutrient removal – in a similar way as in sewage treatment plants, except that no external, energy-intensive addition of air (oxygen) is needed. It is a two-step process, consisting of nitrification followed by denitrification. The nitrogen cycle is completed as follows: ammonia in the wastewater is converted to ammonium ions; the aerobic bacterium Nitrosomonas sp. oxidizes ammonium to nitrite; the bacterium Nitrobacter sp. then converts nitrite to nitrate. Under anaerobic conditions, nitrate is reduced to relatively harmless nitrogen gas that enters the atmosphere.
Nitrification
Nitrification is the biological conversion of organic and inorganic nitrogenous compounds from a reduced state to a more oxidized state, based on the action of two different bacteria types. Nitrification is strictly an aerobic process in which the end product is nitrate (NO3−). The process of nitrification oxidizes ammonium (from the wastewater) to nitrite (NO2−), and then nitrite is oxidized to nitrate (NO3−).
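In simplified stoichiometric form, the two oxidation steps can be sketched as follows (a standard textbook summary, not taken from the source):

```latex
\mathrm{NH_4^+ + \tfrac{3}{2}\,O_2 \;\longrightarrow\; NO_2^- + 2\,H^+ + H_2O \quad (\textit{Nitrosomonas})}
\mathrm{NO_2^- + \tfrac{1}{2}\,O_2 \;\longrightarrow\; NO_3^- \quad (\textit{Nitrobacter})}
```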
Denitrification
Denitrification is the biochemical reduction of oxidized nitrogen anions, nitrate and nitrite, to produce the gaseous products nitric oxide (NO), nitrous oxide (N2O) and nitrogen gas (N2), with concomitant oxidation of organic matter. The end product, N2, and to a lesser extent the intermediary by-product, N2O, are gases that re-enter the atmosphere.
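The overall reduction sequence can be sketched as (again a generic textbook summary):

```latex
\mathrm{NO_3^- \;\rightarrow\; NO_2^- \;\rightarrow\; NO \;\rightarrow\; N_2O \;\rightarrow\; N_2}
```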
Ammonia removal from mine water
Constructed wetlands have been used to remove ammonia and other nitrogenous compounds from contaminated mine water, including cyanide and nitrate.
Phosphorus removal
Phosphorus occurs naturally in both organic and inorganic forms. The analytical measure of biologically available orthophosphates is referred to as soluble reactive phosphorus (SR-P). Dissolved organic phosphorus and insoluble forms of organic and inorganic phosphorus are generally not biologically available until transformed into soluble inorganic forms.
In freshwater aquatic ecosystems phosphorus is typically the major limiting nutrient. Under undisturbed natural conditions, phosphorus is in short supply. The natural scarcity of phosphorus is demonstrated by the explosive growth of algae in water receiving heavy discharges of phosphorus-rich wastes. Because phosphorus does not have an atmospheric component, unlike nitrogen, the phosphorus cycle can be characterized as closed. The removal and storage of phosphorus from wastewater can only occur within the constructed wetland itself. Phosphorus may be sequestered within a wetland system by:
The binding of phosphorus in organic matter as a result of incorporation into living biomass,
Precipitation of insoluble phosphates with ferric iron, calcium, and aluminium found in wetland soils.
Biomass plants incorporation
Aquatic vegetation may play an important role in phosphorus removal and, if harvested, extend the life of a system by postponing phosphorus saturation of the sediments. Plants create a unique environment at the biofilm's attachment surface. Certain plants transport oxygen which is released at the biofilm/root interface, adding oxygen to the wetland system. Plants also increase soil or other root-bed medium hydraulic conductivity. As roots and rhizomes grow they are thought to disturb and loosen the medium, increasing its porosity, which may allow more effective fluid movement in the rhizosphere. When roots decay they leave behind ports and channels known as macropores which are effective in channeling water through the soil.
Metals removal
Constructed wetlands have been used extensively for the removal of dissolved metals and metalloids. Although these contaminants are prevalent in mine drainage, they are also found in stormwater, landfill leachate and other sources (e.g., leachate or FGD wash-water at coal-fired power plants), for which treatment wetlands have been constructed.
Mine water—Acid drainage removal
Constructed wetlands can also be used for treatment of acid mine drainage from coal mines.
Pathogen removal
Constructed wetlands are not designed for pathogen removal, but have been designed to remove other water quality constituents such as suspended solids, organic matter (biochemical oxygen demand and chemical oxygen demand) and nutrients (nitrogen and phosphorus).
All types of pathogens are expected to be removed in a constructed wetland; however, greater pathogen removal is expected to occur in a subsurface wetland. In a free water surface flow wetland one can expect 1 to 2 log10 reduction of pathogens; however, bacteria and virus removal may be less than 1 log10 reduction in systems that are heavily planted with vegetation. This is because constructed wetlands typically include vegetation which assists in removing other pollutants such as nitrogen and phosphorus. Therefore, the importance of sunlight exposure in removing viruses and bacteria is minimized in these systems.
Removal in a properly designed and operated free water surface flow wetland is reported to be less than 1 to 2 log10 for bacteria, less than 1 to 2 log10 for viruses, 1 to 2 log10 for protozoa, and 1 to 2 log10 for helminths. In subsurface flow wetlands, the expected removal of pathogens is reported to be 1 to 3 log10 for bacteria, 1 to 2 log10 for viruses, 2 log10 for protozoa, and 2 log10 for helminths.
The log10 removal efficiencies reported here can also be understood in terms of the common way of reporting removal efficiencies as percentages: 1 log10 removal is equivalent to a removal efficiency of 90%; 2 log10 = 99%; 3 log10 = 99.9%; 4 log10 = 99.99% and so on.
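This conversion is simply percent removed = (1 − 10^(−n)) × 100 for an n log10 reduction, as the short sketch below illustrates:

```python
def log_removal_to_percent(log10_removal: float) -> float:
    """Convert a log10 removal value to a percentage removal efficiency."""
    return (1.0 - 10.0 ** (-log10_removal)) * 100.0

for n in (1, 2, 3, 4):
    print(n, f"{log_removal_to_percent(n):.2f}%")
# 1 90.00%   2 99.00%   3 99.90%   4 99.99%
```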
Types and design considerations
Constructed wetland systems can be surface flow systems with only free-floating macrophytes, floating-leaved macrophytes, or submerged macrophytes; however, typical free water surface systems are usually constructed with emergent macrophytes. Subsurface flow-constructed wetlands with a vertical or a horizontal flow regime are also common and can be integrated into urban areas as they require relatively little space.
The main three broad types of constructed wetlands include:
Subsurface flow constructed wetland – this wetland can be either with vertical flow (the effluent moves vertically, from the planted layer down through the substrate and out) or with horizontal flow (the effluent moves horizontally, parallel to the surface)
Surface flow constructed wetland (this wetland has horizontal flow)
Floating treatment wetland
The former types are placed in a basin with a substrate to provide a surface area upon which large amounts of waste degrading biofilms form, while the latter relies on a flooded treatment basin upon which aquatic plants are held in flotation till they develop a thick mat of roots and rhizomes upon which biofilms form. In most cases, the bottom is lined with either a polymer geomembrane, concrete or clay (when there is appropriate clay type) in order to protect the water table and surrounding grounds. The substrate can be either gravel—generally limestone or pumice/volcanic rock, depending on local availability, sand or a mixture of various sizes of media (for vertical flow constructed wetlands).
Constructed wetlands can be used after a septic tank for primary treatment (or other types of systems) in order to separate the solids from the liquid effluent. Some constructed wetland designs however do not use upfront primary treatment.
Subsurface flow
In subsurface flow constructed wetlands the flow of wastewater occurs between the roots of the plants and there is no water surfacing (it is kept below gravel). As a result, the system is more efficient, does not attract mosquitoes, is less odorous and less sensitive to winter conditions. Also, less area is needed to purify water. A downside to the system are the intakes, which can clog or bioclog easily, although some larger sized gravel will often solve this problem.
Subsurface flow wetlands can be further classified as horizontal flow or vertical flow constructed wetlands. In the vertical flow constructed wetland, the effluent moves vertically from the planted layer down through the substrate and out (requiring air pumps to aerate the bed). In the horizontal flow constructed wetland the effluent moves horizontally via gravity, parallel to the surface, with no surface water thus avoiding mosquito breeding. Vertical flow constructed wetlands are considered to be more efficient with less area required compared to horizontal flow constructed wetlands. However, they need to be interval-loaded and their design requires more know-how while horizontal flow constructed wetlands can receive wastewater continuously and are easier to build.
Due to the increased efficiency, a vertical flow subsurface constructed wetland requires comparatively little space per person equivalent, down to 1.5 square metres in hot climates.
The "French System" combines primary and secondary treatment of raw wastewater. The effluent passes various filter beds whose grain size is getting progressively smaller (from gravel to sand).
Applications
Subsurface flow wetlands can treat a variety of different wastewaters, such as household wastewater, agricultural, paper mill wastewater, mining runoff, tannery or meat processing wastes, storm water.
The quality of the effluent is determined by the design and should be customized for the intended reuse application (like irrigation or toilet flushing) or the disposal method.
Design considerations
Depending on the type of constructed wetland, the wastewater passes through a gravel and, more rarely, sand medium on which plants are rooted. A gravel medium (generally limestone or volcanic lavastone) is mainly deployed in horizontal flow systems, although it does not work as efficiently as sand (which in turn clogs more readily); the use of lavastone allows for a surface-area reduction of about 20% compared to limestone.
Constructed subsurface flow wetlands are meant as secondary treatment systems, which means that the effluent first needs to pass a primary treatment that effectively removes solids. Such a primary treatment can consist of sand and grit removal, a grease trap, compost filter, septic tank, Imhoff tank, anaerobic baffled reactor or upflow anaerobic sludge blanket (UASB) reactor. The subsequent treatment is based on different biological and physical processes like filtration, adsorption or nitrification. Most important is the biological filtration through a biofilm of aerobic or facultative bacteria. Coarse sand in the filter bed provides a surface for microbial growth and supports the adsorption and filtration processes. For those microorganisms the oxygen supply needs to be sufficient.
Especially in warm and dry climates the effects of evapotranspiration and precipitation are significant. In cases of water loss, a vertical flow constructed wetland is preferable to a horizontal because of an unsaturated upper layer and a shorter retention time, although vertical flow systems are more dependent on an external energy source. Evapotranspiration (as is rainfall) is taken into account in designing a horizontal flow system.
The effluent can have a yellowish or brownish colour if domestic wastewater or blackwater is treated. Treated greywater usually does not tend to have a colour. Concerning pathogen levels, treated greywater meets the standards of pathogen levels for safe discharge to surface water. Treated domestic wastewater might need a tertiary treatment, depending on the intended reuse application.
Plantings of reedbeds are popular in European constructed subsurface flow wetlands, although at least twenty other plant species are usable. Many other fast-growing species can be used as well, for example Musa spp., Juncus spp., cattails (Typha spp.) and sedges.
Operation and maintenance
Overloading peaks should not cause performance problems, while continuous overloading leads to a loss of treatment capacity through too much suspended solids, sludge or fats.
Subsurface flow wetlands require the following maintenance tasks: regular checking of the pretreatment process, of pumps when they are used, of influent loads and distribution on the filter bed.
Comparisons with other types
Subsurface wetlands are less hospitable to mosquitoes than surface flow wetlands, as there is no water exposed to the surface. Mosquitoes can be a problem in surface flow constructed wetlands. Subsurface flow systems have the advantage of requiring less land area for water treatment than surface flow systems. However, surface flow wetlands can be more suitable for wildlife habitat.
For urban applications the area requirement of a subsurface flow constructed wetland might be a limiting factor compared to conventional municipal wastewater treatment plants. High rate aerobic treatment processes like activated sludge plants, trickling filters, rotating discs, submerged aerated filters or membrane bioreactor plants require less space. The advantage of subsurface flow constructed wetlands compared to those technologies is their operational robustness which is particularly important in developing countries. The fact that constructed wetlands do not produce secondary sludge (sewage sludge) is another advantage as there is no need for sewage sludge treatment. However, primary sludge from primary settling tanks does get produced and needs to be removed and treated.
Costs
The costs of subsurface flow constructed wetlands mainly depend on the costs of sand with which the bed has to be filled. Another factor is the cost of land.
Surface flow
Surface flow wetlands, also known as free water surface constructed wetlands, can be used for tertiary treatment or polishing of effluent from wastewater treatment plants. They are also suitable to treat stormwater drainage.
Surface flow constructed wetlands always have horizontal flow of wastewater across the roots of the plants, rather than vertical flow. They require a relatively large area to purify water compared to subsurface flow constructed wetlands and may have increased smell and lower performance in winter.
Surface flow wetlands have a similar appearance to ponds for wastewater treatment (such as "waste stabilization ponds") but are in the technical literature not classified as ponds.
Pathogens are destroyed by natural decay, predation from higher organisms, sedimentation and UV irradiation since the water is exposed to direct sunlight. The soil layer below the water is anaerobic but the roots of the plants release oxygen around them, this allows complex biological and chemical reactions.
Surface flow wetlands can be supported by a wide variety of soil types including bay mud and other silty clays.
Plants such as water hyacinth (Eichhornia crassipes) and Pontederia spp. are used worldwide (although Typha and Phragmites are highly invasive).
However, surface flow constructed wetlands may encourage mosquito breeding. They may also have high algae production that lowers the effluent quality, and because of the open water surface, mosquitoes and odours make them more difficult to integrate into an urban neighbourhood.
Hybrid systems
A combination of different types of constructed wetlands is possible to use the specific advantages of each system.
Integrated constructed wetland
An integrated constructed wetland is an unlined free surface flow constructed wetland with emergent vegetated areas and local soil material. Its objective is not only to treat wastewater from farmyards and other sources, but also to integrate the wetland infrastructure into the landscape and to enhance its biological diversity.
Integrated constructed wetland facilities may be more robust treatment systems than other constructed wetlands. This is due to their greater biological complexity and generally larger land area, with the associated longer hydraulic residence time, compared to conventional constructed wetlands.
Integrated constructed wetlands have been used in Ireland, the UK and the United States since about 2007. Farm constructed wetlands, a subtype of integrated constructed wetlands, have been promoted by the Scottish Environment Protection Agency and the Northern Ireland Environment Agency since 2008.
Other design aspects
The design of a constructed wetland can greatly affect the surrounding environment. A wide range of skills and knowledge is needed in the construction, and the work can easily be detrimental to the site if not done correctly. A long list of professions, ranging from civil engineers to hydrologists to wildlife biologists to landscape architects, is needed in this design process. The landscape architect can apply a wide range of skills to the task of constructing a wetland that may not be considered by other professions. Ecological landscape architects are also qualified to create wetland restoration designs in coordination with wetland scientists that increase the community value and appreciation of a project through well-designed access, interpretation, and views of the project. Landscape architecture has a long history of engagement with the aesthetic dimension of wetlands. Landscape architects also guide clients through the laws and regulations associated with constructing a wetland.
Plants and other organisms
Plants
Typhas and Phragmites are the main species used in constructed wetland due to their effectiveness, even though they can be invasive outside their native range.
In North America, cattails (Typha latifolia) are common in constructed wetlands because of their widespread abundance, ability to grow at different water depths, ease of transport and transplantation, and broad tolerance of water composition (including pH, salinity, dissolved oxygen and contaminant concentrations). Elsewhere, common reed (Phragmites australis) is widely used, both in blackwater treatment and in greywater treatment systems to purify wastewater.
Plants used are usually indigenous to the location, both for ecological reasons and for optimal performance.
Animals
Locally grown non-predatory fish can be added to surface flow constructed wetlands to eliminate or reduce pests, such as mosquitos.
Stormwater wetlands provide habitat for amphibians but the pollutants they accumulate can affect the survival of larval stages, potentially making them function as "ecological traps".
Costs
Since constructed wetlands are self-sustaining their lifetime costs are significantly lower than those of conventional treatment systems. Often their capital costs are also lower compared to conventional treatment systems. They do take up significant space, and are therefore not preferred where real estate costs are high.
History
Primary clarifier effluent was discharged directly to natural wetlands for decades before environmental regulations discouraged the practice. Subsurface flow constructed wetlands with sand filter beds have their origin in China and are now used in Asia in small cities.
Examples
Austria
The total number of constructed wetlands in Austria is 5,450 (in 2015). Due to legal requirements (nitrification), only vertical flow constructed wetlands are implemented in Austria as they achieve better nitrification performance than horizontal flow constructed wetlands. Only about 100 of these constructed wetlands have a design size of 50 population equivalents or more. The remaining 5,350 treatment plants are smaller than that.
Canada
As part of the remediation efforts to remove contamination from CFB Goose Bay, one of the waste dumps was transformed into an engineered wetland.
See also
Decentralized wastewater system
Ecological engineering
Ecological sanitation
Floodplain restoration
Integrated constructed wetland
Passive treatment system
Sanitation
Vegetative treatment system
Water-sensitive urban design
Wetland classification
Wetlands Construídos (a company in Brazil)
References
External links
Constructed Wetlands – US Environmental Protection Agency — Handbook, studies and related resources
Publications on constructed wetlands in the library of the Sustainable Sanitation Alliance
Artificial landforms
Environmental engineering
Environmental terminology
Sewerage infrastructure
Stormwater management
Sustainable design
Wetlands
Ecological restoration
Water and the environment
Chinese inventions | Constructed wetland | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 5,006 | [
"Hydrology",
"Constructed wetlands",
"Ecological restoration",
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Sewerage infrastructure",
"Wetlands",
"Water pollution",
"Civil engineering",
"Environmental engineering",
"Bioremediation"
] |
1,203,063 | https://en.wikipedia.org/wiki/Level-set%20method | The Level-set method (LSM) is a conceptual framework for using level sets as a tool for numerical analysis of surfaces and shapes. LSM can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects. LSM makes it easier to perform computations on shapes with sharp corners and shapes that change topology (such as by splitting in two or developing holes). These characteristics make LSM effective for modeling objects that vary in time, such as an airbag inflating or a drop of oil floating in water.
Overview
The figure on the right illustrates several ideas about LSM. In the upper left corner is a bounded region with a well-behaved boundary. Below it, the red surface is the graph of a level set function determining this shape, and the flat blue region represents the X-Y plane. The boundary of the shape is then the zero-level set of , while the shape itself is the set of points in the plane for which is positive (interior of the shape) or zero (at the boundary).
In the top row, the shape's topology changes as it is split in two. It is challenging to describe this transformation numerically by parameterizing the boundary of the shape and following its evolution: an algorithm would have to detect the moment the shape splits in two and then construct parameterizations for the two newly obtained curves. On the bottom row, however, the plane at which the level-set function is sampled is simply translated upwards, and the shape's change in topology is captured automatically. It is therefore less challenging to work with a shape through its level-set function than with the shape itself directly, where a method would need to consider all the possible deformations the shape might undergo.
Thus, in two dimensions, the level-set method amounts to representing a closed curve $\Gamma$ (such as the shape boundary in our example) using an auxiliary function $\varphi$, called the level-set function. The curve $\Gamma$ is represented as the zero-level set of $\varphi$ by
$$\Gamma = \{(x, y) \mid \varphi(x, y) = 0\},$$
and the level-set method manipulates $\Gamma$ implicitly through the function $\varphi$. This function is assumed to take positive values inside the region delimited by the curve $\Gamma$ and negative values outside.
The level-set equation
If the curve $\Gamma$ moves in the normal direction with a speed $v$, then by the chain rule and implicit differentiation, it can be determined that the level-set function $\varphi$ satisfies the level-set equation
$$\frac{\partial \varphi}{\partial t} = v\, |\nabla \varphi|.$$
Here, $|\cdot|$ is the Euclidean norm (denoted customarily by single bars in partial differential equations), and $t$ is time. This is a partial differential equation, in particular a Hamilton–Jacobi equation, and can be solved numerically, for example, by using finite differences on a Cartesian grid.
However, the numerical solution of the level-set equation may require advanced techniques. Simple finite difference methods fail quickly. Upwinding methods, such as the Godunov method, fare better; however, the level-set method does not guarantee preservation of the volume and shape of the level set in an advection field that conserves shape and size, for example a uniform or rotational velocity field. Instead, the shape of the level set may become distorted, and the level set may vanish over a few time steps. Therefore, high-order finite difference schemes, such as high-order essentially non-oscillatory (ENO) schemes, are often required, and even then the feasibility of long-term simulations is questionable. More advanced methods have been developed to overcome this, for example combinations of the level-set method with tracking marker particles advected by the velocity field.
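A minimal sketch of such an upwind discretisation, written for the common Hamilton–Jacobi form $\partial\varphi/\partial t + F|\nabla\varphi| = 0$ (the equation above corresponds to $F = -v$), assuming a uniform grid spacing and periodic boundaries; this is an illustration, not a production solver.

```python
import numpy as np

def upwind_grad_norm(phi: np.ndarray, dx: float, F: float) -> np.ndarray:
    """First-order (Osher-Sethian / Godunov-type) upwind approximation of
    |grad phi| for the equation phi_t + F * |grad phi| = 0.
    Periodic boundaries via np.roll, which is adequate for a demo."""
    dxm = (phi - np.roll(phi,  1, axis=0)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward  difference in x
    dym = (phi - np.roll(phi,  1, axis=1)) / dx
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx
    if F > 0:
        gx = np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2
        gy = np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2
    else:
        gx = np.minimum(dxm, 0.0)**2 + np.maximum(dxp, 0.0)**2
        gy = np.minimum(dym, 0.0)**2 + np.maximum(dyp, 0.0)**2
    return np.sqrt(gx + gy)

def level_set_step(phi: np.ndarray, dx: float, F: float, dt: float) -> np.ndarray:
    """One explicit Euler step of phi_t + F * |grad phi| = 0.
    Stability requires roughly dt <= dx / (|F| * sqrt(2))."""
    return phi - dt * F * upwind_grad_norm(phi, dx, F)
```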
Example
Consider a unit circle in $\mathbb{R}^2$, shrinking in on itself at a constant rate, i.e. each point on the boundary of the circle moves along its inward-pointing normal at some fixed speed. The circle will shrink and eventually collapse down to a point. If an initial distance field is constructed (i.e. a function whose value is the signed Euclidean distance to the boundary, positive in the interior, negative in the exterior) on the initial circle, the normalized gradient of this field will be the circle normal.
If the field has a constant value subtracted from it over time, the zero level (which was the initial boundary) of the new fields will also be circular and will similarly collapse to a point. This is because the subtraction is effectively the temporal integration of the eikonal equation with a fixed front velocity.
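A small numerical check of this example, as a hedged sketch on a coarse grid: build the signed distance field of the unit circle with the positive-inside convention used above, subtract the elapsed time, and estimate the radius of the zero level set from the enclosed area.

```python
import numpy as np

# Signed distance field of the unit circle, positive inside, negative outside,
# sampled on a grid (illustrative resolution).
n = 201
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x)
phi0 = 1.0 - np.sqrt(X**2 + Y**2)   # signed distance to the unit circle

# Moving the front inward at unit speed is, for this field, just subtracting t
# (temporal integration of the eikonal equation with constant front speed).
for t in (0.0, 0.25, 0.5, 0.75):
    phi = phi0 - t
    inside = phi > 0
    # estimate the radius of the zero level set from the enclosed area
    area = inside.sum() * (x[1] - x[0])**2
    print(f"t={t:4.2f}  radius ≈ {np.sqrt(area/np.pi):.3f}  (exact {1.0 - t:.2f})")
```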
Applications
In mathematical modeling of combustion, LSM is used to describe the instantaneous flame surface, known as the G equation.
Level-set data structures have been developed to facilitate the use of the level-set method in computer applications.
Computational fluid dynamics
Trajectory planning
Optimization
Image processing
Computational biophysics
Discrete complex dynamics (visualization of the parameter plane and the dynamic plane)
History
The level-set method was developed in 1979 by Alain Dervieux, and subsequently popularized by Stanley Osher and James Sethian. It has since become popular in many disciplines, such as image processing, computer graphics, computational geometry, optimization, computational fluid dynamics, and computational biology.
See also
Contour boxplot
Zebra analysis
G equation
Advanced Simulation Library
Volume of fluid method
Image segmentation#Level-set methods
Immersed boundary methods
Stochastic Eulerian Lagrangian methods
Level set (data structures)
Posterization
References
External links
See Ronald Fedkiw's academic web page for many pictures and animations showing how the level-set method can be used to model real-life phenomena.
Multivac is a C++ library for front tracking in 2D with level-set methods.
James Sethian's web page on level-set method.
Stanley Osher's homepage.
The Level Set Method. MIT 16.920J / 2.097J / 6.339J. Numerical Methods for Partial Differential Equations by Per-Olof Persson. March 8, 2005
Lecture 11: The Level Set Method: MIT 18.086. Mathematical Methods for Engineers II by Gilbert Strang
Optimization algorithms and methods
Computer graphics algorithms
Image processing
Computational fluid dynamics
Articles containing video clips
Implicit surface modeling | Level-set method | [
"Physics",
"Chemistry"
] | 1,225 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
1,203,089 | https://en.wikipedia.org/wiki/Torsional%20vibration | Torsional vibration is the angular vibration of an object - commonly a shaft - along its axis of rotation. Torsional vibration is often a concern in power transmission systems using rotating shafts or couplings, where it can cause failures if not controlled. A second effect of torsional vibrations applies to passenger cars. Torsional vibrations can lead to seat vibrations or noise at certain speeds. Both reduce the comfort.
In ideal power generation (or transmission) systems using rotating parts, the torques applied or reacted are "smooth" leading to constant speeds, and the rotating plane where the power is generated (input) and the plane it is taken out (output) are the same. In reality this is not the case. The torques generated may not be smooth (e.g., internal combustion engines) or the component being driven may not react to the torque smoothly (e.g., reciprocating compressors), and the power generating plane is normally at some distance to the power takeoff plane. Also, the components transmitting the torque can generate non-smooth or alternating torques (e.g., elastic drive belts, worn gears, misaligned shafts). Because no material can be infinitely stiff, these alternating torques applied at some distance on a shaft cause twisting vibration about the axis of rotation.
Sources of torsional vibration
Torsional vibration can be introduced into a drive train by the power source. But even a drive train with a very smooth rotational input can develop torsional vibrations through internal components. Common sources are:
Internal combustion engine: The torsional vibrations of the not continuous combustion and the crank shaft geometry itself cause torsional vibrations
Reciprocating compressor: The pistons experience discontinuous forces from the compression.
Universal joint: The geometry of this joint causes torsional vibrations if the shafts are not parallel.
Stick slip: During the engagement of a friction element, stick slip situations create torsional vibrations.
Lash: Drive train lash can cause torsional vibrations if the direction of rotation is changed or if the flow of power, i.e. driver vs. driven, is reversed.
Crankshaft torsional vibration
Torsional vibration is a concern in the crankshafts of internal combustion engines because it could break the crankshaft itself; shear-off the flywheel; or cause driven belts, gears and attached components to fail, especially when the frequency of the vibration matches the torsional resonant frequency of the crankshaft. Causes of the torsional vibration are attributed to several factors.
Alternating torques are generated by the slider-crank mechanism of the crankshaft, connecting rod, and piston.
The cylinder pressure due to combustion is not constant through the combustion cycle.
The slider-crank mechanism does not output a smooth torque even if the pressure is constant (e.g., at top dead centre there is no torque generated)
The motion of the piston mass and connecting rod mass generate alternating torques often referred to as "inertia" torques
Engines with six or more cylinders in a straight line configuration can have very flexible crankshafts due to their long length.
Two-stroke engines generally have smaller bearing overlap between the main and pin bearings due to the larger stroke length, which increases the flexibility of the crankshaft as a result of the decreased stiffness.
There is inherently little damping in a crankshaft to reduce the vibration except for the shearing resistance of oil film in the main and conrod bearings.
If torsional vibration is not controlled in a crankshaft, it can cause failure of the crankshaft or of any accessories that are driven by the crankshaft (typically at the front of the engine; the inertia of the flywheel normally reduces the motion at the rear of the engine). Elastic couplings in the drive train convert the vibration energy into heat. To ensure that a coupling is not damaged by the resulting temperature rise (which can be very high, depending on the load), the design is verified through a torsional vibration calculation.
This potentially damaging vibration is often controlled by a torsional damper that is located at the front nose of the crankshaft (in automobiles it is often integrated into the front pulley). There are two main types of torsional dampers.
Viscous dampers consist of an inertia ring in a viscous fluid. The torsional vibration of the crankshaft forces the fluid through narrow passages that dissipates the vibration as heat. The viscous torsional damper is analogous to the hydraulic shock absorber in a car's suspension.
Tuned absorber type of "dampers" often referred to as a harmonic dampers or harmonic balancers (even though it technically does not damp or balance the crankshaft). This damper uses a spring element (often rubber in automobile engines) and an inertia ring that is typically tuned to the first torsional natural frequency of the crankshaft. This type of damper reduces the vibration at specific engine speeds when an excitation torque excites the first natural frequency of the crankshaft, but not at other speeds. This type of damper is analogous to the tuned mass dampers used in skyscrapers to reduce the building motion during an earthquake.
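To make the "first torsional natural frequency" concrete, here is a minimal sketch for a lumped two-inertia model (one inertia for the crankshaft and its accessories, one for the flywheel) connected by a torsionally elastic shaft. Real crankshafts require multi-degree-of-freedom models, and the numerical values below are purely illustrative assumptions.

```python
import math

def two_inertia_natural_frequency(J1: float, J2: float, k: float) -> float:
    """Undamped torsional natural frequency [Hz] of two inertias J1, J2 [kg*m^2]
    connected by a shaft of torsional stiffness k [N*m/rad]."""
    omega = math.sqrt(k * (1.0 / J1 + 1.0 / J2))   # rad/s
    return omega / (2.0 * math.pi)

# Illustrative (assumed) values for a small engine lumped model:
J_crank, J_flywheel, k_shaft = 0.05, 0.5, 2.0e5
f_n = two_inertia_natural_frequency(J_crank, J_flywheel, k_shaft)
print(f"first torsional natural frequency ≈ {f_n:.0f} Hz")
# A tuned absorber would be sized so that sqrt(k_ring / J_ring) / (2*pi) ≈ f_n.
```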
Torsional vibrations in electromechanical drive systems
Torsional vibrations of drive systems usually result in a fluctuation of the rotational speed of the rotor of the driving electric motor. Such oscillations of the angular speed superimposed on the average rotor rotational speed cause perturbations of the electromagnetic flux, leading to additional oscillations of the electric currents in the motor windings. Then, the generated electromagnetic torque is also influenced by additional time-varying electromechanical interactions, which lead to further torsional vibrations of the drive system. According to the above, mechanical vibrations/oscillations of the drive system become coupled with the electrical oscillations of the motor windings' currents. Such coupling is typically nonlinear and presents a high computational burden.
Due to the highly nonlinear and coupled nature of electromechanical oscillations, approximations are often used, enabling such oscillations to be characterized analytically. To simplify the characterization of the oscillations between mechanical and electric systems, it is common to assume the mechanical and electrical components are uncoupled. Then, by holding either the mechanical or electrical aspect in steady-state, the characteristic of the other can be calculated. A common method is to apply electromagnetic torques generated by the electric motors as assumed excitation functions of time or of the rotor-to-stator slip, which are usually based on numerous experimental measurements carried out for a given electric motor's dynamic behaviour. For this purpose, by means of measurement results, i.e., empirically, formulas have been developed that provide good approximations for the electromagnetic external excitations produced by the electric motor. Although the electric currents flowing in the electric motor windings are accurate, the mechanical drive system is typically reduced to one or seldom to at most a few rotating rigid bodies. In many cases, such simplifications yield sufficiently useful results for engineering applications, but they can lead to inaccuracies since many qualitative dynamic properties of the mechanical systems, e.g., their mass distribution, torsional flexibility, and damping effects, are neglected. Thus, an influence of the oscillatory behaviour of drive systems on the electric machine rotor angular speed fluctuations, and in this way on the electric current oscillations in the rotor and stator windings, cannot be investigated with a satisfactory precision, excepting numerical methods, which can provide arbitrarily high accuracy.
Mechanical vibrations and deformations are phenomena associated with the operation of the majority of railway vehicle drivetrain structures. Knowledge about torsional vibrations in the transmission systems of railway vehicles is of great importance in the field of mechanical system dynamics. Torsional vibrations in railway vehicle drivetrains are generated by many coupled mechanisms, which are very complex and can be divided into two main parts:
The electromechanical interactions within the railway drivetrain system, including the electric motor, gears, and the driven parts of the disc and gear clutches.
Torsional vibrations of the flexible wheels and wheelsets caused by variations of the adhesion forces in the wheel-rail contact zone.
The interaction of the adhesion forces has nonlinear features which are related to the creep value and depend strongly on the wheel-rail zone conditions and the track geometry (especially when driving on a curved section of the track). In many modern mechanical systems, torsional structural deformability plays an important role. The study of railway vehicle dynamics often uses rigid multibody methods without torsionally deformable elements. This approach does not enable analysis of the self-excited vibrations, which have an important influence on the wheel-rail longitudinal interaction.
Dynamic modelling of electrical drive systems coupled with elements of a driven machine or vehicle is particularly important when the purpose of such modelling is to obtain information about transient phenomena of system operation, like run-up, run-down, and the loss of adhesion in the wheel-rail zone. Modelling the electromechanical interaction between the electric driving motor and the machine also influences the self-excited torsional vibrations in the drive system.
Measuring torsional vibration on physical systems
The most common way to measure torsional vibration is the approach of using equidistant pulses over one shaft revolution. Dedicated shaft encoders as well as gear tooth pickup transducers (induction, hall-effect, variable reluctance, etc.) can generate these pulses. The resulting encoder pulse train is converted into either a digital rpm reading or a voltage proportional to the rpm.
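A minimal sketch of the pulse-train approach, assuming ideal, strictly increasing pulse timestamps from an N-tooth wheel; real measurement chains must also compensate for tooth-spacing errors and sensor noise, and the numbers below are purely illustrative.

```python
import numpy as np

def instantaneous_speed_rpm(pulse_times, pulses_per_rev):
    """Angular speed between consecutive equidistant encoder pulses, in rpm.
    pulse_times must be strictly increasing arrival times in seconds."""
    dt = np.diff(np.asarray(pulse_times, dtype=float))
    return (1.0 / pulses_per_rev) / dt * 60.0

# Synthetic demonstration: a 60-tooth wheel at a mean 3000 rpm (50 rev/s) with
# a small second-order torsional oscillation advancing/retarding pulse arrivals.
ppr, mean_rev_per_s, amp = 60, 50.0, 0.02
angle_rev = np.arange(0, 20 * ppr) / ppr                 # pulse positions [rev]
times = (angle_rev / mean_rev_per_s
         + (amp / mean_rev_per_s) * np.sin(2 * np.pi * 2.0 * angle_rev))
rpm = instantaneous_speed_rpm(times, ppr)
print(f"rpm: min {rpm.min():.0f}, mean {rpm.mean():.0f}, max {rpm.max():.0f}")
```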
The use of a dual-beam laser is another technique for measuring torsional vibrations. The operation of the dual-beam laser is based on the difference in reflection frequency of two perfectly aligned beams pointing at different points on a shaft. Despite its specific advantages, this method yields a limited frequency range, requires line-of-sight from the part to the laser, and requires multiple lasers if several measurement points need to be measured in parallel.
Torsional vibration software
There are many software packages that are capable of solving the torsional vibration system of equations. Torsional-vibration-specific codes are more versatile for design and system validation purposes and can produce simulation data that can be readily compared to published industry standards. These codes make it easy to add system branches, mass-elastic data, steady-state loads, transient disturbances and many other items only a rotordynamicist would need. Torsional-vibration-specific codes include:
AxSTREAM RotorDynamics - Commercial FEA-based program for performing the full scope of torsional analyses on the complete range of rotating equipment. Can be used to perform steady-state and transient, modal, harmonic and reciprocating machines analysis, and generates stability plot and Campbell diagrams quickly.
ARMD TORSION - Commercial FEA-based software for performing damped and undamped torsional natural frequencies, mode shapes, steady-state and time-transient response of mechanical drive trains with inputs of various types of external excitation, synchronous motor start-up torque, compressor torques, and electrical system disturbances.
Bond Graphs can be used to analyse torsional vibrations in generator sets, such as those used aboard ships.
See also
Torsion (mechanics)
Torsion coefficient
Torsion spring or -bar
Torque
Damping torque
Bibliography
Nestorides, E.J., BICERA: A Handbook on Torsional Vibration, University Press, 1958,
References
External links
Torsional Vibration Application Case for Vehicle Frontend Accessory Drive
Mechanical vibrations | Torsional vibration | [
"Physics",
"Engineering"
] | 2,410 | [
"Structural engineering",
"Mechanics",
"Mechanical vibrations"
] |
1,204,012 | https://en.wikipedia.org/wiki/Bhangmeter | A bhangmeter is a non-imaging radiometer installed on reconnaissance and navigation satellites to detect atmospheric nuclear detonations and determine the yield of the nuclear weapon. They are also installed on some armored fighting vehicles, in particular NBC reconnaissance vehicles, in order to help detect, localise and analyse tactical nuclear detonations. They are often used alongside pressure and sound sensors in this role in addition to standard radiation sensors. Some nuclear bunkers and military facilities may also be equipped with such sensors alongside seismic event detectors.
The bhangmeter was developed at Los Alamos National Laboratory by a team led by Hermann Hoerlin.
History
The bhangmeter was invented, and the first proof-of-concept device was built, in 1948 to measure the nuclear test detonations of Operation Sandstone. Prototype and production instruments were later built by EG&G, and the name "bhangmeter" was coined in 1950 by Frederick Reines. Bhangmeters became standard instruments used to observe US nuclear tests. A bhangmeter was developed to observe the detonations of Operation Buster-Jangle (1951) and Operation Tumbler-Snapper (1952). These tests laid the groundwork for a large nationwide deployment of North American bhangmeters with the Bomb Alarm System (1961–1967).
US president John F. Kennedy and the First Secretary of the Communist Party of the Soviet Union Nikita Khrushchev signed the Partial Test Ban Treaty on August 5, 1963, under the condition that each party could use its own technical means to monitor the ban on nuclear testing in the atmosphere or in outer space.
Bhangmeters were first installed, in 1961, aboard a modified US KC-135A aircraft monitoring the pre-announced Soviet test of Tsar Bomba.
The Vela satellites were the first space-based observation devices jointly developed by the U.S. Air Force and the Atomic Energy Commission. The first generation of Vela satellites were not equipped with bhangmeters but with X-ray sensors to detect the intense single pulse of X-rays produced by a nuclear explosion. The first satellites which incorporated bhangmeters were the Advanced Vela satellites.
Since 1980, bhangmeters have been included on US GPS navigation satellites.
Description
The silicon photodiode sensors are designed to detect the distinctive bright double pulse of visible light that is emitted from atmospheric nuclear weapons explosions. This signature consists of a short and intense flash lasting around 1 millisecond, followed by a second much more prolonged and less intense emission of light taking a fraction of a second to several seconds to build up. This signature, with a double intensity maximum, is characteristic of atmospheric nuclear explosions and is the result of the Earth's atmosphere becoming opaque to visible light and transparent again as the explosion's shock wave travels through it.
The effect occurs because the surface of the early fireball is quickly overtaken by the expanding "case shock", the atmospheric shock wave composed of the ionised plasma of what was once the casing and other matter of the device. Although it emits a considerable amount of light itself, it is opaque and prevents the far brighter fireball from shining through. The net result recorded is a decrease of the light visible from outer space as the shock wave expands, producing the first peak recorded by the bhangmeter.
As it expands, the shock wave cools off and becomes less opaque to the visible light produced by the inner fireball. The bhangmeter starts eventually to record an increase in visible light intensity. The expansion of the fireball leads to an increase of its surface area and consequently an increase of the amount of visible light radiated off to space. The fireball continues to cool down so the amount of light eventually starts to decrease, causing the second peak observed by the bhangmeter. The time between the first and second peaks can be used to determine its nuclear yield.
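As an illustration of this yield estimate, the following sketch uses an approximate empirical scaling for the time to the second optical maximum, t ≈ 0.0417·W^0.44 seconds with W the yield in kilotons, as quoted in Glasstone and Dolan's The Effects of Nuclear Weapons for low-altitude bursts. The coefficient and exponent should be treated as rough values, and operational bhangmeter processing is considerably more involved.

```python
def yield_kt_from_second_maximum(t_seconds: float) -> float:
    """Invert the approximate scaling t = 0.0417 * W**0.44 (W in kilotons)."""
    return (t_seconds / 0.0417) ** (1.0 / 0.44)

# Illustrative readings of the time to the second light maximum:
print(round(yield_kt_from_second_maximum(0.15)))   # roughly a 20 kt-class event
print(round(yield_kt_from_second_maximum(0.9)))    # roughly a 1 Mt-class event
```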
The effect is unambiguous for explosions below a certain altitude, but above this height a more ambiguous single pulse is produced.
Origin of the name
The name of the detector is a pun which was bestowed upon it by Fred Reines, one of the scientists working on the project. The name is derived from the Hindi word "bhang", a locally grown variety of cannabis which is smoked or drunk to induce intoxicating effects, the joke being that one would have to be on drugs to believe the bhangmeter detectors would work properly. This is in contrast to a "bangmeter" one might associate with detection of nuclear explosions.
See also
Vela incident
WC-135 Constant Phoenix
Nuclear MASINT
Electro-optical MASINT
References
Further reading
Nuclear weapons
Nuclear warfare
Electromagnetic radiation meters
Chemical, biological, radiological and nuclear defense | Bhangmeter | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 955 | [
"Spectrum (physical sciences)",
"Electromagnetic radiation meters",
"Electromagnetic spectrum",
"Measuring instruments",
"Biological warfare",
" biological",
" radiological and nuclear defense",
"Nuclear warfare",
"Chemical",
"Radioactivity"
] |
1,204,294 | https://en.wikipedia.org/wiki/Gluing%20axiom | In mathematics, the gluing axiom is introduced to define what a sheaf on a topological space must satisfy, given that it is a presheaf, which is by definition a contravariant functor
to a category $C$ which initially one takes to be the category of sets; call the presheaf $\mathcal{F} : \mathcal{O}(X) \to C$. Here $\mathcal{O}(X)$ is the partial order of open sets of $X$ ordered by inclusion maps, and it is considered as a category in the standard way, with a unique morphism
$$U \to V$$
if $U$ is a subset of $V$, and none otherwise.
As phrased in the sheaf article, there is a certain axiom that $\mathcal{F}$ must satisfy, for any open cover of an open set of $X$. For example, given open sets $U$ and $V$ with union $U \cup V$ and intersection $U \cap V$, the required condition is that
$$\mathcal{F}(U \cup V) \text{ is the subset of } \mathcal{F}(U) \times \mathcal{F}(V) \text{ with equal image in } \mathcal{F}(U \cap V).$$
In less formal language, a section $s$ of $\mathcal{F}$ over $U \cup V$ is equally well given by a pair of sections $(s_1, s_2)$ on $U$ and $V$ respectively, which 'agree' in the sense that $s_1$ and $s_2$ have a common image in $\mathcal{F}(U \cap V)$ under the respective restriction maps
$$\mathcal{F}(U) \to \mathcal{F}(U \cap V)$$
and
$$\mathcal{F}(V) \to \mathcal{F}(U \cap V).$$
The first major hurdle in sheaf theory is to see that this gluing or patching axiom is a correct abstraction from the usual idea in geometric situations. For example, a vector field is a section of a tangent bundle on a smooth manifold; this says that a vector field on the union of two open sets is (no more and no less than) vector fields on the two sets that agree where they overlap.
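For the simplest concrete case, the sheaf of arbitrary (set-valued) functions on a space with finitely many points, where a section over an open set is just a function on its points, the gluing condition can be illustrated in a few lines of Python. This is only an informal illustration, not part of the formal development.

```python
# Sections over an open set are dicts mapping its points to values;
# restriction to a smaller open set is dict restriction.

def restrict(section, subset):
    return {p: section[p] for p in subset}

def glue(s_u, s_v, U, V):
    """Glue sections over U and V into one over U ∪ V, if they agree on U ∩ V."""
    overlap = U & V
    if restrict(s_u, overlap) != restrict(s_v, overlap):
        raise ValueError("sections do not agree on the overlap; no gluing exists")
    return {**s_u, **s_v}

U, V = {1, 2, 3}, {3, 4}
s_u = {1: 'a', 2: 'b', 3: 'c'}
s_v = {3: 'c', 4: 'd'}
# The glued section is the unique section over U ∪ V restricting to s_u and s_v.
print(glue(s_u, s_v, U, V))   # {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
```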
Given this basic understanding, there are further issues in the theory, and some will be addressed here. A different direction is that of the Grothendieck topology, and yet another is the logical status of 'local existence' (see Kripke–Joyal semantics).
Removing restrictions on C
To rephrase this definition in a way that will work in any category that has sufficient structure, we note that we can write the objects and morphisms involved in the definition above in a diagram which we will call (G), for "gluing":
$$\mathcal{F}(U) \rightarrow \prod_{i} \mathcal{F}(U_i) \rightrightarrows \prod_{i,j} \mathcal{F}(U_i \cap U_j)$$
Here the first map is the product of the restriction maps
$$\operatorname{res}_{U,\,U_i} : \mathcal{F}(U) \rightarrow \mathcal{F}(U_i)$$
and each pair of arrows represents the two restrictions
$$\operatorname{res}_{U_i,\,U_i \cap U_j} : \mathcal{F}(U_i) \rightarrow \mathcal{F}(U_i \cap U_j)$$
and
$$\operatorname{res}_{U_j,\,U_i \cap U_j} : \mathcal{F}(U_j) \rightarrow \mathcal{F}(U_i \cap U_j).$$
It is worthwhile to note that these maps exhaust all of the possible restriction maps among $\mathcal{F}(U)$, the $\mathcal{F}(U_i)$, and the $\mathcal{F}(U_i \cap U_j)$.
The condition for $\mathcal{F}$ to be a sheaf is that for any open set $U$ and any collection of open sets $\{U_i\}_{i \in I}$ whose union is $U$, the diagram (G) above is an equalizer.
One way of understanding the gluing axiom is to notice that $U$ is the colimit of the diagram
$$\coprod_{i,j} U_i \cap U_j \rightrightarrows \coprod_{i} U_i,$$
with the two arrows given by the inclusions of each $U_i \cap U_j$ into $U_i$ and into $U_j$.
The gluing axiom says that $\mathcal{F}$ turns colimits of such diagrams into limits.
Sheaves on a basis of open sets
In some categories, it is possible to construct a sheaf by specifying only some of its sections. Specifically, let be a topological space with basis . We can define a category to be the full subcategory of whose objects are the . A B-sheaf on with values in is a contravariant functor
which satisfies the gluing axiom for sets in . That is, on a selection of open sets of , specifies all of the sections of a sheaf, and on the other open sets, it is undetermined.
B-sheaves are equivalent to sheaves (that is, the category of sheaves is equivalent to the category of B-sheaves). Clearly a sheaf on can be restricted to a B-sheaf. In the other direction, given a B-sheaf we must determine the sections of on the other objects of . To do this, note that for each open set , we can find a collection whose union is . Categorically speaking, this choice makes the colimit of the full subcategory of whose objects are . Since is contravariant, we define to be the limit of the with respect to the restriction maps. (Here we must assume that this limit exists in .) If is a basic open set, then is a terminal object of the above subcategory of , and hence . Therefore, extends to a presheaf on . It can be verified that is a sheaf, essentially because every element of every open cover of is a union of basis elements (by the definition of a basis), and every pairwise intersection of elements in an open cover of is a union of basis elements (again by the definition of a basis).
The logic of C
The first needs of sheaf theory were for sheaves of abelian groups; so taking the category as the category of abelian groups was only natural. In applications to geometry, for example complex manifolds and algebraic geometry, the idea of a sheaf of local rings is central. This, however, is not quite the same thing; one speaks instead of a locally ringed space, because it is not true, except in trite cases, that such a sheaf is a functor into a category of local rings. It is the stalks of the sheaf that are local rings, not the collections of sections (which are rings, but in general are not close to being local). We can think of a locally ringed space as a parametrised family of local rings, depending on in .
A more careful discussion dispels any mystery here. One can speak freely of a sheaf of abelian groups, or rings, because those are algebraic structures (defined, if one insists, by an explicit signature). Any category having finite products supports the idea of a group object, which some prefer just to call a group in . In the case of this kind of purely algebraic structure, we can talk either of a sheaf having values in the category of abelian groups, or an abelian group in the category of sheaves of sets; it really doesn't matter.
In the local ring case, it does matter. At a foundational level we must use the second style of definition, to describe what a local ring means in a category. This is a logical matter: axioms for a local ring require use of existential quantification, in the form that for any $r$ in the ring, one of $r$ and $1 - r$ is invertible. This allows one to specify what a 'local ring in a category' should be, in the case that the category supports enough structure.
Sheafification
To turn a given presheaf into a sheaf , there is a standard device called sheafification or sheaving. The rough intuition of what one should do, at least for a presheaf of sets, is to introduce an equivalence relation, which makes equivalent data given by different covers on the overlaps by refining the covers. One approach is therefore to go to the stalks and recover the sheaf space of the best possible sheaf produced from .
This use of language strongly suggests that we are dealing here with adjoint functors. Therefore, it makes sense to observe that the sheaves on form a full subcategory of the presheaves on . Implicit in that is the statement that a morphism of sheaves is nothing more than a natural transformation of the sheaves, considered as functors. Therefore, we get an abstract characterisation of sheafification as left adjoint to the inclusion. In some applications, naturally, one does need a description.
In more abstract language, the sheaves on form a reflective subcategory of the presheaves (Mac Lane–Moerdijk Sheaves in Geometry and Logic p. 86). In topos theory, for a Lawvere–Tierney topology and its sheaves, there is an analogous result (ibid. p. 227).
Other gluing axioms
The gluing axiom of sheaf theory is rather general. One can note that the Mayer–Vietoris axiom of homotopy theory, for example, is a special case.
See also
Gluing schemes
Notes
References
General topology
Limits (category theory)
Homological algebra
Mathematical axioms
Differential topology | Gluing axiom | [
"Mathematics"
] | 1,618 | [
"General topology",
"Mathematical structures",
"Mathematical logic",
"Mathematical axioms",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Differential topology",
"Limits (category theory)",
"Homological algebra"
] |
1,205,131 | https://en.wikipedia.org/wiki/Regge%20theory | In quantum physics, Regge theory ( , ) is the study of the analytic properties of scattering as a function of angular momentum, where the angular momentum is not restricted to be an integer multiple of ħ but is allowed to take any complex value. The nonrelativistic theory was developed by Tullio Regge in 1959.
Details
The simplest example of Regge poles is provided by the quantum mechanical treatment of the Coulomb potential $-e^2/(4\pi\varepsilon_0 r)$ or, phrased differently, by the quantum mechanical treatment of the binding or scattering of an electron of mass $m$ and electric charge $-e$ off a proton of mass $M$ and charge $+e$. The energy $E$ of the binding of the electron to the proton is negative whereas for scattering the energy is positive. The formula for the binding energy is the expression
$$E = E_n = -\frac{m e^4}{2\,(4\pi\varepsilon_0)^2 \hbar^2 n^2}, \qquad n = 1, 2, 3, \ldots,$$
where $\hbar = h/2\pi$, $h$ is the Planck constant, and $\varepsilon_0$ is the permittivity of the vacuum. The principal quantum number $n$ is in quantum mechanics (by solution of the radial Schrödinger equation) found to be given by $n = n_r + l + 1$, where $n_r = 0, 1, 2, \ldots$ is the radial quantum number and $l$ the quantum number of the orbital angular momentum. Solving the above equation for $l$, one obtains the equation
$$l(E) = -n_r - 1 + \frac{e^2}{4\pi\varepsilon_0 \hbar}\sqrt{\frac{m}{-2E}}.$$
Considered as a complex function of $E$, this expression describes in the complex $l$-plane a path which is called a Regge trajectory. Thus in this consideration the orbital angular momentum can assume complex values.
Regge trajectories can be obtained for many other potentials, in particular also for the Yukawa potential.
Regge trajectories appear as poles of the scattering amplitude, or in the related $S$-matrix. In the case of the Coulomb potential considered above, this $S$-matrix is given by the following expression, as can be checked by reference to any textbook on quantum mechanics:
$$S_l = \frac{\Gamma(l + 1 + i\eta)}{\Gamma(l + 1 - i\eta)}, \qquad \eta = -\frac{e^2 m}{4\pi\varepsilon_0 \hbar^2 k}, \quad E = \frac{\hbar^2 k^2}{2m},$$
where $\Gamma$ is the gamma function, a generalization of the factorial $n!$. This gamma function is a meromorphic function of its argument with simple poles at the non-positive integers. Thus the expression (specifically the gamma function in the numerator) possesses poles at precisely those points which are given by the above expression for the Regge trajectories; hence the name Regge poles.
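A small numerical sketch (using CODATA constants from scipy.constants) of the Coulomb Regge trajectory given above: evaluated at the hydrogenic bound-state energies, l(E) passes through the integers n − n_r − 1, as it should. The nonrelativistic electron mass is used here and the reduced-mass correction is neglected.

```python
import math
from scipy.constants import e, m_e, hbar, epsilon_0   # CODATA values

def regge_l(E_joules: float, n_r: int) -> float:
    """Coulomb Regge trajectory l(E) for radial quantum number n_r (E < 0)."""
    coupling = e**2 / (4 * math.pi * epsilon_0 * hbar)
    return -n_r - 1 + coupling * math.sqrt(m_e / (-2 * E_joules))

def bohr_energy(n: int) -> float:
    """Hydrogen binding energy E_n in joules (nonrelativistic, infinite proton mass)."""
    return -m_e * e**4 / (2 * (4 * math.pi * epsilon_0) ** 2 * hbar**2 * n**2)

# At the bound-state energies the trajectory passes through integer l = n - n_r - 1:
for n in (1, 2, 3):
    print(n, round(regge_l(bohr_energy(n), n_r=0), 6))   # -> 0.0, 1.0, 2.0
```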
History and implications
The main result of the theory is that the scattering amplitude for potential scattering grows as a function of the cosine of the scattering angle as a power that changes as the scattering energy changes:
$$A(E, \cos\theta) \propto (\cos\theta)^{\,l(E)} \quad \text{as } \cos\theta \to \infty,$$
where $l(E)$ is the noninteger value of the angular momentum of a would-be bound state with energy $E$. It is determined by solving the radial Schrödinger equation and it smoothly interpolates the energy of wavefunctions with different angular momentum but with the same radial excitation number. The trajectory function becomes a function of $s$, the squared centre-of-mass energy, in the relativistic generalization. The expression $l(E)$ is known as the Regge trajectory function, and when it is an integer, the particles form an actual bound state with this angular momentum. The asymptotic form applies when $\cos\theta$ is much greater than one, which is not a physical limit in nonrelativistic scattering.
Shortly afterwards, Stanley Mandelstam noted that in relativity the purely formal limit of large is near to a physical limit — the limit of large . Large means large energy in the crossed channel, where one of the incoming particles has an energy momentum that makes it an energetic outgoing antiparticle. This observation turned Regge theory from a mathematical curiosity into a physical theory: it demands that the function that determines the falloff rate of the scattering amplitude for particle-particle scattering at large energies is the same as the function that determines the bound state energies for a particle-antiparticle system as a function of angular momentum.
The switch required swapping the Mandelstam variable $s$, which is the square of the energy, for $t$, which is the squared momentum transfer, which for elastic soft collisions of identical particles is $s$ times one minus the cosine of the scattering angle. The relation in the crossed channel becomes
$$A(s, t) \propto s^{\,l(t)} \quad \text{as } s \to \infty,$$
which says that the amplitude has a different power law falloff as a function of energy at different corresponding angles, where corresponding angles are those with the same value of $t$. It predicts that the function that determines the power law is the same function that interpolates the energies where the resonances appear. The range of angles where scattering can be productively described by Regge theory shrinks into a narrow cone around the beam-line at large energies.
In 1960 Geoffrey Chew and Steven Frautschi conjectured from limited data that the strongly interacting particles had a very simple dependence of the squared mass on the angular momentum: the particles fall into families where the Regge trajectory functions are straight lines, $l(s) = \alpha(0) + \alpha' s$, with the same constant slope $\alpha'$ for all the trajectories. The straight-line Regge trajectories were later understood as arising from massless endpoints on rotating relativistic strings. Since a Regge description implied that the particles were bound states, Chew and Frautschi concluded that none of the strongly interacting particles were elementary.
Experimentally, the near-beam behavior of scattering did fall off with angle as explained by Regge theory, leading many to accept that the particles in the strong interactions were composite. Much of the scattering was diffractive, meaning that the particles hardly scatter at all — staying close to the beam line after the collision. Vladimir Gribov noted that the Froissart bound combined with the assumption of maximum possible scattering implied there was a Regge trajectory that would lead to logarithmically rising cross sections, a trajectory nowadays known as the pomeron. He went on to formulate a quantitative perturbation theory for near beam line scattering dominated by multi-pomeron exchange.
From the fundamental observation that hadrons are composite, there grew two points of view. Some correctly advocated that there were elementary particles, nowadays called quarks and gluons, which made a quantum field theory in which the hadrons were bound states. Others also correctly believed that it was possible to formulate a theory without elementary particles — where all the particles were bound states lying on Regge trajectories and scatter self-consistently. This was called S-matrix theory.
The most successful S-matrix approach centered on the narrow-resonance approximation, the idea that there is a consistent expansion starting from stable particles on straight-line Regge trajectories. After many false starts, Richard Dolen, David Horn, and Christoph Schmid understood a crucial property that led Gabriele Veneziano to formulate a self-consistent scattering amplitude, the first string theory. Mandelstam noted that the limit where the Regge trajectories are straight is also the limit where the lifetime of the states is long.
As a fundamental theory of strong interactions at high energies, Regge theory enjoyed a period of interest in the 1960s, but it was largely succeeded by quantum chromodynamics. As a phenomenological theory, it is still an indispensable tool for understanding near-beam line scattering and scattering at very large energies. Modern research focuses both on the connection to perturbation theory and to string theory.
See also
Quark–gluon plasma
Quasinormal mode
Pomeron
Cornell potential
Dual resonance model
References
Further reading
External links
Quantum chromodynamics
Scattering theory | Regge theory | [
"Chemistry"
] | 1,443 | [
"Scattering",
"Scattering theory"
] |
1,205,435 | https://en.wikipedia.org/wiki/Jet%20fuel | Jet fuel or aviation turbine fuel (ATF, also abbreviated avtur) is a type of aviation fuel designed for use in aircraft powered by gas-turbine engines. It is colorless to straw-colored in appearance. The most commonly used fuels for commercial aviation are Jet A and Jet A-1, which are produced to a standardized international specification. The only other jet fuel commonly used in civilian turbine-engine powered aviation is Jet B, which is used for its enhanced cold-weather performance.
Jet fuel is a mixture of a variety of hydrocarbons. Because the exact composition of jet fuel varies widely based on petroleum source, it is impossible to define jet fuel as a ratio of specific hydrocarbons. Jet fuel is therefore defined as a performance specification rather than a chemical compound. Furthermore, the range of molecular mass between hydrocarbons (or different carbon numbers) is defined by the requirements for the product, such as the freezing point or smoke point. Kerosene-type jet fuel (including Jet A and Jet A-1, JP-5, and JP-8) has a carbon number distribution between about 8 and 16 (carbon atoms per molecule); wide-cut or naphtha-type jet fuel (including Jet B and JP-4), between about 5 and 15.
History
Fuel for piston-engine powered aircraft (usually a high-octane gasoline known as avgas) has a high volatility to improve its carburetion characteristics and high autoignition temperature to prevent preignition in high compression aircraft engines. Turbine engines (as with diesel engines) can operate with a wide range of fuels because fuel is injected into the hot combustion chamber. Jet and gas turbine (turboprop, helicopter) aircraft engines typically use lower cost fuels with higher flash points, which are less flammable and therefore safer to transport and handle.
The first axial compressor jet engine in widespread production and combat service, the Junkers Jumo 004 used on the Messerschmitt Me 262A fighter and the Arado Ar 234B jet recon-bomber, burned either a special synthetic "J2" fuel or diesel fuel. Gasoline was a third option but unattractive due to high fuel consumption. Other fuels used were kerosene or kerosene and gasoline mixtures.
Pressure to move from conventional jet fuel to sustainable aviation fuel (also known as aviation biofuel) has existed since before the 2016 Paris Agreement.
Standards
Most jet fuels in use since the end of World War II are kerosene-based. Both British and American standards for jet fuels were first established at the end of World War II. British standards derived from standards for kerosene use for lamps—known as paraffin in the UK—whereas American standards derived from aviation gasoline practices. Over the subsequent years, details of specifications were adjusted, such as minimum freezing point, to balance performance requirements and availability of fuels. Very low temperature freezing points reduce the availability of fuel. Higher flash point products required for use on aircraft carriers are more expensive to produce. In the United States, ASTM International produces standards for civilian fuel types, and the U.S. Department of Defense produces standards for military use. The British Ministry of Defence establishes standards for both civil and military jet fuels. For reasons of interoperability, British and United States military standards are harmonized to a degree. In Russia and the CIS members, grades of jet fuels are covered by the State Standard (GOST) number, or a Technical Condition number, with the principal grade available being TS-1.
Types
Jet A/A-1
Jet A specification fuel has been used in the United States since the 1950s and is usually not available outside the United States and a few Canadian airports such as Toronto, Montreal, and Vancouver, whereas Jet A-1 is the standard specification fuel used in most of the rest of the world, the main exceptions being Russia and the CIS members, where the TS-1 fuel type is the most common standard. Both Jet A and Jet A-1 have a flash point higher than 38 °C (100 °F), with an autoignition temperature of 210 °C (410 °F).
Differences between Jet A and Jet A-1
The differences between Jet A and Jet A-1 are twofold. The primary difference is the lower freezing point of Jet A-1 fuel:
Jet A's is −40 °C (−40 °F)
Jet A-1's is −47 °C (−53 °F)
The other difference is the mandatory addition of an antistatic additive to Jet A-1 fuel.
Jet A and Jet A-1 fuel trucks and storage tanks, as well as plumbing that carries them, are all marked "Jet A" or "Jet A-1" in white italicized text within a black rectangle background, adjacent to one or two diagonal black stripes.
Typical physical properties for Jet A and Jet A-1
Jet A-1 fuel must meet:
DEF STAN 91-91 (Jet A-1),
ASTM specification D1655 (Jet A-1), and
IATA Guidance Material (Kerosene Type), NATO Code F-35.
Jet A fuel must reach ASTM specification D1655 (Jet A).
Jet B
Jet B is a naphtha-kerosene fuel that is used for its enhanced cold-weather performance. However, Jet B's lighter composition makes it more dangerous to handle. For this reason, it is rarely used, except in very cold climates. A blend of approximately 30% kerosene and 70% gasoline, it is known as wide-cut fuel. It has a very low freezing point and a low flash point as well. It is primarily used in northern Canada and Alaska, where the extreme cold makes its low freezing point necessary and helps mitigate the danger of its lower flash point.
GOST standards
The GOST standard 10227 specifies civilian fuels, among which TS-1, T-1, T-1S, T2 and RT. Military fuels such as T-1pp, T-8V (aka T-8B) and T-6 are specified by GOST 12308. Icing inhibitors are specified by GOST 8313. Some researchers refer to T-6 as "ram rocket fuel"; others have patented a method used to produce T-1pp from a mixture of T-6 and RT, the latter of which has been characterized as "unified Russian fuel for sub- and supersonic aircraft".
TS-1
TS-1 is a jet fuel made to Russian standard GOST 10227 for enhanced cold-weather performance. It has somewhat higher volatility than Jet A-1 (a lower minimum flash point). It has a very low freezing point.
Additives
The DEF STAN 91-091 (UK) and ASTM D1655 (international) specifications allow for certain additives to be added to jet fuel, including:
Antioxidants to prevent gumming, usually based on alkylated phenols, e.g., AO-30, AO-31, or AO-37;
Antistatic agents, to dissipate static electricity and prevent sparking; Stadis 450, with dinonylnaphthylsulfonic acid (DINNSA) as a component, is an example
Corrosion inhibitors, e.g., DCI-4A used for civilian and military fuels, and DCI-6A used for military fuels;
Fuel system icing inhibitor (FSII) agents, e.g., 2-(2-Methoxyethoxy)ethanol (Di-EGME); FSII is often mixed at the point-of-sale so that users with heated fuel lines do not have to pay the extra expense.
Biocides are to remediate microbial (i.e., bacterial and fungal) growth present in aircraft fuel systems. Two biocides were previously approved for use by most aircraft and turbine engine original equipment manufacturers (OEMs); Kathon FP1.5 Microbiocide and Biobor JF. Biobor JF is currently the only biocide available for aviation use. Kathon was discontinued by the manufacturer due to several airworthiness incidents. Kathon is now banned from use in aviation fuel.
Metal deactivator can be added to reduce the negative effects of trace metals on the thermal stability of the fuel. The one allowable additive is the chelating agent salpn (N,N′-bis(salicylidene)-1,2-propanediamine).
As the aviation industry's jet kerosene demands have increased to more than 5% of all refined products derived from crude, it has been necessary for the refiner to optimize the yield of jet kerosene, a high-value product, by varying process techniques. New processes have allowed flexibility in the choice of crudes, the use of coal tar sands as a source of molecules and the manufacture of synthetic blend stocks. Due to the number and severity of the processes used, it is often necessary and sometimes mandatory to use additives. These additives may, for example, prevent the formation of harmful chemical species or improve a property of a fuel to prevent further engine wear.
Water in jet fuel
It is very important that jet fuel be free from water contamination. During flight, the temperature of the fuel in the tanks decreases, due to the low temperatures in the upper atmosphere. This causes precipitation of the dissolved water from the fuel. The separated water then drops to the bottom of the tank, because it is denser than the fuel. Since the water is no longer in solution, it can form droplets which can supercool to below 0 °C (32 °F). If these supercooled droplets collide with a surface they can freeze and may result in blocked fuel inlet pipes. This was the cause of the British Airways Flight 38 accident. Removing all water from fuel is impractical; therefore, fuel heaters are usually used on commercial aircraft to prevent water in fuel from freezing.
There are several methods for detecting water in jet fuel. A visual check may detect high concentrations of suspended water, as this will cause the fuel to become hazy in appearance. An industry standard chemical test for the detection of free water in jet fuel uses a water-sensitive filter pad that turns green if the fuel exceeds the specification limit of 30 ppm (parts per million) free water. A critical test to rate the ability of jet fuel to release emulsified water when passed through coalescing filters is ASTM standard D3948 Standard Test Method for Determining Water Separation Characteristics of Aviation Turbine Fuels by Portable Separometer.
Military jet fuels
Military organizations around the world use a different classification system of JP (for "Jet Propellant") numbers. Some are almost identical to their civilian counterparts and differ only by the amounts of a few additives; Jet A-1 is similar to JP-8, Jet B is similar to JP-4. Other military fuels are highly specialized products and are developed for very specific applications.
JP-1
was an early jet fuel specified in 1944 by the United States government (AN-F-32). It was a pure kerosene fuel with high flash point (relative to aviation gasoline) and a freezing point of . The low freezing point requirement limited availability of the fuel and it was soon superseded by other "wide cut" jet fuels which were kerosene-naphtha or kerosene-gasoline blends. It was also known as avtur.
JP-2
an obsolete type developed during World War II. JP-2 was intended to be easier to produce than JP-1 since it had a higher freezing point, but was never widely used.
JP-3
was an attempt to improve availability of the fuel compared to JP-1 by widening the cut and loosening tolerances on impurities to ensure ready supply. In his book Ignition! An Informal History of Liquid Rocket Propellants, John D. Clark described the specification as, "remarkably liberal, with a wide cut (range of distillation temperatures) and with such permissive limits on olefins and aromatics that any refinery above the level of a Kentucky moonshiner's pot still could convert at least half of any crude to jet fuel". It was even more volatile than JP-2 and had high evaporation loss in service.
JP-4
was a 50-50 kerosene-gasoline blend. It had lower flash point than JP-1, but was preferred because of its greater availability. It was the primary United States Air Force jet fuel between 1951 and 1995. Its NATO code is F-40. It is also known as avtag.
JP-5
is a yellow kerosene-based jet fuel developed in 1952 for use in aircraft stationed aboard aircraft carriers, where the risk from fire is particularly great. JP-5 is a complex mixture of hydrocarbons, containing alkanes, naphthenes, and aromatic hydrocarbons that weighs and has a high flash point (min. ). Because some US naval air stations, Marine Corps air stations and Coast Guard air stations host both sea and land based naval aircraft, these installations will also typically fuel their shore-based aircraft with JP-5, thus precluding the need to maintain separate fuel facilities for JP-5 and non-JP-5 fuel. China has similarly designated its navy fuel RP-5. Its freezing point is . It does not contain antistatic agents. JP-5 is also known as NCI-C54784. JP-5's NATO code is F-44. It is also called AVCAT fuel for Aviation Carrier Turbine fuel.
The JP-4 and JP-5 fuels, covered by the MIL-DTL-5624 and meeting the British Specification DEF STAN 91-86 AVCAT/FSII (formerly DERD 2452), are intended for use in aircraft turbine engines. These fuels require unique additives that are necessary for military aircraft and engine fuel systems.
JP-6
was developed for the General Electric YJ93 afterburning turbojet engines used in the North American XB-70 Valkyrie for sustained flight at Mach 3. It was similar to JP-5 but with a lower freezing point and improved thermal oxidative stability. When the XB-70 program was cancelled, the JP-6 specification, MIL-J-25656, was also cancelled.
JP-7
was developed for the Pratt & Whitney J58 afterburning turbojet engines used in the Lockheed SR-71 Blackbird for sustained flight at Mach 3+. It had a high flash point required to prevent boiloff caused by aerodynamic heating. Its thermal stability was high enough to prevent coke and varnish deposits when used as a heat-sink for aircraft air conditioning and hydraulic systems and engine accessories.
JP-8
is a jet fuel, specified and used widely by the U.S. military. It is specified by MIL-DTL-83133 and British Defence Standard 91-87. JP-8 is a kerosene-based fuel, projected to remain in use at least until 2025. The United States military uses JP-8 as a "universal fuel" in both turbine-powered aircraft and diesel-powered ground vehicles. It was first introduced at NATO bases in 1978. Its NATO code is F-34.
JP-9
is a gas turbine fuel for missiles, specifically the Tomahawk cruise missile, containing the TH-dimer (tetrahydrodimethyldicyclopentadiene) produced by catalytic hydrogenation of methylpentadiene dimer.
JP-10
is a gas turbine fuel for missiles, specifically the AGM-86 ALCM cruise missile. It contains a mixture of (in decreasing order) endo-tetrahydrodicyclopentadiene, exo-tetrahydrodicyclopentadiene (a synthetic fuel), and adamantane. It is produced by catalytic hydrogenation of dicyclopentadiene. It superseded JP-9 fuel, achieving a lower low-temperature service limit of . It is also used by the Tomahawk jet-powered subsonic cruise missile.
JPTS
was a combination of LF-1 charcoal lighter fluid and an additive to improve thermal oxidative stability; the fuel was officially designated "Thermally Stable Jet Fuel". It was developed in 1956 for the Pratt & Whitney J57 engine which powered the Lockheed U-2 spy plane.
Zip fuel
designates a series of experimental boron-containing "high energy fuels" intended for long range aircraft. The toxicity and undesirable residues of the fuel made it difficult to use. The development of the ballistic missile removed the principal application of zip fuel.
Syntroleum
has been working with the USAF to develop a synthetic jet fuel blend that will help them reduce their dependence on imported petroleum. The USAF, which is the United States military's largest user of fuel, began exploring alternative fuel sources in 1999. On December 15, 2006, a B-52 took off from Edwards Air Force Base for the first time powered solely by a 50–50 blend of JP-8 and Syntroleum's FT fuel. The seven-hour flight test was considered a success. The goal of the flight test program was to qualify the fuel blend for fleet use on the service's B-52s, and then flight test and qualification on other aircraft.
Piston engine use
Jet fuel is very similar to diesel fuel, and in some cases, may be used in diesel engines. The possibility of environmental legislation banning the use of leaded avgas (the fuel used in spark-ignition piston engines, which usually contains tetraethyllead (TEL), a toxic additive used to prevent engine knocking), and the lack of a replacement fuel with similar performance, has left aircraft designers and pilots' organizations searching for alternative engines for use in small aircraft. As a result, a few aircraft engine manufacturers, most notably Thielert and Austro Engine, have begun offering aircraft diesel engines which run on jet fuel, which may simplify airport logistics by reducing the number of fuel types required. Jet fuel is available in most places in the world, whereas avgas is only widely available in a few countries which have a large number of general aviation aircraft. A diesel engine may be more fuel-efficient than an avgas engine. However, very few diesel aircraft engines have been certified by aviation authorities. Diesel aircraft engines are uncommon today, even though opposed-piston aviation diesel powerplants such as the Junkers Jumo 205 family had been used during the Second World War.
Jet fuel is often used in diesel-powered ground-support vehicles at airports. However, jet fuel tends to have poor lubricating ability in comparison to diesel, which increases wear in fuel injection equipment. An additive may be required to restore its lubricity. Jet fuel is more expensive than diesel fuel but the logistical advantages of using one fuel can offset the extra expense of its use in certain circumstances.
Jet fuel contains more sulfur, up to 1,000 ppm, which gives it better lubricity, so it does not currently require the lubricity additive that all pipeline diesel fuels require. The introduction of Ultra Low Sulfur Diesel or ULSD brought with it the need for lubricity modifiers. Pipeline diesels before ULSD were able to contain up to 500 ppm of sulfur and were called Low Sulfur Diesel or LSD. In the United States LSD is now only available to the off-road construction, locomotive and marine markets. As more EPA regulations are introduced, more refineries are hydrotreating their jet fuel production, thus limiting the lubricating abilities of jet fuel, as determined by ASTM Standard D445.
JP-8, which is similar to Jet A-1, is used in NATO diesel vehicles as part of the single-fuel policy.
Synthetic jet fuel
Fischer–Tropsch (FT) Synthesized Paraffinic Kerosene (SPK) synthetic fuels are certified for use in United States and international aviation fleets at up to 50% in a blend with conventional jet fuel. As of the end of 2017, four other pathways to SPK are certified, with their designations and maximum blend percentage in brackets: Hydroprocessed Esters and Fatty Acids (HEFA SPK, 50%); synthesized iso-paraffins from hydroprocessed fermented sugars (SIP, 10%); synthesized paraffinic kerosene plus aromatics (SPK/A, 50%); alcohol-to-jet SPK (ATJ-SPK, 30%). Both FT and HEFA based SPKs blended with JP-8 are specified in MIL-DTL-83133H.
Some synthetic jet fuels show a reduction in pollutants such as SOx, NOx, particulate matter, and sometimes carbon emissions. It is envisaged that usage of synthetic jet fuels will increase air quality around airports which will be particularly advantageous at inner city airports.
Qatar Airways became the first airline to operate a commercial flight on a 50:50 blend of synthetic Gas to Liquid (GTL) jet fuel and conventional jet fuel. The natural gas derived synthetic kerosene for the six-hour flight from London to Doha came from Shell's GTL plant in Bintulu, Malaysia. The world's first passenger aircraft flight to use only synthetic jet fuel was from Lanseria International Airport to Cape Town International Airport on September 22, 2010. The fuel was developed by Sasol.
Chemist Heather Willauer is leading a team of researchers at the U.S. Naval Research Laboratory who are developing a process to make jet fuel from seawater. The technology requires an input of electrical energy to separate carbon dioxide (CO2) and hydrogen (H2) gas from seawater using an iron-based catalyst, followed by an oligomerization step wherein carbon monoxide (CO) and hydrogen are recombined into long-chain hydrocarbons, using zeolite as the catalyst. The technology is expected to be deployed in the 2020s by U.S. Navy warships, especially nuclear-powered aircraft carriers.
On February 8, 2021, the world's first scheduled passenger flight flew with some synthetic kerosene from a non-fossil fuel source. 500 liters of synthetic kerosene was mixed with regular jet fuel. Synthetic kerosene was produced by Shell and the flight was operated by KLM.
USAF synthetic fuel trials
On August 8, 2007, Air Force Secretary Michael Wynne certified the B-52H as fully approved to use the FT blend, marking the formal conclusion of the test program.
This program is part of the Department of Defense Assured Fuel Initiative, an effort to develop secure domestic sources for the military energy needs. The Pentagon hopes to reduce its use of crude oil from foreign producers and obtain about half of its aviation fuel from alternative sources by 2016. With the B-52 now approved to use the FT blend, the USAF will use the test protocols developed during the program to certify the Boeing C-17 Globemaster III and then the Rockwell B-1B Lancer to use the fuel. To test these two aircraft, the USAF has ordered of FT fuel. The USAF intends to test and certify every airframe in its inventory to use the fuel by 2011. They will also supply over to NASA for testing in various aircraft and engines.
The USAF has certified the B-1B, B-52H, C-17, Lockheed Martin C-130J Super Hercules, McDonnell Douglas F-4 Phantom (as QF-4 target drones), McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor, and Northrop T-38 Talon to use the synthetic fuel blend.
The U.S. Air Force's C-17 Globemaster III, F-16 and F-15 are certified for use of hydrotreated renewable jet fuels. The USAF plans to certify over 40 models for fuels derived from waste oils and plants by 2013. The U.S. Army is considered one of the few customers of biofuels large enough to potentially bring biofuels up to the volume production needed to reduce costs. The U.S. Navy has also flown a Boeing F/A-18E/F Super Hornet dubbed the "Green Hornet" at 1.7 times the speed of sound using a biofuel blend. The Defense Advanced Research Projects Agency (DARPA) funded a $6.7 million project with Honeywell UOP to develop technologies to create jet fuels from biofeedstocks for use by the United States and NATO militaries.
In April 2011, four USAF F-15E Strike Eagles flew over the Philadelphia Phillies opening ceremony using a blend of traditional jet fuel and synthetic biofuels. This flyover made history as it was the first flyover to use biofuels in the Department of Defense.
Jet biofuels
The air transport industry is responsible for 2–3 percent of man-made carbon dioxide emitted. Boeing estimates that biofuels could reduce flight-related greenhouse-gas emissions by 60 to 80 percent. One possible solution which has received more media coverage than others would be blending synthetic fuel derived from algae with existing jet fuel:
Green Flight International became the first airline to fly jet aircraft on 100% biofuel. The flight from Reno Stead Airport in Stead, Nevada was in an Aero L-29 Delfín piloted by Carol Sugars and Douglas Rodante.
Boeing and Air New Zealand are collaborating with Tecbio Aquaflow Bionomic and other jet biofuel developers around the world.
Virgin Atlantic successfully tested a biofuel blend consisting of 20 percent babassu nuts and coconut and 80 percent conventional jet fuel, which was fed to a single engine on a 747 flight from London Heathrow to Amsterdam Schiphol.
A consortium consisting of Boeing, NASA's Glenn Research Center, MTU Aero Engines (Germany), and the U.S. Air Force Research Laboratory is working on development of jet fuel blends containing a substantial percentage of biofuel.
British Airways and Velocys have entered into a partnership in the UK to design a series of plants that convert household waste into jet fuel.
24 commercial and military biofuel flights have taken place using Honeywell “Green Jet Fuel,” including a Navy F/A-18 Hornet.
In 2011, United Continental Holdings was the first United States airline to fly passengers on a commercial flight using a blend of sustainable, advanced biofuels and traditional petroleum-derived jet fuel. Solazyme developed the algae oil, which was refined utilizing Honeywell's UOP process technology, into jet fuel to power the commercial flight.
Solazyme produced the world's first 100 percent algae-derived jet fuel, Solajet, for both commercial and military applications.
Oil prices increased about fivefold from 2003 to 2008, raising fears that world petroleum production is becoming unable to keep up with demand. The fact that there are few alternatives to petroleum for aviation fuel adds urgency to the search for alternatives. Twenty-five airlines were bankrupted or stopped operations in the first six months of 2008, largely due to fuel costs.
In 2015 ASTM approved a modification to Specification D1655 Standard Specification for Aviation Turbine Fuels to permit up to 50 ppm (50 mg/kg) of FAME (fatty acid methyl ester) in jet fuel to allow higher cross-contamination from biofuel production.
Worldwide consumption of jet fuel
Worldwide demand of jet fuel has been steadily increasing since 1980. Consumption more than tripled in 30 years from 1,837,000 barrels/day in 1980, to 5,220,000 in 2010. Around 30% of the worldwide consumption of jet fuel is in the US (1,398,130 barrels/day in 2012).
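The figures above imply an average growth rate that can be checked with a one-line compound-growth calculation. This is a sketch using only the two data points quoted, not an independent estimate.

```python
# Average annual growth rate implied by the 1980 and 2010 consumption figures above.
consumption_1980 = 1_837_000  # barrels/day
consumption_2010 = 5_220_000  # barrels/day
years = 2010 - 1980

cagr = (consumption_2010 / consumption_1980) ** (1 / years) - 1
print(f"Implied average annual growth: {cagr:.1%}")  # about 3.5% per year
```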
Taxation
Article 24 of the Chicago Convention on International Civil Aviation of 7 December 1944 stipulates that when flying from one contracting state to another, the kerosene that is already on board aircraft may not be taxed by the state where the aircraft lands, nor by a state through whose airspace the aircraft has flown. This is to prevent double taxation. It is sometimes suggested that the Chicago Convention precludes the taxation of aviation fuel. However, this is not correct. The Chicago Convention does not preclude a kerosene tax on domestic flights or on refuelling before international flights.
Article 15 of the Chicago Convention is also sometimes said to ban fuel taxes. Article 15 states: "No fees, dues or other charges shall be imposed by any contracting State in respect solely of the right of transit over or entry into or exit from its territory of any aircraft of a contracting State or persons or property thereon." However, ICAO distinguishes between charges and taxes, and Article 15 does not prohibit the levying of taxes that are not tied to a service provided.
In the European Union, commercial aviation fuel is exempt from taxation, according to the 2003 Energy Taxation Directive. EU member states may tax jet fuel via bilateral agreements; however, no such agreements exist.
In the United States, most states tax jet fuel.
Health effects
General health hazards associated with exposure to jet fuel vary according to its components, exposure duration (acute vs. long-term), route of administration (dermal vs. respiratory vs. oral), and exposure phase (vapor vs. aerosol vs. raw fuel). Kerosene-based hydrocarbon fuels are complex mixtures which may contain up to 260+ aliphatic and aromatic hydrocarbon compounds, including toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, and naphthalenes. While time-weighted average hydrocarbon fuel exposures can often be below recommended exposure limits, peak exposures can occur, and the health impact of occupational exposures is not fully understood. Evidence of the health effects of jet fuels comes from reports on both temporary and persisting biological effects from acute, subchronic, or chronic exposure of humans or animals to kerosene-based hydrocarbon fuels, or the constituent chemicals of these fuels, or to fuel combustion products. The effects studied include: cancer, skin conditions, respiratory disorders, immune and hematological disorders, neurological effects, visual and hearing disorders, renal and hepatic diseases, cardiovascular conditions, gastrointestinal disorders, genotoxic and metabolic effects.
See also
Index of aviation articles
Notes
References
Further reading
External links
History of Jet Fuel
MIL-DTL-5624U
MIL-DTL-83133H
Aviation Fuel Properties 1983
Aviation fuels
Liquid fuels
Petroleum products
Occupational safety and health | Jet fuel | [
"Chemistry",
"Engineering"
] | 6,261 | [
"Aviation fuels",
"Petroleum",
"Petroleum products",
"Aerospace engineering"
] |
1,205,681 | https://en.wikipedia.org/wiki/Contact%20process | The contact process is a method of producing sulfuric acid in the high concentrations needed for industrial processes. Platinum was originally used as the catalyst for this reaction; however, because it is susceptible to reacting with arsenic impurities in the sulfur feedstock, vanadium(V) oxide (V2O5) has since been preferred.
History
This process was patented in 1831 by British vinegar merchant Peregrine Phillips. In addition to being a far more economical process for producing concentrated sulfuric acid than the previous lead chamber process, the contact process also produces sulfur trioxide and oleum.
In 1901 Eugen de Haën patented the basic process involving combining sulfur dioxide and oxygen in the presence of vanadium oxides, producing sulfur trioxide which was easily absorbed into water, producing sulfuric acid. This process was improved remarkably by shrinking the particle size of the catalyst (e.g. ≤ 5000 microns), a process discovered by two chemists of BASF in 1914.
Process
The process can be divided into four stages:
Combining sulfur and oxygen (O2) to form sulfur dioxide, then purifying the sulfur dioxide in a purification unit
Adding an excess of oxygen to sulfur dioxide in the presence of the catalyst vanadium pentoxide at 450 °C and 1-2 atm
The sulfur trioxide formed is added to sulfuric acid which gives rise to oleum (disulfuric acid)
The oleum is then added to water to form sulfuric acid which is very concentrated. Since this process is an exothermic reaction, the reaction temperature should be as low as possible.
Purification of the air and sulfur dioxide (SO2) is necessary to avoid catalyst poisoning (i.e. removing catalytic activities). The gas is then washed with water and dried with sulfuric acid.
To conserve energy, the mixture is heated by exhaust gases from the catalytic converter by heat exchangers.
Sulfur dioxide and dioxygen then react as follows:
2 SO2(g) + O2(g) ⇌ 2 SO3(g) : ΔH = -197 kJ·mol−1
According to Le Chatelier's principle, a lower temperature should be used to shift the chemical equilibrium towards the right, hence increasing the percentage yield. However, too low a temperature will lower the formation rate to an uneconomical level. Hence, to increase the reaction rate, high temperatures (450 °C), medium pressures (1–2 atm), and vanadium(V) oxide (V2O5) are used to ensure an adequate (>95%) conversion; a rough numerical illustration of this temperature dependence is given after the mechanism below. The catalyst only serves to increase the rate of reaction, as it does not change the position of the thermodynamic equilibrium. The mechanism for the action of the catalyst comprises two steps:
Oxidation of SO2 into SO3 by V5+:
2SO2 + 4V5+ + 2O2− → 2SO3 + 4V4+
Oxidation of V4+ back into V5+ by dioxygen (catalyst regeneration):
4V4+ + O2 → 4V5+ + 2O2−
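The temperature dependence noted above can be illustrated with the van 't Hoff equation, using the ΔH = −197 kJ·mol−1 quoted for the reaction as written. The two temperatures compared below are illustrative choices, and the calculation assumes ΔH is roughly constant over that range.

```python
import math

# van 't Hoff estimate of how the equilibrium constant of
# 2 SO2 + O2 <=> 2 SO3 changes with temperature.
# ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), assuming dH is constant.

R = 8.314        # J/(mol*K)
dH = -197_000    # J/mol for the reaction as written (value from the text)

def k_ratio(T1: float, T2: float) -> float:
    """Return K(T1)/K(T2) for two absolute temperatures in kelvin."""
    return math.exp(-(dH / R) * (1 / T1 - 1 / T2))

# Compare a cooler and a hotter operating point (illustrative values).
print(f"K(700 K) / K(900 K) = {k_ratio(700, 900):.0f}")
# roughly 1800: the equilibrium lies much further to the right at the lower
# temperature, but the rate would be too slow, hence the ~450 °C compromise.
```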
Hot sulfur trioxide passes through the heat exchanger and is dissolved in concentrated H2SO4 in the absorption tower to form oleum.
H2SO4 + SO3 → H2S2O7
Note that directly dissolving SO3 in water is impractical due to the highly exothermic nature of the reaction. Acidic vapor or mists are formed instead of a liquid.
Oleum is reacted with water to form concentrated H2SO4.
H2S2O7 + H2O → 2 H2SO4
Purification unit
This includes the dusting tower, cooling pipes, scrubbers, drying tower, arsenic purifier and testing box. Sulfur dioxide has many impurities such as vapours, dust particles and arsenous oxide. Therefore, it must be purified to avoid catalyst poisoning (i.e.: destroying catalytic activity and loss of efficiency). In this process, the gas is washed with water, and dried by sulfuric acid. In the dusting tower, the sulfur dioxide is exposed to a steam which removes the dust particles. After the gas is cooled, the sulfur dioxide enters the washing tower where it is sprayed by water to remove any soluble impurities. In the drying tower, sulfuric acid is sprayed on the gas to remove the moisture from it. Finally, the arsenic oxide is removed when the gas is exposed to ferric hydroxide.
Double contact double absorption
The next step to the contact process is double contact double absorption (DCDA). In this process the product gases (SO2) and (SO3) are passed through absorption towers twice to achieve further absorption and conversion of SO2 to SO3 and production of higher grade sulfuric acid.
SO2-rich gases enter the catalytic converter, usually a tower with multiple catalyst beds, and are converted to SO3, achieving the first stage of conversion. The exit gases from this stage contain both SO2 and SO3, which are passed through intermediate absorption towers where sulfuric acid is trickled down packed columns and SO3 reacts with water, increasing the sulfuric acid concentration. Although SO2 also passes through the tower, it is unreactive and exits the absorption tower.
This stream of gas containing SO2 is, after the necessary cooling, passed through the catalytic converter bed column again, achieving up to 99.8% conversion of SO2 to SO3, and the gases are again passed through the final absorption column, thus achieving not only a high conversion efficiency for SO2 but also the production of a higher concentration of sulfuric acid.
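The overall conversion quoted for the double contact double absorption process can be illustrated with simple arithmetic on two catalytic passes. The per-pass conversions used below are purely illustrative assumptions, not figures from the text.

```python
# Illustration of how two catalytic passes with intermediate absorption
# can reach the ~99.8% overall SO2 conversion quoted for DCDA.
# The per-pass conversions are assumed values for illustration only.

first_pass = 0.95   # fraction of SO2 converted on the first pass (assumed)
second_pass = 0.96  # fraction of the *remaining* SO2 converted on the second pass (assumed)

overall = 1 - (1 - first_pass) * (1 - second_pass)
print(f"Overall conversion: {overall:.1%}")  # 99.8%
```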
The industrial production of sulfuric acid involves proper control of temperatures and flow rates of the gases as both the conversion efficiency and absorption are dependent on these.
Notes
References
The Repertory of Patent Inventions, no. 72 (April 1831), page 248.
(Anon.) (1832) "English patents: Specification of the patent granted to Peregrine Phillips, Jr. of Bristol, in the county of Somersetshire, Vinegar Maker, for an improvement in manufacturing Sulphuric Acid. Dated March 21, 1831." Journal of the Franklin Institute, new series, vol. 9, pages 180-182.
Ernest Cook (March 20, 1926) "Peregrine Phillips, the inventor of the contact process for sulphuric acid," Nature, 117 (2942) : 419–421.
Lunge, Theoretical and Practical Treatise on the Manufacture of Sulphuric Acid and Alkali, with the Collateral Branches, 3rd ed., vol. 1, part 2 (London, England: Gurney and Jackson, 1903), page 975
External links
Chemical processes
Vanadium
Sulfur
Catalysis | Contact process | [
"Chemistry"
] | 1,381 | [
"Catalysis",
"Chemical processes",
"nan",
"Chemical process engineering",
"Chemical kinetics"
] |
17,198,286 | https://en.wikipedia.org/wiki/Intramolecular%20Diels%E2%80%93Alder%20cycloaddition | In organic chemistry, an intramolecular Diels-Alder cycloaddition is a Diels–Alder reaction in which the diene and the dienophile are both part of the same molecule. The reaction leads to the formation of the cyclohexene-like structure as usual for a Diels–Alder reaction, but as part of a more complex fused or bridged cyclic ring system. This reaction can gives rise to various natural derivatives of decalin.
Reaction products
Because the two reacting groups are already attached, two basic modes of addition are possible in this reaction. Depending on whether the tether that links to the dienophile is attached to the end or the middle of the diene, fused or bridged polycyclic ring systems can be formed.
The tether that attaches the two reacting groups also affects the geometry of the reaction. As a result of its conformational and other structural restrictions, the exo vs. endo selectivity usually does not follow the simple effects seen in the intermolecular Diels–Alder reaction.
Use in total synthesis
Intramolecular Diels-Alder cycloaddition has been used in total synthesis. Through this reaction polycyclic compounds can be accessed with high stereoselectivity. The following potential drugs have been synthesized using the intramolecular Diels-Alder reaction: salvinorin A, himbacine, and solanapyrone A.
References
Name reactions
Cycloadditions | Intramolecular Diels–Alder cycloaddition | [
"Chemistry"
] | 312 | [
"Name reactions"
] |
17,198,697 | https://en.wikipedia.org/wiki/Multi-service%20business%20gateway | A multi-service business gateway (MSBG) is a device that combines multiple network voice and data communications functions into a single device. Targeted at small and medium enterprises (SMEs), the MSBG integrates critical functions such as routing, VoIP, and security (virtual private networking, firewall, intrusion detection/prevention) into a single fault-tolerant platform, with a common control & management plane oriented around services. An MSBG may also include functionality such as web/e-mail server and filtering, storage, and wireless networking.
Popularly identified in 2004, the MSBG product segment emerged to address the increasing need of advanced voice and data services among small and medium-sized businesses. The more limited financial and technical resources of SMEs restrict their ability to procure, implement, and manage the technologies available to large enterprises. By integrating critical network functions in a single device, the MSBG provides a solution that is more affordable and also simplifies deployment and management for SMEs. MSBGs can be managed by a service provider or other managed services company, which allows a business to implement network services without the need of its own information technology (IT) staff.
MSBGs provide a variety of solutions that can be used to support an SME's entire network. Use of a common architecture enables SMEs and service providers to expand the scale and services offered to meet the individual needs of the business. The openness of the MSBG also permits 3rd party applications or proprietary features to be added to the system.
References
Networking hardware | Multi-service business gateway | [
"Engineering"
] | 317 | [
"Computer networks engineering",
"Networking hardware"
] |
17,199,168 | https://en.wikipedia.org/wiki/Dry%20heat%20sterilization | Dry heat sterilization of an object is one of the earliest forms of sterilization practiced. It uses hot air that is either free from water vapor or has very little of it, where this moisture plays a minimal or no role in the process of sterilization.
Process
The dry heat sterilization process is accomplished by conduction; that is, heat is absorbed by the exterior surface of an item and then passed inward to the next layer. Eventually, the entire item reaches the proper temperature needed to achieve sterilization. The proper time and temperature for dry heat sterilization is 160 °C (320 °F) for 2 hours or 170 °C (340 °F) for 1 hour, and in the case of High Velocity Hot Air sterilizers, 190 °C (375 °F) for 6 to 12 minutes.
Items should be dry before sterilization since water will interfere with the process. Dry heat destroys microorganisms by causing denaturation of proteins.
The presence of moisture, such as in steam sterilization, significantly speeds up heat penetration.
There are two types of hot air convection sterilizers (convection refers to the circulation of heated air within the chamber of the oven):
Gravity convection
Mechanical convection
Mechanical convection process
A mechanical convection oven contains a blower that actively forces heated air throughout all areas of the chamber. The flow created by the blower ensures uniform temperatures and the equal transfer of heat throughout the load. For this reason, the mechanical convection oven is the more efficient of the two processes.
High Velocity Hot Air
An even more efficient system than convection uses deturbulized hot air forced through a jet curtain at 3,000 ft/minute.
Instruments used for dry heat sterilization
Instruments and techniques used for dry heat sterilization include hot air ovens, incinerators, flaming, radiation, and glass bead sterilizers.
Effect on microorganisms
Dry heat lyses the proteins in any organism, causes oxidative free radical damage, causes drying of cells, and can even burn them to ashes, as in incineration.
See also
Sterility assurance level
References
ISO 20857
Notes
General References
Sterilization (microbiology) | Dry heat sterilization | [
"Chemistry",
"Biology"
] | 447 | [
"Microbiology techniques",
"Sterilization (microbiology)"
] |
17,206,404 | https://en.wikipedia.org/wiki/Open%20web%20steel%20joist | In structural engineering, the open web steel joist (OWSJ) is a lightweight steel truss consisting, in the standard form, of parallel chords and a triangulated web system, proportioned to span between bearing points.
The main function of an OWSJ is to provide direct support for roof or floor deck and to transfer the load imposed on the deck to the structural frame i.e. beam and column.
In order to accurately design an OWSJ, engineers consider the joist span between bearing points, joist spacing, slope, live loads, dead loads, collateral loads, seismic loads, wind uplift, deflection criteria and maximum joist depth allowed. Many steel joist manufacturers supply economical load tables in order to allow designers to select the most efficient joist sizes for their projects.
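As a sketch of how the design inputs listed above combine, the uniform line load a single joist must carry can be estimated from the floor or roof area loads and the joist spacing, and then compared against a manufacturer's load table. All numbers below are illustrative assumptions, not values from any SJI table.

```python
# Illustrative estimate of the uniform line load on one open web steel joist.
# Assumption: each joist carries a tributary strip equal to the joist spacing.

def joist_line_load(dead_psf: float, live_psf: float, spacing_ft: float) -> float:
    """Return the uniform load on one joist in pounds per linear foot (plf)."""
    return (dead_psf + live_psf) * spacing_ft

if __name__ == "__main__":
    # Assumed area loads and spacing, for illustration only.
    total_plf = joist_line_load(dead_psf=20, live_psf=30, spacing_ft=5)
    print(f"Design line load: {total_plf:.0f} plf")  # 250 plf
    # A designer would then pick a joist whose load-table capacity, at the
    # required span, meets or exceeds this value and the deflection criteria.
```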
While OWSJs can be adapted to suit a wide variety of architectural applications, the greatest economy will be realized when utilizing standard details, which may vary from one joist manufacturer to another. Some other shapes, in addition to the parallel top and bottom chord, are single slope, double slope, arch, gable and scissor configurations. These shapes may not be available from all joist manufacturers, and are usually supplied at a premium cost that reflects the complexity required.
The manufacture of OWSJ in North America is overseen by the Steel Joist Institute (SJI). The SJI has worked since 1928 to maintain sound engineering practice throughout the industry. As a non-profit organization of active manufacturers, the Institute cooperates with governmental and business agencies to establish steel joist standards. Continuing research and updating are included in this work. Load tables and specifications are published by the SJI in five categories: K-Series, LH-Series, DLH-Series, CJ-Series, and Joist Girders. Load tables are available in both Allowable Stress Design (ASD) and Load and Resistance Factor Design (LRFD).
History
The first joist in 1923 was a Warren truss type, with top and bottom chords of round bars and a web formed from a single continuous bent bar. Various other types were developed, but problems also followed because each manufacturer had their own design and fabrication standards. Architects, engineers and builders found it difficult to compare rated capacities and to use fully the economies of steel joist construction.
Members of the industry began to organize the institute, and in 1928 the first standard specifications were adopted, followed in 1929 by the first load table. The joists covered by these early standards were later identified as open web steel joists, SJ-Series.
K-Series
Open Web Steel Joists, K-Series, were primarily developed to provide structural support for floors and roofs of buildings. They possess multiple advantages and features which have resulted in their wide use and acceptance throughout the United States and other countries.
K-Series Joists are standardized regarding depths, spans, and load-carrying capacities. There are 63 separate designations in the Load Tables, representing joist depths from through in increments and spans through . Standard K-Series Joists have a end bearing depth so that, regardless of the overall joist depths, the tops of the joists lie in the same plane. Seat depths deeper than can also be specified.
Standard K-Series Joists are designed for simple span uniform loading which results in a parabolic moment diagram for chord forces and a linearly sloped shear diagram for web forces. When non-uniform and/or concentrated loads are encountered the shear and moment diagrams required may be shaped quite differently and may not be covered by the shear and moment design envelopes of a standard K-Series Joist. When conditions such as this arise, a KCS joist may be a good option.
KCS Joists
KCS (K-Series Constant Shear) joists are designed in accordance with the Standard Specification for K-Series Joists.
KCS joist chords are designed for a flat positive moment envelope. The moment capacity is constant at all interior panels. All webs are designed for a vertical shear equal to the specified shear capacity and interior webs will be designed for 100% stress reversal.
LH- and DLH-Series
Longspan (LH) and Deep Longspan (DLH) Steel Joists are relatively light weight shop-manufactured steel trusses used in the direct support of floor or roof slabs or decks between walls, beams, and main structural members.
The LH- and DLH-Series have been designed for the purpose of extending the use of joists to spans and loads in excess of those covered by Open Web Steel Joists, K-Series.
LH-Series Joists have been standardized in depths from through , for spans through .
DLH-Series Joists have been standardized in depths from through , for spans up through .
Longspan and Deep Longspan Steel Joists can be furnished with either underslung or square ends, with parallel chords or with single or double pitched top chords to provide sufficient slope for roof drainage. Square end joists are primarily intended for bottom chord bearing.
The depth of the bearing seat at the ends of underslung LH- and DLH-Series Joists have been established at for chord section number 2 through 17. A bearing seat depth of has been established for the DLH Series chord section number 18 through 25.
CJ-Series
Open Web Composite Steel Joists, CJ-Series, were developed to provide structural support for floors and roofs which incorporate an overlying concrete slab while also allowing the steel joist and slab to act together as an integral unit after the concrete has adequately cured.
The CJ-Series Joists are capable of supporting larger floor or roof loadings due to the attachment of the concrete slab to the top chord of the composite joist. Shear connection between the concrete slab and steel joist is typically made by the welding of shear studs through the steel deck to the underlying CJ-Series Composite Steel Joist.
Joist Girders
Joist Girders are open web steel trusses used as primary framing members. They are designed as simple spans supporting equally spaced concentrated loads for a floor or roof system. These concentrated loads are considered to act at the panel points of the Joist Girders.
These members have been standardized for depths from , and spans to .
The standard depth at the bearing ends has been established at for all Joist Girders. Joist Girders are usually attached to the columns by bolting with two diameter A325 bolts.
See also
I-beam
Structural steel
References
Steel Joist Institute
Structural engineering
Structural steel | Open web steel joist | [
"Engineering"
] | 1,349 | [
"Structural engineering",
"Civil engineering",
"Structural steel",
"Construction"
] |
17,208,090 | https://en.wikipedia.org/wiki/Tert-Butyl%20isocyanide | tert-Butyl isocyanide is an organic compound with the formula Me3CNC (Me = methyl, CH3). It is an isocyanide, commonly called isonitrile or carbylamine, as defined by the functional group C≡N-R. tert-Butyl isocyanide, like most alkyl isocyanides, is a reactive colorless liquid with an extremely unpleasant odor. It forms stable complexes with transition metals and can insert into metal-carbon bonds.
tert-Butyl isocyanide is prepared by a Hofmann carbylamine reaction. In this conversion, a dichloromethane solution of tert-butylamine is treated with chloroform and aqueous sodium hydroxide in the presence of catalytic amount of the phase transfer catalyst benzyltriethylammonium chloride.
Me3CNH2 + CHCl3 + 3 NaOH → Me3CNC + 3 NaCl + 3 H2O
tert-Butyl isocyanide is isomeric with pivalonitrile, also known as tert-butyl cyanide. The difference, as with all carbylamine analogs of nitriles, is that the bond joining the CN functional group to the parent molecule is made on the nitrogen, not the carbon.
Coordination chemistry
By virtue of the lone electron pair on carbon, isocyanides serve as ligands in coordination chemistry, especially with metals in the 0, +1, and +2 oxidation states. tert-Butyl isocyanide has been shown to stabilize metals in unusual oxidation states, such as Pd(I).
Pd(dba)2 + PdCl2(C6H5CN)2 + 4 t-BuNC → [(t-BuNC)2PdCl]2 + 2 dba + 2 C6H5CN
tert-Butyl isocyanide can form hepta-coordinate homoleptic complexes, despite having a large t-Bu group, which is held far away from the metal center because of the linearity of the M-C≡N-C linkages.
tert-Butyl isocyanide forms complexes that are stoichiometrically analogous to certain binary metal carbonyl complexes, such as Fe2(CO)9 and Fe2(tBuNC)9.
Safety
tert-Butyl isocyanide is toxic. Its behavior is similar to that of its close electronic relative carbon monoxide.
References
Isocyanides
Ligands
Tert-butyl compounds
Foul-smelling chemicals | Tert-Butyl isocyanide | [
"Chemistry"
] | 547 | [
"Isocyanides",
"Ligands",
"Coordination chemistry",
"Functional groups"
] |
17,208,553 | https://en.wikipedia.org/wiki/Simply%20connected%20at%20infinity | In topology, a branch of mathematics, a topological space X is said to be simply connected at infinity if for any compact subset C of X, there is a compact set D in X containing C so that the induced map on fundamental groups
is the zero map. Intuitively, this is the property that loops far away from a small subspace of X can be collapsed, no matter how bad the small subspace is.
The Whitehead manifold is an example of a 3-manifold that is contractible but not simply connected at infinity. Since this property is invariant under homeomorphism, this proves that the Whitehead manifold is not homeomorphic to R3.
However, it is a theorem of John R. Stallings that for , a contractible n-manifold is homeomorphic to Rn precisely when it is simply connected at infinity.
References
Algebraic topology
Properties of topological spaces | Simply connected at infinity | [
"Mathematics"
] | 175 | [
"Properties of topological spaces",
"Algebraic topology",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology",
"Topology stubs"
] |
4,638,199 | https://en.wikipedia.org/wiki/Nuclear%20decommissioning | Nuclear decommissioning is the process leading to the irreversible complete or partial closure of a nuclear facility, usually a nuclear reactor, with the ultimate aim at termination of the operating licence. The process usually runs according to a decommissioning plan, including the whole or partial dismantling and decontamination of the facility, ideally resulting in restoration of the environment up to greenfield status. The decommissioning plan is fulfilled when the approved end state of the facility has been reached.
The process typically takes about 15 to 30 years, or many decades more when an interim safe storage period is applied for radioactive decay. Radioactive waste that remains after the decommissioning is either moved to an on-site storage facility where it is still under control of the owner, or moved to a dry cask storage or disposal facility at another location. The final disposal of nuclear waste from past and future decommissioning is a growing, still unsolved problem.
Decommissioning is an administrative and technical process. The facility is dismantled to the point that it no longer requires measures for radiation protection. It includes clean-up of radioactive materials. Once a facility is fully decommissioned, no radiological danger should persist. The license will be terminated and the site released from regulatory control. The plant licensee is then no longer responsible for the nuclear safety.
The costs of decommissioning are to be covered by funds that are provided for in a decommissioning plan, which is part of the facility's initial authorization. They may be saved in a decommissioning fund, such as a trust fund.
There are worldwide also hundreds of thousands small nuclear devices and facilities, for medical, industrial and research purposes, that will have to be decommissioned at some point.
Definition
Nuclear decommissioning is the administrative and technical process leading to the irreversible closure of a nuclear facility such as a nuclear power plant (NPP), a research reactor, an isotope production plant, a particle accelerator, or uranium mine. It refers to the administrative and technical actions taken to remove all or some of the regulatory controls from the facility to bring about that its site can be reused. Decommissioning includes planning, decontamination, dismantling and materials management.
Decommissioning is the final step in the lifecycle of a nuclear installation. It involves activities from shutdown and removal of nuclear material to the environmental restoration of the site. The term decommissioning covers all measures carried out after a nuclear installation has been granted a decommissioning licence until nuclear regulatory supervision is no longer necessary. The aim is ideally to restore the natural initial state that existed before the construction of the nuclear power plant, the so-called greenfield status.
Decommissioning includes all steps as described in the decommissioning plan, leading to the release of a nuclear facility from regulatory control. The decommissioning plan is fulfilled when the approved end state of the facility has been reached. Disposal facilities for radioactive waste are closed rather than decommissioned. The use of the term decommissioning implies that no further use of the facility (or part thereof) for its existing purpose is foreseen. Though decommissioning typically includes dismantling of the facility, dismantling is not necessarily part of it insofar as existing structures may be reused after decommissioning and decontamination.
From the owner's perspective, the ultimate aim of decommissioning is termination of the operating license, once the owner has demonstrated that radiation at the site is below the legal limits, which in the US is an annual exposure of 25 millirem in the case of release of the site to the public for unrestricted use. The site will be dismantled to the point that it no longer requires measures for radiation protection. Once a facility is decommissioned, no radiological danger persists and it can be released from regulatory control.
The complete process usually takes about 20 to 30 years. In the US, the decommissioning must be completed within 60 years of the plant ceasing operations, unless a longer time is necessary to protect public health and safety; up to 50 years are for radioactive decay and 10 years to dismantle the facility.
Steps in the decommissioning process
The decommissioning process encompasses:
pre-decommissioning
development of a decommissioning plan
involvement of the public (in democracies)
application for a decommissioning license
permanent shutdown
removal and disposal of nuclear fuel, coolant(s) and/or moderator
decommissioning
dismantling and decontamination
in the US, a License Termination Plan (LTP) has to be submitted two years prior to (the expected) termination of the plant license.
restoration of the environment
termination of the operating license; turn over responsibilities
monitoring of the site (in case of deferred dismantling/Safstor)
monitoring and maintenance of the interim storage of spent fuel
final disposal of radioactive waste
Decommissioning plan
Under supervision of the IAEA, a member state first develops a decommissioning plan to demonstrate the feasibility of decommissioning and assure that the associated costs are covered. At the final shutdown, a final decommissioning plan describes in detail how the decommissioning will take place, how the facility will be safely dismantled, ensuring radiation protection of the workers and the public, addressing environmental impacts, managing radioactive and non-radioactive materials, and termination of the regulatory authorization. In the EU, decommissioning operations are overseen by Euratom. Member states are assisted by the European Commission.
The progressive demolition of buildings and removal of radioactive material is potentially occupationally hazardous, expensive, time-intensive, and presents environmental risks that must be addressed to ensure radioactive materials are either transported elsewhere for storage or stored on-site in a safe manner.
Disposal of nuclear waste
Radioactive waste that remains after the decommissioning is either moved to an on-site storage facility where it still is under control of the plant owner, or moved to a dry cask storage or disposal facility at another location. The problem of long-term disposal of nuclear waste is still unsolved. Pending the availability of geologic repository sites for long-term disposal, interim storage is necessary. As the planned Yucca Mountain nuclear waste repository – like elsewhere in the world – is controversial, on- or off-site storage in the US usually takes place in Independent Spent Fuel Storage Facilities (ISFSI's).
In the UK, all eleven Magnox reactors are in decommissioning under responsibility of the NDA. The spent fuel was removed and transferred to the Sellafield site in Cumbria for reprocessing. Facilities for "temporary" storage of nuclear waste – mainly 'Intermediate Level Waste' (ILW) – are in the UK called Interim Storage Facilities (ISF's).
Environmental impact assessment
The decommissioning of a nuclear reactor can only take place after the appropriate licence has been granted pursuant to the relevant legislation. As part of the licensing procedure, various documents, reports and expert opinions have to be written and delivered to the competent authority, e.g. a safety report, technical documents and an environmental impact assessment (EIA). In the European Union, a further precondition for granting such a licence is an opinion by the European Commission according to Article 37 of the Euratom Treaty. On the basis of these general data, the Commission must be in a position to assess the exposure of reference groups of the population in the nearest neighbouring states.
Options
There are several options for decommissioning:
Immediate dismantling (DECON in the United States; )
Shortly after the permanent shutdown, the dismantling and/or decontamination of the facility begins. Equipment, structures, systems and components that contain radioactive material are removed and/or decontaminated to a level that permits the ending of regulatory control of the facility and its release, either for unrestricted use or with restrictions on its future use. The operating license is terminated.
Deferred dismantling (SAFSTOR in the United States; "care and maintenance" (C&M) in the UK)
The final decommissioning is postponed for a longer period, usually 30 to 50 years. Often the non-nuclear part of the facility is dismantled and the fuel removed immediately. The radioactive part is maintained and monitored in a condition that allows the radioactivity to decay (an illustrative decay calculation is given after these options). Afterwards, the plant is dismantled and the property decontaminated to levels that permit release for unrestricted or restricted use. In the US, the decommissioning must be completed within 60 years. With deferred dismantling, costs are shifted to the future, but this entails the risk of rising expenditures for decades to come and changing rules. Moreover, the site cannot be re-used until the decommissioning is finished, while there are no longer revenues from production.
Partial entombment
The US has introduced the so-called In Situ Decommissioning (ISD) closures. All aboveground structures are dismantled; all remaining belowground structures are entombed by grouting all spaces. Advantages are lower decommissioning costs and safer execution. Disadvantages are main components remaining undismantled and definitively inaccessible. The site has to be monitored indefinitely.
This method was implemented at the Savannah River Site in South Carolina for the closure of the P and R Reactors. With this method, the cost of decommissioning for each reactor was about $73 million. In comparison, the decommissioning of each reactor using traditional methods would have been an estimated $250 million. This resulted in a 71% decrease in cost. Other examples are the Hallam nuclear reactor and the Experimental Breeder Reactor II.
Complete entombment
The facility will not be dismantled. Instead it is entombed and maintained indefinitely, and surveillance is continued until the entombed radioactive waste is decayed to a level permitting termination of the license and unrestricted release of the property. The licensee maintains the license previously issued. This option is likely the only possible one in case of a nuclear disaster where the reactor is destroyed and dismantling is impossible or too dangerous. An example of full entombment is the Chernobyl reactor.
In IAEA terms, entombment is not considered an acceptable strategy for decommissioning a facility following a planned permanent shutdown, except under exceptional circumstances, such as a nuclear disaster. In that case, the structure has to be maintained and surveillance continued until the radioactive material is decayed to a level permitting termination of the licence and unrestricted release of the structure.
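The rationale for deferred dismantling is that activity in the plant falls sharply during the storage period. The sketch below applies simple exponential decay over the 50-year deferral mentioned above to two illustrative nuclides; the choice of nuclides and their approximate half-lives are assumptions for illustration, not data from the text.

```python
import math

# Fraction of initial activity remaining after a storage period,
# assuming simple exponential decay: A(t) = A0 * 0.5 ** (t / half_life).

def remaining_fraction(half_life_years: float, elapsed_years: float) -> float:
    return 0.5 ** (elapsed_years / half_life_years)

storage_years = 50  # at the upper end of the deferral periods mentioned above

# Illustrative nuclides often relevant to reactor decommissioning
# (half-lives are approximate literature values):
for name, half_life in [("Co-60", 5.27), ("Cs-137", 30.2)]:
    frac = remaining_fraction(half_life, storage_years)
    print(f"{name}: about {frac:.1%} of the initial activity remains after {storage_years} years")
# Co-60 decays to well under 1%, while the longer-lived Cs-137 still retains
# roughly a third of its activity - one reason deferral helps with some
# hazards much more than others.
```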
Costs
The calculation of the total cost of decommissioning is challenging, as there are large differences between countries regarding inclusion of certain costs, such as on-site storage of fuel and radioactive waste from decommissioning, dismantling of non-radioactive buildings and structures, and transport and (final) disposal of radioactive waste.
Moreover, estimates of future costs of deferred decommissioning are virtually impossible, due to the long period, over which inflation and cost escalation are unpredictable. Nuclear decommissioning projects are characterized by high and highly variable costs, long schedules and a range of risks. Compared with non-nuclear decommissioning, additional costs are usually related to radiological hazards and safety & security requirements, but also to higher wages for the more highly qualified personnel required. Benchmarking, comparing projects in different countries, may be useful in estimating the cost of decommissioning. While, for instance, costs for spent fuel and high-level-waste management significantly impact the budget and schedule of decommissioning projects, it is necessary to clarify which is the starting and the ending point of the decommissioning process.
The effective decommissioning activities begin after all nuclear fuel has been removed from the plant areas that will be decommissioned; fuel removal forms a critical component of pre-decommissioning operations and thus should be factored into the decommissioning plan. The chosen option – immediate or deferred decommissioning – impacts the overall costs. Many other factors also influence the cost. A 2018 KPMG article about decommissioning costs observes that many entities do not include the cost of managing spent nuclear fuel, removed from the plant areas that will be decommissioned (in the US routinely stored in ISFSIs).
In 2004, in a meeting in Vienna, the International Atomic Energy Agency estimated the total cost for the decommissioning of all nuclear facilities.
Decommissioning of all nuclear power reactors in the world would require US$187 billion; US$71 billion for fuel cycle facilities; less than US$7 billion for all research reactors; and US$640 billion for dismantling all military reactors for the production of weapons-grade plutonium, research fuel facilities, nuclear reprocessing chemical separation facilities, etc.
The total cost to decommission the nuclear fission industry in the World (from 2001 to 2050) was estimated at US$1 trillion. Market Watch estimated (2019) the global decommissioning costs in the nuclear sector in the range of US$1 billion to US$1.5 billion per 1,000-megawatt plant.
The huge costs of research and development for (geological) long-term disposal of nuclear waste are collectively defrayed by the taxpayers in different countries, not by the companies.
Decommissioning funds
The costs of decommissioning are to be covered by funds that are provided for in a decommissioning plan, which is part of the facility's initial authorization, before the start of operations. In this way, it is ensured that there will be sufficient money to pay for the eventual decommissioning of the facility. This may, for example, be through savings in a trust fund or a guarantee from the parent company.
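A decommissioning fund of the kind described above is essentially a sinking fund: regular contributions invested over the operating life must grow to the estimated decommissioning cost. The sketch below shows the standard future-value arithmetic; the target cost, rate of return and time horizon are illustrative assumptions, not figures from any regulator or operator.

```python
# Sinking-fund sketch: the annual contribution needed so that payments
# compounding at rate r for n years reach a decommissioning cost target.
# Future value of an ordinary annuity: FV = P * ((1 + r)**n - 1) / r

def annual_contribution(target: float, rate: float, years: int) -> float:
    """Return the level annual payment that grows to `target` after `years`."""
    return target * rate / ((1 + rate) ** years - 1)

if __name__ == "__main__":
    # Illustrative assumptions only: a $500 million target, 3% real return,
    # accumulated over a 40-year operating life.
    payment = annual_contribution(target=500e6, rate=0.03, years=40)
    print(f"Required contribution: about ${payment / 1e6:.1f} million per year")
    # roughly $6-7 million per year under these assumptions
```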
Switzerland has a central fund for decommissioning its five nuclear power reactors, and another one for disposal of the nuclear waste. Germany also has a state-owned fund for decommissioning of the plants and managing radioactive waste, for which the reactor owners have to pay. The UK Government (the taxpayers) will pay most of the costs for both nuclear decommissioning and existing waste.
The decommissioning of all Magnox reactors is entirely funded by the state.
Since 2010, owners of new nuclear plants in the Netherlands are obliged to set up a decommissioning fund before construction is started.
Underfunding
The economic costs of decommissioning will increase as more assets reach the end of their life, but few operators have put aside sufficient funds.
In 2016 the European Commission assessed that European Union's nuclear decommissioning liabilities were seriously underfunded by about 118 billion euros, with only 150 billion euros of earmarked assets to cover 268 billion euros of expected decommissioning costs covering both dismantling of nuclear plants and storage of radioactive parts and waste.
In February 2017, a committee of the French parliament warned that the state-controlled EDF had underestimated the costs for decommissioning. France had set aside only €23 billion for decommissioning and waste storage of its 58 reactors, which was less than a third of the €74 billion in expected costs, while the UK's NDA estimated that clean-up of the UK's 17 nuclear sites would cost between €109 and €250 billion. EDF estimated the total cost at €54 billion. According to the parliamentary commission, the clean-up of French reactors will take longer, be more challenging and cost much more than EDF anticipates. It said that EDF showed "excessive optimism" concerning the decommissioning. EDF budgets some €350 million per reactor, whereas other European operators reckon with between €900 million and €1.3 billion per reactor. EDF's estimate was primarily based on the single historic example of the already dismantled Chooz A reactor. The committee argued that costs like restoration of the site, removal of spent fuel, taxes and insurance and social costs should be included.
Similar concerns about underfunding exist in the United States, where the U.S. Nuclear Regulatory Commission has identified apparent shortfalls in decommissioning funding assurance and asked 18 power plants to address the issue. The decommissioning cost of small modular reactors is expected to be about twice that of large reactors.
Examples by country
In France, decommissioning of Brennilis Nuclear Power Plant, a fairly small 70 MW power plant, has already cost €480 million (20 times the estimated cost) and is still pending after 20 years.
Despite the huge investments in securing the dismantlement, radioactive elements such as plutonium, caesium-137 and cobalt-60 leaked out into the surrounding lake.
In the UK, the decommissioning of civil nuclear assets was estimated at £99 to £232 billion (2020); an earlier 2005 estimate of £20–40 billion proved far too low. The Sellafield site (Calder Hall, Windscale and the reprocessing facility) alone accounts for most of the decommissioning cost and of the increase in cost;
as of 2015, the costs were estimated at £53.2 billion. By 2019, the estimate had risen much higher, to £97 billion. A 2013 estimate by the United Kingdom's Nuclear Decommissioning Authority predicted costs of at least £100 billion to decommission the 19 existing United Kingdom nuclear sites.
In Germany, decommissioning of Niederaichbach nuclear power plant, a 100 MW power plant, amounted to more than €143 million.
Lithuania has increased its projection of decommissioning costs from €2,019 million in 2010 to €3,376 million in 2015.
United States
The decommissioning can only be completed after the on-site storage of nuclear waste has been ended. Under the 1982 Nuclear Waste Policy Act, a "Nuclear Waste Fund", funded by a tax on electricity, was established to build a geologic repository. On May 16, 2014, collection of the fee was suspended after a complaint by owners and operators of nuclear power plants. By 2021, the Fund had a balance of more than $44 billion, including interest. The Fund has since been folded back into the general fund and is being used for other purposes. As the plan for the Yucca Mountain nuclear waste repository has been canceled, DOE announced in 2021 the establishment of an interim repository for nuclear waste.
Because the government has failed to establish a central repository, the federal government pays about half a billion dollars a year to the utilities as a penalty, to compensate for the cost of storage at more than 80 ISFSI sites in 35 states as of 2021. As of 2021, the government had paid $9 billion to utility companies for their interim storage costs, a figure which may grow to $31 billion or more.
As of 2018, nuclear waste cost American taxpayers about $30 billion per year through the Department of Energy (DOE) budget: $18 billion for nuclear power and $12 billion for waste from nuclear weapons programs.
KPMG estimated the total cost of decommissioning the US nuclear fleet as of 2018 to be greater than US$150 billion. About two-thirds can be attributed to costs for termination of the NRC operating licence; 25% to management of spent fuel; and 10% to site restoration. The decommissioning of only the three uranium enrichment facilities would have an estimated cost (2004) of US$18.7 to 62 billion, with an additional US$2 to 6 billion for the dismantling of a large inventory of depleted uranium hexafluoride. A 2004 GAO report indicated the "costs will have exceeded revenues by $3.5 billion to $5.7 billion (in 2004 dollars)" for the 3 enrichment facilities slated for decommissioning.
International collaboration
Organizations that promote the international sharing of information, knowledge, and experiences related to nuclear decommissioning include the International Atomic Energy Agency, the Organization for Economic Co-operation and Development's Nuclear Energy Agency and the European Atomic Energy Community. In addition, an online system called the Deactivation and Decommissioning Knowledge Management Information Tool was developed under the United States Department of Energy and made available to the international community to support the exchange of ideas and information. The goals of international collaboration in nuclear decommissioning are to reduce decommissioning costs and improve worker safety.
Decommissioning of ships, mobile reactors, and military reactors
Many warships and a few civil ships have used nuclear reactors for propulsion. Former Soviet and American warships have been taken out of service and their power plants removed or scuttled. Dismantling of Russian and American submarines and ships is ongoing. Russia has a fleet of nuclear-powered vessels awaiting decommissioning, some of which have been dumped in the Barents Sea. The estimated cost for the decommissioning of the two submarines K-27 and K-159 alone was €300 million (2019), or $330 million. Marine power plants are generally smaller than land-based electrical generating stations.
The biggest American military nuclear facility for the production of weapons-grade plutonium was Hanford site (in the State of Washington), now defueled, but in a slow and problematic process of decontamination, decommissioning, and demolition. There is "the canyon", a large structure for the chemical extraction of plutonium with the PUREX process. There are also many big containers and underground tanks with a solution of water, hydrocarbons and uranium-plutonium-neptunium-cesium-strontium (all highly radioactive). With all reactors now defueled, some were put in SAFSTOR (with their cooling towers demolished). Several reactors have been declared National Historic Landmarks.
List of inactive or decommissioned civil nuclear reactors
A wide range of nuclear facilities have been decommissioned so far. The number of decommissioned nuclear reactors out of the List of nuclear reactors is small. As of May 2022, about 700 nuclear reactors had been retired from operation and were in various early and intermediate stages of decommissioning (cold shut-down, defueling, SAFSTOR, internal demolition), but only about 25 had been taken to fully "greenfield status". Many of these sites still host spent nuclear fuel in the form of dry casks embedded in concrete-filled steel drums.
As of 2017, most nuclear plants operating in the United States were designed for a life of about 30–40 years and are licensed to operate for 40 years by the US Nuclear Regulatory Commission. As of 2020, the average age of these reactors was about 39 years. Many plants are coming to the end of their licensing period and if their licenses are not renewed, they must go through a decontamination and decommissioning process.
Generally not included are the costs of storing nuclear waste, including spent fuel, and of maintaining the storage facility (in the US, Independent Spent Fuel Storage Installations, ISFSIs), pending the realization of repository sites for long-term disposal (p. 246). Thus many entities do not include the cost of managing spent nuclear fuel removed from the plant areas that will be decommissioned. There are, however, large differences between countries regarding the inclusion of certain costs, such as on-site storage of fuel and radioactive waste from decommissioning, dismantling of non-radioactive buildings and structures, and transport and (final) disposal of radioactive waste (p. 61).
The year of costs may refer to the value corrected for exchange rates and inflation until that year (e.g. 2020-dollars).
The stated power in the list is preferably given in design net capacity (reference unit power) in MWe, similar to the List of commercial nuclear reactors.
United Kingdom
United States
See also
Lists of nuclear disasters and radioactive incidents
Marcoule Nuclear Site in France
Nuclear Decommissioning Authority
Nuclear entombment
Ship-Submarine Recycling Program
References
External links
NUCLEAR ENERGY AGENCY of the Organisation for Economic Co-operation & Development: Cost of Decommissioning Nuclear Energy Plants (2016)
UNITED STATES NUCLEAR REGULATORY COMMISSION: Backgrounder on Decommissioning Nuclear Power Plants
Business Insider – UK: Getting Rid Of Old Nuclear Reactors Worldwide Is Going To Cost Way More Than People Think
Germany's economy minister Sigmar Gabriel says state won't pay for nuclear decommissioning (May 18, 2014)
Nuclear Decommissioning Report (www.ndreport.com) is the multi-media platform for the nuclear decommissioning industry.
decommissioning.info is a portal with information on nuclear decommissioning
US Sites Undergoing Decommissioning
European website on decommissioning of nuclear installations
Decommissioning Fund Methodologies for Nuclear Installations in the EU, rapport by the German Wuppertal Institute, commissioned by the European Commission. May 2007.
Master 'Nuclear Energy' – Decommissioning and Waste management
Nuclear technology
Nuclear power stations
Radioactive waste
Nuclear liability
Radioactive contamination | Nuclear decommissioning | [
"Physics",
"Chemistry",
"Technology"
] | 5,252 | [
"Radioactive contamination",
"Nuclear technology",
"Environmental impact of nuclear power",
"Hazardous waste",
"Radioactivity",
"Nuclear physics",
"Radioactive waste"
] |
4,638,461 | https://en.wikipedia.org/wiki/Blacksmiths%20of%20western%20Africa | The history of blacksmithing in West Africa dates back to around 1500 BCE, marking the emergence of skilled artisans whose mastery of ironworking was both revered and feared across the region. Blacksmiths held a unique position in West African societies, often perceived as possessing magical abilities due to their expertise in transforming metal. Their craft, critical to the development of tools, weapons, and ceremonial objects, was essential to the social and economic growth of various West African civilizations. As a result, blacksmiths were not only integral to the survival and advancement of their communities but also occupied high social statuses. These ironworking societies include the Mandé peoples of Mali and the Bamana. In some cultures, their skills were linked to spiritual practices and religious beliefs, particularly in the Yoruba culture, where the god Ogun, associated with iron and war, played a central role in their mythology. Blacksmiths in these societies were often part of endogamous castes, with knowledge and skills passed down through generations, ensuring the continuation of this vital craft.
Nigeria
The Nok people of Nigeria provide evidence of the blacksmith's art dating back to the sixth century BC. Ironworking made farming, hunting, and war much more efficient. Iron allowed for greater growth in societies. With the ability to support larger communities came social growth and the development of large kingdoms, which spread across Western Africa.
Throughout Nigeria two more very important West African civilizations arose. The Ife and the Oyo people of Yorubaland are very similar in their spiritual and ritual beliefs. Both base their existence around ironworking. To these African civilizations, iron had become the key to their development and survival, and it was worshiped as such. The Ife and Oyo people believe that the blacksmith has the power to express the spirit of Ogun, the god of iron, because they create iron, which is the foundation for their survival.
Spirituality and religion
Ogun, the god of iron
Ogun, the god of iron, is one of the pantheon of "orisa" traditionally worshipped by the Yoruba of Nigeria. Ogun is the god of iron and metalworking and was himself a user of iron as a blacksmith and metal worker. In Yoruba the use of “O” means “a spiritual force has mastered a particular form of wisdom” (Fatunmbi). Ogun therefore signifies survival through assertive and aggressive action directed toward maintaining that survival (Fatunmbi). Most of Nigeria's numerous ethnic cultures have a god of iron and metalworking in their traditional religion.
Mande blacksmiths
The Mande blacksmiths hold important positions in society. Blacksmiths are often called upon by the chief for guidance in major decisions regarding the village. The power of the blacksmith is thought to be so great that they are also feared. Mande Blacksmiths control a force called nyama. This means that they control all energy and power in the village as well as the makeup and workings of the Mande society. The ability to control such a force is not given to just anyone. A single family in the village is designated to produce blacksmiths. The boys from that family are taught the daliluw, “the secret knowledge about the use and nature of nyama”.
“Nyama is the foundation that nourishes the institution of smithing, so that it may nourish society, is the simple axiom that knowledge can be power when properly articulated…. One must first possess it (nyama) in substantial amounts and then acquire the knowledge to manipulate and direct it to capitalize on its potential benefits. Acts that are difficult or dangerous—like hunting, or smelting, and forging iron—demand that a greater responsibility of energy and a higher degree of knowledge be possessed by the actor” (Perani, Smith 1998: 71).
They begin training at an early age, as an apprentice in order to master the techniques of blacksmithing by the time they reach adulthood and become Mande blacksmiths.
Bamana society
The Bamana society is very similar to the Mande. Bamana society is also endogamous, so blacksmith families are the only blacksmiths in the village, and they hold a very high status due to the extreme power and responsibility that they possess. Bamana blacksmiths are also experts in divination, amulet making, and the practice of medicine, owing to their extensive knowledge of the Spirit of Ogun. Bamana blacksmiths are entrusted with the well-being of the villagers and the safety of the village. This power, like that of the Mande, is driven by their control over nyama.
The Bamana training of young blacksmiths lasts about eight years. After completion of the apprenticeship the young blacksmith is ready to begin forging tools, weapons, and ritual masks and staffs, used for ceremonial purposes. “When used actively and sacrificed to, iron staffs continue to gain and radiate power, the power to protect, cure, fight, honor, lead, and repel” (Perani, Smith 1998: 71-72).
Numu blacksmith castes and languages
In much of West Africa, blacksmiths form castes, called numu in Mande. Because these castes are endogamous (they only marry within the group), they have in several instances become distinct ethnic groups, which when separated from their parent group have even developed distinct languages spoken only by blacksmiths. The best-known of these is Ligbi; others include Tonjon, Natioro, Somyev, and in eastern Africa, Ndo.
References
Joyce, Tom. (2002) The Blacksmith's Art from Africa Life Force at the Anvil.
Perani, Judith. Smith, Fred T. (1998) The Visual Arts of Africa, gender, power, and life cycle rituals. 71-72 p.
Ross, Emma George. "The Age of Iron in West Africa". In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. http://www.metmuseum.org/toah/hd/iron/hd_iron.htm (October 2002)
History of West Africa
History of metallurgy | Blacksmiths of western Africa | [
"Chemistry",
"Materials_science"
] | 1,256 | [
"Metallurgy",
"History of metallurgy"
] |
4,642,577 | https://en.wikipedia.org/wiki/Electrochemical%20noise | Electrochemical noise (ECN) is the generic term given to fluctuations of current and potential. When associated with corrosion, it is the result of stochastic pulses of current generated by sudden film rupture, crack propagation, and discrete events involving metal dissolution and hydrogen discharge with gas bubble formation and detachment. The technique of measuring electrochemical noise uses no external signal for the collection of experimental data.
The ECN technique measures the signal perturbations, which are low-level fluctuations of the corrosion potential between two nominally identical electrodes, and which can be used in the mechanistic determination of corrosion type and speed. The fluctuations are usually of low amplitude, less than 1 mV, and are quantified by the RMS value of the band-pass-filtered signal (DC and high-frequency AC components removed). The potential noise corresponds to the low-frequency current noise (the differential of the ZRA signal) but has a much lower amplitude when general corrosion is involved. The major sources of noise can be attributed to macroscopic random-stochastic phenomena. These include partial faradaic currents, adsorption/desorption, changes in surface coverage, corrosion cracking, and mechanical erosion processes. A common feature of these 1/f Poisson spectra is that they differ from "white" Gaussian noise, for which measurement accuracy increases as the square root of the measurement time.
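As a rough illustration of the measurement described above, the sketch below band-pass filters a recorded potential-noise trace (removing DC drift and high-frequency pickup) and reports its RMS amplitude. The filter order, pass band, and synthetic test record are arbitrary assumptions, not part of any standard ECN procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ecn_rms(potential, fs, band=(0.01, 0.5)):
    """RMS of the band-pass-filtered electrochemical potential noise.

    potential : sampled potential fluctuations between the two electrodes (V)
    fs        : sampling frequency (Hz)
    band      : pass band (Hz); removes DC drift and high-frequency components
    """
    nyquist = fs / 2
    b, a = butter(2, [band[0] / nyquist, band[1] / nyquist], btype="band")
    filtered = filtfilt(b, a, potential)            # zero-phase band-pass filter
    return np.sqrt(np.mean(filtered ** 2))

# Synthetic 1-hour record sampled at 2 Hz: a slowly drifting, 1/f-like trace.
fs = 2.0
rng = np.random.default_rng(0)
record = 1e-4 * np.cumsum(rng.standard_normal(int(3600 * fs))) / np.sqrt(fs)
print(f"ECN RMS amplitude: {ecn_rms(record, fs):.2e} V")
```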
The technique considers the reactions occurring at the metal–solution interface and treats two currents as flowing on each electrode as a result of the anodic and cathodic reactions. Once regarded as a source of bias and error that compromised electrochemical measurements, electrochemical noise is now regarded as a rich source of information. The technique is widely used in corrosion engineering as a useful corrosion monitoring technique.
The ECN phenomenon belongs to the general category of random low-frequency stochastic processes described by either probability density function equations or in statistical terms. These random processes are either stationary or non-stationary. The first moments of a stationary process are invariant with time.
References
Electrochemistry | Electrochemical noise | [
"Chemistry"
] | 407 | [
"Electrochemistry",
"Physical chemistry stubs",
"Electrochemistry stubs"
] |
4,642,746 | https://en.wikipedia.org/wiki/Feynman%20sprinkler | A Feynman sprinkler, also referred to as a Feynman inverse sprinkler or reverse sprinkler, is a sprinkler-like device which is submerged in a tank and made to suck in the surrounding fluid. The question of how such a device would turn was the subject of an intense and remarkably long-lived debate. The device generally remains steady with no rotation, though with sufficiently low friction and high rate of inflow, it has been seen to turn weakly in the opposite direction of a conventional sprinkler.
A regular sprinkler has nozzles arranged at angles on a freely rotating wheel such that when water is pumped out of them, the resulting jets cause the wheel to rotate; a Catherine wheel and the aeolipile ("Hero's engine") work on the same principle. A "reverse" or "inverse" sprinkler would operate by aspirating the surrounding fluid instead. The problem is commonly associated with theoretical physicist Richard Feynman, who mentions it in his bestselling memoirs Surely You're Joking, Mr. Feynman!. The problem did not originate with Feynman, nor did he publish a solution to it.
History
The first documented treatment of the problem is in chapter III, section III, of Ernst Mach's textbook The Science of Mechanics, first published in 1883. There Mach reported that the device showed "no distinct rotation." In the early 1940s (and apparently without awareness of Mach's earlier discussion), the problem began to circulate among members of the physics department at Princeton University, generating a lively debate. Richard Feynman, at the time a young graduate student at Princeton, built a makeshift experiment within the facilities of the university's cyclotron laboratory. The experiment ended with the explosion of the glass carboy that he was using as part of his setup.
In 1966, Feynman turned down an offer from the editor of Physics Teacher to discuss the problem in print and objected to it being called "Feynman's problem," pointing instead to the discussion of it in Mach's textbook. The sprinkler problem attracted a great deal of attention after the incident was mentioned in Surely You're Joking, Mr. Feynman!, a book of autobiographical reminiscences published in 1985. Feynman gave one argument for why the sprinkler should rotate in the forward direction, and another for why it should rotate in reverse; he did not say how or if the sprinkler actually moved. In an article written shortly after Feynman's death in 1988, John Wheeler, who had been his doctoral advisor at Princeton, revealed that the experiment at the cyclotron had shown “a little tremor as the pressure was first applied [...] but as the flow continued there was no reaction.” The sprinkler incident is also discussed in James Gleick's biography of Feynman, Genius, published in 1992 where Gleick claims that a sprinkler will not turn at all if made to suck in fluid.
In 2005, physicist Edward Creutz (who was in charge of the Princeton cyclotron at the time of the incident) revealed in print that he had assisted Feynman in setting up his experiment and that, when pressure was applied to force water out of the carboy through the sprinkler head,
The question
In his book, Feynman recites the question:
Solution
The behavior of the reverse sprinkler is qualitatively quite distinct from that of the ordinary sprinkler, and one does not behave like the other "played backwards". Most of the published theoretical treatments of this problem have concluded that the ideal reverse sprinkler will not experience any torque in its steady state. It may be understood in terms of conservation of angular momentum: in its steady state, the amount of angular momentum carried by the incoming fluid is constant, which implies that there is no torque on the sprinkler itself.
Alternatively, in terms of forces on an individual sprinkler nozzle, consider Mach's illustration. There:
the reaction force on the nozzle as it sucks in the fluid, pulling the nozzle anti-clockwise;
the inflowing water impacting on the inside of the nozzle, pushing the nozzle clockwise.
The two forces are equal and opposite, so sucking in the fluid causes no net force on the sprinkler nozzle. This is similar to the pop pop boat when it sucks in water—the inflowing water transfers its momentum to the boat, so sucking in water causes no net force on the boat.
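A minimal control-volume estimate makes the cancellation explicit. Assuming an ideal steady inflow of density $\rho$ and volumetric rate $Q$ that enters the nozzle mouth of area $A$ at speed $v = Q/A$ and is then brought to rest inside the arm, the two contributions described above are:

```latex
\begin{aligned}
F_{\text{suction}} &= \rho\, Q\, v && \text{(reaction to accelerating fluid toward the intake),}\\
F_{\text{impact}}  &= \rho\, Q\, v && \text{(incoming stream stopped against the inside of the arm),}\\
F_{\text{net}}     &= F_{\text{suction}} - F_{\text{impact}} = 0 .
\end{aligned}
```

In a real, viscous flow the two terms need not cancel exactly, which is consistent with the weak reverse rotation reported in the low-friction experiments discussed below.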
Many experiments, going back to Mach, find no rotation of the reverse sprinkler. In setups with sufficiently low friction and high rate of inflow, the reverse sprinkler has been seen to turn weakly in the opposite sense to the conventional sprinkler, even in its steady state. Such behavior could be explained by the diffusion of momentum in a non-ideal (i.e., viscous) flow. However, careful observation of experimental setups shows that this turning is associated with the formation of a vortex inside the body of the sprinkler. An analysis of the actual distribution of forces and pressure in a non-ideal reverse sprinkler provides the theoretical basis to explain this.
References
External links
D3-22: Inverse Sprinkler - Metal Model, University of Maryland Physics Lecture-Demonstration Facility
The Edgerton Center Corridor Lab: Feynman Sprinkler
Physics dissertation by A. Jenkins, Caltech (see chapter 6)
Richard Feynman
Fluid mechanics
Thought experiments in physics | Feynman sprinkler | [
"Engineering"
] | 1,179 | [
"Civil engineering",
"Fluid mechanics"
] |
4,643,304 | https://en.wikipedia.org/wiki/Lightest%20supersymmetric%20particle | In particle physics, the lightest supersymmetric particle (LSP) is the generic name given to the lightest of the additional hypothetical particles found in supersymmetric models. In models with R-parity conservation, the LSP is stable; in other words, it cannot decay into any Standard Model particle, since all SM particles have the opposite R-parity. There is extensive observational evidence for an additional component of the matter density in the universe, which goes under the name dark matter. The LSP of supersymmetric models is a dark matter candidate and is a weakly interacting massive particle (WIMP).
Constraints on LSP from cosmology
The LSP is unlikely to be a charged wino, charged higgsino, slepton, sneutrino, gluino, squark, or gravitino but is most likely a mixture of neutral higgsinos, the bino and the neutral winos, i.e. a neutralino. In particular, if the LSP were charged (and abundant in our galaxy), such particles would have been captured by the Earth's magnetic field and would form heavy hydrogen-like atoms. Searches for anomalous hydrogen in natural water, however, have found no evidence for such particles and thus put severe constraints on the existence of a charged LSP.
As a dark matter candidate
Dark matter particles must be electrically neutral; otherwise they would scatter light and thus not be "dark". They must also almost certainly be non-colored.
With these constraints, the LSP could be the lightest neutralino, the gravitino, or the lightest sneutrino.
Sneutrino dark matter is ruled out in the Minimal Supersymmetric Standard Model (MSSM) because of the current limits on the interaction cross section of dark matter particles with ordinary matter as measured by direct detection experiments—the sneutrino interacts via Z boson exchange and would have been detected by now if it makes up the dark matter. Extended models with right-handed or sterile sneutrinos reopen the possibility of sneutrino dark matter by lowering the interaction cross section.
Neutralino dark matter is the favored possibility. In most models the lightest neutralino is mostly bino (superpartner of the hypercharge gauge boson field B), with some admixture of neutral wino (superpartner of the weak isospin gauge boson field W0) and/or neutral Higgsino.
Gravitino dark matter is a possibility in supersymmetric models in which the scale of supersymmetry breaking is low, around 100 TeV. In such models the gravitino is very light, of order an eV. As dark matter, the gravitino is sometimes called a super-WIMP because its interaction strength is much weaker than that of other supersymmetric dark matter candidates. For the same reason, its direct thermal production in the early universe is too inefficient to account for the observed dark matter abundance. Rather, gravitinos would have to be produced through the decay of the next-to-lightest supersymmetric particle (NLSP).
In extra-dimensional theories, there are analogous particles called LKPs or Lightest Kaluza–Klein Particle. These are the stable particles of extra-dimensional theories.
See also
Dark matter
Darkon (unparticle)
List of hypothetical particles
Supersymmetry
Weakly interacting slender particle
References
Dark matter
Supersymmetric quantum field theory
Hypothetical particles
Weight | Lightest supersymmetric particle | [
"Physics",
"Astronomy"
] | 735 | [
"Dark matter",
"Hypothetical particles",
"Force",
"Unsolved problems in astronomy",
"Physical quantities",
"Symmetry",
"Supersymmetric quantum field theory",
"Concepts in astronomy",
"Mechanical quantities",
"Mass",
"Unsolved problems in physics",
"Weight",
"Subatomic particles",
"Exotic m... |
4,643,400 | https://en.wikipedia.org/wiki/Majorana%20fermion | A Majorana fermion () or Majorana particle is a fermion that is its own antiparticle. They were hypothesised by Ettore Majorana in 1937. The term is sometimes used in opposition to Dirac fermion, which describes fermions that are not their own antiparticles.
With the exception of neutrinos, all of the Standard Model elementary fermions are known to behave as Dirac fermions at low energy (lower than the electroweak symmetry breaking temperature), and none are Majorana fermions. The nature of neutrinos is not settled – they may turn out to be either Dirac or Majorana fermions.
In condensed matter physics, quasiparticle excitations can appear like bound Majorana states. However, instead of a single fundamental particle, they are the collective movement of several individual particles (themselves composite) which are governed by non-Abelian statistics.
Theory
The concept goes back to Majorana's suggestion in 1937 that electrically neutral spin- particles can be described by a real-valued wave equation (the Majorana equation), and would therefore be identical to their antiparticle, because the wave functions of particle and antiparticle are related by complex conjugation, which leaves the Majorana wave equation unchanged.
The difference between Majorana fermions and Dirac fermions can be expressed mathematically in terms of the creation and annihilation operators of second quantization: the creation operator $f_j^\dagger$ creates a fermion in quantum state $j$ (described by a real wave function), whereas the annihilation operator $f_j$ annihilates it (or, equivalently, creates the corresponding antiparticle). For a Dirac fermion the operators $f_j^\dagger$ and $f_j$ are distinct, whereas for a Majorana fermion they are identical. The ordinary fermionic annihilation and creation operators $f$ and $f^\dagger$ can be written in terms of two Majorana operators $\gamma_1$ and $\gamma_2$ by
$$f = \tfrac{1}{\sqrt{2}}\,(\gamma_1 + i\gamma_2), \qquad f^\dagger = \tfrac{1}{\sqrt{2}}\,(\gamma_1 - i\gamma_2).$$
In supersymmetry models, neutralinos – superpartners of gauge bosons and Higgs bosons – are Majorana fermions.
Identities
Another common convention for the normalization of the Majorana fermion operators is
$$f = \tfrac{1}{2}\,(\gamma_1 + i\gamma_2), \qquad f^\dagger = \tfrac{1}{2}\,(\gamma_1 - i\gamma_2),$$
which can be rearranged to obtain the Majorana fermion operators as
$$\gamma_1 = f + f^\dagger, \qquad \gamma_2 = i\,(f^\dagger - f).$$
It is easy to see that $\gamma_i^\dagger = \gamma_i$ is indeed fulfilled. This convention has the advantage that the Majorana operator squares to the identity, i.e. $\gamma_i^2 = 1$.
Using this convention, a collection of Majorana fermions $\gamma_i$ ($i = 1, \ldots, 2n$, describing $n$ ordinary fermions) obey the anticommutation identity
$$\{\gamma_i, \gamma_j\} = 2\,\delta_{ij},$$
together with further bilinear identities involving antisymmetric matrices. These are identical to the commutation relations for the real Clifford algebra in $2n$ dimensions ($\mathrm{Cl}_{2n}(\mathbb{R})$).
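The algebra above can be checked directly in the two-dimensional Fock space of a single fermionic mode. The sketch below is only a numerical verification of the convention with $\gamma_i^2 = 1$; the matrix representation of $f$ is the usual single-mode one.

```python
import numpy as np

# Two-dimensional Fock space {|0>, |1>} of a single fermionic mode.
f = np.array([[0, 1],
              [0, 0]], dtype=complex)    # annihilation operator: f|1> = |0>
fd = f.conj().T                          # creation operator f^dagger

# Majorana operators in the convention with gamma^2 = 1
g1 = f + fd
g2 = 1j * (fd - f)

I = np.eye(2)
for g in (g1, g2):
    assert np.allclose(g, g.conj().T)    # self-conjugate (the Majorana condition)
    assert np.allclose(g @ g, I)         # squares to the identity

# Clifford-algebra relation {g_i, g_j} = 2 delta_ij: distinct operators anticommute.
assert np.allclose(g1 @ g2 + g2 @ g1, 0 * I)
print("Majorana operator identities verified for a single fermionic mode.")
```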
Elementary particles
Because particles and antiparticles have opposite conserved charges, Majorana fermions have zero charge, hence among the fundamental particles, the only fermions that could be Majorana are sterile neutrinos, if they exist. All the other elementary fermions of the Standard Model have gauge charges, so they cannot have fundamental Majorana masses: Even the Standard Model's left-handed neutrinos and right-handed antineutrinos have non-zero weak isospin, a charge-like quantum number. However, if they exist, the so-called "sterile neutrinos" (left-handed antineutrinos and right-handed neutrinos) would be truly neutral particles (assuming no other, unknown gauge charges exist).
The sterile neutrinos introduced to explain neutrino oscillation and anomalously small Standard Model neutrino masses could have Majorana masses. If they do, then at low energy (after electroweak symmetry breaking), by the seesaw mechanism, the neutrino fields would naturally behave as six Majorana fields, with three of them expected to have very high masses (comparable to the GUT scale) and the other three expected to have very low masses (below 1 eV). If right-handed neutrinos exist but do not have a Majorana mass, the neutrinos would instead behave as three Dirac fermions and their antiparticles with masses coming directly from the Higgs interaction, like the other Standard Model fermions.
The seesaw mechanism is appealing because it would naturally explain why the observed neutrino masses are so small. However, if the neutrinos are Majorana then they violate the conservation of lepton number and even of B − L.
Neutrinoless double beta decay has not (yet) been observed,
but if it does exist, it can be viewed as two ordinary beta decay events whose resultant antineutrinos immediately annihilate each other, and is only possible if neutrinos are their own antiparticles.
The high-energy analog of the neutrinoless double beta decay process is the production of same-sign charged lepton pairs in hadron colliders; it is being searched for by both the ATLAS and CMS experiments at the Large Hadron Collider. In theories based on left–right symmetry, there is a deep connection between these processes. In the currently most-favored explanation of the smallness of neutrino mass, the seesaw mechanism, the neutrino is “naturally” a Majorana fermion.
Majorana fermions cannot possess intrinsic electric or magnetic moments, only toroidal moments. Such minimal interaction with electromagnetic fields makes them potential candidates for cold dark matter.
Majorana bound states
In superconducting materials, a quasiparticle can emerge as a Majorana fermion (non-fundamental), more commonly referred to as a Bogoliubov quasiparticle in condensed matter physics. Its existence becomes possible because a quasiparticle in a superconductor is its own antiparticle.
Mathematically, the superconductor imposes electron–hole "symmetry" on the quasiparticle excitations, relating the creation operator at energy $E$ to the annihilation operator at energy $-E$. Majorana fermions can be bound to a defect at zero energy, and then the combined objects are called Majorana bound states or Majorana zero modes. This name is more appropriate than Majorana fermion (although the distinction is not always made in the literature), because the statistics of these objects is no longer fermionic. Instead, the Majorana bound states are an example of non-abelian anyons: interchanging them changes the state of the system in a way that depends only on the order in which the exchange was performed. The non-abelian statistics that Majorana bound states possess allows them to be used as a building block for a topological quantum computer.
A quantum vortex in certain superconductors or superfluids can trap midgap states, which is one source of Majorana bound states. Shockley states at the end points of superconducting wires or line defects are an alternative, purely electrical, source. An altogether different source uses the fractional quantum Hall effect as a substitute for the superconductor.
Experiments in superconductivity
In 2008, Fu and Kane provided a groundbreaking development by theoretically predicting that Majorana bound states can appear at the interface between topological insulators and superconductors. Many proposals of a similar spirit soon followed, where it was shown that Majorana bound states can appear even without any topological insulator. An intense search to provide experimental evidence of Majorana bound states in superconductors first produced some positive results in 2012. A team from the Kavli Institute of Nanoscience at Delft University of Technology in the Netherlands reported an experiment involving indium antimonide nanowires connected to a circuit with a gold contact at one end and a slice of superconductor at the other. When exposed to a moderately strong magnetic field the apparatus showed a peak electrical conductance at zero voltage that is consistent with the formation of a pair of Majorana bound states, one at either end of the region of the nanowire in contact with the superconductor. Simultaneously, a group from Purdue University and University of Notre Dame reported observation of a fractional Josephson effect (decrease of the Josephson frequency by a factor of 2) in indium antimonide nanowires connected to two superconducting contacts and subjected to a moderate magnetic field, another signature of Majorana bound states. Bound states with zero energy were soon detected by several other groups in similar hybrid devices, and the fractional Josephson effect was observed in the topological insulator HgTe with superconducting contacts.
The aforementioned experiments mark possible verifications of independent 2010 theoretical proposals from two groups predicting the solid state manifestation of Majorana bound states in semiconducting wires proximitized to superconductors. However, it was also pointed out that some other trivial non-topological bounded states could highly mimic the zero voltage conductance peak of Majorana bound state. The subtle relation between those trivial bound states and Majorana bound states was reported by the researchers in Niels Bohr Institute, who can directly "watch" coalescing Andreev bound states evolving into Majorana bound states, thanks to a much cleaner semiconductor-superconductor hybrid system.
In 2014, evidence of Majorana bound states was also observed using a low-temperature scanning tunneling microscope, by scientists at Princeton University. These experiments resolved the predicted signatures of localized Majorana bound states – zero energy modes – at the ends of ferromagnetic (iron) chains on the surface of a superconductor (lead) with strong spin-orbit coupling. Follow up experiments at lower temperatures probed these end states with higher energy resolution and showed their robustness when the chains are buried by layers of lead. Experiments with spin-polarized STM tips have also been used, in 2017, to distinguish these end modes from trivial zero energy modes that can form due to magnetic defects in a superconductor, providing important evidence (beyond zero bias peaks) for the interpretation of the zero energy mode at the end of the chains as a Majorana bound state. More experiments finding evidence for Majorana bound states in chains have also been carried out with other types of magnetic chains, particularly chains manipulated atom-by-atom to make a spin helix on the surface of a superconductor.
Majorana fermions may also emerge as quasiparticles in quantum spin liquids, and were observed by researchers at Oak Ridge National Laboratory, working in collaboration with Max Planck Institute and University of Cambridge on 4 April 2016.
Chiral Majorana fermions were claimed to be detected in 2017 by Q.L. He et al., in a quantum anomalous Hall effect/superconductor hybrid device. In this system, Majorana fermions edge mode will give a rise to a conductance edge current. Subsequent experiments by other groups, however, could not reproduce these findings. In November 2022, the article by He et al. was retracted by the editors, because "analysis of the raw and published data revealed serious irregularities and discrepancies".
On 16 August 2018, a strong evidence for the existence of Majorana bound states (or Majorana anyons) in an iron-based superconductor, which many alternative trivial explanations cannot account for, was reported by Ding's and Gao's teams at Institute of Physics, Chinese Academy of Sciences and University of Chinese Academy of Sciences, when they used scanning tunneling spectroscopy on the superconducting Dirac surface state of the iron-based superconductor. It was the first time that indications of Majorana particles were observed in a bulk of pure substance. However, more recent experimental studies in iron-based superconductors show that topologically trivial Caroli–de Gennes–Matricon states and Yu–Shiba–Rusinov states can exhibit qualitative and quantitative features similar to those Majorana zero modes would make. In 2020 similar results were reported for a platform consisting of europium sulfide and gold films grown on vanadium.
Majorana bound states in quantum error correction
One of the causes of interest in Majorana bound states is that they could be used in quantum error correcting codes. This process is done by creating so called 'twist defects' in codes such as the toric code which carry unpaired Majorana modes. The Majoranas are then "braided" by being physically moved around each other in 2D sheets or networks of nanowires. This braiding process forms a projective representation of the braid group.
Such a realization of Majoranas would allow them to be used to store and process quantum information within a quantum computation. Though the codes typically have no Hamiltonian to provide suppression of errors, fault-tolerance would be provided by the underlying quantum error correcting code.
Majorana bound states in Kitaev chains
In February 2023 a study reported the realization of a "poor man's" Majorana that is a Majorana bound state that is not topologically protected and therefore only stable for a very small range of parameters. It was obtained in a Kitaev chain consisting of two quantum dots in a superconducting nanowire strongly coupled by normal tunneling and Andreev tunneling with the state arising when the rate of both processes match confirming a prediction of Alexei Kitaev.
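The idealized lattice model behind these proposals can be written down in a few lines. The sketch below builds the Bogoliubov–de Gennes matrix of an open Kitaev chain and shows that, at the "sweet spot" $t = \Delta$, $\mu = 0$, two eigenvalues sit at zero energy, corresponding to the unpaired Majorana end modes. It is a textbook toy model, not a simulation of the two-quantum-dot device described above; the parameter names and values are illustrative.

```python
import numpy as np

def kitaev_bdg(n_sites, mu, t, delta):
    """Bogoliubov-de Gennes matrix of an open Kitaev chain (toy-model sketch).

    H = sum_j [ -mu c_j^dag c_j - t (c_j^dag c_{j+1} + h.c.) + delta (c_j c_{j+1} + h.c.) ]
    written in the particle-hole basis (c_1..c_n, c_1^dag..c_n^dag).
    """
    h = -mu * np.eye(n_sites)                      # on-site chemical potential
    for j in range(n_sites - 1):
        h[j, j + 1] = h[j + 1, j] = -t             # nearest-neighbour hopping
    d = np.zeros((n_sites, n_sites))
    for j in range(n_sites - 1):
        d[j, j + 1], d[j + 1, j] = delta, -delta   # antisymmetric p-wave pairing
    return np.block([[h, d], [-d, -h]])            # h real symmetric, d real antisymmetric

# At the sweet spot t = delta, mu = 0 the open chain hosts two Majorana end modes,
# visible as a pair of (numerically) zero BdG eigenvalues.
energies = np.linalg.eigvalsh(kitaev_bdg(n_sites=20, mu=0.0, t=1.0, delta=1.0))
print(np.sort(np.abs(energies))[:4])
```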
References
Further reading
Fermions
Quantum field theory | Majorana fermion | [
"Physics",
"Materials_science"
] | 2,758 | [
"Quantum field theory",
"Matter",
"Fermions",
"Quantum mechanics",
"Condensed matter physics",
"Subatomic particles"
] |
20,586,364 | https://en.wikipedia.org/wiki/High-intensity%20radiated%20field | A high-intensity radiated field (HIRF) is radio-frequency energy of a strength sufficient to adversely affect either a living organism or the performance of a device subjected to it. A microwave oven is an example of this principle put to controlled, safe use. Radio-frequency (RF) energy is non-ionizing electromagnetic radiation – its effects on tissue are through heating.
Electronic components are affected via rectification of the RF and a corresponding shift in the bias points of the components in the field.
The U.S. Food and Drug Administration (FDA), and U.S. Federal Communications Commission (FCC) set limits for the amounts of RF energy exposure permitted in a standard work-day.
History
The U.S. Federal Aviation Administration (FAA) and industry EMC leaders have periodically met to define the adequacy of protection requirements for civil avionics against outside interference since 1980. In 1986, the FAA Technical Center contracted for a definition of the electromagnetic environment for civil aviation. This study was performed by the Electromagnetic Compatibility Analysis Center (ECAC). The study showed levels of exposure to this threat as much as four orders of magnitude (10,000 times) higher than the then-current civil aircraft EMC susceptibility test certification standard of 1 volt/meter (DO-160). This environment was also two orders of magnitude (100 times) higher than the then-prevailing military avionics systems test standards (MIL-STD 461/462).
Units of measurement
An RF electromagnetic wave has both an electric and a magnetic component (electric field and magnetic field), and it is often convenient to express the intensity of the RF environment at a given location in terms of units specific to each component. For example, the unit "volts per meter" (V/m) is used to express the strength of the electric field (electric "field strength"), and the unit "amperes per meter" (A/m) is used to express the strength of the magnetic field (magnetic "field strength"). Another commonly used unit for characterizing the total electromagnetic field is "power density." Power density is most appropriately used when the point of measurement is far enough away from an antenna to be located in the "far-field" zone of the antenna.
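In the far field the two descriptions are related through the impedance of free space, so one can convert between them directly. The sketch below assumes a plane wave (far-field conditions); the field levels used are arbitrary examples, with 1 V/m corresponding to the DO-160 certification level mentioned earlier.

```python
Z0 = 376.73  # impedance of free space, ohms

def power_density(e_field_v_per_m):
    """Plane-wave (far-field) power density in W/m^2 for a given electric field strength."""
    return e_field_v_per_m ** 2 / Z0

for e in (1.0, 10.0, 200.0):   # electric field strengths in V/m
    h = e / Z0                 # corresponding magnetic field strength, A/m
    print(f"{e:6.1f} V/m  ->  H = {h * 1e3:7.2f} mA/m,  S = {power_density(e):.3e} W/m^2")
```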
See also
Radiation
Radio frequency
References
Bibliography
Radiation effects | High-intensity radiated field | [
"Physics",
"Materials_science",
"Engineering"
] | 479 | [
"Physical phenomena",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects"
] |
20,587,582 | https://en.wikipedia.org/wiki/Live-line%20working | In electrical engineering, live-line working, also known as hotline maintenance, is the maintenance of electrical equipment, often operating at high voltage, while the equipment is energised. Although this is more hazardous for personnel than working on electrical equipment with the power off, live-line maintenance techniques are used in the electric power distribution industry to avoid the disruption and high economic costs of having to turn off power to customers to perform essential periodic maintenance on transmission lines and other equipment.
The first techniques for live-line working were developed in the early years of the 20th century, and both equipment and work methods were later refined to deal with increasingly higher voltages. In the 1960s, methods were developed in the laboratory to enable field workers to come into direct contact with high voltage lines. Such methods can be applied to enable safe work at the highest transmission voltages.
Background
In general, it is impossible to determine visually whether electrical equipment is energized; in any event, it is often necessary to maintain or repair circuits while they are in operation. In addition, at high voltages it is not necessary to come into direct contact with energized equipment to receive a shock, because an arc can jump from the equipment to a tool or part of the body. Materials such as rubber, while excellent insulators, are also subject to electrical failure at high voltages.
Methods
In general, there are three methods of live-line working, each of which helps workers avoid its considerable hazards. In various ways, they all serve to prevent current from flowing from the live equipment through the worker.
Hot stick or Live Line Tool
Hot sticks are used in live line work by having the worker remain at a specified distance from the live parts and carry out the work by means of an insulating stick. Tools can be attached to the stick, allowing work to be performed with the worker safely away from the live conductors.
Insulating Gloves or Rubber Gloves
A live line worker is electrically protected by insulating gloves and other insulating equipment, and carries out the work in direct mechanical contact with live parts.
Barehand or Potential
The barehanded approach has a live line worker performing the work in direct electric contact with live parts. Before contact, the worker's body is raised to the same electric potential as the live parts, and then held there by electric connection, while maintaining suitable isolation from the surroundings which are at different potentials, like the ground, other people or trees. Because the worker and the work are at the same potential, no current flows through the worker.
Unearthed or De-energised
Some organizations additionally consider working on unearthed de-energised equipment to be another form of live-line working. This is because the line might become inadvertently charged (e.g. through a back-charged transformer, possibly as a result of an improperly connected, inadequately isolated emergency generator at a customer facility), or inductively coupled from an adjacent in-service line. To prevent this, the line is first grounded via a clamp known as a bond or drain earth. Once this is in place, further work is not considered to be live-line working.
Hot stick
Hot-stick working appeared in the second decade of the 20th century, when insulating poles made from baked wood were used for tasks such as replacing fuses, replacing post insulators, and transferring lines onto temporary supports. The sticks enabled the linemen to carry out the work without infringing on the minimum clearance distances from live equipment. As experience with the techniques developed, then the operating voltages at which the work was performed increased. With the advent of fibreglass poles in the late 1950s, which neither split nor soaked up rainwater, utilities were prepared to carry out hot-stick working to their highest operating voltages, perhaps 765 kV.
Tools, such as hooks or socket wrenches can be mounted at the end of the pole. More sophisticated poles can accept pneumatically or hydraulically driven power tools which allow, for example, bolts to be unscrewed remotely. A rotary wire brush allows a terminal to be scoured clean before a connection is made. However, a worker's dexterity is naturally reduced when operating tools at the end of a pole that is several metres long.
Insulating glove or rubber glove working
Usually applied for work above 1 kV AC or 1.5 kV DC.
The primary classes are:
Class 00 - phase to phase working voltage 500 V
Class 0 - phase to phase working voltage 1.0 kV
Class 1 - phase to phase working voltage 7.5 kV
Class 2 - phase to phase working voltage 17 kV
Class 3 - phase to phase working voltage 26.5 kV
Class 4 - phase to phase working voltage 36 kV
Gloves protect the worker from exposure to the live part being worked upon sometimes referred to as the 1st point of contact; the point where current would enter the body should an inadvertent contact be made.
Covers of insulating material such as blankets and linehose are employed in rubber glove working to protect the worker from exposure to a part at a different potential sometimes referred to as the 2nd point of contact; the point where current would leave the body should an inadvertent contact be made.
Bare hand
Bare-hand, or potential working involves placing the worker in direct electrical contact with an energized overhead line. The worker might work alongside the lines, from a platform that is suspended from them, or may sit or stand directly on the line itself. In all cases, the worker's body is maintained at the same voltage as the line. It is imperative that the worker maintain appropriate and adequate limits of approach to any part at a different potential. Such techniques were first used in 1960.
There are a number of ways in which the worker can access the live parts:
The worker can access from a specialist type of mobile elevating work platform (MEWP) termed an insulating aerial device (IAD) which has a boom of insulating material and which all conductive parts at the platform end are bonded together. There are other requirements for safe working such as gradient control devices, a means of preventing a vacuum in the hydraulic lines, etc.
The worker can stand on an insulating ladder which is maneuvered to the line by means of non-conductive rope.
The worker is lowered from a helicopter and transfers themself to the line.
The worker is brought alongside the wire in a hovering helicopter and works from that position.
As the worker approaches the line, an arc will form between the line and the worker as they are being charged. This arc can be debilitating, and the worker must immediately bond themself electrically to the line to prevent further arcing. A worker may use a conducting wand during the approach to first make the connection. Once on the line, the worker is safe from shock as both the lineworker and the wire are at the same electric potential, and hence no current passes through their body. This is the same principle as that which allows birds to safely sit on power lines.
When the work is completed, the process is reversed to remove the worker safely from the wire. Barehand working provides the lineworker with greater dexterity than the hot stick method, and may be the preferred option if conditions permit it. With this technique, insulator strings, conductor spacers and vibration dampers can be replaced, or lines spliced, without any loss of supply.
The strong electric field surrounding charged equipment is enough to drive a current of approximately 15 μA for each kV·m⁻¹ through a human body. To prevent this, hot-hand workers are usually required to wear a Faraday suit. This is a set of overalls made from or woven throughout with conducting fibers. The suit is in effect a wearable Faraday cage, which equalizes the potential over the body, and ensures there is no through-tissue current. Conducting gloves, even conducting socks, are also necessary, leaving only the face uncovered.
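Using the figure quoted above, the induced body current at a given unperturbed field strength can be estimated directly. The sketch below is only that one-line scaling; the example field strengths are illustrative, and real exposure depends strongly on geometry and on the conductive suit.

```python
CURRENT_PER_FIELD = 15e-6   # induced body current per kV/m of field strength (A), figure quoted above

def body_current_uA(field_kv_per_m):
    """Approximate induced current (microamperes) for an unperturbed field in kV/m."""
    return CURRENT_PER_FIELD * field_kv_per_m * 1e6

for field in (5, 10, 20):   # kV/m, illustrative values near high-voltage conductors
    print(f"{field:3d} kV/m -> about {body_current_uA(field):.0f} uA through the body")
```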
There is little practical upper voltage limit for hot-hand working, and it has been successfully performed at some of the highest transmission operating voltages in the world, such as the Russian 1150 kV system.
Helicopter
A lineworker wearing a Faraday suit can work on live, high-power lines by being transported to the lines in a helicopter. The worker can perform maintenance sitting on an outrigger platform attached to the helicopter while the aircraft hovers next to the line. When approaching the line a long wand is touched to the line to equalize the potential of the aircraft to that of the line, then a breakaway bonding wire connected to the helicopter's frame is attached to the line during work. Alternatively the worker can transfer to the wires from the helicopter and crawl down the wires, then be picked up by the helicopter after the work is completed.
Eye protection
An electric arc is extremely bright, including in the ultraviolet, and can cause arc eye, a painful and potentially blinding condition. Workers may be provided with appropriately tinted goggles that protect their vision in the event of a flash, and provide defence against debris ejected by an arc.
See also
Lineworker
References
Electric power
Electrical safety | Live-line working | [
"Physics",
"Engineering"
] | 1,858 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
20,589,034 | https://en.wikipedia.org/wiki/Mountain%20climbing%20problem | In mathematics, the mountain climbing problem is a mathematical problem that considers a two-dimensional mountain range (represented as a continuous function), and asks whether it is possible for two mountain climbers starting at sea level on the left and right sides of the mountain to meet at the summit, while maintaining equal altitudes at all times. It has been shown that when the mountain range has only a finite number of peaks and valleys, it is always possible to coordinate the climbers' movements, but this does not necessarily hold when it has an infinite number of peaks and valleys.
This problem was named and posed in this form by , but its history goes back to , who solved a version of it. The problem has been repeatedly rediscovered and solved independently in different contexts by a number of people (see references below).
Since the 1990s, the problem was shown to be connected to the weak Fréchet distance of curves in the plane, various planar motion planning problems in computational geometry, the inscribed square problem, semigroup of polynomials, etc. The problem was popularized in the article by , which received the Mathematical Association of America's Lester R. Ford Award in 1990.
Analysis
The problem can be rephrased as asking whether, for a given pair of continuous functions $f$ and $g$ mapping $[0,1]$ to $[0,1]$ with $f(0) = g(0) = 0$ and $f(1) = g(1) = 1$ (corresponding to rescaled versions of the left and right faces of the mountain), it is possible to find another pair of functions $s$ and $t$ from $[0,1]$ to $[0,1]$ with $s(0) = t(0) = 0$ and $s(1) = t(1) = 1$ (the climbers' horizontal positions at time $u$) such that the function compositions $f \circ s$ and $g \circ t$ (the climbers' altitudes at time $u$) are the same function.
Finite number of peaks and valleys
When $f$ and $g$ have only a finite number of peaks and valleys (local maxima and local minima), it is always possible to coordinate the climbers' movements. This can be shown by drawing out a sort of game tree: an undirected graph $G$ with one vertex labeled $(x, y)$ whenever $f(x) = g(y)$ and either $x$ or $y$ is a local maximum or minimum. Two vertices will be connected by an edge if and only if one node is immediately reachable from the other; the degree of a vertex will be greater than one only when the climbers have a non-trivial choice to make from that position.
At the vertex $(0, 0)$, the degree is one: the only possible direction for both climbers to go is onto the mountain. Similarly, at $(1, 1)$ the degree is one, because both climbers can only return down the mountain.
At a vertex where one climber is at a peak or a valley and the other one is not, then the degree is two: the climber at the peak or valley has two choices of which way to go, and the other climber can only go one way.
At a vertex where both climbers are at peaks or both climbers are at valleys, the degree is four: both climbers may choose independently of each other which direction to go.
At a vertex where one climber is at a peak and the other is at a valley, the degree is zero: such positions are unreachable. (That is, if such a vertex exists, then the graph is not connected.)
According to the handshaking lemma, every connected component of an undirected graph has an even number of odd-degree vertices. Since the only odd-degree vertices in all of $G$ are $(0, 0)$ and $(1, 1)$, these two vertices must belong to the same connected component. That is, $G$ must contain a path from $(0, 0)$ to $(1, 1)$. That path tells how to coordinate the climbers' movement to the summit.
It has been observed that for a mountain with $n$ peaks and valleys the length of this path (roughly corresponding to the number of times one or the other climber must "backtrack") can be as large as quadratic in $n$.
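The graph construction above can be turned directly into a search. The sketch below works with a discretized variant in which each face is a sequence of altitudes changing by exactly one unit per step, so a state is simply a pair of breakpoint indices at equal altitude, and a breadth-first search finds a coordinated route to the summit. The profiles used are illustrative.

```python
from collections import deque

def coordinate(f, g):
    """BFS over the graph of equal-altitude position pairs (discrete variant).

    f, g: altitude sequences for the left and right faces, each changing by
    exactly +/-1 per step, with f[0] == g[0] and f[-1] == g[-1] (the summit).
    Returns a list of (i, j) position pairs taking both climbers to the top,
    or None if no coordination exists.
    """
    start, goal = (0, 0), (len(f) - 1, len(g) - 1)
    parent = {start: None}
    queue = deque([start])
    while queue:
        i, j = queue.popleft()
        if (i, j) == goal:                      # reconstruct the coordinated route
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for di in (-1, 1):                      # both climbers move one step at a time
            for dj in (-1, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < len(f) and 0 <= nj < len(g) \
                        and f[ni] == g[nj] and (ni, nj) not in parent:
                    parent[(ni, nj)] = (i, j)
                    queue.append((ni, nj))
    return None

# Left face has one intermediate valley; right face is monotone, so the right
# climber must backtrack while the left climber crosses the valley.
left  = [0, 1, 2, 1, 2, 3]
right = [0, 1, 2, 3]
print(coordinate(left, right))
```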
This technique breaks down when $f$ and $g$ have an infinite number of local extrema. In that case, $G$ would not be a finite graph, so the handshaking lemma would not apply: $(0, 0)$ and $(1, 1)$ might be connected but only by a path with an infinite number of vertices, possibly taking the climbers "infinite time" to traverse.
Infinite number of peaks and valleys
The following result is due to :
Suppose $f$ and $g$ are continuous functions from $[0,1]$ to $[0,1]$ with $f(0) = g(0) = 0$ and $f(1) = g(1) = 1$, and such that neither function is constant on an interval. Then there exist continuous functions $s$ and $t$ from $[0,1]$ to $[0,1]$ with $s(0) = t(0) = 0$, $s(1) = t(1) = 1$, and such that $f \circ s = g \circ t$, where "$\circ$" stands for a composition of functions.
On the other hand, it is not possible to extend this result to all continuous functions. For, if $f$ has constant height over an interval while $g$ has infinitely many oscillations passing through the same height, then the first climber may be forced to go back and forth over that interval infinitely many times, making his path to the summit infinitely long. gives a concrete example involving .
Notes
References
External links
The Parallel Mountain Climbers Problem, a description and a Java applet solution.
Articles containing proofs
Discrete geometry
Recreational mathematics
Mathematical problems | Mountain climbing problem | [
"Mathematics"
] | 986 | [
"Discrete mathematics",
"Recreational mathematics",
"Discrete geometry",
"Articles containing proofs",
"Mathematical problems"
] |
20,589,543 | https://en.wikipedia.org/wiki/Mixing%20length%20model | In fluid dynamics, the mixing length model is a method attempting to describe momentum transfer by turbulence Reynolds stresses within a Newtonian fluid boundary layer by means of an eddy viscosity. The model was developed by Ludwig Prandtl in the early 20th century. Prandtl himself had reservations about the model, describing it as, "only a rough approximation,"
but it has been used in numerous fields ever since, including atmospheric science, oceanography and stellar structure. Also, Ali and Dey hypothesized an advanced concept of mixing instability.
Physical intuition
The mixing length is conceptually analogous to the concept of mean free path in thermodynamics: a fluid parcel will conserve its properties for a characteristic length, ξ′, before mixing with the surrounding fluid. Prandtl described the mixing length as the distance a mass of fluid travels, keeping its original properties, before it blends in with the neighbouring fluid.
For example, temperature, T, is conserved over a certain distance as a parcel moves across a temperature gradient. The fluctuation in temperature that the parcel experiences throughout the process is T′. So T′ can be seen as the temperature deviation from its surrounding environment after the parcel has moved over this mixing length ξ′.
Mathematical formulation
To begin, we must first be able to express quantities as the sums of their slowly varying components and fluctuating components.
Reynolds decomposition
This process is known as Reynolds decomposition. Temperature can be expressed as:
T = T̄ + T′,
where T̄ is the slowly varying component and T′ is the fluctuating component.
The fluctuating component T′ can be expressed in terms of the mixing length by considering a fluid parcel moving in the z-direction:
T′ = −ξ′ ∂T̄/∂z.
The fluctuating components of velocity, u′, v′, and w′, can also be expressed in a similar fashion, for example
u′ = −ξ′ ∂ū/∂z,
although the theoretical justification for doing so is weaker, as the pressure gradient force can significantly alter the fluctuating components. Moreover, for the case of vertical velocity, this approximation holds only in a neutrally stratified fluid.
Taking the product of horizontal and vertical fluctuations and averaging gives us:
⟨u′w′⟩ = −ξ′² |∂ū/∂z| ∂ū/∂z.
The eddy viscosity is defined from the equation above, via ⟨u′w′⟩ = −K_m ∂ū/∂z, as:
K_m = ξ′² |∂ū/∂z|,
so we have the eddy viscosity, K_m, expressed in terms of the mixing length, ξ′.
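As an illustration of how this closure is applied in practice, the sketch below evaluates the eddy viscosity K_m = ξ′² |∂ū/∂z| for a logarithmic mean-velocity profile. The log-law profile, the friction velocity, and the choice of mixing length ξ′ = κz (with von Kármán constant κ ≈ 0.4) are standard textbook assumptions used here only as an example, not part of the derivation above.

```python
import numpy as np

# Illustrative use of the mixing-length closure: the eddy viscosity is
# K_m = l**2 * |dU/dz| and the modelled Reynolds stress is -K_m * dU/dz.
kappa, u_star = 0.40, 0.3            # von Karman constant, friction velocity (m/s)
z = np.linspace(0.1, 10.0, 100)      # heights above the wall (m)
U = (u_star / kappa) * np.log(z / 0.01)   # assumed log-law mean velocity profile

dUdz = np.gradient(U, z)             # mean shear
l_mix = kappa * z                    # mixing length grows with distance from wall
K_m = l_mix**2 * np.abs(dUdz)        # eddy viscosity
stress = -K_m * dUdz                 # modelled Reynolds stress <u'w'>

print(K_m[:3], stress[:3])
```

For this particular profile the modelled Reynolds stress comes out approximately constant with height and equal to minus the square of the friction velocity, the expected behaviour in a constant-stress wall layer.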
See also
Law of the wall
Reynolds stress equation model
References
Oceanography
Turbulence | Mixing length model | [
"Physics",
"Chemistry",
"Environmental_science"
] | 428 | [
"Hydrology",
"Turbulence",
"Applied and interdisciplinary physics",
"Oceanography",
"Fluid dynamics"
] |
8,002,891 | https://en.wikipedia.org/wiki/Photomixing | Photomixing is the generation of continuous wave terahertz radiation from two lasers. The beams are mixed together and focused onto a photomixer device which generates the terahertz radiation.
It is technologically significant because there are few sources capable of providing radiation in this waveband; others include frequency-multiplied electronic/microwave sources, quantum cascade lasers, and ultrashort pulsed lasers with photoconductive switches as used in terahertz time-domain spectroscopy. The advantages of this technique are that it is continuously tunable over the frequency range from 300 GHz to 3 THz (10 cm⁻¹ to 100 cm⁻¹; 1 mm to 0.1 mm), and spectral resolutions on the order of 1 MHz can be achieved. However, the achievable power is on the order of 10⁻⁸ W.
Principle
Two continuous wave lasers with identical polarisation are required; the lasers, with frequencies ω1 and ω2, are spatially overlapped to generate a terahertz beatnote. The co-linear lasers are then used to illuminate an ultra-fast semiconductor material such as GaAs. The photonic absorption and the short charge carrier lifetime result in modulation of the conductivity at the desired terahertz frequency ωTHz = ω1 − ω2. An applied electric field allows the conductivity variation to be converted into a current which is radiated by a pair of antennae. A typical photoconductive device or 'photomixer' is made from low-temperature GaAs with a patterned metalized layer which is used to form an electrode array and radiating antenna.
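As a quick numerical illustration of the difference-frequency relation ωTHz = ω1 − ω2, the sketch below converts two assumed laser wavelengths into their beat frequency; the wavelengths are example values, not parameters of any particular photomixer.

```python
# Illustrative only: terahertz beat (difference) frequency from two laser lines.
c = 299_792_458.0          # speed of light in m/s

def beat_frequency(lambda1_nm, lambda2_nm):
    """Return the difference frequency |f1 - f2| (in Hz) of two laser lines."""
    f1 = c / (lambda1_nm * 1e-9)
    f2 = c / (lambda2_nm * 1e-9)
    return abs(f1 - f2)

# Two diode lasers around 780 nm separated by about 2 nm beat at roughly 1 THz.
print(beat_frequency(780.0, 782.0) / 1e12, "THz")
```

Tuning either laser by a fraction of a nanometre shifts the beat note by hundreds of gigahertz, which is what makes the technique continuously tunable across the terahertz range.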
High resolution spectrometer
The photomixing source can then form the basis of a laser spectrometer which can be used to examine the THz signature of various subjects such as gases, liquids or solid materials.
The instrument can be divided into the following functional units:
Laser sources which provide a THz beatnote in the optical domain. These are usually two near-infrared lasers, possibly followed by an optical amplifier.
The photomixer device converts the beatnote into THz radiation, often emitted into free space by an integrated antenna.
A THz propagation path, depending on the application suitable focusing elements are used to collimate the THz beam and allow it to pass through the sample under study.
Detector: with the relatively low levels of available power, on the order of 1 μW, a sensitive detector is required to ensure a reasonable signal-to-noise ratio. Si bolometers provide a solution for incoherent instruments. Alternatively a second photomixer device can be used as a detector and has the advantage of allowing coherent detection.
References
Francis Hindle, Arnaud Cuisset, Robin Bocquet, Gaël Mouret, "Continuous-wave terahertz by photomixing: applications to gas phase pollutant detection and quantification", Comptes Rendus Physique (2007).
Electromagnetic spectrum
Terahertz technology | Photomixing | [
"Physics"
] | 601 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Terahertz technology"
] |
8,003,258 | https://en.wikipedia.org/wiki/Dol%20hareubang | A (Jejuan: ), alternatively , or , is a type of traditional volcanic rock statue from Jeju Island, Korea.
It is not known when the statues first began to be made; various theories exist for their origin. They possibly began to be made at latest 500 years ago, since the early Joseon period. There are either 47 or 48 original pre-modern statues that are known to exist; most of them are located on Jeju Island.
The statues are traditionally placed in front of gates, as symbolic projections of power and as guardians against evil spirits. They were also symbols and ritual objects for fertility. The statues have been compared to jangseung, traditional wooden totem poles around Korea whose function was similarly to ward off bad spirits. They are now considered symbols of Jeju Island. Recreations of them in miniature and in full size have since been created.
Names
Dol hareubang is a term in the Jeju language, and means "stone grandfather". The term was reportedly not common until recently, and was mostly used by children. It was decided by the Jeju Cultural Property Committee in 1971 to make dol hareubang the official term for the statue, and this name has since become the predominant one.
The statues have gone by a significant variety of names that were possibly regional and dependent on the characteristics of the statues. These names include useongmok, museongmok, ujungseok, beoksumeori, dolyeonggam, sumunjang, janggunseok, dongjaseok, mangjuseok, and ongjungseok. The name useongmok was possibly the most common.
Description
Each dol hareubang has different features and sizes, but they tend to share some commonalities. They are made of volcanic stone, and often depict figures wearing a round hat. This round hat is said to make the statue phallic, and thus a symbol of fertility. They tend to have large eyes, closed mouths, and one shoulder raised higher than the other. Their expressions have been described as stern, dignified, or humorous. Some have big ears, and some have hands placed either in front, on their stomachs, or around their backs.
The statues were often erected at the entrance of fortresses (and thus at the boundaries of settlements), facing each other. They often had grooves for placing wooden logs in. The position of these logs signaled whether entrances were open or closed to passersby, as per the jeongnang system used around Jeju. The statues projected images of power and security, and also served a superstitious function in warding off bad spirits. Some people reportedly paid their respects to the statues whenever they passed.
There are some commonalities shared between the dol hareubang of the three Joseon-era historical regions of Jeju, although there is still intra-region variance. Dol hareubang in Jeju-seong and Jeongeuihyeon-seong tend to stand on stone platforms called giseok, but those in Daejeonghyeon-seong do not.
There are reportedly either 47 or 48 extant pre-modern dol hareubang. In Jeju City, there are 21. In Seongeup-ri in Seogwipo, there are 12. Across Inseong-ri, Anseong-ri, and Boseong-ri there are 12. In the National Folk Museum of Korea in Seoul, there are 2 that were originally from Jeju City. It is reportedly not known with certainty when most of these statues were produced. The statues were reportedly moved around over time, which caused wear-and-tear and made it difficult to place where they were originally from.
They also served other superstitious functions. One folk belief had it that, if a woman was experiencing issues with infertility, she could secretly take parts of a statue's nose, grind it into a powder, then consume the powder to improve her fertility. Many statues reportedly have worn noses due to this belief. Some reportedly believe that touching the nose of the statue improves fertility.
History
The origin of dol hareubangs is unclear, with at least three theories surrounding it. Records surrounding the number and location of the statues from before 1914 are reportedly sparse. One theory has it that a sea-faring people brought the statues to Jeju. A second theory argues that the statues developed from jangseung or beoksu statues.
Around 1416 (during the Joseon period), 6 dol hareubang in three pairs reportedly existed on the island. By 1754, there were reportedly 48 statues; 24 of these were at Jeju-mok (now Jeju City), with 4 pairs each at the fortress's west, south, and east gates.
Some scholars argue the earliest known dol hareubang in their current form were created in 1754. There is a record that dol hareubang (called ongjungseok) statues were built in 1754 in Jeju-mok. The creation of the statues was reportedly motivated by a belief that, after several famines in the reigns of kings Sukjong and Yeongjo, vengeful spirits were roaming and tormenting the living. The head of Jeju-mok then ordered that the statues be built. It is not clear whether these were the earliest occurrences of the statues.
During the 1910–1945 Japanese colonial period, the statues were reportedly disregarded and moved around. This pattern reportedly continued into the rapid urban development after the liberation of Korea. Research on the statues occurred in the 1960s, and two of them were moved to the National Folk Museum of Korea in 1968.
In recent years, the statue has become a symbol of Jeju Island. The first time a dol hareubang souvenir was created was reportedly in 1963, by sculptor Song Jong-Won. Song made a tall replica of a statue at the south gate of Jeju-mok. Tourist goods now widely feature the statues, with miniature to full-sized statues being sold.
During the 1991 Soviet-South Korean summit on Jeju Island, Soviet leader Mikhail Gorbachev was given a dol hareubang as a gift. In 2002, a statue was gifted to Laizhou in China, and in 2003 another was gifted to the city hall of Santa Rosa, California in the United States.
See also
Kurgan stelae
Korean shamanism
Shigandang
Seonangdang
Moai
Religion in Korea
References
Sources
External links
Religion in Korea
Religion in South Korea
Culture of Korea
Colossal statues
Stone sculptures
Outdoor sculptures in South Korea
Korean folk religion
Korean traditions
Culture of Jeju Province | Dol hareubang | [
"Physics",
"Mathematics"
] | 1,401 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
8,003,764 | https://en.wikipedia.org/wiki/Bra%20size | Bra size (also known as brassiere measurement or bust size) indicates the size characteristics of a bra. While there is a number of bra sizing systems in use around the world, the bra sizes usually consist of a number, indicating the size of the band around the woman's torso, and one or more letters that indicate the breast cup size. Bra cup sizes were invented in 1932 while band sizes became popular in the 1940s. For convenience, because of the impracticality of determining the size dimensions of each breast, the volume of the bra cup, or cup size, is based on the difference between band length and over-the-bust measurement.
Manufacturers try to design and manufacture bras that correctly fit the majority of women, while individual women try to identify correctly fitting bras among different styles and sizing systems.
The shape, size, position, symmetry, spacing, firmness, and sag of individual women's breasts vary considerably. Manufacturers' bra size labelling systems vary from country to country because no international standards exist. Even within a country, one study found that the bra size label was consistently different from the measured size. As a result of all these factors, about 25% of women have a difficult time finding a properly fitted bra, and some women choose to buy custom-made bras due to the unique shape of their breasts.
Measurement method origins
On 21 November 1911, Parisienne Madeleine Gabeau received a United States patent for a brassiere with soft cups and a metal band that supported and separated the breasts. To avoid the prevailing fashion that created a single "monobosom", her design provided: "...that the edges of the material d may be carried close along the inner and under contours of the breasts, so as to preserve their form, I employ an outlining band of metal b which is bent to conform to the lower curves of the breast."
Cup design origins
The term "cup" was not used to describe bras until 1916 when two patents were filed.
In October 1932, S.H. Camp and Company was the first to use letters of the alphabet (A, B, C and D) to indicate cup size, although the letters represented how pendulous the breasts were and not their volume. Camp's advertising in the February 1933 issue of Corset and Underwear Review featured letter-labeled profiles of breasts. Cup sizes A to D were not intended to be used for larger-breasted women.
In 1935, Warner's introduced its Alphabet Bra with cup sizes from size A to size D. Their bras incorporated breast volume into its sizing, and continues to be the system in use today. Before long, these cup sizes got nicknames: egg cup, tea cup, coffee cup and challenge cup, respectively. Two other companies, Model and Fay-Miss (renamed in 1935 as the Bali Brassiere Company), followed, offering A, B, C and D cup sizes in the late 1930s. Catalogue companies continued to use the designations Small, Medium and Large through the 1940s. Britain did not adopt the American cups in 1933, and resisted using cup sizes for its products until 1948. The Sears Company finally applied cup sizes to bras in its catalogue in the 1950s.
However, though various manufacturers used the same descriptions of bra sizes (e.g., A to D, small large, etc.), there was no standardisation of what these descriptions actually measured, so that each company had its own standards.
Band measurement origins
Multiple hook and eye closures were introduced in the 1930s that enabled adjustment of bands. Prior to the widespread use of bras, the undergarment of choice for Western women was a corset. To help women meet the perceived ideal female body shape, corset and girdle manufacturers used a calculation called hip spring, the difference between the waist and hip measurements.
The band measurement system was created by U.S. bra manufacturers just after World War II.
Other innovations
The underwire was first added to a strapless bra in 1937 by André, a custom-bra firm. Patents for underwire-type devices in bras were issued in 1931 and 1932, but were not widely adopted by manufacturers until after World War II when metal shortages eased.
In the 1930s, Dunlop chemists were able to reliably transform rubber latex into elastic thread. After 1940, "whirlpool", or concentric stitching, was used to shape the cup structure of some designs. The synthetic fibres were quickly adopted by the industry because of their easy-care properties. Since a brassiere must be laundered frequently, easy-care fabric was in great demand.
Consumer fitting
For best results, the breasts should be measured twice: once when standing upright, once bending over at the waist with the breasts hanging down. If the difference between these two measurements is more than 10 cm, then the average is chosen for calculating the cup size. A number of reports, surveys and studies in different countries have found that between 80% and 85% of women wear incorrectly fitted bras.
In November 2005, Oprah Winfrey produced a show devoted to bras and bra sizes, during which she talked about research that eight out of ten women wear the wrong size bra.
Larger breasts and bra fit
Studies have revealed that the most common mistake made by women when selecting a bra was to choose too large a back band and too small a cup, for example, 38C instead of 34E, or 34B instead of 30D.
The heavier a person's build, the more difficult it is to obtain accurate measurements, as measuring tape sinks into the flesh more easily.
In a study conducted in the United Kingdom of 103 women seeking mammoplasty, researchers found a strong link between obesity and inaccurate back measurement. They concluded that "obesity, breast hypertrophy, fashion and bra-fitting practices combine to make those women who most need supportive bras the least likely to get accurately fitted bras."
One issue that complicates finding a correctly fitting bra is that band and cup sizes are not standardized, but vary considerably from one manufacturer to another, resulting in sizes that only provide an approximate fit. Women cannot rely on labeled bra sizes to identify a bra that fits properly. Scientific studies show that the current system of bra sizing may be inaccurate.
Manufacturers cut their bras differently, so, for example, two 34B bras from two companies may not fit the same person. Customers should pay attention to which sizing system is used by the manufacturer. The main difference is in how cup sizes increase, by 2 cm or 1 inch (= 2.54 cm, see below). Some French manufacturers also increase cup sizes by 3 cm. Unlike dress sizes, manufacturers do not agree on a single standard.
British bras currently range from A to LL cup size (with Rigby&Peller recently introducing bras by Elila which go up to US-N-Cup), while most Americans can find bras with cup sizes ranging from A to G. Some brands (Goddess, Elila) go as high as N, a size roughly equal to a British JJ-Cup. In continental Europe, Milena Lingerie from Poland produces up to cup R.
Larger sizes are usually harder to find in retail outlets. As the cup size increases, the labeled cup size of different manufacturers' bras tend to vary more widely in actual volume. One study found that the label size was consistently different from the measured size.
Even medical studies have attested to the difficulty of getting a correct fit. Research by plastic surgeons has suggested that bra size is imprecise because breast volume is not calculated accurately.
The use of the cup sizing and band measurement systems has evolved over time and continues to change. Experts recommend that women get fitted by an experienced person at a retailer offering the widest possible selection of bra sizes and brands.
Bad bra-fit symptoms
If the straps dig into the shoulder, leaving red marks or causing shoulder or neck pain, the bra band is not offering enough support. If breast tissue overflows the bottom of the bra, under the armpit, or over the top edge of the bra cup, the cup size is too small. Loose fabric in the bra cup indicates the cup size is too big. If the underwires poke the breast under the armpit or if the bra's center panel does not lie flat against the sternum, the cup size is too small. If the band rides up the torso at the back, the band size is too big. If it digs into the flesh, causing the flesh to spill over the edges of the band, the band is too small. If the band feels tight, this may be due to the cups being too small; instead of going up in band size a person should try going up in cup size. Similarly a band might feel too loose if the cup is too big. It is possible to test whether a bra band is too tight or too loose by reversing the bra on the wearer's torso so that the cups are at the back and then checking for fit and comfort. Generally, if the wearer must continually adjust the bra or experiences general discomfort, the bra is a poor fit and she should get a new fitting.
Obtaining best fit
Bra experts recommend that women, especially those whose cup sizes are D or larger, get a professional bra fitting from the lingerie department of a clothing store or a specialty lingerie store. However, even professional bra fitters in different countries including New Zealand and the United Kingdom produce inconsistent measurements of the same person. There is significant heterogeneity in breast shape, density, and volume. As such, current methods of bra fitting may be insufficient for this range of chest morphology.
A 2004 study by Consumers Reports in New Zealand found that 80% of department store bra fittings resulted in a poor fit. However, because manufacturer's standards widely vary, women cannot rely on their own measurements to obtain a satisfactory fit. Some bra manufacturers and distributors state that trying on and learning to recognize a properly fitting bra is the best way to determine a correct bra size, much like shoes.
A correctly fitting bra should meet the following criteria:
When viewed from the side, the edge of the chest band should be horizontal, should not ride up the back and should be firm but comfortable.
Each cup's underwire at the front should lie flat against the sternum (not the breast), along the inframammary fold, and should not dig into the chest or the breasts, rub or poke out at the front.
The breasts should be enclosed by the cups and there should be a smooth line where the fabric at the top of the cup ends.
The apex of the breast, the nipple, must be in the center of the cup.
The breast should not bulge over the top or out the sides of the cups, even with a low-cut style such as the balconette bra.
The straps of a correctly fitted bra should not dig into or slip off the shoulder, which suggests a too-large band.
The back of the bra should not ride up and the chest band should remain parallel to the floor when viewed from the back.
The breasts should be supported primarily by the band around the rib cage, rather than by the shoulder straps.
The woman should be able to breathe and move easily without the bra slipping around.
Confirming bra fit
One method to confirm that the bra is the best fit has been nicknamed the Swoop and Scoop. After identifying a well-fitting bra, the woman bends forward (the swoop), allowing her breasts to fall into the bra, filling the cup naturally, and then fastening the bra on the outermost set of hooks. When the woman stands up, she uses the opposite hand to place each breast gently into the cup (the scoop), and she then runs her index finger along the inside top edge of the bra cup to make sure her breast tissue does not spill over the edges.
Experts suggest that women choose a bra band that fits well on the outermost hooks. This allows the wearer to use the tighter hooks on the bra strap as it stretches during its lifetime of about eight months. The band should be tight enough to support the bust, but the straps should not provide the primary support.
Consumer measurement difficulties
A bra is one of the most complicated articles of clothing to make. A typical bra design has between 20 and 48 parts, including the band, hooks, cups, lining, and straps. Major retailers place orders from manufacturers in batches of 10,000. Orders of this size require a large-scale operation to manage the cutting, sewing and packing required.
Constructing a properly fitting brassiere is difficult. Adelle Kirk, formerly a manager at the global Kurt Salmon management consulting firm that specializes in the apparel and retail businesses, said that making bras is complex.
Asymmetric breasts
Obtaining the correct size is complicated by the fact that up to 25% of women's breasts display a persistent, visible breast asymmetry, which is defined as differing in size by at least one cup size. For about 5% to 10% of women, their breasts are severely different, with the left breast being larger in 62% of cases. Minor asymmetry may be resolved by wearing a padded bra, but severe cases of developmental breast deformity — commonly called "Amazon's Syndrome" by physicians — may require corrective surgery due to morphological alterations caused by variations in shape, volume, position of the breasts relative to the inframammary fold, the position of the nipple-areola complex on the chest, or both.
Breast volume variation
Obtaining the correct size is further complicated by the fact that the size and shape of women's breasts change, if they experience menstrual cycles, during the cycle and can experience unusual or unexpectedly rapid growth in size due to pregnancy, weight gain or loss, or medical conditions. Even breathing can substantially alter the measurements.
Some women's breasts can change shape by as much as 20% per month.
Increases in average bra size
In 2010, the most common bra size sold in the UK was 36D. In 2004, market research company Mintel reported that bust sizes in the United Kingdom had increased from 1998 to 2004 in younger as well as older consumers, while a more recent study showed that the most often sold bra size in the US in 2008 was 36D.
Researchers ruled out increases in population weight as the explanation and suggested it was instead likely due to more women wearing the correct, larger size.
Consumer measurement methods
Bra retailers recommend several methods for measuring band and cup size. These are based on two primary methods, either under the bust or over the bust, and sometimes both. Calculating the correct bra band size is complicated by a variety of factors. The American National Standards Institute states that while a voluntary consensus of sizes exists, there is much confusion about the 'true' size of clothing. As a result, bra measurement can be considered an art and a science. Online shopping and in-person bra shopping experiences may differ, because online recommendations are based on averages while in-person shopping can be completely personalized, so the shopper may easily try on band sizes above and below her measured band size. A woman with a large cup size and an in-between band size may find her size is not available in local stores and so may have to shop online, where most large cup sizes are readily available on certain sites. Others recommend rounding to the nearest whole number.
Band measurement methods
There are several possible methods for measuring the bust.
Underbust +0
A measuring tape is pulled around the torso at the inframammary fold. The tape is then pulled tight while remaining horizontal and parallel to the floor. The measurement (in inches) is then rounded to the nearest even number for the band size. Kohl's uses this method for its online fitting guide.
Underbust +4
This method begins the same way as the underbust +0 method, where a measuring tape is pulled tight around the torso under the bust while remaining horizontal. If the measurement (in inches) is even, 4 is added to calculate the band size. If it is odd, 5 is added. Kohl's used this method in 2013. The "war on plus four" was a name given to a campaign (circa 2011) against this method, with underbust +0 supporters claiming that the then-ubiquitous +4 method fails to fit a majority of women. The underbust +4 method generally applies only to US and UK sizes.
Sizing chart
Currently, many large U.S. department stores determine band size by starting with the measurement taken underneath the bust similar to the aforementioned underbust +0 and underbust +4 methods. A sizing chart or calculator then uses this measurement to determine the band size. Band sizes calculated using this method vary between manufacturers.
Underarm/upper bust
A measuring tape is pulled around the torso under the armpit and above the bust. Because band sizes are most commonly manufactured in even numbers, the wearer must round to the closest even number.
Cup measurement methods
Bra-wearers can calculate their cup size by finding the difference between their bust size and their band size. The bust size, bust line measure, or over-bust measure is the measurement around the torso over the fullest part of the breasts, with the crest of the breast halfway between the elbow and shoulder, usually over the nipples, ideally while standing straight with arms to the side and wearing a properly fitted bra, because this practice assumes the current bra fits correctly. The measurements are made in the same units as the band size, either inches or centimetres. The cup size is calculated by subtracting the band size from the over-the-bust measurement.
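As a rough illustration of the difference method just described, the sketch below converts underbust and bust measurements into a label size. It assumes the inch-based US/UK convention in which the band is the underbust measurement rounded to an even number and each inch of difference adds one cup letter; actual manufacturer labels vary, so the output is indicative only.

```python
# Minimal sketch of the difference method under assumed inch-based conventions.
CUPS = ["AA", "A", "B", "C", "D", "DD", "E", "F", "FF", "G", "GG", "H"]

def bra_size(underbust_in, bust_in):
    band = round(underbust_in / 2) * 2          # round to the nearest even inch
    diff = round(bust_in - band)                # whole inches of difference
    diff = max(0, min(diff, len(CUPS) - 1))     # clamp to the table above
    return f"{band}{CUPS[diff]}"

print(bra_size(31, 37))   # e.g. 32DD under these assumptions
```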
The meaning of cup sizes varies
Cup sizes vary from one country to another. For example, a U.S. H-cup does not have the same size as an Australian, even though both are based on measurements in inches. The larger the cup size, the bigger the variation.
Surveys of bra sizes tend to be very dependent on the population studied and how it was obtained. For instance, one U.S. study reported that the most common size was 34B, followed by 34C, that 63% were size 34 and 39% cup size B. However, the survey sample was drawn from 103 Caucasian student volunteers at a Midwest U.S. university aged 18–25, and excluded pregnant and nursing women.
Plastic Surgeon Measuring System
Bra-wearers who have difficulty calculating a correct cup size may be able to find a correct fit using a method adopted by plastic surgeons. Using a flexible tape measure, position the tape at the outside of the chest, under the arm, where the breast tissue begins. Measure across the fullest part of the breast, usually across the nipple, to where the breast tissue stops at the breast bone.
Conversion of the measurement to cup size is shown in the "Measuring cup size" table.
Note that, in general, countries that employ metric cup sizing (as in continental Europe) have their own system of increments that results in cup sizes which differ from those using inches, since a 2 cm step does not equal a 1 inch (2.54 cm) step.
These cup measurements are only correct for converting cup sizes for a band to cm using this particular method, because cup size is relative to band size. This principle means that bras of differing band size can have the same volume. For example, the cup volume is the same for 30D, 32C, 34B, and 36A. These related bra sizes of the same cup volume are called sister sizes. For a list of such sizes, refer to § Calculating cup volume and breast weight.
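The sister-size relationship can be made explicit with a small sketch: moving down one band size while moving up one cup letter keeps the nominal cup volume fixed. The simple A–D lettering below matches the example in the text and is an assumption of the sketch.

```python
# Sister sizes share the same cup volume under the simple lettering assumed here.
CUPS = ["A", "B", "C", "D"]

def sister_sizes(band, cup):
    """List label sizes with the same cup volume as the given size."""
    k = band + 2 * CUPS.index(cup)   # invariant shared by all sister sizes
    return [f"{b}{CUPS[(k - b) // 2]}" for b in range(band - 4, band + 6, 2)
            if 0 <= (k - b) // 2 < len(CUPS)]

print(sister_sizes(34, "B"))   # ['30D', '32C', '34B', '36A']
```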
Consumer fit research
A 2012 study by White and Scurr at the University of Portsmouth compared the traditional bra-fitting method of adding 4 to the band measurement, used in many United Kingdom lingerie shops, with measurements obtained using a professional method. The study relied on the professional bra-fitting method described by McGhee and Steele (2010). The study utilized a five-step approach to obtain the best fitting bra size for an individual. The study measured 45 women using the traditional add-four selection method. Women tried bras on until they obtained the best fit based on professional bra fitting criteria. The researchers found that 76% of women overestimated their band and 84% underestimated their cup size. When women wear bras with too big a band, breast support is reduced. Too small a cup size may cause skin irritation. They noted that "ill-fitting bras and insufficient breast support can lead to the development of musculoskeletal pain and inhibit women participating in physical activity." The study recommended that women should be educated about the criteria for finding a well-fitting bra. They recommended that women measure under their bust to determine their band size rather than the traditional over-the-bust measurement method.
Manufacturer design standards
Bra-labeling systems used around the world are at times misleading and confusing. Cup and band sizes vary around the world. In countries that have adopted the European EN 13402 dress-size standard, the torso is measured in centimetres and rounded to the nearest multiple of 5 cm. Bra-fitting experts in the United Kingdom state that many women who buy off the rack without professional assistance wear up to two sizes too small.
Manufacturer Fruit of the Loom attempted to solve the problem of finding a well-fitting bra for asymmetrical breasts by introducing Pick Your Perfect Bra, which allows women to choose a bra with two different cup sizes, although it is only available in A through D cup sizes.
One very prominent discrepancy between the sizing systems is the fact that US band sizes, based on inches, do not correspond to their centimetre-based EU counterparts.
There are several sizing systems in different countries.
Cup size is determined by one of two methods: in the US and UK, cup size increases by one letter for every inch of difference; in all other systems, by one letter for every two centimeters. Since one inch equals 2.54 centimeters, there is considerable discrepancy between the systems, which becomes more exaggerated as cup sizes increase. Many bras are only available in 36 sizes.
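The growing discrepancy can be quantified directly: an inch-based cup step corresponds to 2.54 cm of bust–underbust difference, while a metric step corresponds to 2 cm, so the two conventions drift apart by about half a centimetre per cup letter. A minimal comparison:

```python
# Divergence between inch-based and metric cup steps as the cup index grows.
for n in range(1, 11):
    inch_system_cm = n * 2.54
    metric_system_cm = n * 2.0
    print(n, round(inch_system_cm - metric_system_cm, 2), "cm")
```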
UK
The UK and US use the inch system. The difference in chest circumference between the cup sizes is always one inch, or 2.54 cm. The difference between 2 band sizes is 2 inches or 5.08 cm.
Leading brands and manufacturers, including Panache, Bestform, Gossard, Freya, Curvy Kate, Bravissimo and Fantasie, use the British standard band sizes (where the underbust measurement equals the band size) 28-30-32-34-36-38-40-42-44, and so on. Cup sizes are designated by AA-A-B-C-D-DD-E-F-FF-G-GG-H-HH-J-JJ-K-KK-L.
However, some clothing retailers and mail order companies have their own house brands and use a custom sizing system. Marks and Spencer uses AA-A-B-C-D-DD-E-F-G-GG-H-J, leaving out FF and HH, in addition to following the US band sizing convention. As a result, their J-cup is equal to a British standard H-cup. Evans and ASDA sell bras (ASDA as part of their George clothing range) whose sizing runs A-B-C-D-DD-E-F-G-H. Their H-cup is roughly equal to a British standard G-cup.
Some retailers reserve AA for young teens, and use AAA for women.
Australia/New Zealand
Australia and New Zealand cup and band sizes are in metric increments of 2 cm per cup, similar to many European brands. Cup labelling methods and sizing schemes are inconsistent and there is great variability between brands. In general, cup sizes AA-DD follow UK labels but thereafter split off from this system and employ European labels (no double letters, with cups progressing from F-G-H etc. for every 2 cm increase). However, a great many local manufacturers employ unique labelling systems. Australia and New Zealand bra band sizes are labelled in dress size, although they are obtained by underbust measurement, whilst dress sizes utilise bust-waist-hip measurements. In practice very few of the leading Australian manufacturers produce sizes F+ and many disseminate sizing misinformation. The Australian demand for DD+ is largely met by various UK, US and European major brands. This has introduced further sizing scheme confusion that is poorly understood even by specialist retailers.
United States
Bra sizing in the United States is very similar to that in the United Kingdom. Band sizes use the same designation in inches and the cups also increase in 1-inch steps. However, some manufacturers use conflicting sizing methods. Some label bras beyond a C cup as D-DD-DDD-DDDD-E-EE-EEE-EEEE-F..., some use the variation D1, D2, D3, D4, D5..., but many use the following system: A, B, C, D, DD, DDD, G, H, I, J, K, L, M, N, O, and others label them like the British system D-DD-E-F-FF... Comparing the larger cup sizes between different manufacturers can be difficult.
In 2013, underwear maker Jockey International offered a new way to measure bra and cup size. It introduced a system with ten cup sizes per band size that are numbered and not lettered, designated as 1–36, 2–36 etc. The company developed the system over eight years, during which they scanned and measured the breasts and torsos of 800 women. Researchers also tracked the women's use of their bras at home. To implement the system, women must purchase a set of plastic cups from the company to find their Jockey cup size. Some analysts were critical of the requirement to buy the measurement kit, since women must pay about US$20 to adopt Jockey's proprietary system, in addition to the cost of the bras themselves.
Europe / International
European bra sizes are based on centimeters. They are also known as International. Abbreviations such as EU, Intl and Int are all referring to the same European bra size convention. These sizes are used in most of Europe and large parts of the world.
The underbust measurement is rounded to the nearest multiple of 5 cm. Band sizes run 65, 70, 75, 80 etc., increasing in steps of 5 cm, similar to the English double inch. A person with a measured underbust circumference of 78–82 cm should wear a band size 80. The tightness or snugness of the measurement depends on the softness of the adipose tissue: softer tissue requires the tape to be pulled tighter when measuring, to ensure that the bra band will fit snugly on the body and stay in place. A loose measurement can, and often does, differ from the tighter measurement. This causes some confusion, as a person with a loose measurement of 84 cm would think they have band size 85, but due to a lot of soft tissue the same person might have a snugger, tighter measurement of 79 cm and should choose the more appropriate band size of 80 or an even smaller band size.
The cup labels normally begin with "A" for a 13±1 cm difference between the bust circumference and the underbust circumference measured loosely (i.e. not tightly, as is done for the bra band size); that is, the difference is taken between the two circumference measurements, not between the bust circumference and the band size (which normally requires some tightening when measured). To clarify the important difference in measuring: the underbust is measured snugly and tightly for the bra band, while it is measured loosely for determining the bra cup. For people with much soft adipose tissue these two measurements will not be identical. In this sense the method used to determine European sizes differs from the English systems, where cup sizes are determined by comparing the bust measurement with the band size. European cups increase for every additional 2 cm in the difference between the bust and underbust measurements, instead of 2.5 cm or 1 inch, and except for the initial cup size the letters are neither doubled nor skipped. In very large cup sizes this results in smaller cups than their English counterparts.
This system has been standardized in the European dress size standard EN 13402 introduced in 2006, but was in use in many European countries before that date.
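A minimal sketch of the centimetre-based convention described above, assuming the band is the snug underbust measurement rounded to the nearest multiple of 5 cm and cups start at "A" for roughly a 13 cm bust–underbust difference, stepping one letter per additional 2 cm; the cup letter list and example measurements are assumptions for illustration.

```python
# Sketch of the European (centimetre) sizing convention under assumed details.
EU_CUPS = ["AA", "A", "B", "C", "D", "E", "F", "G", "H"]

def eu_bra_size(underbust_cm, bust_cm):
    band = 5 * round(underbust_cm / 5)                    # nearest multiple of 5
    idx = 1 + round((bust_cm - underbust_cm - 13) / 2)    # "A" at ~13 cm difference
    idx = max(0, min(idx, len(EU_CUPS) - 1))
    return f"{band}{EU_CUPS[idx]}"

print(eu_bra_size(78, 93))   # 80B under these assumptions
```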
South Korea/Japan
In South Korea and Japan the torso is measured in centimetres and rounded to the nearest multiple of 5 cm. Band sizes run 65-70-75-80..., increasing in steps of 5 cm, similar to the English double inch. A person with a loosely measured underbust circumference of 78–82 cm should wear a band size 80.
The cup labels begin with "AAA" for a 5±1.25 cm difference between bust and underbust circumference, i.e. similar bust circumference and band size as in the English systems. They increase in steps of 2.5 cm, and except for the initial cup size letters are neither doubled nor skipped.
Japanese sizes are the same as Korean ones, but the cup labels begin with "AA" for a 7.5±1.25 cm difference and usually precedes the bust designation, i.e. "B75" instead of "75B".
This system has been standardized in the Korea dress size standard KS K9404 introduced in 1999 and in Japan dress size standard JIS L4006 introduced in 1998.
France/Belgium/Spain
The French and Spanish system is a permutation of the Continental European sizing system. While cup sizes are the same, band sizes are exactly 15 cm larger than the European band size.
Italy
The Italian band size uses small consecutive integers instead of the underbust circumference rounded to the nearest multiple of 5 cm. Since it starts with size 0 for European size 60, the conversion consists of a division by 5 and then a subtraction of 12. The size designations are often given in Roman numerals.
Cup sizes have traditionally used a step size of 2.5 cm, which is close to the English inch of 2.54 cm, and featured some double letters for large cups, but in recent years some Italian manufacturers have switched over to the European 2-cm system.
Conversion tables relate Italian bra sizes to those of other countries.
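The band-size relationships described above lend themselves to a simple conversion sketch: French, Spanish and Belgian bands are the European band plus 15, and the Italian size divides the European band by 5 and subtracts 12. Cup letters are left unconverted here.

```python
# Band-size conversions as described in this section (illustrative only).
def eu_to_french(eu_band):
    return eu_band + 15

def eu_to_italian(eu_band):
    return eu_band // 5 - 12

for eu in (60, 65, 70, 75, 80):
    print(eu, eu_to_french(eu), eu_to_italian(eu))
```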
Advertising and retail influence
Manufacturers' marketing and advertising often appeals to fashion and image over fit, comfort, and function. Since about 1994, manufacturers have re-focused their advertising, moving from advertising functional brassieres that emphasize support and foundation, to selling lingerie that emphasize fashion while sacrificing basic fit and function, like linings under scratchy lace.
Engineered Alternative to traditional bras
English mechanical engineer and professor John Tyrer from Loughborough University has devised a solution to problematic bra fit by re-engineering bra design. He started investigating the problem of bra design while on an assignment from the British government after his wife returned disheartened from an unsuccessful shopping trip. His initial research into the extent of fitting problems soon revealed that most women wear the wrong size of bra. He theorised that this widespread practice of purchasing the wrong size was due to the measurement system recommended by bra manufacturers. This sizing system employs a combination of maximum chest diameter (under bust) and maximum bust diameter (bust) rather than the actual breast volume which is to be accommodated by the bra. According to Tyrer, "to get the most supportive and fitted bra it's infinitely better if you know the volume of the breast and the size of the back." He says the A, B, C, D cup measurement system is flawed. "It's like measuring a motor car by the diameter of the gas cap." "The whole design is fundamentally flawed. It's an instrument of torture." Tyrer has developed a bra design with crossed straps in the back. These use the weight of one breast to lift the other using counterbalance. Standard designs constrict chest movement during breathing. One of the tools used in the development of Tyrer's design has been a projective differential shape body analyzer.
Breasts typically weigh more than is commonly assumed. Tyrer said, "By measuring the diameter of the chest and breasts current measurements are supposed to tell you something about the size and volume of each breast, but in fact it doesn't". Bra companies remain reluctant to manufacture Tyrer's prototype, which is a front-closing bra with more vertical orientation and adjustable cups.
Calculating cup volume and breast weight
The average breast weighs about 0.5 kilograms. Each breast contributes about 4–5% of the body fat. The density of fatty tissue is more or less equal to 0.9 kg per litre.
If a cup is a hemisphere, its volume V is given by the following formula:
V = (2/3)πr³ = (π/12)D³,
where r is the radius of the cup, and D is its diameter.
If the cup is a hemi-ellipsoid, its volume is given by the formula:
V = (2/3)πabc,
where a, b and c are the three semi-axes of the hemi-ellipsoid, which can be estimated from cw, cd and wl, respectively the cup width, the cup depth and the length of the wire.
Cups give a hemi-spherical shape to breasts and underwires give shape to cups, so the curvature radius of the underwire is the key parameter determining the volume and weight of the breast. The same underwires are used for the cups of sizes 36A, 34B, 32C, 30D, etc., so those cups have the same volume. The reference numbers of underwire sizes are based on a B cup bra; for example, underwire size 32 is for a 32B cup (and 34A, 30C, ...). The curvature diameter of the underwire increases by a fixed increment from one size to the next. Volume calculations of this kind can be made for the range of cups found in a ready-to-wear large size shop.
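A short sketch of the volume and weight estimates discussed in this section, using the hemispherical and hemi-ellipsoidal formulas above and a fat-tissue density of roughly 0.9 kg per litre; the cup dimensions used are illustrative values, not measurements tied to any particular label size.

```python
import math

# Rough volume and weight estimates from the geometric models described above.
DENSITY_KG_PER_CM3 = 0.9e-3   # assumed fat-tissue density, ~0.9 kg per litre

def hemisphere_volume(diameter_cm):
    """Volume of a hemispherical cup: V = (2/3) * pi * r**3."""
    r = diameter_cm / 2
    return (2 / 3) * math.pi * r ** 3

def hemi_ellipsoid_volume(a_cm, b_cm, c_cm):
    """Volume of a hemi-ellipsoidal cup with semi-axes a, b, c."""
    return (2 / 3) * math.pi * a_cm * b_cm * c_cm

v = hemisphere_volume(12.0)   # an assumed 12 cm cup diameter
print(f"{v:.0f} cm^3, about {v * DENSITY_KG_PER_CM3:.2f} kg of tissue")
```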
See also
History of bras
List of bra designs
Nursing bra
Underwire bra
Wonderbra
Notes
References
Further reading
Brassieres
Sizes in clothing
es:Sostén#Tallas y copas | Bra size | [
"Physics",
"Mathematics"
] | 7,059 | [
"Sizes in clothing",
"Quantity",
"Physical quantities",
"Size"
] |
8,005,697 | https://en.wikipedia.org/wiki/Allylic%20strain | Allylic strain (also known as A1,3 strain, 1,3-allylic strain, or A-strain) in organic chemistry is a type of strain energy resulting from the interaction between a substituent on one end of an olefin (a synonym for an alkene) with an allylic substituent on the other end. If the substituents (R and R') are large enough in size, they can sterically interfere with each other such that one conformer is greatly favored over the other. Allylic strain was first recognized in the literature in 1965 by Johnson and Malhotra. The authors were investigating cyclohexane conformations including endocyclic and exocylic double bonds when they noticed certain conformations were disfavored due to the geometry constraints caused by the double bond. Organic chemists capitalize on the rigidity resulting from allylic strain for use in asymmetric reactions.
Quantifying allylic strain energy
The "strain energy" of a molecule is a quantity that is difficult to precisely define, so the meaning of this term can easily vary depending on one's interpretation. Instead, an objective way to view the allylic strain of a molecule is through its conformational equilibrium. Comparing the heats of formation of the involved conformers, an overall ΔHeq can be evaluated. This term gives information about the relative stabilities of the involved conformers and the effect allylic strain has one equilibrium. Heats of formation can be determined experimentally though calorimetric studies; however, calculated enthalpies are more commonly used due to the greater ease of acquisition.
Different methods utilized to estimate conformational equilibrium enthalpy include: the Westheimer method, the homomorph method, and more simply—using estimated enthalpies of nonbonded interactions within a molecule. Because all of these methods are approximations, reported strain values for the same molecule can vary and should be used only to give a general idea of the strain energy.
Olefins
The simplest type of molecules which exhibit allylic strain are olefins. Depending on the substituents, olefins maintain varying degrees of allylic strain. In 3-methyl-1-butene, the interactions between the hydrogen and the two methyl groups in the allylic system cause a change in enthalpy equal to 2 kcal/mol. As expected, with an increase in substituent size, the equilibrium enthalpies between rotamers also increases. For example, when examining 4-methyl-2-pentene which contains an additional allylic methyl group compared to 3-methyl-1-butene, the enthalpy of rotation for the highest energy conformer increases from 2 kcal/mol to 4 kcal/mol.
Cyclic molecules
Nonbonded 1,3-diaxial interaction energies are commonly used to approximate strain energy in cyclic molecules, as values for these interactions are available. By taking the difference in nonbonded interactions for each conformer, the equilibrium enthalpy can be estimated. The strain energy for methylidenecyclohexane has been calculated to be 4.5 kcal mol⁻¹ using estimations for 1,3-diaxial strain (0.9 kcal mol⁻¹), methyl/hydrogen allylic strain (1.3 kcal mol⁻¹), and methyl/methyl allylic strain (7.6 kcal mol⁻¹) values.
The strain energy in 1,8-dimethylnaphthalene was calculated to be 7.6 kcal mol⁻¹ and around 12–15 kcal mol⁻¹ for 4,5-dimethylphenanthrene. Allylic strain tends to be greater for cyclic molecules compared to olefins, as strain energy increases with increasing rigidity of the system. An in-depth summary of allylic strain in six-membered rings has been presented in a review by Johnson, F.
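The estimation procedure described above amounts to bookkeeping: sum the tabulated nonbonded interaction energies present in each conformer and take the difference. The sketch below uses the values quoted in the text; the particular assignment of interactions to each conformer is an assumption chosen so that the difference reproduces the quoted 4.5 kcal mol⁻¹ figure.

```python
# Toy estimate of a conformational equilibrium enthalpy from tabulated
# nonbonded interaction energies (kcal/mol, values quoted in the text).
INTERACTIONS = {
    "1,3-diaxial": 0.9,
    "allylic Me/H": 1.3,
    "allylic Me/Me": 7.6,
}

def conformer_energy(interactions):
    """Sum the nonbonded interaction terms present in one conformer."""
    return sum(INTERACTIONS[name] for name in interactions)

# Assumed partition: one conformer suffers a methyl/methyl allylic clash, the
# other trades it for a methyl/hydrogen allylic term plus two 1,3-diaxial terms.
delta_h = conformer_energy(["allylic Me/Me"]) - \
          conformer_energy(["allylic Me/H", "1,3-diaxial", "1,3-diaxial"])
print(f"estimated ΔH ≈ {delta_h:.1f} kcal/mol")
```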
Influencing factors
Several factors influence the energy penalty associated with allylic strain. In order to relieve the strain caused by the interaction between the two methyl groups, the cyclohexanes will often exhibit a boat or twist-boat conformation. The boat conformation tends to be the major conformation when such strain is present. The effect of allylic strain on cis alkenes creates a preference for more linear structures.
Substituent size
The size of the substituents interacting at the 1 and 3 positions of an allylic group is often the largest factor contributing to the magnitude of the strain. As a rule, larger substituents will create a larger magnitude of strain. Proximity of bulky groups causes an increase in repulsive Van der Waals forces. This quickly increases the magnitude of the strain. The interactions between the hydrogen and methyl group in the allylic system cause a change in enthalpy equal to 3.6 kcal/mol. The strain energy in this system was calculated to be 7.6 kcal/mol due to interactions between the two methyl groups.
Substituent polarity
Polarity also has an effect on allylic strain. In terms of stereoselectivity, polar groups act like large, bulky groups. Even though two groups may have approximately the same A values the polar group will act as though it were much bulkier. This is due to the donor character of the polar group. Polar groups increase the HOMO energy of the σ-system in the transition state. This causes the transition state to be in a much more favorable position when the polar group is not interacting in a 1,3 allylic strain.
Hydrogen bonding
With certain polar substituents, hydrogen bonding can occur in the allylic system between the substituents. Rather than the strain that would normally occur in the close group proximity, the hydrogen bond stabilizes the conformation and makes it energetically much more favorable. This scenario occurs when the allylic substituent at the 1 position is a hydrogen bond donor (usually a hydroxyl) and the substituent at the 3 position is a hydrogen bond acceptor (usually an ether). Even in cases where the allylic system could conform to put a much smaller hydrogen in the hydrogen bond acceptor’s position, it is much more favorable to allow the hydrogen bond to form.
Solvents
Solvents also have an effect on allylic strain. When used in conjunction with knowledge of the effects of polarity on allylic strain, solvents can be very useful in directing the conformation of a product that contains an allylic structure in its transition state. When a bulky and polar solvent is able to interact with one of the substituents in the allylic group, the complex of the solvent can energetically force the bulky complex out of the allylic strain in favor of a smaller group.
Conjugation
Conjugation increases allylic strain because it forces substituents into a configuration that places their atoms in closer proximity, increasing the strength of repulsive Van der Waals forces. This situation occurs most noticeably when a carboxylic acid or ketone is involved as a substituent of the allylic group. The resonance effect in the carboxylic group shifts the C–O double bond toward a hydroxy group. The carboxylic group will thus function as a hydroxyl group, which causes a large allylic strain to form and cancels the stabilization effects of the extended conjugation. This is very common in enolization reactions and is described below under "Acidic conditions".
In situations where the molecule can either be in a conjugated system or avoid allylic strain, it has been shown that the molecule's major form will be the one that avoids strain. This has been found via a cyclization study. Under treatment with perchloric acid, molecule A cyclizes into the conjugated system shown in molecule B. However, the molecule will rearrange (due to allylic strain) into molecule C, causing molecule C to be the major species. Thus, the magnitude of destabilization via the allylic strain outweighs the stabilization caused by the conjugated system.
Acidic conditions
In cases where an enolization is occurring around an allylic group (usually as part of a cyclic system), A1,3 strain can cause the reaction to be nearly impossible. In these situations, acid treatment would normally cause the alkene to become protonated, moving the double bond to the carboxylic group, changing it to a hydroxy group. The resulting allylic strain between the alcohol and the other group involved in the allylic system is so great that the reaction can not occur under normal thermodynamic conditions. This same enolization occurs much more rapidly under basic conditions, as the carboxylic group is retained in the transition state and allows the molecule to adopt a conformation that does not cause allylic strain.
Application of allylic strain in organic reactions and total synthesis
Origin of stereoselectivity of organic reactions from allylic strain
When considering allylic strain, one needs to consider the possible conformers and the possible stereoelectronic demands of the reaction. For example, in the conformation of (Z)-4-methylpent-2-ene, the molecule isn't frozen in the favored conformer but rotates through a dihedral angle of around 30° at a cost of less than 1 kcal/mol. In stereoselective reactions, there are two effects of allylic strain on the reaction: the steric effect and the electronic effect. The steric effect is where the largest group prefers to be farthest from the alkene. The electronic effect is where the orbitals of the substituents prefer to align anti to or outside of the orbitals, depending on the reaction.
Hydroboration reaction
The hydroboration reaction is a useful reaction to functionalize alkenes to alcohols. In the reaction, the trimethylsilyl (TMS) group fulfills two roles in directing the stereoselectivity of the reaction. First, its bulky size helps the molecule preferentially adopt a conformation in which the TMS group is not close to the methyl group on the alkene. Second, the TMS group confers a stereoelectronic effect on the molecule by adopting a conformation anti to the directing orbitals of the alkene. For the regioselectivity of the reaction, the TMS group can stabilize the developing partial positive charge on the secondary carbon much better than a methyl group can.
Aldol reaction
In the highly versatile and widely used Evans’ Aldol Reaction, allylic strain played a major role in the development of the reaction. The Z enolate was created to avoid the allylic strain with oxazolidinone. The formation of a specific enolate enforces the development of relative stereochemistry throughout the reaction, making the aldol reaction a very predictive and useful methodology out there to synthesize chiral molecules. The absolute stereochemistry is then determined by the chirality of the oxazolidinone.
There is another aspect of the aldol reaction that is influenced by allylic strain. In the second aldol reaction, the product, which is a 1,3-dicarbonyl, is formed with high diastereoselectivity. This is because the acidity of the proton is significantly reduced: for deprotonation to occur, the molecule would have to pass through a conformation with developing allylic strain. In the favored conformation, the proton is not aligned properly for deprotonation to occur.
Diels-Alder reaction
In an intramolecular Diels-Alder reaction, asymmetric induction can be induced through allylic 1,3 strain on the diene or the dienophile. In the following example, the methyl group on the dienophile forced the molecule to adopt that specific 6-membered ring conformation on the molecule.
In model studies toward the synthesis of chlorothricolide, an intramolecular Diels-Alder reaction gave a mixture of diastereomers. But by installing a bulky TMS substituent, the reaction gave the desired product with high diastereoselectivity and regioselectivity in good yield. The bulky TMS substituent helps enhance allylic 1,3-strain in the conformation of the molecule.
Total synthesis of natural products
In the seminal paper on the total synthesis of (+)-monensin, Kishi and co-workers utilized allylic strain to achieve asymmetric induction in a hydroboration–oxidation reaction. The reaction is both regioselective and stereoselective. The regioselectivity arises from the significant positive character developed at the tertiary carbon. The stereoselectivity arises from attack of the borane from the least hindered side, which is the side on which the methyl group lies.
References
External links
Advanced Organic Chemistry Lecture Notes (Evans, D. A.; Myers, A. G. Harvard University, 2006-2007)
Stereochemistry | Allylic strain | [
"Physics",
"Chemistry"
] | 2,724 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
8,006,942 | https://en.wikipedia.org/wiki/Military%20College%20of%20Telecommunication%20Engineering | The Military College of Telecommunication Engineering (MCTE) is the engineering training establishment of the Corps of Signals (established 1911) of the Indian Army. It is located near Indore, in the town formerly known as Mhow, now called Dr Ambedkar Nagar, in Madhya Pradesh. Lt Gen Vivek Dogra is the present Commandant of MCTE.
References
External links
Corps of Signals at Indian Army website
Military academies of India
Military communications of India
Telecommunications engineering
Telecommunication education
Universities and colleges in Mhow
Universities and colleges established in 1911
1911 establishments in India | Military College of Telecommunication Engineering | [
"Engineering"
] | 118 | [
"Electrical engineering",
"Telecommunications engineering"
] |
8,008,377 | https://en.wikipedia.org/wiki/Phase-contrast%20microscopy |
Phase-contrast microscopy (PCM) is an optical microscopy technique that converts phase shifts in light passing through a transparent specimen to brightness changes in the image. Phase shifts themselves are invisible, but become visible when shown as brightness variations.
When light waves travel through a medium other than a vacuum, interaction with the medium causes the wave amplitude and phase to change in a manner dependent on properties of the medium. Changes in amplitude (brightness) arise from the scattering and absorption of light, which is often wavelength-dependent and may give rise to colors. Photographic equipment and the human eye are only sensitive to amplitude variations. Without special arrangements, phase changes are therefore invisible. Yet, phase changes often convey important information.
Phase-contrast microscopy is particularly important in biology.
It reveals many cellular structures that are invisible with a bright-field microscope, as exemplified in the figure.
These structures were made visible to earlier microscopists by staining, but this required additional preparation and death of the cells.
The phase-contrast microscope made it possible for biologists to study living cells and how they proliferate through cell division. It is one of the few methods available to quantify cellular structure and components without using fluorescence.
After its invention in the early 1930s, phase-contrast microscopy proved to be such an advancement in microscopy that its inventor Frits Zernike was awarded the Nobel Prize in Physics in 1953. Caroline Bleeker, whose company manufactured the microscope, often remains uncredited.
Working principle
The basic principle to make phase changes visible in phase-contrast microscopy is to separate the illuminating (background) light from the specimen-scattered light (which makes up the foreground details) and to manipulate these differently.
The ring-shaped illuminating light (green) that passes the condenser annulus is focused on the specimen by the condenser. Some of the illuminating light is scattered by the specimen (yellow). The remaining light is unaffected by the specimen and forms the background light (red). When observing an unstained biological specimen, the scattered light is weak and typically phase-shifted by −90° (due to both the typical thickness of specimens and the refractive index difference between biological tissue and the surrounding medium) relative to the background light. This leads to the foreground (blue vector) and background (red vector) having nearly the same intensity, resulting in low image contrast.
In a phase-contrast microscope, image contrast is increased in two ways: by generating constructive interference between scattered and background light rays in regions of the field of view that contain the specimen, and by reducing the amount of background light that reaches the image plane. First, the background light is phase-shifted by −90° by passing it through a phase-shift ring, which eliminates the phase difference between the background and the scattered light rays.
When the light is then focused on the image plane (where a camera or eyepiece is placed), this phase shift causes background and scattered light rays originating from regions of the field of view that contain the sample (i.e., the foreground) to constructively interfere, resulting in an increase in the brightness of these areas compared to regions that do not contain the sample. Finally, the background is dimmed ~70-90% by a gray filter ring; this method maximizes the amount of scattered light generated by the illumination light, while minimizing the amount of illumination light that reaches the image plane. Some of the scattered light that illuminates the entire surface of the filter will be phase-shifted and dimmed by the rings, but to a much lesser extent than the background light, which only illuminates the phase-shift and gray filter rings.
The above describes negative phase contrast. In its positive form, the background light is instead phase-shifted by +90°. The background light will thus be 180° out of phase relative to the scattered light. The scattered light will then be subtracted from the background light to form an image with a darker foreground and a lighter background, as shown in the first figure.
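As a rough numerical sketch of the negative phase contrast arrangement described above (the amplitude values and the 75% dimming factor below are illustrative assumptions, not measurements), the background and scattered light can be modeled as complex amplitudes and the image contrast compared with and without the phase plate and gray filter:

    import cmath

    # Illustrative complex amplitudes: strong background (illumination) light and
    # weak specimen-scattered light, the latter phase-shifted by -90 degrees
    # relative to the background, as is typical for unstained biological specimens.
    background = 1.0 + 0.0j
    scattered = 0.2 * cmath.exp(-1j * cmath.pi / 2)

    def contrast(foreground, bg):
        """Relative intensity difference between foreground and background."""
        return (abs(foreground) ** 2 - abs(bg) ** 2) / abs(bg) ** 2

    # Ordinary bright-field image: foreground = background + scattered light.
    print(contrast(background + scattered, background))   # ~0.04, i.e. ~4% contrast

    # Negative phase contrast: shift the background by -90 degrees so that it is
    # in phase with the scattered light, then dim it to ~25% of its intensity.
    shifted_bg = background * cmath.exp(-1j * cmath.pi / 2)
    dimmed_bg = shifted_bg * 0.25 ** 0.5
    print(contrast(dimmed_bg + scattered, dimmed_bg))      # ~0.96, i.e. ~96% contrast

In this toy model the foreground-to-background contrast rises from a few percent to nearly 100%, which is the essence of how the phase plate and gray filter convert an otherwise invisible phase difference into a visible brightness difference.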
Related methods
The success of the phase-contrast microscope has led to a number of subsequent phase-imaging methods.
In 1952, Georges Nomarski patented what is today known as differential interference contrast (DIC) microscopy.
It enhances contrast by creating artificial shadows, as if the object were illuminated from the side. DIC microscopy is, however, unsuitable when the object or its container alters polarization. With the growing use of polarizing plastic containers in cell biology, DIC microscopy is increasingly being replaced by Hoffman modulation contrast microscopy, invented by Robert Hoffman in 1975.
Traditional phase-contrast methods enhance contrast optically, blending brightness and phase information in a single image. Since the introduction of the digital camera in the mid-1990s, several new digital phase-imaging methods have been developed, collectively known as quantitative phase-contrast microscopy. These methods digitally create two separate images, an ordinary bright-field image and a so-called phase-shift image. In each image point, the phase-shift image displays the quantified phase shift induced by the object, which is proportional to the optical thickness of the object. In this way measurement of the associated optical field can remedy the halo artifacts associated with conventional phase contrast by solving an optical inverse problem to computationally reconstruct the scattering potential of the object.
See also
Live cell imaging
Phase-contrast imaging
Phase-contrast X-ray imaging
References
External links
Optical Microscopy Primer — Phase Contrast Microscopy by Florida State University
Phase contrast and dark field microscopes (Université Paris-Sud)
Microscope parts you need to know
Dutch inventions
Cell imaging
Laboratory equipment
Optical microscopy techniques
Microscopes | Phase-contrast microscopy | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 1,145 | [
"Microscopes",
"Cell imaging",
"Measuring instruments",
"Microscopy"
] |
8,008,410 | https://en.wikipedia.org/wiki/Downtime | In computing and telecommunications, downtime (also (system) outage or (system) drought colloquially) is a period when a system is unavailable. The unavailability is the proportion of a time-span that a system is unavailable or offline.
This is usually a result of the system failing to function because of an unplanned event, or because of routine maintenance (a planned event).
The terms are commonly applied to networks and servers. The common reasons for unplanned outages are system failures (such as a crash) or communications failures (commonly known as network outage or network drought colloquially). For outages due to issues with general computer systems, the term computer outage (also IT outage or IT drought) can be used.
The term is also commonly applied in industrial environments in relation to failures in industrial production equipment. Some facilities measure the downtime incurred during a work shift, or during a 12- or 24-hour period. Another common practice is to identify each downtime event as having an operational, electrical or mechanical origin.
The opposite of downtime is uptime.
Types
Industry standards for the terms "outage duration" or "maintenance duration" can have different points of initiation and completion; the following clarifications should therefore be used to avoid conflicts in contract execution:
"Turnkey" this is the most engrossing of all outage types. Outage or Maintenance starts with operator of the plant or equipment pressing the shutdown or stop button to initiate a halt in operation. Unless otherwise noted, Outage or Maintenance is considered completed when the plant or equipment is back in normal operation ready to begin manufacturing or ready be synchronized with system or grid or ready to perform duties as pump or compressor.
"Breaker to Breaker" This Outage or Maintenance starts with operator of the plant or equipment removing the power circuit (Main power breaker at "off" or "disengaged" or "On-Cooldown"), not the control circuit from operation. This still would allow for the equipment to be cooled down or brought to ambient such that outage/maintenance work can be prepared or initiated. Depending on equipment types, "Breaker to Breaker" outage can be advantageous if contracting out controls related maintenance as this type of maintenance work can be performed while main equipment is still on cool-down or on stand-by. Unless otherwise noted, this type of outage is considered complete when power circuit is re-energized via engaging of the power breaker.
"Completion of Lock-out/Tag-out" This Outage or Maintenance (sometimes mistaken for "Off-Cooldown" but not the same) starts with operator of the plant or equipment removing the power circuit, disengaging the control circuit and performing other neutralization of potential power and hazard sources (typically called Lock-Out, Tag-Out "LOTO") This point of maintenance period is typically the last phase of the outage initiation stage before actual work starts on the facility, plant or equipment. Safety briefing should always follow the LOTO activity, before any work is conducted. Unless otherwise noted, this type of outage is considered complete when the equipment has reached mechanical completion and ready to be placed on slow-roll for many heavy rotating equipment, Bump-test or rotation check for motors, etc., but must follow return or work permit per LOTO procedures.
Any on-line testing, performance testing, and tuning required should not count towards the outage duration, as these activities are typically conducted after the completion of the outage or maintenance event and are outside the control of most maintenance contractors.
Characteristics
Unplanned downtime may be the result of an equipment malfunction, a software fault, human error, overload, or an environmental problem such as loss of power.
Telecommunication outage classifications
Downtime can be caused by failure in
hardware (physical equipment),
software (logic controlling equipment),
interconnecting equipment (such as cables, facilities, routers,...),
transmission (wireless, microwave, satellite), and/or
capacity (system limits).
The failures can occur because of
damage,
failure,
design,
procedural (improper use by humans),
engineering (how to use and deployment),
overload (traffic or system resources stressed beyond designed limits),
environment (support systems like power and HVAC),
planned (outages designed into the system for a purpose such as software upgrades and equipment growth),
other (none of the above but known), or
unknown.
The failures can be the responsibility of
customer/service provider,
vendor/supplier,
utility,
government,
contractor,
end customer,
public individual,
act of nature,
other (none of the above but known), or
unknown.
Impact
Outages caused by system failures can have a serious impact on the users of computer/network systems, in particular those industries that rely on a nearly 24-hour service:
Medical informatics
Nuclear power and other infrastructure
Banks and other financial institutions
Aeronautics, airlines
News reporting
E-commerce and online transaction processing
Persistent online games
Also affected can be the users of an ISP and other customers of a telecommunication network.
Corporations can lose business due to a network outage, or they may default on a contract, resulting in financial losses. According to Veeam's 2019 cloud data management report, organizations encounter unplanned downtime, on average, 5–10 times per year, with the average cost of one hour of downtime being $102,450.
Those people or organizations that are affected by downtime can be more sensitive to particular aspects:
some are more affected by the length of an outage - it matters to them how much time it takes to recover from a problem
others are sensitive to the timing of an outage - outages during peak hours affect them the most
The most demanding users are those that require high availability.
Famous outages
On Mother's Day, Sunday, May 8, 1988, a fire broke out in the main switching room of the Hinsdale Central Office of the Illinois Bell telephone company. One of the largest switching systems in the state, the facility processed more than 3.5 million calls each day while serving 38,000 customers, including numerous businesses, hospitals, and Chicago's O'Hare and Midway Airports.
Virtually the entire AT&T network of 4ESS toll tandems switches went in and out of service over and over again on January 15, 1990, disrupting long-distance service for the entire United States. The problem dissipated by itself when traffic slowed down. A software bug was found.
AT&T lost its Frame Relay network for 26 hours on April 13, 1998. This affected many thousands of customers, and bank transactions were one casualty. AT&T failed to meet the service level agreement on their contracts with customers and had to refund 6,600 customer accounts, costing millions of dollars.
Xbox Live had intermittent downtime during the 2007–2008 holiday season which lasted thirteen days. Increased demand from Xbox 360 purchasers (the largest number of new user sign-ups in the history of Xbox Live) was given as the reason for the downtime; in order to make amends for the service issues, Microsoft offered their users the opportunity to receive a free game.
Sony's PlayStation Network outage of April 2011 began on April 20, 2011, and service was gradually restored starting on May 14, 2011, beginning in the United States. This outage is the longest the PSN has been offline since its inception in 2006. Sony stated the problem was caused by an external intrusion which resulted in the theft of personal information. Sony reported on April 26, 2011, that a large amount of user data had been obtained by the same hack that resulted in the downtime.
Telstra's Ryde switch failed in late 2011 after water ingress into the electrical switchboard from continuing wet weather. The Ryde switch is one of the largest switches by area in Australia, and the failure affected more than 720,000 services.
The Miami datacenter of ServerAxis went offline unannounced on February 29, 2016, and was never restored. This impacted multiple providers and hundreds of websites. The outage impacted coverage of the 2016 NCAA Division I women's basketball tournament as WBBState, one of the affected sites, was by far the most comprehensive provider of women's basketball statistics available.
The game platform Roblox had an outage around October 2021, during its Chipotle event. Many users thought the outage was caused by the event, which drew massive attention because users could get a free Chipotle burrito during it. The outage was Roblox's longest downtime, lasting three days.
On July 8, 2022, Rogers suffered a major nationwide outage in Canada. This simultaneously affected cell phone and internet access, causing 911 calls and interbank transactions to fail and also disrupting government services.
On July 19, 2024, CrowdStrike issued a faulty device driver update for its Falcon software, causing Windows PCs, servers, and virtual machines to crash and boot-loop. The incident unintentionally affected approximately 8.5 million Windows machines worldwide, including critical infrastructure such as 911 services in various states. It is considered to be the largest outage in the history of information technology.
Service levels
In service level agreements, it is common to specify a percentage value (per month or per year) that is calculated by dividing the sum of all downtime timespans by the total time of a reference time span (e.g. a month). 0% downtime means that the server was available all the time.
For Internet servers, downtime above 1% per year can be regarded as unacceptable, as this means more than 3 days of downtime per year. For e-commerce and other industrial uses, any value above 0.1% is usually considered unacceptable.
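As a minimal sketch (the outage figures and the function name below are illustrative, not taken from any particular service level agreement), the downtime percentage can be computed as follows:

    # Downtime percentage over a reference period (illustrative sketch).
    def downtime_percentage(downtimes_minutes, period_minutes):
        """Sum of all downtime timespans divided by the reference time span."""
        return 100.0 * sum(downtimes_minutes) / period_minutes

    # Example: three outages in a 30-day month (43,200 minutes).
    outages = [12.0, 45.0, 3.5]                           # outage durations in minutes
    print(downtime_percentage(outages, 30 * 24 * 60))     # ~0.14% downtime
    # Availability is the complement: ~99.86% for this month.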
Response and reduction of impact
It is the duty of the network designer to make sure that a network outage does not happen. When it does happen, a well-designed system will further reduce the effects of an outage by having localized outages which can be detected and fixed as soon as possible.
A process needs to be in place to detect a malfunction (network monitoring) and to restore the network to a working condition. This generally involves a help desk team of trained engineers who can troubleshoot the problem; a separate help desk team is usually necessary to field user input, which can be particularly demanding during downtime.
A network management system can be used to detect faulty or degrading components prior to customer complaints, with proactive fault rectification.
Risk management techniques can be used to determine the impact of network outages on an organisation and what actions may be required to minimise risk. Risk may be minimised by using reliable components, by performing maintenance, such as upgrades, by using redundant systems or by having a contingency plan or business continuity plan.
Technical means can reduce errors with error correcting codes, retransmission, checksums, or diversity scheme.
One of the biggest causes of downtime is misconfiguration, where a planned change goes wrong. Typically organisations rely on manual effort to manage the process of configuration backups, but this requires highly skilled engineers with the time to manage the process across a multi-vendor network. Automation tools are available to manage backups, but there are very few solutions that handle configuration recovery which is needed to minimize the overall impact of the outage.
Planning
A planned outage is the result of a planned activity by the system owner and/or by a service provider. These outages, often scheduled during the maintenance window, can be used to perform tasks including the following:
Deferred maintenance, e.g., a deferred hardware repair or a deferred restart to clean up garbled memory
Diagnostics to isolate a detected fault
Hardware fault repair
Fixing an error or omission in a configuration database or in a recent configuration database change
Fixing an error in an application database or in a recent application database change
Software patching/software updates to fix a software fault.
Outages can also be planned as a result of a predictable natural event, such as Sun outage.
Maintenance downtimes have to be carefully scheduled in industries that rely on computer systems. In many cases, system-wide downtimes can be averted using what is called a "rolling upgrade" - the process of incrementally taking down parts of the system for upgrade, without affecting the overall functionality.
Avoidance
For most websites, website monitoring is available. Website monitoring (synthetic or passive) is a service that "monitors" downtime and users on the site.
Other usage
Downtime can also refer to time when human capital or other assets go down. For instance, if employees are in meetings or unable to perform their work due to another constraint, they are down. This can be equally expensive, and can be the result of another asset (i.e. computer/systems) being down. This is also commonly known as "dead time".
Downtime is also generalized in a personal sense, being used to refer to a period of sleep or recreation.
This term is used also in factories or industrial use. See total productive maintenance (TPM).
Measuring downtime
There are many external services which can be used to monitor the uptime and downtime as well as availability of a service or a host.
See also
High availability
Uptime
Mean down time
Planned downtime
Carrier grade
References
External links
Engineering failures
Maintenance
System administration | Downtime | [
"Technology",
"Engineering"
] | 2,760 | [
"Systems engineering",
"Reliability engineering",
"Technological failures",
"Engineering failures",
"System administration",
"Information systems",
"Civil engineering",
"Mechanical engineering",
"Maintenance"
] |
8,008,662 | https://en.wikipedia.org/wiki/Tafel%20equation | The Tafel equation is an equation in electrochemical kinetics relating the rate of an electrochemical reaction to the overpotential. The Tafel equation was first deduced experimentally and was later shown to have a theoretical justification. The equation is named after Swiss chemist Julius Tafel. It describes how the electrical current through an electrode depends on the voltage difference between the electrode and the bulk electrolyte for a simple, unimolecular redox reaction.
Where an electrochemical reaction occurs in two half reactions on separate electrodes, the Tafel equation is applied to each electrode separately. On a single electrode the Tafel equation can be stated as:

i = i_0 \, 10^{\pm \eta / A} \quad \text{(equivalently, } \eta = \pm A \log_{10}(i/i_0)\text{)}

where
the plus sign under the exponent refers to an anodic reaction, and a minus sign to a cathodic reaction,
\eta : overpotential, [V]
A : "Tafel slope", [V]
i : current density, [A/m2]
i_0 : "exchange current density", [A/m2].
The Tafel equation is an approximation of the Butler–Volmer equation in the case of large overpotentials. The Tafel equation assumes that the concentrations at the electrode are practically equal to the concentrations in the bulk electrolyte, allowing the current to be expressed as a function of potential only. In other words, it assumes that the electrode mass transfer rate is much greater than the reaction rate, and that the reaction is dominated by the slower chemical reaction rate. Also, at a given electrode the Tafel equation assumes that the reverse half-reaction rate is negligible compared to the forward reaction rate.
Overview of the terms
The exchange current is the current at equilibrium, i.e. the rate at which oxidized and reduced species transfer electrons with the electrode. In other words, the exchange current density is the rate of reaction at the reversible potential (when the overpotential is zero by definition). At the reversible potential, the reaction is in equilibrium meaning that the forward and reverse reactions progress at the same rates. This rate is the exchange current density.
The Tafel slope is measured experimentally. It can, however, be shown theoretically that, when the dominant reaction mechanism involves the transfer of a single electron, the slope is given by

A = \frac{\ln 10 \; k_{\mathrm{B}} T}{\alpha q} = \frac{\ln 10 \; V_{\mathrm{T}}}{\alpha}

where
k_B is the Boltzmann constant,
T is the absolute temperature,
q is the electric elementary charge of an electron,
V_T = k_B T / q is the thermal voltage, and
\alpha is the charge transfer coefficient, the value of which must be between 0 and 1.
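As a small numerical sketch (assuming the base-10 form of the equation above, a single-electron transfer at room temperature, and an illustrative charge transfer coefficient of 0.5; the values are not measured data), the Tafel slope and the resulting current density can be computed as follows:

    import math

    K_B = 1.380649e-23       # Boltzmann constant, J/K
    Q = 1.602176634e-19      # elementary charge, C

    def tafel_slope(temperature_k, alpha):
        """Tafel slope A in volts per decade for a single-electron transfer."""
        return math.log(10) * K_B * temperature_k / (alpha * Q)

    def current_density(eta, i0, temperature_k, alpha):
        """Anodic branch of the Tafel equation: i = i0 * 10**(eta / A)."""
        return i0 * 10 ** (eta / tafel_slope(temperature_k, alpha))

    A = tafel_slope(298.15, 0.5)
    print(A)                                            # ~0.118 V per decade (59 mV / 0.5)
    print(current_density(0.118, 1e-2, 298.15, 0.5))    # ~0.1 A/m2, one decade above i0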
Equation in case of non-negligible electrode mass transfer
In a more general case, the current is expressed as a function not only of potential (as in the simple version above), but of the given concentrations as well; the treatment follows that of Bard and Faulkner, and of Newman and Thomas-Alyea. The mass-transfer rate may be relatively small, but its only effect on the chemical reaction is through the altered (given) concentrations; in effect, the concentrations are a function of the potential as well. The Tafel equation can then also be written as:

i = n F k C \, e^{\pm \alpha \eta F / (R T)}
where
n is the number of electrons exchanged, like in the Nernst equation,
k is the rate constant for the electrode reaction in s−1,
F is the Faraday constant,
C is the reactive species concentration at the electrode surface in mol/m2,
the plus sign under the exponent refers to an anodic reaction, and a minus sign to a cathodic reaction,
R is the universal gas constant.
\alpha is the charge transfer coefficient, the value of which must be between 0 and 1.
Demonstration
Writing the simple form of the Tafel equation as i = i_0 \, e^{\pm \alpha \eta F/(R T)} (equivalent to i = i_0 \, 10^{\pm\eta/A} with the Tafel slope defined above, since R/F = k_B/q), and identifying the exchange current density with i_0 = n F k C, where C is the reactive species concentration at the electrode surface set by the electrode mass transfer, yields the extended form given above.
Equation in case of low values of polarization
Another equation is applicable at low values of polarization (\eta \approx 0). In this case, the dependence of current on polarization is usually linear (not logarithmic):

i = i_0 \, \frac{n F}{R T} \, \eta

The ratio of polarization to current in this linear region is called the polarization resistance due to its formal similarity to Ohm's law.
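A small sketch of the corresponding polarization resistance, assuming the linear form above with n = 1 at room temperature and an illustrative exchange current density (the numbers are assumptions, not measurements):

    # Polarization resistance from the linear low-overpotential approximation.
    R_GAS = 8.314462618      # universal gas constant, J/(mol*K)
    FARADAY = 96485.33212    # Faraday constant, C/mol

    def polarization_resistance(i0, n=1, temperature_k=298.15):
        """R_p = R*T / (n*F*i0), in ohm*m^2 when i0 is given in A/m^2."""
        return R_GAS * temperature_k / (n * FARADAY * i0)

    print(polarization_resistance(1e-2))   # ~2.57 ohm*m^2 for i0 = 0.01 A/m^2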
Kinetics of corrosion
The pace at which corrosion develops is determined by the kinetics of the reactions involved, hence the electrical double layer is critical.
Applying an overpotential to an electrode causes the reaction to move in one direction, away from equilibrium. Tafel's law determines the new rate, and as long as the reaction is under kinetic (activation) control, the overpotential is proportional to the logarithm of the corrosion current.
See also
Overpotential
Butler–Volmer equation
Electrocatalyst
Faradaic current
Faraday's laws of electrolysis
References
Further reading
External links
Chemical kinetics
Electrochemical equations
Physical chemistry | Tafel equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 977 | [
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Equations",
"Electrochemistry",
"nan",
"Chemical kinetics",
"Physical chemistry",
"Electrochemical equations"
] |
8,009,013 | https://en.wikipedia.org/wiki/Superior%20fascia%20of%20the%20urogenital%20diaphragm | The superior fascia of the urogenital diaphragm is continuous with the obturator fascia and stretches across the pubic arch.
Structure
If the obturator fascia be traced medially after leaving the obturator internus muscle, it will be found attached by some of its deeper or anterior fibers to the inner margin of the pubic arch, while its superficial or posterior fibers pass over this attachment to become continuous with the superior fascia of the urogenital diaphragm.
Behind, this layer of the fascia is continuous with the inferior fascia and with the fascia of Colles; in front it is continuous with the fascial sheath of the prostate, and is fused with the inferior fascia to form the transverse ligament of the pelvis.
Controversy
Some sources dispute that this structure exists. However, whether this layer is real or imagined, it still serves to describe a division of the contents of the perineum in many modern anatomy resources.
References
External links
Genitourinary system
Fascia | Superior fascia of the urogenital diaphragm | [
"Biology"
] | 212 | [
"Organ systems",
"Genitourinary system"
] |
10,348,500 | https://en.wikipedia.org/wiki/Material%20flow%20management | Material flow management (MFM) is an economic focused method of analysis and reformation of goods production and subsequent waste through the lens of material flows, incorporating themes of sustainability and the theory of a circular economy. It is used in social, medical, and urban contexts. However, MFM has grown in the field of industrial ecology, combining both technical and economic approaches to minimize waste that impacts economic prosperity and the environment. It has been heavily utilized by the country of Germany, but it has been applied to the industries of various other countries. The material flow management process utilizes the Sankey diagram, and echoes the circular economy model, while being represented in media environments as a business model which may help lower the costs of production and waste.
Context
Material flow management began as a largely academic discourse, eventually becoming an actual tool implemented by both countries and industries.
The first clear suggestion of material flow management was that of Robert A. Frosch and Nicholas E. Gallopoulos. Published in the Scientific American Journal in 1989, Frosch and Gallopoulos introduced and recommended the optimization of waste from industrial processes to then be reused for another. While lacking in detail, the analysis of material flow management continued to develop years later, with Robert Socolow and Valerie Thomas beginning to support findings with data, publishing their work in the Journal of Industrial Ecology in 1997.
Material flow management was established as a policy at the 1992 Rio De Janeiro United Nations Conference on Environment and Development (UNCED), or UN "Earth Summit" Conference. The event was later credited as an advancement towards three UN treaties: Framework Convention on Climate Change, the Convention on Biological Diversity, and the Convention to Combat Desertification.
Material flow management has been credited as a factor in environmental sustainability and environmental management, given its focus on responsible management of ecosystems and ecosystem services for current use, and that of future generations.
Uses and applications
One of the terms used in academic and practical discussions of material flow management is "material flow analysis" (MFA), which is identified as part of the MFM process. MFA is the more target-oriented analysis of substance flow within a system of production, especially within a company.
Material flow analysis is the responsibility of both organized governments and industries. While policies produced by governmental bodies create a framework, the actual design and implementation are done by industries. There are several stakeholders involved in these processes.
Material flow management assessment began to take country- and government-focused approaches following a 1997 publication by the World Resources Institute for the Netherlands and Germany. It displayed the total flow, soon adjusted to divide overall flows into its major constituents. In 2002, the United States Environmental Protection Agency released the report, Beyond RCRA: Waste and Materials Management in the Year 2020, finding that it is time for society to shift from a waste management-focused environmental plan to a material management-focused plan.
Another assessment, conducted by Taylor Searcy in 2017, examined revitalizing Fiji's sustainable sea transportation industry to improve socio-economic and environmental impacts.
A 2019 study of the material flow in Brazil's mortar and concrete supply chain concluded that, in terms of material use efficiency, the ratio of product to material consumption results in a low score, with the most notable inefficiencies being quarry waste and building waste at extraction and construction sites.
Government policies
The United States began seriously incorporating material flow management in its environmental policies with the Resource Conservation and Recovery Act of 1976 (RCRA). This gave the government the ability to control hazardous waste produced at all steps of production. Congress later strengthened the RCRA with the Hazardous and Solid Waste Amendments of 1984, incorporating more preventative policies.
In 2006, Israel released a Sustainable Solid Waste Management Plan, outlining green goals and priorities for the country's waste system, including economic tools of execution. Policies for household recycling and waste collection separation were then solidified with the 2010 Recycling Action Plan.
Korea has also introduced various policies that have executed MFM, specifically in regard to food waste. A 2005 ban on putting untreated food in landfills was followed by a 2012 ban on ocean dumping. In addition to these environmental initiatives, the country combined the economic and social aspects of MFM using food waste agreements with vital economic sectors, as well as public awareness campaigns.
Sankey diagram
An important tool for MFM is the Sankey diagram, a visual representation of industrial ecology in which the width of each arrow is proportional to the size of the flow it represents. It was developed by Irish engineer Matthew Henry Phineas Riall Sankey to analyze the efficiency of steam engines and has since become a tool in industrial engineering and science. While Sankey diagrams were long used mostly in historical and engineering contexts, they are also useful for assessing ecological impacts.
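As a minimal sketch of how such a diagram can be produced (using matplotlib's sankey module; the flow values and labels below are invented purely for illustration):

    import matplotlib.pyplot as plt
    from matplotlib.sankey import Sankey

    # Hypothetical material flows for one production step (arbitrary units):
    # positive values enter the process, negative values leave it.
    sankey = Sankey(unit=' t')
    sankey.add(flows=[100, -70, -20, -10],
               labels=['raw material', 'product', 'recycled internally', 'waste'],
               orientations=[0, 0, 1, -1])
    sankey.finish()
    plt.title('Illustrative material flow for a single process')
    plt.show()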
Circular economy
A circular economy is a model of resource production and consumption in any economy that involves sharing, leasing, reusing, repairing, refurbishing, and recycling existing materials and products. The circular economy, an economic system still in the development process (not yet widely adopted), intends to model itself after the material flow management and energy models in biological systems. Focusing on society-wide benefits, it designs a system without waste or pollution and intends to keep products and materials in the system for as long as possible. Applications of the circular economy in the European Union have produced evidence of practicality, estimating that implementation in agricultural, chemical, and construction sectors could reduce up to 7.5 billion tonnes of CO2e globally.
Media representation
In current studies on the effectiveness of MFM for improving productivity, analysis of its implementation has been debated in relation to government roles in environmental management. With growing concern about environmental crises and the depletion of resources by human activity, MFM can be interpreted both as a promoter of the circular economy and as an analysis of its necessity.
In this light, MFM is being utilized as a business strategy that would be meant to optimize vertical integration of manufacturing. Companies focusing on the economics behind MFM, rather than the subject of the environmental crisis, may take note of how MFM lowers the cost of materials by creating an efficient approach to sustainability.
Material flow management as a business model may appear to some to promote sustainability in the long run. However, critics still find these two terms – MFM and sustainability – to be at odds, despite the vertical integration model of manufacturing showing agreement in their processes.
See also
Environmental ethics, a part of environmental philosophy which argues for the protection of natural entities, informing waste management practices
Environmentalism, a philosophy which centers on the protection of the Earth and can be achieved through material flow management
Environmental protection, a practice involving the protection of natural environments and taking preventative measures that may include material flow management
Material flow accounting, a method of studying the flow of materials and wastes on national or regional scales, with economic factors which lend themselves to material flow management
References
External links
Cadence: Material Flow Analysis in Manufacturing Improves Production and Efficiency
iPoint: Sankey Diagram Software
Materials
Industrial ecology
Sustainability and environmental management | Material flow management | [
"Physics",
"Chemistry",
"Engineering"
] | 1,432 | [
"Industrial engineering",
"Materials",
"Environmental engineering",
"Industrial ecology",
"Matter"
] |