A slurry pump is a type of pump designed for pumping liquid containing solid particles. Slurry pumps vary in design and construction to suit the slurry being handled, which differs in concentration of solids, size of solid particles, shape of solid particles, and composition of the carrying liquid. Slurry pumps are more robust than liquid pumps; they have added sacrificial material and replaceable wear parts to withstand wear due to abrasion.
Centrifugal, positive displacement, and vortex pumps can be used for slurry. Centrifugal slurry pumps can have shafts supported between bearings, with split casings or rubber- or metal-lined casings. Configurations include horizontal, vertical suspended, and submersible.
Slurry is usually classified according to the concentration of solids. Engineering classification of slurry is more complex and involves concentration, particle size, shape and weight in order to determine abrasion severity. For engineering selection of slurry pumps, slurry is classified as class 1, class 2, class 3 and class 4.
Selection of slurry pumps is more difficult than selection of pumps for water and other liquids. Many factors and corrections to the duty point affect brake horsepower and wear. Rotodynamic Centrifugal Slurry Pumps (ANSI/HI 12.1-12.6-2016) provides methods for the calculation of slurry pumps. The peripheral speed of the impeller is one of the main features used in the classification of slurry pumps; it must be in accordance with the slurry type classification (abrasion classification) in order to maintain a reasonable service life despite the high abrasiveness of the solids.
Before selecting an appropriate slurry pump, engineers consider capacity, head, solids-handling capability, efficiency and power, speed, and NPSH.
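As a rough illustration of how these duty-point quantities interact, the following minimal sketch estimates the brake power of a candidate pump from flow, head, slurry specific gravity, and efficiency, using the standard hydraulic power relation P = ρgQH; the numeric values are illustrative assumptions, not data from any pump standard.

```python
# Minimal sketch: brake power at a slurry pump duty point.
# The hydraulic power relation P = rho * g * Q * H is standard; the specific
# gravity, flow, head, and efficiency below are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def brake_power_kw(flow_m3_s: float, head_m: float,
                   slurry_sg: float, efficiency: float) -> float:
    """Brake power in kW for a given duty point.

    flow_m3_s  -- volumetric flow rate (m^3/s)
    head_m     -- total dynamic head (m)
    slurry_sg  -- specific gravity of the slurry (water = 1.0)
    efficiency -- overall pump efficiency (0..1), already derated for slurry
    """
    rho = 1000.0 * slurry_sg                    # slurry density, kg/m^3
    hydraulic_w = rho * G * flow_m3_s * head_m  # hydraulic power, W
    return hydraulic_w / efficiency / 1000.0

# Example: 0.1 m^3/s of SG-1.4 slurry against 30 m of head at 60% efficiency.
print(f"{brake_power_kw(0.1, 30.0, 1.4, 0.60):.1f} kW")  # ~68.7 kW
```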
Slurry pumps are widely used in the transport of abrasive solids in industries such as mining, dredging, and steel. They are often designed to be suitable for heavy-wearing and heavy-duty uses. Depending on the mining process, some slurries are corrosive, which presents a challenge because corrosion-resistant materials such as stainless steel are softer than high-iron steel. The most common metal alloy used to build slurry pumps is known as "high chrome", which is essentially white iron with about 25% chromium added to make it less brittle. Rubber-lined casings are also used for certain applications where the solid particles are small.
Components
Impeller The impeller, made of elastomer, stainless steel, or high-chrome material, is the main rotating component, which normally has vanes to impart the centrifugal force to the liquid.
Casing Split outer casing halves, of cast construction, contain the wear liners and provide high operating-pressure capability. The casing shape is generally semi-volute or concentric, with efficiencies less than those of the volute type.
Shaft and Bearing Assembly A large-diameter shaft with a short overhang minimizes deflection and vibration. Heavy-duty roller bearings are housed in a removable bearing cartridge.
Shaft sleeve A hardened, heavy-duty, corrosion-resistant sleeve with O-ring seals at both ends protects the shaft. A split fit allows the sleeve to be removed or installed quickly.
Shaft Seal Expeller seal, packing seal, or mechanical seal.
Drive Type V-belt drive, gear reducer drive, fluid coupling drive, and frequency conversion drive devices.
Types
Submersible Submersible slurry pumps are placed at the bottom of a tank, lagoon, pond, or other water-filled environment, and draw in solids and liquids directly at the pump itself. The material is taken in at the intake and passed through a hose connected to the discharge valve.
Self-Priming A self-priming slurry pump is operated from land, with a hose connected to the pump's intake valve. The self-priming pump draws the slurry up to itself and then discharges the material from there.
Flooded Suction The flooded-suction slurry pump is connected to a tank or hopper and uses gravity to move slurry and liquid from the enclosure. Located at the bottom of or below the vessel, the pump is continuously filled by gravity and passes the material out through the discharge valve.
Pumps
A flux transfer event (FTE) occurs when a magnetic portal opens in the Earth's magnetosphere through which high-energy particles flow from the Sun. This connection, while previously thought to be permanent, has been found to be brief and very dynamic. The European Space Agency's four Cluster spacecraft and NASA's five THEMIS probes have flown through and surrounded these FTEs, measuring their dimensions and identifying the particles that are transferred between the magnetic fields.
Formation
Earth's magnetosphere and the Sun's magnetic field are constantly pressed against one another on the dayside of Earth. Approximately every eight minutes, these fields briefly merge, forming a temporary "portal" between the Earth and the Sun through which high-energy particles such as solar wind can flow. The portal takes the shape of a magnetic cylinder about the width of Earth. Current observations place the portal at up to 4 times the size of Earth.
Simulations
Since Cluster and THEMIS have directly sampled FTEs, scientists can simulate FTEs on computers to predict how they might behave. Jimmy Raeder of the University of New Hampshire told his colleagues that simulations show the cylindrical portals tend to form above Earth's equator and then roll over Earth's winter pole. In December, FTEs roll over the North Pole; in July, they roll over the South Pole.
Flux transfer events beyond Earth
Magnetic fields similar to Earth's are common throughout known space and many undergo similar flux transfer events. During its second flyby of the planet on October 6, 2008, the NASA probe MESSENGER discovered that Mercury’s magnetic field shows a magnetic reconnection rate ten times higher than Earth's. Mercury's proximity to the Sun only accounts for about a third of the reconnection rate observed by MESSENGER and the cause of this discrepancy is not currently known.
Most recently, it has been found that the same phenomenon, also known as a 'magnetic rope', can be observed at Saturn. The findings prove that at times Saturn "behaves and interacts with the Sun in much the same way as Earth".
See also
Flux tube
Magnetic flux
References
External links
Magnetic Portals Connect Sun and Earth
A Giant Breach in Earth's Magnetic Field
Planetary science
Space plasmas
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac or bitumen macadam in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, airports, and the core of embankment dams. It consists of mineral aggregate bound together with bitumen (a substance also independently known as asphalt, pitch, or tar), laid in layers, and compacted. Asphalt mixtures have been used in pavement construction since the nineteenth century.
The American English terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
History
Natural asphalt (Ancient Greek: ἄσφαλτος (ásphaltos)) has been known and used since antiquity, in Mesopotamia, Phoenicia, Egypt, Babylon, Greece, Carthage, and Rome, to waterproof temple baths, reservoirs, aqueducts, tunnels, and moats, as a masonry mortar, to caulk vessels, and to surface roads. The Procession Street of Babylonian King Nabopolassar, c. 625 BC, leading north from his palace through the city's wall, was described as being constructed from burnt brick and asphalt. Cobbles covered and bonded with natural asphalt were used from 1824, in France, as a means to construct roads. In 1829, natural Seyssel asphalt mixed with 7% aggregate, creating an asphalt-mastic surface, was used for a footpath at Pont Morand, Lyons, France; the technique spread to Paris in 1835, to London, England, in 1836, and to Philadelphia, USA, in 1838. A two-mile stretch of gravel road running out of Nottingham, and Huntingdon High Street, were experimentally covered in natural asphalt during the 1840s. The first macadam road surfaced with asphalt was constructed in 1852, between Paris and Perpignan, France, using Swiss Val de Travers rock asphalt (limestone aggregate covered and bonded with natural asphalt). In 1869, Threadneedle Street, in London, England, was resurfaced with Swiss Val de Travers rock asphalt. A process to surface a packed sand road by applying heated natural asphalt mixed with sand, in a ratio of 1:5, rolling it, and hardening it through the application of natural asphalt mixed with a petroleum oil, was invented by Belgian-American chemist Edward De Smedt, at Columbia University, in 1870; he obtained a pair of U.S. patents for the material and the method of hardening. Edgar Purnell Hooley, a civil engineer, surveyor, and member of an English county highway board, created a process and engine to combine a synthetic, refined petroleum tar and resin with macadam aggregates (gravel, portland cement, crushed rocks, and blast furnace slag) in a steam-heated mixer at 212 °F and, through a heated reservoir, conduits, and meshes, to apply the material to form a road surface; he filed a UK patent for his improvement in 1902. Hooley founded a UK company to market the technology; the term tar macadam, shortened to tarmac, was coined from the name of his company, Tar Macadam (Purnell Hooley's Patent) Syndicate Limited, reflecting the combination of tar and macadam gravel in the composite mixture.
Mixture formulations
Mixing of asphalt and aggregate is accomplished in one of several ways:
Hot-mix asphalt concrete (commonly abbreviated as HMA) This is produced by heating the asphalt binder to decrease its viscosity and drying the aggregate to remove moisture from it prior to mixing. Mixing is generally performed with the aggregate at an elevated temperature (with a different target temperature for polymer-modified asphalt than for virgin asphalt) and with the asphalt cement also heated. Paving and compaction must be performed while the asphalt is sufficiently hot. In many locales paving is restricted to summer months because in winter the base will cool the asphalt too quickly before it can be packed to the required density. HMA is the form of asphalt concrete most commonly used on high traffic pavements such as those on major highways, racetracks and airfields. It is also used as an environmental liner for landfills, reservoirs, and fish hatchery ponds.
Warm-mix asphalt concrete (commonly abbreviated as WMA) This is produced by adding either zeolites, waxes, asphalt emulsions or sometimes water to the asphalt binder prior to mixing. This allows significantly lower mixing and laying temperatures and results in lower consumption of fossil fuels, thus releasing less carbon dioxide, aerosols and vapors. This improves working conditions, and lowers laying-temperature, which leads to more rapid availability of the surface for use, which is important for construction sites with critical time schedules. The usage of these additives in hot-mixed asphalt (above) may afford easier compaction and allow cold-weather paving or longer hauls. Use of warm mix is rapidly expanding. A survey of US asphalt producers found that nearly 25% of asphalt produced in 2012 was warm mix, a 416% increase since 2009. Cleaner road pavements can be potentially developed by combining WMA and material recycling. Warm Mix Asphalt (WMA) technology has environmental, production, and economic benefits.
Cold-mix asphalt concrete This is produced by emulsifying the asphalt in water with an emulsifying agent before mixing with the aggregate. While in its emulsified state, the asphalt is less viscous and the mixture is easy to work and compact. The emulsion will break after enough water evaporates and the cold mix will, ideally, take on the properties of an HMA pavement. Cold mix is commonly used as a patching material and on lesser-trafficked service roads.
Cut-back asphalt concrete This is a form of cold mix asphalt produced by dissolving the binder in kerosene or another lighter fraction of petroleum before mixing with the aggregate. While in its dissolved state, the asphalt is less viscous and the mix is easy to work and compact. After the mix is laid down, the lighter fraction evaporates. Because of concerns with pollution from the volatile organic compounds in the lighter fraction, cut-back asphalt has been largely replaced by asphalt emulsion.
Mastic asphalt concrete, or sheet asphalt This is produced by heating hard grade blown bitumen (i.e., partly oxidised) in a green cooker (mixer) until it has become a viscous liquid, after which the aggregate mix is added. The bitumen-aggregate mixture is cooked (matured) for around 6–8 hours and, once it is ready, the mastic asphalt mixer is transported to the work site, where experienced layers empty the mixer and either machine- or hand-lay the mastic asphalt contents onto the road. Mastic asphalt concrete is generally laid to a specified thickness, which differs between footpath and road applications and flooring or roof applications.
High-modulus asphalt concrete, sometimes referred to by the French-language acronym EMÉ (enrobé à module élevé) This uses a very hard bituminous formulation (penetration 10/20), sometimes modified, in proportions close to 6% by weight of the aggregates, as well as a high proportion of mineral powder (between 8–10%), to create an asphalt concrete layer with a high modulus of elasticity (of the order of 13,000 MPa). This makes it possible to reduce the thickness of the base layer by up to 25% (depending on the temperature) in relation to conventional bitumen, while offering very high fatigue strength. High-modulus asphalt layers are used both in reinforcement operations and in new construction for medium and heavy traffic. In base layers, they tend to exhibit a greater capacity for absorbing tensions and, in general, better fatigue resistance.
In addition to the asphalt and aggregate, additives, such as polymers, and antistripping agents may be added to improve the properties of the final product.
Areas paved with asphalt concrete—especially airport aprons—have been called "the tarmac" at times, despite not being constructed using the tarmacadam process.
A variety of specialty asphalt concrete mixtures have been developed to meet specific needs, such as stone-matrix asphalt, which is designed to ensure a strong wearing surface, or porous asphalt pavements, which are permeable and allow water to drain through the pavement for controlling storm water.
Roadway performance characteristics
Different types of asphalt concrete have different performance characteristics in roads, in terms of surface durability, tire wear, braking efficiency and roadway noise. In principle, the determination of appropriate asphalt performance characteristics must take into account the volume of traffic in each vehicle category and the performance requirements of the friction course. In general, the viscosity of asphalt allows it to conveniently form a convex surface, with a central apex, so that streets and roads drain water to the edges. This is not, however, in itself an advantage over concrete, which has various grades of viscosity and can also be formed into a convex road surface. Rather, it is the economy of asphalt concrete that renders it more frequently used. Concrete is more often found on interstate highways, where minimizing maintenance is crucial.
Asphalt concrete generates less roadway noise than a Portland cement concrete surface, and is typically less noisy than chip seal surfaces. Because tire noise is generated through the conversion of kinetic energy to sound waves, more noise is produced as the speed of a vehicle increases. The notion that highway design might take into account acoustical engineering considerations, including the selection of the type of surface paving, arose in the early 1970s.
With regard to structural performance, asphalt behaviour depends on a variety of factors including the material, the loading, and the environmental conditions. Furthermore, the performance of a pavement varies over time, so its long-term behaviour differs from its short-term performance. The Long-Term Pavement Performance (LTPP) program of the Federal Highway Administration (FHWA) is a research program focusing specifically on long-term pavement behaviour.
Degradation and restoration
Asphalt deterioration can include crocodile cracking, potholes, upheaval, raveling, bleeding, rutting, shoving, stripping, and grade depressions. In cold climates, frost heaves can crack asphalt even in one winter. Filling the cracks with bitumen is a temporary fix, but only proper compaction and drainage can slow this process.
Factors that cause asphalt concrete to deteriorate over time mostly fall into one of three categories: construction quality, environmental considerations, and traffic loads. Often, damage results from combinations of factors in all three categories.
Construction quality is critical to pavement performance. This includes the construction of utility trenches and appurtenances that are placed in the pavement after construction. Lack of compaction in the surface of the asphalt, especially on the longitudinal joint, can reduce the life of a pavement by 30 to 40%. Service trenches in pavements after construction have been said to reduce the life of the pavement by 50%, mainly due to the lack of compaction in the trench, and also because of water intrusion through improperly sealed joints.
Environmental factors include heat and cold, the presence of water in the subbase or subgrade soil underlying the pavement, and frost heaves.
High temperatures soften the asphalt binder, allowing heavy tire loads to deform the pavement into ruts. Paradoxically, high heat and strong sunlight also cause the asphalt to oxidize, becoming stiffer and less resilient, leading to crack formation. Cold temperatures can cause cracks as the asphalt contracts. Cold asphalt is also less resilient and more vulnerable to cracking.
Water trapped under the pavement softens the subbase and subgrade, making the road more vulnerable to traffic loads. Water under the road freezes and expands in cold weather, causing and enlarging cracks. In spring thaw, the ground thaws from the top down, so water is trapped between the pavement above and the still-frozen soil underneath. This layer of saturated soil provides little support for the road above, leading to the formation of potholes. This is more of a problem for silty or clay soils than sandy or gravelly soils. Some jurisdictions pass frost laws to reduce the allowable weight of trucks during the spring thaw season and protect their roads.
The damage a vehicle causes is roughly proportional to the axle load raised to the fourth power, so doubling the weight an axle carries actually causes 16 times as much damage. Wheels cause the road to flex slightly, resulting in fatigue cracking, which often leads to crocodile cracking. Vehicle speed also plays a role. Slowly moving vehicles stress the road over a longer period of time, increasing ruts, cracking, and corrugations in the asphalt pavement.
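The fourth-power law is easy to make concrete. The short sketch below compares hypothetical axle loads against an assumed reference axle; the load values are illustrative only, not taken from any design standard.

```python
# Illustrating the fourth-power law: relative road damage scales as
# (axle_load / reference_load) ** 4. All load values here are illustrative.

REFERENCE_AXLE_T = 8.0  # assumed reference axle load, tonnes

def relative_damage(axle_load_t: float) -> float:
    """Damage relative to the reference axle, per the fourth-power law."""
    return (axle_load_t / REFERENCE_AXLE_T) ** 4

for load_t in (4.0, 8.0, 16.0):
    print(f"{load_t:5.1f} t axle -> {relative_damage(load_t):6.2f}x reference damage")
# Doubling an axle load (8 t -> 16 t) gives 2**4 = 16x the damage, as stated above.
```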
Other causes of damage include heat damage from vehicle fires, or solvent action from chemical spills.
Prevention and repair of degradation
The life of a road can be prolonged through good design, construction and maintenance practices. During design, engineers measure the traffic on a road, paying special attention to the number and types of trucks. They also evaluate the subsoil to see how much load it can withstand. The pavement and subbase thicknesses are designed to withstand the wheel loads. Sometimes, geogrids are used to reinforce the subbase and further strengthen the roads. Drainage, including ditches, storm drains and underdrains are used to remove water from the roadbed, preventing it from weakening the subbase and subsoil.
Sealcoating asphalt is a maintenance measure that helps keep water and petroleum products out of the pavement.
Maintaining and cleaning ditches and storm drains will extend the life of the road at low cost. Sealing small cracks with bituminous crack sealer prevents water from enlarging cracks through frost weathering, or percolating down to the subbase and softening it.
For somewhat more distressed roads, a chip seal or similar surface treatment may be applied. As the number, width and length of cracks increases, more intensive repairs are needed. In order of generally increasing expense, these include thin asphalt overlays, multicourse overlays, grinding off the top course and overlaying, in-place recycling, or full-depth reconstruction of the roadway.
It is far less expensive to keep a road in good condition than it is to repair it once it has deteriorated. This is why some agencies place the priority on preventive maintenance of roads in good condition, rather than reconstructing roads in poor condition. Poor roads are upgraded as resources and budget allow. In terms of lifetime cost and long term pavement conditions, this will result in better system performance. Agencies that concentrate on restoring their bad roads often find that by the time they have repaired them all, the roads that were in good condition have deteriorated.
Some agencies use a pavement management system to help prioritize maintenance and repairs.
Recycling
Asphalt concrete is a recyclable material that can be reclaimed and reused both on-site and in asphalt plants. The most common recycled component in asphalt concrete is reclaimed asphalt pavement (RAP). RAP is recycled at a greater rate than any other material in the United States. Many roofing shingles also contain asphalt, and asphalt concrete mixes may contain reclaimed asphalt shingles (RAS). Research has demonstrated that RAP and RAS can replace the need for up to 100% of the virgin aggregate and asphalt binder in a mix, but this percentage is typically lower due to regulatory requirements and performance concerns. In 2019, new asphalt pavement mixtures produced in the United States contained, on average, 21.1% RAP and 0.2% RAS.
Recycling methods
Recycled asphalt components may be reclaimed and transported to an asphalt plant for processing and use in new pavements, or the entire recycling process may be conducted in-place. While in-place recycling typically occurs on roadways and is specific to RAP, recycling in asphalt plants may utilize RAP, RAS, or both. In 2019, an estimated 97.0 million tons of RAP and 1.1 million tons of RAS were accepted by asphalt plants in the United States.
RAP is typically received by plants after being milled on-site, but pavements may also be ripped out in larger sections and crushed in the plant. RAP millings are typically stockpiled at plants before being incorporated into new asphalt mixes. Prior to mixing, stockpiled millings may be dried and any that have agglomerated in storage may have to be crushed.
RAS may be received by asphalt plants as post-manufacturer waste directly from shingle factories, or they may be received as post-consumer waste at the end of their service life. Processing of RAS includes grinding the shingles and sieving the grinds to remove oversized particles. The grinds may also be screened with a magnetic sieve to remove nails and other metal debris. The ground RAS is then dried, and the asphalt cement binder can be extracted. For further information on RAS processing, performance, and associated health and safety concerns, see Asphalt Shingles.
In-place recycling methods allow roadways to be rehabilitated by reclaiming the existing pavement, remixing, and repaving on-site. In-place recycling techniques include rubblizing, hot in-place recycling, cold in-place recycling, and full-depth reclamation. For further information on in-place methods, see Road Surface.
Performance
During its service life, the asphalt cement binder, which makes up about 5–6% of a typical asphalt concrete mix, naturally hardens and becomes stiffer. This aging process primarily occurs due to oxidation, evaporation, exudation, and physical hardening. For this reason, asphalt mixes containing RAP and RAS are prone to exhibiting lower workability and increased susceptibility to fatigue cracking. These issues are avoidable if the recycled components are apportioned correctly in the mix. Practicing proper storage and handling, such as by keeping RAP stockpiles out of damp areas or direct sunlight, is also important in avoiding quality issues. The binder aging process may also produce some beneficial attributes, such as by contributing to higher levels of rutting resistance in asphalts containing RAP and RAS.
One approach to balancing the performance aspects of RAP and RAS is to combine the recycled components with virgin aggregate and virgin asphalt binder. This approach can be effective when the recycled content in the mix is relatively low, and has a tendency to work more effectively with soft virgin binders. A 2020 study found that the addition of 5% RAS to a mix with a soft, low-grade virgin binder significantly increased the mix's rutting resistance while maintaining adequate fatigue cracking resistance.
In mixes with higher recycled content, the addition of virgin binder becomes less effective, and rejuvenators may be used. Rejuvenators are additives that restore the physical and chemical properties of the aged binder. When conventional mixing methods are used in asphalt plants, the upper limit for RAP content before rejuvenators become necessary has been estimated at 50%. Research has demonstrated that the use of rejuvenators at optimal doses can allow for mixes with 100% recycled components to meet the performance requirements of conventional asphalt concrete.
Other recycled materials in asphalt concrete
Beyond RAP and RAS, a range of waste materials can be re-used in place of virgin aggregate, or as rejuvenators. Crumb rubber, generated from recycled tires, has been demonstrated to improve the fatigue resistance and flexural strength of asphalt mixes that contain RAP. In California, legislative mandates require the Department of Transportation to incorporate crumb rubber into asphalt paving materials. Other recycled materials that are actively included in asphalt concrete mixes across the United States include steel slag, blast furnace slag, and cellulose fibers.
Further research has been conducted to discover new forms of waste that may be recycled into asphalt mixes. A 2020 study conducted in Melbourne, Australia presented a range of strategies for incorporating waste materials into asphalt concrete. The strategies presented in the study include the use of plastics, particularly high-density polyethylene, in asphalt binders, and the use of glass, brick, ceramic, and marble quarry waste in place of traditional aggregate.
Rejuvenators may also be produced from recycled materials, including waste engine oil, waste vegetable oil, and waste vegetable grease.
Recently, discarded face masks have been incorporated into stone mastic asphalt.
See also
References
Building materials
Concrete
Road construction
Pavements
According to traditional Chinese uranography, the modern constellation Corona Australis is located within the northern quadrant of the sky, which is symbolized as The Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
The name of the Western constellation in modern Chinese is 南冕座 (nán miǎn zuò), meaning "the southern crown constellation".
Stars
The map of the Chinese constellations in the Corona Australis area consists of:
See also
Traditional Chinese star names
Chinese constellations
References
External links
Corona Australis – Chinese associations
Hong Kong Space Museum research resources
English translations of Chinese star zones, asterisms and star names
Literature on celestial phenomena
National Museum of Natural Science (Taiwan) astronomy education network
Ancient Chinese astronomy
The star system of ancient China
Astronomy in China
Corona Australis
In physics, total internal reflection (TIR) is the phenomenon in which waves arriving at the interface (boundary) from one medium to another (e.g., from water to air) are not refracted into the second ("external") medium, but completely reflected back into the first ("internal") medium. It occurs when the second medium has a higher wave speed (i.e., lower refractive index) than the first, and the waves are incident at a sufficiently oblique angle on the interface. For example, the water-to-air surface in a typical fish tank, when viewed obliquely from below, reflects the underwater scene like a mirror with no loss of brightness (Fig.1).
TIR occurs not only with electromagnetic waves such as light and microwaves, but also with other types of waves, including sound and water waves. If the waves are capable of forming a narrow beam (Fig.2), the reflection tends to be described in terms of "rays" rather than waves; in a medium whose properties are independent of direction, such as air, water or glass, the "rays" are perpendicular to the associated wavefronts. Total internal reflection occurs when the critical angle is exceeded.
Refraction is generally accompanied by partial reflection. When waves are refracted from a medium of lower propagation speed (higher refractive index) to a medium of higher propagation speed (lower refractive index)—e.g., from water to air—the angle of refraction (between the outgoing ray and the surface normal) is greater than the angle of incidence (between the incoming ray and the normal). As the angle of incidence approaches a certain threshold, called the critical angle, the angle of refraction approaches 90°, at which the refracted ray becomes parallel to the boundary surface. As the angle of incidence increases beyond the critical angle, the conditions of refraction can no longer be satisfied, so there is no refracted ray, and the partial reflection becomes total. For visible light, the critical angle is about 49° for incidence from water to air, and about 42° for incidence from common glass to air.
Details of the mechanism of TIR give rise to more subtle phenomena. While total reflection, by definition, involves no continuing flow of power across the interface between the two media, the external medium carries a so-called evanescent wave, which travels along the interface with an amplitude that falls off exponentially with distance from the interface. The "total" reflection is indeed total if the external medium is lossless (perfectly transparent), continuous, and of infinite extent, but can be conspicuously less than total if the evanescent wave is absorbed by a lossy external medium ("attenuated total reflectance"), or diverted by the outer boundary of the external medium or by objects embedded in that medium ("frustrated" TIR). Unlike partial reflection between transparent media, total internal reflection is accompanied by a non-trivial phase shift (not just zero or 180°) for each component of polarization (perpendicular or parallel to the plane of incidence), and the shifts vary with the angle of incidence. The explanation of this effect by Augustin-Jean Fresnel, in 1823, added to the evidence in favor of the wave theory of light.
The phase shifts are used by Fresnel's invention, the Fresnel rhomb, to modify polarization. The efficiency of the total internal reflection is exploited by optical fibers (used in telecommunications cables and in image-forming fiberscopes), and by reflective prisms, such as image-erecting Porro/roof prisms for monoculars and binoculars.
Optical description
Although total internal reflection can occur with any kind of wave that can be said to have oblique incidence, including (e.g.) microwaves and sound waves, it is most familiar in the case of light waves.
Total internal reflection of light can be demonstrated using a semicircular-cylindrical block of common glass or acrylic glass. In Fig.3, a "ray box" projects a narrow beam of light (a "ray") radially inward. The semicircular cross-section of the glass allows the incoming ray to remain perpendicular to the curved portion of the air/glass surface, and hence to continue in a straight line towards the flat part of the surface, although its angle with the flat part varies.
Where the ray meets the flat glass-to-air interface, the angle between the ray and the normal (perpendicular) to the interface is called the angle of incidence. If this angle is sufficiently small, the ray is partly reflected but mostly transmitted, and the transmitted portion is refracted away from the normal, so that the angle of refraction (between the refracted ray and the normal to the interface) is greater than the angle of incidence. For the moment, let us call the angle of incidence θ and the angle of refraction θt (where t is for transmitted, reserving r for reflected). As θ increases and approaches a certain "critical angle", denoted by θc (or sometimes θcr), the angle of refraction approaches 90° (that is, the refracted ray approaches a tangent to the interface), and the refracted ray becomes fainter while the reflected ray becomes brighter. As θ increases beyond θc, the refracted ray disappears and only the reflected ray remains, so that all of the energy of the incident ray is reflected; this is total internal reflection (TIR). In brief:
If θ < θc, the incident ray is split, being partly reflected and partly refracted;
If θ > θc, the incident ray suffers total internal reflection (TIR); none of it is transmitted.
Critical angle
The critical angle is the smallest angle of incidence that yields total reflection, or equivalently the largest angle for which a refracted ray exists. For light waves incident from an "internal" medium with a single refractive index $n_1$, to an "external" medium with a single refractive index $n_2$, the critical angle is given by $\theta_c = \arcsin(n_2/n_1)$ and is defined if $n_2 \leq n_1$. For some other types of waves, it is more convenient to think in terms of propagation velocities rather than refractive indices. The explanation of the critical angle in terms of velocities is more general and will therefore be discussed first.
When a wavefront is refracted from one medium to another, the incident (incoming) and refracted (outgoing) portions of the wavefront meet at a common line on the refracting surface (interface). Let this line, denoted by L, move at velocity $u$ across the surface, where $u$ is measured normal to L (Fig.4). Let the incident and refracted wavefronts propagate with normal velocities $v_1$ and $v_2$ respectively, and let them make the dihedral angles θ1 and θ2 respectively with the interface. From the geometry, $v_1$ is the component of $u$ in the direction normal to the incident wave, so that $v_1 = u\sin\theta_1$. Similarly, $v_2 = u\sin\theta_2$. Solving each equation for $1/u$ and equating the results, we obtain the general law of refraction for waves:

$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{v_1}{v_2}\,. \tag{1}$$
But the dihedral angle between two planes is also the angle between their normals. So θ1 is the angle between the normal to the incident wavefront and the normal to the interface, while θ2 is the angle between the normal to the refracted wavefront and the normal to the interface; and Eq. (1) tells us that the sines of these angles are in the same ratio as the respective velocities.
This result has the form of "Snell's law", except that we have not yet said that the ratio of velocities is constant, nor identified θ1 and θ2 with the angles of incidence and refraction (called θi and θt above). However, if we now suppose that the properties of the media are isotropic (independent of direction), two further conclusions follow: first, the two velocities, and hence their ratio, are independent of their directions; and second, the wave-normal directions coincide with the ray directions, so that θ1 and θ2 coincide with the angles of incidence and refraction as defined above.
Obviously the angle of refraction cannot exceed 90°. In the limiting case, we put $\theta_2 = 90°$ and $\theta_1 = \theta_c$ in Eq. (1), and solve for the critical angle:

$$\theta_c = \arcsin(v_1/v_2)\,. \tag{2}$$
In deriving this result, we retain the assumption of isotropic media in order to identify θ1 and θ2 with the angles of incidence and refraction.
For electromagnetic waves, and especially for light, it is customary to express the above results in terms of refractive indices. The refractive index of a medium with normal velocity $v$ is defined as $n = c/v$, where c is the speed of light in vacuum. Hence $v_1 = c/n_1$. Similarly, $v_2 = c/n_2$. Making these substitutions in Eqs. (1) and (2), we obtain

$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{n_2}{n_1} \tag{3}$$

and

$$\theta_c = \arcsin(n_2/n_1)\,. \tag{4}$$
Eq. (3) is the law of refraction for general media, in terms of refractive indices, provided that θ1 and θ2 are taken as the dihedral angles; but if the media are isotropic, then $n_1$ and $n_2$ become independent of direction, while θ1 and θ2 may be taken as the angles of incidence and refraction for the rays, and Eq. (4) follows. So, for isotropic media, Eqs. (3) and (4) together describe the behavior in Fig.5.
According to Eq. (4), for incidence from water ($n_1 \approx 1.333$) to air ($n_2 \approx 1$), we have $\theta_c \approx 48.6°$, whereas for incidence from common or acrylic glass ($n_1 \approx 1.50$) to air, we have $\theta_c \approx 41.8°$.
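As a quick numerical check of Eq. (4), the following sketch evaluates the critical angle for the indices quoted above (and for diamond, discussed later under gemstone cuts).

```python
# Critical angle via Eq. (4): theta_c = arcsin(n2 / n1), defined for n2 <= n1.
import math

def critical_angle_deg(n1: float, n2: float = 1.0) -> float:
    """Critical angle in degrees for incidence from index n1 to index n2."""
    return math.degrees(math.asin(n2 / n1))

print(f"water to air:   {critical_angle_deg(1.333):.1f} deg")  # ~48.6
print(f"glass to air:   {critical_angle_deg(1.50):.1f} deg")   # ~41.8
print(f"diamond to air: {critical_angle_deg(2.42):.1f} deg")   # ~24.4
```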
The arcsin function yielding θc is defined only if $n_2 \leq n_1$ ($v_2 \geq v_1$). Hence, for isotropic media, total internal reflection cannot occur if the second medium has a higher refractive index (lower normal velocity) than the first. For example, there cannot be TIR for incidence from air to water; rather, the critical angle for incidence from water to air is the angle of refraction at grazing incidence from air to water (Fig.6).
The medium with the higher refractive index is commonly described as optically denser, and the one with the lower refractive index as optically rarer. Hence it is said that total internal reflection is possible for "dense-to-rare" incidence, but not for "rare-to-dense" incidence.
Everyday examples
When standing beside an aquarium with one's eyes below the water level, one is likely to see fish or submerged objects reflected in the water-air surface (Fig.1). The brightness of the reflected image – just as bright as the "direct" view – can be startling.
A similar effect can be observed by opening one's eyes while swimming just below the water's surface. If the water is calm, the surface outside the critical angle (measured from the vertical) appears mirror-like, reflecting objects below. The region above the water cannot be seen except overhead, where the hemispherical field of view is compressed into a conical field known as Snell's window, whose angular diameter is twice the critical angle (cf. Fig.6). The field of view above the water is theoretically 180° across, but seems less because as we look closer to the horizon, the vertical dimension is more strongly compressed by the refraction; e.g., by Eq. (3), for air-to-water incident angles of 90°, 80°, and 70°, the corresponding angles of refraction are 48.6° (θcr in Fig.6), 47.6°, and 44.8°, indicating that the image of a point 20° above the horizon is 3.8° from the edge of Snell's window while the image of a point 10° above the horizon is only 1° from the edge.
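The compression figures quoted above can be reproduced directly from Eq. (3); the sketch below computes, for a few altitudes above the horizon, how far inside the edge of Snell's window the image appears.

```python
# Reproducing the Snell's-window compression quoted above, via Eq. (3):
# sin(theta_t) = sin(theta_i) / n for air-to-water incidence.
import math

N_WATER = 1.333

def refraction_angle_deg(incidence_deg: float) -> float:
    """Angle of refraction in water for a ray arriving from air."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / N_WATER))

edge = refraction_angle_deg(90.0)   # edge of Snell's window, ~48.6 deg
for altitude in (10.0, 20.0):       # source altitude above the horizon, deg
    image = refraction_angle_deg(90.0 - altitude)
    print(f"{altitude:4.0f} deg above horizon -> "
          f"{edge - image:.1f} deg inside the window edge")
```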
Fig.7, for example, is a photograph taken near the bottom of the shallow end of a swimming pool. What looks like a broad horizontal stripe on the right-hand wall consists of the lower edges of a row of orange tiles, and their reflections; this marks the water level, which can then be traced across the other wall. The swimmer has disturbed the surface above her, scrambling the lower half of her reflection, and distorting the reflection of the ladder (to the right). But most of the surface is still calm, giving a clear reflection of the tiled bottom of the pool. The space above the water is not visible except at the top of the frame, where the handles of the ladder are just discernible above the edge of Snell's window – within which the reflection of the bottom of the pool is only partial, but still noticeable in the photograph. One can even discern the color-fringing of the edge of Snell's window, due to variation of the refractive index, hence of the critical angle, with wavelength (see Dispersion).
The critical angle influences the angles at which gemstones are cut. The round "brilliant" cut, for example, is designed to refract light incident on the front facets, reflect it twice by TIR off the back facets, and transmit it out again through the front facets, so that the stone looks bright. Diamond (Fig.8) is especially suitable for this treatment, because its high refractive index (about 2.42) and consequently small critical angle (about 24.5°) yield the desired behavior over a wide range of viewing angles. Cheaper materials that are similarly amenable to this treatment include cubic zirconia (index≈2.15) and moissanite (non-isotropic, hence doubly refractive, with an index ranging from about 2.65 to 2.69, depending on direction and polarization); both of these are therefore popular as diamond simulants.
Evanescent wave
Mathematically, waves are described in terms of time-varying fields, a "field" being a function of location in space. A propagating wave requires an "effort" field and a "flow" field, the latter being a vector (if we are working in two or three dimensions). The product of effort and flow is related to power (see System equivalence). For example, for sound waves in a non-viscous fluid, we might take the effort field as the pressure (a scalar), and the flow field as the fluid velocity (a vector). The product of these two is intensity (power per unit area). For electromagnetic waves, we shall take the effort field as the electric field $\mathbf{E}$ and the flow field as the magnetizing field $\mathbf{H}$. Both of these are vectors, and their vector product is again the intensity (see Poynting vector).
When a wave in (say) medium 1 is reflected off the interface between medium 1 and medium 2, the flow field in medium 1 is the vector sum of the flow fields due to the incident and reflected waves. If the reflection is oblique, the incident and reflected fields are not in opposite directions and therefore cannot cancel out at the interface; even if the reflection is total, either the normal component or the tangential component of the combined field (as a function of location and time) must be non-zero adjacent to the interface. Furthermore, the physical laws governing the fields will generally imply that one of the two components is continuous across the interface (that is, it does not suddenly change as we cross the interface); for example, for electromagnetic waves, one of the interface conditions is that the tangential component of $\mathbf{H}$ is continuous if there is no surface current. Hence, even if the reflection is total, there must be some penetration of the flow field into medium 2; and this, in combination with the laws relating the effort and flow fields, implies that there will also be some penetration of the effort field. The same continuity condition implies that the variation ("waviness") of the field in medium 2 will be synchronized with that of the incident and reflected waves in medium 1.
But, if the reflection is total, the spatial penetration of the fields into medium 2 must be limited somehow, or else the total extent and hence the total energy of those fields would continue to increase, draining power from medium 1. Total reflection of a continuing wavetrain permits some energy to be stored in medium 2, but does not permit a continuing transfer of power from medium 1 to medium 2.
Thus, using mostly qualitative reasoning, we can conclude that total internal reflection must be accompanied by a wavelike field in the "external" medium, traveling along the interface in synchronism with the incident and reflected waves, but with some sort of limited spatial penetration into the "external" medium; such a field may be called an evanescent wave.
Fig.9 shows the basic idea. The incident wave is assumed to be plane and sinusoidal. The reflected wave, for simplicity, is not shown. The evanescent wave travels to the right in lock-step with the incident and reflected waves, but its amplitude falls off with increasing distance from the interface.
(Two features of the evanescent wave in Fig.9 are to be explained later: first, that the evanescent wave crests are perpendicular to the interface; and second, that the evanescent wave is slightly ahead of the incident wave.)
Frustrated total internal reflection (FTIR)
If the internal reflection is to be total, there must be no diversion of the evanescent wave. Suppose, for example, that electromagnetic waves incident from glass (with a higher refractive index) to air (with a lower refractive index) at a certain angle of incidence are subject to TIR. And suppose that we have a third medium (often identical to the first) whose refractive index is sufficiently high that, if the third medium were to replace the second, we would get a standard transmitted wavetrain for the same angle of incidence. Then, if the third medium is brought within a distance of a few wavelengths from the surface of the first medium, where the evanescent wave has significant amplitude in the second medium, then the evanescent wave is effectively refracted into the third medium, giving non-zero transmission into the third medium, and therefore less than total reflection back into the first medium. As the amplitude of the evanescent wave decays across the air gap, the transmitted waves are attenuated, so that there is less transmission, and therefore more reflection, than there would be with no gap; but as long as there is some transmission, the reflection is less than total. This phenomenon is called frustrated total internal reflection (where "frustrated" negates "total"), abbreviated "frustrated TIR" or "FTIR".
Frustrated TIR can be observed by looking into the top of a glass of water held in one's hand (Fig.10). If the glass is held loosely, contact may not be sufficiently close and widespread to produce a noticeable effect. But if it is held more tightly, the ridges of one's fingerprints interact strongly with the evanescent waves, allowing the ridges to be seen through the otherwise totally reflecting glass-air surface.
The same effect can be demonstrated with microwaves, using paraffin wax as the "internal" medium (where the incident and reflected waves exist). In this case the permitted gap width might be (e.g.) 1 cm or several cm, which is easily observable and adjustable.
The term frustrated TIR also applies to the case in which the evanescent wave is scattered by an object sufficiently close to the reflecting interface. This effect, together with the strong dependence of the amount of scattered light on the distance from the interface, is exploited in total internal reflection microscopy.
The mechanism of FTIR is called evanescent-wave coupling, and is a good analog to visualize quantum tunneling. Due to the wave nature of matter, an electron has a non-zero probability of "tunneling" through a barrier, even if classical mechanics would say that its energy is insufficient. Similarly, due to the wave nature of light, a photon has a non-zero probability of crossing a gap, even if ray optics would say that its approach is too oblique.
Another reason why internal reflection may be less than total, even beyond the critical angle, is that the external medium may be "lossy" (less than perfectly transparent), in which case the external medium will absorb energy from the evanescent wave, so that the maintenance of the evanescent wave will draw power from the incident wave. The consequent less-than-total reflection is called attenuated total reflectance (ATR). This effect, and especially the frequency-dependence of the absorption, can be used to study the composition of an unknown external medium.
Derivation of evanescent wave
In a uniform plane sinusoidal electromagnetic wave, the electric field has the form

$$\mathbf{E_k}e^{i(\mathbf{k\cdot r}-\omega t)}, \tag{5}$$

where $\mathbf{E_k}$ is the (constant) complex amplitude vector, $i$ is the imaginary unit, $\mathbf{k}$ is the wave vector (whose magnitude is the angular wavenumber), $\mathbf{r}$ is the position vector, ω is the angular frequency, $t$ is time, and it is understood that the real part of the expression is the physical field. The magnetizing field $\mathbf{H}$ has the same form with the same $\mathbf{k}$ and ω. The value of the expression is unchanged if the position $\mathbf{r}$ varies in a direction normal to $\mathbf{k}$; hence $\mathbf{k}$ is normal to the wavefronts.
If ℓ is the component of $\mathbf{r}$ in the direction of $\mathbf{k}$, the field (5) can be written $\mathbf{E_k}e^{i(k\ell-\omega t)}$. If the argument of the exponential is to be constant, ℓ must increase at the velocity $\omega/k$, known as the phase velocity. This in turn is equal to $c/n$, where $c$ is the phase velocity in the reference medium (taken as vacuum), and $n$ is the local refractive index w.r.t. the reference medium. Solving for $k$ gives $k = n\omega/c$, i.e.

$$k = nk_0\,, \tag{6}$$

where $k_0 = \omega/c$ is the wavenumber in vacuum.
From (5), the electric field in the "external" medium has the form

$$\mathbf{E_t}e^{i(\mathbf{k_t\cdot r}-\omega t)}, \tag{7}$$

where $\mathbf{k_t}$ is the wave vector for the transmitted wave (we assume isotropic media, but the transmitted wave is not yet assumed to be evanescent).
In Cartesian coordinates $(x, y, z)$, let the region $y < 0$ have refractive index $n_1$, and let the region $y > 0$ have refractive index $n_2$. Then the $xz$ plane is the interface, and the $y$ axis is normal to the interface (Fig.11). Let $\mathbf{i}$ and $\mathbf{j}$ be the unit vectors in the $x$ and $y$ directions respectively. Let the plane of incidence (containing the incident wave-normal and the normal to the interface) be the $xy$ plane (the plane of the page), with the angle of incidence θi measured from $\mathbf{j}$ towards $\mathbf{i}$. Let the angle of refraction, measured in the same sense, be θt ("t" for transmitted, reserving "r" for reflected).
From (6), the transmitted wave vector $\mathbf{k_t}$ has magnitude $n_2 k_0$. Hence, from the geometry,

$$\mathbf{k_t} = n_2 k_0(\mathbf{i}\sin\theta_t + \mathbf{j}\cos\theta_t) = k_0(\mathbf{i}\,n_1\sin\theta_i + \mathbf{j}\,n_2\cos\theta_t)\,,$$

where the last step uses Snell's law. Taking the dot product with the position vector, we get

$$\mathbf{k_t\cdot r} = k_0(n_1 x\sin\theta_i + n_2 y\cos\theta_t)\,,$$

so that Eq. (7) becomes

$$\mathbf{E_t}e^{i\left(k_0(n_1 x\sin\theta_i + n_2 y\cos\theta_t) - \omega t\right)}. \tag{8}$$
In the case of TIR, the angle θt does not exist in the usual sense. But we can still interpret (8) for the transmitted (evanescent) wave by allowing $\cos\theta_t$ to be complex. This becomes necessary when we write $\cos\theta_t$ in terms of $\sin\theta_t$, and thence in terms of $\sin\theta_i$ using Snell's law:

$$\cos\theta_t = \sqrt{1 - \sin^2\theta_t} = \sqrt{1 - (n_1/n_2)^2\sin^2\theta_i}\,.$$

For θi greater than the critical angle, the value under the square-root symbol is negative, so that

$$\cos\theta_t = \pm i\sqrt{(n_1/n_2)^2\sin^2\theta_i - 1}\,. \tag{9}$$
To determine which sign is applicable, we substitute (9) into (8), obtaining

$$\mathbf{E_t}e^{\mp k_0 y\sqrt{n_1^2\sin^2\theta_i - n_2^2}}\,e^{i(n_1 k_0 x\sin\theta_i - \omega t)}, \tag{10}$$

where the undetermined sign is the opposite of that in (9). For an evanescent transmitted wave (that is, one whose amplitude decays as $y$ increases), the undetermined sign in (10) must be minus, so the undetermined sign in (9) must be plus.
With the correct sign, the result (10) can be abbreviated

$$\mathbf{E_t}e^{-\kappa y}e^{i(k_x x - \omega t)}, \tag{11}$$

where

$$\kappa = k_0\sqrt{n_1^2\sin^2\theta_i - n_2^2}\,,\qquad k_x = n_1 k_0\sin\theta_i\,, \tag{12}$$

and $k_0$ is the wavenumber in vacuum, i.e. $\omega/c$.
So the evanescent wave is a plane sinewave traveling in the $x$ direction, with an amplitude that decays exponentially in the $y$ direction (Fig.9). It is evident that the energy stored in this wave likewise travels in the $x$ direction and does not cross the interface. Hence the Poynting vector generally has a component in the $x$ direction, but its $y$ component averages to zero (although its instantaneous $y$ component is not identically zero).
Eq. (11) indicates that the amplitude of the evanescent wave falls off by a factor $e$ as the coordinate $y$ (measured from the interface) increases by the distance $d = 1/\kappa$, commonly called the "penetration depth" of the evanescent wave. Taking reciprocals of the first equation of (12), we find that the penetration depth is

$$d = \frac{\lambda_0}{2\pi\sqrt{n_1^2\sin^2\theta_i - n_2^2}}\,,$$

where λ0 is the wavelength in vacuum, i.e. $\lambda_0 = 2\pi c/\omega$. Dividing the numerator and denominator by $n_2$ yields

$$d = \frac{\lambda_2}{2\pi\sqrt{(n_1/n_2)^2\sin^2\theta_i - 1}}\,,$$

where $\lambda_2 = \lambda_0/n_2$ is the wavelength in the second (external) medium. Hence we can plot $d$ in units of λ2 as a function of the angle of incidence, for various values of $n_1/n_2$ (Fig.12). As θi decreases towards the critical angle, the denominator approaches zero, so that $d$ increases without limit, as is to be expected, because as soon as θi is less than critical, uniform plane waves are permitted in the external medium. As θi approaches 90° (grazing incidence), $d$ approaches a minimum

$$d_{\min} = \frac{\lambda_2}{2\pi\sqrt{(n_1/n_2)^2 - 1}}\,.$$
For incidence from water to air, or common glass to air, $d_{\min}$ is not much different from λ2/(2π). But $d$ is larger at smaller angles of incidence (Fig.12), and the amplitude may still be significant at distances of several times $d$; for example, because $e^{-4.6}$ is just greater than 0.01, the evanescent wave amplitude within a distance $4.6d$ of the interface is at least 1% of its value at the interface. Hence, speaking loosely, we tend to say that the evanescent wave amplitude is significant within "a few wavelengths" of the interface.
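A small numerical sketch of the penetration depth follows, using Eq. (12) for water-to-air incidence; the 589 nm vacuum wavelength is an arbitrary illustrative choice.

```python
# Penetration depth d = 1/kappa of the evanescent wave, per Eq. (12).
import math

def penetration_depth(wavelength_vac: float, n1: float, n2: float,
                      incidence_deg: float) -> float:
    """Penetration depth (same units as wavelength_vac), valid beyond the critical angle."""
    s = n1 * math.sin(math.radians(incidence_deg))
    kappa = 2.0 * math.pi / wavelength_vac * math.sqrt(s * s - n2 * n2)
    return 1.0 / kappa

# Water (n1 = 1.333) to air (n2 = 1) at an assumed 589 nm vacuum wavelength;
# the critical angle is ~48.6 deg, and d grows without limit as it is approached.
for angle in (50.0, 60.0, 85.0):
    d_nm = penetration_depth(589e-9, 1.333, 1.0, angle) * 1e9
    print(f"{angle:4.1f} deg -> d = {d_nm:.0f} nm")  # ~454, ~163, ~107 nm
```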
Phase shifts
Between 1817 and 1823, Augustin-Jean Fresnel discovered that total internal reflection is accompanied by a non-trivial phase shift (that is, a phase shift that is not restricted to 0° or 180°), as the Fresnel reflection coefficient acquires a non-zero imaginary part. We shall now explain this effect for electromagnetic waves in the case of linear, homogeneous, isotropic, non-magnetic media. The phase shift turns out to be an advance, which grows as the incidence angle increases beyond the critical angle, but which depends on the polarization of the incident wave.
In equations (5), (7), (8), (10), and (11), we advance the phase by the angle ϕ if we replace ωt by ωt + ϕ (that is, if we replace −ωt by −(ωt + ϕ)), with the result that the (complex) field is multiplied by $e^{-i\phi}$. So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when (e.g.) the field (5) is factored as $\mathbf{E_k}e^{i\mathbf{k\cdot r}}e^{-i\omega t}$, where the last factor contains the time dependence.
To represent the polarization of the incident, reflected, or transmitted wave, the electric field adjacent to an interface can be resolved into two perpendicular components, known as the s and p components, which are parallel to the surface and the plane of incidence respectively; in other words, the s and p components are respectively perpendicular and parallel to the plane of incidence.
For each component of polarization, the incident, reflected, or transmitted electric field ($\mathbf{E}$ in Eq. (5)) has a certain direction and can be represented by its (complex) scalar component in that direction. The reflection or transmission coefficient can then be defined as a ratio of complex components at the same point, or at infinitesimally separated points on opposite sides of the interface. But, in order to fix the signs of the coefficients, we must choose positive senses for the "directions". For the s components, the obvious choice is to say that the positive directions of the incident, reflected, and transmitted fields are all the same (e.g., the $z$ direction in Fig.11). For the p components, this article adopts the convention that the positive directions of the incident, reflected, and transmitted fields are inclined towards the same medium (that is, towards the same side of the interface, e.g. like the red arrows in Fig.11). But the reader should be warned that some books use a different convention for the p components, causing a different sign in the resulting formula for the reflection coefficient.
For the s polarization, let the reflection and transmission coefficients be $r_s$ and $t_s$ respectively. For the p polarization, let the corresponding coefficients be $r_p$ and $t_p$. Then, for linear, homogeneous, isotropic, non-magnetic media, the coefficients are given by

$$r_s = \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t} \tag{13}$$

$$t_s = \frac{2n_1\cos\theta_i}{n_1\cos\theta_i + n_2\cos\theta_t} \tag{14}$$

$$r_p = \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t} \tag{15}$$

$$t_p = \frac{2n_1\cos\theta_i}{n_2\cos\theta_i + n_1\cos\theta_t}\,. \tag{16}$$
(For a derivation of the above, see the article on the Fresnel equations.)
Now we suppose that the transmitted wave is evanescent. With the correct sign (+), substituting (9) into (13) gives

$$r_s = \frac{n\cos\theta_i - i\sqrt{n^2\sin^2\theta_i - 1}}{n\cos\theta_i + i\sqrt{n^2\sin^2\theta_i - 1}}\,, \tag{17}$$

where

$$n = n_1/n_2\,; \tag{18}$$

that is, $n$ is the index of the "internal" medium relative to the "external" one, or the index of the internal medium if the external one is vacuum. So the magnitude of $r_s$ is 1, and the argument of $r_s$ is

$$-2\arctan\frac{\sqrt{n^2\sin^2\theta_i - 1}}{n\cos\theta_i}\,,$$
which gives a phase advance of

$$\delta_s = 2\arctan\frac{\sqrt{n^2\sin^2\theta_i - 1}}{n\cos\theta_i}\,. \tag{19}$$
Making the same substitution in (14), we find that $t_s$ has the same denominator as $r_s$ with a positive real numerator (instead of a complex conjugate numerator) and therefore has half the argument of $r_s$, so that the phase advance of the evanescent wave is half that of the reflected wave.
With the same choice of sign, substituting (9) into (15) gives

$$r_p = \frac{\cos\theta_i - in\sqrt{n^2\sin^2\theta_i - 1}}{\cos\theta_i + in\sqrt{n^2\sin^2\theta_i - 1}}\,, \tag{20}$$

whose magnitude is 1, and whose argument is

$$-2\arctan\frac{n\sqrt{n^2\sin^2\theta_i - 1}}{\cos\theta_i}\,,$$
which gives a phase advance of

$$\delta_p = 2\arctan\frac{n\sqrt{n^2\sin^2\theta_i - 1}}{\cos\theta_i}\,. \tag{21}$$
Making the same substitution in (16), we again find that the phase advance of the evanescent wave is half that of the reflected wave.
Equations (19) and (21) apply when $\theta_c \leq \theta_i < 90°$, where θi is the angle of incidence, and θc is the critical angle $\arcsin(1/n)$. These equations show that
each phase advance is zero at the critical angle (for which the numerator is zero);
each phase advance approaches 180° as $\theta_i \to 90°$; and
$\delta_p > \delta_s$ at intermediate values of θi (because the factor $n$ is in the numerator of (21) and the denominator of (19)).
For $\theta_i \leq \theta_c$, the reflection coefficients are given by equations (13) and (15) and are real, so that the phase shift is either 0° (if the coefficient is positive) or 180° (if the coefficient is negative).
In the formula for rs, if we put n1 = c sinθt and n2 = c sinθi (which is Snell's law, with c a common constant of proportionality) and expand the products, we obtain

rs = sin(θt − θi) / sin(θt + θi),

which is positive for all angles of incidence with a transmitted ray (since θt > θi for dense-to-rare incidence), giving a phase shift of zero.
If we do likewise with the formula for rp, the result is easily shown to be equivalent to

rp = tan(θi − θt) / tan(θi + θt),

which is negative for small angles (that is, near normal incidence), but changes sign at Brewster's angle, where θi and θt are complementary. Thus the phase shift is 180° for small θi but switches to 0° at Brewster's angle. Combining the complementarity with Snell's law yields θi = arctan(1/n) as Brewster's angle for dense-to-rare incidence.
(The results rs = sin(θt − θi)/sin(θt + θi) and rp = tan(θi − θt)/tan(θi + θt) are known as Fresnel's sine law and Fresnel's tangent law. Both reduce to 0/0 at normal incidence, but yield the correct results in the limit as θi → 0. That they have opposite signs as we approach normal incidence is an obvious disadvantage of the sign convention used in this article; the corresponding advantage is that they have the same signs at grazing incidence.)
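Both laws can be checked numerically against the rs and rp formulas quoted earlier. A minimal sketch under the sign convention used in this article (function names and sample values are illustrative, not from the source):

```python
import math

def reflection_coefficients(n1, n2, theta_i_deg):
    """rs and rp (sign convention as in the text) for a real transmitted
    angle, checked against Fresnel's sine and tangent laws."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)  # Snell's law (partial-reflection regime)
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    assert abs(rs - math.sin(tt - ti) / math.sin(tt + ti)) < 1e-9  # sine law
    assert abs(rp - math.tan(ti - tt) / math.tan(ti + tt)) < 1e-9  # tangent law
    return rs, rp

# Dense-to-rare (glass to air, n = 1.5): rp changes sign at Brewster's
# angle arctan(1/1.5) ~ 33.7 deg; the critical angle is ~41.8 deg.
print(reflection_coefficients(1.5, 1.0, 20.0))  # rp < 0 (phase 180 deg)
print(reflection_coefficients(1.5, 1.0, 40.0))  # rp > 0 (phase 0 deg)
```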
That completes the information needed to plot δs and δp for all angles of incidence. This is done in Fig.13, with δp in red and δs in blue, for three refractive indices. On the angle-of-incidence scale (horizontal axis), Brewster's angle is where δp (red) falls from 180° to 0°, and the critical angle is where both δp and δs (red and blue) start to rise again. To the left of the critical angle is the region of partial reflection, where both reflection coefficients are real (phase 0° or 180°) with magnitudes less than 1. To the right of the critical angle is the region of total reflection, where both reflection coefficients are complex with magnitudes equal to 1. In that region, the black curves show the phase advance of the p component relative to the s component: δ = δp − δs.
It can be seen that a refractive index of 1.45 is not enough to give a 45° phase difference, whereas a refractive index of 1.5 is enough (by a slim margin) to give a 45° phase difference at two angles of incidence: about 50.2° and 53.3°.
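These figures can be reproduced by scanning the total-internal-reflection range for crossings of the 45° relative phase. A minimal, self-contained sketch (the scanning approach and all names are ours; a proper root-finder would also work):

```python
import math

def rel_phase(n, theta_deg):
    """Relative phase advance (degrees) of p vs s under TIR, relative index n."""
    th = math.radians(theta_deg)
    root = math.sqrt((n * math.sin(th)) ** 2 - 1)
    return 2 * math.degrees(math.atan(n * root / math.cos(th))
                            - math.atan(root / (n * math.cos(th))))

def crossings(n, target=45.0, step=0.01):
    """Angles (deg) between the critical angle and 90 deg where the
    relative phase crosses the target value."""
    a = math.degrees(math.asin(1.0 / n)) + step
    found, prev = [], rel_phase(n, a)
    while a + step < 90.0:
        a += step
        cur = rel_phase(n, a)
        if (prev - target) * (cur - target) < 0:
            found.append(round(a, 1))
        prev = cur
    return found

print(crossings(1.45))  # [] -- the peak relative phase stays below 45 degrees
print(crossings(1.5))   # approximately [50.2, 53.3]
```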
This 45° relative shift is employed in Fresnel's invention, now known as the Fresnel rhomb, in which the angles of incidence are chosen such that the two internal reflections cause a total relative phase shift of 90° between the two polarizations of an incident wave. This device performs the same function as a birefringent quarter-wave plate, but is more achromatic (that is, the phase shift of the rhomb is less sensitive to wavelength). Either device may be used, for instance, to transform linear polarization to circular polarization (which Fresnel also discovered) and conversely.
In Fig.13, δ is computed by a final subtraction; but there are other ways of expressing it. Fresnel himself, in 1823, gave a formula for cos δ. Born and Wolf (1970, p.50) derive an expression for tan(δ/2), and find its maximum analytically.
For TIR of a beam with finite width, the variation in the phase shift with the angle of incidence gives rise to the Goos–Hänchen effect, which is a lateral shift of the reflected beam within the plane of incidence. This effect applies to linear polarization in the s or p direction. The Imbert–Fedorov effect is an analogous effect for circular or elliptical polarization and produces a shift perpendicular to the plane of incidence.
Applications
Optical fibers exploit total internal reflection to carry signals over long distances with little attenuation. They are used in telecommunication cables, and in image-forming fiberscopes such as colonoscopes.
In the catadioptric Fresnel lens, invented by Augustin-Jean Fresnel for use in lighthouses, the outer prisms use TIR to deflect light from the lamp through a greater angle than would be possible with purely refractive prisms, but with less absorption of light (and less risk of tarnishing) than with conventional mirrors.
Other reflecting prisms that use TIR include the following (with some overlap between the categories):
Image-erecting prisms for binoculars and spotting scopes include paired 45°-90°-45° Porro prisms (Fig.14), the Porro–Abbe prism, the inline Koenig and Abbe–Koenig prisms, and the compact inline Schmidt–Pechan prism. (The last consists of two components, of which one is a kind of Bauernfeind prism, which requires a reflective coating on one of its two reflecting faces, due to a sub-critical angle of incidence.) These prisms have the additional function of folding the optical path from the objective lens to the prime focus, reducing the overall length for a given primary focal length.
A prismatic star diagonal for an astronomical telescope may consist of a single Porro prism (configured for a single reflection, giving a mirror-reversed image) or an Amici roof prism (which gives a non-reversed image).
Roof prisms use TIR at two faces meeting at a sharp 90° angle. This category includes the Koenig, Abbe–Koenig, Schmidt–Pechan, and Amici types (already mentioned), and the roof pentaprism used in SLR cameras; the last of these requires a reflective coating on one face.
A prismatic corner reflector uses three total internal reflections to reverse the direction of incoming light.
The Dove prism gives an inline view with mirror-reversal.
Polarizing prisms: Although the Fresnel rhomb, which converts between linear and elliptical polarization, is not birefringent (doubly refractive), there are other kinds of prisms that combine birefringence with TIR in such a way that light of a particular polarization is totally reflected while light of the orthogonal polarization is at least partly transmitted. Examples include the Nicol prism, Glan–Thompson prism, Glan–Foucault prism (or "Foucault prism"), and Glan–Taylor prism.
Refractometers, which measure refractive indices, often use the critical angle.
Rain sensors for automatic windscreen/windshield wipers have been implemented using the principle that total internal reflection will guide an infrared beam from a source to a detector if the outer surface of the windshield is dry, but any water drops on the surface will divert some of the light.
Edge-lit LED panels, used (e.g.) for backlighting of LCD computer monitors, exploit TIR to confine the LED light to the acrylic glass pane, except that some of the light is scattered by etchings on one side of the pane, giving an approximately uniform luminous emittance.
Total internal reflection microscopy (TIRM) uses the evanescent wave to illuminate small objects close to the reflecting interface. The consequent scattering of the evanescent wave (a form of frustrated TIR), makes the objects appear bright when viewed from the "external" side. In the total internal reflection fluorescence microscope (TIRFM), instead of relying on simple scattering, we choose an evanescent wavelength short enough to cause fluorescence (Fig.15). The high sensitivity of the illumination to the distance from the interface allows measurement of extremely small displacements and forces.
A beam-splitter cube uses frustrated TIR to divide the power of the incoming beam between the transmitted and reflected beams. The width of the air gap (or low-refractive-index gap) between the two prisms can be made adjustable, giving higher transmission and lower reflection for a narrower gap, or higher reflection and lower transmission for a wider gap.
Optical modulation can be accomplished by means of frustrated TIR with a rapidly variable gap. As the transmission coefficient is highly sensitive to the gap width (the function being approximately exponential until the gap is almost closed), this technique can achieve a large dynamic range.
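That near-exponential dependence can be illustrated with the standard evanescent decay constant κ = k0√(n1²sin²θi − n2²), where k0 is the vacuum wavenumber. The sketch below keeps only the dominant factor exp(−2κ·gap); the exact frustrated-TIR transmission requires a full multilayer analysis, and the wavelength and indices used here are illustrative assumptions:

```python
import math

def decay_constant(n1, n2, theta_i_deg, wavelength_m):
    """Evanescent decay constant kappa (1/m) for TIR at the given angle."""
    k0 = 2 * math.pi / wavelength_m          # vacuum wavenumber
    th = math.radians(theta_i_deg)
    return k0 * math.sqrt((n1 * math.sin(th)) ** 2 - n2 ** 2)

kappa = decay_constant(1.5, 1.0, 45.0, 633e-9)   # glass/air at a He-Ne wavelength
for gap_nm in (50, 100, 200, 400):
    t = math.exp(-2 * kappa * gap_nm * 1e-9)     # dominant exponential factor only
    print(gap_nm, round(t, 3))   # transmission falls steeply as the gap widens
```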
Optical fingerprinting devices have used frustrated TIR to record images of persons' fingerprints without the use of ink (cf. Fig.11).
Gait analysis can be performed by using frustrated TIR with a high-speed camera, to capture and analyze footprints.
A gonioscope, used in optometry and ophthalmology for the diagnosis of glaucoma, suppresses TIR in order to look into the angle between the iris and the cornea. This view is usually blocked by TIR at the cornea-air interface. The gonioscope replaces the air with a higher-index medium, allowing transmission at oblique incidence, typically followed by reflection in a "mirror", which itself may be implemented using TIR.
Some multi-touch interactive tables and whiteboards utilise FTIR to detect fingers touching the screen. An infrared camera is placed behind the screen surface, which is edge-lit by infrared LEDs; when touching the surface FTIR causes some of the infrared light to escape the screen plane, and the camera sees this as bright areas. Computer vision software is then used to translate this into a series of coordinates and gestures.
History
Discovery
The surprisingly comprehensive and largely correct explanations of the rainbow by Theodoric of Freiberg (written between 1304 and 1310) and Kamāl al-Dīn al-Fārisī (completed by 1309), although sometimes mentioned in connection with total internal reflection (TIR), are of dubious relevance because the internal reflection of sunlight in a spherical raindrop is not total. But, according to Carl Benjamin Boyer, Theodoric's treatise on the rainbow also classified optical phenomena under five causes, the last of which was "a total reflection at the boundary of two transparent media". Theodoric's work was forgotten until it was rediscovered by Giovanni Battista Venturi in 1814.
Theodoric having fallen into obscurity, the discovery of TIR was generally attributed to Johannes Kepler, who published his findings in his Dioptrice in 1611. Although Kepler failed to find the true law of refraction, he showed by experiment that for air-to-glass incidence, the incident and refracted rays rotated in the same sense about the point of incidence, and that as the angle of incidence varied through ±90°, the angle of refraction (as we now call it) varied through ±42°. He was also aware that the incident and refracted rays were interchangeable. But these observations did not cover the case of a ray incident from glass to air at an angle beyond 42°, and Kepler promptly concluded that such a ray could only be reflected.
René Descartes rediscovered the law of refraction and published it in his Dioptrique of 1637. In the same work he mentioned the senses of rotation of the incident and refracted rays and the condition of TIR. But he neglected to discuss the limiting case, and consequently failed to give an expression for the critical angle, although he could easily have done so.
Huygens and Newton: Rival explanations
Christiaan Huygens, in his Treatise on Light (1690), paid much attention to the threshold at which the incident ray is "unable to penetrate into the other transparent substance". Although he gave neither a name nor an algebraic expression for the critical angle, he gave numerical examples for glass-to-air and water-to-air incidence, noted the large change in the angle of refraction for a small change in the angle of incidence near the critical angle, and cited this as the cause of the rapid increase in brightness of the reflected ray as the refracted ray approaches the tangent to the interface. Huygens' insight is confirmed by modern theory: in Eqs.() and () above, there is nothing to say that the reflection coefficients increase exceptionally steeply as θt approaches 90°, except that, according to Snell's law, θt itself is an increasingly steep function of θi.
Huygens offered an explanation of TIR within the same framework as his explanations of the laws of rectilinear propagation, reflection, ordinary refraction, and even the extraordinary refraction of "Iceland crystal" (calcite). That framework rested on two premises: first, every point crossed by a propagating wavefront becomes a source of secondary wavefronts ("Huygens' principle"); and second, given an initial wavefront, any subsequent position of the wavefront is the envelope (common tangent surface) of all the secondary wavefronts emitted from the initial position. All cases of reflection or refraction by a surface are then explained simply by considering the secondary waves emitted from that surface. In the case of refraction from a medium of slower propagation to a medium of faster propagation, there is a certain obliquity of incidence beyond which it is impossible for the secondary wavefronts to form a common tangent in the second medium; this is what we now call the critical angle. As the incident wavefront approaches this critical obliquity, the refracted wavefront becomes concentrated against the refracting surface, augmenting the secondary waves that produce the reflection back into the first medium.
Huygens' system even accommodated partial reflection at the interface between different media, albeit vaguely, by analogy with the laws of collisions between particles of different sizes. However, as long as the wave theory continued to assume longitudinal waves, it had no chance of accommodating polarization, hence no chance of explaining the polarization-dependence of extraordinary refraction, or of the partial reflection coefficient, or of the phase shift in TIR.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would "bend and spread every way" into the shadows. His corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interface. In this model, for dense-to-rare incidence, the force was an attraction back towards the denser medium, and the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back. Newton gave what amounts to a formula for the critical angle, albeit in words: "as the Sines are which measure the Refraction, so is the Sine of Incidence at which the total Reflexion begins, to the Radius of the Circle".
Newton went beyond Huygens in two ways. First, not surprisingly, Newton pointed out the relationship between TIR and dispersion: when a beam of white light approaches a glass-to-air interface at increasing obliquity, the most strongly-refracted rays (violet) are the first to be "taken out" by "total Reflexion", followed by the less-refracted rays. Second, he observed that total reflection could be frustrated (as we now say) by laying together two prisms, one plane and the other slightly convex; and he explained this simply by noting that the corpuscles would be attracted not only to the first prism, but also to the second.
In two other ways, however, Newton's system was less coherent. First, his explanation of partial reflection depended not only on the supposed forces of attraction between corpuscles and media, but also on the more nebulous hypothesis of "Fits of easy Reflexion" and "Fits of easy Transmission". Second, although his corpuscles could conceivably have "sides" or "poles", whose orientations could conceivably determine whether the corpuscles suffered ordinary or extraordinary refraction in "Island-Crystal", his geometric description of the extraordinary refraction was theoretically unsupported and empirically inaccurate.
Laplace, Malus, and attenuated total reflectance (ATR)
William Hyde Wollaston, in the first of a pair of papers read to the Royal Society of London in 1802, reported his invention of a refractometer based on the critical angle of incidence from an internal medium of known "refractive power" (refractive index) to an external medium whose index was to be measured. With this device, Wollaston measured the "refractive powers" of numerous materials, some of which were too opaque to permit direct measurement of an angle of refraction. Translations of his papers were published in France in 1803, and apparently came to the attention of Pierre-Simon Laplace.
According to Laplace's elaboration of Newton's theory of refraction, a corpuscle incident on a plane interface between two homogeneous isotropic media was subject to a force field that was symmetrical about the interface. If both media were transparent, total reflection would occur if the corpuscle were turned back before it exited the field in the second medium. But if the second medium were opaque, reflection would not be total unless the corpuscle were turned back before it left the first medium; this required a larger critical angle than the one given by Snell's law, and consequently impugned the validity of Wollaston's method for opaque media. Laplace combined the two cases into a single formula for the relative refractive index in terms of the critical angle (minimum angle of incidence for TIR). The formula contained a parameter which took one value for a transparent external medium and another value for an opaque external medium. Laplace's theory further predicted a relationship between refractive index and density for a given substance.
In 1807, Laplace's theory was tested experimentally by his protégé, Étienne-Louis Malus. Taking Laplace's formula for the refractive index as given, and using it to measure the refractive index of beeswax in the liquid (transparent) state and the solid (opaque) state at various temperatures (hence various densities), Malus verified Laplace's relationship between refractive index and density.
But Laplace's theory implied that if the angle of incidence exceeded his modified critical angle, the reflection would be total even if the external medium was absorbent. Clearly this was wrong: in Eqs.() above, there is no threshold value of the angle θi beyond which κ becomes infinite; so the penetration depth of the evanescent wave (1/κ) is always non-zero, and the external medium, if it is at all lossy, will attenuate the reflection. As to why Malus apparently observed such an angle for opaque wax, we must infer that there was a certain angle beyond which the attenuation of the reflection was so small that ATR was visually indistinguishable from TIR.
Fresnel and the phase shift
Fresnel came to the study of total internal reflection through his research on polarization. In 1811, François Arago discovered that polarized light was apparently "depolarized" in an orientation-dependent and color-dependent manner when passed through a slice of doubly-refractive crystal: the emerging light showed colors when viewed through an analyzer (second polarizer). Chromatic polarization, as this phenomenon came to be called, was more thoroughly investigated in 1812 by Jean-Baptiste Biot. In 1813, Biot established that one case studied by Arago, namely quartz cut perpendicular to its optic axis, was actually a gradual rotation of the plane of polarization with distance.
In 1816, Fresnel offered his first attempt at a wave-based theory of chromatic polarization. Without (yet) explicitly invoking transverse waves, his theory treated the light as consisting of two perpendicularly polarized components. In 1817 he noticed that plane-polarized light seemed to be partly depolarized by total internal reflection, if initially polarized at an acute angle to the plane of incidence. By including total internal reflection in a chromatic-polarization experiment, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them. Choosing an appropriate angle of incidence (not yet exactly specified) gave a phase difference of 1/8 of a cycle. Two such reflections from the "parallel faces" of "two coupled prisms" gave a phase difference of 1/4 of a cycle. In that case, if the light was initially polarized at 45° to the plane of incidence and reflection, it appeared to be completely depolarized after the two reflections. These findings were reported in a memoir submitted and read to the French Academy of Sciences in November 1817.
In 1821, Fresnel derived formulae equivalent to his sine and tangent laws (Eqs.() and (), above) by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Using old experimental data, he promptly confirmed that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water. The experimental confirmation was reported in a "postscript" to the work in which Fresnel expounded his mature theory of chromatic polarization, introducing transverse waves. Details of the derivation were given later, in a memoir read to the academy in January 1823. The derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration.
Meanwhile, in a memoir submitted in December 1822, Fresnel coined the terms linear polarization, circular polarization, and elliptical polarization. For circular polarization, the two perpendicular components were a quarter-cycle (±90°) out of phase.
The new terminology was useful in the memoir of January 1823, containing the detailed derivations of the sine and tangent laws: in that same memoir, Fresnel found that for angles of incidence greater than the critical angle, the resulting reflection coefficients were complex with unit magnitude. Noting that the magnitude represented the amplitude ratio as usual, he guessed that the argument represented the phase shift, and verified the hypothesis by experiment. The verification involved
calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions),
subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and
checking that the final polarization was circular.
This procedure was necessary because, with the technology of the time, one could not measure the s and p phase-shifts directly, and one could not measure an arbitrary degree of ellipticity of polarization, such as might be caused by the difference between the phase shifts. But one could verify that the polarization was circular, because the brightness of the light was then insensitive to the orientation of the analyzer.
For glass with a refractive index of 1.51, Fresnel calculated that a 45° phase difference between the two reflection coefficients (hence a 90° difference after two reflections) required an angle of incidence of 48°37' or 54°37'. He cut a rhomb to the latter angle and found that it performed as expected. Thus the specification of the Fresnel rhomb was completed. Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after three reflections at the same angle, and four reflections at the same angle. In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. (Compare Fig.13 above, which shows that the phase difference is more sensitive to the refractive index for smaller angles of incidence.)
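Fresnel's two angles for n = 1.51 can be reproduced from the phase-advance expressions derived earlier; the following sketch is our verification, not Fresnel's own computation:

```python
import math

def rel_phase(n, theta_deg):
    """Relative phase advance (degrees) of p vs s under TIR."""
    th = math.radians(theta_deg)
    root = math.sqrt((n * math.sin(th)) ** 2 - 1)
    return 2 * math.degrees(math.atan(n * root / math.cos(th))
                            - math.atan(root / (n * math.cos(th))))

for deg in (48 + 37 / 60, 54 + 37 / 60):      # Fresnel's 48°37' and 54°37'
    print(round(rel_phase(1.51, deg), 2))     # both print ~45.0
```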
For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.
Fresnel's deduction of the phase shift in TIR is thought to have been the first occasion on which a physical meaning was attached to the argument of a complex number. Although this reasoning was applied without the benefit of knowing that light waves were electromagnetic, it passed the test of experiment, and survived remarkably intact after James Clerk Maxwell changed the presumed nature of the waves. Meanwhile, Fresnel's success inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. The imaginary part of the complex index represents absorption.
The term critical angle, used for convenience in the above narrative, is anachronistic: it apparently dates from 1873.
In the 20th century, quantum electrodynamics reinterpreted the amplitude of an electromagnetic wave in terms of the probability of finding a photon. In this framework, partial transmission and frustrated TIR concern the probability of a photon crossing a boundary, and attenuated total reflectance concerns the probability of a photon being absorbed on the other side.
Research into the more subtle aspects of the phase shift in TIR, including the Goos–Hänchen and Imbert–Fedorov effects and their quantum interpretations, has continued into the 21st century.
Gallery
See also
Notes
References
Bibliography
S. Bochner (June 1963), "The significance of some basic mathematical conceptions for physics", Isis, 54 (2): 179–205.
M. Born and E. Wolf, 1970, Principles of Optics, 4th Ed., Oxford: Pergamon Press.
C.B. Boyer, 1959, The Rainbow: From Myth to Mathematics, New York: Thomas Yoseloff.
J.Z. Buchwald (December 1980), "Experimental investigations of double refraction from Huygens to Malus", Archive for History of Exact Sciences, 21 (4): 311–373.
J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, .
O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, .
R. Fitzpatrick, 2013, Oscillations and Waves: An Introduction, Boca Raton, FL: CRC Press, .
R. Fitzpatrick, 2013a, "Total Internal Reflection", University of Texas at Austin, accessed 14 March 2018.
A. Fresnel, 1866 (ed. H. de Senarmont, E. Verdet, and L. Fresnel), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol.1 (1866).
E. Hecht, 2017, Optics, 5th Ed., Pearson Education, .
C. Huygens, 1690, Traité de la Lumière (Leiden: Van der Aa), translated by S.P. Thompson as Treatise on Light, University of Chicago Press, 1912; Project Gutenberg, 2005. (Cited page numbers match the 1912 edition and the Gutenberg HTML edition.)
F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill, .
T.H. Levitt, 2013, A Short Bright Flash: Augustin Fresnel and the Birth of the Modern Lighthouse, New York: W.W. Norton, .
H. Lloyd, 1834, "Report on the progress and present state of physical optics", Report of the Fourth Meeting of the British Association for the Advancement of Science (held at Edinburgh in 1834), London: J. Murray, 1835, pp.295–413.
I. Newton, 1730, Opticks: or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light, 4th Ed. (London: William Innys, 1730; Project Gutenberg, 2010); republished with foreword by A. Einstein and Introduction by E.T. Whittaker (London: George Bell & Sons, 1931); reprinted with additional Preface by I.B. Cohen and Analytical Table of Contents by D.H.D. Roller, Mineola, NY: Dover, 1952, 1979 (with revised preface), 2012. (Cited page numbers match the Gutenberg HTML edition and the Dover editions.)
H.G.J. Rutten and M.A.M. van Venrooij, 1988 (fifth printing, 2002), Telescope Optics: A Comprehensive Manual for Amateur Astronomers, Richmond, VA: Willmann-Bell.
J.A. Stratton, 1941, Electromagnetic Theory, New York: McGraw-Hill.
W. Whewell, 1857, History of the Inductive Sciences: From the Earliest to the Present Time, 3rd Ed., London: J.W. Parker & Son, vol.2.
E. T. Whittaker, 1910, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century, London: Longmans, Green, & Co. Available at https://archive.org/details/historyoftheorie00whitrich.
External links
Mr. Mangiacapre, "Fluorescence in a Liquid" (video, ), uploaded 13 March 2012. (Fluorescence and TIR of a violet laser beam in quinine water.)
PhysicsatUVM, "Frustrated Total Internal Reflection" (video, 37s), uploaded 21 November 2011. ("A laser beam undergoes total internal reflection in a fogged piece of plexiglass...")
SMUPhysics, "Internal Reflection" (video, 12s), uploaded 20 May 2010. (Transition from refraction through critical angle to TIR in a 45°-90°-45° prism.)
Light
Waves
Physical phenomena
Optical phenomena
Physical optics
Geometrical optics
Glass physics
History of physics
Lighthouses
Dimensionless numbers of physics | Total internal reflection | Physics,Materials_science,Engineering | 12,536 |
2,481,420 | https://en.wikipedia.org/wiki/Loschmidt%20constant | The Loschmidt constant or Loschmidt's number (symbol: n0) is the number of particles (atoms or molecules) of an ideal gas per volume (the number density), and is usually quoted at standard temperature and pressure. The 2018 CODATA recommended value is n0 = 2.686780111×10²⁵ m⁻³ at 0 °C and 1 atm. It is named after the Austrian physicist Johann Josef Loschmidt, who was the first to estimate the physical size of molecules in 1865. The term Loschmidt constant is also sometimes used to refer to the Avogadro constant, particularly in German texts.
By the ideal gas law, p0V = N kB T0, and since n0 = N/V, the Loschmidt constant is given by the relationship

n0 = p0/(kB T0),
where kB is the Boltzmann constant, p0 is the standard pressure, and T0 is the standard thermodynamic temperature.
Since the Avogadro constant NA satisfies R = NA kB, the Loschmidt constant satisfies

n0 = p0 NA/(R T0),
where R is the ideal gas constant.
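Both relations can be evaluated directly with the exact SI values of the constants. A minimal sketch (variable names are ours):

```python
# SI defining constants (exact since the 2019 revision of the SI):
k_B = 1.380649e-23         # Boltzmann constant, J/K
N_A = 6.02214076e23        # Avogadro constant, 1/mol
R = N_A * k_B              # molar gas constant, J/(mol*K)

p0, T0 = 101325.0, 273.15  # 1 atm (Pa) and 0 degrees Celsius (K)

n0_from_kB = p0 / (k_B * T0)      # n0 = p0 / (kB * T0)
n0_from_R = p0 * N_A / (R * T0)   # n0 = p0 * NA / (R * T0)
print(n0_from_kB, n0_from_R)      # both ~2.686780e25 per cubic metre
```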
Being a measure of number density, the Loschmidt constant is used to define the amagat, a practical unit of number density for gases and other substances:
1 amagat = n0 ≈ 2.686780×10²⁵ particles per cubic metre,
such that the Loschmidt constant is exactly 1 amagat.
Modern determinations
In the CODATA set of recommended values for physical constants, the Loschmidt constant is calculated from the Avogadro constant and the molar volume of an ideal gas, or equivalently from the Boltzmann constant:

n0 = NA/Vm = p0/(kB T0),

where Vm is the molar volume of an ideal gas at the specified temperature and pressure, which can be chosen freely and must be quoted with values of the Loschmidt constant. Since the 2019 revision of the SI, the Loschmidt constant is known exactly for any exactly specified temperature and pressure.
First determinations
Loschmidt did not actually calculate a value for the constant which now bears his name, but it is a simple and logical manipulation of his published results. James Clerk Maxwell described the paper in these terms in a public lecture eight years later:
Loschmidt has deduced from the dynamical theory the following remarkable proportion:—As the volume of a gas is to the combined volume of all the molecules contained in it, so is the mean path of a molecule to one-eighth of the diameter of a molecule.
To derive this "remarkable proportion", Loschmidt started from Maxwell's own definition of the mean free path (there is an inconsistency between the result on this page and the page cross-referenced to the mean free path; here appears an additional factor 3/4):

ℓ = 3/(4πnd²),
where n has the same sense as the Loschmidt constant, that is, the number of molecules per unit volume, and d is the effective diameter of the molecules (assumed to be spherical). This rearranges to

1/n = (16/3)(πℓd²/4),
where 1/n is the volume occupied by each molecule in the gas phase, and πℓd²/4 is the volume of the cylinder made by the molecule in its trajectory between two collisions. However, the true volume of each molecule is given by πd³/6, and so nπd³/6 is the volume occupied by all the molecules, not counting the empty space between them. Loschmidt equated this volume with the volume of the liquefied gas. Dividing both sides of the equation by nπd³/6 has the effect of introducing a factor of Vliquid/Vgas, which Loschmidt called the "condensation coefficient" and which is experimentally measurable. The equation reduces to

d = 8ℓ (Vliquid/Vgas),
relating the diameter of a gas molecule to measurable phenomena.
The number density, the constant which now bears Loschmidt's name, can be found by simply substituting the diameter of the molecule into the definition of the mean free path and rearranging:

n = 3/(4πℓd²) = 3/(256πℓ³(Vliquid/Vgas)²).
Instead of taking this step, Loschmidt decided to estimate the mean diameter of the molecules in air. This was no minor undertaking, as the condensation coefficient was unknown and had to be estimated – it would be another twelve years before Raoul Pictet and Louis Paul Cailletet would liquefy nitrogen for the first time. The mean free path was also uncertain. Nevertheless, Loschmidt arrived at a diameter of about one nanometre, of the correct order of magnitude.
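The two relations above can be illustrated numerically. The mean free path and condensation coefficient below are round, purely hypothetical inputs chosen only to land in the right ballpark; they are not Loschmidt's actual data:

```python
import math

# Purely illustrative inputs (hypothetical, not Loschmidt's actual data):
mfp = 1.0e-7      # mean free path, metres (~100 nm)
eps = 1.0 / 800   # condensation coefficient, V_liquid / V_gas

d = 8 * mfp * eps                              # d = 8 * l * (V_liquid/V_gas)
n = 3 / (256 * math.pi * mfp ** 3 * eps ** 2)  # n = 3 / (256 pi l^3 eps^2)
print(d)   # 1e-09 m, i.e. about one nanometre
print(n)   # ~2.4e24 molecules per cubic metre
```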
Loschmidt's estimated data for air give a value of n = 1.81×10²⁴ m⁻³. Eight years later, Maxwell was citing a figure of "about 19 million million million" per cm³, or 1.9×10²⁵ m⁻³.
See also
Avogadro's law
List of scientists whose names are used in physical constants
References
Amount of substance
Physical constants | Loschmidt constant | Physics,Chemistry,Mathematics | 932 |
12,399,779 | https://en.wikipedia.org/wiki/RIPRNet | RIPRNet (Radio over Internet Protocol Routed Network) is a United States military network that allows system designers and deployment personnel to connect radios in remote locations to local dispatch consoles exchanging radio voice data over an IP routed network. In 2007, RIPRNet was being installed in Iraq for use by United States and Coalition forces.
RIPRNet is a tactical system whose end-users are truck convoys and other mobile forces. Part of the IP network is routed over strategic systems to increase connectivity.
As of July 2007, 14 core sites and 37 ground station consoles were operational, costing "less than $10 million (US dollars) to implement, and is expected to cost $300,000 a year to maintain."
References
External links
Another enemy tactic foiled by Airmen; Airman receives recognition for working to advance RIPRNET, 10/5/2006, Grand Forks AFB Public Affairs
Internet Protocol Network Protects Troop Convoys, Nov 2006, SIGNAL
Military communications | RIPRNet | Engineering | 185 |
35,742,175 | https://en.wikipedia.org/wiki/Paradox%20psychology | Paradox psychology is a counter-intuitive approach primarily geared toward addressing treatment resistance. The method of paradoxical interventions (PDXI) is more focused, rapid, and effective than motivational interviewing. In addressing resistance, the method seeks to influence the client's underlying attitude and perception by concentrating attention on strengthening the attachment-alliance. This is counter-intuitive relative to traditional methods, in which change is usually directed at various aspects of behavior, emotions, and thinking. As it turns out, the better therapy is able to strengthen the alliance, the more these aspects of behavior will change.
However, within the PDXI process, the idea of changing behavior is secondary to the main focus on the alliance. Surprisingly, this seemingly minor shift results in a "day and night" difference in how treatment is conducted. The advantage of focusing on the attachment-alliance is that, when done correctly, the client cannot block or defend against the intervention; "resistance" becomes essentially non-existent. So while the resistant client is often well defended and guarded against attempts to alter his behavior, he is unable to block the therapist from strengthening the alliance. This allows the clinician to avoid power struggles around behavior. By developing a stronger client-therapist bond, there is a natural and unconscious shift toward relaxation. As a result, the client is able to let go of rigid patterns in a manner that can best be described as spontaneous, as unlikely as that may initially sound.
Description
PDXI is an approach that specifically addresses treatment of the "difficult" or resistant client, together with a scientific understanding that supports a process of "spontaneous change". It unifies behavioral, cognitive, and psychodynamic orientations under a single umbrella theory, and is a science-based model showing how treating secondary (less problematic) behaviors (e.g., anger, low self-esteem, poor social skills) will then impact primary targeted volatile or criminal-type behaviors (e.g., violence, problematic sexual behaviors, fire-setting).
In addition, paradox psychology helps explain the process of paradoxical interventions. In doing so, the approach represents a logical extension of attachment theory as described by John Bowlby and Mary Ainsworth.
While there are many treatment theories that address separate aspects of behavior, emotions, and thinking, this approach focuses on the obvious fact that human existence is a 'paradox'. This paradox is evidenced by the fact that we live in an animal body, but we walk upright with our 'mind in the clouds'; our DNA is programmed to function via instinct, yet we prefer to assert free-will; we are smart enough to 'know better', but quite often repeat past mistakes. As such, it could be argued that the study of 'man as a paradox' is most closely aligned with our 'human essence'.
Master therapists
While the paradoxical method was documented by Adler as early as the 1920s, its counter-intuitive style has always been difficult to explain. Adler once described the method as "spitting in the patient's soup"; meaning that the method had the ability to impact behavior without "convincing or rewarding" the patient to change.
From the 1960s through the 1980s many "master therapists" incorporated the method with great success. They include Milton Erickson, Viktor Frankl, Jay Haley, Salvador Minuchin, Fritz Perls, and others. The method proved to have a consistent ability to produce what many described as "amazing results" with clients presenting a wide range of disruptive behavioral issues.
Research
Unbiased research indicates that behavioral, cognitive, and psychodynamic methods show success rates that are statistically equal when working with motivated clients.
Paradoxical interventions were shown to have the highest success rate with oppositional and treatment-resistant clients.
Psychological research is research that psychologists perform to investigate and analyze the experiences and behaviors of individuals or groups in a systematic way. Their findings could be used in educational, occupational, and clinical settings.
Research helps us understand what causes people to think, feel, and act in certain ways; it allows us to categorize psychological disorders so that we can better understand the symptoms and their impact on individuals and society; and it allows us to better understand how intimate relationships, development, schools, family, peers, and religion all play a role.
Scientific and evidence-based
Even though the method was documented to be successful when working with treatment-resistance, paradoxical interventions lost favor in the late 1980s and '90s. This was due to the fact that the psychology field desired to present itself as science oriented, and pushed for 'evidence based' approaches. Since the underlying theory and mechanism for the paradoxical approach had remained an 'unsolved mystery', there was no way to promote the method in a concise and logical manner.
However, more recently, Eliot P. Kaplan, PhD, has provided a simple scientific framework that offers a grounded understanding of this seemingly complicated approach. In his work treating adolescents with problem sexual behaviors (PSB), he has shown that a basic orbits-gravity model allows us to unravel the puzzling nature of the approach. The model identifies the relationship between repetitive energy/behavior (orbits) and the strength of attachment (the force of gravity), as gauged through the therapeutic alliance. The model uses this scientific construct to identify the "active ingredient" that allows the method to be consistently effective in disarming and bypassing treatment resistance.
An engaging aspect of the approach is the humorous and absurd quality of counter-intuitive interventions. It is often this unexpected humor that breaks through the client's usual attempts to keep the clinician at a distance and defend against treatment. Some of the better known interventions include prescribing the symptom; predicting behavior and outcomes; exaggerating symptomatic behavior; and symptom planning and scheduling.
Reverse psychology
Those who lack knowledge of the depth of paradoxical interventions have tended to dismiss the approach as simply reverse psychology. While a paradoxical intervention and reverse psychology may seem similar on the surface, their underlying intent and direction are very different. In reverse psychology the clinician hopes to manipulate the client into following his planned and preset agenda. (He tells the client to "go left" with the "plan" that the client will resist the directive and "go right".)
However, a "pure" paradoxical intervention seeks only to strengthen the alliance, without an ulterior motive. This is done with the understanding that, by "shifting gravity-attachment", the client will spontaneously make changes of his own desire and free will. (Here the clinician expresses unconditional positive regard. He acknowledges that the client's habitual pattern is to "go left", and truly accepts that the client will most likely repeat this pattern in the near future. However, paradoxically, now that the client's behavior has been predicted and the future outcome has been accepted, the client is in a position to make a "free-will choice" to undo the forecasted behavior.) The difference here is that paradoxical interventions support the client's ability to take responsibility for his own actions, while reverse psychology focuses on the ability of the clinician to "trick" the client – a subtle but important difference. The advantage of the method is the ability to approach the client in a non-confrontational and non-threatening manner in such a way that it "forces" the treatment-resistant client to take responsibility for his habitual reactions and patterns.
Reverse psychology, also known as strategic self-anticonformity, is a strategy that entails promoting a behavior that differs from the desired objective. While it can be used to control another person's conduct, it can also be used to manipulate them.
Paradoxical interventions should not be used to directly target dangerous or criminogenic behaviors. In such situations clinicians need to use strategic interventions that target secondary non-criminogenic behaviors, but as a result will impact primary targeted volatile behavior.
References
Bibliography
Adler, A. (1956).The individual psychology of Alfred Adler. (H. L. Ansbacher and R. R. Ansbacher, Ed. And Trans.) New York: Harper Row
Ainsworth, M. D. S. (1989) Attachments beyond infancy. American Psychologist, 44, 709-716
Beisser, A (1970) The paradoxical theory of change. In J. Fagan and I. Shepherd (Eds.) Gestalt therapy now. New York: Harper and Row
Bowlby, J. (1969) Attachment and loss: (Vol. 1), Attachment. New York Basic Books
Capra, F. (1975) Tao of physics. Bantam Books
Fernandez, Y. M. & Serran, G. (2002) Characteristics of an effective sex offender therapist. In B. Schwartz (Eds.), The Sex Offender. (Chap. 9)
Frank, J.D. (1973). Persuasion and healing (2nd ed.). Baltimore: Johns Hopkins University Press.
Frankl, V. (1965) The doctor and the soul: From psychotherapy to logotherapy. New York: Knopf
Frankl, V.E. (1978). The unheard cry for meaning: Psychotherapy and humanism. New York: Simon & Schuster.
Hawking, S., (1998) A brief history of time 10th Ed. Bantam Books
Haley, J. (1963) Strategies of psychotherapy. New York: Grune and Stratton
Horvath, A. O., & Goheen, M. D. (1990). Factors mediating the success of defiance- and compliance-based interventions. Journal of Counseling Psychology, 37, 363–371.
Horvath, A. O., & Symonds, B. D. (1991). Relation between working alliance and outcome in psychotherapy: A meta-analysis. Journal of Counseling Psychology, 38, 139–149.
Kanfer, F.H., & Goldstein, A.P. (1991) Helping people change. New York: Pergamon Press
Kaplan, E.P. (2008). The Sex Offender, Volume 6, Chapter 4: Paradoxical Interventions with Treatment Resistant Offenders. Kingston, NJ: Civic Research Institute (CRI).
Marshall, W. L. (1997). The relationship between self-esteem and deviant sexual arousal in nonfamilial child molesters. Behavior Modification, 21, 1, 86-96
Marshall, W. L., Cripps, E., Anderson, D., & Cortoni, F. A. (1999) Self-esteem and coping strategies in child molesters. Journal of Interpersonal Violence, 14, 955-962
Mann R. E. & Shingler, J. (2001, September) Collaborative risk assessment with sexual offenders. Paper presented at the meeting for National Organization for the Treatment of Abusers, Cardiff, Wales.
Orlinsky, D.E., Grawe, K. & Parks, B.K. (1994) Process and outcome in psychotherapy - Noch Einmal. In A.E. Bergin & S.L. Garfield (Eds.) Handbook of psychotherapy and behavior change (4th ed., pp. 270–378). New York: Wiley.
Rogers, C. R. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21, 95–103.
Rogers, C. R. (1975). Empathic: An unappreciated way of being. The Counseling Psychologist, 5, 2–10.
Safran, J.D. & Muran, J.C. (Eds.). (1995). The therapeutic alliance [Special issue]. Session: Psychotherapy in Practice, 1 (1). (Reissued as millennial issue, February 2000)
Satir, V. (1964) Conjoint Family Therapy. Palo Alto: Science and Behavior Books.
Segal, Z. V. and Marshall, W. L. (1986) Discrepancies between self-efficacy predictions and actual performance in a population of rapists and child molesters. Cognitive Therapy and Research, 10, 363 - 376
Seligman, M. E. (1995) The Effectiveness of Psychotherapy: The Consumer Reports Study. American Psychologist Vol. 50, Num. 12, 965 - 974
Shoham-Salomon, V., Avner, R., & Neeman, R. (1989). You're changed if you do and changed if you don't: Mechanisms underlying paradoxical interventions. Journal of Consulting and Clinical Psychology, 57, 590 - 598.
Ward, T., & Stewart, C. A. (2003) The treatment of sex offenders: Risk management and good lives. Professional Psychology: Research and Practice, .34, 4, 353-360
Weakland, J., Fisch, R., Watzlawick, P., and Bodin, A. (1974) Brief therapy: Focused problem resolution. Family Process, 13, 141-168
Weeks, G. R. and L'Abate, L. (1982) Paradoxical Psychotherapy: Theory and Practice. New York: Brunner/Mazel
Yalom, I.D. (1980). Existential psychotherapy. New York: Basic Books
Clinical psychology
Paradoxes | Paradox psychology | Biology | 2,815 |
61,478,925 | https://en.wikipedia.org/wiki/Cat%20exercise%20wheel | A cat exercise wheel is a large wheel on which a cat runs or walks for exercise or play. A cat wheel looks like a large hamster wheel: the wheel turns under the weight of the cat as it walks. A wheel can be used for enrichment or to exercise high-energy indoor cats.
Indoor cats
Many pet cats are kept in homes or small apartments, and do not have opportunities to run outdoors or otherwise exercise a considerable amount, as they are generally not leash-trained or let outside. A cat wheel can assist cats in maintaining a healthy weight and body condition, as well as providing mental stimulation and a form of play.
See also
Cat toys
Behavioral enrichment
Treadwheel
References
External links
Article from ASPCA's Virtual Pet Behaviorist on cat toys
Opportunities, prerequisites, pros and cons (in German)
Hazards of cat toys (ASPCA)
Cat behavior
Cat equipment
Play (activity)
Toys
Animal welfare
Wheels | Cat exercise wheel | Biology | 195 |
20,928,420 | https://en.wikipedia.org/wiki/Translational%20research%20informatics | Translational research informatics (TRI) is a sister domain to or a sub-domain of biomedical informatics or medical informatics concerned with the application of informatics theory and methods to translational research. There is some overlap with the related domain of clinical research informatics, but TRI is more concerned with enabling multi-disciplinary research to accelerate clinical outcomes, with clinical trials often being the natural step beyond translational research.
Translational research as defined by the National Institutes of Health includes two areas of translation. One is the process of applying discoveries generated during research in the laboratory, and in preclinical studies, to the development of trials and studies in humans. The second area of translation concerns research aimed at enhancing the adoption of best practices in the community. Cost-effectiveness of prevention and treatment strategies is also an important part of translational research.
Overview
Translational research informatics can be described as "an integrated software solution to manage the: (i) logistics, (ii) data integration, and (iii) collaboration, required by translational investigators and their supporting institutions". It is the class of informatics systems that sits between and often interoperates with: (i) health information technology/electronic medical record systems, (ii) CTMS/clinical research informatics, and (iii) statistical analysis and data mining.
Translational research informatics is relatively new, with most CTSA awardee academic medical centers actively acquiring and integrating systems to enable the end-to-end TRI requirements. One advanced TRI system is being implemented at the Windber Research Institute in collaboration with GenoLogics and InforSense. Translational Research Informatics systems are expected to rapidly develop and evolve over the next couple of years.
Systems
CTRI-dedicated wiki
Further discussion of this domain can be found at the Clinical Research Informatics Wiki (CRI Wiki), a wiki dedicated to issues in clinical and translational research informatics.
See also
Bioinformatics
References
Bioinformatics
Laboratory information management system | Translational research informatics | Chemistry,Engineering,Biology | 409 |
22,398,456 | https://en.wikipedia.org/wiki/Bacterial%20circadian%20rhythm | Bacterial circadian rhythms, like other circadian rhythms, are endogenous "biological clocks" that have the following three characteristics: (a) in constant conditions (i.e. constant temperature and either constant light {LL} or constant darkness {DD}) they oscillate with a period that is close to, but not exactly, 24 hours in duration, (b) this "free-running" rhythm is temperature compensated, and (c) the rhythm will entrain to an appropriate environmental cycle.
Until the mid-1980s, it was thought that only eukaryotic cells had circadian rhythms. It is now known that cyanobacteria (a phylum of photosynthetic eubacteria) have well-documented circadian rhythms that meet all the criteria of bona fide circadian rhythms. In these bacteria, three key proteins, KaiA, KaiB, and KaiC, whose structures have been determined, can form a molecular clockwork that orchestrates global gene expression. This system enhances the fitness of cyanobacteria in rhythmic environments.
History: are prokaryotes capable of circadian rhythmicity?
Before the mid-1980s, it was believed that only eukaryotes had circadian systems.
In 1985–6, several research groups discovered that cyanobacteria display daily rhythms of nitrogen fixation in both light/dark (LD) cycles and in constant light. The group of Huang and co-workers was the first to recognize clearly that the cyanobacterium Synechococcus sp. RF-1 was exhibiting circadian rhythms, and in a series of publications beginning in 1986 demonstrated all three of the salient characteristics of circadian rhythms described above in the same organism, the unicellular freshwater Synechococcus sp. RF-1. Another ground-breaking study was that of Sweeney and Borgese.
Inspired by the research of the aforementioned pioneers, the collaborative group of Takao Kondo, Carl H. Johnson, Susan Golden, and Masahiro Ishiura genetically transformed the cyanobacterium Synechococcus elongatus with a luciferase reporter that allowed rhythmic gene expression to be assayed non-invasively as rhythmically "glowing" cells. This system allowed an exquisitely precise circadian rhythm of luminescence to be measured from cell populations and even from single cyanobacterial cells. The figure shows the daily oscillations in luminescence of many individual cyanobacterial colonies on a petri dish; note the synchrony of rhythmicity among the various colonies.
Relationship to cell division
Despite predictions that circadian clocks would not be expressed by cells that are doubling faster than once per 24 hours, the cyanobacterial rhythms continue in cultures that are growing with doubling times as rapid as one division every 5–6 hours.
Adaptive significance
Do circadian timekeepers enhance the fitness of organisms growing under natural conditions? Circadian clocks are assumed to enhance the fitness of organisms by improving their ability to predict and anticipate daily cycles in environmental factors. However, there have been few rigorous tests of this proposition in any organism.
Cyanobacteria are one of the few organisms in which such a test has been performed. The adaptive fitness test was done by mixing cyanobacterial strains that express different circadian properties (i.e., rhythmicity vs. arhythmicity, different periods, etc.) and growing them in competition under different environmental conditions. The idea was to determine if having an appropriately functional clock system enhances fitness under competitive conditions. The result was that strains with a functioning biological clock out-compete arhythmic strains in environments that have a rhythmic light/dark cycle (e.g., 12 hours of light alternating with 12 hours of darkness), whereas in "constant" environments (e.g., constant illumination) rhythmic and arhythmic strains grow at comparable rates. Among rhythmic strains with different periods, the strains whose endogenous period most closely matches the period of the environmental cycle are able to out-compete strains whose period does not match that of the environment.
Similar results were later obtained in plants and mice.
Global regulation of gene expression and chromosomal topology
In eukaryotes, about 10–20% of the genes are rhythmically expressed (as gauged by rhythms of mRNA abundance). However, in cyanobacteria, a much larger percentage of genes are controlled by the circadian clock. For example, one study has shown that the activity of essentially all promoters in the genome is rhythmically regulated. The mechanism by which this global gene regulation is mechanistically linked to the circadian clock appears to be due to clock triggering of a transcriptional cascade coupled to rhythmic changes in the topology of the entire cyanobacterial chromosome.
Molecular mechanism of the cyanobacterial clockwork
The S. elongatus luciferase reporter system was used to screen for clock gene mutants, of which many were isolated. The figure shows a few of the many mutants that were discovered. These mutants were used to identify the core KaiA, KaiB, KaiC clock genes.
At first, the cyanobacterial clockwork appeared to be a transcription and translation feedback loop in which clock proteins autoregulate the activity of their own promoters by a process that was similar in concept to the circadian clock loops of eukaryotes. Subsequently, however, several lines of evidence indicated that transcription and translation was not necessary for circadian rhythms of Kai proteins, the most spectacular being that the three purified Kai proteins can reconstitute a temperature-compensated circadian oscillation in a test tube.
In vivo, the output of this biochemical KaiABC oscillator to rhythms of gene expression appears to be mediated by KaiC phosphorylation status (see below) regulating a biochemical cascade involving a histidine kinase SasA and a phosphatase CikA that activate/inactivate the globally acting transcription factor RpaA. A contributing factor to the global transcription programs is rhythms of chromosomal topology in which the circadian clock orchestrates dramatic circadian changes in DNA topology that modulates changes in the transcription rates.
Visualizing the clockwork's "gears": structural biology of clock proteins
The cyanobacterial circadian system is so far unique in that it is the only circadian system in which the structures of full-length clock proteins have been solved. In fact, the structures of all three of the Kai proteins have been determined. KaiC forms a hexamer that resembles a double doughnut with a central pore that is partially sealed at one end. There are twelve ATP-binding sites in KaiC and the residues that are phosphorylated during the in vitro phosphorylation rhythm have been identified. KaiA has two major domains and forms dimers in which the N-terminal domains are "swapped" with the C-terminal domains. KaiB has been successfully crystallized from three different species of cyanobacteria and forms dimers or tetramers.
The three-dimensional structures have been helpful in elucidating the cyanobacterial clock mechanism by providing concrete models for the ways in which the three Kai proteins interact and influence each other.
The structural approaches have also allowed the KaiA/KaiB/KaiC complexes to be visualized as a function of time, which enabled sophisticated mathematical modeling of the in vitro phosphorylation rhythm. Therefore, the cyanobacterial clock components and their interactions can be visualized in four dimensions (three in space, one in time). The temporal formation patterns of the KaiA/KaiB/KaiC complex have been elucidated, along with an interpretation of the core mechanism based on the cycle of KaiC phosphorylation patterns and the dynamics of the KaiA/KaiB/KaiC complex. (See the animation of the phosphorylation/complex cycle.) In addition, single-molecule methods (high-speed atomic force microscopy) have been applied to visualize in real time and quantify the dynamic interactions of KaiA with KaiC on sub-second timescales. These interactions regulate the circadian oscillation by modulating the magnesium binding in KaiC.
While the KaiABC phosphorylation/complex cycle can explain key features of this biochemical circadian oscillator, especially how it can link to the output pathways that regulate global gene expression patterns, it does not explain why the oscillator has a period of approximately 24 hours, nor how it can be "temperature compensated." Phosphorylation/dephosphorylation reactions and protein complex associations/dissociations can be very rapid, so how can this biochemical oscillator have a period as slow as 24 hours and yet remain so precise? One model is that the rate-limiting reaction that determines the period is the very slow rate of ATP hydrolysis by KaiC. KaiC hydrolyses ATP at the remarkably slow rate of only 15 ATP molecules per KaiC monomer per 24 hours. The rate of this ATPase activity is temperature compensated, and the activities of wild-type and period-mutant KaiC proteins are directly proportional to their in vivo circadian frequencies, suggesting that the ATPase activity defines the circadian period. Therefore, some authors have proposed that the KaiC ATPase activity constitutes the most fundamental reaction underlying circadian periodicity in cyanobacteria. Structural analyses of the KaiC ATPase suggested that the slowness of this ATP hydrolysis arises from sequestration of a lytic water molecule in an unfavorable position and coupling of ATP hydrolysis to a peptide isomerization, thereby increasing the activation energy of ATP hydrolysis and slowing it to a 24-hour timescale.
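To put this slowness in perspective, the stated hydrolysis rate can be converted into a conventional turnover number (a back-of-the-envelope calculation based only on the figures above):

\[
k_{\mathrm{cat}} \approx \frac{15\ \mathrm{ATP}}{24 \times 3600\ \mathrm{s}} \approx 1.7 \times 10^{-4}\ \mathrm{s}^{-1}
\]

Typical enzymes hydrolyse ATP tens to hundreds of times per second, so the KaiC ATPase is roughly five to six orders of magnitude slower; slow enough that its turnover, rather than the intrinsically fast phosphotransfer and binding steps, can set a ~24-hour timescale.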
Circadian advantage
In the context of bacterial circadian rhythms, specifically in cyanobacteria, circadian advantage refers to the competitive advantage of strains of cyanobacteria whose clocks "resonate" with the environmental light/dark cycle. For example, consider a strain with a free-running period (FRP) of 24 hours that is co-cultured with a strain that has an FRP of 30 hours in a light-dark cycle of 12 hours light and 12 hours dark (LD 12:12). The strain with the 24-hour FRP will out-compete the 30-hour strain over time under these LD 12:12 conditions. In a light-dark cycle of 15 hours light and 15 hours darkness, by contrast, the 30-hour strain will out-compete the 24-hour strain. Moreover, rhythmic strains of cyanobacteria will out-compete arhythmic strains in 24-h light/dark cycles, but in continuous light, arhythmic strains are able to co-exist with wild-type cells in mixed cultures.
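This resonance effect can be illustrated with a deliberately simple toy model (not drawn from the literature; the penalty constant, growth law, and function names below are all illustrative assumptions): suppose each strain's effective growth rate is reduced in proportion to the mismatch between its FRP and the period of the environmental cycle, and let two strains compete in serial co-culture.

# Toy "circadian resonance" competition model (illustrative only).
# Each strain's growth rate falls linearly with the mismatch between
# its free-running period (FRP) and the environmental cycle period.

def compete(frp_a, frp_b, env_period, days=30, penalty=0.02):
    """Return the final fraction of strain A in a two-strain co-culture."""
    base_rate = 1.0                                # baseline doublings per day
    rate_a = base_rate - penalty * abs(frp_a - env_period)
    rate_b = base_rate - penalty * abs(frp_b - env_period)
    frac_a = 0.5                                   # start as a 50:50 mixture
    for _ in range(days):
        n_a = frac_a * 2.0 ** rate_a               # one day of growth
        n_b = (1.0 - frac_a) * 2.0 ** rate_b
        frac_a = n_a / (n_a + n_b)                 # renormalize: serial dilution
    return frac_a

print(compete(24, 30, env_period=24))  # ~0.92: the 24 h strain dominates
print(compete(24, 30, env_period=30))  # ~0.08: the 30 h strain dominates

Even this cartoon reproduces the qualitative experimental outcome: whichever strain's FRP matches the imposed cycle takes over the culture, while a mismatch of a few hours compounds into near-exclusion over a month of competition.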
Other bacteria
The only prokaryotic group with a well-documented circadian timekeeping mechanism is the cyanobacteria. Recent studies have suggested that there might be 24-hour timekeeping mechanisms among other prokaryotes. The purple non-sulfur bacterium Rhodopseudomonas palustris is one such example, as it harbors homologs of KaiB and KaiC and exhibits adaptive KaiC-dependent growth enhancement in 24-hour cyclic environments. However, R. palustris was reported to show a poor intrinsic free-running rhythm of nitrogen fixation under constant conditions. The lack of rhythm in R. palustris under constant conditions has implications for the adaptive value of an intrinsic timekeeping mechanism. Therefore, the R. palustris system was proposed as a "proto" circadian timekeeper that exhibits some parts of circadian systems (the kaiB and kaiC homologs), but not all.
There is some evidence of a circadian clock in Bacillus subtilis. Luciferase promoter assays showed gene expression patterns of ytvA, a gene encoding a blue light photoreceptor, that satisfied the criteria of a circadian clock. However, there has yet to be a robust demonstration of a clock in B. subtilis and the potential mechanisms of circadian gene regulation within B. subtilis remain unknown.
Another interesting example is the gut microbiome. Circadian clocks may play a role in gut microbiota behavior. These microorganisms experience daily changes because their hosts eat on a daily routine (during the day for diurnal animals and at night for nocturnal hosts). A daily timekeeper might allow gut bacteria to anticipate resources arriving from the host, giving those species a competitive advantage over other species in the gut. Some bacteria are known to take cues from the host circadian clock in the form of melatonin. Disruption of the gut microbiome has been linked to many human diseases, so maintaining a healthy gut microbiota is important to health. The host's circadian rhythm controls the ~24-hour cycle of many factors in the gut environment, such as temperature, nutrients, certain hormones, bile acid levels, and immune functions. The relative abundances of some gut bacteria, such as Firmicutes and Bacteroidetes, display a clear daily cycle. In arrhythmic mice with clock-component dysfunctions, this rhythmicity disappears. Jet lag and sleep deprivation can disrupt the microbiome's daily oscillations, although the changes are usually not dramatic.
This interaction is bidirectional as the gut microbiota can also act on the hosts. For example, antibiotics can affect the rhythmic adherence of gut bacteria to the intestinal epithelium and in turn, rewire the hosts’ chromatin and transcription oscillations in the intestines and in the livers.
Some current research in this field focuses on whether gut bacteria have intrinsic circadian rhythms. If so, researchers speculate that they may use their host's feeding patterns as zeitgebers. A long-term study in mice examined whether the hosts' rhythmic and arrhythmic feeding behaviors contributed differently to the recovery of their gut microbiota from antibiotic treatment. Researchers found that rhythmic feeding behavior after antibiotic ablation facilitates complete recovery of the gut microbiota, whereas arrhythmic behavior hinders proper recovery and instead drives the microbiota toward a new steady state distinct from the original. The genus Turicibacter, shown to modulate the mood-related neurotransmitter serotonin, was found to over-recover. This effect may lower serotonin levels in the gut, connecting the gut microbiome to effects on the host's mental health.
There are 4,616 bacterial species recognized in the human gut. Only 2 of them, Klebsiella aerogenes and Bacillus subtilis, are currently reported to have circadian clocks.
It is suspected that other gut bacteria may have circadian clocks, too.
See also
Circadian rhythm
Chronobiology
Cyanobacteria
KaiA
KaiB
KaiC
Oscillation
Phosphorylation
Synechococcus
References
Further reading
circadian rhythm | Bacterial circadian rhythm | Biology | 3,218 |
69,195,464 | https://en.wikipedia.org/wiki/National%20Initiative%20for%20Cybersecurity%20Careers%20and%20Studies | National Initiative for Cybersecurity Careers and Studies (NICCS) is an online training initiative and portal built in accordance with the National Initiative for Cybersecurity Education framework. It is a federal cybersecurity training subcomponent, operated and maintained by the Cybersecurity and Infrastructure Security Agency.
Overview
The National Initiative for Cybersecurity Careers and Studies was created by the Cybersecurity and Infrastructure Security Agency as a hub that provides the public with access to cybersecurity resources, such as courses and career development. Its mission is to strengthen the cybersecurity workforce and awareness of cybersecurity and cyberspace through accessible education. With over 6,000 cybersecurity training courses, career pathway tools, and up-to-date coverage of cybersecurity events and news, NICCS aims to empower current and future generations of cybersecurity professionals.
History
The initiative was launched by Janet Napolitano, then-Secretary of Homeland Security, on February 21, 2013. Its primary objective is to develop and train the next generation of American cyber professionals by involving academia and the private sector.
Goals and Objectives
NICCS was founded with the overarching goal of being a national resource for cybersecurity education, careers, and training. It aims to provide the nation with resources to ensure the workforce has proper training and education in the cybersecurity field. NICCS advocates for cybersecurity awareness, training, education, and career advancement, and for broadening the nation's cybersecurity workforce. The initiative employs several strategies to achieve its goals, such as implementing K-12 and collegiate-level programs, disseminating scholarship information, and offering varied training courses.
NICCS regards cybersecurity as a priority in the nation's development and as integral to the success of many organizations and businesses. It aims to educate and train the nation's workforce in the rapidly developing technologies of the cybersecurity field.
Federal Virtual Training Environment
NICCS hosts the Federal Virtual Training Environment, a completely free online cybersecurity training system for federal and state government employees. It contains more than 800 hours of training materials on ethical hacking, surveillance, risk management, and malware analysis.
Training Programs
NICCS seeks to supply the nation with trained and certified cybersecurity professionals. It has developed a college-to-workforce pipeline through the CyberCorps Scholarship for Service program and has partnered with the NSA to identify and recognize institutions with robust cybersecurity programs, designating them as Centers of Academic Excellence (CAEs). In addition, it provides support and resources to help K-12 teachers and students increase their cyber education, partners with training institutions across the United States to connect individuals with bootcamps, workshops, and certification training, and endorses certifications relevant to cybersecurity professionals, such as Network+ and Security+.
Similar Programs and Initiatives
National Cybersecurity Workforce Framework: Sets a universally accepted way to describe cybersecurity work, and workers
Cybersecurity and Infrastructure Security Agency: Responsible for ensuring critical infrastructure security and resilience
National Institute of Standards and Technology: Sets technological standards to help promote cooperation and foster innovation
DoD Cyber Workforce Framework: Establishes descriptions of the types of cybersecurity work individuals perform, based on their tasks
See also
Cybersecurity and Infrastructure Security Agency
National Cyber Security Division
National Initiative for Cybersecurity Education
References
Initiatives in the United States
Computer network security | National Initiative for Cybersecurity Careers and Studies | Engineering | 741 |
29,247,528 | https://en.wikipedia.org/wiki/Earth%27s%20shadow | Earth's shadow (or Earth shadow) is the shadow that Earth itself casts through its atmosphere and into outer space, toward the antisolar point. During the twilight period (both early dusk and late dawn), the shadow's visible fringe – sometimes called the dark segment or twilight wedge – appears as a dark and diffuse band just above the horizon, most distinct when the sky is clear.
Since the angular sizes of the Sun and the Moon, as seen from the surface of the Earth, are almost the same, the ratio of the length of the Earth's shadow to the distance between the Earth and the Moon is almost equal to the ratio of the sizes of the Earth and the Moon. Since Earth's diameter is 3.7 times the Moon's, the length of the planet's umbra is correspondingly 3.7 times the average distance from the Moon to the Earth: roughly 1.4 million km. The width of the Earth's shadow at the distance of the lunar orbit is approximately 9,000 km (~2.6 lunar diameters), which makes total lunar eclipses observable from Earth.
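The arithmetic behind these figures is straightforward. Using round values for Earth's diameter (\(D_E \approx 12{,}700\) km), the Earth/Moon diameter ratio (≈3.7), and the mean Earth–Moon distance (\(d \approx 384{,}000\) km):

\[
L_{\mathrm{umbra}} \approx 3.7\,d \approx 3.7 \times 384{,}000\ \mathrm{km} \approx 1.4 \times 10^{6}\ \mathrm{km}
\]

Because the umbra tapers linearly from Earth's full diameter down to a point at distance \(L_{\mathrm{umbra}}\), its width at the Moon's distance is

\[
w(d) \approx D_E\left(1 - \frac{d}{L_{\mathrm{umbra}}}\right) \approx 12{,}700\ \mathrm{km} \times \left(1 - \frac{384{,}000}{1{,}420{,}000}\right) \approx 9{,}300\ \mathrm{km},
\]

consistent with the roughly 9,000 km (about 2.6 lunar diameters) quoted above.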
Appearance
Earth's shadow cast onto the atmosphere can be viewed during the "civil" stage of twilight, assuming the sky is clear and the horizon is relatively unobstructed. The shadow's fringe appears as a dark bluish to purplish band that stretches over 180° of the horizon opposite the Sun, i.e. in the eastern sky at dusk and in the western sky at dawn. Before sunrise, Earth's shadow appears to recede as the Sun rises; after sunset, the shadow appears to rise as the Sun sets.
Earth's shadow is best seen when the horizon is low, such as over the sea, and when the sky is clear. In addition, the higher the observer's elevation, the sharper the shadow appears.
Belt of Venus
A related phenomenon in the same part of the sky is the Belt of Venus, or anti-twilight arch, a pinkish band visible above the bluish shade of Earth's shadow, named after the planet Venus which, when visible, is typically located in this region of the sky.
No defined line divides the Earth's shadow and the Belt of Venus; one colored band blends into the other in the sky.
The Belt of Venus is quite a different phenomenon from the afterglow, which appears in the geometrically opposite part of the sky.
Color
When the Sun is near the horizon around sunset or sunrise, the sunlight appears reddish. This is because the light rays are penetrating an especially thick layer of the atmosphere, which works as a filter, scattering all but the longer (redder) wavelengths.
From the observer's perspective, the red sunlight directly illuminates small particles in the lower atmosphere in the sky opposite of the Sun. The red light is backscattered to the observer, which is the reason why the Belt of Venus appears pink.
The lower the setting Sun descends, the less defined the boundary between Earth's shadow and the Belt of Venus appears. This is because the setting Sun now illuminates a thinner part of the upper atmosphere. There the red light is not scattered because fewer particles are present, and the eye only sees the "normal" (usual) blue sky, which is due to Rayleigh scattering from air molecules. Eventually, both Earth's shadow and the Belt of Venus dissolve into the darkness of the night sky.
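The wavelength dependence underlying these colors is Rayleigh scattering, whose intensity scales as the inverse fourth power of wavelength. Comparing blue light (~450 nm) with red light (~700 nm):

\[
\frac{I_{\mathrm{blue}}}{I_{\mathrm{red}}} \approx \left(\frac{700}{450}\right)^{4} \approx 5.9
\]

Blue light is thus scattered roughly six times more strongly than red, which is why short wavelengths are stripped from a long, grazing path through the atmosphere (reddening the direct sunlight) while the scattered skylight remains blue.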
Color of lunar eclipses
Earth's shadow is as curved as the planet is, and its umbra extends into outer space. (The antumbra, however, extends indefinitely.) When the Sun, Earth, and the Moon are aligned perfectly (or nearly so), with Earth between the Sun and the Moon, Earth's shadow falls onto the lunar surface facing the night side of the planet, such that the shadow gradually darkens the full Moon, causing a lunar eclipse.
Even during a total lunar eclipse, however, a small amount of sunlight still reaches the Moon. This indirect sunlight has been refracted as it passed through Earth's atmosphere. The air molecules and particulates in Earth's atmosphere scatter the shorter wavelengths of this sunlight; thus, the longer wavelengths of reddish light reach the Moon, in the same way that light at sunset or sunrise appears reddish. This weak red illumination gives the eclipsed Moon a dimly reddish or copper color.
See also
Brocken spectre, the magnified shadow of an observer cast upon clouds opposite of the Sun's direction
References
External links
Definition of "dark segment"
Image showing a much larger segment of the sky with dark segment and Belt of Venus
Shadow of Earth, Belt of Venus as seen over Half Dome, Yosemite National Park, displayed in an interactive panorama. Scroll to the bottom of the post to view, after all other Yosemite panoramas.
Atmospheric optical phenomena
Shadows
Shadow
Sky | Earth's shadow | Physics | 1,005 |
18,310,752 | https://en.wikipedia.org/wiki/Spar%20varnish | Spar varnish (occasionally also called boat varnish or yacht varnish) is a wood-finishing varnish, originally developed for coating the spars of sailing ships, which formed part of the masts and rigging. These had to withstand rough conditions: being flexed by the wind loads they supported, attacked by seawater and bad weather, and suffering UV degradation from long-term exposure to sunlight.
The most important condition for such varnishes was resistance to flexing. This required a varnish that was flexible and elastic. Without elasticity, the varnish would soon crack, allowing water to penetrate the wood beneath. Prior to the development of modern polymer chemistry, varnish production was rudimentary. Originally, spar varnish was a "long oil" varnish, composed primarily of drying oil with a small proportion of resin, usually boiled linseed oil and rosin. This gave flexibility, even though its weather resistance was still poor, and thus re-coating was required relatively frequently.
In modern times, "spar varnish" has become a genericised term in North America for any outdoor wood finish. Owing to modern varnish materials, their weather and UV resistance is likely to be good, but the original requirement for flexibility has largely been forgotten. A common form of modern spar varnish is spar urethane, a polyurethane-based finish intended for outdoor use, where sunlight-, heat-, and water-resistance are desirable qualities.
See also
Danish oil
Construction adhesive, a gluing compound for wood and other materials, designed to be more flexible than brittle wood glue
References
Varnishes
Wood finishing materials | Spar varnish | Chemistry | 339 |
372,266 | https://en.wikipedia.org/wiki/Long-term%20potentiation | In neuroscience, long-term potentiation (LTP) is a persistent strengthening of synapses based on recent patterns of activity. These are patterns of synaptic activity that produce a long-lasting increase in signal transmission between two neurons. The opposite of LTP is long-term depression, which produces a long-lasting decrease in synaptic strength.
It is one of several phenomena underlying synaptic plasticity, the ability of chemical synapses to change their strength. As memories are thought to be encoded by modification of synaptic strength, LTP is widely considered one of the major cellular mechanisms that underlies learning and memory.
LTP was discovered in the rabbit hippocampus by Terje Lømo in 1966 and has remained a popular subject of research since. Many modern LTP studies seek to better understand its basic biology, while others aim to draw a causal link between LTP and behavioral learning. Still others try to develop methods, pharmacologic or otherwise, of enhancing LTP to improve learning and memory. LTP is also a subject of clinical research, for example, in the areas of Alzheimer's disease and addiction medicine.
History
Early theories of learning
At the end of the 19th century, scientists generally recognized that the number of neurons in the adult brain (roughly 100 billion) did not increase significantly with age, giving neurobiologists good reason to believe that memories were generally not the result of new neuron production. With this realization came the need to explain how memories could form in the absence of new neurons.
The Spanish neuroanatomist Santiago Ramón y Cajal was among the first to suggest a mechanism of learning that did not require the formation of new neurons. In his 1894 Croonian Lecture, he proposed that memories might instead be formed by strengthening the connections between existing neurons to improve the effectiveness of their communication. Hebbian theory, introduced by Donald Hebb in 1949, echoed Ramón y Cajal's ideas, further proposing that cells may grow new connections or undergo metabolic and synaptic changes that enhance their ability to communicate and create a neural network of experiences:
Eric Kandel (1964) and associates were some of the first researchers to discover long-term potentiation during their work with the sea slug Aplysia. They attempted to apply behavioral conditioning to different cells in the slug's neural network. Their results showed changes in synaptic strength, and the researchers suggested that this may be due to a basic form of learning occurring within the slug.
Though these theories of memory formation are now well established, they were farsighted for their time: late 19th and early 20th century neuroscientists and psychologists were not equipped with the neurophysiological techniques necessary for elucidating the biological underpinnings of learning in animals. These skills would not come until the latter half of the 20th century, at about the same time as the discovery of long-term potentiation.
Discovery
LTP was first observed by Terje Lømo in 1966 in the Oslo, Norway, laboratory of Per Andersen. There, Lømo conducted a series of neurophysiological experiments on anesthetized rabbits to explore the role of the hippocampus in short-term memory.
Lømo's experiments focused on connections, or synapses, from the perforant pathway to the dentate gyrus. These experiments were carried out by stimulating presynaptic fibers of the perforant pathway and recording responses from a collection of postsynaptic cells of the dentate gyrus. As expected, a single pulse of electrical stimulation to fibers of the perforant pathway caused excitatory postsynaptic potentials (EPSPs) in cells of the dentate gyrus. What Lømo unexpectedly observed was that the postsynaptic cells' response to these single-pulse stimuli could be enhanced for a long period of time if he first delivered a high-frequency train of stimuli to the presynaptic fibers. When such a train of stimuli was applied, subsequent single-pulse stimuli elicited stronger, prolonged EPSPs in the postsynaptic cell population. This phenomenon, whereby a high-frequency stimulus could produce a long-lived enhancement in the postsynaptic cells' response to subsequent single-pulse stimuli, was initially called "long-lasting potentiation".
Timothy Bliss, who joined the Andersen laboratory in 1968, collaborated with Lømo and in 1973 the two published the first characterization of long-lasting potentiation in the rabbit hippocampus. Bliss and Tony Gardner-Medwin published a similar report of long-lasting potentiation in the awake animal which appeared in the same issue as the Bliss and Lømo report. In 1975, Douglas and Goddard proposed "long-term potentiation" as a new name for the phenomenon of long-lasting potentiation. Andersen suggested that the authors chose "long-term potentiation" perhaps because of its easily pronounced acronym, "LTP".
Models and theory
The physical and biological mechanism of LTP is still not understood, but some successful models have been developed. Studies of dendritic spines, protruding structures on dendrites that physically grow and retract over the course of minutes or hours, have suggested a relationship between the electrical resistance of the spine and the effective synapse strength, due to their relationship with intracellular calcium transients. Mathematical models such as BCM theory, which depends also on intracellular calcium in relation to NMDA receptor voltage gates, have been developed since the 1980s and modify the traditional a priori Hebbian learning model with both biological and experimental justification. Still others have proposed re-arranging or synchronizing the relationship between receptor regulation, LTP, and synaptic strength.
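To give a flavor of these models, the BCM rule mentioned above can be written, in one common formulation, as a synaptic weight update in which the sign of plasticity depends on whether postsynaptic activity exceeds a sliding threshold:

\[
\frac{dw_i}{dt} = x_i\,\phi(y), \qquad \phi(y) = y\,(y - \theta_M), \qquad \theta_M \propto \langle y^2 \rangle,
\]

where \(x_i\) is the activity of presynaptic input \(i\), \(y\) is the postsynaptic activity, and \(\theta_M\) is the modification threshold. Activity above \(\theta_M\) produces potentiation (LTP) and activity below it produces depression (LTD); because the threshold itself slides with the recent average of postsynaptic activity, the rule remains stable rather than potentiating without bound.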
Types
Since its original discovery in the rabbit hippocampus, LTP has been observed in a variety of other neural structures, including the cerebral cortex, cerebellum, amygdala, and many others. Robert Malenka, a prominent LTP researcher, has suggested that LTP may even occur at all excitatory synapses in the mammalian brain.
Different areas of the brain exhibit different forms of LTP. The specific type of LTP exhibited between neurons depends on a number of factors. One such factor is the age of the organism when LTP is observed. For example, the molecular mechanisms of LTP in the immature hippocampus differ from those that underlie LTP in the adult hippocampus. The signalling pathways used by a particular cell also contribute to the specific type of LTP present. For example, some types of hippocampal LTP depend on the NMDA receptor, others may depend upon the metabotropic glutamate receptor (mGluR), while still others depend upon another molecule altogether. The variety of signaling pathways that contribute to LTP and the wide distribution of these various pathways in the brain are reasons that the type of LTP exhibited between neurons depends only in part upon the anatomic location in which LTP is observed. For example, LTP in the Schaffer collateral pathway of the hippocampus is NMDA receptor-dependent; this was shown by the application of AP5, an NMDA receptor antagonist, which prevented LTP in this pathway. Conversely, LTP in the mossy fiber pathway is NMDA receptor-independent, even though both pathways are in the hippocampus.
The pre- and postsynaptic activity required to induce LTP are other criteria by which LTP is classified. Broadly, this allows classification of LTP into Hebbian, non-Hebbian, and anti-Hebbian mechanisms. Borrowing its name from Hebb's postulate, summarized by the maxim that "cells that fire together wire together," Hebbian LTP requires simultaneous pre- and postsynaptic depolarization for its induction. Non-Hebbian LTP is a type of LTP that does not require such simultaneous depolarization of pre- and postsynaptic cells; an example of this occurs in the mossy fiber hippocampal pathway. A special case of non-Hebbian LTP, anti-Hebbian LTP explicitly requires simultaneous presynaptic depolarization and relative postsynaptic hyperpolarization for its induction.
Owing to its predictable organization and readily inducible LTP, the CA1 hippocampus has become the prototypical site of mammalian LTP study. In particular, NMDA receptor-dependent LTP in the adult CA1 hippocampus is the most widely studied type of LTP, and is therefore, the focus of this article.
Properties
NMDA receptor-dependent LTP exhibits several properties, including input specificity, associativity, cooperativity, and persistence.
Input specificity
Once induced, LTP at one synapse does not spread to other synapses; rather, LTP is input-specific. Long-term potentiation is propagated only to those synapses dictated by the rules of associativity and cooperativity. However, the input specificity of LTP may be incomplete at short distances. One model to explain the input specificity of LTP was presented by Frey and Morris in 1997 and is called the synaptic tagging and capture hypothesis.
Associativity
Associativity refers to the observation that when weak stimulation of a single pathway is insufficient for the induction of LTP, simultaneous strong stimulation of another pathway will induce LTP at both pathways.
Cooperativity
LTP can be induced either by strong tetanic stimulation of a single pathway to a synapse, or cooperatively via the weaker stimulation of many. When one pathway into a synapse is stimulated weakly, it produces insufficient postsynaptic depolarization to induce LTP. In contrast, when weak stimuli are applied to many pathways that converge on a single patch of postsynaptic membrane, the individual postsynaptic depolarizations generated may collectively depolarize the postsynaptic cell enough to induce LTP cooperatively. Synaptic tagging, discussed later, may be a common mechanism underlying associativity and cooperativity. Bruce McNaughton argues that any difference between associativity and cooperativity is strictly semantic. Experiments performed by stimulating an array of individual dendritic spines have shown that synaptic cooperativity by as few as two adjacent dendritic spines prevents long-term depression (LTD), allowing only LTP.
Persistence
LTP is persistent, lasting from several minutes to many months, and it is this persistence that separates LTP from other forms of synaptic plasticity.
Early phase
Maintenance
While induction entails the transient activation of CaMKII and PKC, maintenance of E-LTP (early-form LTP) is characterized by their persistent activation. During this stage, PKMzeta (PKMζ), which does not depend on calcium, becomes autonomously active. Consequently, these kinases are able to carry out the phosphorylation events that underlie E-LTP expression.
Expression
Phosphorylation is a chemical reaction in which a small phosphate group is added to another molecule to change that molecule's activity. Autonomously active CaMKII and PKC use phosphorylation to carry out the two major mechanisms underlying the expression of E-LTP. First, and most importantly, they phosphorylate existing AMPA receptors to increase their activity. Second, they mediate or modulate the insertion of additional AMPA receptors into the postsynaptic membrane. Importantly, the delivery of AMPA receptors to the synapse during E-LTP is independent of protein synthesis. This is achieved by having a nonsynaptic pool of AMPA receptors adjacent to the postsynaptic membrane. When the appropriate LTP-inducing stimulus arrives, nonsynaptic AMPA receptors are rapidly trafficked into the postsynaptic membrane under the influence of protein kinases. As mentioned previously, AMPA receptors are the brain's most abundant glutamate receptors and mediate the majority of its excitatory activity. By increasing the efficiency and number of AMPA receptors at the synapse, future excitatory stimuli generate larger postsynaptic responses.
While the above model of E-LTP describes entirely postsynaptic mechanisms for induction, maintenance, and expression, an additional component of expression may occur presynaptically. One hypothesis of this presynaptic facilitation is that persistent CaMKII activity in the postsynaptic cell during E-LTP may lead to the synthesis of a "retrograde messenger", discussed later. According to this hypothesis, the newly synthesized messenger travels across the synaptic cleft from the postsynaptic to the presynaptic cell, leading to a chain of events that facilitate the presynaptic response to subsequent stimuli. Such events may include an increase in neurotransmitter vesicle number, probability of vesicle release, or both. In addition to the retrograde messenger underlying presynaptic expression in early LTP, the retrograde messenger may also play a role in the expression of late LTP.
Late phase
Late LTP (L-LTP) is the natural extension of E-LTP. Unlike E-LTP, which is independent of protein synthesis, L-LTP requires gene transcription and protein synthesis in the postsynaptic cell. Two phases of L-LTP exist: the first depends upon protein synthesis, while the second depends upon both gene transcription and protein synthesis. These phases are occasionally called LTP2 and LTP3, respectively, with E-LTP referred to as LTP1 under this nomenclature.
Induction
Late LTP is induced by changes in gene expression and protein synthesis brought about by the persistent activation of protein kinases activated during E-LTP, such as MAPK. In fact, MAPK—specifically the extracellular signal-regulated kinase (ERK) subfamily of MAPKs—may be the molecular link between E-LTP and L-LTP, since many signaling cascades involved in E-LTP, including CaMKII and PKC, can converge on ERK. Recent research has shown that the induction of L-LTP can depend on coincident molecular events, namely PKA activation and calcium influx, that converge on CRTC1 (TORC1), a potent transcriptional coactivator for cAMP response element binding protein (CREB). This requirement for a molecular coincidence accounts perfectly for the associative nature of LTP, and, presumably, for that of learning.
Maintenance
Upon activation, ERK may phosphorylate a number of cytoplasmic and nuclear molecules that ultimately result in the protein synthesis and morphological changes observed in L-LTP. These cytoplasmic and nuclear molecules may include transcription factors such as CREB. ERK-mediated changes in transcription factor activity may trigger the synthesis of proteins that underlie the maintenance of L-LTP. One such molecule may be protein kinase Mζ (PKMζ), a persistently active kinase whose synthesis increases following LTP induction. PKMζ is an atypical isoform of PKC that lacks a regulatory subunit and thus remains constitutively active. Unlike other kinases that mediate LTP, PKMζ is active not just in the first 30 minutes following LTP induction; rather, PKMζ becomes a requirement for LTP maintenance only during the late phase of LTP. PKMζ thus appears important for the persistence of memory and would be expected to be important in the maintenance of long-term memory. Indeed, administration of a PKMζ inhibitor into the hippocampus of the rat results in retrograde amnesia with intact short-term memory; PKMζ does not play a role in the establishment of short-term memory. PKMζ has recently been shown to underlie L-LTP maintenance by directing the trafficking and reorganization of proteins in the synaptic scaffolding that underlie the expression of L-LTP. Even more recently, transgenic mice lacking PKMζ demonstrate normal LTP, questioning the necessity of PKMζ.
The long-term stabilization of synaptic changes is also determined by a parallel increase of pre- and postsynaptic structures such as the axonal bouton, dendritic spine, and postsynaptic density.
On the molecular level, an increase of the postsynaptic scaffolding proteins PSD-95 and Homer1c has been shown to correlate with the stabilization of synaptic enlargement.
Expression
The identities of only a few proteins synthesized during L-LTP are known. Regardless of their identities, it is thought that they contribute to the increase in dendritic spine number, surface area, and postsynaptic sensitivity to neurotransmitter associated with L-LTP expression. The latter may be brought about in part by the enhanced synthesis of AMPA receptors during L-LTP. Late LTP is also associated with the presynaptic synthesis of synaptotagmin and an increase in synaptic vesicle number, suggesting that L-LTP induces protein synthesis not only in postsynaptic cells, but in presynaptic cells as well. As mentioned previously, for postsynaptic LTP induction to result in presynaptic protein synthesis, there must be communication from the postsynaptic to the presynaptic cell. This may occur via the synthesis of a retrograde messenger, discussed later.
Even in studies restricted to postsynaptic events, investigators have not determined the location of the protein synthesis that underlies L-LTP. Specifically, it is unclear whether protein synthesis takes place in the postsynaptic cell body or in its dendrites. Despite having observed ribosomes (the major components of the protein synthesis machinery) in dendrites as early as the 1960s, prevailing wisdom was that the cell body was the predominant site of protein synthesis in neurons. This reasoning was not seriously challenged until the 1980s, when investigators reported observing protein synthesis in dendrites whose connection to their cell body had been severed. More recently, investigators have demonstrated that this type of local protein synthesis is necessary for some types of LTP.
One reason for the popularity of the local protein synthesis hypothesis is that it provides a possible mechanism for the specificity associated with LTP. Specifically, if indeed local protein synthesis underlies L-LTP, only dendritic spines receiving LTP-inducing stimuli will undergo LTP; the potentiation will not be propagated to adjacent synapses. By contrast, global protein synthesis that occurs in the cell body requires that proteins be shipped out to every area of the cell, including synapses that have not received LTP-inducing stimuli. Whereas local protein synthesis provides a mechanism for specificity, global protein synthesis would seem to directly compromise it. However, as discussed later, the synaptic tagging hypothesis successfully reconciles global protein synthesis, synapse specificity, and associativity.
Retrograde signaling
Retrograde signaling is a hypothesis that attempts to account for the fact that, while LTP is induced and expressed postsynaptically, some evidence suggests that it is expressed presynaptically as well. The hypothesis gets its name because normal synaptic transmission is directional and proceeds from the presynaptic to the postsynaptic cell. For induction to occur postsynaptically and be partially expressed presynaptically, a message must travel from the postsynaptic cell to the presynaptic cell in a retrograde (reverse) direction. Once there, the message presumably initiates a cascade of events that leads to a presynaptic component of expression, such as the increased probability of neurotransmitter vesicle release.
Retrograde signaling is currently a contentious subject as some investigators do not believe the presynaptic cell contributes at all to the expression of LTP. Even among proponents of the hypothesis there is controversy over the identity of the messenger. Early thoughts focused on nitric oxide, while most recent evidence points to cell adhesion proteins.
Synaptic tagging
Before the local protein synthesis hypothesis gained significant support, there was general agreement that the protein synthesis underlying L-LTP occurred in the cell body. Further, it was thought that the products of this synthesis were shipped cell-wide in a nonspecific manner. It thus became necessary to explain how protein synthesis could occur in the cell body without compromising LTP's input specificity. The synaptic tagging hypothesis attempts to solve the cell's difficult problem of synthesizing proteins in the cell body but ensuring they only reach synapses that have received LTP-inducing stimuli.
The synaptic tagging hypothesis proposes that a "synaptic tag" is synthesized at synapses that have received LTP-inducing stimuli, and that this synaptic tag may serve to capture plasticity-related proteins shipped cell-wide from the cell body. Studies of LTP in the marine snail Aplysia californica have implicated synaptic tagging as a mechanism for the input-specificity of LTP. There is some evidence that given two widely separated synapses, an LTP-inducing stimulus at one synapse drives several signaling cascades (described previously) that initiates gene expression in the cell nucleus. At the same synapse (but not the unstimulated synapse), local protein synthesis creates a short-lived (less than three hours) synaptic tag. The products of gene expression are shipped globally throughout the cell, but are only captured by synapses that express the synaptic tag. Thus only the synapse receiving LTP-inducing stimuli is potentiated, demonstrating LTP's input specificity.
The synaptic tag hypothesis may also account for LTP's associativity and cooperativity. Associativity (see Properties) is observed when one synapse is excited with LTP-inducing stimulation while a separate synapse is only weakly stimulated. Whereas one might expect only the strongly stimulated synapse to undergo LTP (since weak stimulation alone is insufficient to induce LTP at either synapse), both synapses will in fact undergo LTP. While weak stimuli are unable to induce protein synthesis in the cell body, they may prompt the synthesis of a synaptic tag. Simultaneous strong stimulation of a separate pathway, capable of inducing cell body protein synthesis, then may prompt the production of plasticity-related proteins, which are shipped cell-wide. With both synapses expressing the synaptic tag, both would capture the protein products resulting in the expression of LTP in both the strongly stimulated and weakly stimulated pathways.
Cooperativity is observed when two synapses are activated by weak stimuli incapable of inducing LTP when stimulated individually. But upon simultaneous weak stimulation, both synapses undergo LTP in a cooperative fashion. Synaptic tagging does not explain how multiple weak stimuli can result in a collective stimulus sufficient to induce LTP (this is explained by the postsynaptic summation of EPSPs described previously). Rather, synaptic tagging explains the ability of weakly stimulated synapses, none of which are capable of independently generating LTP, to receive the products of protein synthesis initiated collectively. As before, this may be accomplished through the synthesis of a local synaptic tag following weak synaptic stimulation.
Modulation
As described previously, the molecules that underlie LTP can be classified as mediators or modulators. A mediator of LTP is a molecule, such as the NMDA receptor or calcium, whose presence and activity is necessary for generating LTP under nearly all conditions. By contrast, a modulator is a molecule that can alter LTP but is not essential for its generation or expression.
In addition to the signaling pathways described above, hippocampal LTP may be altered by a variety of modulators. For example, the steroid hormone estradiol may enhance LTP by driving CREB phosphorylation and subsequent dendritic spine growth. Additionally, β-adrenergic receptor agonists such as norepinephrine may alter the protein synthesis-dependent late phase of LTP. Nitric oxide synthase activity may also result in the subsequent activation of guanylyl cyclase and PKG. Similarly, activation of dopamine receptors may enhance LTP through the cAMP/PKA signaling pathway.
Relationship to behavioral memory
While the long-term potentiation of synapses in cell culture seems to provide an elegant substrate for learning and memory, the contribution of LTP to behavioral learning, that is, learning at the level of the whole organism, cannot simply be extrapolated from in vitro studies. For this reason, considerable effort has been dedicated to establishing whether LTP is a requirement for learning and memory in living animals. LTP also plays a crucial role in fear processing.
Spatial memory
In 1986, Richard Morris provided some of the first evidence that LTP was indeed required for the formation of memories in vivo. He tested the spatial memory of rats by pharmacologically modifying their hippocampus, a brain structure whose role in spatial learning is well established. Rats were trained on the Morris water maze, a spatial memory task in which rats swim in a pool of murky water until they locate the platform hidden beneath its surface. During this exercise, normal rats are expected to associate the location of the hidden platform with salient cues placed at specific positions around the circumference of the maze. After training, one group of rats had their hippocampi bathed in the NMDA receptor blocker APV, while the other group served as the control. Both groups were then subjected to the water maze spatial memory task. Rats in the control group were able to locate the platform and escape from the pool, while the performance of APV-treated rats was significantly impaired. Moreover, when slices of the hippocampus were taken from both groups, LTP was easily induced in controls, but could not be induced in the brains of APV-treated rats. This provided early evidence that the NMDA receptor — and by extension, LTP — was required for at least some types of learning and memory.
Similarly, Susumu Tonegawa demonstrated in 1996 that the CA1 area of the hippocampus is crucial to the formation of spatial memories in living mice. So-called place cells located in this region become active only when the rat is in a particular location — called a place field — in the environment. Since these place fields are distributed throughout the environment, one interpretation is that groups of place cells form maps in the hippocampus. The accuracy of these maps determines how well a rat learns about its environment and thus how well it can navigate it. Tonegawa found that by impairing the NMDA receptor, specifically by genetically removing the NR1 subunit in the CA1 region, the place fields generated were substantially less specific than those of controls. That is, mice produced faulty spatial maps when their NMDA receptors were impaired. As expected, these mice performed very poorly on spatial tasks compared to controls, further supporting the role of LTP in spatial learning.
Enhanced NMDA receptor activity in the hippocampus has also been shown to produce enhanced LTP and an overall improvement in spatial learning. In 1999, Tang et al. produced a line of mice with enhanced NMDA receptor function by overexpressing the NR2B subunit in the hippocampus. The resulting smart mice, nicknamed "Doogie mice" after the fictional prodigious doctor Doogie Howser, had larger LTP and excelled at spatial learning tasks, reinforcing LTP's importance in the formation of hippocampus-dependent memories.
Inhibitory avoidance
In 2006, Jonathan Whitlock and colleagues reported on a series of experiments that provided perhaps the strongest evidence of LTP's role in behavioral memory, arguing that to conclude that LTP underlies behavioral learning, the two processes must both mimic and occlude one another. Employing an inhibitory avoidance learning paradigm, researchers trained rats in a two-chambered apparatus with light and dark chambers, the latter being fitted with a device that delivered a foot shock to the rat upon entry. An analysis of CA1 hippocampal synapses revealed that inhibitory avoidance training induced in vivo AMPA receptor phosphorylation of the same type as that seen in LTP in vitro; that is, inhibitory avoidance training mimicked LTP. In addition, synapses potentiated during training could not be further potentiated by experimental manipulations that would have otherwise induced LTP; that is, inhibitory avoidance training occluded LTP. In a response to the article, Timothy Bliss and colleagues remarked that these and related experiments "substantially advance the case for LTP as a neural mechanism for memory."
Clinical significance
The role of LTP in disease is less clear than its role in basic mechanisms of synaptic plasticity. However, alterations in LTP may contribute to a number of neurological diseases, including depression, Parkinson's disease, epilepsy, and neuropathic pain. Impaired LTP may also have a role in Alzheimer's disease and drug addiction.
Alzheimer's disease
LTP has received much attention among those who study Alzheimer's disease (AD), a neurodegenerative disease that causes marked cognitive decline and dementia. Much of this deterioration occurs in association with degenerative changes in the hippocampus and other medial temporal lobe structures. Because of the hippocampus' well established role in LTP, some have suggested that the cognitive decline seen in individuals with AD may result from impaired LTP.
In a 2003 review of the literature, Rowan et al. proposed one model for how LTP might be affected in AD. AD appears to result, at least in part, from misprocessing of amyloid precursor protein (APP). The result of this abnormal processing is the accumulation of fragments of this protein, called amyloid β (Aβ). Aβ exists in both soluble and fibrillar forms. Misprocessing of APP results in the accumulation of soluble Aβ that, according to Rowan's hypothesis, impairs hippocampal LTP and may lead to the cognitive decline seen early in AD.
AD may also impair LTP through mechanisms distinct from Aβ. For example, one study demonstrated that the enzyme PKMζ accumulates in neurofibrillary tangles, which are a pathologic marker of AD. PKMζ is an enzyme with critical importance in the maintenance of late LTP.
Drug addiction
Research in the field of addiction medicine has also recently turned its focus to LTP, owing to the hypothesis that drug addiction represents a powerful form of learning and memory. Addiction is a complex neurobehavioral phenomenon involving various parts of the brain, such as the ventral tegmental area (VTA) and nucleus accumbens (NAc). Studies have demonstrated that VTA and NAc synapses are capable of undergoing LTP and that this LTP may be responsible for the behaviors that characterize addiction.
See also
Neuroplasticity
Actin remodeling of neurons
Transcranial direct-current stimulation
Post-tetanic potentiation
References
Further reading
External links
Researchers provide first evidence for learning mechanism, a PhysOrg.com report on 2006 study by Bear and colleagues.
Short video documentary about the Doogie mice. (RealPlayer format)
"Smart Mouse", a Quantum ABC TV episode about the Doogie mice.
Neurophysiology
Neuroscience of memory
Behavioral neuroscience
Neuroplasticity
Neuroscience | Long-term potentiation | Biology | 6,567 |
2,594,056 | https://en.wikipedia.org/wiki/Morton%20Salt | Morton Salt is an American food company producing salt for food, water conditioning, industrial, agricultural, and road/highway use. Based in Chicago, the business is North America's leading producer and marketer of salt. It is a subsidiary of holding company Stone Canyon Industries Holdings, Inc.
History
The company began in Chicago, Illinois, in 1848 as a small sales agency, Richmond & Company, started by Alonzo Richmond as an agent for Onondaga salt companies, selling their salt to the Midwest. Joy Morton started working for E. I. Wheeler in 1880, buying into the company for $10,000, with which he bought a fleet of lake boats to move salt west. In 1889, the business was renamed after Morton, its owner, and in 1910, having by that time become both a manufacturer and a merchant of salt, it was incorporated as the Morton Salt Company.
In 1896, Alfred Bevis founded the Bevis Rock Salt Company, building on the failed Lyons salt company, in which he had previously invested and which he had run. His daughter, Florence, married Dr. Charles Howard Longstreth, whom Bevis brought into both the Lyons and Bevis salt companies as an executive. Their son, Bevis Longstreth, became president and general manager on his return from service in World War I.
In 1919, Morton Salt acquired Bevis. About ten years later, Bevis Longstreth founded Thiokol Corporation. In 1969, the name "Morton-Norwich" came into use. Thiokol merged with Morton Salt in 1982 to form Morton-Thiokol. This merger was divested in 1989, following the 1986 Space Shuttle Challenger disaster, which was blamed on the failure of O-ring seals in Morton-Thiokol's solid rocket boosters. Morton received the company's consumer chemical products divisions, while Thiokol retained only the space propulsion systems concern.
Morton owns the second-largest solar saline operation in North America, which it acquired in 1954, in Matthew Town, Inagua, The Bahamas.
Around 1958, the company realized that its salt was not living up to its slogan. A chemist, Richard A. Patton, was assigned to solve this problem. He invented a machine that would coat the salt with magnesium oxide, a byproduct of salt mining. Calcium silicate is now used for the same purpose. Patton, with fellow chemists, went on to develop a total of 27 patents that expanded Morton's commercialization of magnesium oxide.
In 1999, Morton Salt was acquired by the Philadelphia-based Rohm and Haas Company, Inc. and operated as a division of that company along with the Canadian Salt Company, which Morton had acquired in 1954.
On April 2, 2009, it was reported that Morton Salt was being acquired by German fertilizer and salt company K+S for a total enterprise value of US$1.7 billion. The sale, completed by October 2009, was in conjunction with Dow Chemical Company's takeover of Rohm and Haas.
In June 2016, a wall at the Morton Salt storage facility at 1308 N. Elston Avenue in Chicago collapsed, and tons of salt and brick spilled suddenly onto several cars belonging to a neighboring car dealership. No one was injured, and an initial investigation found that the salt had been piled too high; repairs to part of the roof had also been neglected.
K+S Aktiengesellschaft sold its North and South American business units, including Morton Salt, to Stone Canyon Industries Holdings, Mark Demetree, and affiliates for $3.2 billion; the deal closed on April 30, 2021.
Current overview
The Morton Salt Company's current headquarters office is in the River Point building at 444 West Lake Street in Chicago, becoming the building's first tenant in December 2016. Its previous headquarters was at 123 North Wacker Drive. Prior to its acquisition in 1999, the firm's corporate headquarters was at 100 North Riverside Plaza (later the headquarters of Boeing) and before that at 110 North Wacker Drive and 208 West Washington Street. Morton operates a research & development laboratory in Elgin, Illinois, and produces salt at eight vacuum evaporation plants, six underground mines, five solar evaporation plants, and five packaging facilities across the United States, Canada, and The Bahamas.
Logo and advertising
Morton Salt's logo features the "Morton Salt Girl", a young girl walking in the rain with an opened umbrella and scattering salt behind her from a cylindrical container of table salt; this logo is considered to be one of the ten best-known advertising symbols in the United States. The company's logo and its motto, "When it rains, it pours", both originating in a 1914 advertising campaign, were developed to illustrate the point that Morton Salt was free flowing even in rainy weather. The company began adding magnesium carbonate as an absorbing agent to its table salt in 1911 to ensure that it poured freely.
The Morton Salt Girl, also known as the Umbrella Girl, has gone through seven different iterations, including updates in 1921, 1933, 1941, 1956, and 1968, and a 'refresh' on the 100th anniversary of its creation. The company sells associated memorabilia and makes some of its vintage advertisements freely available. In 2005, the Morton Salt Girl was shown in MasterCard's "Icons" commercial during Super Bowl XXXIX, which depicts several advertising mascots having dinner together. The logo has its centennial in 2014, which was celebrated with 100 parties in 100 cities, Morton Salt Girl Centennial Scholarships to benefit certain fine arts and culinary arts students at the School of the Art Institute of Chicago and the Kendall College School of Culinary Arts, Morton Salt Girl day at Wrigley Field, Facebook and Instagram lookalike contests, and other activities. Also in 2014, the Morton Salt Girl was voted into the Advertising Week Walk of Fame on Madison Avenue in New York City; it is the first icon featuring a woman to be inducted.
Morton Arboretum
Morton Salt is the sponsor of the Morton Arboretum, a botanical garden in Lisle, Illinois. It was established by Joy Morton, the company's founder, in 1922 to encourage the display and study of shrubs, trees, and vines. About 300,000 visitors a year hike on miles of trails, and over 3,600 kinds of plants are displayed.
In popular culture
In the 1989 Cheers episode "Feeble Attraction", Norm was planning to finally fire Doris, whom he had hired as a secretary for his failing painting company. She came into Cheers with a raincoat and umbrella, leading Frasier to comment, "You're going to fire the Morton Salt Girl".
In the 2011 episode "The Fight" of the television series Parks and Recreation, Morton Salt is one of three products publicly endorsed by the character Ron Swanson (Nick Offerman).
The Timbers Army used the Morton Salt Girl in a large tifo display and T-shirts during the kickoff match to the 2013 Major League Soccer season between the Portland Timbers and the New York Red Bulls.
As part of their "Walk Her Walk" campaign, Morton Salt funded the development of the music video "The One Moment" by the band OK Go, released on November 23, 2016.
See also
History of salt
Iodized salt
Sodium chloride
Footnotes
References
External links
American companies established in 1848
Brand name condiments
Food manufacturers of the United States
Manufacturing companies based in Chicago
Salt production | Morton Salt | Chemistry | 1,493 |
12,102,450 | https://en.wikipedia.org/wiki/Raspberry%20ketone | Raspberry ketone is a naturally occurring phenolic compound that is the primary aroma compound of red raspberries.
Occurrence
Raspberry ketone occurs in a variety of fruits, including raspberries, cranberries, and blackberries. It is detected and released by orchid flowers, e.g. Dendrobium superbum (syn. D. anosmum) and several Bulbophyllum species, to attract raspberry ketone-responsive male Dacini fruit flies. It is biosynthesized from coumaroyl-CoA. It can be extracted from the fruit, yielding about 1–4 mg per kg of raspberries.
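To put that extraction yield in perspective, a little arithmetic on the 1–4 mg/kg figure quoted above (a minimal sketch; nothing beyond those numbers is assumed) shows why the natural product commands the prices mentioned later in this article:

```c
#include <stdio.h>

int main(void) {
    const double mg_per_kg = 1.0e6;  /* milligrams in one kilogram   */
    const double yield_lo  = 1.0;    /* mg of ketone per kg of fruit */
    const double yield_hi  = 4.0;

    /* Tonnes of raspberries needed to extract 1 kg of raspberry ketone. */
    double fruit_min_t = mg_per_kg / yield_hi / 1000.0;
    double fruit_max_t = mg_per_kg / yield_lo / 1000.0;

    printf("Fruit per kg of natural ketone: %.0f-%.0f tonnes\n",
           fruit_min_t, fruit_max_t);   /* prints: 250-1000 tonnes */
    return 0;
}
```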
Preparation
Since the natural abundance of raspberry ketone is very low, it is prepared industrially by a variety of methods from chemical intermediates. One route is a Claisen–Schmidt condensation followed by catalytic hydrogenation. First, acetone is condensed with 4-hydroxybenzaldehyde to form an α,β-unsaturated ketone. Then the alkene is reduced to the alkane. This two-step method produces raspberry ketone in 99% yield. Nickel boride is a less expensive hydrogenation catalyst that also demonstrates high selectivity towards hydrogenation of the double bond of the enone.
Uses
Raspberry ketone is sometimes used in perfumery, in cosmetics, and as a food additive to impart a fruity odor. It is one of the most expensive natural flavor components used in the food industry. The natural compound can cost as much as $20,000 per kg.
Marketing
Although products containing this compound are marketed for weight loss, there is no clinical evidence for this effect in humans. They are called "ketones" because of the ketone group (as in acetone) in the molecule, a functional group shared with ketone bodies.
Safety
Little is known about the long-term safety of raspberry ketone supplements, especially since little research has been done with humans. Toxicological models indicate a potential for cardiotoxic effects, as well as effects on reproduction and development. Furthermore, in many dietary supplements containing raspberry ketones, manufacturers add other ingredients such as caffeine which may have unsafe effects.
In 1965, the US Food and Drug Administration classified raspberry ketone as generally recognized as safe (GRAS) for the small quantities used to flavor foods.
See also
Raspberry ellagitannin
References
Flavors
Food additives
Ketones
Perfume ingredients
Ketone
4-Hydroxyphenyl compounds | Raspberry ketone | Chemistry | 533 |
33,760,918 | https://en.wikipedia.org/wiki/Laurie%20Leshin | Laurie Leshin is an American scientist and academic administrator serving as the 10th Director of the NASA Jet Propulsion Laboratory and as Vice President and Bren Professor of Geochemistry and Planetary Science at California Institute of Technology. Leshin's research has focused on geochemistry and space science. Leshin previously served as the 16th president of Worcester Polytechnic Institute.
Education
Leshin earned her Bachelor of Science degree in chemistry from Arizona State University and her Master of Science (1989) and PhD (1994) in geochemistry from the California Institute of Technology.
Career
From 1994 to 1996, Leshin was a University of California President's Postdoctoral Fellow in the department of Earth and space sciences at the University of California, Los Angeles (UCLA). From 1996 to 1998, Leshin was the W. W. Rubey Faculty Fellow in the department of Earth and space sciences at UCLA.
From 1998 to 2001, Leshin was an assistant professor at Arizona State University (ASU). In 2001, she became the Dee and John Whiteman Dean’s Distinguished Professor of Geological Sciences at ASU. In 2003, she became the director of ASU's Center for Meteorite Studies, which houses the largest university-based meteorite collection in the world. She directed research, education, and curation activities. At ASU, she also spearheaded the formulation of a new school of Earth and space exploration, combining Earth, planetary and astrophysical sciences with systems engineering in a nationally unique interdisciplinary academic unit.
In 2004, Leshin served on President Bush's Commission on Implementation of United States Space Exploration Policy, a nine-member commission charged with advising the President on the execution of his new Vision for Space Exploration.
From 2005 to 2007, Leshin was the director of Sciences and Exploration Directorate at NASA's Goddard Space Flight Center, where she oversaw science activities. From 2008 to 2009, she was the deputy center director for science and technology at Goddard. In this position, she oversaw strategy development at the center, leading an inclusive process to formulate future science and technology goals, and an integrated program of investments aligned to meet those goals. With other NASA Goddard senior managers, she was responsible for effectively executing the center's $3 billion in programs, and ensuring the scientific integrity of Earth observing missions, space-based telescopes such as the James Webb Space Telescope, and instruments exploring the Sun, Moon, Mercury, Mars, Saturn, comets and more. Starting in 2010, Leshin served as the deputy associate administrator for NASA’s Exploration Systems Mission Directorate, where she played a leading role in NASA's future human spaceflight endeavors. Her duties included oversight of the planning and execution of the next generation of human exploration systems, as well as the research, robotic and future capabilities development activities that support them. She was also engaged in initiating the development of commercial human spaceflight capabilities to low Earth orbit.
From 2011 to 2014, Leshin served as dean of the school of science and professor of Earth and environmental science at Rensselaer Polytechnic Institute, where she led the scientific academic and research enterprise. Leshin's scientific expertise is in cosmochemistry, and she is primarily interested in deciphering the record of water on objects in the Solar System.
In February 2013, President Obama appointed her to the advisory board of the Smithsonian National Air and Space Museum, and she was appointed by then Secretary of Transportation Ray LaHood to the advisory board of the United States Merchant Marine Academy later during the same year. She serves on the United States National Research Council's Committee on Astrobiology and Planetary Science, and as chair of the advisory board for the Thriving Earth Exchange of the American Geophysical Union.
In 2014, Leshin became the 16th president of Worcester Polytechnic Institute.
She has published approximately 50 scientific papers.
In January 2022, Leshin was simultaneously appointed as the director of NASA's Jet Propulsion Laboratory and a vice president and Bren Professor at the California Institute of Technology. WPI announced in February 2022 that Provost and Senior Vice President Wole Soboyejo would serve as interim WPI president upon Leshin's departure in May 2022 while the university searched for a new president.
Awards
Leshin received the NASA Distinguished Public Service Medal in 2004 for her work on the Presidential Commission and the NASA Outstanding Leadership Medal in 2011 for her work at NASA. In 1996, she was the inaugural recipient of the Meteoritical Society's Nier Prize, awarded for outstanding research in meteoritics or planetary science by a scientist under the age of 35. The International Astronomical Union recognized her contributions to planetary science with the naming of asteroid 4922 Leshin.
References
External links
WPI Bio by Worcester Polytechnic Institute
Leshin's Profile by the NASA Exploration Systems Mission Directorate
Living people
Rensselaer Polytechnic Institute faculty
Date of birth missing (living people)
American geochemists
Arizona State University alumni
California Institute of Technology alumni
Presidents of Worcester Polytechnic Institute
American planetary scientists
Year of birth missing (living people)
Directors of the Jet Propulsion Laboratory | Laurie Leshin | Chemistry | 1,009 |
13,032,940 | https://en.wikipedia.org/wiki/Satellite%20truck | A satellite truck, or SNG truck, is a mobile communications satellite ground station mounted on a truck chassis as a platform. Employed in remote television broadcasts, satellite trucks transmit video signals back to studios or production facilities for editing and broadcasting. Satellite trucks usually travel with a production truck, which contains video cameras, sound equipment and a crew. A satellite truck has a large satellite dish antenna which is pointed at a communications satellite, which then relays the signal back down to the studio. Satellite communication allows transmission from any location that the production truck can reach, provided a line of sight to the desired satellite is available.
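Pointing the dish is a geometry problem: for a geostationary satellite, the antenna elevation follows from the station's latitude and its longitude offset from the satellite. The sketch below applies the standard geostationary look-angle formula; the example coordinates are hypothetical placeholders and do not come from this article:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI  = 3.14159265358979323846;
    const double D2R = PI / 180.0;

    /* Hypothetical example: a station near Chicago aiming at a satellite
       parked at 97 degrees West; neither value comes from this article. */
    double lat  = 41.9 * D2R;                 /* station latitude            */
    double dlon = (-97.0 - (-87.6)) * D2R;    /* satellite lon - station lon */

    /* Central angle between the station and the sub-satellite point. */
    double cosg = cos(lat) * cos(dlon);

    /* k = Earth radius / geostationary orbit radius (6378 km / 42164 km). */
    const double k = 6378.0 / 42164.0;

    /* Standard look-angle formula for the antenna elevation. */
    double elev = atan2(cosg - k, sin(acos(cosg))) / D2R;

    printf("Dish elevation: %.1f degrees\n", elev);  /* about 41 degrees */
    return 0;
}
```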
Satellite trucks are increasingly being used for data (ISP) services. These remote ISP services are used for disaster recovery and internet connectivity in areas underserved by mobile providers.
Equipment
Typically, a satellite truck will have its own onboard power source, such as an electrical generator or inverter, to create the alternating current that powers all the transmission systems, which makes it an independent mobile satellite transmission entity. Often, such trucks will also have various degrees of video production equipment and video editing gear. This equipment allows these trucks to also act as mobile electronic news gathering (ENG) facilities, or they can even be outfitted to do electronic field production (EFP), allowing them to create an entire television show with multiple switched professional video cameras, character generators (CG) for digital on-screen graphics, video tape recorders (VTR) and video servers. The truck also has a satellite receiving dish (TVRO, TV receive-only) to monitor its own transmission as received from the satellite, or other feeds.
Most satellite trucks have been typically built on a light or mid-duty truck chassis with 6 wheels; usually with 4 tires on the rear axle. All equipment is mounted into the truck in racks that are fabricated into the box. Satellite trucks are generally referred to as 'fixed load' vehicles, meaning that the amount of equipment on-board generally does not change and the weight of the truck (other than fuel) ordinarily does not fluctuate.
Regulations
United States
Some larger satellite trucks weigh over 26,000 pounds, and therefore require the driver to obtain a Commercial Driver's License (CDL). Satellite trucks over that GVWR are required to stop at weigh stations and undergo annual DOT inspections, and the truck driver (who usually also operates the uplink) needs to pass a physical examination mandated by the DOT, maintain an accurate driver's daily logbook, and comply with Hours of Service rules for professional drivers. Satellite trucks that are part of a commercial fleet, or that weigh over 10,000 pounds, are considered commercial vehicles by the United States Department of Transportation (DOT).
Uses
A typical use for a satellite truck is satellite news gathering (SNG), which today in digital form is called DSNG.
Some newer generation satellite trucks are also being used for crisis communications, along with command and control centers for law enforcement, homeland security, emergency managers, and public utility companies.
The fact that these trucks do not rely upon terrestrial (land-based signals received through a conventional aerial) communication systems makes them ideal for information distribution and bandwidth creation in the aftermath of severe tropical cyclones, floods, and earthquakes, when these land-based systems are damaged or destroyed. In the wake of Hurricane Katrina, when the communication ability of news media outlets far exceeded that of many federal and state relief agencies, many governmental bodies migrated to mobile satellite-based communication platforms.
C-Band satellite truck
C-Band Transportable uplinks ("Transportable Earth Station" (TES)) were initially used to transmit longer-format live television like sports television events and entertainment television programming. C-band satellite transmission requires a larger antenna than the Ku band trucks developed later in the 1980s, and a larger satellite antenna takes longer to set up and deploy.
Prior to dispatch of a C Band transportable uplink, an RF Interference study (RFI) needs to be completed. An RFI is a computer-generated report detailing any FCC protected microwave stations in the immediate area. This "frequency coordination" process has to be completed before an uplink transmission can commence. Terrestrial point-to-point signals share C-Band transmit frequencies (5.700-6.500 GHz), and full-time terrestrial signals take priority over ad hoc (temporary) C-Band uplink transmissions. Factors such as terrain, buildings and other structures are considered when determining the likelihood of interference from the TES.
Historically, it was necessary to install land telephone lines (also called hard or wired lines) where the TES was located. This was expensive and difficult to do at the time, since telephone companies were not used to setting up phone lines without notice of several days or even weeks. Early scrambling or encryption methods required a hard line for authorization of receive sites. Today, a digital cellular telephone is sufficient for most situations.
C-Band transportable service remains a prevalent source of long-haul transmission because of its immunity to the "rain fade" that Ku band experiences in significant rainstorms. C-Band transportable services cost more than similar Ku service due to the robust nature of the signal, the larger physical size of the truck, and specialized nature of C-Band transmissions.
With the advent of Ku band trucks (which do not require frequency coordination) and long-haul fiber-optics providing similar signal quality, C-Band transportable service experienced a slowdown in service volume in the 1990s. It is still used in situations where rain fades (a problem affecting only Ku band uplinks) are unacceptable and where fiber-optic links are not practical. C-Band uplinks are still commonly used for golf, auto racing, horse racing, and major college sports events in rural areas where local fiber interconnects to long-haul networks are either not available, or where the low number of events at the venue per year makes installation of fiber not cost effective. Ku-band TESs outnumber C-Band TESs by around 30:1, considering the number of TV station, network, and "freelance" Ku trucks versus the limited number of C-Band trucks.
Even with diminished usage, C-Band transportable services are still utilized as an 'alternate' transmission path alongside fiber-optic cross-country transport. Most broadcast networks utilize both in order to protect their remote broadcasts, which may be worth millions in rightsholder fees.
In the 2000s, high-definition television (HDTV) remote broadcasts caused a resurgence in C-Band transportable uplink services. The major factor in this resurgence was the limited amount of available bandwidth in local and long-haul fiber-optic service; uplink systems merely required the installation of high-definition MPEG digital encoders and decoders at either end.
Ku band satellite truck
Mobile Ku band satellite transmission for television broadcasts started in Canada; then Conus Communications of St. Paul, Minnesota, along with Hubcom in Florida, built the first satellite news gathering (SNG) truck in 1983. Along with the truck, and used vans later purchased from Telesat in Canada, Conus developed a communications system which allowed satellite transmissions without the need to drop phone lines. Because of this, it became possible to go 'live' from anywhere the truck could drive, changing the landscape of electronic news-gathering.
The development of the mobile phone, and its decreasing cost of operation and hardware over the years, means trucks do not need a satellite "comms" system in most places. Satellite time could also be booked easily on an 'as-needed' basis, typically around $500 per hour for a common Ku band TV transmission.
Over the years, Ku band satellite trucks have undergone changes, from large trucks with C-Band dishes outfitted with landing pads and antenna wings to make them FCC compliant, to simpler, rapidly deployable Ku band types. Ku band uplink vehicles are available in a series of small to large vehicles, varying from an SUV, van, Sprinter, or "bread truck" (cutaway), to the more common carryall (2-axle/6-tire truck). Typical Ku uplink vehicles are as large as 13 feet 6 inches tall by 40 feet long, the largest (non-tractor-trailer type) commercial units allowed on the roads.
Satellite vehicles are either TV station or network-owned. They can be custom suited to their internal usage needs, or are rental units owned by independent companies. Independently owned satellite uplink vehicles are often designed to be versatile, performing multiple uplink functions ranging from straight uplink/downlink services, network news, satellite media tours, or even being configured to becoming a full production vehicle.
Such large uplink trucks now have multiple-camera television production capabilities all on board, as pioneered by Satellite Digital Teleproductions (SDTV) in the early 1990s. This combination of uplink and production in a single Transportable Earth Station (TES) is now the preferred vehicle for smaller (one to eight cameras) on-location live television broadcasts, instead of a separate uplink vehicle working alongside a larger 50-foot tractor-trailer production-only vehicle, although the latter is still a regular occurrence.
There are a few combination production/uplink vehicles where the uplink system is located on the semi-tractor and the production facilities are in the semi-trailer. These systems add the ability to physically separate the uplink from the production unit. Typical scenarios for this are when the production trailer has to park inside a building, or if the uplink antenna has to be positioned farther away from the production trailer in order to make line-of-sight to the satellite arc.
Larger satellite vehicles are often television production control rooms (PCR), mobile Newsrooms, and/or workspaces on wheels, operated and maintained by broadcast engineers known as satellite truck operators. Operators of these units are known to have a vagabond lifestyle, spending large parts of their lives on the road.
Currently, even a simple flyaway transportable unit can be packed into two suitcases, small enough to be airline compliant. Smaller suitcase flyaway units are often used to supplement a television control room built on location, or to provide satellite uplink facilities in locations where a truck cannot be easily transported.
Satellite truck operation and maintenance
Full-time satellite truck operators can earn from US$35,000 to over $100,000 per year depending on the number of hours worked, years of experience in the field, and the area of the US typically served (positions in major metropolitan areas often compensate more). Some companies keep databases of part-time or freelance satellite truck operators.
The National Association of Broadcasters (NAB) occasionally offers courses on the operation of satellite trucks; however, most operators have learned their trade from an industry mentor, or from a combination of formal schooling and informal on-the-job training.
While helpful, formal training in electronics is not required to be a satellite truck operator; even camera operators have made the transition from photography to transmission. A clear understanding of the operation of each device on the truck, and of the point in the transmission flow at which it is used, is required. Most modern electronic equipment is too complicated to repair, especially in the field. Truck operators, however, are expected to be able to quickly identify a defective device and either replace it or engineer a way around it. For this reason a strong understanding of the transmission flow is essential.
Having a background in auto mechanics can also help, especially considering that many trucks' main power source is a diesel generator. At the absolute least, an operator should know how to change oil, fuel, or air filters and troubleshoot common engine problems (e.g. burning oil, fuel pump failure, starter/alternator issues).
Like other vehicles, trucks need regular maintenance and upkeep. Older trucks are more difficult to maintain because of increased vehicle wear, availability of parts (for discontinued nameplates), and availability of qualified service personnel fluent in maintenance issues of older vehicles. The expected lifespan for most truck chassis is roughly 8–10 years or 200,000 miles, dependent on its operating environment. It is common for satellite truck boxes to be swapped over to a newer chassis.
Driving the truck to and from event locations is a large, often overlooked, part of the job. Satellite truck operators are often not as interchangeable as reporters, producers, or camera crews, and as a result can be worked full news cycles (e.g. morning to night). When this happens, the DOT Hours of Service rules may prohibit the operator from driving the truck. This often proves complicated for planning and logistics purposes.
By the very nature of the work, a truck operator is expected to travel, often at a moment's notice. Most uplink-for-hire operators keep a packed suitcase with at least 7 days of clothing in or near the truck for prompt deployment.
See also
Electronic news-gathering
Production truck
Outside broadcasting
References
Ground stations
Television technology
Trucks | Satellite truck | Technology | 2,643 |
31,790,833 | https://en.wikipedia.org/wiki/Drobe | Drobe (also referred to as Drobe Launchpad) was a computing news web site with a focus on the RISC OS operating system. Its archived material was retained online, curated by editor Chris Williams, until late 2020.
History
Drobe was founded in 1999 by Peter Price. In 2001, Price handed the site over to Chris Williams as editor. It closed as a news site in 2009 and was retained as a historical archive until 2020, when the site went offline. A few weeks after the site's closure, Williams posted articles on Micro Men, the television drama about the rivalry between Acorn and Sinclair in the 1980s. He subsequently stated that such articles may continue to appear periodically.
Main features
At launch, the site featured a news feed, a POP email checker and a search facility "incorporating AcornSearch.com". In its archived form, the site featured articles, news and other media. It also hosted an online emulator for the BBC Micro, using the Java Runtime Environment.
Registered users were able to apply for user webspace in order to host their own projects. These subsites continued to be hosted by Drobe.
See also
The Icon Bar
References
Subsites
Drobe
BBC Micro emulator
File archives (mirrors of popular FTP sites, etc.)
Reference material (various RISC OS/Acorn hardware)
Selected user sites
(ROLF (software), the RISC OS look and feel on Linux)
External links
Diodesign (Editor/curator Chris Williams's website)
Drobe article statistics
(subdomain hosting, adopted by Drobe)
Computing websites
History of computing
Free-content websites
British news websites
RISC OS | Drobe | Technology | 328 |
68,329,220 | https://en.wikipedia.org/wiki/Gaping%20%28animal%20behavior%29 | Gaping is a common form of behavior in the animal kingdom, in which an animal opens its mouth widely and displays the interior of its mouth, for any of various purposes. This may be a form of deimatic behaviour, colloquially known as a startle display or threat display, as it enlarges the appearance of the animal, and for those with teeth it shows the threat that these represent. Animals may also use gaping as part of a courtship display, or to otherwise communicate with each other. Some animals have evolved features which make gaping behavior more visually effective. For example, "[i]n many species of reptile, the oral mucosa may be a bright color that serves to distract the predator". Gaping is part of the shark agonistic display, and is also found in snakes such as the cottonmouth, and in birds ranging from seagulls to puffins to roosters.
A number of species of bird use a gaping, open beak in their fear and threat displays. Some augment the display by hissing or breathing heavily, while others clap their beaks. In birds, the muscles that depress the lower mandible are usually weak, but certain birds have well-developed digastric muscles that aid in gaping actions. In most birds, these muscles are relatively small as compared to the jaw muscles of similarly sized mammals. Both male and female puffins use gaping as a prominent part of their threat display, with "a range of intensities" based on the situation, and with puffins engaging in territorial gape contests, where they mirror each other until one gives up and leaves, or an actual fight occurs.
Some animals are named for their tendency to use gaping as a threat display, or for the features that become apparent when making such a display. For example, the cottonmouth is so named because the white lining of its mouth is visible when gaping. Other snakes, such as the Western Massasauga, have been observed to engage in gaping behavior which "appears to be unrelated to any threat".
Gallery of images
References
Ethology
Antipredator adaptations | Gaping (animal behavior) | Biology | 426 |
591,288 | https://en.wikipedia.org/wiki/Father%20Time%20%28Lord%27s%29 | Father Time is a weathervane at Lord's Cricket Ground, London, in the shape of Father Time removing the bails from a wicket. The full weathervane is tall, with the figure of Father Time standing at . It was given to Lord's in 1926 by the architect of the Grandstand, Sir Herbert Baker. The symbolism of the figure derives from Law 12(3) of the Laws of Cricket: "After the call of Time, the bails shall be removed from both wickets." The weathervane is frequently referred to as Old Father Time in television and radio broadcasts, but "Old" is not part of its official title.
Father Time was originally located atop the old Grand Stand. It was wrenched from its position during the Blitz, when it became entangled in the steel cable of a barrage balloon, but was repaired and returned to its previous place. In 1992 it was struck by lightning, and the subsequent repairs were featured on the children's television programme Blue Peter. Father Time was permanently relocated to a structure adjacent to the Mound Stand in 1996, when the Grand Stand was demolished and rebuilt. It was again damaged in March 2015 by the high winds of Cyclone Niklas, which necessitated extensive repair by specialists.
In 1969 Father Time became the subject of a poem, "Lord's Test", by the Sussex and England cricketer John Snow.
Notes
External links
Cricket in London
Meteorological instrumentation and equipment
Lord's
Herbert Baker buildings and structures | Father Time (Lord's) | Technology,Engineering | 298 |
3,288,827 | https://en.wikipedia.org/wiki/Future%20Evolution | Future Evolution is a book written by paleontologist Peter Ward and illustrated by Alexis Rockman. He addresses his own opinion of future evolution and compares it with Dougal Dixon's After Man: A Zoology of the Future and H. G. Wells's The Time Machine.
According to Ward, humanity may exist for a long time. Nevertheless, we are impacting our planet. He splits his book into different chronologies, starting with the near future (the next 1,000 years). Humanity would be struggling to support a massive population of 11 billion. Global warming raises sea levels. The ozone layer weakens. Most of the available land is devoted to agriculture due to the demand for food. Despite all this, the oceanic wildlife remains largely untouched by most of these impacts, particularly the commercially farmed fish. This is, according to Ward, an era of extinction that would last about 10 million years (note that many human-caused extinctions have already occurred). After that, Earth gets stranger.
Ward labels the species that have the potential to survive in a human-infested world. These include dandelions, raccoons, owls, pigs, cattle, rats, snakes, and crows to name but a few. In the human-infested ecosystem, those preadapted to live amongst man survived and prospered. Ward describes garbage dumps 10 million years in the future infested with multiple species of rats, a snake with a sticky frog-like tongue to snap up rodents, and pigs with snouts specialized for rooting through garbage. The story's time traveller who views this new refuse-covered habitat is gruesomely attacked by ravenous flesh-eating crows.
Ward then questions the potential for humanity to evolve into a new species. According to him, this is incredibly unlikely: for it to happen, a human population would have to isolate itself and interbreed until it became a new species. He then asks whether humanity will survive or extinguish itself through climate change, nuclear war, disease, or the threat posed by nanotechnology used as a terrorist weapon. Ward ultimately concludes that humanity may last for hundreds of millions of years, overcoming every obstacle.
In the final chapter, Ward looks at how life on Earth will fare in the very distant future (500 million years in the future), where an ever-brightening Sun combined with decreasing atmospheric carbon dioxide levels makes the Earth too hot for complex life, resulting in the final devolution and eventual extinction of all life on Earth. Ward describes a small beach, with cactus-like plants growing and waist-high armoured creatures resembling armadillos. The oceans have become hot and salty, and most marine life has gone extinct. Ward predicts that humans, if any exist at that time, will have to live underground and become the new ants of the Earth, much like the Morlocks from H. G. Wells' novel The Time Machine, knowing that, like the remaining plants and animals, they too will eventually become extinct.
See also
The Future Is Wild
Human extinction
References
2001 non-fiction books
Books about evolution
Evolution in popular culture
Speculative evolution
Holt, Rinehart and Winston books
Futurology books | Future Evolution | Biology | 651 |
71,466,306 | https://en.wikipedia.org/wiki/42%20Leonis%20Minoris | 42 Leonis Minoris (42 LMi) is a solitary, bluish-white hued star located in the northern constellation Leo Minor. It has a visual apparent magnitude of 5.35, allowing it to be faintly seen with the naked eye. Parallax measurements place it at a distance of 412 light years. The object has a positive heliocentric radial velocity, indicating that it is drifting away from the Solar System.
42 LMi has a general stellar classification of B9 V, indicating that it is an ordinary B-type main-sequence star. However, Cowley et al. (1969) gave a slightly cooler class of A1 Vn, indicating that it is instead an A-type main-sequence star with 'nebulous' (broad) absorption lines due to rapid rotation. Nevertheless, it has 2.77 times the mass of the Sun and a slightly enlarged radius. It radiates 107 times the luminosity of the Sun from its photosphere. Its high luminosity and slightly enlarged diameter suggest that the object might be evolved. Like most hot stars, 42 LMi spins rapidly, with a high projected rotational velocity.
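The quoted luminosity is tied to radius and temperature by the Stefan–Boltzmann law, $L = 4\pi R^2 \sigma T_{\mathrm{eff}}^4$. The sketch below evaluates the solar-ratio form of the law; the radius and temperature are illustrative placeholders for a late-B dwarf (the article's own measured values are not reproduced here), so the printed figure differs from the 107 solar luminosities quoted above:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Solar-ratio form: L/Lsun = (R/Rsun)^2 * (Teff/Tsun)^4 */
    const double t_sun = 5772.0;   /* solar effective temperature, K */

    /* Illustrative placeholder values, not 42 LMi's measured ones. */
    double r_ratio = 2.5;          /* radius in solar radii    */
    double t_eff   = 10500.0;      /* effective temperature, K */

    double lum = pow(r_ratio, 2.0) * pow(t_eff / t_sun, 4.0);
    printf("L = %.0f Lsun\n", lum);   /* about 68 Lsun for these inputs */
    return 0;
}
```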
There are two optical companions located near this star. BD+31°2181 is a 7th magnitude K2 giant star lying a short angular distance away. An 8th magnitude companion has been detected somewhat further out. Neither has any physical relation to 42 LMi; they merely appear close to it along the line of sight by coincidence.
X-ray emission has been detected coming from the object's position. A-type stars are not expected to emit X-rays, so the emission is likely to come from an unseen companion.
References
B-type main-sequence stars
Leo Minor
Leonis Minoris, 42
093152
52638
4203
BD+31 02180 | 42 Leonis Minoris | Astronomy | 384 |
49,065,762 | https://en.wikipedia.org/wiki/C5H4O | The molecular formula C5H4O (molar mass: 80.08 g/mol, exact mass: 80.0262 u) may refer to:
Cyclopentadienone | C5H4O | Chemistry | 56 |
12,661,325 | https://en.wikipedia.org/wiki/Regeneration%20%28sustainability%29 | Regeneration refers to rethinking and reinventing business models, supply chains, and lifestyles to sustain and improve the earth's natural environment and avoid the depletion of natural resources. Regeneration includes widespread environmental practices such as reusing, recycling, restoring, and the use of renewable resources.
History
The modern environmental movement gained traction in the early 1970s following the United Nations Conference on the Human Environment, the first time multiple nations joined together to discuss the state of the world's environment.
The concept of a generation that includes people of all ages who share a common interest in the environment was first introduced by Dell Chairman and CEO Michael Dell on World Environment Day 2007. Many of the original theories of change came from writers, thinkers, and designers such as Wendell Berry, Buckminster Fuller, David Orr and Frank Lloyd Wright. These individuals saw a shift happening in humanity toward a rekindled connection with nature, and they inspired monumental changes in our approaches and perspectives on topics such as building community, our relationship with agriculture and architecture, and the disconnect between modern economics and a finite planet.
Thought leaders like Paul Hawken, Kate Raworth, Naomi Klein, David Suzuki, and Bill McKibben have modernized the discourse and given the environmental movement a new set of tools in the form of conscious capitalism and positive climate communication.
References
See also
Ecological design
Sustainable design
Environmental design
Biomimicry
Permaculture
External links
Sustainable Design Guide Loughborough University, November 2019
Environmental movements
Environmental design
Environmental social science
Environmentalism
Environmental terminology
Sustainable design | Regeneration (sustainability) | Engineering,Environmental_science | 310 |
15,504,134 | https://en.wikipedia.org/wiki/Nalva | Nalva (from Sanskrit) is a measure of distance equal to 400 hastas (cubits), that is, 9,600 aṅgula, which is believed to be approximately 180 metres.
It is used in the Mahābhārata.
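The quoted equivalences fix the implied sizes of the smaller units; a quick check (plain arithmetic on the figures above):

```c
#include <stdio.h>

int main(void) {
    const double nalva_m = 180.0;   /* approximate metres per nalva */
    const double hastas  = 400.0;   /* hastas (cubits) per nalva    */
    const double angulas = 9600.0;  /* angulas per nalva            */

    printf("angulas per hasta: %.0f\n", angulas / hastas);           /* 24    */
    printf("metres per hasta:  %.3f\n", nalva_m / hastas);           /* 0.450 */
    printf("cm per angula:     %.3f\n", 100.0 * nalva_m / angulas);  /* 1.875 */
    return 0;
}
```

The implied cubit of about 45 cm is consistent with cubit lengths used elsewhere.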
Notes
Units of length
Obsolete units of measurement
History of science and technology in India
Mahabharata | Nalva | Mathematics | 67 |
4,205,059 | https://en.wikipedia.org/wiki/EuropaBio | EuropaBio ("The European Association for Bioindustries") is Europe's largest and most influential biotech industry group, whose members include leading large-size healthcare and industrial biotechnology companies. EuropaBio is located in Brussels, Belgium. The organisation was initiated in 1996 to represent the interests of the biotechnology industry at the European level, and therefore influence legislation that serves the interests of biotechnology companies in Europe.
Activity and goals
EuropaBio is engaged in dialogue with the European Parliament, the European Commission, and the Council of Ministers to influence legislation on biotechnology.
EuropaBio represents two sectors of the biotech industry.
White or industrial biotechnology is the application of biotechnology for industrial purposes, including manufacturing, alternative energy (or "bioenergy") biofuels, and biomaterials.
Red or healthcare biotechnology is the application of biotechnology for the production of medicines and therapies.
EuropaBio's stated goals are:
promoting an innovative, coherent, and dynamic biotechnology-based industry in Europe;
advocating free and open markets and the removal of barriers to competitiveness with other areas of the world;
committing to an open, transparent, and informed dialogue with all stakeholders about the ethical, social, and economic aspects of biotechnology and its benefits;
championing the socially responsible use of biotechnology to ensure that its potential is fully used to the benefit of humans and their environment.
EuropaBio's primary focus is the European Union but because of the global character of the biotech business, it also represents its members in transatlantic and worldwide forums.
Organisation
EuropaBio has a board of management made up of representatives from among its industry members. Since 2023, Dr. Sarah Reisinger, representing dsm-firmenich, has been chair of the board.
The board is assisted by sectoral councils representing the main segments of EuropaBio – healthcare (red biotech), and industrial (white biotech).
Additionally, National Associations are represented through the National Associations Council.
Experts from member companies and national associations participate in EuropaBio's working groups which cover a very wide range of issues and areas of concern of biotech enterprises.
Since November 2020, EuropaBio's Director General has been Dr. Claire Skentelbery.
Members
In 2021, the association represented 79 corporate and associate members and BioRegions, and 17 national biotechnology associations, in turn representing over 1,800 biotech SMEs.
See also
CropLife International
European Federation of Biotechnology (EFB)
European Federation of Pharmaceutical Industries and Associations (EFPIA)
Genetically modified food controversies
Regulation of the release of genetic modified organisms
Citations
References
Transforming Europe’s position on GM food - ambassadors programme executive summary The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
Biotech group bids to recruit high-profile GM 'ambassadors' John Vidal and Hanna Gersmann, The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
Draft letter from EuropaBio to potential GM ambassadors The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
External links
EuropaBio
Biotechnology in the EU
Biotech Informa
BIO
GMO Compass
Lobbying organizations in Europe
Pan-European biotechnology organisations
Organizations established in 1996
Organisations based in Brussels
1996 establishments in Belgium | EuropaBio | Engineering,Biology | 633 |
62,642,855 | https://en.wikipedia.org/wiki/Clifford%20Paterson%20Medal%20and%20Prize | The Clifford Paterson Medal and Prize is awarded by the Institute of Physics. It was established in 1981 and named after Clifford Copland Paterson. The prize is awarded each year for exceptional early career contributions to the application of physics in an industrial or commercial context. The medal is bronze and is accompanied by a prize of £1000 and a certificate.
Recipients
List of medallists:
See also
Institute of Physics Awards
List of physics awards
List of awards named after people
References
Awards established in 1981
Awards of the Institute of Physics
Physics awards | Clifford Paterson Medal and Prize | Technology | 104 |
11,626,077 | https://en.wikipedia.org/wiki/Integrated%20Performance%20Primitives | Intel Integrated Performance Primitives (Intel IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel architectures. Its royalty-free APIs help developers take advantage of single instruction, multiple data (SIMD) instructions.
The library supports Intel and compatible processors and is available for Linux, macOS and Windows. It is available separately or as a part of Intel oneAPI Base Toolkit.
Intel IPP releases use a semantic versioning schema; even though the major version looks like a year (YYYY), it is not technically meant to be a year, and it might not change every calendar year.
Features
The library takes advantage of processor features including MMX, SSE, SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX-512, AES-NI and multi-core processors.
Intel IPP includes functions for:
Video decode/encode
Audio decode/encode
JPEG/JPEG2000/JPEG XR
Computer vision
Cryptography
Data compression
Image color conversion
Image processing
Ray tracing and rendering
Signal processing
Speech coding
Speech recognition
String processing
Vector and matrix mathematics
Organization
Intel IPP is divided into four major processing groups: signal processing (with linear array or vector data), image processing (with 2D arrays for typical color spaces), data compression, and cryptography.
Half the entry points are of the matrix type, a third are of the signal type, and the remainder are of the image and cryptography types. Intel IPP functions are further divided by the data type they operate on, indicated by a suffix: 8u (8-bit unsigned), 8s (8-bit signed), 16s, 32f (32-bit floating-point), 64f, and so on. Typically, an application developer works with only one dominant data type for most processing functions, converting from input to processing to output formats at the end points.
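The function-naming scheme follows directly from this organization: a domain prefix ("ipps" for signal, "ippi" for image), an operation name, and a data-type suffix. The sketch below adds two vectors of 32-bit floats with the signal-domain primitive ippsAdd_32f; it is a minimal sketch that assumes the IPP headers and libraries are installed and linked (for example via the oneAPI toolkit):

```c
#include <stdio.h>
#include <ipp.h>   /* assumes an installed Intel IPP development environment */

int main(void) {
    /* "ipps" = signal domain, "Add" = operation, "32f" = element type */
    Ipp32f a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    Ipp32f b[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    Ipp32f sum[4];

    IppStatus st = ippsAdd_32f(a, b, sum, 4);
    if (st != ippStsNoErr) {
        printf("IPP error: %s\n", ippGetStatusString(st));
        return 1;
    }
    for (int i = 0; i < 4; ++i)
        printf("%.1f ", sum[i]);   /* prints: 1.5 2.5 3.5 4.5 */
    printf("\n");
    return 0;
}
```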
History
Version 2.0 files are dated April 22, 2002.
Version 3.0
Version 4.0 files are dated November 11, 2003. 4.0 runtime fully supports applications coded for 3.0 and 2.0.
Version 5.1 files are dated March 9, 2006. 5.1 runtime does not support applications coded for 4.0 or before.
Version 5.2 files are dated April 11, 2007. The 5.2 runtime does not support applications coded for 5.1 or before. It was introduced June 5, 2007, adding code samples for data compression, new video codec support, support for 64-bit applications, support for Windows Vista, and new functions for ray-tracing and rendering.
Version 6.1 was released with the Intel C++ Compiler on June 28, 2009. Update 1 for version 6.1 was released on July 28, 2009. Update 2 files are dated October 19, 2009.
Version 7.1
Version 8.0
Version 8.1
Version 8.2
Version 9.0 Initial Release, August 25, 2015
Version 9.0 Update 1, December 1, 2015
Version 9.0 Update 2
Version 9.0 Update 3
Version 9.0 Update 4
Version 2017 Initial Release
Version 2017 Update 1
Version 2017 Update 2
Version 2017 Update 3, February 28, 2016
Version 2018 Initial Release
Version 2018 Update 1
Version 2018 Update 2
Version 2018 Update 2.1
Version 2018 Update 3
Version 2018 Update 3.1
Version 2018 Update 4, September 20, 2018
Version 2019 Initial Release
Version 2019 Update 1
Version 2019 Update 2
Version 2019 Update 3, February 14, 2019
Version 2019 Update 4
Version 2019 Update 5
Version 2020 Initial Release, December 12, 2019
Version 2020 Update 1, March 30, 2020
Version 2020 Update 2, July 16, 2020
Version 2020 Update 3
Version 2021 Initial Release
Version 2021.1
Version 2021.2
Version 2021.3
Version 2021.4
Version 2021.5
Version 2021.6
Version 2021.7, December 2022
Version 2021.8, April 2023
Version 2021.9.0, July 2023
Version 2021.9.1, October 2023
Version 2021.10.0, November 2023
Version 2021.10.1, December 2023
Version 2021.11.0, March 2024
Version 2021.12.0, June 2024
Counterparts
Sun: mediaLib for Solaris
Apple: vDSP, vImage, Accelerate etc. for macOS
AMD: Framewave (formerly the AMD Performance Library or APL)
Khronos Group: OpenMAX DL
NVIDIA Performance Primitives
See also
Intel oneAPI Base Toolkit
Intel oneAPI HPC Toolkit
Intel oneAPI IoT Toolkit
Intel oneAPI Data Analytics Library (oneDAL)
Intel oneAPI Math Kernel Library (oneMKL)
Intel oneAPI Threading Building Blocks (oneTBB)
Intel Advisor
Intel Inspector
Intel VTune Profiler
Intel Developer Zone (Intel DZ; support and discussion)
References
External links
Intel oneAPI Base Toolkit Home Page
Stewart Taylor, "Intel Integrated Performance Primitives - How to Optimize Software Applications Using Intel IPP", Intel Press.
Jpeg Delphi implementation using official JPEG Group C library or Intel Jpeg Library 1.5 (ijl.dll included)
How To Install OpenCV using IPP (in French).
C (programming language) libraries
C++ libraries
Intel software
Multimedia software | Integrated Performance Primitives | Technology | 1,109 |
14,505,530 | https://en.wikipedia.org/wiki/Pier%20Luigi%20Luisi | Pier Luigi Luisi (born 23 May 1938) is an Italian chemist and academic. He received the "professor emeritus" title from the Swiss Federal Institute of Technology (ETHZ). He worked there as a scientist from 1970 until 2003, and as a Professor of Chemistry from 1980 until he departed. Luisi then moved to the Roma Tre University as a Professor of Biochemistry, where he worked until 2015.
In 1985, Luisi founded the Cortona Week, an international summer school.
Personal life
Pier Luigi Luisi was born on 23 May 1938. He is now a retired professor and lives in Tavira, Portugal.
Education
Luisi graduated with a chemistry degree from the Scuola Normale Superiore di Pisa.
Books
The Systems View of Life: A Unifying Vision (with Fritjof Capra), Cambridge University Press, 2014, translated into several languages. The Italian edition was published by Aboca, 2014, under the title Vita e Natura - Una visione sistemica.
The Emergence of Life: From Chemical Origins to Synthetic Biology Cambridge University Press, second edition 2016
Giant Vesicles (Perspectives in Supramolecular Chemistry) (with Peter Walde)
Mind and life: discussions with the Dalai Lama on the nature of reality, Columbia University Press, 2009, ,
References
External links
Pier Luigi Luisi on Lifeboat Foundation
Pier Luigi Luisi on Meer.com
1938 births
Living people
Scuola Normale Superiore di Pisa alumni
Swiss chemists
Synthetic biologists
Academic staff of Roma Tre University | Pier Luigi Luisi | Biology | 328 |
5,004,139 | https://en.wikipedia.org/wiki/MON-90 | The MON-90 () is a Claymore-shaped, plastic bodied, directional type of anti-personnel mine designed in the Soviet Union. It is designed to wound or kill by fragmentation. The mine is similar in appearance to the MON-50, but is approximately twice the size with a much greater depth.
Design
The MON-90 has an attachment point on the bottom for connecting a special clamp, which can be attached to wood, metal, etc., but it has no scissor-type legs. It has a sight centered on the top which is flanked by two detonator cavities. The mine contains 6.2 kg of RDX (PVV-5A) to propel approximately 2000 steel rod fragments to a lethal range of 90 meters in a 54° arc (60 m wide spread at 90 m range).
The MON-90 is usually command actuated using a PN manual inductor and an EDP-R electric detonator (ZT non-electric detonator also available). It can also be actuated by a variety of booby trap (BT) switches including:
MUV series pull
MVE-72 electric breakwire
VP13 seismic controller.
The MON-90 is usually mounted above ground level on the surface or up in a tree to give the greatest dispersion of fragments. It is waterproof and will function effectively from temperatures of +50 to −50 °C. Due to its large size the MON-90 is effective against unarmored vehicles and it may have applications as an anti-helicopter mine.
It can be located visually or with metal detectors under most field conditions. Depending on its actuation method, the MON-90 may be resistant to blast overpressure from explosive breaching systems like the Giant Viper and M58 MICLIC.
Specifications
Country of origin: Soviet Union
Mine action:
Material: Plastic casing
Shape: Claymore
Colour: Green, olive
Total weight: 12.1 kg
Explosive content: 6.2 kg RDX (PVV-5A) explosive
Operating pressure (kg):
Length: 345 mm
Width: 153 mm
Height: 202 mm
Fuze #1: Command detonated using PN manual inductor attached by demolition cable to an EDP-R electric detonator
Fuze #2:
MUV Series Mechanical Pull
MVE-72 Electric Breakwire (battery powered)
VP13 Seismic Controller (battery powered)
Disarming (demining) hazards
The MON-90 is known to be used with the VP13 seismic controller, which prevents close approach for any clearance operations. If the mine is encountered with any type of electrical wires running from it, both ends of the wire should be secured before approaching the mine, because it could be linked to another mine or other booby trap device.
On detonation the mine will normally propel lethal fragmentation to a range of 90 meters. The actual hazard range for these types of mines can be as high as 300 meters directly in front of the mine, based on US Army tests of the M18A1 Claymore; fragmentation range and density drop off to 125 meters to the sides and rear of these mines.
See also
MON-50, similar but smaller claymore shaped AP mine.
MON-100
MON-200
M18A1 Claymore Antipersonnel Mine
References
Area denial weapons
Land mines of the Soviet Union
Anti-personnel mines | MON-90 | Engineering | 682 |
43,151,478 | https://en.wikipedia.org/wiki/Clometerone | Clometerone (; developmental code L-38000; also known as clometherone () or 6α-chloro-16α-methylprogesterone) is a synthetic pregnane steroid and derivative of progesterone which was reported in 1962 and is described as an antiestrogen and antiandrogen but was never marketed.
Clometerone has been found to suppress estrone-induced uterine hypertrophy in mice at oral and parenteral doses at which progesterone is inactive (clometerone being active at 10 μg while progesterone is inactive at 10–100 μg by both routes). However, its progestogenic potency in the Clauberg assay is considerably less than that of progesterone. As such, the progestogenic effects of clometerone do not seem to parallel its antiestrogenic effects. It was also studied as an antiandrogen in men, but was found to slightly increase sebum production when given orally and to variably and inconsistently affect sebum production when given as a topical medication.
See also
Steroidal antiandrogen
List of steroidal antiandrogens
References
Abandoned drugs
Antiestrogens
Organochlorides
Diketones
Pregnanes
Progestogens
Steroidal antiandrogens | Clometerone | Chemistry | 281 |
706,311 | https://en.wikipedia.org/wiki/Canonical%20coordinates | In mathematics and classical mechanics, canonical coordinates are sets of coordinates on phase space which can be used to describe a physical system at any given point in time. Canonical coordinates are used in the Hamiltonian formulation of classical mechanics. A closely related concept also appears in quantum mechanics; see the Stone–von Neumann theorem and canonical commutation relations for details.
As Hamiltonian mechanics is generalized by symplectic geometry and canonical transformations are generalized by contact transformations, so the 19th century definition of canonical coordinates in classical mechanics may be generalized to a more abstract 20th century definition of coordinates on the cotangent bundle of a manifold (the mathematical notion of phase space).
Definition in classical mechanics
In classical mechanics, canonical coordinates are coordinates $q^i$ and $p_i$ in phase space that are used in the Hamiltonian formalism. The canonical coordinates satisfy the fundamental Poisson bracket relations:
$$\{q^i, q^j\} = 0, \qquad \{p_i, p_j\} = 0, \qquad \{q^i, p_j\} = \delta^i_j.$$
A typical example of canonical coordinates is for $q^i$ to be the usual Cartesian coordinates and $p_i$ to be the components of momentum. Hence in general, the $p_i$ coordinates are referred to as "conjugate momenta".
Canonical coordinates can be obtained from the generalized coordinates of the Lagrangian formalism by a Legendre transformation, or from another set of canonical coordinates by a canonical transformation.
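For reference, the Legendre transformation mentioned above takes the standard form (a textbook identity, stated here for orientation rather than taken from this article):
$$p_i = \frac{\partial L}{\partial \dot{q}^i}, \qquad H(q, p, t) = \sum_i p_i \, \dot{q}^i - L(q, \dot{q}, t),$$
where the generalized velocities $\dot{q}^i$ are eliminated in favor of the momenta $p_i$ on the right-hand side.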
Definition on cotangent bundles
Canonical coordinates are defined as a special set of coordinates on the cotangent bundle of a manifold. They are usually written as a set of $(q^i, p_i)$ or $(x^i, p_i)$, with the $x$s or $q$s denoting the coordinates on the underlying manifold and the $p$s denoting the conjugate momenta, which are 1-forms in the cotangent bundle at point $q$ in the manifold.
A common definition of canonical coordinates is any set of coordinates on the cotangent bundle that allow the canonical one-form to be written in the form
$$\theta = \sum_i p_i \, \mathrm{d}q^i,$$
up to a total differential. A change of coordinates that preserves this form is a canonical transformation; these are a special case of symplectomorphisms, which are essentially changes of coordinates on a symplectic manifold.
In the following exposition, we assume that the manifolds are real manifolds, so that cotangent vectors acting on tangent vectors produce real numbers.
Formal development
Given a manifold $Q$, a vector field $X$ on $Q$ (a section of the tangent bundle $TQ$) can be thought of as a function acting on the cotangent bundle, by the duality between the tangent and cotangent spaces. That is, define a function
$$P_X : T^*Q \to \mathbb{R}$$
such that
$$P_X(q, p) = p(X_q)$$
holds for all cotangent vectors $p$ in $T_q^*Q$. Here, $X_q$ is a vector in $T_qQ$, the tangent space to the manifold at point $q$. The function $P_X$ is called the momentum function corresponding to $X$.
In local coordinates, the vector field $X$ at point $q$ may be written as
$$X_q = \sum_i X^i(q) \frac{\partial}{\partial x^i},$$
where the $\partial/\partial x^i$ are the coordinate frame on $T_qQ$. The conjugate momentum then has the expression
$$P_X(q, p) = \sum_i X^i(q)\, p_i,$$
where the $p_i$ are defined as the momentum functions corresponding to the vectors $\partial/\partial x^i$:
$$p_i = P_{\partial/\partial x^i}.$$
The $x^i$ together with the $p_j$ together form a coordinate system on the cotangent bundle $T^*Q$; these coordinates are called the canonical coordinates.
Generalized coordinates
In Lagrangian mechanics, a different set of coordinates are used, called the generalized coordinates. These are commonly denoted as $(q^i, \dot{q}^i)$, with $q^i$ called the generalized position and $\dot{q}^i$ the generalized velocity. When a Hamiltonian is defined on the cotangent bundle, then the generalized coordinates are related to the canonical coordinates by means of the Hamilton–Jacobi equations.
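Concretely, the relation runs through the classical action $S(q, t)$; the standard Hamilton–Jacobi equations (again a textbook statement, included only for orientation) read:
$$p_i = \frac{\partial S}{\partial q^i}, \qquad H\left(q, \frac{\partial S}{\partial q}, t\right) + \frac{\partial S}{\partial t} = 0.$$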
See also
Linear discriminant analysis
Symplectic manifold
Symplectic vector field
Symplectomorphism
Kinetic momentum
Complementarity (physics)
Canonical quantization
Canonical quantum gravity
References
Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London. See section 3.2.
Differential topology
Symplectic geometry
Hamiltonian mechanics
Lagrangian mechanics
Coordinate systems
Moment (physics) | Canonical coordinates | Physics,Mathematics | 746 |
45,101,285 | https://en.wikipedia.org/wiki/David%20S.%20Frankel | David S. Frankel (born 1950) is an American Information Technology expert and consultant, known for his work on model-driven engineering and semantic information modeling.
Biography
Frankel obtained his BS in Mathematics from the University of Illinois at Urbana-Champaign, and subsequently his Master of Social Work from the same university.
Frankel started his career in the software industry in the 1970s, developing software tools for HP 2100 minicomputers, and became a senior Programmer-Analyst. In 1982 he started as an independent consultant, participating in the development of local area network-based database applications. He was Enterprise Architect for several companies, was Lead Standards Architect in the domain of Model-Driven Systems at SAP Labs in California from 2005 to 2012, and has been an independent consultant ever since.
Frankel has served on the Architecture Board of the Object Management Group (OMG) for many years. In 2003 he published his most cited work, "Model Driven Architecture: Applying Mda to Enterprise Computing."
Selected publications
Frankel, David S. Model Driven Architecture Applying Mda to Enterprise Computing. John Wiley & Sons, 2003.
Parodi, John, and David S. Frankel. The MDA journal: model driven architecture straight from the masters. Meghan-Kiffer Press, 2004.
Articles, a selection:
David S. Frankel, Harmon, P., Mukerji, J., Odell, J., Owen, M., Rivitt, P., Rosen, M... & Soley, R. M. et al. (2003) "The Zachman Framework and the OMG's Model Driven Architecture," Business Process Trends, 9 (2003).
Frankel, David S. "The MDA marketing message and the MDA reality." MDA Journal, a Business Process Trends Column (2004).
Frankel, David S., et al. "A Model-Driven Semantic Web: Reinforcing Complementary Strengths." MDA Journal, Business Process Trends (2004).
Frankel, D., Hayes, P., Kendall, E., & McGuinness, D. (2004). "The model driven semantic web." In 1st International Workshop on the Model-Driven Semantic Web (MDSW2004), Monterey, California, USA.
References
External links
Published Articles Written by David S. Frankel
1950 births
Living people
American computer scientists
Information systems researchers
University of Illinois Urbana-Champaign alumni | David S. Frankel | Technology | 497 |
38,245,286 | https://en.wikipedia.org/wiki/Phenotype%20microarray | The phenotype microarray approach is a technology for high-throughput phenotyping of cells.
A phenotype microarray system enables one to monitor simultaneously the phenotypic reaction of cells to environmental challenges or exogenous compounds in a high-throughput manner.
The phenotypic reactions are recorded as either end-point measurements or respiration kinetics similar to growth curves.
Usages
High-throughput phenotypic testing is increasingly important for exploring the biology of bacteria, fungi, yeasts, and animal cell lines such as human cancer cells. Just as DNA microarrays and proteomic technologies have made it possible to assay the expression level of thousands of genes or proteins all at once, phenotype microarrays (PMs) make it possible to quantitatively measure thousands of cellular phenotypes simultaneously. The approach also offers potential for testing gene function and improving genome annotation. In contrast to many of the hitherto available molecular high-throughput technologies, phenotypic testing is performed with living cells, thus providing comprehensive information about the performance of entire cells. The major applications of the PM technology are in the fields of systems biology, microbial cell physiology, microbiology and taxonomy, and mammalian cell physiology, including clinical research such as on autism. Advantages of PMs over standard growth curves are that cellular respiration can be measured in environmental conditions where cellular replication (growth) may not be possible, and that it is more accurate than optical density, which can vary between different cellular morphologies. In addition, respiration reactions are usually detected much earlier than cellular growth.
Technology
A sole carbon source that can be transported into a cell and metabolized to produce NADH engenders a redox potential and a flow of electrons that reduces a tetrazolium dye, such as tetrazolium violet, producing a purple color. The more rapid this metabolic flow, the more quickly the purple color forms. The formation of purple color is a positive reaction, interpreted to mean that the sole carbon source is used as an energy source. A microplate reader and incubation facility are needed to provide the appropriate incubation conditions and to automatically read the intensity of color formation during tetrazolium reduction at intervals of, e.g., 15 minutes.
The principal idea of retrieving information about the abilities of an organism and its special modes of action when making use of certain energy sources can be equivalently applied to other macro-nutrients such as nitrogen, sulfur, or phosphorus and their compounds and derivatives.
As an extension, the impact of auxotrophic supplements or antibiotics, heavy metals or other inhibitory compounds on the respiration behaviour of the cells can be determined.
Data structure
During a positive reaction, the longitudinal kinetics are expected to appear as sigmoidal curves in analogy to typical bacterial growth curves. Comparable to bacterial growth curves, the respiration kinetic curves may provide valuable information coded in the length of the lag phase λ, the respiration rate μ (corresponding to the steepness of the slope), the maximum cell respiration A (corresponding to the maximum value recorded), and the area under the curve (AUC). In contrast to bacterial growth curves, there is typically no death phase in PMs, as the reduced tetrazolium dye is insoluble.
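A minimal sketch of estimating these curve parameters from recorded kinetics (the modified-logistic model, the 15-minute sampling grid, and all numeric values here are illustrative assumptions, not taken from any particular PM instrument or software package):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def logistic(t, A, mu, lam):
    # Modified-logistic respiration model: A = maximum signal,
    # mu = respiration rate (maximum slope), lam = lag phase.
    return A / (1.0 + np.exp(4.0 * mu / A * (lam - t) + 2.0))

# Hypothetical plate-reader signal recorded every 15 minutes for 24 h.
t = np.arange(0.0, 24.25, 0.25)
rng = np.random.default_rng(1)
signal = logistic(t, A=200.0, mu=30.0, lam=4.0) + rng.normal(0.0, 3.0, t.size)

# Estimate A, mu, and lambda; compute the AUC by the trapezoidal rule.
(A_hat, mu_hat, lam_hat), _ = curve_fit(logistic, t, signal, p0=(150.0, 10.0, 2.0))
auc = trapezoid(signal, t)
print(f"A = {A_hat:.1f}, mu = {mu_hat:.1f}, lambda = {lam_hat:.1f} h, AUC = {auc:.0f}")
```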
Software
Proprietary, commercially available software provides solutions for storage, retrieval, and analysis of high-throughput phenotype data. A powerful free and open-source alternative is the "opm" package based on R. "opm" contains tools for analyzing PM data, including management, visualization and statistical analysis, covering curve-parameter estimation, dedicated and customizable plots, metadata management, statistical comparison with genome and pathway annotations, automatic generation of taxonomic reports, data discretization for phylogenetic software and export in the YAML markup language. In conjunction with other R packages it was used to apply boosting to re-analyse autism PM data and detect more determining factors. The "opm" package has been developed and is maintained at the Deutsche Sammlung von Mikroorganismen und Zellkulturen. Another free and open-source software package developed to analyze phenotype microarray data is "DuctApe", a Unix command-line tool that also correlates genomic data. Other software tools are PheMaDB, which provides a solution for storage, retrieval, and analysis of high-throughput phenotype data, and the PMViewer software, which focuses on graphical display but does not enable further statistical analysis. The latter is not publicly available.
See also
Cell Painting
References
External links
PheMaDB website
Microbiology
Physiology
Phenomics | Phenotype microarray | Chemistry,Biology | 972 |
7,533,645 | https://en.wikipedia.org/wiki/Cone%20%28category%20theory%29 | In category theory, a branch of mathematics, the cone of a functor is an abstract notion used to define the limit of that functor. Cones make other appearances in category theory as well.
Definition
Let F : J → C be a diagram in C. Formally, a diagram is nothing more than a functor from J to C. The change in terminology reflects the fact that we think of F as indexing a family of objects and morphisms in C. The category J is thought of as an "index category". One should consider this in analogy with the concept of an indexed family of objects in set theory. The primary difference is that here we have morphisms as well. Thus, for example, when J is a discrete category, it corresponds most closely to the idea of an indexed family in set theory. Another common and more interesting example takes J to be a span. J can also be taken to be the empty category, leading to the simplest cones.
Let N be an object of C. A cone from N to F is a family of morphisms
ψ_X : N → F(X)
for each object X of J, such that for every morphism f : X → Y in J the corresponding triangle commutes, i.e. F(f) ∘ ψ_X = ψ_Y.
The (usually infinite) collection of all these triangles can be (partially) depicted in the shape of a cone with the apex N. The cone ψ is sometimes said to have vertex N and base F.
One can also define the dual notion of a cone from F to N (also called a co-cone) by reversing all the arrows above. Explicitly, a co-cone from F to N is a family of morphisms
ψ_X : F(X) → N
for each object X of J, such that for every morphism f : X → Y in J the corresponding triangle commutes, i.e. ψ_Y ∘ F(f) = ψ_X.
Equivalent formulations
At first glance cones seem to be slightly abnormal constructions in category theory. They are maps from an object to a functor (or vice versa). In keeping with the spirit of category theory we would like to define them as morphisms or objects in some suitable category. In fact, we can do both.
Let J be a small category and let CJ be the category of diagrams of type J in C (this is nothing more than a functor category). Define the diagonal functor Δ : C → CJ as follows: Δ(N) : J → C is the constant functor to N for all N in C.
If F is a diagram of type J in C, the following statements are equivalent:
ψ is a cone from N to F
ψ is a natural transformation from Δ(N) to F
(N, ψ) is an object in the comma category (Δ ↓ F)
The dual statements are also equivalent:
ψ is a co-cone from F to N
ψ is a natural transformation from F to Δ(N)
(N, ψ) is an object in the comma category (F ↓ Δ)
These statements can all be verified by a straightforward application of the definitions. Thinking of cones as natural transformations we see that they are just morphisms in CJ with source (or target) a constant functor.
Category of cones
By the above, we can define the category of cones to F as the comma category (Δ ↓ F). Morphisms of cones are then just morphisms in this category. This equivalence is rooted in the observation that a natural map between constant functors Δ(N), Δ(M) corresponds to a morphism between N and M. In this sense, the diagonal functor acts trivially on arrows. In similar vein, writing down the definition of a natural map from a constant functor Δ(N) to F yields the same diagram as the above. As one might expect, a morphism from a cone (N, ψ) to a cone (L, φ) is just a morphism N → L such that all the "obvious" diagrams commute (see the first diagram in the next section).
Likewise, the category of co-cones from F is the comma category (F ↓ Δ).
Universal cones
Limits and colimits are defined as universal cones. That is, cones through which all other cones factor. A cone φ from L to F is a universal cone if for any other cone ψ from N to F there is a unique morphism from ψ to φ.
Equivalently, a universal cone to F is a universal morphism from Δ to F (thought of as an object in CJ), or a terminal object in (Δ ↓ F).
Dually, a cone φ from F to L is a universal cone if for any other cone ψ from F to N there is a unique morphism from φ to ψ.
Equivalently, a universal cone from F is a universal morphism from F to Δ, or an initial object in (F ↓ Δ).
The limit of F is a universal cone to F, and the colimit is a universal cone from F. As with all universal constructions, universal cones are not guaranteed to exist for all diagrams F, but if they do exist they are unique up to a unique isomorphism (in the comma category (Δ ↓ F)).
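As a concrete illustration (a standard textbook example, stated here under the notation above): take J to be the discrete category with two objects, so that a diagram F simply picks out two objects A and B of C. In LaTeX notation:

```latex
% A cone to F from N is a pair of morphisms out of a common object:
%   \psi_1 : N \to A, \qquad \psi_2 : N \to B.
% A universal cone (L, \pi_1, \pi_2) is precisely the product A \times B:
% for every cone (N, \psi_1, \psi_2) there is a unique u : N \to L with
\[
  \pi_1 \circ u = \psi_1
  \qquad\text{and}\qquad
  \pi_2 \circ u = \psi_2 .
\]
```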
See also
References
External links
Category theory
Limits (category theory) | Cone (category theory) | Mathematics | 1,083 |
979,564 | https://en.wikipedia.org/wiki/Near-infrared%20spectroscopy | Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Typical applications include medical and physiological diagnostics and research including blood sugar, pulse oximetry, functional neuroimaging, sports medicine, elite sports training, ergonomics, rehabilitation, neonatal research, brain-computer interfaces, urology (bladder contraction), and neurology (neurovascular coupling). There are also applications in other areas, such as pharmaceutical, food and agrochemical quality control, atmospheric chemistry, and combustion research.
Theory
Near-infrared spectroscopy is based on molecular overtone and combination vibrations. Overtones and combinations exhibit lower intensity than the fundamental vibrations; as a result, the molar absorptivity in the near-IR region is typically quite small. (NIR absorption bands are typically 10–100 times weaker than the corresponding fundamental mid-IR absorption band.) The lower absorption allows NIR radiation to penetrate much further into a sample than mid-infrared radiation. Near-infrared spectroscopy is, therefore, not a particularly sensitive technique, but it can be very useful in probing bulk material with little to no sample preparation.
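For context, the quantity at issue is the molar absorptivity ε in the Beer–Lambert law (standard background that the article leaves implicit):

```latex
% Absorbance A for path length \ell and analyte concentration c:
\[
  A \;=\; \varepsilon \, \ell \, c .
\]
% The small \varepsilon of NIR overtone and combination bands keeps A
% "on-scale" even for long path lengths through bulk samples.
```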
The molecular overtone and combination bands seen in the near-IR are typically very broad, leading to complex spectra; it can be difficult to assign specific features to specific chemical components. Multivariate (multiple variables) calibration techniques (e.g., principal components analysis, partial least squares, or artificial neural networks) are often employed to extract the desired chemical information. Careful development of a set of calibration samples and application of multivariate calibration techniques is essential for near-infrared analytical methods.
History
The discovery of near-infrared energy is ascribed to William Herschel in the 19th century, but the first industrial application began in the 1950s. In the first applications, NIRS was used only as an add-on unit to other optical devices that used other wavelengths such as ultraviolet (UV), visible (Vis), or mid-infrared (MIR) spectrometers. In the 1980s, a single-unit, stand-alone NIRS system was made available.
In the 1980s, Karl Norris (while working at the USDA Instrumentation Research Laboratory, Beltsville, USA) pioneered the use of NIR spectroscopy for quality assessments of agricultural products. Since then, its use has expanded from food and agricultural products to the chemical, polymer, and petroleum industries; the pharmaceutical industry; biomedical sciences; and environmental analysis.
With the introduction of light-fiber optics in the mid-1980s and monochromator-detector developments in the early 1990s, NIRS became a more powerful tool for scientific research. The method has been used in a number of fields of science, including physics, physiology, and medicine. It is only in the last few decades that NIRS began to be used as a medical tool for monitoring patients, with the first clinical application of so-called fNIRS in 1994.
Instrumentation
Instrumentation for near-IR (NIR) spectroscopy is similar to instruments for the UV-visible and mid-IR ranges. There is a source, a detector, and a dispersive element (such as a prism, or, more commonly, a diffraction grating) to allow the intensity at different wavelengths to be recorded. Fourier transform NIR instruments using an interferometer are also common, especially for wavelengths above ~1000 nm. Depending on the sample, the spectrum can be measured in either reflection or transmission.
Common incandescent or quartz halogen light bulbs are most often used as broadband sources of near-infrared radiation for analytical applications. Light-emitting diodes (LEDs) can also be used. For high precision spectroscopy, wavelength-scanned lasers and frequency combs have recently become powerful sources, albeit with sometimes longer acquisition timescales. When lasers are used, a single detector without any dispersive elements might be sufficient.
The type of detector used depends primarily on the range of wavelengths to be measured. Silicon-based CCDs are suitable for the shorter end of the NIR range, but are not sufficiently sensitive over most of the range (over 1000 nm). InGaAs and PbS devices are more suitable and have higher quantum efficiency for wavelengths above 1100 nm. It is possible to combine silicon-based and InGaAs detectors in the same instrument. Such instruments can record both UV-visible and NIR spectra 'simultaneously'.
Instruments intended for chemical imaging in the NIR may use a 2D array detector with an acousto-optic tunable filter. Multiple images may be recorded sequentially at different narrow wavelength bands.
Many commercial instruments for UV/vis spectroscopy are capable of recording spectra in the NIR range (to perhaps ~900 nm). In the same way, the range of some mid-IR instruments may extend into the NIR. In these instruments, the detector used for the NIR wavelengths is often the same detector used for the instrument's "main" range of interest.
NIRS as an analytical technique
The use of NIR as an analytical technique did not come from extending the use of mid-IR into the near-IR range, but developed independently. A striking sign of this is that, while mid-IR spectroscopists use wavenumbers (cm−1) when displaying spectra, NIR spectroscopists use wavelength (nm), as in ultraviolet–visible spectroscopy. The early practitioners of IR spectroscopy, who depended on assignment of absorption bands to specific bond types, were frustrated by the complexity of the region. However, as a quantitative tool, the lower molar absorption levels in the region tended to keep absorption maxima "on-scale", enabling quantitative work with little sample preparation. The techniques applied to extract the quantitative information from these complex spectra were unfamiliar to analytical chemists, and the technique was viewed with suspicion in academia.
Generally, a quantitative NIR analysis is accomplished by selecting a group of calibration samples, for which the concentration of the analyte of interest has been determined by a reference method, and finding a correlation between various spectral features and those concentrations using a chemometric tool. The calibration is then validated by using it to predict the analyte values for samples in a validation set, whose values have been determined by the reference method but were not included in the calibration. A validated calibration is then used to predict the values of unknown samples. The complexity of the spectra is overcome by the use of multivariate calibration. The two tools most often used are multi-wavelength linear regression and partial least squares.
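A minimal sketch of this calibration/validation workflow in Python (the spectra and reference values below are synthetic placeholders, and scikit-learn's PLSRegression is used as one common implementation of partial least squares, not as the method prescribed by any standard):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical data set: 100 NIR spectra (absorbance at 500 wavelengths)
# with analyte concentrations determined by a reference method.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(100, 500))              # stand-in for measured spectra
concentrations = rng.uniform(0.0, 10.0, size=100)  # stand-in reference values

# Split into calibration and validation sets, as described above.
X_cal, X_val, y_cal, y_val = train_test_split(
    spectra, concentrations, test_size=0.3, random_state=0)

# Build the multivariate calibration on the calibration set.
pls = PLSRegression(n_components=5)
pls.fit(X_cal, y_cal)

# Validate: predict the validation samples and compare with the
# reference values via the root-mean-square error of prediction.
y_pred = pls.predict(X_val).ravel()
rmsep = float(np.sqrt(np.mean((y_val - y_pred) ** 2)))
print(f"RMSEP: {rmsep:.3f}")
```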
Applications
Typical applications of NIR spectroscopy include the analysis of food products, pharmaceuticals, combustion products, and a major branch of astronomical spectroscopy.
Astronomical spectroscopy
Near-infrared spectroscopy is used in astronomy for studying the atmospheres of cool stars where molecules can form. The vibrational and rotational signatures of molecules such as titanium oxide, cyanide, and carbon monoxide can be seen in this wavelength range and can give a clue towards the star's spectral type. It is also used for studying molecules in other astronomical contexts, such as in molecular clouds where new stars are formed. The astronomical phenomenon known as reddening means that near-infrared wavelengths are less affected by dust in the interstellar medium, such that regions inaccessible by optical spectroscopy can be studied in the near-infrared. Since dust and gas are strongly associated, these dusty regions are exactly those where infrared spectroscopy is most useful. The near-infrared spectra of very young stars provide important information about their ages and masses, which is important for understanding star formation in general. Astronomical spectrographs have also been developed for the detection of exoplanets using the Doppler shift of the parent star due to the radial velocity of the planet around the star.
Agriculture
Near-infrared spectroscopy is widely applied in agriculture for determining the quality of forages, grains and grain products, oilseeds, coffee, tea, spices, fruits, vegetables, sugarcane, beverages, fats and oils, dairy products, eggs, meat, and other agricultural products. It is widely used to quantify the composition of agricultural products because it meets the criteria of being accurate, reliable, rapid, non-destructive, and inexpensive. Abeni and Bergoglio (2001) applied NIRS to chicken breeding as an assay method for characteristics of fat composition.
Remote monitoring
Techniques have been developed for NIR spectroscopic imaging. Hyperspectral imaging has been applied for a wide range of uses, including the remote investigation of plants and soils. Data can be collected from instruments on airplanes, satellites or unmanned aerial systems to assess ground cover and soil chemistry.
Remote monitoring or remote sensing from the NIR spectroscopic region can also be used to study the atmosphere. For example, measurements of atmospheric gases are made from NIR spectra measured by the OCO-2, GOSAT, and the TCCON.
Materials science
Techniques have been developed for NIR spectroscopy of microscopic sample areas for film thickness measurements, research into the optical characteristics of nanoparticles and optical coatings for the telecommunications industry.
Medical uses
The application of NIRS in medicine centres on its ability to provide information about the oxygen saturation of haemoglobin within the microcirculation. Broadly speaking, it can be used to assess oxygenation and microvascular function in the brain (cerebral NIRS) or in the peripheral tissues (peripheral NIRS).
Cerebral NIRS
When a specific area of the brain is activated, the localized blood volume in that area changes quickly. Optical imaging can measure the location and activity of specific regions of the brain by continuously monitoring blood hemoglobin levels through the determination of optical absorption coefficients.
NIRS can be used as a quick screening tool for possible intracranial bleeding cases by placing the scanner on four locations on the head. In non-injured patients the brain absorbs the NIR light evenly. When there is an internal bleeding from an injury, the blood may be concentrated in one location causing the NIR light to be absorbed more than other locations, which the scanner detects.
So-called functional NIRS can be used for non-invasive assessment of brain function through the intact skull in human subjects by detecting changes in blood hemoglobin concentrations associated with neural activity, e.g., in branches of cognitive psychology as a partial replacement for fMRI techniques. NIRS can be used on infants, and NIRS instrumentation is much more portable than fMRI machines; even wireless instrumentation is available, which enables investigations in freely moving subjects. However, NIRS cannot fully replace fMRI because it can only be used to scan cortical tissue, whereas fMRI can be used to measure activation throughout the brain. Special public-domain statistical toolboxes for analysis of stand-alone and combined NIRS/MRI measurements have been developed.
The application in functional mapping of the human cortex is called functional NIRS (fNIRS) or diffuse optical tomography (DOT). The term diffuse optical tomography is used for three-dimensional NIRS. The terms NIRS, NIRI, and DOT are often used interchangeably, but they have some distinctions. The most important difference between NIRS and DOT/NIRI is that DOT/NIRI is used mainly to detect changes in optical properties of tissue simultaneously from multiple measurement points and display the results in the form of a map or image over a specific area, whereas NIRS provides quantitative data in absolute terms on up to a few specific points. The latter is also used to investigate other tissues such as, e.g., muscle, breast and tumors. NIRS can be used to quantify blood flow, blood volume, oxygen consumption, reoxygenation rates and muscle recovery time in muscle.
By employing several wavelengths and time-resolved (frequency- or time-domain) and/or spatially resolved methods, blood flow, blood volume and absolute tissue saturation (StO2, or Tissue Saturation Index (TSI)) can be quantified. Applications of oximetry by NIRS methods include neuroscience, ergonomics, rehabilitation, brain-computer interfaces, urology, the detection of illnesses that affect the blood circulation (e.g., peripheral vascular disease), the detection and assessment of breast tumors, and the optimization of training in sports medicine.
The use of NIRS in conjunction with a bolus injection of indocyanine green (ICG) has been used to measure cerebral blood flow and cerebral metabolic rate of oxygen consumption (CMRO2).
It has also been shown that CMRO2 can be calculated with combined NIRS/MRI measurements. Additionally metabolism can be interrogated by resolving an additional mitochondrial chromophore, cytochrome-c-oxidase, using broadband NIRS.
NIRS is starting to be used in pediatric critical care to help manage patients following cardiac surgery. Indeed, NIRS is able to measure venous oxygen saturation (SvO2), which is determined by the cardiac output as well as other parameters (FiO2, hemoglobin, oxygen uptake). Therefore, NIRS readings provide critical care physicians with an estimate of the cardiac output. NIRS is favoured by patients because it is non-invasive, painless, and does not require ionizing radiation.
Optical coherence tomography (OCT) is another NIR medical imaging technique capable of 3D imaging with high resolution on par with low-power microscopy. Using optical coherence to measure photon pathlength allows OCT to build images of live tissue and clear examinations of tissue morphology. Due to technique differences OCT is limited to imaging 1–2 mm below tissue surfaces, but despite this limitation OCT has become an established medical imaging technique especially for imaging of the retina and anterior segments of the eye, as well as coronaries.
A type of neurofeedback, hemoencephalography or HEG, uses NIR technology to measure brain activation, primarily of the frontal lobes, for the purpose of training cerebral activation of that region.
The instrumentation for NIRS/NIRI/DOT/OCT has advanced considerably in recent years, particularly in terms of quantification, imaging and miniaturization.
Peripheral NIRS
Peripheral microvascular function can be assessed using NIRS. The oxygen saturation of haemoglobin in the tissue (StO2) can provide information about tissue perfusion. A vascular occlusion test (VOT) can be employed to assess microvascular function. Common sites for peripheral NIRS monitoring include the thenar eminence, forearm and calf muscles.
Particle measurement
NIR is often used in particle sizing in a range of different fields, including studying pharmaceutical and agricultural powders.
Industrial uses
As opposed to NIRS used in optical topography, general NIRS used in chemical assays does not provide imaging by mapping. For example, a clinical carbon dioxide analyzer requires reference techniques and calibration routines to measure changes in CO2 content accurately. In this case, calibration is performed by adjusting the zero control of the sample being tested after purposefully supplying 0% CO2 or another known amount of CO2 in the sample. Normal compressed gas from distributors contains about 95% O2 and 5% CO2, which can also be used to adjust the %CO2 meter reading to exactly 5% at initial calibration.
See also
Chemical imaging
Fourier transform infrared spectroscopy
Fourier transform spectroscopy
Functional near-infrared spectroscopy (fNIR/fNIRS)
Hyperspectral imaging
Infrared spectroscopy
Optical imaging
Rotational spectroscopy
Spectroscopy
Terahertz time-domain spectroscopy
Vibrational spectroscopy
References
Further reading
Kouli, M.: "Experimental investigations of non invasive measuring of cerebral blood flow in adult human using the near infrared spectroscopy." Dissertation, Technical University of Munich, December 2001.
Raghavachari, R., Editor. 2001. Near-Infrared Applications in Biotechnology, Marcel-Dekker, New York, NY.
Workman, J.; Weyer, L. 2007. Practical Guide to Interpretive Near-Infrared Spectroscopy, CRC Press-Taylor & Francis Group, Boca Raton, FL.
External links
NIR Spectroscopy NIR Spectroscopy News
Vibrational spectroscopy
Infrared technology | Near-infrared spectroscopy | Physics,Chemistry | 3,316 |
48,664,343 | https://en.wikipedia.org/wiki/Pit%20additive | A pit additive is a commercially produced material that aims to reduce fecal sludge build-up and control odor in pit latrines, septic tanks and wastewater treatment plants. Manufacturers claim to use effective microorganisms (EM) in their products. Current scientific evidence does not back up most claims made by manufacturers about the benefits. Removing sludge remains a problem in pit latrines and septic tanks.
Background
Pit additives are advocated for use in sanitation systems like pit latrines and septic tanks. Additives consist of packages of micro-organisms or enzymes or both. More than 1,200 septic system additives were estimated to be available in the U.S. in 2011. However, very little peer-reviewed and replicated field research exists to confirm the efficacy of biological additives.
Claimed benefits
Pit additive claims include an increase in speed of the breakdown of sludge, which may also decrease odor. The claim is based on assertions that the additive contains nutrients or certain aerobic (oxygen-breathing) micro-organisms that will break down the sludge. Research, however, finds that these claims are unlikely to be true. The amount of bacteria introduced by pit additives is insignificant compared to the bacteria already present in the pit or septic tank.
Applications
Septic tanks
Researchers from the U.S. carried out field experiments in 2011 to assess the effect of additives on the performance of 20 septic tanks. These septic tanks served residences at a mobile home park located in Orange County, North Carolina. The researchers distinguished between tanks that were well maintained, poorly maintained and maintained to an intermediate level. "Well maintained" was defined as de-sludged in the last 2–3 years; "poorly maintained" tanks had not been de-sludged for the last 15–20 years. Tanks put in the intermediate category fell somewhere in between.
Only well-maintained septic tanks showed some reduction in sludge build-up. To determine if the reduction could be attributed to pit additives, a follow-up study investigated the impact of three additives on just the well-maintained septic tanks. Overall, the research concluded there was limited evidence of additive impact on the performance of septic tanks. It should be stressed that these field experiments used additives other than EM (effective microorganisms), leaving the results open to the argument that the more varied composition of EM could make such additives more effective than the three additives tested.
The United States Environmental Protection Agency (USEPA) produced a fact sheet on the use of pit additives to improve the performance of septic tank treatment systems. The fact sheet concludes that bacteria and extracellular enzymes do not appear to significantly enhance normal biological decomposition processes in septic tanks. They go on to say that ‘some biological additives have been found to degrade or dissipate septic tank scum and sludge. However, whether this relatively minor benefit is derived without compromising long-term viability of the soil infiltration system has not been demonstrated conclusively’. They noted that some studies suggest that material degraded by additives in the tank actually adds to the suspended solids and other contaminants in the otherwise clarified septic tank effluent.
Wastewater treatment plants
Proponents claim the additives in wastewater can facilitate reduction in organic load and pathogen removal, leading to significant improvements in effluent quality. They also claim benefits relating to the rate of sludge build-up and odor reduction. One source claims that septic tank additives can reduce hydrogen sulphide and ammonia production. Their reasoning is that additives contain ‘natural’ organisms that prevail over the rather less ‘natural’ organisms that would otherwise dominate conditions in the treatment unit, whether this be a septic tank or some form of aerobic treatment. They even claim that by overcoming the effects of ‘unnatural’ substances such as bleach and other disinfectants, the use of septic tank additives allows septic tanks and other treatment systems to function in conditions that would otherwise have resulted in their becoming ‘dead’ and non-functional.
One short note claims that microorganisms in the additives produce various organic acids owing to the presence of lactic acid bacteria. These are said to secrete organic acids, enzymes, antioxidants, and metallic chelates, thus creating an antioxidant environment that assists in the enhancement of solid-liquid separation, which is the foundation for cleaning water. The authors of the note provide no explanation of how this works.
However, the findings from various studies around the world indicate that:
There is no reliable evidence that addition of pit additives to wastewater prior to treatment has a significant effect on pathogen concentrations.
The evidence on the effect of pit additives on settleability of solids and reduction in effluent BOD and suspended solids is mixed. Under some circumstances, it appears that adding pit additives can have some effect on both BOD and SS concentrations but the effect is not large and is not proven.
The available evidence suggests that any lasting effect of pit additives is dependent on regular application of the microorganisms combined with good maintenance of the treatment technology. This will require (a) a reliable supply chain for the pit additive and (b) management systems that ensure that the pit additive is added regularly and on schedule.
While pit additives can lead to some improvement in effluent quality, it is unlikely that the improvement would be enough to make a difference. Claims that pit additives can make otherwise ‘unsafe’ effluents ‘safe’ is unlikely to be justified.
Examples
Australia
Australian scientists investigated the effect of additives in a wastewater treatment plant and a number of septic tanks. Their aim was to test the hypothesis that the additive reduces sludge volumes. They found significant reduction in pH levels at the wastewater treatment plant together with improved settlement of sludge but with a significant increase in organic matter (measured as biological oxygen demand). Their results for the septic tanks showed a homogenization of conditions in the tanks after application of septic tank additives, which they suggested was due to domination by a particular type of micro-organism. However, they found no reduction in suspended solids concentration in the effluent and concluded that there were not sufficient changes in sludge volume in the wastewater treatment plant or suspended solids in the septic tanks to indicate a clear benefit from the use of these kinds of additives in wastewater.
Orangi Pilot Project in Karachi, Pakistan
A project in Karachi, Pakistan called the Orangi Pilot Project (OPP) has been making use of pit additives. The OPP promotes a treatment technology comprising a two-chamber tank. The first of these acts like the first compartment of a septic tank while the second is filled with gravel to provide filtration. It is not clear whether flow through the second compartment is upward or downward. This arrangement has some similarities to baffled reactor designs promoted by the German NGO BORDA, although standard BORDA designs provide more chambers, arranged in series and with all after the first chamber operating in an upward flow mode. The baffled reactor design is one of a number of ‘DEWATS’ (decentralised wastewater treatment systems) wastewater treatment technologies promoted by BORDA. All operate anaerobically and are examples of what might be termed enhanced primary treatment. If maintained well, enhanced primary treatment modules should perform better than a well maintained conventional septic tank but will still produce an effluent with high pathogen levels and relatively high biological oxygen demand and suspended solids concentrations.
The OPP is using the additives to improve the effluent produced at these small treatment plants, including the plant that treats effluent from a nursery in Karachi. It has also supported the installation of several small treatment plants using EM technology in rural Sindh and Punjab. Its partner organization Ali Hasan Mangi Memorial Trust (AHMMT) installed a small sewage treatment unit with additives to treat sewage from 300 houses in the village Khairodero in Larkana District. Another eleven are reported to be functioning and more are planned.
During discussions at the Urban Resource Centre in Karachi in late 2011, the late Parveen Rehman of OPP stated that adding pit additives to the inlet chamber of these treatment facilities had resulted in improved effluent quality and a significant reduction in smell. However, it seems that OPP had not attempted to quantify the improvement and had not made any formal assessment of the effect of the pit additive on effluent quality.
References
Toilets
Sanitation | Pit additive | Biology | 1,744 |
53,563,061 | https://en.wikipedia.org/wiki/Kousha%20Etessami | Kousha Etessami is a professor of computer science at the University of Edinburgh, Scotland, UK. He received his Ph.D. from the University of Massachusetts Amherst in 1995. He works on theoretical computer science, in particular on computational complexity theory, game theory and probabilistic systems.
References
External links
Year of birth missing (living people)
Living people
Computer scientists
Academics of the University of Edinburgh
Theoretical computer scientists | Kousha Etessami | Technology | 88 |
37,647,499 | https://en.wikipedia.org/wiki/Diamond%20operator | In number theory, the diamond operators 〈d〉 are operators acting on the space of modular forms for the group Γ1(N), given by the action of any matrix in Γ0(N) whose lower-right entry δ satisfies δ ≡ d (mod N). The diamond operators form an abelian group and commute with the Hecke operators.
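A minimal sketch of the action in LaTeX (this spells out the standard weight-k slash definition, which the stub above leaves implicit):

```latex
% For f a modular form of weight k for \Gamma_1(N) and
% \gamma = \begin{pmatrix} a & b \\ c & \delta \end{pmatrix} \in \Gamma_0(N)
% with \delta \equiv d \pmod{N}, the diamond operator acts by
\[
  (\langle d\rangle f)(z)
  \;=\; (f \mid_k \gamma)(z)
  \;=\; (cz+\delta)^{-k}\, f\!\left(\frac{az+b}{cz+\delta}\right).
\]
% The result is independent of the choice of \gamma, since any two such
% matrices differ by an element of \Gamma_1(N), under which f is invariant.
```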
Unicode
In Unicode, the diamond operator is represented by the character ⋄ (U+22C4 DIAMOND OPERATOR).
Notes
References
Modular forms | Diamond operator | Mathematics | 84 |
5,535,702 | https://en.wikipedia.org/wiki/Ed%20Carpenter%20%28artist%29 | Ed Carpenter (born 1946) is an artist specializing in large-scale public sculptures made of glass. His work can be found in conference centers, libraries, and airports.
Early life and education
Carpenter studied architecture at the Rhode Island School of Design, where he worked with Dale Chihuly. He attended the University of California, Berkeley from 1968 to 1971.
Glass technique
Carpenter specializes in large-scale installations in glass. He is known for his technical innovation using cold-bent tempered glass, encapsulated glass elements, and programmed lighting elements. His work is often described as "architectural".
Works
While working with Dale Chihuly, Carpenter created lead glass doors that are in the collections of the Corning Museum of Glass and the Toledo Museum of Art.
In 2019 he installed the first phase of a dichroic glass sculpture in the Portland Public Library, called "Mollie's Garden". The piece honored his mother, a library volunteer named Mollie Starbuck, who died in her 80s. His work "Aloft" is a 360-foot glass sculpture in the Wichita Dwight D. Eisenhower National Airport lobby and was featured in an event by the Wichita Art Museum on November 18, 2021.
He created a lobby sculpture for the Meydenbauer Convention Center in Bellevue, Washington; a large (17 meters x 18 meters x 6.5 meters) work for the Morgan Library at Colorado State University (commissioned by the Colorado Council on the Arts); and glass windows for the Christian Theological Seminary in Indianapolis, Indiana.
Other works include the Flying Bridge between buildings at Central Washington University, an installation at the Hokkaido Sports Center, and a large sphere for the atrium of Carlson school. He also created an outdoor sculpture for the Broadway pumphouse.
Personal life
Carpenter lives and has his studio in Portland, Oregon.
Writings
References
External links
Ed Carpenter's official web site
Living people
Artists from Portland, Oregon
Glass architecture
American glass artists
Rhode Island School of Design alumni
University of California, Berkeley alumni
1946 births | Ed Carpenter (artist) | Materials_science,Engineering | 404 |
18,193,125 | https://en.wikipedia.org/wiki/Reza%20Olfati-Saber | Reza Olfati-Saber is an Iranian roboticist and Assistant Professor of Engineering at the Thayer School of Engineering at Dartmouth College. Olfati-Saber is known for his work on the control and coordination of multi-robot formations. He has also worked on mobile sensor networks and on educational and outreach activities in robotics for disaster management and rescue operations.
Early life and education
Olfati-Saber was born in Iran. He received his B.S. degree in Electrical Engineering from Sharif University of Technology in 1994, and his S.M. degree in 1997 and Ph.D. degree in 2001, both in Electrical Engineering and Computer Science, from the Massachusetts Institute of Technology (MIT).
He was a postdoctoral scholar at the California Institute of Technology (Caltech) from 2001 until 2004.
Awards and honors
2010 – Presidential Early Career Award for Scientists and Engineers (PECASE), National Science Foundation (NSF)
References
External links
Dartmouth faculty website
Year of birth missing (living people)
Living people
Control theorists
Dartmouth College faculty
Iranian roboticists
Sharif University of Technology alumni
Massachusetts Institute of Technology alumni
Recipients of the Presidential Early Career Award for Scientists and Engineers | Reza Olfati-Saber | Engineering | 229 |
30,807,231 | https://en.wikipedia.org/wiki/Kamiumi | In Japanese mythology, the story of the occurs after the creation of Japan (Kuniumi). It concerns the birth of the divine (kami) descendants of Izanagi and Izanami.
Story
According to the Kojiki, various gods were born from the relationship between Izanagi and Izanami until the fire deity, Kagu-tsuchi, at birth burned Izanami's genitals and wounded her fatally. Izanagi, witnessing the death of his beloved wife, in his rage took the ten-grasp sabre and killed his child, Kagutsuchi. A number of gods were born from the blood and remains of Kagutsuchi. Subsequently, Izanagi went to the land of Yomi (the world of the dead) to find Izanami; however, when he found her, she had become a rotting corpse, and from her parts other gods had arisen, causing Izanagi to flee back to the world of the living. Then Izanagi performed the misogi ritual purification, through which more gods were born. The last of these are the three most important gods of Shinto: Amaterasu, goddess of the sun; Tsukuyomi, deity of the moon; and Susano'o, god of the sea.
Birth of the gods
After having created the Eight Large Islands (Ōyashima) and other islands during the creation of Japan, Izanagi and Izanami decided to give birth to other gods, among them household deities, deities of the wind, trees and meadows, all born spontaneously:
Ōgoto-oshi'o, male deity
, male deity
, female deity
, genderless deity and spirit
, male deity
, male deity
, male deity
, genderless deity and spirit
, male deity
, female deity
From the relationship between Haya'akitsuhiko and Haya'akitsuhime the following gods were born:
Awa-nagi, male deity
Awa-nami, female deity
, male deity
, female deity
, genderless deity and spirit
, genderless deity and spirit
, genderless deity and spirit
, genderless deity and spirit
, male deity
, genderless deity and spirit
Ōyama-tsumi (大山津見神, Ōyama-tsumi-no-kami; also romanized Ohoyama-tsumi), male deity (for his genealogy with Susano'o, refer to Ōyamatsumi)
, also known as , female deity
From the relationship between Ohoyamatsumi and Kaya-no-hime the following gods were born:
, genderless deity and spirit
, genderless deity and spirit
, genderless deity and spirit
, genderless deity and spirit
, genderless deity and spirit
, genderless deity and spirit
, male deity
, female deity
, also known as - genderless deity and spirit
, female deity, Goddess of food.
Kagu-tsuchi, also known by other names, male deity, Kami of fire and the hearth.
During Kagutsuchi's birth, Izanami's genitals were burned and she was mortally wounded. In her agony, from her vomit, urine and feces more gods were born.
, male deity born from the vomit and feces of Izanami
, female deity born from the vomit and feces of Izanami
, male deity born from the feces of Izanami
, female deity born from the feces of Izanami
Mizuhanome (Kami of water), female deity born from the urine of Izanami
Wakumusuhi = Tori-no-wakumusubi (Kami of agriculture), male deity born from the urine of Izanami
Wakumusuhi had a daughter:
1. Toyoukebime (goddess of agriculture), female deity
Death of Kagutsuchi
After the agony, Izanami died. Izanagi crept moaning around her body and mourned her death. From his tears, the female deity Nakisawame was born. Subsequently Izanagi buried Izanami on Mount Hiba. His sadness turned into anger and he decided to kill Kagutsuchi with a ten-grasp sword called Ame-no-ohabari (archaic reading: Ame-no-wohabari).
From the blood of Kagutsuchi the following gods emerged:
- Minor Star God.
The gods above were born from the blood that fell from the tip of the sword in the rocks.
, also known as or
The gods above were born from the blood that fell from the blade of the sword.
The gods above were born from the blood that fell from the handle of the sword.
Also, from the body of Kagutsuchi the following gods were born:
, emerged from Kagutsuchi's head;
, from the chest;
, from the abdomen;
Kurayama-tsumi (Kojiki: 闇山津見神; Nihon Shoki: 闇山祇), from the genitals;
, from the left arm;
, from the right arm;
, from the left foot;
Toyama-tsumi (Kojiki: 戸山津見神; Nihon Shoki: 戸山祇), from the right foot.
Land of Yomi
Izanagi then decided to bring back Izanami and goes to Yomi-no-kuni, the underworld. Crossing the gates to that world, he met Izanami and says to her:
Izanami replied:
On saying this, Izanami entered the palace of these gods. However, time passed and she did not return, and Izanagi began to despair. So he broke off one of the tines of the ornamental comb that he wore in the left bun (mizura) of his hair, lit it in order to light the place, and entered the world of the dead. He managed to find Izanami but was surprised to see that she had lost her beauty and become a rotting corpse, covered with maggots. From her body were born the eight gods of thunder, which were:
, from the head of Izanami;
, from her chest;
, from her abdomen;
, from her genitals;
, from her left arm;
, from her right arm;
, from her left foot;
, from her right foot.
Izanagi, shocked, decided to return home, but Izanami was embarrassed by his appearance and commanded the to chase Izanagi. In his flight, he took the head-dress from his head, and threw it to the ground where it turned into a bunch of grapes. The Yomotsushikome started to eat them but kept chasing the fleeing Izanagi. So he broke the tine of the comb that he wore in his right bun, and as he threw it to the ground it became bamboo shoots, prompting the Yomotsushikome to eat them and enabling Izanagi to flee.
However, Izanami decided to release the eight gods of thunder and 1,500 warriors from Yomi to continue the pursuit. Izanagi drew and brandished his Totsuka-no-Tsurugi sword to continue his flight. As they pursued him, Izanagi reached the , the slope that descends from the land of the living to Yomi. He took three peaches from a tree that had grown in that place and threw them at his pursuers so that they fled.
Izanagi commented:
These peaches were called .
Finally, Izanami herself pursued Izanagi, but he lifted a rock that a thousand men could not move and blocked the slope with it. At that moment, their eyes met for the last time.
Izanami said:
Izanagi replied:
These words explain the cycle of life and death in humans. For the same reason, Izanami is also known by other names, and the boulder that covers the entrance to the world of the dead lies in present-day Izumo, Shimane Prefecture.
Purification of Izanagi
Leaving Yomi, Izanagi decided to remove all the uncleanness from his body through a purification ceremony (misogi) consisting of a bath in the river at Ahakihara in Tachibana no Ono in Tsukushi. As he stripped his clothes and accessories onto the ground, the following twelve gods were born:
Tsukitatsu funato (衝立船戸神 – Post at the Road Bend) = Chimata no Kami, emerges from the staff.
Michi no nagachiha (道之長乳歯神 – Long Winding Way Stones), from the obi.
Tokihakashi (時量師神 – Time Keeper Loosed), from the handbag.
Wazurai no ushi (和豆良比能宇斯能神 – Master Miasma), from cloths.
Michi mata (道俣神 – Road Fork), from the hakama.
Akigui no ushi (飽咋之宇斯能神 – Master Filled Full), from the crown corona.
Oki zakaru (奥疎神 – Beyond Offshore), from the armband of the left hand.
Okitsu nagisa biko (奥津那芸佐毘古神 – Offshore Surf Lad), from the armband of the left hand.
Okitsu kaibera (奥津甲斐弁羅神 – Offshore Tide Lad), from the armband of the left hand.
He zakaru (辺疎神 – Beyond Shoreside), from the armband of the right hand.
Hetsu nagisa biko (辺津那芸佐毘古神 – Shoreside Surf Lad), from the armband of the right hand.
Hetsu kaibera (辺津甲斐弁羅神 – Shoreside Tide Lad), from the armband of the right hand.
Subsequently Izanagi is stripped of impurities from the land of Yomi. In this moment two gods were born:
, male deity
, male deity
Then, shaking off the curse, three gods were born:
, male deity
, male deity
, female deity
Then, when washing with water the lower parts of his body, two gods were born;
, genderless deity and spirit
, male deity
When washing the middle of his body, two more gods were born:
, genderless deity and spirit
, male deity
Finally, washing the upper part of his body, two more gods were born:
, genderless deity and spirit
, male deity
The trio of Sokotsu-watatsumi, Nakatsu-watatsumi and Uwatsu-watatsumi make up the group of deities called Sanjin Watatsumi, or the gods of water. The trio of Sokotsutsuno'o, Nakatsutsuno'o and Uhatsutsuno'o make up the Sumiyoshi Sanjin group of deities, gods of fishing and sea, to whom tribute is paid at Sumiyoshi Taisha.
In the last step of the purification ceremony, Izanagi washed his left eye, from which the female deity Amaterasu was born; washed his right eye, from which the deity Tsukuyomi was born; and washed his nose, from which the male deity Takehaya-susano'o, commonly known as Susano'o, was born.
With these three gods, called the "Three Precious Children", Izanagi ordered their investiture. Amaterasu received the mandate to govern Takamagahara, together with a necklace of jewels from Izanagi; Tsukuyomi was mandated to govern the Dominion of the Night; and Takehaya-susano'o (建速須佐之男命) = Susano'o was to rule the seas.
Notes
References
Bibliography
Japanese mythology
Creation myths
Kojiki | Kamiumi | Astronomy | 2,410 |
1,599,902 | https://en.wikipedia.org/wiki/Andrew%20B.%20Whinston | Andrew B. Whinston (born June 3, 1936), is an American economist and computer scientist. He serves as the Hugh Roy Cullen Centennial Chair in Business Administration and works as a Professor of Information Systems, Computer Science, and Economics, and Director of the Center for Research in Electronic Commerce (CREC) in the McCombs School of Business at the University of Texas at Austin.
In the late 1950s, he was a Sanxsay Fellow at Princeton University. Whinston received his PhD from the Carnegie Institute of Technology in 1962, when he also received the Alexander Henderson Award for Excellence in Economic Theory. He then started working at the economics department of Yale University, where he was a member of the Cowles Foundation. In 1964, he became Associate Professor of Economics at the University of Virginia. By 1966, he was a Full Professor at Purdue University, where he became the university's inaugural Weiler Distinguished Professor of management, economics, and computer science.
In 1962, Whinston published a research paper in the Journal of Political Economy on how non-cooperative game theory could be applied to issues in microeconomics. In a second paper entitled "A Model of Multi-Period Investment Under Uncertainty", which appeared in Management Science, he used nonlinear optimization methods to determine optimal portfolios over time.
Publications
Whinston has published papers in economics journals such as American Economic Review, Econometrica, Review of Economic Studies, Journal of Economic Theory, Journal of Financial Economics, and Journal of Mathematical Economics; in multidisciplinary journals such as Management Science, Decision Sciences, and Organization Science; in operations journals such as Operations Research, European Journal of Operational Research, Production and Operations Management, Journal of Production Research, and Naval Research Logistics; in mathematics journals such as Journal of Combinatorics, SIAM Journal on Applied Mathematics, and Discrete Mathematics; in accounting journals such as the Accounting Review and Auditing: A Journal of Practice and Theory; in marketing journals such as Marketing Science, Journal of Marketing, Journal of Marketing Research, and Journal of Retailing; in the premier journals devoted to information systems – Management Science, Decision Support Systems, MIS Quarterly, Journal of Management Information Systems, and Information Systems Research – and in computer science journals such as Communications of the ACM, ACM Transactions on Database Systems, ACM Transactions on Internet Technology, IEEE Internet Computing, and ACM Journal on Mobile Networking and Applications.
His publication record consists of more than 25 books, and 400 refereed publications.
Awards
In 1995, Whinston was honored by the Data Processing Management Association with its Information Systems Educator of the Year Award. In 2005, Whinston received the LEO Award for Lifetime Exceptional Achievement in Information Systems. This award, created by the Association for Information Systems Council and the International Conference on Information Systems Executive Committee, recognizes the work of outstanding scholars in the field.
In 2009, Whinston was honored with the Career Award for Outstanding Research Contributions at the University of Texas at Austin which recognizes significant research contributions made by a tenured or tenure-track faculty member. In 2009, the INFORMS Information System Society (ISS) honored Whinston by recognizing him as the inaugural INFORMS ISS Fellow for outstanding contributions to information systems research.
Bibliography
See also
Chance-constrained portfolio selection
References
External links
Andrew Whinston's personal website
Presentation of Andrew B. Whinston at UT Austin
The Center for Research in Electronic Commerce
Andrew Whinston's CV
American economists
American computer scientists
Princeton University fellows
Carnegie Mellon University alumni
Yale University faculty
Purdue University faculty
McCombs School of Business faculty
1936 births
Living people
Information systems researchers | Andrew B. Whinston | Technology | 723 |
52,322,936 | https://en.wikipedia.org/wiki/Nyhamna%20Gas%20Plant | The Nyhamna Gas Plant is a large natural-gas processing plant in Aukra, Møre og Romsdal, Norway. As of January 2018, Norway was the world's third-largest natural gas exporter, after Russia and Qatar.
History
Construction of the plant began around 2005 and was expected to cost about , including the long undersea pipeline. The gas plant was built for the Ormen Lange gas field, named after a ship of a Viking king. The head of the Ormen Lange project was Tom Rotjer. The site was built by Norsk Hydro in partnership with Royal Dutch Shell, ExxonMobil and Petoro, which is owned by the Norwegian government. When being built, the plant was Norway's largest construction project.
In 2005, Norway supplied 15% of the UK's natural gas. Once the gas plant was operational, 20% of the UK's gas came from the Langeled pipeline, supplying heat to around 10 million British people.
Shell took over as operator on 1 December 2007.
Statnett built a 180 MW gas power plant in 2008, but it only operated 400 hours in 10 years, and was put up for sale.
Operation
It is situated near Gossa island at Nyhamna. Nyhamna has about 3,000 residents.
Langeled pipeline
The Langeled pipeline was built for Norsk Hydro and began operation in 2007, running via the Sleipner gas field; as it passes through the Sleipner field, gas can be diverted to other countries. The pipeline runs 745 miles (1,200 km) to the Easington Gas Terminal in Yorkshire, England. It was built around the clock, with the pipeline sections welded on Acergy's construction ship LB200, which could lay about 4 km a day. The pipeline required 1.2 million tonnes of steel. Langeled was the responsibility of Statoil. The pipeline sections for the southern section were assembled at the Bredero Shaw site in Farsund in Southern Norway (Sørlandet), the northern section at Måløy in Western Norway, and the middle sections at Sotra in Western Norway.
From Nyhamna to Sleipner, the pipeline is 42 inches diameter, and from Sleipner to Easington it is 44 inches diameter. The section from Sleipner to Easington became operational on Sunday 1 October 2006. The project for the pipeline had begun in October 2004.
Gas fields
Ormen Lange
Ormen Lange is around 65 miles west of the gas plant. The field was discovered by Norsk Hydro in 1997. The wells were drilled by the ship West Navigator. The operation of Ormen Lange was owned 18% by Norsk Hydro, 17% by Norske Shell, 36% by Petoro, 10% by Statoil, 10% by Dansk Olie og Naturgas, and 7% by ExxonMobil (Esso). Ormen Lange is Norway's second largest gas field.
See also
Energy in Norway
References
External links
Norske Shell
Nyhamna Expansion Project
2007 establishments in Norway
Buildings and structures in Møre og Romsdal
Chemical plants in Norway
Energy infrastructure completed in 2007
Natural gas infrastructure in Norway
Natural gas plants
Norsk Hydro
Shell plc buildings and structures | Nyhamna Gas Plant | Chemistry | 687 |
20,867,069 | https://en.wikipedia.org/wiki/NGC%20925 | NGC 925 is a barred spiral galaxy located about 30 million light-years away in the constellation Triangulum. German-British astronomer William Herschel discovered this galaxy on 13 September 1784.
The morphological classification of this galaxy is SB(s)d, indicating that it has a bar structure and loosely wound spiral arms with no ring. The spiral arm to the south is stronger than the northern arm, with the latter appearing flocculent and less coherent. The bar is offset from the center of the galaxy and is the site of star formation all along its length. Both of these morphological traits—a dominant spiral arm and the offset bar—are typically characteristics of a Magellanic spiral galaxy. The galaxy is inclined at an angle of 55° to the line of sight along a position angle of 102°.
NGC 925 is a member of the NGC 1023 Group, a nearby, gravitationally bound group of galaxies associated with NGC 1023. However, the nearest member lies at least distant from NGC 925. There is a cloud of neutral hydrogen with a mass of about 10 million solar masses attached to NGC 925 by a streamer. It is uncertain whether this is a satellite dwarf galaxy, the remnant of a past tidal interaction, or a cloud of primordial gas.
Although no supernovae have been observed in NGC 925 yet, a luminous red nova, designated AT 2023nzt (type LRN, mag. 19), was discovered on 26 July 2023.
References
External links
Barred spiral galaxies
Triangulum
0925
01913
009332
NGC 1023 Group
Astronomical objects discovered in 1784
Galaxies discovered in 1784
Discoveries by William Herschel | NGC 925 | Astronomy | 339 |
54,288,689 | https://en.wikipedia.org/wiki/NGC%207098 | NGC 7098 is a double-barred spiral galaxy located about 95 million light-years from Earth in the constellation of Octans. NGC 7098 has an estimated diameter of 152,400 light-years. It was discovered by astronomer John Herschel on September 22, 1835.
NGC 7098 has a very prominent bar that is shaped like a broad oval with very prominent, nearly straight ansae. Surrounding the bar, an inner ring made of four tightly wrapped spiral arms is found. Located outside of the inner ring, a well-defined outer ring surrounding the inner region appears to have formed due to the wrapping of two spiral arms. It appears that both rings are being affected by new star formation. However, there is no star formation in the core of NGC 7098 as shown by the absence of dust lanes.
See also
NGC 7013
NGC 7020
References
External links
SIMBAD
NGC 70-- Project
Ring galaxies
Barred spiral galaxies
Octans
7098
67266
Astronomical objects discovered in 1835 | NGC 7098 | Astronomy | 208 |
292,224 | https://en.wikipedia.org/wiki/Topologist%27s%20sine%20curve | In the branch of mathematics known as topology, the topologist's sine curve or Warsaw sine curve is a topological space with several interesting properties that make it an important textbook example.
It can be defined as the graph of the function $\sin(1/x)$ on the half-open interval $(0, 1]$, together with the origin, under the topology induced from the Euclidean plane: $$T = \left\{ \left(x, \sin\tfrac{1}{x}\right) : x \in (0, 1] \right\} \cup \{(0, 0)\}.$$
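The rapid oscillation near $x = 0$ can be visualized numerically; a minimal Python/matplotlib sketch (the lower cutoff of 0.001 is an arbitrary choice to keep the sample away from the singularity at $x = 0$, where the oscillations become infinitely dense):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample the graph of sin(1/x) on (0, 1]; the curve oscillates faster and
# faster as x approaches 0, so a dense grid is needed near the origin.
x = np.linspace(1e-3, 1.0, 200_000)
y = np.sin(1.0 / x)

plt.plot(x, y, linewidth=0.3)
plt.scatter([0.0], [0.0], color="red", zorder=3)  # the adjoined point (0, 0)
plt.xlabel("x")
plt.ylabel("sin(1/x)")
plt.title("Topologist's sine curve")
plt.show()
```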
Properties
The topologist's sine curve $T$ is connected but neither locally connected nor path connected. This is because it includes the point $(0, 0)$ but there is no way to link the function to the origin so as to make a path.
The space $T$ is the continuous image of a locally compact space (namely, let $V$ be the space $\{-1\} \cup (0, 1]$, and use the map $f \colon V \to T$ defined by $f(-1) = (0, 0)$ and $f(x) = \left(x, \sin\tfrac{1}{x}\right)$ for $x \in (0, 1]$), but $T$ is not locally compact itself.
The topological dimension of $T$ is 1.
Variants
Two variants of the topologist's sine curve have other interesting properties.
The closed topologist's sine curve can be defined by taking the topologist's sine curve and adding its set of limit points, $\{(0, y) : y \in [-1, 1]\}$; some texts define the topologist's sine curve itself as this closed version, as they prefer to use the term 'closed topologist's sine curve' to refer to another curve. This space is closed and bounded and so compact by the Heine–Borel theorem, but has similar properties to the topologist's sine curve: it too is connected but neither locally connected nor path-connected.
The extended topologist's sine curve can be defined by taking the closed topologist's sine curve and adding to it the set $\{(x, 1) : x \in [0, 1]\}$. It is arc connected but not locally connected.
See also
List of topologies
Warsaw circle
References
Topological spaces | Topologist's sine curve | Mathematics | 352 |
68,628,242 | https://en.wikipedia.org/wiki/1%2C2-Dichloro-2-nitrosopropane | 1,2-Dichloro-2-nitrosopropane is a chlorinated nitrosoalkane. It is a deep blue liquid with powerful lachrymatory effects.
See also
Chloropicrin
Trifluoronitrosomethane
Trichloronitrosomethane
References
Nitroso compounds
Organochlorides
Lachrymatory agents
Pulmonary agents | 1,2-Dichloro-2-nitrosopropane | Chemistry | 85 |
69,389,829 | https://en.wikipedia.org/wiki/Age-1 | The age-1 gene is located on chromosome 2 in C. elegans. It gained attention in 1983 for its ability to induce long-lived C. elegans mutants. The age-1 mutant, first identified by Michael Klass, was reported by Johnson et al. in 1987 to extend mean lifespan by over 50% at 25 °C compared with the wild-type worm (N2). Development, metabolism, and lifespan, among other processes, have been associated with age-1 expression. The age-1 gene is known to share a genetic pathway with the daf-2 gene that regulates lifespan in worms. Additionally, both age-1 and daf-2 mutants depend on the daf-16 and daf-18 genes to promote lifespan extension.
Long-lived age-1 mutants are resistant to oxidative stress and UV light. Age-1 mutants also have a higher DNA repair capability than wild-type C. elegans. Knockdown of the nucleotide excision repair gene Xpa-1 increases sensitivity to UV and reduces the life span of the long-lived mutants. These findings support the hypothesis that DNA repair capability underlies longevity.
Insulin/IGF-1 signaling (IIS) pathway
The age-1 gene encodes AGE-1, the C. elegans ortholog of the catalytic subunit of phosphoinositide 3-kinase (PI3K), which plays an important role in the insulin/IGF-1 signaling (IIS) pathway. This pathway is activated upon binding of an insulin-like peptide to the DAF-2/IGF1R receptor. Binding causes dimerization and phosphorylation of the receptor, which induces recruitment of the DAF-2 receptor substrate IST-1. Subsequently, IST-1 promotes activation of both AGE-1/PI3K and its adaptor subunit AAP-1. AGE-1 then catalyzes the conversion of phosphatidylinositol 4,5-bisphosphate (PIP2) to phosphatidylinositol 3,4,5-trisphosphate (PIP3); this conversion can be reversed by DAF-18 (PTEN in humans). PIP3 activates its major effector PDK-1, which in turn promotes phosphorylation of AKT-1/2 and SGK-1. This phosphorylation inhibits the transcription factors DAF-16/FOXO and SKN-1, preventing the expression of downstream genes involved in longevity. In other words, activation of the IIS pathway blocks expression of genes known to extend lifespan by preventing DAF-16 from translocating to the nucleus and activating them.
History
The age-1 gene was first characterized by Thomas Johnson as a follow-up study to Michael Klass's findings on the isolation of long-lived C. elegans mutants. Johnson demonstrated that long-lived age-1 (hx546) mutants did not have significant differences in growth rate or development. Additionally, all age-1 isolates also carried fer-15 (a temperature-sensitive fertility mutation), suggesting that both genes were inherited together. This result suggested that the age phenotype was caused by a single mutation. Johnson proposed a negative pleiotropy theory, in which the age-1 gene is beneficial early in life but harmful at a later stage, on the basis that the long-lived mutants had decreased self-fertility compared to controls. This theory was contradicted in 1993 by Johnson himself when he removed the fertility defect from the mutant background, and the animals still lived long. After the age-1 gene was discovered, Cynthia Kenyon published groundbreaking research on doubling the lifespan of C. elegans through the insulin/IGF-1 pathway. The age-1 gene plays a pivotal role in the IGF-1 pathway and encodes the homolog of the phosphatidylinositol-3-OH kinase (PI3K) catalytic subunit in mammals.
See also
Unfolded protein response
Genetics of aging
References
Aging-related genes
Caenorhabditis elegans genes | Age-1 | Biology | 875 |
10,691,989 | https://en.wikipedia.org/wiki/Shoulder%20pad%20%28fashion%29 | Shoulder pads are a type of fabric-covered padding used in men's and women's clothing to give the wearer the illusion of broader, less sloping shoulders. Early shoulder pads were shaped as a semicircle or small triangle and were stuffed with wool, cotton, or sawdust. They were positioned at the top of the sleeve to extend the shoulder line; a good example of this is their use in "leg o' mutton" sleeves or the smaller puffed sleeves based on styles from the 1890s. In men's styles, shoulder pads are often used in suits, jackets, and overcoats, usually sewn at the top of the shoulder and fastened between the lining and the outer fabric layer. In women's clothing, their inclusion depends on the fashion taste of the day. From a non-fashion point of view they generally serve people with narrow or sloping shoulders, but shoulder pads can also be necessary in a suit or blazer to compensate for certain fabrics' natural properties, most notably the weight of suede in suede blazers. There are also periods when pads intended to exaggerate the width of the shoulders are favored. As such, they were popular additions to clothing (particularly business clothing) during the 1930s and 1940s; the 1980s (encompassing a period from the late 1970s to the early 1990s); and the late 2000s to early 2010s.
1930 to 1945
In sports, the shoulder pad was invented in 1877 by a Princeton football player and was used in American football. In women's fashion, shoulder pads originally became popular in the 1930s when fashion designers Elsa Schiaparelli and Marcel Rochas included them in their designs of 1931. Though Rochas may have been the first to present them, Schiaparelli was the most consistent in promoting them during the 1930s and '40s and it is her name that came to be most associated with them. Both designers had been influenced by the extravagant shoulder flanges and small waists of traditional Southeast Asian ceremonial dress. Costume designer Travis Banton's broad-shouldered designs for Marlene Dietrich also influenced public tastes.
Soon, broad, padded shoulders dominated fashion, seen even in eveningwear and perhaps reaching a peak of variety in 1935-36, when even Vionnet showed them; Rochas presented high, pinched-up shoulders; and Piguet outdid even Rochas by extending his widened shoulders vertically like oars or paddles. Amid all this competing extravagance, the widest shoulders were still said to come from Schiaparelli, who hadn't given them up even when they briefly dropped out of favor with designers in 1933.
War was in the air during this entire period, and fashion reflected it in epaulettes and other martial details, but after World War II began in 1939, women's fashions became even more militarised. Jackets, coats, and even dresses in particular were influenced by masculine styles and shoulder pads became bulkier and were positioned at the top of the shoulder to create a solid look that sloped slightly toward the neck.
The shoulder-padded style had now become universal, found in all garments except lingerie, so standard that when US designer Claire McCardell wanted to remove them from her garments in 1940, her financiers feared their sales would suffer and insisted that pads be retained. McCardell's innovative response was to put them in with very simple stitching so that they could be easily removed by the wearer, prefiguring the flexibility of the velcro-fastened shoulder pads of the 1980s. The following year, British designer Molyneux also eliminated shoulder pads, part of a prophetic trend in high fashion that would be carried further by Balenciaga in 1945 and culminate in Dior's slope-shouldered 1947 Corolle collection.
Big shoulders were still popular in 1945, when Joan Crawford wore a fur coat with wide, exaggerated shoulders, also designed by Adrian, in the film Mildred Pierce.
In men's fashion, zoot suits had their own share of popularity. A zoot suit was based on a regular two-piece suit but cut one or two sizes larger, and its shoulders were padded accordingly.
During this period, stiff, felt-covered cotton batting was the material used for most shoulder pads, a combination that allowed for easy adjustment but didn't hold its shape very well when washed.
1945 to 1970
Balenciaga's 1945 endorsement of sloped shoulders signaled the direction that fashion was heading, and this was confirmed with Christian Dior's transformative 1947 Corolle collection, characterized by a striking natural shoulder line.
The popularity of shoulder pads with the public, too, ultimately tapered off later in the decade, after the war was over and women yearned for a softer, more feminine look.
During the late 1940s to about 1951, some dresses featured a soft, smaller shoulder pad with so little padding as to be barely noticeable. Its function seems to have been to slightly shape the shoulder line. By the 1950s, shoulder pads appeared only in jackets and coats—not in dresses, knitwear or blouses as they had previously during the heyday of the early 1940s.
Some of the rounded-shoulder, barrel-shaped coats of the late 1950s, particularly those of Balenciaga and Givenchy, contained shoulder pads to widen the rounded line.
By the early 1960s, coat and jacket shoulder pads slowly became less noticeable (with Marc Bohan's fall 1963 collection for Dior a notable exception) and midway through the decade, shoulder pads had disappeared.
1970s
Shoulder pads made their next appearance in women's clothing in the early 1970s, through the influence of British fashion designer Barbara Hulanicki and her label Biba. Biba produced designs influenced by the styles of the 1930s and 1940s, and so a soft version of the shoulder pad was revived. Ossie Clark was another London designer using shoulder pads at the time, showing forties-revival suits as early as 1968. During the first five years of the 1970s, a number of designers in other fashion capitals also presented padded shoulders with an explicit 1940s inspiration, constituting a minor trend that peaked in 1971. In 1970, Yves Saint Laurent showed forties-themed padded shoulders; in 1971, Angelo Tarlazzi, Yves Saint Laurent, Karl Lagerfeld for Chloé, Marc Bohan for Dior, Valentino, Jean-Louis Scherrer, Guy Laroche, Michel Goma for Patou, Michele Aujard, Thierry Mugler, and many New York designers; in 1972, Jean-Louis Scherrer and Scott Barrie; in 1973, Valentino, Jean-Louis Scherrer, and Daniel Hechter; and in 1974, Jean-Louis Scherrer and Nino Cerruti. These padded shoulders never reached mainstream acceptance, though; Saint Laurent's forties-revival attempts in particular were widely criticized, and so the look was relatively limited in reach, with designers showing and the public preferring the relaxed, natural, often jeans-based clothing styles typical of the times.
During the mid-1970s, Saint Laurent and a few others did show an occasional padded-shoulder jacket scattered among the popular ethnic and peasant looks, but sensibly-proportioned, easy, and contemporary in appearance instead of being part of a forties look, suitable for the standard officewear women were preferring as they entered the workforce in greater numbers during the decade, a look codified with the 1977 publications of John T. Molloy's The Woman's Dress for Success Book and Michael Korda's Success!. The shoulder padding occasionally seen in these business blazers was unobtrusive, no more pronounced than in a standard men's suit jacket, and the most high-fashion versions carried no pads at all, in line with the unconstructed Big Look that dominated the fashion world at the time.
Fall 1978
For fall 1978, designers in all fashion capitals suddenly endorsed wide, padded shoulders across the board, introducing the broad-shouldered styles that would characterize the 1980s. There had been some signs of a move toward broader shoulders the previous year, but it would be a January 1978 collection from Yves Saint Laurent that would be cited as the first clear expression of the trend when Saint Laurent showed a handful of jackets with exaggerated shoulder padding over slim trousers. Jean-Louis Scherrer showed somewhat similar square-shouldered designs two days before Saint Laurent, but it was Saint Laurent's shoulders that made an impression on the press. In later years, there would be various claims about who began the eighties big-shoulders trend, with Norma Kamali, Giorgio Armani, and several others variously cited as the exclusive originator, but Saint Laurent was the designer credited by sources at its 1978 inception with launching the trend.
When most of the rest of the fashion world showed broad-shouldered looks a couple of months later, there would be two distinct versions of it. The first, favored by Paris designers like Saint Laurent, Karl Lagerfeld for Chloé, Thierry Mugler, Claude Montana, Pierre Cardin, Jean-Claude de Luca, Anne Marie Beretta, France Andrevie, and a number of others, was an explicit but exaggerated 1940s-revival silhouette based largely on tailored suits and dresses, though more a slim-skirted haute couture forties look than the flared-skirt, World War II Utility Suit-inspired shapes flirted with by Saint Laurent in the early seventies, no platform shoes or snoods this time. This first version was referred to as retro and included 1940s accessories, some mid-20th-century sci-fi looks, and military influences.
The second was a more contemporary sportswear look in which shoulder pads were added to easy but slimmed-down casualwear, favored largely by US and Italian designers like Perry Ellis, Norma Kamali, Calvin Klein, and Giorgio Armani.
This time, the shoulder line was usually continuous from outer edge to neck, without the dip toward the center seen in the 1940s, and the pads used, even when enormous, were much lighter and held their shape better than the ones used in the 1940s, now most often made of foam and other lightweight, well-shaped, moldable materials. As shoulder pads hadn't been this common in womenswear in decades, some in the fashion industry worried that the tailoring skills necessary for them had been lost and measures were taken to train workers in their proper placement.
Initially, this big change from the natural shoulder of the sixties and seventies could be extreme, with some designers showing shoulders three feet wide and others presenting pagoda shoulders, and the buying public was strongly resistant. Undeterred, designers continued to present the look, slowly acclimating the public to it until it became one of the most characteristic and popular fashion trends of the 1980s.
Most designers did adopt the new trend of padded shoulders, but a few prominent designers, Kenzo, Ralph Lauren, and Emanuel Ungaro among them, refrained, at least at first. Kenzo mostly adhered to his popular, easy, comfortable clothes even during the shoulder-padded eighties. Ralph Lauren continued with his familiar English country classics and devoted his fall 1978 collection to a cowboy theme, his shoulders the same size they had been in previous seasons. He wouldn't adopt the new big-shouldered silhouette until the following year and it would remain only a minor part of his offerings into the eighties. Ungaro would also only resist the new broad-shoulders trend for a season or two, during which he continued to show the easy, seventies Soft Look/Big Look, before enthusiastically adopting big-shoulder styles in 1979 and making the look his signature the following decade.
Shoulder Pads in 1970s Menswear
Standard, mass-market menswear during the 1970s continued to feature standard, unobtrusive shoulder pads shaping suits and sport jackets, but more high-fashion menswear basically followed the same trajectory as high-fashion womenswear, with a delay of about a season or two. Thus, there was a removal of shoulder pads and other internal structuring during the easy, oversized, unconstructed Big Look or Soft Look era of the mid-seventies, spearheaded in womenswear by Kenzo Takada in 1973-74 and in menswear by Giorgio Armani a couple of years later. When high-fashion womenswear reverted to highly structured garments with big shoulder pads for fall of 1978, high-fashion menswear followed suit the following year, Cardin replicating his women's pagoda shoulders in his men's suits and even Armani adding unusually pronounced shoulder pads to his men's jackets, a trend that would continue during the following decade.
1980s
The early 1980s continued a trend begun in the late 1970s toward a resurgence of interest in the ladies' evening wear styles of the early 1940s, with peplums, batwing sleeves and other design elements of the times reinterpreted for a new market. The shoulder pad helped define the silhouette and continued to be made in the cut foam versions introduced in the fall 1978 collections, especially in well-cut suits reminiscent of the World War II era. These styles had initially been resisted by the public at their 1978 introduction, but designers continued to present exaggerated shoulder pads into the eighties so that they saturated the market and women did come to adopt them, with everyone from television celebrities to politicians wearing them.
For example, British Prime Minister Margaret Thatcher was internationally noted for her adoption of these fashions as they more and more became the norm. Before too long, these masculinized shapes were adopted by women seeking success in the corporate world, women who in the mid-seventies had worn sensibly-proportioned blazers for the same purpose, and exaggerated shoulder pads later became seen as an icon of women's attempts to smash the glass ceiling, a mission that was aided by their notable appearance in the US TV series Dynasty, whose stars' broad-shouldered, Valentino-inspired outfits were designed by Nolan Miller.
As the decade wore on, exaggerated shoulder pads became the defining fashion statement of the era, known as power dressing (a term that had previously been applied to the more sensibly proportioned business blazers of the mid-seventies) and bestowing the perception of status and position onto those who wore them. Some of the exaggerated shoulder pad sizes from the fall 1978 introduction of the trend became accepted and even common among the public by the mid-eighties. Every garment from the brassiere upwards would come with its own set of shoulder pads, with women frequently layering one shoulder-padded garment atop another, a trend launched by designers in 1978.
To prevent excessive shoulder padding, velcro was sewn onto the pads so that the wearer could choose how many sets to wear. The ability to remove shoulder pads also helped prevent deforming the pads in the wash, but discomfort could result if the pad wasn't attached securely to the velcro strip and the rough side scratched the skin. Other problems experienced by women as shoulder pads became widespread included slipping and displacement of the pads in oversized garments and interference with purse straps.
Prominent designers of big shoulders who had name recognition with the public during this period included Norma Kamali, Emanuel Ungaro, and Donna Karan. Kamali was one of a number of designers who, instead of just reviving highly tailored 1940s-style suits, added large shoulder pads to more contemporary sportswear styles, achieving great fame and influence in 1980 by showing sweatshirt-fabric versions of the flounced, hip-yoked, mini-length skirts she had introduced in 1979 (called rah-rah skirts in the UK) and presenting them with hugely shoulder-padded tops in the same material. Some made the plausible claim that the worldwide success of this collection is what finally made shoulder pads acceptable to the public after two or three years of designers promoting them. Ungaro became perhaps the most commercially successful of the Paris designers of the period by maximizing the use of seductive-looking shirring, ruching, and draping in large-shouldered dresses and suits, reintroducing a Schiaparelli-era trend of Edwardian revival. Donna Karan, who had achieved fame in the 1970s as one of the designers behind the Anne Klein label, opened her own house in the mid-eighties, specializing in versatile separates for working women as she had in the seventies, but with eighties-style big shoulder pads and more formal glamor added to conform to the times. Though distracting to the eye today, exaggerated shoulder pads were so normal during the eighties that the huge shoulders of Karan, Ungaro, and others were often not even commented on by fashion writers.
Throughout the Fall 1978-through-1980s big-shoulder-pads period, designers and fashion writers often said that the current year's shoulders were not as big as the previous year's. Often, means besides or in addition to shoulder pads were used to enlarge the shoulder, including puff-top sleeves, tucks and pleats, shoulder flanges, and stiffened ruffles. Yet, pronounced shoulder padding continued in high fashion through the mid-eighties. The most consistent in showing particularly huge ones was probably Claude Montana, who declared in 1985, "Shoulders forever!" Nicknamed "King of the Shoulder Pad," Montana's silhouette designs were credited for defining the 1980s power-dressing era. There were some designers who never really took them up, particularly Japanese designers like Kenzo and Issey Miyake, but by and large, most put them in everything, with almost all creating their own versions of the heavily structured, prominently shoulder-padded eighties suit jacket, even normally independent designers like Mary McFadden, Jean Muir, André Courrèges, and Giorgio di Sant'Angelo.
Eighties designers even incorporated big shoulder pads when they were doing revival styles from earlier, non-shoulder-padded eras like the 1950s and 1960s. For instance, a version of the 1950s chemise dress was widely shown by designers from the 1978 inception of the big-shoulder era into the eighties, but with shoulder pads instead of authentic 1950s sloped shoulders. Similarly, when Thierry Mugler did sixties-revival styles in 1985, they included his characteristic enormous shoulder pads. Even sixties-revivalist Stephen Sprouse showed his period-perfect shift and trapeze minidresses in the eighties with broad-shouldered jackets and topcoats. Designers producing more eighties-looking minidresses added shoulder pads because they felt that prominent shoulders helped balance out the increased expanse of leg. During a brief general designer return to a sort of mid-seventies style of long dirndl skirts and shawls for Fall 1981, most shoulders remained broad and padded, very unlike the seventies.
All of this had an effect on the public, so that by the end of the era, some mass-market shoulder pads were the size of dinner plates and people were no longer shocked by them as they had been at their 1978 introduction.
During the mid-eighties, though, there were clear signs of a move away from big shoulder pads among several prominent designers, with Vivienne Westwood introducing her famous 1985-86 mini-crini specifically to, as she put it, "kill this big shoulder." Christian Lacroix's celebrated mini-pouf skirt collections of 1986-87 were dominated by sloping, fichu shoulders, and even Karl Lagerfeld, who had been an early leader in the 1978 move to huge shoulders, for 1986 took pads from the shoulders and placed them visibly on the outside of the hips. Two years later, he would proclaim that shoulders would now be "tiny." Yves Saint Laurent had initiated the eighties big-shoulder trend in January 1978 and had been a shoulder-pad stalwart throughout the intervening years, but in 1988 even his shoulders, while still padded, had been noticeably narrowed. The two designers most noted for showing huge shoulders at the start of the era, Thierry Mugler and Claude Montana, brought their shoulders down in size somewhat mid-decade, with Montana giving up big shoulders entirely by 1988, when he began showing collections with completely natural shoulders. Avant-garde designers like Adeline André and Marc Audibet had long shown sloped shoulders with no pads, as had Romeo Gigli, who was hailed as the most prophetic designer of the end of the eighties. He showed almost exclusively natural, sloping shoulders, even on tailored jackets. This direction among designers was clear enough that in The Washington Post's New Year in/out list for 1989, "Shoulder pads" were listed as out and "Shoulders" were listed as in.
The public and retailers, though, had embraced shoulder pads wholeheartedly by the end of the decade, feeling that they filled out their form and gave clothes a more saleable "hanger appeal." Shoulder pad manufacturers were flourishing, with literally millions of pads produced every week. Many women seemed reluctant to give up big shoulder pads as designers began sending new signals in the late eighties. Prominent shoulder pads would not completely disappear until into the nineties.
Shoulder Pads in 1980s Menswear
In menswear, the exaggerated shoulder pads that had been introduced into high-fashion clothing in 1979 would continue to various degrees throughout the eighties, even becoming mainstream, with many everyday business suits having more pronounced shoulders than had usually been worn in the seventies. High-fashion shoulder pad shapes would vary with the whims of designers, a sharp-edged pad preferred one season, a more rounded pad preferred another. Part of what drove these styles was the increased proliferation of serious working out in the eighties after widespread fitness and health pursuits had emerged in the seventies. Near-bodybuilder physiques became normal sights starting in the eighties for everyday people, both on the streets and in advertising, and jacket shapes seemed to echo this, sometimes by padding the shoulders and shaping the cut even more to a V-shape, other times by leaving out or reducing the pads to allow the newly built-up wearer's own body to give the jacket shape. By the end of the eighties, there was a fad for often brightly colored sport jackets with big shoulders worn over deep-cut, also often brightly colored muscle tank tops or string tank shirts, or even no shirt at all, letting a well-worked-out torso show and sometimes allowing the shoulder-padded jacket to slide off the wearer's own chiseled shoulder, a style that would continue into the early nineties.
1990s
The shoulder pad fashion carried over from the late 1980s with continued popularity in the early 1990s, but wearers' tastes were changing due to a backlash against 1980s culture. Some designers continued to produce ranges featuring shoulder pads into the mid-1990s, as shoulder pads were prominent in women's formal suits and matching top-bottom attire, highly exemplified in earlier episodes of The Nanny from 1993 and 1994, where costume designer Brenda Cooper outfitted star Fran Drescher in things like late-eighties-style square-shouldered jackets by Moschino and Patrick Kelly. The velcro-fastened shoulder pads of the eighties were still familiar items in the early nineties. In 1993, a US patent was even registered for a removable shoulder pad that contained a hidden pocket to hide valuables. But as the decade wore on, shoulder-padded styles became outdated and were shunned by young and fashion-conscious wearers. Appearances were reduced to smaller, subtler versions augmenting the shoulder lines of jackets and coats.
2000s and 2010s
The late 2000s and early 2010s saw the resurgence of shoulder pads. Many young women imitated pop artists, mainly Lady Gaga and Rihanna, who were known for their use of shoulder pads in their stylistic outfits. There was a large presence of shoulder pads on many runways, in fashion designer collections, and a revival of 1980s trends became mainstream among many people who were interested in them. By the 2009-2010 seasons, shoulder pads had made their way back into the mainstream market. By 2010 many retailers like Wal-Mart had shoulder pads on at least half of all women's tops and blouses.
The late 2010s saw another resurgence of shoulder pads. With the rise of the Me Too movement and other female empowerment movements, the increase of women being elected to political positions, and a continuing revival of 1980s trends, many are opting to wear clothes with shoulder pads.
See also
1930–1945 in fashion
1980s fashion
Epaulette
References
External links
Parts of clothing
1930s fashion
1940s fashion
Shoulder | Shoulder pad (fashion) | Technology | 5,008 |
58,354,188 | https://en.wikipedia.org/wiki/Call-to-gate%20system | A call-to-gate system is an airport terminal design in which passengers are kept in a central area until shortly before their flight is due to board, rather than waiting near their gate. The international terminal at Calgary International Airport was the first terminal in North America to use this system, which is also used by European airports such as London Heathrow. The system is used to decrease the amount of time that passengers spend around the gate area, thereby increasing the amount of time they spend in the retail areas of the terminal.
References
Airport infrastructure | Call-to-gate system | Engineering | 109 |
30,149,053 | https://en.wikipedia.org/wiki/Insertion%20reaction | An insertion reaction is a chemical reaction where one chemical entity (a molecule or molecular fragment) interposes itself into an existing bond of typically a second chemical entity, e.g. the insertion of a molecule A into an X–Y bond: X–Y + A → X–A–Y.
The term only refers to the result of the reaction and does not suggest a mechanism. Insertion reactions are observed in organic, inorganic, and organometallic chemistry. In cases where a metal-ligand bond in a coordination complex is involved, these reactions are typically organometallic in nature and involve a bond between a transition metal and a carbon or hydrogen. It is usually reserved for the case where the coordination number and oxidation state of the metal remain unchanged. When these reactions are reversible, the removal of the small molecule from the metal-ligand bond is called extrusion or elimination.
There are two common insertion geometries: 1,1 and 1,2. Additionally, the inserting molecule can act either as a nucleophile or as an electrophile toward the metal complex. These behaviors are discussed in more detail below for CO (nucleophilic behavior) and SO2 (electrophilic behavior).
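Schematically, the two geometries can be illustrated with textbook examples; a LaTeX sketch (assuming the mhchem package, with M standing for a generic metal-ligand fragment):

```latex
% 1,1-insertion: both new bonds form to the same atom of the inserted
% molecule, e.g. CO insertion into a metal-methyl bond gives an acyl:
\ce{M-CH3 + CO -> M-C(O)CH3}

% 1,2-insertion: the new bonds form to adjacent atoms, e.g. ethylene
% insertion into a metal-hydride bond gives an ethyl ligand:
\ce{M-H + CH2=CH2 -> M-CH2CH3}
```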
Organic chemistry
Homologation reactions like the Kowalski ester homologation provide simple examples of the insertion process in organic synthesis. In the Arndt-Eistert reaction, a methylene unit is inserted into the carboxyl-carbon bond of a carboxylic acid to form the next acid in the homologous series. Organic Syntheses provides the example of t-BOC-protected (S)-phenylalanine (2-amino-3-phenylpropanoic acid) being reacted sequentially with triethylamine, ethyl chloroformate, and diazomethane to produce the α-diazoketone, which is then reacted with silver trifluoroacetate/triethylamine in aqueous solution to generate the t-BOC-protected form of (S)-3-amino-4-phenylbutanoic acid.
Mechanistically, the α-diazoketone undergoes a Wolff rearrangement to form a ketene in a 1,2-rearrangement. Consequently, the methylene group α- to the carboxyl group in the product is the methylene group from the diazomethane reagent. The 1,2-rearrangement has been shown to conserve the stereochemistry of the chiral centre as the product formed from t-BOC protected (S)-phenylalanine retains the (S) stereochemistry with a reported enantiomeric excess of at least 99%.
A related transformation is the Nierenstein reaction in which a diazomethane methylene group is inserted into the carbon-chlorine bond of an acid chloride to generate an α-chloro ketone. An example, published in 1924, illustrates the reaction in a substituted benzoyl chloride system:
Perhaps surprisingly, α-bromoacetophenone is the minor product when this reaction is carried out with benzoyl bromide, a dimeric dioxane being the major product. Organic azides also provide an example of an insertion reaction in organic synthesis and, like the above examples, the transformations proceed with loss of nitrogen gas. When tosyl azide reacts with norbornadiene, a ring expansion reaction takes place in which a nitrogen atom is inserted into a carbon-carbon bond α- to the bridge head:
The Beckmann rearrangement is another example of a ring expanding reaction in which a heteroatom is inserted into a carbon-carbon bond. The most important application of this reaction is the conversion of cyclohexanone to its oxime, which is then rearranged under acidic conditions to provide ε-caprolactam, the feedstock for the manufacture of Nylon 6. Annual production of caprolactam exceeds 2 billion kilograms.
Carbenes undergo both intermolecular and intramolecular insertion reactions. Cyclopentene moieties can be generated from sufficiently long-chain ketones by reaction with trimethylsilyldiazomethane, (CH3)3Si–CHN2:
Here, the carbene intermediate inserts into a carbon-hydrogen bond to form the carbon-carbon bond needed to close the cyclopentene ring. Carbene insertions into carbon-hydrogen bonds can also occur intermolecularly:
Carbenoids are reactive intermediates that behave similarly to carbenes. One example is the chloroalkyllithium carbenoid reagent prepared in situ from a sulfoxide and t-BuLi which inserts into the carbon-boron bond of a pinacol boronic ester:
Organometallic chemistry
Many reactions in organometallic chemistry involve insertion of one ligand (L) into a metal-hydride or metal-alkyl/aryl bond. Generally it is the hydride, alkyl, or aryl group that migrates onto L, which is often CO, an alkene, or alkyne.
Carbonylations
The insertion of carbon monoxide and alkenes into metal-carbon bonds is a widely exploited reaction with major industrial applications.
Such reactions are subject to the usual parameters that affect other reactions in coordination chemistry, but steric effects are especially important in determining the stereochemistry and regiochemistry of the reactions. The reverse reactions, the de-insertions of CO and alkenes, are of fundamental significance in many catalytic cycles as well.
Widely employed applications of migratory insertion of carbonyl groups are hydroformylation and the carbonylative production of acetic acid. The former converts alkenes, hydrogen, and carbon monoxide into aldehydes. The production of acetic acid by carbonylation proceeds via two similar industrial processes. More traditional is the rhodium-based Monsanto acetic acid process, but this process has been superseded by the iridium-based Cativa process. By 2002, worldwide annual production of acetic acid stood at 6 million tons, of which approximately 60% is produced by the Cativa process.
The Cativa process catalytic cycle, shown above, includes both insertion and de-insertion steps. The oxidative addition reaction of methyl iodide with (1) involves the formal insertion of the iridium(I) centre into the carbon-iodine bond, whereas step (3) to (4) is an example of migratory insertion of carbon monoxide into the iridium-carbon bond. The active catalyst species is regenerated by the reductive elimination of acetyl iodide from (4), a de-insertion reaction.
Olefin insertion
The insertion of ethylene and propylene into titanium alkyls is the cornerstone of Ziegler-Natta catalysis, the commercial route to polyethylene and polypropylene. This technology mainly involves heterogeneous catalysts, but it is widely assumed that the principles and observations on homogeneous systems are applicable to the solid-state versions. Related technologies include the Shell Higher Olefin Process, which produces detergent precursors. The olefin coordinates to the metal before insertion; depending on the ligand density at the metal, ligand dissociation may be necessary to provide a coordination site for the olefin.
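A minimal sketch of the propagation step in LaTeX/mhchem notation; here [Ti] is an illustrative placeholder for the titanium center with its remaining ligands, and P for the growing polymer chain:

```latex
% Repeated 1,2-insertion of the coordinated olefin into the
% titanium-polymer bond lengthens the chain by two carbons per cycle:
\ce{[Ti]-P + CH2=CH2 -> [Ti]-CH2CH2-P}
```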
Other insertion reactions in coordination chemistry
Many electrophilic oxides insert into metal-carbon bonds; these include sulfur dioxide, carbon dioxide, and nitric oxide. These reactions have limited practical significance, but are of historic interest. With transition metal alkyls, these oxides behave as electrophiles and insert into the bond between metals and their relatively nucleophilic alkyl ligands. As discussed in the article on Metal sulfur dioxide complexes, the insertion of SO2 has been examined in particular detail.
More insertion reactions in organic chemistry
Electropositive metals such as sodium, potassium, magnesium, and zinc can insert into alkyl halides, breaking the carbon-halogen bond (the halogen may be chlorine, bromine, or iodine) and forming a carbon-metal bond. This reaction proceeds via a single-electron-transfer (SET) mechanism. If magnesium reacts with an alkyl halide, it forms a Grignard reagent; if lithium reacts, an organolithium reagent is formed. Thus, this type of insertion reaction has important applications in chemical synthesis.
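A LaTeX/mhchem sketch of the two insertions just described (Et2O denotes the diethyl ether solvent conventionally used for Grignard formation):

```latex
% Magnesium inserts into the carbon-bromine bond, giving a Grignard reagent:
\ce{R-Br + Mg ->[\text{Et2O}] R-Mg-Br}

% Lithium gives an organolithium; a second equivalent forms the lithium salt:
\ce{R-Br + 2Li -> R-Li + LiBr}
```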
References
Organometallic chemistry | Insertion reaction | Chemistry | 1,761 |
4,123,257 | https://en.wikipedia.org/wiki/NBC%20suit | An NBC (nuclear, biological, chemical) suit, also called a chem suit or chemical suit, is a type of military personal protective equipment. NBC suits are designed to provide protection against direct contact with and contamination by radioactive, biological, or chemical substances, including protection from contamination with radioactive materials and some types of radiation. They are generally designed to be worn for extended periods to allow the wearer to fight (or generally function) while under threat of or under actual nuclear, biological, or chemical attack. The civilian equivalent is the hazmat suit. The term NBC has been replaced by CBRN (chemical, biological, radiological, nuclear), with the addition of the new threat of radiological weapons.
Use
NBC stands for nuclear, biological, and chemical. It is a term used in the armed forces and in health and safety, mostly in the context of weapons of mass destruction (WMD) clean-up in overseas conflict or protection of emergency services during the response to terrorism, though there are civilian and common-use applications (such as recovery and clean up efforts after industrial accidents).
In military operations, NBC suits are intended to be quickly donned over a soldier’s uniform and can continuously protect the user for up to several days. Most are made of impermeable material such as rubber, but some incorporate a filter, allowing air, sweat and condensation to slowly pass through. An example of this is the Canadian military NBC suit.
The older Soviet suit was impermeable rubber-coated canvas. Now known as the CBRN suit, the British Armed Forces suit is reinforced nylon with charcoal-impregnated felt. It is more comfortable because of its breathability but has a shorter useful life and must be replaced often. The British Armed Forces suit is known as a "Noddy suit" because some of them had a pointed hood like the hat worn by the fictional character Noddy. The Soviet-style suit will protect the wearer at higher concentrations than the British suit but is less comfortable due to the build-up of moisture within it. A Soviet suit was known as a "Womble" because of its long-faced respirator with round visor glasses. In Canadian terminology, an NBC suit or any similar protective over-suit is also known as a "Bunnysuit".
See also
CBRN defense (Chemical, Biological, Radiological, and Nuclear, known formerly as NBC)
List of NBC warfare forces
MOPP (Mission Oriented Protective Posture) gear
Positive pressure personnel suit (PPPS) (for use in biocontainment)
Weapon of mass destruction (WMD, formerly NBC weapon)
Joint Service Lightweight Integrated Suit Technology - Used as part of MOPP.
References
External links
Chemical protective suits reflect advancements in PPE
Environmental suits
Military personal equipment
Chemical, biological, radiological and nuclear defense | NBC suit | Chemistry,Biology | 565 |
62,010,020 | https://en.wikipedia.org/wiki/Rebecca%20Shipley | Rebecca Julia Shipley is a British mathematician and professor of healthcare engineering at University College London (UCL). She is director of the UCL Institute of Healthcare Engineering, co-director of the UCL Centre for Nerve Engineering and Vice Dean (Health) for the UCL Faculty of Engineering Sciences. She is also co-director of the UCL CHIMERA Research Hub with Prof Christina Pagel and a Fellow of the Institution of Engineering and Technology.
Early life and education
Shipley grew up in Buckinghamshire, where she attended Dr Challoner's High School for Girls. She graduated with an MMath in Mathematics from St Hugh's College, University of Oxford and was awarded a doctorate from the Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford in 2008 for her thesis "Multiscale Modelling of Fluid and Drug Transport in Vascular Tumours".
Research career
Her first postdoctoral position was a Research Fellowship at Christ Church, Oxford, to develop mathematical and computational models that describe biomechanical and biochemical stimulation of tissues. She also held two concurrent Visiting Research Fellowships at the Centre for Regenerative Medicine, Bath, and the Tissue Repair and Engineering Centre, UCL, during that time.
In 2012, Shipley moved from mathematics into bioengineering, taking up a Lectureship in UCL Mechanical Engineering. Her research is predominantly divided into two themes: tumour blood flow and nervous system tissue engineering.
Within the field of tumour blood flow and therapy prediction, she is developing new bioengineering platforms which combine computational modelling with in vivo and ex vivo imaging data to better understand and interrogate cancer therapies. Her work advancing cancer therapies has been recognised in the national press.
Within nervous system tissue engineering, she has developed an interdisciplinary programme spanning bioengineering, computational modelling and tissue engineering to characterise the response of repairing nerves to chemical and mechanical stimuli, and integrate these data to design and test repair constructs. This is complemented by her work using computational modelling to understand the role of biochemical and biophysical stimuli, and define operating parameters, in tissue engineering development.
In 2017 she co-founded the UCL Centre for Nerve Engineering, the first centre in the UK to bring together engineering and physical sciences with the life and clinical sciences to tackle translational nerve engineering problems.
In March 2020, Shipley was one of the main leads of the UCL / UCL Hospitals / Mercedes F1 effort to develop, manufacture and distribute the Ventura non-invasive breathing aid, providing crucial devices for patients during the COVID-19 pandemic. The design was made open source and to date over 1,800 teams from 105 countries have taken licences and 20 have manufactured their own prototypes to test.
Honours, awards and recognition
“Young Researcher of the Year” by the Tissue and Cell Engineering Society UK (TCES) in 2011
Rosetrees Trust Interdisciplinary Prize 2016
5-year EPSRC Fellowship 2018-2023
Associate editor for Nature Scientific Reports, Journal of Engineering Mathematics
Shipley was appointed Officer of the Order of the British Empire (OBE) in the 2021 Birthday Honours for services to the development of the Continuous Positive Airways Pressure Device during the pandemic.
In 2024 she was elected a fellow of the Royal Academy of Engineering
Public outreach and engagement
Shipley is active in bringing mathematics and engineering to the wider public. Her outreach activities include:
Participating in a UK-wide event, Tomorrow's Engineers Week Big Assembly, in November 2019 to inspire young people to enter engineering careers
Royal Society Summer Exhibition stall on the Mathematics of Cancer
Podcasts for BBC Radio 2 (Naked Scientists)
References
Year of birth missing (living people)
Living people
Alumni of St Hugh's College, Oxford
People from Buckinghamshire
21st-century British women mathematicians
British biochemists
British bioengineers
Academics of University College London
Fellows of the Institution of Engineering and Technology
Officers of the Order of the British Empire
Fellows of the Royal Academy of Engineering
Female fellows of the Royal Academy of Engineering | Rebecca Shipley | Engineering | 805 |
45,475,440 | https://en.wikipedia.org/wiki/Three%20mountain%20problem | The Three Mountains Task was developed by Jean Piaget, a developmental psychologist from Switzerland. Piaget proposed a theory of developmental psychology based on cognitive development, which in his account takes place in four stages: the sensorimotor, preoperational, concrete operational, and formal operational stages. The Three Mountain Problem was devised by Piaget to test whether a child's thinking was egocentric, which was also a helpful indicator of whether the child was in the preoperational stage or the concrete operational stage of cognitive development.
Methods
Piaget's aim in the Three Mountain Problem was to investigate egocentrism in children's thinking. The original setup for the task was:
The child is seated at a table with a model of three mountains presented in front of them. The mountains are of different sizes and have different identifiers (one mountain has snow, one has a red cross on top, and one has a hut on top). The child is first allowed a full 360-degree view of the model. After the child has had a good look at the model, a doll is placed at different vantage points relative to the child, and the child is shown 10 photographs. The child is asked to select which of the 10 photographs best reflects the doll's view. Children of different ages were tested using this task to determine the age at which children begin to 'decenter,' or take the perspective of others.
Findings
The findings showed that at age 4, children chose the photograph that best reflected their own view. At age 6, an awareness of perspectives different from their own could be seen. Then, by ages 7–8, children could clearly acknowledge more than one point of view and consistently selected the correct photograph.
During Preoperational Stage
A distinction can be made between children in the preoperational stage of cognitive development and those in the concrete operational stage. The prototypical child in the preoperational stage will fail the Three Mountain Problem task: the child will choose the photograph that best represents their own viewpoint, not the doll's.
What is implied is that the child's selection is based on egocentric thinking. Egocentric thinking is looking at the world from the child's point of view solely, thus "an egocentric child assumes that other people see, hear, and feel exactly the same as the child does.” This is consistent with the results for the preoperational age range as they selected photographs paralleling their own view.
These results also helped Piaget home in on the age at which children show the capacity to decenter their thoughts, seen as a move away from egocentric thinking. Preoperational children have not achieved this yet; their thinking is centered, which is defined as a propensity to focus on one salient aspect or dimension of a problem while simultaneously neglecting other potentially relevant aspects.
During Concrete Operational Stage
The concept of centration is observed predominantly in children in the preoperational stage of cognitive development. Conversely, children in the concrete operational stage demonstrate decentration: an ability to recognize alternate points of view and a move away from egocentric thinking. Piaget concluded that, by age 7, children were able to decenter their thoughts and acknowledge perspectives different from their own. This was evidenced by the consistent and correct selection of photographs by seven- and eight-year-olds in the 1956 study.
An example of a correct answer would be if the child and the doll were situated on completely opposite sides of the mountain model, with a tree on the child's side and a large mountain in the middle acting as a visual barrier. A preoperational child would claim that the doll could see the tree, whereas the concrete operational child would select a photograph without the tree, since the mountain is large enough to block the tree from the doll's view. A concrete operational child would pass the Three Mountain Problem task.
Follow-Up Studies
There has been some criticism that the Three Mountain Problem was too difficult for the children to understand, compounded with the additional requirement of matching their answer to a photograph. Martin Hughes conducted a study in 1975 called the Policeman Doll Study. Two intersecting walls were used to create different quadrants, and “policeman" dolls were moved in various locations. The children were asked to hide another doll, a “boy” doll, away from both policemen's views. The results showed that among the sample of children ranging from ages 3.5-5, 90% gave correct answers. When the stakes were raised and additional walls and policeman dolls were added, 90% of four-year-olds were still able to pass the task. Hughes claimed that because this task made more sense to the child (with a primer session with one police doll to guarantee this), children were able to exhibit a loss of egocentric thinking as early as four years of age.
Variations of the Three Mountain Problem
A common criticism of the Three Mountain Problem concerns the complexity of the task. In 1975, another researcher, Helen Borke, replicated the task using a farm area with landmarks such as a lake, animals, people, trees, and a building. A character from Sesame Street, Grover, was put in a car and driven around the area. When he stopped to "take a look at the scenery," children were asked what the landscape looked like from Grover's perspective. The results showed that children as young as three years old were able to perform well and showed evidence of perspective-taking, the ability to understand a situation from an alternate point of view. Hence, evaluation of Piaget's Three Mountain Problem has shown that using objects more familiar to the child and making the task less complex produces different results than the original study.
See also
References
Developmental psychology | Three mountain problem | Biology | 1,191 |
26,093,896 | https://en.wikipedia.org/wiki/Meiomitosis | In cell biology, meiomitosis is an aberrant cellular division pathway that combines normal mitosis pathways with ectopically expressed meiotic machinery resulting in genomic instability.
Description
Meiotic pathways are normally restricted to germ cells. Meiotic proteins drive double-stranded DNA breaks, chiasma formation, sister chromatid adhesion, and rearrangement of the spindle apparatus.
During meiosis there are two sets of cell divisions; the second division is similar to mitosis in that sister chromatids are directly separated. In the first meiotic division, however, the sister chromatids are held together by cohesins and segregated from their homologous pair of cohesin-bound sister chromatids after resolution of recombination crossover points (chiasmata) between the homologous pairs. The collision of the mitosis and meiosis (first division) pathways could cause abnormal chiasma formation, abnormal cohesin expression, and mitotic/meiotic spindle defects that could result in insertions, deletions, abnormal segregation, DNA bridging, and potentially failure of cell division altogether, resulting in polyploidy.
Role in cancer
Meiotic proteins have been noted to be expressed in cancers, particularly melanoma and lymphoma. In cutaneous T-cell lymphoma, meiotic proteins have been shown to be regulated with the cell cycle. Lymphoma cell lines have also been noted to up-regulate meiosis-specific genes upon irradiation, and a correlation with mitotic arrest and polyploidy has been noted. The overall role of meiomitosis in cancer development and evolution has yet to be determined.
References
Sources
Cell cycle | Meiomitosis | Biology | 345 |
191,538 | https://en.wikipedia.org/wiki/H%C3%B6lder%27s%20inequality | In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of $L^p$ spaces. Let $(S, \Sigma, \mu)$ be a measure space and let $p, q \in [1, \infty]$ satisfy $1/p + 1/q = 1$. Then for all measurable real- or complex-valued functions $f$ and $g$ on $S$, $$\|fg\|_1 \le \|f\|_p \, \|g\|_q.$$
The numbers $p$ and $q$ above are said to be Hölder conjugates of each other. The special case $p = q = 2$ gives a form of the Cauchy–Schwarz inequality. Hölder's inequality holds even if $\|fg\|_1$ is infinite, the right-hand side also being infinite in that case. Conversely, if $f$ is in $L^p(\mu)$ and $g$ is in $L^q(\mu)$, then the pointwise product $fg$ is in $L^1(\mu)$.
Hölder's inequality is used to prove the Minkowski inequality, which is the triangle inequality in the space $L^p(\mu)$, and also to establish that $L^q(\mu)$ is the dual space of $L^p(\mu)$ for $p \in [1, \infty)$.
Hölder's inequality (in a slightly different form) was first found by Leonard James Rogers in 1888. Inspired by Rogers' work, Hölder gave another proof in 1889 as part of a work developing the concept of convex and concave functions and introducing Jensen's inequality, which was in turn named for work of Johan Jensen building on Hölder's work.
Remarks
Conventions
The brief statement of Hölder's inequality uses some conventions.
In the definition of Hölder conjugates, $1/\infty$ means zero.
If $1 \le p, q < \infty$, then $\|f\|_p$ and $\|g\|_q$ stand for the (possibly infinite) expressions $\left(\int_S |f|^p \,\mathrm{d}\mu\right)^{1/p}$ and $\left(\int_S |g|^q \,\mathrm{d}\mu\right)^{1/q}$.
If $p = \infty$, then $\|f\|_\infty$ stands for the essential supremum of $|f|$, similarly for $\|g\|_\infty$.
The notation $\|f\|_p$ with $1 \le p \le \infty$ is a slight abuse, because in general it is only a norm of $f$ if $\|f\|_p$ is finite and $f$ is considered as an equivalence class of $\mu$-almost everywhere equal functions. If $f \in L^p(\mu)$ and $g \in L^q(\mu)$, then the notation is adequate.
On the right-hand side of Hölder's inequality, $0 \times \infty$ as well as $\infty \times 0$ means 0. Multiplying $a > 0$ with $\infty$ gives $\infty$.
Estimates for integrable products
As above, let $f$ and $g$ denote measurable real- or complex-valued functions defined on $S$. If $\|fg\|_1$ is finite, then the pointwise products of $f$ with $g$ and with its complex conjugate function are $\mu$-integrable, the estimate $$\left|\int_S f \bar{g} \,\mathrm{d}\mu\right| \le \int_S |fg| \,\mathrm{d}\mu = \|fg\|_1$$
and the similar one for $fg$ hold, and Hölder's inequality can be applied to the right-hand side. In particular, if $f$ and $g$ are in the Hilbert space $L^2(\mu)$, then Hölder's inequality for $p = q = 2$ implies $$|\langle f, g \rangle| \le \|f\|_2 \, \|g\|_2,$$
where the angle brackets refer to the inner product of $L^2(\mu)$. This is also called the Cauchy–Schwarz inequality, but it requires for its statement that $\|f\|_2$ and $\|g\|_2$ are finite to make sure that the inner product of $f$ and $g$ is well defined. We may recover the original inequality (for the case $p = 2$) by using the functions $|f|$ and $|g|$ in place of $f$ and $g$.
Generalization for probability measures
If $(S, \Sigma, \mu)$ is a probability space, then $p, q \in (0, \infty]$ just need to satisfy $1/p + 1/q \le 1$, rather than being Hölder conjugates. A combination of Hölder's inequality and Jensen's inequality implies that $$\|fg\|_1 \le \|f\|_p \, \|g\|_q$$
for all measurable real- or complex-valued functions $f$ and $g$ on $S$.
Notable special cases
For the following cases assume that $p$ and $q$ are in the open interval $(1, \infty)$ with $\tfrac{1}{p} + \tfrac{1}{q} = 1$.
Counting measure
For the $n$-dimensional Euclidean space, when the set $S$ is $\{1, \dots, n\}$ with the counting measure, we have
$$\sum_{k=1}^n |x_k\,y_k| \le \left(\sum_{k=1}^n |x_k|^p\right)^{\frac{1}{p}} \left(\sum_{k=1}^n |y_k|^q\right)^{\frac{1}{q}} \quad\text{for all } x, y \in \mathbb{R}^n \text{ or } \mathbb{C}^n.$$
Often the following practical form of this is used, for any $r > 0$:
$$\left(\sum_{k=1}^n |x_k\,y_k|^r\right)^{\frac{1}{r}} \le \left(\sum_{k=1}^n |x_k|^{pr}\right)^{\frac{1}{pr}} \left(\sum_{k=1}^n |y_k|^{qr}\right)^{\frac{1}{qr}}.$$
For more than two sums, the following generalisation holds, with real positive exponents $\lambda_i$ satisfying $\lambda_1 + \cdots + \lambda_m = 1$:
$$\sum_{k=1}^n \prod_{i=1}^m |a_{i,k}|^{\lambda_i} \le \prod_{i=1}^m \left(\sum_{k=1}^n |a_{i,k}|\right)^{\lambda_i}.$$
Equality holds iff the sequences $(|a_{1,k}|)_k, \dots, (|a_{m,k}|)_k$ are pairwise proportional.
If $S = \mathbb{N}$ with the counting measure, then we get Hölder's inequality for sequence spaces:
$$\sum_{k=1}^\infty |x_k\,y_k| \le \left(\sum_{k=1}^\infty |x_k|^p\right)^{\frac{1}{p}} \left(\sum_{k=1}^\infty |y_k|^q\right)^{\frac{1}{q}} \quad\text{for all } (x_k)_{k\in\mathbb{N}},\,(y_k)_{k\in\mathbb{N}} \in \mathbb{R}^{\mathbb{N}} \text{ or } \mathbb{C}^{\mathbb{N}}.$$
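As a concrete check of the $p = q = 2$ case, take $x = (1, 2)$ and $y = (3, 4)$:

$$\sum_k |x_k\,y_k| = 3 + 8 = 11 \le \sqrt{1 + 4}\,\sqrt{9 + 16} = 5\sqrt{5} \approx 11.18.$$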
Lebesgue measure
If $S$ is a measurable subset of $\mathbb{R}^n$ with the Lebesgue measure, and $f$ and $g$ are measurable real- or complex-valued functions on $S$, then Hölder's inequality is
$$\int_S |f(x)\,g(x)|\,\mathrm{d}x \le \left(\int_S |f(x)|^p\,\mathrm{d}x\right)^{\frac{1}{p}} \left(\int_S |g(x)|^q\,\mathrm{d}x\right)^{\frac{1}{q}}.$$
Probability measure
For the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, let $\operatorname{E}$ denote the expectation operator. For real- or complex-valued random variables $X$ and $Y$ on $\Omega$, Hölder's inequality reads
$$\operatorname{E}|XY| \le \left(\operatorname{E}|X|^p\right)^{\frac{1}{p}} \left(\operatorname{E}|Y|^q\right)^{\frac{1}{q}}.$$
Let $0 < r < s$ and define $p = \tfrac{s}{r}$. Then $q = \tfrac{p}{p-1}$ is the Hölder conjugate of $p$. Applying Hölder's inequality to the random variables $|X|^r$ and $1_\Omega$, we obtain
$$\left(\operatorname{E}|X|^r\right)^{\frac{1}{r}} \le \left(\operatorname{E}|X|^s\right)^{\frac{1}{s}}.$$
In particular, if the $s$th absolute moment is finite, then the $r$th absolute moment is finite, too. (This also follows from Jensen's inequality.)
Product measure
For two σ-finite measure spaces $(S_1, \Sigma_1, \mu_1)$ and $(S_2, \Sigma_2, \mu_2)$, define the product measure space by
$$(S, \Sigma, \mu) = (S_1 \times S_2,\; \Sigma_1 \otimes \Sigma_2,\; \mu_1 \otimes \mu_2),$$
where $S_1 \times S_2$ is the Cartesian product of $S_1$ and $S_2$, the σ-algebra $\Sigma_1 \otimes \Sigma_2$ arises as product σ-algebra of $\Sigma_1$ and $\Sigma_2$, and $\mu_1 \otimes \mu_2$ denotes the product measure of $\mu_1$ and $\mu_2$. Then Tonelli's theorem allows us to rewrite Hölder's inequality using iterated integrals: if $f$ and $g$ are real- or complex-valued functions on the Cartesian product $S_1 \times S_2$, then
$$\int_{S_1}\!\int_{S_2} |f(x,y)\,g(x,y)|\,\mu_2(\mathrm{d}y)\,\mu_1(\mathrm{d}x) \le \left(\int_{S_1}\!\int_{S_2} |f(x,y)|^p\,\mu_2(\mathrm{d}y)\,\mu_1(\mathrm{d}x)\right)^{\frac{1}{p}} \left(\int_{S_1}\!\int_{S_2} |g(x,y)|^q\,\mu_2(\mathrm{d}y)\,\mu_1(\mathrm{d}x)\right)^{\frac{1}{q}}.$$
This can be generalized to more than two measure spaces.
Vector-valued functions
Let $(S, \Sigma, \mu)$ denote a measure space and suppose that $f = (f_1, \dots, f_n)$ and $g = (g_1, \dots, g_n)$ are $\Sigma$-measurable functions on $S$, taking values in the $n$-dimensional real- or complex Euclidean space. By taking the product with the counting measure on $\{1, \dots, n\}$, we can rewrite the above product measure version of Hölder's inequality in the form
$$\int_S \sum_{k=1}^n |f_k\,g_k|\,\mathrm{d}\mu \le \left(\int_S \sum_{k=1}^n |f_k|^p\,\mathrm{d}\mu\right)^{\frac{1}{p}} \left(\int_S \sum_{k=1}^n |g_k|^q\,\mathrm{d}\mu\right)^{\frac{1}{q}}.$$
If the two integrals on the right-hand side are finite, then equality holds if and only if there exist real numbers $\alpha, \beta \ge 0$, not both of them zero, such that
$$\alpha\left(|f_1(x)|^p, \dots, |f_n(x)|^p\right) = \beta\left(|g_1(x)|^q, \dots, |g_n(x)|^q\right)$$
for $\mu$-almost all $x$ in $S$.
This finite-dimensional version generalizes to functions $f$ and $g$ taking values in a normed space which could be for example a sequence space or an inner product space.
Proof of Hölder's inequality
There are several proofs of Hölder's inequality; the main idea in the following is Young's inequality for products.
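Young's inequality for products states that for nonnegative reals $a$ and $b$ and Hölder conjugates $p, q \in (1, \infty)$,

$$ab \le \frac{a^p}{p} + \frac{b^q}{q},$$

with equality if and only if $a^p = b^q$. Hölder's inequality then follows by applying this pointwise to $a = |f(s)|/\|f\|_p$ and $b = |g(s)|/\|g\|_q$ and integrating, since the right-hand side integrates to $\tfrac{1}{p} + \tfrac{1}{q} = 1$.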
Alternative proof using Jensen's inequality:
We could also bypass use of both Young's and Jensen's inequalities. The proof below also explains why and where the Hölder exponent comes in naturally.
Extremal equality
Statement
Assume that $1 \le p < \infty$ and let $q$ denote the Hölder conjugate. Then for every $f \in L^p(\mu)$,
$$\|f\|_p = \max\left\{\left|\int_S fg\,\mathrm{d}\mu\right| : g \in L^q(\mu),\ \|g\|_q \le 1\right\},$$
where max indicates that there actually is a $g$ maximizing the right-hand side. When $p = \infty$ and if each set $A$ in the σ-field $\Sigma$ with $\mu(A) = \infty$ contains a subset $B \in \Sigma$ with $0 < \mu(B) < \infty$ (which is true in particular when $\mu$ is σ-finite), then
$$\|f\|_\infty = \sup\left\{\left|\int_S fg\,\mathrm{d}\mu\right| : g \in L^1(\mu),\ \|g\|_1 \le 1\right\}.$$
Proof of the extremal equality:
Remarks and examples
The equality for $p = \infty$ fails whenever there exists a set $A$ of infinite measure in the σ-field $\Sigma$ that has no subset $B \in \Sigma$ satisfying $0 < \mu(B) < \infty$ (the simplest example is the σ-field $\Sigma$ containing just the empty set and $S$, and the measure $\mu$ with $\mu(S) = \infty$). Then the indicator function $1_A$ satisfies $\|1_A\|_\infty = 1$, but every $g \in L^1(\mu)$ has to be $\mu$-almost everywhere constant on $A$, because it is $\Sigma$-measurable, and this constant has to be zero, because $g$ is $\mu$-integrable. Therefore, the above supremum for the indicator function $1_A$ is zero and the extremal equality fails.
For $p = \infty$ the supremum is in general not attained. As an example, let $S = \mathbb{N}$, $\Sigma = \mathcal{P}(\mathbb{N})$ and $\mu$ the counting measure. Define:
$$f(n) = \frac{n-1}{n}, \qquad n \in \mathbb{N}.$$
Then $\|f\|_\infty = 1$. For $g \in L^1(\mu)$ with $0 < \|g\|_1 \le 1$, let $m$ denote the smallest natural number with $g(m) \ne 0$. Then
$$\left|\int_S fg\,\mathrm{d}\mu\right| \le \sum_{n=1}^\infty \frac{n-1}{n}\,|g(n)| \le \|g\|_1 - \frac{|g(m)|}{m} < 1.$$
Applications
The extremal equality is one of the ways for proving the triangle inequality $\|f_1 + f_2\|_p \le \|f_1\|_p + \|f_2\|_p$ for all $f_1$ and $f_2$ in $L^p(\mu)$, see Minkowski inequality.
Hölder's inequality implies that every $g \in L^q(\mu)$ defines a bounded (or continuous) linear functional $\kappa_g$ on $L^p(\mu)$ by the formula
$$\kappa_g(f) = \int_S fg\,\mathrm{d}\mu, \qquad f \in L^p(\mu).$$
The extremal equality (when true) shows that the norm of this functional $\kappa_g$ as element of the continuous dual space $L^p(\mu)^*$ coincides with the norm of $g$ in $L^q(\mu)$ (see also the article on $L^p$ spaces).
Generalization with more than two functions
Statement
Assume that $r \in (0, \infty]$ and $p_1, \dots, p_n \in (0, \infty]$ such that
$$\sum_{k=1}^n \frac{1}{p_k} = \frac{1}{r},$$
where 1/∞ is interpreted as 0 in this equation. Then for all measurable real or complex-valued functions $f_1, \dots, f_n$ defined on $S$,
$$\left\|\prod_{k=1}^n f_k\right\|_r \le \prod_{k=1}^n \|f_k\|_{p_k},$$
where we interpret any product with a factor of ∞ as ∞ if all factors are positive, but the product is 0 if any factor is 0.
In particular, if $f_k \in L^{p_k}(\mu)$ for all $k \in \{1, \dots, n\}$, then
$$\prod_{k=1}^n f_k \in L^r(\mu).$$
Note: For $r \in (0, 1)$, contrary to the notation, $\|\cdot\|_r$ is in general not a norm because it doesn't satisfy the triangle inequality.
Proof of the generalization:
Interpolation
Let $p_1, \dots, p_n \in (0, \infty]$ and let $\theta_1, \dots, \theta_n \in (0, 1)$ denote weights with $\theta_1 + \cdots + \theta_n = 1$. Define $p$ as the weighted harmonic mean, that is,
$$\frac{1}{p} = \sum_{k=1}^n \frac{\theta_k}{p_k}.$$
Given a measurable real- or complex-valued function $f$ on $S$, then the above generalization of Hölder's inequality gives
$$\|f\|_p \le \prod_{k=1}^n \|f\|_{p_k}^{\theta_k}.$$
In particular, taking $\theta_1 = \theta$ and $\theta_2 = 1 - \theta$ gives
$$\|f\|_p \le \|f\|_{p_1}^{\theta}\,\|f\|_{p_2}^{1-\theta}.$$
Specifying further $\theta = \tfrac{1}{p}$ and $p_1 = 1$, in the case $p_2 = \infty$ we obtain the interpolation result (Littlewood's inequality)
$$\|f\|_p \le \|f\|_1^{\frac{1}{p}}\,\|f\|_\infty^{1-\frac{1}{p}}.$$
An application of Hölder gives Lyapunov's inequality: if $p = \theta p_1 + (1-\theta)p_2$ with $0 < \theta < 1$, then
$$\|f\|_p^p \le \|f\|_{p_1}^{\theta p_1}\,\|f\|_{p_2}^{(1-\theta)p_2}.$$
Both Littlewood's and Lyapunov's inequalities imply that if $f \in L^{p_1}(\mu) \cap L^{p_2}(\mu)$, then $f \in L^p(\mu)$ for all $p_1 < p < p_2$.
Reverse Hölder inequalities
Two functions
Assume that $p \in (1, \infty)$ and that the measure space $(S, \Sigma, \mu)$ satisfies $\mu(S) > 0$. Then for all measurable real- or complex-valued functions $f$ and $g$ on $S$ such that $g(s) \ne 0$ for $\mu$-almost all $s \in S$,
$$\|fg\|_1 \ge \|f\|_{\frac{1}{p}}\,\|g\|_{\frac{-1}{p-1}}.$$
If
$$\|fg\|_1 < \infty \quad\text{and}\quad \|g\|_{\frac{-1}{p-1}} > 0,$$
then the reverse Hölder inequality is an equality if and only if
$$|f| = \alpha\,|g|^{\frac{-p}{p-1}} \quad \mu\text{-almost everywhere, for some constant } \alpha \ge 0.$$
Note: The expressions:
$$\|f\|_{\frac{1}{p}} \quad\text{and}\quad \|g\|_{\frac{-1}{p-1}}$$
are not norms, they are just compact notations for
$$\left(\int_S |f|^{\frac{1}{p}}\,\mathrm{d}\mu\right)^{p} \quad\text{and}\quad \left(\int_S |g|^{\frac{-1}{p-1}}\,\mathrm{d}\mu\right)^{-(p-1)}.$$
Multiple functions
The Reverse Hölder inequality (above) can be generalized to the case of multiple functions if all but one conjugate is negative.
That is,
Let $p_1, \dots, p_m$ be nonzero real numbers such that all but one of them are negative and $\sum_{k=1}^m \tfrac{1}{p_k} = 1$ (hence the remaining exponent lies in $(0, 1)$). Let $f_k$ be measurable functions for $k = 1, \dots, m$. Then
$$\int_S \prod_{k=1}^m |f_k|\,\mathrm{d}\mu \ge \prod_{k=1}^m \left(\int_S |f_k|^{p_k}\,\mathrm{d}\mu\right)^{\frac{1}{p_k}}.$$
This follows from the symmetric form of the Hölder inequality (see below).
Symmetric forms of Hölder inequality
It was observed by Aczél and Beckenbach that Hölder's inequality can be put in a more symmetric form, at the price of introducing an extra vector (or function):
Let be vectors with positive entries and such that for all . If are nonzero real numbers such that , then:
if all but one of are positive;
if all but one of are negative.
The standard Hölder inequality follows immediately from this symmetric form (and in fact is easily seen to be equivalent to it). The symmetric statement also implies the reverse Hölder inequality (see above).
The result can be extended to multiple vectors:
Let be vectors in with positive entries and such that for all . If are nonzero real numbers such that , then:
if all but one of the numbers are positive;
if all but one of the numbers are negative.
As in the standard Hölder inequalities, there are corresponding statements for infinite sums and integrals.
Conditional Hölder inequality
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, $\mathcal{G} \subset \mathcal{F}$ a sub-σ-algebra, and $p, q \in (1, \infty)$ Hölder conjugates, meaning that $\tfrac{1}{p} + \tfrac{1}{q} = 1$. Then for all real- or complex-valued random variables $X$ and $Y$ on $\Omega$,
$$\operatorname{E}\bigl[|XY|\,\big|\,\mathcal{G}\bigr] \le \left(\operatorname{E}\bigl[|X|^p\,\big|\,\mathcal{G}\bigr]\right)^{\frac{1}{p}} \left(\operatorname{E}\bigl[|Y|^q\,\big|\,\mathcal{G}\bigr]\right)^{\frac{1}{q}} \qquad \mathbb{P}\text{-almost surely}.$$
Remarks:
If a non-negative random variable $Z$ has infinite expected value, then its conditional expectation is defined by
$$\operatorname{E}[Z \mid \mathcal{G}] = \sup_{n \in \mathbb{N}} \operatorname{E}[\min(Z, n) \mid \mathcal{G}] \quad \text{a.s.}$$
On the right-hand side of the conditional Hölder inequality, 0 times ∞ as well as ∞ times 0 means 0. Multiplying $a > 0$ with ∞ gives ∞.
Proof of the conditional Hölder inequality:
Hölder's inequality for increasing seminorms
Let $S$ be a set and let $F(S, \mathbb{C})$ be the space of all complex-valued functions on $S$. Let $N$ be an increasing seminorm on $F(S, \mathbb{C})$, meaning that, for all real-valued functions $f, g \in F(S, \mathbb{C})$ we have the following implication (the seminorm is also allowed to attain the value ∞):
$$0 \le f \le g \quad\Longrightarrow\quad N(f) \le N(g).$$
Then:
$$N(|fg|) \le \bigl(N(|f|^p)\bigr)^{\frac{1}{p}} \bigl(N(|g|^q)\bigr)^{\frac{1}{q}},$$
where the numbers $p$ and $q$ are Hölder conjugates.
Remark: If $(S, \Sigma, \mu)$ is a measure space and $N(f)$ is the upper Lebesgue integral of $|f|$, then the restriction of $N$ to all $\Sigma$-measurable functions gives the usual version of Hölder's inequality.
Distances based on Hölder inequality
The Hölder inequality can be used to define statistical dissimilarity measures between probability distributions. These Hölder divergences are projective: they do not depend on the normalization factor of densities.
See also
Cauchy–Schwarz inequality
Minkowski inequality
Jensen's inequality
Young's inequality for products
Clarkson's inequalities
Brascamp–Lieb inequality
Citations
References
External links
Inequalities
Probabilistic inequalities
Theorems in functional analysis
Articles containing proofs
Lp spaces | Hölder's inequality | Mathematics | 2,199 |
10,889,931 | https://en.wikipedia.org/wiki/IEEE%201613 | IEEE-1613 is the IEEE standard detailing environmental and testing requirements for communications networking devices in electric power substations. The standard is sponsored by the IEEE Power & Energy Society.
External links
(current [2009] version)
(2011 amendments to current version)
(superseded by 2009 version)
IEEE standards
Electric power infrastructure | IEEE 1613 | Technology | 65 |
58,673,886 | https://en.wikipedia.org/wiki/Steve%20Butler%20%28mathematician%29 | Steven Kay Butler (born May 16, 1977) is an American mathematician specializing in graph theory and combinatorics. He is a Morrill Professor and the Barbara J. Janson Professor in Mathematics at Iowa State University.
Education and career
Butler earned his master's degree at Brigham Young University in 2003. His master's thesis was titled Bounding the Number of Graphs Containing Very Long Induced Paths. He completed a doctorate at the University of California, San Diego in 2008, authoring the dissertation Eigenvalues and Structures of Graphs, advised by Fan Chung. Upon completing his postdoctoral studies at the University of California, Los Angeles, Butler joined the Iowa State University faculty in 2011. In 2015, Butler became the 512th (and so far final) person to have an Erdős number of 1, when he published a paper with Paul Erdős and Ronald Graham on Egyptian fractions. In 2017, Butler was named the Barbara J. Janson Professor in Mathematics, and to a Morrill Professorship in 2022.
Work with undergraduates
Butler has been a project lead for the Iowa State University Math REU in 2015, 2017, 2019, 2022, 2023, and 2024, and will head a project in 2025.
References
External links
Official website
1977 births
Living people
Graph theorists
21st-century American mathematicians
Brigham Young University alumni
University of California, San Diego alumni | Steve Butler (mathematician) | Mathematics | 277 |
53,478,734 | https://en.wikipedia.org/wiki/HYDAC%20%28company%29 | HYDAC is a German company group that specializes in the production and distribution of components and systems as well as services related to hydraulics and fluidics. HYDAC comprises 15 legal entities, all of which are GmbHs (German limited liability companies). The CEOs of these companies are Alexander Dieter, Wolfgang Haering and Hartmut Herzog. Subsidiaries exist in more than 10 countries. In 2011, the company had approximately 5,500 employees.
History
HYDAC was founded in 1963 by Werner Dieter and Ottmar Schön, when an exclusive license for central Europe for a hydraulic accumulator was taken out. The name HYDAC is an abbreviation for "hydraulic accumulator".
Company structuring
In 2015, HYDAC had 9,000 employees, 50 subsidiaries, and 500 distribution and service partners. The company group is headquartered in Sulzbach (Saarland), where more than 3,000 of its employees are based.
In 2014, the subsidiary with the highest revenue was HYDAC Technology GmbH, with revenues of €445.6 million and 384 employees. In the same year, HYDAC International GmbH, which presently provides distribution services to the other subsidiaries of the group and had 466 employees, posted revenues of €75.9 million.
The group's subsidiaries are:
HYDAC Technology GmbH in Sulzbach / Saar
HYDAC Filtertechnik GmbH in Sulzbach / Saar
HYDAC Fluidtechnik GmbH in Sulzbach / Saar
HYDAC International GmbH in Sulzbach / Saar
HYDAC Verwaltung GmbH in Sulzbach / Saar
HYDAC Electronic GmbH in Gersweiler / Saar
HYDAC Accessories GmbH in Sulzbach / Saar
HYDAC Process Technology GmbH in Neunkirchen / Saar
HYDAC System GmbH in Sulzbach / Saar
HYDAC Service GmbH in Sulzbach / Saar
HYDAC PTK Produktionstechnik GmbH in Sulzbach / Saar
HYDAC Cooling GmbH in Sulzbach / Saar
HYDAC Filter Systems GmbH in Sulzbach / Saar
HYDAC Grundstücksverwaltung GmbH in Sulzbach / Saar
HYDAC FluidCareCenter GmbH in Sulzbach / Saar
HYDAC Drive Center GmbH in Langenau
HYDAC Speichertechnik GmbH in Sulzbach / Saar
HYDROSAAR GmbH in Sulzbach / Saar
Kraeft GmbH Systemtechnik in Bremerhaven
NORDHYDRAULIC in Kramfors, Sweden
QHP in Chester, England
BIERI in Liebefeld, Switzerland
HYCOM in Apeldoorn, Netherlands
References
Hydraulic accumulators
Hydraulic engineering
German companies established in 1963
Companies based in Saarland | HYDAC (company) | Physics,Engineering,Environmental_science | 580 |
26,160,599 | https://en.wikipedia.org/wiki/Tiospirone | Tiospirone (BMY-13,859), also sometimes called tiaspirone or tiosperone, is an atypical antipsychotic of the azapirone class. It was investigated as a treatment for schizophrenia in the late 1980s and was found to have an effectiveness equivalent to those of typical antipsychotics in clinical trials but without causing extrapyramidal side effects. However, development was halted and it was not marketed. Perospirone, another azapirone derivative with antipsychotic properties, was synthesized and assayed several years after tiospirone. It was found to be both more potent and more selective in comparison and was commercialized instead.
Pharmacology
Pharmacodynamics
Tiospirone acts as a 5-HT1A receptor partial agonist, 5-HT2A, 5-HT2C, and 5-HT7 receptor inverse agonist, and D2, D4, and α1-adrenergic receptor antagonist.
Binding profile
See also
Azapirone
References
Abandoned drugs
Antipsychotics
Azapirones
Benzothiazoles
Piperazines | Tiospirone | Chemistry | 247 |
10,588,834 | https://en.wikipedia.org/wiki/Apostolos%20Gerasoulis | Apostolos Gerasoulis is a Greek professor of computer science at Rutgers University and the co-creator of Teoma, an Internet search engine that powers Ask.com, which he co-founded with his colleagues at Rutgers in 2000. Gerasoulis later served as the vice president of search technology at Ask.com before leaving the company in 2010. He has appeared in TV commercials for Ask.com.
References
Living people
American computer scientists
Rutgers University faculty
Greek academics
1952 births
Internet search engines
People from Ioannina | Apostolos Gerasoulis | Technology | 120 |
51,215,300 | https://en.wikipedia.org/wiki/Electronic%20specific%20heat | In solid state physics the electronic specific heat, sometimes called the electron heat capacity, is the specific heat of an electron gas. Heat is transported by phonons and by free electrons in solids. For pure metals, however, the electronic contributions dominate in the thermal conductivity. In impure metals, the electron mean free path is reduced by collisions with impurities, and the phonon contribution may be comparable with the electronic contribution.
Introduction
Although the Drude model was fairly successful in describing the electron motion within metals, it has some erroneous aspects: it predicts the Hall coefficient with the wrong sign compared to experimental measurements, and the additional electronic heat capacity it assumes on top of the lattice heat capacity, namely $\tfrac{3}{2}k_{\rm B}$ per electron at elevated temperatures, is inconsistent with experimental values, since measurements of metals show no deviation from the Dulong–Petit law. The observed electronic contribution to the heat capacity is usually less than one percent of the classically expected value $\tfrac{3}{2}Nk_{\rm B}$. This problem seemed insoluble prior to the development of quantum mechanics. This paradox was solved by Arnold Sommerfeld after the discovery of the Pauli exclusion principle, who recognised that the replacement of the Boltzmann distribution with the Fermi–Dirac distribution was required and incorporated it in the free electron model.
Derivation within the free electron model
Internal energy
When a metallic system is heated from absolute zero, not every electron gains an energy of order $k_{\rm B}T$ as equipartition would dictate. Only those electrons in atomic orbitals within an energy range of $k_{\rm B}T$ of the Fermi level are thermally excited. Electrons, in contrast to a classical gas, can only move into free states in their energetic neighbourhood.
The one-electron energy levels are specified by the wave vector $\mathbf{k}$ through the relation $\varepsilon(\mathbf{k}) = \hbar^2 k^2/(2m)$, with $m$ the electron mass. At the Fermi wave vector $k_{\rm F}$ this relation yields the Fermi energy $\varepsilon_{\rm F} = \hbar^2 k_{\rm F}^2/(2m)$, which separates the occupied energy states from the unoccupied ones and corresponds to a spherical surface (the Fermi surface) in k-space. As $T \to 0$ the ground state distribution becomes:
$$f(\varepsilon_{\mathbf{k}}) = \begin{cases} 1 & \varepsilon_{\mathbf{k}} < \mu \\ 0 & \varepsilon_{\mathbf{k}} > \mu \end{cases}$$
where
$f(\varepsilon_{\mathbf{k}}) = \dfrac{1}{e^{(\varepsilon_{\mathbf{k}} - \mu)/k_{\rm B}T} + 1}$ is the Fermi–Dirac distribution,
$\varepsilon_{\mathbf{k}}$ is the energy of the energy level corresponding to wave vector $\mathbf{k}$,
$\mu$ is the chemical potential, which in the limit $T \to 0$ equals the Fermi energy $\varepsilon_{\rm F}$.
This implies that the ground state is the only occupied state for electrons in the limit $T \to 0$; the Fermi–Dirac distribution takes the Pauli exclusion principle into account. The internal energy of a system within the free electron model is given by the sum over one-electron levels times the mean number of electrons in that level:
$$U = 2\sum_{\mathbf{k}} \varepsilon(\mathbf{k})\, f(\varepsilon(\mathbf{k}))$$
where the factor of 2 accounts for the spin up and spin down states of the electron.
Reduced internal energy and electron density
Using the approximation that a sum over a smooth function $F(\mathbf{k})$ over all allowed values of $\mathbf{k}$ for a finite large system is given by:
$$\sum_{\mathbf{k}} F(\mathbf{k}) = \frac{V}{(2\pi)^3} \int F(\mathbf{k})\,\mathrm{d}\mathbf{k},$$
where $V$ is the volume of the system.
For the reduced internal energy $u = U/V$ the expression for $U$ can be rewritten as:
$$u = \frac{1}{4\pi^3} \int \varepsilon(\mathbf{k})\, f(\varepsilon(\mathbf{k}))\,\mathrm{d}\mathbf{k},$$
and the expression for the electron density $n = N/V$ can be written as:
$$n = \frac{1}{4\pi^3} \int f(\varepsilon(\mathbf{k}))\,\mathrm{d}\mathbf{k}.$$
The integrals above can be evaluated using the fact that the dependence of the integrals on $\mathbf{k}$ can be changed to dependence on $\varepsilon$ through the relation for the electronic energy when described as free particles, $\varepsilon(\mathbf{k}) = \hbar^2 k^2/(2m)$, which yields for an arbitrary function $F$:
$$\frac{1}{4\pi^3} \int F(\varepsilon(\mathbf{k}))\,\mathrm{d}\mathbf{k} = \int_{-\infty}^{\infty} F(\varepsilon)\, g(\varepsilon)\,\mathrm{d}\varepsilon,$$
with
$$g(\varepsilon) = \begin{cases} \dfrac{m}{\hbar^2 \pi^2} \sqrt{\dfrac{2m\varepsilon}{\hbar^2}} & \varepsilon > 0 \\ 0 & \varepsilon \le 0 \end{cases}$$
which is known as the density of levels or density of states per unit volume, such that $g(\varepsilon)\,\mathrm{d}\varepsilon$ is the total number of states between $\varepsilon$ and $\varepsilon + \mathrm{d}\varepsilon$. Using the expressions above the integrals can be rewritten as:
$$u = \int_{-\infty}^{\infty} \varepsilon\, g(\varepsilon)\, f(\varepsilon)\,\mathrm{d}\varepsilon, \qquad n = \int_{-\infty}^{\infty} g(\varepsilon)\, f(\varepsilon)\,\mathrm{d}\varepsilon.$$
These integrals can be evaluated for temperatures that are small compared to the Fermi temperature by applying the Sommerfeld expansion and using the approximation that $\mu$ differs from its $T = 0$ value $\varepsilon_{\rm F}$ only by terms of order $T^2$. The expressions become:
$$u = \int_0^{\varepsilon_{\rm F}} \varepsilon\, g(\varepsilon)\,\mathrm{d}\varepsilon + \varepsilon_{\rm F}\left[(\mu - \varepsilon_{\rm F})\, g(\varepsilon_{\rm F}) + \frac{\pi^2}{6}(k_{\rm B}T)^2\, g'(\varepsilon_{\rm F})\right] + \frac{\pi^2}{6}(k_{\rm B}T)^2\, g(\varepsilon_{\rm F}),$$
$$n = \int_0^{\varepsilon_{\rm F}} g(\varepsilon)\,\mathrm{d}\varepsilon + \left[(\mu - \varepsilon_{\rm F})\, g(\varepsilon_{\rm F}) + \frac{\pi^2}{6}(k_{\rm B}T)^2\, g'(\varepsilon_{\rm F})\right].$$
For the ground state configuration the first terms (the integrals) of the expressions above yield the internal energy and electron density of the ground state. Since the electron density does not depend on temperature, the bracketed term in the expression for $n$ reduces to zero, so $(\mu - \varepsilon_{\rm F})\, g(\varepsilon_{\rm F}) = -\frac{\pi^2}{6}(k_{\rm B}T)^2\, g'(\varepsilon_{\rm F})$. Substituting this into the expression for the internal energy, one finds the following expression:
$$u = u_0 + \frac{\pi^2}{6}(k_{\rm B}T)^2\, g(\varepsilon_{\rm F}).$$
Final expression
The contribution of electrons within the free electron model is given by:
$$c_V = \left(\frac{\partial u}{\partial T}\right)_n = \frac{\pi^2}{3}\, k_{\rm B}^2\, T\, g(\varepsilon_{\rm F}),$$
and, for free electrons:
$$c_V = \frac{\pi^2}{2} \left(\frac{k_{\rm B}T}{\varepsilon_{\rm F}}\right) n\, k_{\rm B}.$$
Compared to the classical result ($c_V = \tfrac{3}{2} n k_{\rm B}$), it can be concluded that this result is depressed by a factor of $\tfrac{\pi^2}{3}\tfrac{k_{\rm B}T}{\varepsilon_{\rm F}}$, which at room temperature is of order of magnitude $10^{-2}$. This explains the absence of an electronic contribution to the heat capacity as measured experimentally.
Note that in this derivation $\varepsilon_{\rm F}$ is often denoted by $E_{\rm F}$, which is known as the Fermi energy. In this notation, the electron heat capacity becomes:
$$C_{el} = \frac{\pi^2}{3}\, k_{\rm B}^2\, T\, g(E_{\rm F}),$$
and for free electrons:
$$C_{el} = \frac{\pi^2}{2}\, N k_{\rm B}\, \frac{T}{T_{\rm F}},$$
using the definition $k_{\rm B} T_{\rm F} = E_{\rm F}$ for the Fermi temperature.
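As a rough illustration of the size of this suppression (taking $T = 300\,\mathrm{K}$ and a representative Fermi temperature $T_{\rm F} \approx 8 \times 10^4\,\mathrm{K}$, close to the free-electron value for copper):

$$\frac{C_{el}}{\tfrac{3}{2} N k_{\rm B}} = \frac{\pi^2}{3}\,\frac{T}{T_{\rm F}} \approx 3.29 \times \frac{300}{8 \times 10^4} \approx 1.2 \times 10^{-2},$$

i.e. the electronic term is roughly one percent of the classical prediction, consistent with the experimental observations described above.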
Comparison with experimental results for the heat capacity of metals
For temperatures below both the Debye temperature and the Fermi temperature, the heat capacity of metals can be written as a sum of electron and phonon contributions that are linear and cubic respectively: $C = \gamma T + AT^3$. The coefficient $\gamma$ can be calculated and determined experimentally.
The free electrons in a metal do not usually lead to a strong deviation from the Dulong–Petit law at high temperatures. Since $\gamma T$ is linear in $T$ and $AT^3$ is cubic in $T$, at low temperatures the lattice contribution vanishes faster than the electronic contribution and the latter can be measured. The deviation between the approximated and the experimentally determined electronic contribution to the heat capacity of a metal is not too large, although a few metals deviate significantly from this approximated prediction. Measurements indicate that these errors are associated with the electron mass being effectively changed in the metal; for the calculation of the electron heat capacity, the effective mass of an electron should be considered instead. For Fe and Co the large deviations are attributed to the partially filled d-shells of these transition metals, whose d-bands lie at the Fermi energy.
The alkali metals are expected to have the best agreement with the free electron model since these metals have only one s-electron outside a closed shell. However, even sodium, which is considered to be the closest to a free-electron metal, is determined to have a $\gamma$ more than 25 per cent higher than expected from the theory.
Certain effects influence the deviation from the approximation:
The interaction of the conduction electrons with the periodic potential of the rigid crystal lattice is neglected.
The interaction of the conduction electrons with phonons is also neglected. This interaction causes changes in the effective mass of the electron and therefore it affects the electron energy.
The interaction of the conduction electrons with themselves is also ignored. A moving electron causes an inertial reaction in the surrounding electron gas.
Superconductors
Superconductivity occurs in many metallic elements of the periodic system and also in alloys, intermetallic compounds, and doped semiconductors. This effect occurs upon cooling the material. The entropy decreases on cooling below the critical temperature for superconductivity, which indicates that the superconducting state is more ordered than the normal state. The entropy change is small; this must mean that only a very small fraction of electrons participate in the transition to the superconducting state, yet the electronic contribution to the heat capacity changes drastically. There is a sharp jump of the heat capacity at the critical temperature, while for temperatures above the critical temperature the heat capacity is linear with temperature.
Derivation
The calculation of the electron heat capacity for superconductors can be done in the BCS theory. The entropy of a system of fermionic quasiparticles, in this case Cooper pairs, is:
$$S = -2k_{\rm B} \sum_{\mathbf{k}} \left[(1 - f_{\mathbf{k}})\ln(1 - f_{\mathbf{k}}) + f_{\mathbf{k}}\ln f_{\mathbf{k}}\right]$$
where $f_{\mathbf{k}} = \dfrac{1}{e^{E_{\mathbf{k}}/k_{\rm B}T} + 1}$ is the Fermi–Dirac distribution with $E_{\mathbf{k}} = \sqrt{\epsilon_{\mathbf{k}}^2 + \Delta_{\mathbf{k}}^2}$ and
$\epsilon_{\mathbf{k}}$ is the particle energy with respect to the Fermi energy,
$\Delta_{\mathbf{k}}$ is the energy gap parameter, where $v_{\mathbf{k}}^2$ and $u_{\mathbf{k}}^2 = 1 - v_{\mathbf{k}}^2$ represent the probability that a Cooper pair is occupied or unoccupied, respectively.
The heat capacity is given by $C_{es} = T\,\dfrac{\mathrm{d}S}{\mathrm{d}T}$.
Differentiating the Fermi–Dirac distribution with respect to temperature (the gap $\Delta$ itself depends on $T$; here $\beta = 1/k_{\rm B}T$) gives:
$$\frac{\mathrm{d}f_{\mathbf{k}}}{\mathrm{d}T} = \left(-\frac{\partial f_{\mathbf{k}}}{\partial E_{\mathbf{k}}}\right)\frac{1}{T}\left(E_{\mathbf{k}} + \frac{\beta}{2E_{\mathbf{k}}}\,\frac{\mathrm{d}\Delta^2}{\mathrm{d}\beta}\right).$$
Substituting this in the expression for the heat capacity, and again applying that the sum over $\mathbf{k}$ in the reciprocal space can be replaced by an integral in $\epsilon$ multiplied by the density of states $g(E_{\rm F})$, this yields:
$$C_{es} = 2\beta k_{\rm B}\, g(E_{\rm F}) \int_{-\infty}^{\infty} \left(-\frac{\partial f}{\partial E}\right)\left(E^2 + \frac{\beta}{2}\,\frac{\mathrm{d}\Delta^2}{\mathrm{d}\beta}\right)\mathrm{d}\epsilon.$$
Characteristic behaviour for superconductors
To examine the typical behaviour of the electron heat capacity for species that can transition to the superconducting state, three regions must be defined:
Above the critical temperature
At the critical temperature
Below the critical temperature
Superconductors at T > Tc
For $T > T_{\rm c}$ it holds that $\Delta = 0$, and the electron heat capacity becomes:
$$C_{es} = \frac{\pi^2}{3}\, k_{\rm B}^2\, T\, g(E_{\rm F}).$$
This is just the result for a normal metal derived in the section above, as expected since a superconductor behaves as a normal conductor above the critical temperature.
Superconductors at T < Tc
For $T < T_{\rm c}$ the electron heat capacity for superconductors exhibits an exponential decay of the form:
$$C_{es} \propto e^{-\Delta(0)/k_{\rm B}T}.$$
Superconductors at T = Tc
At the critical temperature the heat capacity is discontinuous. This discontinuity in the heat capacity indicates that the transition for a material from normal conducting to superconducting is a second order phase transition.
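BCS theory also predicts the universal (weak-coupling) size of this jump relative to the normal-state electronic heat capacity at $T_{\rm c}$:

$$\frac{C_{es} - C_{en}}{C_{en}}\bigg|_{T = T_{\rm c}} = \frac{\Delta C}{\gamma T_{\rm c}} \approx 1.43,$$

a ratio that is reasonably well obeyed by conventional superconductors.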
See also
Drude model
Fermi–Dirac statistics
Thermal effective mass
Effective mass
Superconductivity
BCS theory
References
General references:
Condensed matter physics
Thermodynamic properties | Electronic specific heat | Physics,Chemistry,Materials_science,Mathematics,Engineering | 1,736 |
75,337,763 | https://en.wikipedia.org/wiki/Chi2%20Fornacis |
Chi2 Fornacis, Latinized from χ2 Fornacis, is a solitary star located in the southern constellation Fornax, the furnace. It is faintly visible to the naked eye as an orange-hued point of light with an apparent magnitude of 5.70. Gaia DR3 parallax measurements imply a distance of 476 light-years and it is currently receding with a heliocentric radial velocity of approximately . At its current distance, Chi2 Fornacis' brightness is diminished by an interstellar extinction of 0.11 magnitudes and it has an absolute magnitude of 0.00.
Chi2 Fornacis is an old-disk star and it has a stellar classification of K2 III. The class indicates that it is an evolved K-type giant that has ceased hydrogen fusion at its core and left the main sequence. It has 118% the mass of the Sun but it has expanded to 23.58 times the radius of the Sun. It radiates 194 times the luminosity of the Sun from its photosphere at an effective temperature of . Chi2 Fornacis is slightly metal enriched with a near-solar iron abundance of [Fe/H] = +0.02. It spins too slowly for its projected rotational velocity to be measured accurately, having a projected rotational velocity lower than .
The star was observed to be variable in infrared light during a 1991 IRAS survey for galaxy clusters. However, its variability in optical light is unknown. In addition, subsequent observations have not confirmed the variability in infrared and optical light. The lenticular galaxy NGC 1380 lies 2 degrees north-northeast of Chi2 Fornacis.
References
Fornax
Fornacis, Chi2
Suspected variables
K-type giants
Fornacis, 91
CD-36 01306
021574
016112
1054
00142889216 | Chi2 Fornacis | Astronomy | 398 |
47,763,703 | https://en.wikipedia.org/wiki/Cortinarius%20alboviolaceus | Cortinarius alboviolaceus is a basidiomycete mushroom of the genus Cortinarius native to Europe and North America.
Description
The mushroom is lilac, later yellowing and often becoming whitish or grayish. Its cap is 3–8 cm wide, conical to umbonate, dry, silky, with whitish to pale lilac flesh. The gills are adnate or adnexed, grayish lilac, becoming brown as the spores mature and lend their color. The stalk is 4–8 cm tall and 0.5–1.5 cm wide, larger at the base, sometimes with white veil tissue. The odour and taste are indistinct.
Similar species
Similar species include the essentially identical C. griseoviolaceus, as well as Inocybe lilacina. C. camphoratus is similar, but with a foul odour. C. malachius has a grayish cap and, when dry, a scaly surface.
Potential edibility
Its edibility is considered unknown by some guides but it is not recommended due to its similarity to deadly poisonous species. At least one guide considers it edible, but not recommended. Conflicting accounts indicate that it may itself be poisonous.
References
alboviolaceus
Fungi described in 1801
Fungi of Europe
Taxa named by Christiaan Hendrik Persoon
Fungus species | Cortinarius alboviolaceus | Biology | 279 |
47,590,590 | https://en.wikipedia.org/wiki/Retroprogesterone | Retroprogesterone, also known as 9β,10α-progesterone or as 9β,10α-pregn-4-ene-3,20-dione, is a progestin which was never marketed. It is a stereoisomer of the naturally occurring progestogen progesterone, in which the hydrogen atom at the 9th carbon is in the α-position (below the plane) instead of the β-position (above the plane) and the methyl group at the 10th carbon is in the β-position instead of the α-position. In other words, the atom positions at the two carbons have been reversed relative to progesterone, hence the name retroprogesterone. This reversal results in a "bent" configuration in which the plane of rings A and B is orientated at a 60° angle below the rings C and D. This configuration is ideal for interaction with the progesterone receptor, with retroprogesterone binding with high affinity to this receptor. However, the configuration is not as ideal for binding to other steroid hormone receptors, and as a result, retroprogesterone derivatives have increased selectivity for the progesterone receptor relative to progesterone.
Retroprogesterone is the parent compound of a group of progestins consisting of the marketed progestins dydrogesterone (6-dehydroretroprogesterone) and trengestone (1,6-didehydro-6-chlororetroprogesterone) and the never-marketed progestin Ro 6-3129, as well as the active metabolites of these progestins like 20α-dihydrodydrogesterone and 20α-dihydrotrengestone (i.e., the 20α-hydroxylated analogues).
Chemistry
See also
17α-Hydroxyprogesterone
19-Norprogesterone
17α-Ethynyltestosterone
19-Nortestosterone
17α-Spirolactone
References
Abandoned drugs
Diketones
Pregnanes
Progestogens | Retroprogesterone | Chemistry | 456 |
2,152,618 | https://en.wikipedia.org/wiki/COSILAB | COSILAB is a software tool for solving complex chemical kinetics problems. It is used worldwide in research and industry, in particular in automotive, combustion, and chemical processing applications.
Problems to be solved by COSILAB may involve thousands of reactions amongst hundreds of species for practically any mixture composition, pressure and temperature. Its computational capabilities allow for a complex chemical reaction to be studied in detail, including intermediate compounds, trace compounds and pollutants.
Whilst complex chemistry is accounted for, chemical reactor or combustion geometries that can be handled by COSILAB are relatively simple. For the purpose of "real-life" simulations this limitation can be overcome, however, by using a library of pre-compiled subroutines and functions that users can link to their own code written in Fortran, the C programming language or C++. In this way, it is possible to develop fully two-dimensional or three-dimensional CFD (computational fluid dynamics) codes that are able to capture fairly realistic geometries.
The development of codes like COSILAB is motivated by a worldwide attempt to keep the environment clean and to save—or at least make best use of—the continuously diminishing fossil fuel resources.
External links
United States Environmental Protection Agency on NOX
World Energy Council
Softpredict's COSILAB page,
Combustion
Computational chemistry software | COSILAB | Chemistry | 277 |
25,317,960 | https://en.wikipedia.org/wiki/Hyperfinite%20set | In nonstandard analysis, a branch of mathematics, a hyperfinite set or *-finite set is a type of internal set. An internal set H of internal cardinality g ∈ *N (the hypernaturals) is hyperfinite if and only if there exists an internal bijection between G = {1,2,3,...,g} and H. Hyperfinite sets share the properties of finite sets: A hyperfinite set has minimal and maximal elements, and a hyperfinite union of a hyperfinite collection of hyperfinite sets may be derived. The sum of the elements of any hyperfinite subset of *R always exists, leading to the possibility of well-defined integration.
Hyperfinite sets can be used to approximate other sets. If a hyperfinite set approximates an interval, it is called a near interval with respect to that interval. Consider a hyperfinite set $K = \{k_1, k_2, \dots, k_n\}$ with a hypernatural $n$. $K$ is a near interval for $[a,b]$ if $k_1 = a$ and $k_n = b$, and if the difference between successive elements of $K$ is infinitesimal. Phrased otherwise, the requirement is that for every $r \in [a,b]$ there is a $k_i \in K$ such that $k_i \approx r$. This, for example, allows for an approximation to the unit circle, considered as the set $e^{i\theta}$ for $\theta$ in the interval $[0, 2\pi]$.
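For instance, taking an unlimited hypernatural $N \in {}^*\mathbb{N} \setminus \mathbb{N}$, a standard near interval for $[a,b]$ is the equally spaced partition

$$K = \left\{\, a + k\,\frac{b-a}{N} \;:\; k = 0, 1, \dots, N \right\},$$

which is internal, has internal cardinality $N + 1$, and has successive elements differing by the infinitesimal $(b-a)/N$.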
In general, subsets of hyperfinite sets are not hyperfinite, often because they do not contain the extreme elements of the parent set.
Ultrapower construction
In terms of the ultrapower construction, the hyperreal line *R is defined as the collection of equivalence classes of sequences $\langle u_n \rangle$ of real numbers. Namely, the equivalence class defines a hyperreal, denoted $[u_n]$ in Goldblatt's notation. Similarly, an arbitrary hyperfinite set in *R is of the form $[A_n]$, and is defined by a sequence $\langle A_n \rangle$ of finite sets $A_n \subseteq \mathbb{R}$, $n = 1, 2, \dots$
References
External links
Nonstandard analysis | Hyperfinite set | Mathematics | 408 |
12,412,341 | https://en.wikipedia.org/wiki/Tachykinin%20receptor%201 | The tachykinin receptor 1 (TACR1) also known as neurokinin 1 receptor (NK1R) or substance P receptor (SPR) is a G protein coupled receptor found in the central nervous system and peripheral nervous system. The endogenous ligand for this receptor is Substance P, although it has some affinity for other tachykinins. The protein is the product of the TACR1 gene.
Structure
Tachykinins are a family of neuropeptides that share the same hydrophobic C-terminal region with the amino acid sequence Phe-X-Gly-Leu-Met-NH2, where X represents a hydrophobic residue that is either an aromatic or a beta-branched aliphatic. The N-terminal region varies between different tachykinins. The term tachykinin originates in the rapid onset of action caused by the peptides in smooth muscles.
Substance P (SP) is the most researched and potent member of the tachykinin family. It is an undecapeptide with the amino acid sequence Arg-Pro-Lys-Pro-Gln-Gln-Phe-Phe-Gly-Leu-Met-NH2. SP binds to all three of the tachykinin receptors, but it binds most strongly to the NK1 receptor.
The tachykinin NK1 receptor consists of 407 amino acid residues, and it has a molecular weight of 58,000. The NK1 receptor, like the other tachykinin receptors, is made of seven hydrophobic transmembrane (TM) domains with three extracellular and three intracellular loops, an amino-terminus and a cytoplasmic carboxy-terminus. The loops have functional sites, including two cysteines for a disulfide bridge, Asp-Arg-Tyr, responsible for association with arrestin, and Lys/Arg-Lys/Arg-X-X-Lys/Arg, which interacts with G-proteins. The binding site for substance P and other agonists and antagonists is found between the second and third transmembrane domains. The gene encoding the NK-1 receptor is located on human chromosome 2, and the receptor itself is expressed at the cell surface.
Function
The binding of SP to the NK1 receptor has been associated with the transmission of stress signals and pain, the contraction of smooth muscles, and inflammation. NK1 receptor antagonists have also been studied in migraine, emesis, and psychiatric disorders. In fact, aprepitant has been proved effective in a number of pathophysiological models of anxiety and depression. Other diseases in which the NK1 receptor system is involved include asthma, rheumatoid arthritis, and gastrointestinal disorders.
Tissue distribution
The NK1 receptor can be found in both the central and peripheral nervous system. It is present in neurons, brainstem, vascular endothelial cells, muscle, gastrointestinal tracts, genitourinary tract, pulmonary tissue, thyroid gland, and different types of immune cells.
Mechanisms of action
SP is synthesized by neurons and transported to synaptic vesicles; the release of SP is accomplished through the depolarizing action of calcium-dependent mechanisms. When NK1 receptors are stimulated, they can generate various second messengers, which can trigger a wide range of effector mechanisms that regulate cellular excitability and function.
There are three well-defined, independent second messenger systems:
Stimulation via phospholipase C, leading to phosphatidyl inositol turnover and Ca mobilization from both intra- and extracellular sources.
Arachidonic acid mobilization via phospholipase A2.
cAMP accumulation via stimulation of adenylate cyclase.
It has also been reported that SP elicits interleukin-1 (IL-1) production in macrophages, sensitizes neutrophils, and enhances dopamine release in the substantia nigra region in cat brain. From spinal neurons, SP is known to evoke release of neurotransmitters like acetylcholine, histamine, and GABA. It also secretes catecholamines and plays a role in the regulation of blood pressure and hypertension. Likewise, SP is known to bind to N-methyl-D-aspartate (NMDA) receptors, eliciting excitation with calcium ion influx, which further releases nitric oxide. Studies in frogs have shown that SP elicits the release of prostaglandin E2 and prostacyclin by the arachidonic acid pathway, which leads to an increase in corticosteroid output.
Clinical significance
In combination therapy, NK1 receptor antagonists appear to offer better control of delayed emesis and post-operative emesis than drug therapy without NK1 receptor antagonists. NK1 receptor antagonists block responses to a broader range of emetic stimuli than the established 5-HT3 antagonist treatments. It has been reported that centrally-acting NK1 receptor antagonists, such as CP-99994, inhibit emesis induced by apomorphine and loperamide, two compounds that act through central mechanisms.
This receptor is considered an attractive drug target, particularly with regards to potential analgesics and anti-depressants. It is also a potential treatment for alcoholism and opioid addiction. In addition, it has been identified as a candidate in the etiology of bipolar disorder. Finally NK1R antagonists may also have a role as novel antiemetics and hypnotics.
Neurokinin receptor 1 (NK-1R) also plays a significant role in cancer progression. NK-1R is overexpressed in various cancer types and is activated by substance P (SP). This activation promotes tumor cell proliferation, migration, and invasion while inhibiting apoptosis. The SP/NK-1R system is involved in angiogenesis, chronic inflammation, and the Warburg effect, all of which contribute to tumor growth. NK-1R antagonists, such as aprepitant, have shown promise as potential anticancer treatments by inhibiting tumor growth, inducing apoptosis, and blocking metastasis. The overexpression of NK-1R in tumors may also serve as a prognostic biomarker.
Ligands
Many selective ligands for NK1 are now available, several of which have gone into clinical use as antiemetics.
Agonists
GR-73632 - potent and selective agonist, EC50 2nM, 5-amino acid polypeptide chain. CAS# 133156-06-6
Antagonists
Aprepitant
Casopitant
Elinzanetant
Ezlopitant
Fosaprepitant
Lanepitant
Maropitant
Rolapitant
Vestipitant
L-733,060
L-741,671
L-742,694
RP-67580 - potent and selective antagonist, Ki 2.9nM, (3aR,7aR)-Octahydro-2-[1-imino-2-(2-methoxyphenyl)ethyl ]-7,7-diphenyl-4H-isoindol, CAS# 135911-02-3
RPR-100,893
CP-96345
CP-99994
GR-205,171/Vofopitant
TAK-637
T-2328
See also
NK1 receptor antagonist
Tachykinin receptor
Discovery and development of neurokinin 1 receptor antagonists
References
Further reading
External links
G protein-coupled receptors
Molecular neuroscience
Biology of bipolar disorder | Tachykinin receptor 1 | Chemistry | 1,613 |
66,612,002 | https://en.wikipedia.org/wiki/Evolutionary%20Classification%20of%20Protein%20Domains | The Evolutionary Classification of Protein Domains (ECOD) is a biological database that classifies protein domains available from the Protein Data Bank. The ECOD tries to determine the evolutionary relationships between proteins.
Similar to Pfam, CATH, and SCOP, ECOD compiles domains instead of whole proteins. However, ECOD focuses more heavily on evolutionary relationships: instead of grouping proteins by folds, which may simply represent convergent evolution, ECOD groups proteins by demonstrable homology only.
References
Protein structure
Protein classification
Protein databases
Protein superfamilies | Evolutionary Classification of Protein Domains | Chemistry,Biology | 114 |
17,988,849 | https://en.wikipedia.org/wiki/VR%20%28nerve%20agent%29 | VR (Russian VX, VXr, Soviet V-gas, GOSNIIOKhT substance No. 33, Agent "November") is a "V-series" unitary nerve agent closely related (it is an isomer) to the better-known VX nerve agent. It became a prototype for the series of Novichok agents. According to chemical weapons expert Jonathan Tucker, the first binary formulation developed under the Soviet Foliant program was used to make Substance 33, differing from VX only in the alkyl substituents on its nitrogen and oxygen atoms. "This weapon was given the code name Novichok."
History
The development of VR started in 1957, after the Soviet Union obtained information about detection of high level of toxicity in phosphorylthiocholines (the same year Lars-Erik Tammelin published his first articles on fluorophosphorylcholines and phosphorylthiocholines in Acta Chemica Scandinavica) by a team from the Soviet Union's Scientific Research Institute No. 42 (NII-42). Sergei Zotovich Ivin, Leonid Soborovsky, and Iya Danilovna Shilakova jointly developed this analogue of VX. They completed their work in 1963 and were later awarded the Lenin Prize for their achievement. A binary weapon comprising two less toxic precursors which mixed during flight to form Substance 33 was later developed by a team led by Nikolai Kuznetsov.
In 1972 the Soviets opened Cheboksary Khimprom, a manufacturing plant for VR in Novocheboksarsk. All facilities in USSR produced 15,557 tons of VR according to their declaration to the Organisation for the Prohibition of Chemical Weapons (OPCW), although most if not all of this has now been destroyed under disarmament treaties.
Comparison to VX
VR has similar lethal dose levels to VX (between 10–50 mg), as well as being similar in appearance. However, due to usage of diethylamino radicals instead of diisopropylamino it is more prone to decomposition. The former are worse at sterically protecting the nitrogen atom from attacking either phosphorus or the α-carbon atom adjacent to sulfur than the latter. According to UK Defence Science and Technology Laboratory Detection Department scientists Robin M. Black and John M. Harrison, chemical stability was an important factor why of all the similarly toxic phosphorylthiocholines, ethyl N-2-diisopropylaminoethyl methylphosphonothiolate in particular (now known as VX), was weaponized in the West.
According to Russian CW developer Vil Mirzayanov, in the late 1980s a group of GosNIIOKhT chemists led by Georgiy Drozd prepared a scientific report that Substance 33 had much lower shelf life than VX. The report, writes Mirzayanov, caused 'panic' in the institute top management and the military representative office, and later was met with administrative resistance. This finding was independently verified by another chemist Igor Revelskiy but his report wasn't approved either.
Following the poisoning of Sergei and Yulia Skripal, former head of the GosNIIOKhT security department Nikolay Volodin said in an interview to Novaya Gazeta that Substance 33 was decomposing too quickly in combat conditions, and implied that this fact may have influenced the decision to continue research on the Novichok program.
Toxicity
Both agents have similar symptoms and method of action to other nerve agents that act on cholinesterase, and treatment remains the same. However, the window for effectively treating second generation V series seizures is shorter, as they rapidly denature the acetylcholinesterase protein in a similar manner to soman, making treatment with the standard nerve gas antidote pralidoxime ineffective unless it is given very soon after exposure. Pre-treatment with pyridostigmine prior to exposure, and treatment with other drugs such as atropine and diazepam after exposure, will reduce symptoms of nerve agent toxicity but may not be sufficient to prevent death if a large dose of nerve agent has been absorbed. In addition to the standard seizures, some of the second generation V series agents are known to cause comas.
See also
A-234 (nerve agent)
Novichok agent
References
Acetylcholinesterase inhibitors
V-series nerve agents
Phosphonothioates
Chemical weapons
Cold War weapons of the Soviet Union
Soviet inventions
Science and technology in the Soviet Union
Diethylamino compounds
Soviet chemical weapons program
Isobutyl esters | VR (nerve agent) | Chemistry,Biology | 954 |
7,275,524 | https://en.wikipedia.org/wiki/DSOS | DSOS (Deep Six Operating System) was a real-time operating system (sometimes termed an operating system kernel) developed by Texas Instruments' division Geophysical Services Incorporated (GSI) in the mid-1970s.
Background
The Geophysical Services division of Texas Instruments' main business was to search for petroleum (oil). They would collect data in likely spots around the world, process that data using high performance computers, and produce analyses that guided oil companies toward promising sites for drilling.
Much of the oil being sought was to be found beneath the ocean, hence GSI maintained a fleet of ships to collect seismic data from remote regions of the world. To do this properly, it was essential that the ships be navigated precisely. If evidence of oil is found, one cannot just mark an X on a tree. The oil is thousands of feet below the ocean and typically hundreds of miles from land. But this was a decade or more before GPS existed, thus the processing load to keep an accurate picture of where a finding is, was considerable.
The GEONAV systems, which used DSOS (Frailey, 1975) as their operating system, performed the required navigation, and collected, processed, and stored the seismic data being received in real-time.
Naming
The name Deep Six Operating System was the brainchild of Phil Ward (subsequently a world-renowned GPS expert) who, at the time, was manager of the project and slightly skeptical of the computer science professor, Dennis Frailey, who insisted that an operating system was the solution to the problem at hand. In a sense the system lived up to its name, according to legend. Supposedly one of the ships hit an old World War II naval mine off the coast of Egypt and sank while being navigated by GEONAV and DSOS.
Why an operating system?
In the 1970s, most real-time applications did not use operating systems because the latter were perceived as adding too much overhead. Typical computers of the time had barely enough computing power to handle the tasks at hand. Moreover, most software of this type was written in assembly language. As a consequence, real-time systems were classic examples of spaghetti code: complex masses of assembly language software using all sorts of machine-dependent tricks to achieve maximum performance.
DSOS ran on a Texas Instruments 980 minicomputer being used for marine navigation on GSI's fleet. DSOS was created to bring some order to the chaos that was typical of real-time system design at that time. The 980 was, for its time, a relatively powerful small computer that offered memory protection and multiple-priority interrupt abilities. DSOS was designed to exploit these features.
Significance
DSOS (Frailey, 1975) was one of the pioneering efforts in real-time operating systems. Incorporating many of the principles being introduced at the time in mainframe computer systems, such as semaphores, memory management, task management, and software interrupts, it used a clever scheme to assure appropriate real-time performance while providing many services formerly uncommon in the real-time domain (such as an orderly way to communicate with external devices and computer operators, multitasking, maintaining records, a disciplined form of inter-task communication, a reliable real-time clock, memory protection, and debugging support). It remained in use for at least three decades and it demonstrated that, if well designed, an operating system can make a real-time system faster (and vastly more maintainable) than what had been typical before. Today, almost all real-time applications use operating systems of this type.
References
Real-time operating systems | DSOS | Technology | 729 |
35,274,587 | https://en.wikipedia.org/wiki/UPC%20and%20NPC | Usage Parameter Control (UPC) and Network Parameter Control (NPC) are functions that may be performed in a computer network. UPC may be performed at the input to a network "to protect network resources from malicious as well as unintentional misbehaviour". NPC is the same and done for the same reasons as UPC, but at the interface between two networks.
UPC and NPC may involve traffic shaping, where traffic is delayed until it conforms to the expected levels and timing, or traffic policing, where non-conforming traffic is either discarded immediately, or reduced in priority so that it may be discarded downstream in the network if it would cause or add to congestion.
Uses
In ATM
The actions for UPC and NPC in the ATM protocol are defined in ITU-T Recommendation I.371 Traffic control and congestion control in B ISDN and the ATM Forum's User-Network Interface (UNI) Specification. These provide a conformance definition, using a form of the leaky bucket algorithm called the Generic Cell Rate Algorithm (GCRA), which specifies how cells are checked for conformance with a cell rate, or its reciprocal emission interval, and jitter tolerance: either a Cell Delay Variation tolerance (CDVt) for testing conformance to the Peak Cell Rate (PCR) or a Burst Tolerance or Maximum Burst Size (MBS) for testing conformance to the Sustainable Cell Rate (SCR).
UPC and NPC define a Maximum Burst Size (MBS) parameter on the average or Sustained Cell Rate (SCR), and a Cell Delay Variation tolerance (CDVt) on the Peak Cell Rate (PCR) at which the bursts are transmitted. This MBS can be derived from or used to derive the maximum variation between the arrival time of traffic in the bursts from the time it would arrive at the SCR, i.e. a jitter about that SCR.
UPC and NPC are normally performed on a per Virtual Channel (VC) or per Virtual Path (VP) basis, i.e. the intervals are measured between cells bearing the same virtual channel identifier (VCI) and or virtual path identifier (VPI). If the function is implemented at, e.g., a switch input, then because cells on the different VCs and VPs arrive sequentially, only a single implementation of the function is required. However, this single implementation must be able to access the parameters relating to a specific connection using the VCI and or VPI to address them. This is often done using Content-addressable memory (CAM), where the VCI and or VPI form the addressable content.
Cells that fail to conform, i.e. because they come too soon after the preceding cell on the channel or path because the average rate is too high or because the jitter exceeds the tolerance, may be dropped, i.e. discarded, or reduced in priority so that they may be discarded downstream if there is congestion.
The GCRA, while possibly complicated to describe and understand, can be implemented very simply. While it is more likely to be implemented in hardware, as an example, an assembly language implementation can be written in as few as 15 to 20 instructions, with a longest execution path of as few as 8 to 12 instructions, depending on the language (availability of indirection and the orthogonality of the instruction set).
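To illustrate that simplicity, below is a minimal sketch of the virtual-scheduling form of the GCRA in Python (Python rather than assembly for readability; the class name, time units, and drop policy are illustrative assumptions, not part of the ITU-T or ATM Forum specifications):

```python
class GCRA:
    """Virtual-scheduling form of the Generic Cell Rate Algorithm (illustrative sketch).

    T   -- emission interval, the reciprocal of the policed cell rate (PCR or SCR)
    tau -- jitter tolerance (CDVt for the PCR, burst tolerance for the SCR)
    """

    def __init__(self, T, tau):
        self.T = T
        self.tau = tau
        self.tat = 0.0  # theoretical arrival time of the next conforming cell

    def conforms(self, arrival_time):
        """Return True if a cell arriving at arrival_time conforms."""
        if arrival_time < self.tat - self.tau:
            # Cell arrived earlier than the tolerance allows: non-conforming.
            # A policer would drop it or mark it low priority; tat is unchanged.
            return False
        # Conforming: advance the theoretical arrival time for the next cell.
        self.tat = max(arrival_time, self.tat) + self.T
        return True
```

In a per-connection policer, one `tat` value would be kept for each VC or VP and looked up by VPI and/or VCI, for example via the CAM addressing described above.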
In AFDX
Transmissions onto an Avionics Full-Duplex Switched Ethernet (AFDX) network are required to be limited to a Bandwidth Allocation Gap (BAG). Conformance to this BAG (and maximum transmission jitter) is then checked in the network switches in a similar way to UPC in ATM networks. However, the token bucket algorithm is recommended for AFDX, and a version that allows for variable length frames (one that counts bytes) is preferred over one that only counts frames and assumes that all frames are of the maximum permitted length.
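A byte-counting token bucket of the kind preferred for AFDX can be sketched in the same style (again an illustrative sketch, not the AFDX specification text; the rate and depth parameters would be derived from the BAG and the maximum frame length, and the names are assumptions):

```python
class ByteTokenBucket:
    """Token bucket that counts bytes, so variable-length frames are policed fairly."""

    def __init__(self, rate_bytes_per_s, depth_bytes):
        self.rate = rate_bytes_per_s  # sustained rate, e.g. max frame length / BAG
        self.depth = depth_bytes      # bucket depth, sized to absorb permitted jitter
        self.tokens = depth_bytes
        self.last_time = 0.0

    def conforms(self, arrival_time, frame_length):
        """Return True if a frame of frame_length bytes arriving now conforms."""
        # Credit tokens for the elapsed time, capped at the bucket depth.
        elapsed = arrival_time - self.last_time
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last_time = arrival_time
        if frame_length <= self.tokens:
            self.tokens -= frame_length  # admit the frame and spend its tokens
            return True
        return False                     # non-conforming frame
```

Counting bytes rather than frames avoids penalising links that carry many short frames, which is why this variant is preferred over a frame-counting bucket that assumes maximum-length frames.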
See also
Traffic contract
Connection admission control
Traffic shaping
Traffic policing (communications)
Leaky Bucket
Token bucket
Generic Cell Rate Algorithm
Audio Video Bridging
References
Computer networking
Asynchronous Transfer Mode | UPC and NPC | Technology,Engineering | 861 |
12,251,526 | https://en.wikipedia.org/wiki/HD%20171028 | HD 171028 is a star with an exoplanet companion in the equatorial constellation of Ophiuchus. With an apparent visual magnitude of 8.3, it is too faint to be readily visible with the naked eye. Unlike most planet-harboring stars, it does not have a Hipparcos number. The star is located at a distance of approximately 365 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +13.5 km/s.
This is a yellow-hued G-type star of unknown luminosity class with a stellar classification of G0. It is a metal-poor star belonging to the thin disk population. HD 171028 is estimated to be nearly five billion years old and is spinning with a projected rotational velocity of 2.3 km/s. It has the same mass as the Sun, but the radius is 2.4 times larger. The star is radiating 5.4 times the luminosity of the Sun from its photosphere at an effective temperature of 5,671 K.
In the summer of 2007, a Jovian planetary companion was discovered by the HARPS planet search program using the radial velocity method. This object is orbiting at a distance of from the host star with a period of and an eccentricity (ovalness) of 0.59. Since the inclination of the orbit is unknown, only a minimum mass can be determined. This planet has at least double the mass of Jupiter.
See also
List of extrasolar planets
References
G-type stars
Planetary systems with one confirmed planet
Ophiuchus
Durchmusterung objects
171028 | HD 171028 | Astronomy | 335 |
19,541,494 | https://en.wikipedia.org/wiki/Cloud%20computing | Cloud computing is "a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand," according to ISO.
Essential characteristics
In 2011, the National Institute of Standards and Technology (NIST) identified five "essential characteristics" for cloud systems. Below are the exact definitions according to NIST:
On-demand self-service: "A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider."
Broad network access: "Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations)."
Resource pooling: "The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."
Rapid elasticity: "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time."
Measured service: "Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service."
By 2023, the International Organization for Standardization (ISO) had expanded and refined the list.
History
The history of cloud computing extends back to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry (RJE). The "data center" model, where users submitted jobs to operators to run on mainframes, was predominantly used during this era. This was a time of exploration and experimentation with ways to make large-scale computing power available to more users through time-sharing, optimizing the infrastructure, platform, and applications, and increasing efficiency for end users.
The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript environment could "go". The metaphor is credited to David Hoffman, a General Magic communications specialist, based on its long-standing use in networking and telecom. The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales with "cloud computing-enabled applications". The business plan foresaw that online consumer file storage would likely be commercially successful. As a result, Compaq decided to sell server hardware to internet service providers.
In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, Amazon Simple Storage Service, known as Amazon S3, and the Amazon Elastic Compute Cloud (EC2) were released. In 2008, NASA developed the first open-source software for deploying private and hybrid clouds.
The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.
Value proposition
Cloud computing can enable shorter time to market by providing pre-configured tools, scalable resources, and managed services, allowing users to focus on their core business value instead of maintaining infrastructure. Cloud platforms can enable organizations and individuals to reduce upfront capital expenditures on physical infrastructure by shifting to an operational expenditure model, where costs scale with usage. Cloud platforms also offer managed services and tools, such as artificial intelligence, data analytics, and machine learning, which might otherwise require significant in-house expertise and infrastructure investment.
While cloud computing can offer cost advantages through effective resource optimization, organizations often face challenges such as unused resources, inefficient configurations, and hidden costs without proper oversight and governance. Many cloud platforms provide cost management tools, such as AWS Cost Explorer and Azure Cost Management, and frameworks like FinOps have emerged to standardize financial operations in the cloud. Cloud computing also facilitates collaboration, remote work, and global service delivery by enabling secure access to data and applications from any location with an internet connection.
Cloud providers offer various redundancy options for core services, such as managed storage and managed databases, though redundancy configurations often vary by service tier. Advanced redundancy strategies, such as cross-region replication or failover systems, typically require explicit configuration and may incur additional costs or licensing fees.
Cloud environments operate under a shared responsibility model, where providers are typically responsible for infrastructure security, physical hardware, and software updates, while customers are accountable for data encryption, identity and access management (IAM), and application-level security. These responsibilities vary depending on the cloud service model—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS)—with customers typically having more control and responsibility in IaaS environments and progressively less in PaaS and SaaS models, often trading control for convenience and managed services.
Factors influencing adoption and suitability of cloud computing
The decision to adopt cloud computing or maintain on-premises infrastructure depends on factors such as scalability, cost structure, latency requirements, regulatory constraints, and infrastructure customization.
Organizations with variable or unpredictable workloads, limited capital for upfront investments, or a focus on rapid scalability benefit from cloud adoption. Startups, SaaS companies, and e-commerce platforms often prefer the pay-as-you-go operational expenditure (OpEx) model of cloud infrastructure. Additionally, companies prioritizing global accessibility, remote workforce enablement, disaster recovery, and leveraging advanced services such as AI/ML and analytics are well-suited for the cloud. In recent years, some cloud providers have started offering specialized services for high-performance computing and low-latency applications, addressing some use cases previously exclusive to on-premises setups.
On the other hand, organizations with strict regulatory requirements, highly predictable workloads, or reliance on deeply integrated legacy systems may find cloud infrastructure less suitable. Businesses in industries like defense, government, or those handling highly sensitive data often favor on-premises setups for greater control and data sovereignty. Additionally, companies with ultra-low latency requirements, such as high-frequency trading (HFT) firms, rely on custom hardware (e.g., FPGAs) and physical proximity to exchanges, which most cloud providers cannot fully replicate despite recent advancements. Similarly, tech giants like Google, Meta, and Amazon build their own data centers due to economies of scale, predictable workloads, and the ability to customize hardware and network infrastructure for optimal efficiency. However, these companies also use cloud services selectively for certain workloads and applications where it aligns with their operational needs.
In practice, many organizations are increasingly adopting hybrid cloud architectures, combining on-premises infrastructure with cloud services. This approach allows businesses to balance scalability, cost-effectiveness, and control, offering the benefits of both deployment models while mitigating their respective limitations.
Challenges and limitations
One of the main challenges of cloud computing, in comparison to more traditional on-premises computing, is data security and privacy. Cloud users entrust their sensitive data to third-party providers, who may not have adequate measures to protect it from unauthorized access, breaches, or leaks. Cloud users also face compliance risks if they have to adhere to certain regulations or standards regarding data protection, such as GDPR or HIPAA.
Another challenge of cloud computing is reduced visibility and control. Cloud users may not have full insight into how their cloud resources are managed, configured, or optimized by their providers. They may also have limited ability to customize or modify their cloud services according to their specific needs or preferences. Complete understanding of all technology may be impossible, especially given the scale, complexity, and deliberate opacity of contemporary systems; however, understanding complex technologies and their interconnections is necessary for users to retain power and agency within them. The cloud metaphor itself can be seen as problematic: cloud computing retains the aura of something noumenal and numinous, experienced without a precise understanding of what it is or how it works.
Additionally, cloud migration is a significant challenge. This process involves transferring data, applications, or workloads from one cloud environment to another, or from on-premises infrastructure to the cloud. Cloud migration can be complicated, time-consuming, and expensive, particularly when there are compatibility issues between different cloud platforms or architectures. If not carefully planned and executed, cloud migration can lead to downtime, reduced performance, or even data loss.
Cloud migration challenges
According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds:
"Understanding application dependencies"
"Comparing on-premise and cloud costs"
"Assessing technical feasibility."
Implementation challenges
Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.
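A common defence against the first of these fallacies ("the network is reliable") is to treat every remote call as fallible and retry with backoff. The following is a minimal, hypothetical Python sketch; the function name and parameters are illustrative, not taken from any particular cloud SDK.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky remote call with exponential backoff and jitter.

    Guards against the first fallacy of distributed computing
    ("the network is reliable") by assuming any call can fail.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Exponential backoff: 0.5s, 1s, 2s, ... plus random jitter
            # so many clients do not retry in lockstep.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)
```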
Cloud cost overruns
In a report by Gartner, a survey of 200 IT leaders revealed that 69% experienced budget overruns in their organizations' cloud expenditures during 2023. Conversely, 31% of IT leaders whose organizations stayed within budget attributed their success to accurate forecasting and budgeting, proactive monitoring of spending, and effective optimization.
The 2024 Flexera State of Cloud Report identifies the top cloud challenges as managing cloud spend, followed by security concerns and lack of expertise. Public cloud expenditures exceeded budgeted amounts by an average of 15%. The report also reveals that cost savings is the top cloud initiative for 60% of respondents. Furthermore, 65% measure cloud progress through cost savings, while 42% prioritize shorter time-to-market, indicating that cloud's promise of accelerated deployment is often overshadowed by cost concerns.
Service Level Agreements
Typically, cloud providers' Service Level Agreements (SLAs) do not encompass all forms of service interruptions. Exclusions typically include planned maintenance, downtime resulting from external factors such as network issues, human error such as misconfiguration, natural disasters, force majeure events, and security breaches. Customers typically bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. Customers should be aware of how deviations from SLAs are calculated, as these parameters may vary by service. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across various services within the same provider, with some services lacking any SLA altogether. In cases of service interruptions due to hardware failures at the cloud provider, the company typically does not offer monetary compensation. Instead, eligible users may receive credits as outlined in the corresponding SLA.
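As a rough illustration of how such SLA arithmetic can work, the sketch below computes a monthly uptime percentage after exclusions and maps it to a service-credit tier. The tier thresholds and figures are hypothetical, loosely modeled on common public-cloud SLAs rather than any specific provider's terms.

```python
def monthly_uptime_percent(total_minutes, excluded_minutes, downtime_minutes):
    """Uptime % over the billing month, after SLA exclusions.

    Planned maintenance and other excluded downtime is removed from the
    denominator, which is why a provider's SLA figure can look better
    than the raw availability a customer experienced.
    """
    eligible = total_minutes - excluded_minutes
    return 100.0 * (eligible - downtime_minutes) / eligible

# Hypothetical credit tiers: (minimum uptime %, credit %).
CREDIT_TIERS = [(99.99, 0), (99.0, 10), (95.0, 25), (0.0, 100)]

def service_credit_percent(uptime):
    for threshold, credit in CREDIT_TIERS:
        if uptime >= threshold:
            return credit

# A 30-day month with 60 min of excluded maintenance and 90 min of outage:
uptime = monthly_uptime_percent(30 * 24 * 60, 60, 90)
print(f"{uptime:.3f}% uptime -> {service_credit_percent(uptime)}% credit")
```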
Leaky abstractions
Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. These variations in abstraction quality depend on the cloud vendor, service and architecture. Mitigating leaky abstractions requires users to understand the implementation details and limitations of the cloud services they utilize.
Service lock-in within the same vendor
Service lock-in within the same vendor occurs when a customer becomes dependent on specific services within a cloud vendor, making it challenging to switch to alternative services within the same vendor when their needs change.
Security and privacy
Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end-users' choices for how data is stored. Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Identity management systems can also provide practical solutions to privacy concerns in cloud computing. These systems distinguish between authorized and unauthorized users and determine the amount of data that is accessible to each entity. The systems work by creating and describing identities, recording activities, and getting rid of unused identities.
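For example, client-side encryption means data is encrypted before it ever reaches the provider, so the provider stores only ciphertext. A minimal sketch using the widely used third-party Python `cryptography` package (its Fernet interface) follows; the upload step is left abstract.

```python
# Client-side ("encrypt before upload") sketch using the third-party
# `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept by the user, never sent to the provider
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"sensitive business records")
# ...upload `ciphertext` to the cloud storage service of your choice...

# Only a holder of the key can recover the plaintext later:
assert fernet.decrypt(ciphertext) == b"sensitive business records"
```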
According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities. On a cloud provider platform shared by different users, information belonging to different customers may reside on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into". Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack—a process he called "hyperjacking". Examples include the Dropbox security breach and the 2014 iCloud leak. Dropbox was breached in October 2014, when hackers stole more than seven million of its users' passwords in an attempt to sell them for Bitcoin (BTC). With such passwords, attackers can read private data and have it indexed by search engines, making the information public.
There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership. Physical control of the computer equipment (private cloud) is more secure than having the equipment off-site and under someone else's control (public cloud). This gives public cloud service providers a strong incentive to prioritize building and maintaining secure services. Some small businesses that do not have expertise in IT security could find that it is more secure for them to use a public cloud. There is the risk that end users do not understand the issues involved when signing on to a cloud service (people sometimes do not read the many pages of the terms of service agreement, and just click "Accept" without reading). This matters now that cloud computing is common and required for some services to work, for example for an intelligent personal assistant (Apple's Siri or Google Assistant). Fundamentally, private cloud is seen as more secure, with higher levels of control for the owner; however, public cloud is seen to be more flexible and to require less time and money investment from the user.
The attacks that can be made on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. One of the largest threats is considered to be malware attacks, such as Trojan horses. Research conducted in 2022 revealed that the Trojan horse injection method is a serious problem with harmful impacts on cloud computing systems.
Service models
The National Institute of Standards and Technology recognized three cloud service models in 2011: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The International Organization for Standardization (ISO) later identified additional models in 2023, including "Network as a Service", "Communications as a Service", "Compute as a Service", and "Data Storage as a Service".
Infrastructure as a service (IaaS)
Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. The use of containers offers higher performance than virtualization because there is no hypervisor overhead. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.
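The cgroup mechanism mentioned above is exposed by the Linux kernel as an ordinary file hierarchy, which container runtimes write to in order to cap a container's resources. The sketch below merely reads a few cgroup v2 limit files; the `my-container` group name is hypothetical, and the conventional /sys/fs/cgroup mount point on a cgroup-v2 Linux host is assumed.

```python
# Minimal illustration of the cgroup v2 interface that container runtimes
# use to cap resources; assumes a Linux host with cgroup v2 mounted at
# the conventional /sys/fs/cgroup location.
from pathlib import Path

def read_limit(cgroup, controller_file):
    path = Path("/sys/fs/cgroup") / cgroup / controller_file
    return path.read_text().strip() if path.exists() else "n/a"

# "my-container" is a hypothetical cgroup created by a container runtime.
for fname, label in [("memory.max", "memory limit (bytes or 'max')"),
                     ("cpu.max", "cpu quota/period"),
                     ("pids.max", "max number of processes")]:
    print(f"{label}: {read_limit('my-container', fname)}")
```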
The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."
IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the number of resources allocated and consumed.
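A toy illustration of such utility-style billing arithmetic is shown below; all rates are invented for the example and do not correspond to any real provider's price list.

```python
# Hypothetical utility-style IaaS bill: cost tracks what was allocated
# and consumed, rather than a fixed up-front license.
HOURLY_VM_RATE = 0.10      # USD per VM-hour (illustrative, not a real price)
GB_MONTH_STORAGE = 0.02    # USD per GB-month of block storage
GB_EGRESS = 0.09           # USD per GB of outbound traffic

def monthly_bill(vm_hours, storage_gb, egress_gb):
    return (vm_hours * HOURLY_VM_RATE
            + storage_gb * GB_MONTH_STORAGE
            + egress_gb * GB_EGRESS)

# Two VMs running all month (~730 h each), 500 GB of disk, 200 GB of egress:
print(f"${monthly_bill(2 * 730, 500, 200):,.2f}")  # -> $174.00
```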
Platform as a service (PaaS)
The NIST's definition of cloud computing defines Platform as a Service as: "The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment."
PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying computer and storage resources scale automatically to match application demand so that the cloud user does not have to allocate resources manually.
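Such automatic scaling is often implemented as a target-tracking rule: if instances run hotter than a target metric, the platform scales out proportionally, otherwise it scales in. A simplified sketch of that rule, with invented parameter values, might look like this:

```python
import math

def desired_instances(current, observed_metric, target_metric,
                      min_instances=1, max_instances=20):
    """Target-tracking scaling rule of the kind many PaaS platforms apply.

    If each instance runs hotter than the target (e.g. CPU at 80%
    against a 50% target), scale out proportionally; if cooler, scale in.
    """
    raw = current * (observed_metric / target_metric)
    return max(min_instances, min(max_instances, math.ceil(raw)))

print(desired_instances(current=4, observed_metric=80, target_metric=50))  # 7
print(desired_instances(current=4, observed_metric=20, target_metric=50))  # 2
```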
Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS delivers integration—and data-management—products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools.
Software as a service (SaaS)
The NIST's definition of cloud computing defines Software as a Service as: "The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings."
In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
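A highly simplified sketch of the load-balancing step might look like the following round-robin dispatcher; instance names and request strings are purely illustrative.

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests across cloned application instances.

    The tenant sees one access point; which VM actually serves a given
    request is invisible to them.
    """
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        instance = next(self._cycle)
        return f"request {request!r} -> {instance}"

balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
for req in ["GET /doc/1", "GET /doc/2", "POST /doc", "GET /doc/3"]:
    print(balancer.route(req))
```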
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices become scalable and adjustable if users are added or removed at any point. It may also be free. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result, there could be unauthorized access to the data. Examples of applications offered as SaaS are games and productivity software like Google Docs and Office Online. SaaS applications may be integrated with cloud storage or file hosting services, as is the case with Google Docs being integrated with Google Drive and Office Online being integrated with OneDrive.
Serverless computing
Serverless computing allows customers to use various cloud capabilities without the need to provision, deploy, or manage hardware or software resources, apart from providing their application code or data. ISO/IEC 22123-2:2023 classifies serverless alongside Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) under the broader category of cloud service categories. Notably, while ISO refers to these classifications as cloud service categories, the National Institute of Standards and Technology (NIST) refers to them as service models.
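In function-as-a-service offerings, the unit of deployment is typically a single handler function. The sketch below follows the handler signature AWS Lambda uses for Python functions; the event shape is an assumption made for the example.

```python
# Handler in the style AWS Lambda expects for Python functions: the
# platform provisions, scales, and bills the execution; the customer
# supplies only this code. The event shape below is illustrative.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test; in production the platform invokes handler() directly.
if __name__ == "__main__":
    print(handler({"name": "cloud"}, context=None))
```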
Deployment models
"A cloud deployment model represents the way in which cloud computing can be organized based on the control and sharing of physical or virtual resources." Cloud deployment models define the fundamental patterns of interaction between cloud customers and cloud providers. They do not detail implementation specifics or the configuration of resources.
Private
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
Public
Cloud services are considered "public" when they are delivered over the public Internet, and they may be offered as a paid subscription, or free of charge. Architecturally, there are few differences between public- and private-cloud services, but security concerns increase substantially when services (applications, storage, and other resources) are shared by multiple customers. Most public-cloud providers offer direct-connection services that allow customers to securely link their legacy data centers to their cloud-resident applications.
Several factors, such as the functionality of the solutions, cost, integration and organizational aspects, and safety and security, influence the decision of enterprises and organizations to choose a public cloud or an on-premises solution.
Hybrid
Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries so that it cannot be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service. This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.
Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that can not be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private clouds, during spikes in processing demands.
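A toy placement policy illustrating cloud bursting might fill private capacity first and route the overflow to a public cloud, as in the hypothetical sketch below (job names and capacity units are invented).

```python
def place_workloads(jobs, private_capacity):
    """Toy cloud-bursting placement: fill the private cloud first and
    'burst' anything beyond its capacity to a public cloud.

    `jobs` maps job names to the capacity units each one needs;
    names and capacities are illustrative.
    """
    placement, used = {}, 0
    for job, demand in jobs.items():
        if used + demand <= private_capacity:
            placement[job] = "private-cloud"
            used += demand
        else:
            placement[job] = "public-cloud"  # pay-per-use overflow
    return placement

print(place_workloads({"web": 40, "batch": 50, "analytics": 30},
                      private_capacity=100))
# {'web': 'private-cloud', 'batch': 'private-cloud', 'analytics': 'public-cloud'}
```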
Community
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party, and hosted internally or externally. The costs are distributed among fewer users than in a public cloud (but more than in a private cloud), so only a portion of the potential cost savings of cloud computing is achieved.
Multi cloud
According to ISO/IEC 22123-1: "multi-cloud is a cloud deployment model in which a customer uses public cloud services provided by two or more cloud service providers". Poly cloud refers to the use of multiple public clouds for the purpose of leveraging specific services that each provider offers. It differs from multi-cloud in that it is not designed to increase flexibility or mitigate against failures, but rather to allow an organization to achieve more than could be done with a single provider.
Market
According to International Data Corporation (IDC), global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025. Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023. A McKinsey & Company report estimated that cloud cost-optimization levers and value-oriented business use cases could put more than $1 trillion in run-rate EBITDA across Fortune 500 companies up for grabs by 2030. In 2022, more than $1.3 trillion in enterprise IT spending was at stake from the shift to the cloud, growing to almost $1.8 trillion in 2025, according to Gartner.
The European Commission's 2012 Communication identified several issues which were impeding the development of the cloud computing market:
fragmentation of the digital single market across the EU
concerns about contracts including reservations about data access and ownership, data portability, and change control
variations in standards applicable to cloud computing
The Communication set out a series of "digital agenda actions" which the Commission proposed to undertake in order to support the development of a fair and effective market for cloud computing services.
List of public clouds
Adobe Creative Cloud
Amazon Web Services
Google Cloud
IBM Cloud
Microsoft Azure
OpenStack
Oracle Cloud
Panorama9
Similar concepts
The goal of cloud computing is to allow users to benefit from these technologies without needing deep knowledge of or expertise with each of them. The cloud aims to cut costs and help users focus on their core business instead of being impeded by IT obstacles. The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.
Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.
Cloud computing shares characteristics with:
Client–server model Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).
Computer bureau A service bureau providing computer services, particularly from the 1960s to 1980s.
Grid computing A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
Fog computing Distributed computing paradigm that provides data, compute, storage and application services closer to the client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client-side (e.g. mobile devices), instead of sending data to a remote location for processing.
Utility computing The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
Peer-to-peer A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
Cloud sandbox A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.
See also
As a service
Block-level storage
Browser-based computing
:Category:Cloud computing providers
:Category:Cloud platforms
Cloud computing architecture
Cloud broker
Cloud collaboration
Cloud-computing comparison
Cloud computing security
Cloud engineering
Cloud gaming
Cloud management
Cloud-native computing
Cloud research
Cloud robotics
Cloud storage
Cloud-to-cloud integration
Cloudlet
Computer cluster
Cooperative storage cloud
Decentralized computing
Desktop virtualization
Dew computing
Directory
Distributed data store
Distributed database
Distributed computing
Distributed networking
e-Science
Edge computing
Edge device
File system
Clustered file system
Distributed file system
Distributed file system for cloud
Fog computing
Fog robotics
Green computing (environmentally sustainable computing)
Grid computing
In-memory database
In-memory processing
Internet of things
IoT security device
Knowledge as a service
Microservices
Mobile cloud computing
Multi-access edge computing
Multisite cloud
Peer-to-peer
Personal cloud
Private cloud computing infrastructure
Robot as a service
Service-oriented architecture
Time-sharing
Ubiquitous computing
Virtual private cloud
Notes
References
Further reading
Weisser, Alexander (2020). International Taxation of Cloud Computing. Editions Juridiques Libres.
Mell, P. (2011, September). The NIST Definition of Cloud Computing. Retrieved November 1, 2015, from National Institute of Standards and Technology website
Cloud infrastructure | Cloud computing | Technology | 6,574 |
31,464,379 | https://en.wikipedia.org/wiki/DAD-IS | DAD-IS is the acronym for Domestic Animal Diversity Information System, which is a tool developed and maintained by the Food and Agriculture Organization of the United Nations. It is part of FAO's programme on management of animal genetic resources for food and agriculture. It includes a searchable database of animal breed-related information.
Overview
FAO began to collect data on animal breeds in 1982. The first version of DAD-IS was launched in 1996 and the software has been updated several times. The fourth and latest version of DAD-IS was launched in 2017.
DAD-IS includes a searchable database of information about animal breeds, the Global Databank for Animal Genetic Resources. DAD-IS contains information on breed characteristics, uses, geographic distribution and demographics; more than 4,000 images; and tools for generating user-defined reports. It has a multilingual interface and content. It also provides contact information for the National and Regional Coordinators for the Management of Animal Genetic Resources (AnGR). Data is collected and entered by each country's National Coordinator via web-based data-entry screens available in several languages.
Data from DAD-IS is used for reporting on the global status and trends of animal genetic resources, including the data for indicators 2.5.1b (number of animal genetic resources for food and agriculture secured in either medium- or long-term conservation facilities) and 2.5.2 (proportion of local breeds classified as being at risk of extinction) of the UN Sustainable Development Goals.
Breeds in the global databank
The data in DAD-IS pertain to 37 different mammalian and avian livestock species. Based on data collected as of September 2022, DAD-IS contained data for 11,555 mammalian and 3,758 avian national breed populations. These national breed populations represent a global total of 8,859 breeds, which included 595 breeds (7%) that were reported to be extinct. Local breeds (found in only one country) made up 7,739 entries, while 1,120 were transboundary breeds (found in more than one country).
As of 2022, there were 4,954 local mammalian breeds and 2,199 local avian breeds reported worldwide. Among transboundary breeds, 458 mammalian and 97 avian breeds were reported regionally.
Risk status
FAO uses the information about population sizes to classify breeds according to risk of extinction. Risk classes that are measured include: at risk (critical, critical-maintained, endangered, endangered-maintained, and vulnerable), not at risk, unknown, and already extinct.
Approximately 27% of breeds (about 2,350) are either classified as being at risk of extinction or are already extinct. A further 54% are classified as being of unknown risk status (including breeds with no population reports during the prior 10 years).
See also
Food and Agriculture Organization
:Category:Food and Agriculture Organization for other FAO organizations and programs
FAO GM Foods Platform
Notes
References
External links
Biodiversity databases
Information systems
Animals by conservation status
Agricultural organizations
Food politics
Food and Agriculture Organization
Agrarian politics
Environmental organisations based in Italy | DAD-IS | Technology,Biology,Environmental_science | 629 |
48,005,621 | https://en.wikipedia.org/wiki/List%20of%20reefs | This is a list of notable reefs.
Reefs
See also
Fringing reef
Recreational dive sites
Recreational diving
Southeast Asian coral reefs
The Structure and Distribution of Coral Reefs
References | List of reefs | Biology | 33 |
38,951,939 | https://en.wikipedia.org/wiki/Sheng%20n%C3%BC | Sheng nü (), translated as 'leftover women' or 'leftover ladies', are women who remain unmarried in their late twenties and beyond in China. The term was popularized by the All-China Women's Federation. Most prominently used in China, the term has also been used colloquially to refer to women in India, North America, Europe, and other parts of Asia. The term compares unmarried women to leftover food and has gone on to become widely used in the mainstream media and has been the subject of several television series, magazine and newspaper articles, and book publications, focusing on the negative connotations and positive reclamation of the term. While initially backed and disseminated by pro-government media in 2007, the term eventually came under criticism from government-published newspapers two years later. Xu Xiaomin of The China Daily described the sheng nus as "a social force to be reckoned with" and others have argued the term should be taken as a positive to mean "successful women". The slang term, 3S or 3S Women, meaning "single, seventies (1970s), and stuck" has also been used in place of sheng nu.
The equivalent term for men, guang gun (光棍, 'bare branches'), is used to refer to men who do not marry and thus do not add 'branches' to the family tree. Similarly, shengnan (剩男, 'leftover men') has also been used. Scholars have noted that this term is not as commonly used as "leftover women" in Chinese society and that single males reaching a certain age will often be labeled as either 'golden bachelors' or 'diamond single men'.
Background
As a long-standing tradition, early-age marriage was prevalent in China's past. As of 2005, merely 2% of females aged between 30 and 34 were single; by contrast, 10% of males in that age range were. China's one-child policy (Family Planning Program) and sex-selective abortion have led to a growing imbalance in the country's sex ratio. Approximately 20 million more men than women have been born since the one-child policy was introduced in 1979, or 120 males born for every 100 females. By 2020, China was expected to have 24 million more men than women. The global average is 103 to 107 newborn males per 100 females.
According to The New York Times, the State Council of the People's Republic of China (Central People's Government) issued an "edict" in 2007 regarding the Population and Family Planning Program (one-child-policy) to address the urgent gender imbalance and cited it as a major "threat to social stability". The council further cited "upgrading population quality (suzhi)" as one of its primary goals and appointed the All-China Women's Federation, a state agency established in 1949 to "protect women's rights and interests", to oversee and resolve the issue.
The exact etymology of the term is not conclusively known, but most reliable sources cite it as having entered the mainstream in 2006. The China Daily reported in 2011 that Xu Wei, the editor-in-chief of the Cosmopolitan Magazine China, coined the term. The term, sheng nu, literally translates to "leftover ladies" or "leftover women". The China Daily newspaper further reported that the term originally gained popularity in the city of Shanghai and later grew to nationwide prominence. In 2007, the Ministry of Education of the People's Republic of China released an official statement defining sheng nu as any "unmarried women over the age of 27" and added it to the national lexicon. According to several sources, the government mandated the All-China Women's Federation to publish series of articles stigmatizing unwed women who were in their late twenties.
In March 2011, the All-China Women's Federation posted a controversial article titled 'Leftover Women Do Not Deserve Our Sympathy' shortly after International Women's Day. An excerpt states, "Pretty girls do not need a lot of education to marry into a rich and powerful family. But girls with an average or ugly appearance will find it difficult" and "These girls hope to further their education in order to increase their competitiveness. The tragedy is, they don't realise that as women age, they are worth less and less. So by the time they get their MA or PhD, they are already old – like yellowed pearls." At least 15 articles relating to the subject of sheng nu, including matchmaking advice and tips, were originally available on its website; they have since been removed.
China
Culture and statistics
The National Bureau of Statistics of the People's Republic of China (NBS) and state census figures reported that approximately 1 in 5 women between the ages of 25 and 29 remains unmarried. In contrast, the proportion of unwed men in that age range is much higher, at around 1 in 3. In a 2010 Chinese National Marriage Survey, it was reported that 9 out of 10 men believe that women should be married before they are 27 years old. 7.4% of Chinese women between 30 and 34 were unmarried and the percentage falls to 4.6% between the ages 35–39. In comparison with neighbouring countries with similar traditional values, China has much higher female marriage rates. Despite being categorized as a "relatively rare" demographic, the social culture and traditions of China have put the issue in the social spotlight.
In the context of the one-child policy, sex-selective abortion caused China's male population to exceed its female population; it is projected that by 2044 more than 10% of men over 50 will not have married.
A study of married couples in China noted that men tended to marry down the socio-economic ladder. "There is an opinion that A-quality guys will find B-quality women, B-quality guys will find C-quality women, and C-quality men will find D-quality women," says Huang Yuanyuan. "The people left are A-quality women and D-quality men. So if you are a leftover woman, you are A-quality." A University of North Carolina demographer who studies China's gender imbalance, Yong Cai, further notes that "men at the bottom of society get left out of the marriage market, and that same pattern is coming to emerge for women at the top of society".
China, and many other Asian countries, share a long history of conservative and patriarchal views of marriage and the family structure, including marrying at a young age and hypergamy. The pressure from society and family has been a source of criticism, shame, social embarrassment and social anxiety for many women who are unmarried. Chen, another woman interviewed by the BBC, said the sheng nu are "afraid their friends and neighbours will regard me as abnormal. And my parents would also feel they were totally losing face, when their friends all have grandkids already". Similar sentiment has been shared amongst other women in China, particularly amongst recent university graduates. A report by CNN cited a survey of 900 female university graduates across 17 Chinese universities where approximately 70 percent of those surveyed said "their greatest fear is becoming a 3S lady".
Under the patriarchal system in China, males tend to come under substantial financial pressure. For example, great importance is often attached in China to a man's ownership of a property and a vehicle; in one survey, fewer than 20% of parents of daughters said they did not consider property ownership a precondition for marriage. This may have caused people to lay the blame on women. Moreover, the social image of so-called shengnu is characterized by monetary worship, egocentricity and selfishness. People also consider shengnu to set the bar high for a future partner while lacking the virtues traditionally expected of women in earlier times. Some females regard marriage as a springboard to improve their quality of life. On one of the most popular dating TV shows broadcast in China, a female participant bluntly claimed "I'd rather cry in a BMW than laugh on a bicycle" when an unemployed male participant asked whether she would be willing to ride on his bicycle. The remark instantly went viral on Chinese social media and attracted widespread criticism from many unmarried females.
The increasing number of unwed women in China has been largely attributed to the growing educated middle class. Women are freer and better able to live independently in comparison to previous generations. Forbes reported that in 2013, "11 of the 20 richest self-made women in the world are Chinese". In addition, it cites that Chinese female CEOs make up 19 percent of women in management jobs, the second highest proportion worldwide after Thailand. A rapidly growing trend of premarital sex has also been surveyed and noted amongst women in China. In 1989, 15% of Chinese women engaged in premarital sex, compared with 60-70% by 2013. Chinese Academy of Social Sciences professor Li states that this shows an increase in the types of relationships amongst new generations in China.
The term has also been embraced by some feminists with the opening of 'sheng nu' social clubs. In an interview with fashion editor Sandra Bao by the Pulitzer Center on Crisis Reporting, Bao stated that "many modern, single women in China enjoy their independence and feel comfortable holding out for the right man, even as they grow older." She further explained, "We don't want to make compromises because of age or social pressure".
Between 2008 and 2012, sociologist Sandy To, while at the University of Cambridge, conducted a 'grounded theory method' study in China regarding the topic. To's research focused on "marriage partner choice" by Chinese professional women in the form of a typology of four different "partner choice strategies". The study's main finding was that, contrary to the popular belief that highly educated, single women remain unmarried or reject traditional roles in marriage out of personal preference, such women commonly have an appetite for marriage, and their main obstacle is traditional patriarchal attitudes. The study also pointed out that in other Asian countries such as Japan, Singapore, South Korea, and Taiwan, where women have been receiving a higher education, the average age of marriage amongst them is correspondingly much higher. The Chinese People's Daily cited a 2012 United Nations survey that found 74 percent of women in the United Kingdom and 70 percent of women in Japan were single between the ages of 25 and 29. The China Daily published an article that cited figures from the 2012 United Nations' World Marriage Data which reported 38% of women in the United States, and more than 50% of women in Britain, remained unmarried in their 30s.
Media
The Chinese media has capitalized on the subject matter with television shows, viral videos, newspaper and magazine articles, and pundits who have sharply criticized women for "waiting it out for a man with a bigger house or fancier car". The television comedy series Will You Marry Me and My Family, which premièred on CCTV-8, revolves around the concept of sheng nu as a family frantically searches for a prospective spouse for the main character, who is in her 30s. This series and You Are the One (MediaCorp Channel 8) have been credited with minting terms like "the shengnu economy" and further bringing the subject into public fascination and obsession. If You Are the One (Jiangsu Satellite Television) is a popular Chinese game show, loosely based on Taken Out, whose rise has been credited to the "national obsession" surrounding sheng nu. Between 2010 and 2013, it was China's most viewed game show.
The media often seek to highlight the anxiety people feel about marrying late or not at all. Whether in reality shows or dramas, jokes are frequently made at the expense of "shengnu". For example, in the TV show iPartment, a female character with a doctoral degree is branded as gender-neutral, the implication being that she must be nice to her boyfriend because it would be difficult for a female doctorate-holder to find another boyfriend if they broke up.
In response to a popular music video called "No Car, No House" about blue-collar Chinese bachelors, another music video called "No House, No Car" was made by a group of women and uploaded on International Women's Day. The video was viewed over 1.5 million times over the first two days on the Chinese video site Youku. Other commercial interests have taken advantage of the situation such as the increased popularity of "boyfriends for hire". The concept has also been turned into a popular television drama series called Renting a Girlfriend for Home Reunion.
The topic has also been the subject of literary works. Hong Kong author Amy Cheung's bestselling novel Hummingbirds Fly Backwards (三个A Cup的女人) depicts the anxieties of three unmarried women on the verge of turning 30.
It is also worth noting that the Chinese English-language news media has more often challenged the "leftover" myth than perpetuated it. Media representations of leftover women have shown four distinct ideologies, namely ageism, heteronormativity, patriarchy and egalitarianism. Similarly, the Western English-language news media has formulated a female-individualisation discourse that emphasises independence and self-actualisation.
Longevity and consequences
Experts have further theorized about the term's longevity as the National Population and Family Planning Commission moves towards phasing out the one-child policy in favour of an "appropriate and scientific family planning policy (one-child policy)" where the child limit may be increased. He Feng in The China Daily points out, "the sheng nu phenomenon is nothing like the feminist movement in the West, in which women consciously demanded equal rights in jobs and strived for independence." Rather, the change has been "subtle" and that "perhaps decades later, will be viewed as symbolic of China's social progress and a turning point for the role of women in its society."
In an article by the South China Morning Post, it concludes, "with mounting pressure and dwindling hopes of fulfilling both career and personal ambitions at home, for women such as Xu the urge to pack up and leave only grows stronger with time. Without women such as her, though, the mainland will be left with not only a weaker economy, but an even greater pool of frustrated leftover men."
Divorce rates in Shanghai and Beijing, China's two most populated economic centres, have been rising steadily since 2005, reaching 30% in 2012. In 2016, the number of divorces rose by 8.3% from 2015, to 4.2 million. At the same time, marriage rates have declined since 2013, falling from a peak of 9.9% in 2013 to 8.3% in 2017. These factors, among others such as online dating and the upward mobility of people, have been credited with pushing the average age of marriage in China to 27, up from 20 in 1950, bringing it closer to global marriage trends.
Sheng Nu Movement
Influence of media in the movement
The Sheng Nu Movement uses the internet and media as outlets to remove the stigma against leftover women. SK-II, a Japanese skincare brand launched in the early 1980s, has run a global campaign called #changedestiny to empower women affected by the prejudice against "leftover women". Its campaign video, "Marriage Market Takeover", tells the stories of women who overcame the challenges of being unmarried after turning 27, and includes interviews with leftover women. In one interview, Wang Xiao Qi describes how her parents pushed her into marriage by arguing that "marriage doesn't wait". She refutes them by saying, "even if I don't have a significant other half, I can still live wonderfully". The commercial was launched with the idea of taking over the "Marriage Market", a place where Chinese parents essentially advertise their children as marriage potential, listing their height, weight, salary, values and personality.
Reaction
Powerful figures of modern-day China have publicly expressed irritation towards the growing feminist movement in their male-dominated society. Feng Gang, a leading sociologist, posted on social media: "History has proved that academia is not the domain of women". Xu Youzhe, the CEO of one of China's most popular gaming companies, Duoyi Network, said: "If a woman in her lifetime has fewer than two children, no matter how hard she works, she is destined to be unhappy". These comments are some of the many examples of outward condemnation of the growing feminist movement in China.
China's government has also been known to combat the growing feminist movement in China. On International Women's Day in 2015, feminists in China were detained for publicly raising awareness about sexual harassment on public transportation. Five women in Beijing were also arrested and sent to a detention center by the Public Security Bureau for handing out feminist stickers. In 2017, Women's Voices, a social media account run by some of China's most prominent feminists, was suspended with no specific explanation as to why.
The first female president of Taiwan, Tsai Ing-Wen, aged 59 at the time she assumed office, was criticized for being an unmarried president and so-called 'leftover woman'. The Chinese State newspaper Xinhua shamed Tsai Ing-Wen by commenting, "As a single female politician, she lacks the emotional drag of love, the pull of the 'home,’ and no children to care for."
Chinese women have taken the initiative to form social clubs where they support one another over the pressures of marriage and motherhood. An article in The Atlantic states that these social groups have over 1,000 members. Sandra Bao is a co-founder and a fashion magazine editor who formed a social group known as "Leftover Attitude" in Shanghai as a way to support unmarried professional women. She states, "Parents are pressuring us, the media label us, there's a whole industry of matchmakers and others out there telling us it's a problem to be single".
Recently, feminists in China have changed the original meaning of "leftover women" ("剩女") into "'victorious' women" ("胜女"), while retaining the pronunciation of "Shengnu". This move is intended to emphasize the independence gained by single women. In fact, contrary to the social image imposed on "Shengnu", most unmarried females living in urban areas do not treat wealth as the sole criterion when searching for a partner, even if they do not ignore it entirely.
Sexism in China
Sexism is prominent in China's workforce, where women are either expected to meet the country's many societal standards or are denied opportunities altogether on the basis of their gender. In male-dominated fields such as technology and construction, job postings may explicitly require applicants to be male. According to the South China Morning Post, gender discrimination is deeply ingrained in Chinese society, which for centuries was dominated by Confucianism, which places women as inferior to men. Discrimination also occurs in hiring, where women may have to fit certain physical criteria to be employed. Brian Stauffer of Human Rights Watch describes: "Sexual objectification of women—treating women as a mere object of sexual desire—is prevalent in Chinese job advertising. Some job postings require women to have certain physical attributes—with respect to height, weight, voice, or facial appearance—that are completely irrelevant to the execution of job duties". Legal action has been taken against sexism in the Chinese job market. In 2014, a woman named Cao Ju was refused a job at the private tutoring firm Juren in Beijing on the basis that she was a woman. The company settled for 30,000 yuan in what is known as "China's first gender discrimination lawsuit". Cao justified her actions by stating that "[I] think as long as the person is capable of doing the work the post requires, gender is irrelevant". Traditional patriarchy and modern egalitarianism shape Chinese womanhood within the Chinese sociocultural context.
In other cultures
United States
Comparisons have been made to a 1986 Newsweek cover and featured article that said "women who weren't married by 40 had a better chance of being killed by a terrorist than of finding a husband". Newsweek eventually apologized for the story and in 2010 launched a study that discovered that 2 in 3 women who were 40 and single in 1986 had married since. The story caused a "wave of anxiety" and some "skepticism" amongst professional and highly educated women in the United States. The article was cited several times in the 1993 Hollywood film Sleepless in Seattle starring Tom Hanks and Meg Ryan. The Chinese People's Daily noted a United Nations study, mentioned earlier, that in the United States in 2012, nearly half of all women between 25 and 29 were single.
The term bachelorette is used to describe any woman who is unmarried. The popular American reality television series The Bachelorette capitalizes on matchmaking, often pairing successful businesswomen in their mid-to-late twenties with eligible bachelors.
Former Los Angeles deputy mayor Joy Chen, a Chinese-American, wrote a book titled Do Not Marry Before Age 30 (2012). Chen's book, a pop culture bestseller, was commissioned and published by the Chinese government as a self-help book for unmarried women. In an earlier interview with The China Daily, she was quoted saying, "We should not just try to find a 'Mr Right Now', but a 'Mr Right Forever'." The same year, Chen was named "Woman of the Year" by the All-China Women's Federation.
Other countries
Singapore is noted to have gone through a similar period. In 1983, then Prime Minister of Singapore Lee Kuan Yew sparked the "Great Marriage Debate" when he encouraged Singapore men to choose highly educated women as wives. He was concerned that a large number of graduate women were unmarried. Some sections of the population, including graduate women, were upset by his views. Nevertheless, a match-making agency, the Social Development Unit (SDU), was set up to promote socialising among male and female graduates. In the Graduate Mothers Scheme, Lee also introduced incentives such as tax rebates, schooling, and housing priorities for graduate mothers who had three or four children, in a reversal of the over-successful 'Stop-at-Two' family planning campaign in the 1960s and 1970s. By the late 1990s, the birth rate had fallen so low that Lee's successor Goh Chok Tong extended these incentives to all married women, and gave even more incentives, such as the 'baby bonus' scheme. Lee reaffirmed his controversial position in his personal memoir, From Third World to First, writing that "many well-educated Singaporean women did not marry and have children."
The 2012 UN study cited by the Chinese People's Daily reported that in Britain 74 percent and in Japan 70 percent of all women between 25 and 29 were single. A similar feature in the People's Daily focused on the reception of the concept of sheng nu among netizens outside of China, particularly in Asia, specifically Korea, Japan, and India. One Japanese netizen noted that during the 1980s, the term "Christmas cakes" was commonly used to refer to women who were unmarried and beyond the national age average of married women. The actual reference to Christmas cakes is the saying, "who wants Christmas cakes after December 25". A newer term has since supplanted this one, referring to unmarried women as "unsold goods" (urenokori, 売れ残り). Another contributor wrote that, similarly, "a class of highly educated, independent age 27+ women who choose to live a more liberated life and put their talent/skill to good use in society" is emerging in India. "People must make their own choices and must simply refuse others' labels and be blissfully happy", she further explained. Alternatively, for men in Japan, the term Herbivore men is used to describe men who have no interest in getting married or finding a girlfriend.
The China Daily posted the question, "Are 'leftover women' a unique Chinese phenomenon?" in its opinions column. Readers cited their own experiences, universally stating that they too felt societal and family pressure to marry in their 30s and 40s. Yong Cai, who studies China's gender imbalance at the University of North Carolina, stated, "The 'sheng nu' phenomenon is similar to trends we've already seen around the world, in countries ranging from the United States to Japan as higher education and increased employment give women more autonomy". Cai cites studies showing that women are now breaking the tradition of "mandatory marriage" to have fewer children or marry later in life.
Other typologically similar terms still used in the modern lexicon of other countries and cultures show that the concept has existed, in some cases, as far back as the 16th century. The term spinster was used to describe unmarried or single women of a marriageable age. It was not until 2004 that the Civil Partnership Act replaced the word spinster with "single" in the relationship history section of marriage certificates in the UK. Subsequently, at the height of the Industrial Revolution, the term surplus women was used to describe the excess of unmarried women in Britain. The card game 'old maid', in which players compete to avoid being labelled an 'old maid' (that is, a leftover woman), has its roots in the medieval world.
Catherinette was a traditional French label for women 25 years old or older who were still unmarried by the Feast of Saint Catherine of Alexandria on 25 November. The French idiom, "to do St. Catherine's hair", meaning "to remain an old maid" is also associated with this tradition.
In Russia, marriage is a substantial part of the national culture, with 30 years being the age at which a woman is considered an "old maid".
See also
Spinster
Feminism
Chinese marriage
Marriage in modern China
Sexuality in China
Women in China
Gender inequality in China
References
Further reading
Roseann Lake (February 2018), Leftover in China: The Women Shaping the World's Next Superpower, New York: W. W. Norton & Company
China's "leftovers" are rejects in a man's world, Cambridge University. 28 Feb 2013.
Sandy To (25 January 2013), Symbolic Interaction, Volume 36, Issue 1, pages 1–20, February 2013. John Wiley & Sons.
Leta Hong Fincher (1 May 2014), Leftover Women: The Resurgence of Gender Inequality in China (Asian Arguments). Zed Books.
China's Fake Boyfriends. Witness, Al Jazeera English, May 2016
Marriage, unions and partnerships in China
Marriage in Chinese culture
Interpersonal relationships
Pejorative terms for women
Slang terms for women
Age-related stereotypes
Stereotypes of women
Stereotypes of East Asian people
Women in China
Chinese slang | Sheng nü | Biology | 5,550 |
73,252,527 | https://en.wikipedia.org/wiki/Sabukaze%20Kiln%20Sites | The Sabukaze Kiln Sites are an archaeological site consisting of the remains of kilns for firing Sue ware pottery from the start of the Asuka period to the Heian period, located in the Ushimado neighborhood of the city of Setouchi, Okayama Prefecture, in the San'yō region of Japan. The site was the largest production area of Sue ware in western Japan, and the Sue ware from these kilns evolved into the current Bizen ware. The site has been protected by the central government as a National Historic Site since 1986.
Overview
Sue ware is pottery whose techniques were brought to Japan from the Korean peninsula in the middle of the Kofun period (approximately 1,600 years ago). It employed techniques not used in Yayoi pottery or Haji ware, which were continuations of Jōmon pottery, including the potter's wheel and semi-underground anagama-style kilns. Firing temperatures in excess of 1,000 °C made reduction-flame firing possible, resulting in a hard, blue-grey pottery with less porosity than earlier wares.
At the beginning of the twentieth century a local historian, Tokizane Mokusui, collected a large number of Sue ware shards from the vicinity of the ruins and published a research journal, which brought the existence of the ruins to the attention of academia. Archaeological excavations were carried out from 1978, and continued from 2005 to 2008 in order to preserve the ruins and open them to the public. The ruins consist of three kiln sites, Kiln No. 1, Kiln No. 2, and Kiln No. 3; multiple noborigama-style kilns with lengths of over ten meters have been found at each location. Three kilns have been confirmed at the No. 1 site, and a total of five kilns have so far been confirmed at the No. 3 site. The kilns are located on the southwestern slope of a hill with an elevation of 50 to 60 meters, and the site also contains the ruins of a building thought to be the remains of a workshop, as well as a pond. There is also a kofun burial mound on the south slope of the hill, but it is not thought to be related to the kilns.
Along with the ashes of firewood, unsuccessfully fired pottery and other items have been unearthed. These items indicate that in addition to jars, plates, and bottles, the kilns also produced items such as ceramic coffins, shibi roof ornaments, and inkstones. As Sue ware items from these kilns have also been unearthed at the capitals of Heijō-kyō and Fujiwara-kyō in Nara Prefecture, the kiln appears not to have been simply a local kiln, but to have had an official function as well. The kilns were most active for a one-hundred-year period from the beginning of the 7th century to the beginning of the 8th century.
The unearthed Sue pottery and other artifacts are stored and exhibited at the on-site Sabukaze Tōgei Kaikan.
See also
List of Historic Sites of Japan (Okayama)
Oibora-Asakura Sue Ware Kiln Site
Dodo Sue Ware Kiln ruins
References
External links
Setouchi city Official home page
Sabukaze Tōgei Kaikan Information official site
Setouchi, Okayama
Japanese pottery kiln sites
History of Okayama Prefecture
Historic Sites of Japan
Bizen Province
Asuka period | Sabukaze Kiln Sites | Chemistry,Engineering | 688 |
32,280,762 | https://en.wikipedia.org/wiki/Otidea%20onotica | Otidea onotica, commonly known as hare's ear or donkey ear, is a species of apothecial fungus belonging to the family Pyronemataceae.
The fruiting body appears from spring to early autumn as a deep cup split down one side and elongated at the other, up to tall. It is yellow to orangish or slightly pinkish. White hairs cover the outside, while the inside is smooth or rippled.
Similar species include Guepinia helvelloides, others of the genus Otidea, as well as some of Pezizaceae family.
Otidea onotica occurs in Europe and North America, singly or in small groups on the soil of deciduous woodland, most often with beech trees.
References
Other sources
Otidea onotica at Species Fungorum
Pyronemataceae
Fungi described in 1801
Taxa named by Christiaan Hendrik Persoon
Fungus species | Otidea onotica | Biology | 188 |
48,937,680 | https://en.wikipedia.org/wiki/Estrapronicate | Estrapronicate (), also known as estradiol nicotinate propionate is an estrogen medication and estrogen ester which was never marketed. It was studied as a component of the experimental tristeroid combination drug Trophobolene, which contained nandrolone decanoate, estrapronicate, and hydroxyprogesterone heptanoate.
See also
List of estrogen esters § Estradiol esters
Estrapronicate/hydroxyprogesterone heptanoate/nandrolone undecanoate
References
Abandoned drugs
Estradiol esters
Estranes
Nicotinate esters
Synthetic estrogens | Estrapronicate | Chemistry | 142 |
30,040,515 | https://en.wikipedia.org/wiki/Chimeric%20gene | Chimeric genes (literally, made of parts from different sources) form through the combination of portions of two or more coding sequences to produce new genes. These mutations are distinct from fusion genes which merge whole gene sequences into a single reading frame and often retain their original functions.
Formation
Chimeric genes can form through several different means. Many chimeric genes form through errors in DNA replication or DNA repair so that pieces of two different genes are inadvertently combined. Chimeric genes can also form through retrotransposition where a retrotransposon accidentally copies the transcript of a gene and inserts it into the genome in a new location. Depending on where the new retrogene appears, it can recruit new exons to produce a chimeric gene. Finally, ectopic recombination, when there is an exchange between portions of the genome that are not actually related, can also produce chimeric genes. This process occurs often in human genomes, and abnormal chimeras formed by this process are known to cause color blindness.
Evolutionary Importance of Fusion Proteins
Chimeric genes are important players in the evolution of genetic novelty. Much like gene duplications, they provide a source of new genes, which can allow organisms to develop new phenotypes and adapt to their environment. Unlike duplicate genes, chimeric proteins are immediately distinct from their parental genes, and therefore are more likely to produce entirely new functions.
Chimeric fusion proteins form often in genomes, and many of these are likely to be dysfunctional and eliminated by natural selection. However, in some cases, these new peptides can form fully functional gene products that are selectively favored and spread through populations quickly.
Functions
One of the most well known chimeric genes was identified in Drosophila and has been named Jingwei. This gene is formed from a retrotransposed copy of Alcohol dehydrogenase united with the yellow emperor gene to produce a new protein. The new amino acid residues that it recruited from yellow emperor allow the new protein to act on long-chain alcohols and diols, including growth hormones and pheromones. These changes affect fly development. In this case, the combination of different protein domains resulted in a gene that was fully functional and favored by selection.
The functions of many chimeric genes are not yet known. In some cases these gene products are not beneficial and they may even cause diseases such as cancer.
References
Mutation
Genes
Molecular evolution
Evolutionary biology | Chimeric gene | Chemistry,Biology | 493 |
53,340,196 | https://en.wikipedia.org/wiki/Ji%C5%99%C3%AD%20Draho%C5%A1 | Jiří Drahoš (born 20 February 1949; ) is a Czech physical chemist and politician who has been the Senator of Prague 4 since October 2018. Previously, Drahoš served as President of the Czech Academy of Sciences from 2009 to 2017, and was a candidate in the 2018 Czech presidential election.
Born in Český Těšín and raised in nearby Jablunkov, Drahoš studied physical chemistry at the University of Chemistry and Technology in Prague, and joined the Institute of Chemical Process Fundamentals of the Czechoslovak Academy of Sciences in 1973, which he later led from 1995 to 2003. In 2009, he was elected President of the Czech Academy of Sciences. His term as head of the academy ended on 24 March 2017.
In March 2017, Drahoš announced his candidacy for President of the Czech Republic in the 2018 election. He ran on a moderate centrist platform, and is generally pro-European and supportive of NATO and Atlanticism. Drahoš lost the second round of the presidential election to his opponent President Miloš Zeman with 48.6% of the vote, but vowed to remain in public life. In October 2018, he stood for the Czech Senate in the Prague 4 district, winning the election outright in the first round with 52.65% of the vote.
Early life and career
Jiří Drahoš was born on 20 February 1949 in Český Těšín to a Czech father originally from Skuteč in Vysočina, and a Polish mother from Jablunkov. He spent most of his childhood in Jablunkov, where his mother Anna lived and worked as a nurse. His father, also named Jiří, was a teacher in a local Czech school.
Drahoš studied at the University of Chemistry and Technology in Prague and qualified as a scientist in 1972. He joined the Institute of Chemical Process Fundamentals at the Czech Academy of Sciences, and was later head of the institute from 1996 to 2003. On 13 March 2009, Drahoš was elected President of the Czech Academy of Sciences, defeating Eva Syková. During his tenure, he successfully opposed 50% budget cuts to the Academy proposed by the governments of Prime Ministers Mirek Topolánek and Jan Fischer as a consequence of the 2008 financial crisis. Drahoš later called it an "attempt to destroy my motherly institution". In 2012, President Václav Klaus awarded him the Medal of Merit in the field of science. His second term as head of the academy ended on 24 March 2017. He is co-author of 14 patents.
Political career
2018 presidential campaign
On 28 March 2017, Drahoš announced his intention to stand in the 2018 presidential election. On 24 April 2017, he started gathering the signatures required to be registered as a candidate. In July 2017, after meeting with Drahoš, the leaders of Populars and Mayors, Pavel Bělobrádek and Petr Gazdík, announced that they would ask their respective parties' members to nominate and endorse Drahoš's candidacy. Mayors and Independents endorsed Drahoš on 25 July 2017 while the Christian and Democratic Union – Czechoslovak People's Party (KDU–ČSL) endorsed him on 14 November 2017. Young Social Democrats also endorsed Drahoš on 9 December 2017. Polls in late 2017 showed Drahoš as the second strongest candidate behind Zeman.
Drahoš received campaign donations from several influential businessmen, including Dalibor Dědek, Jiří Grygar and Luděk Sekyra. Drahoš started gathering signatures for his nomination in May 2017. On 19 August 2017, Drahoš announced he had gathered 78,000 signatures. He submitted his nomination on 3 November 2017 with 142,000 signatures.
On 4 November 2017 on Facebook, Drahoš criticized Mirek Topolánek, who had announced his candidacy that day, describing Topolánek as similar to Miloš Zeman and calling his candidacy a bad joke. The two candidates met during a presidential debate at Charles University; Drahoš reflected that the status he posted was "Topolánek-like", to which Topolánek replied that it was written either by "a woman or PR mage".
Drahoš received media attention when he expressed his fear that the election could be influenced by Russia. He met outgoing Prime Minister Bohuslav Sobotka to discuss the matter and stated he would also meet the new Prime Minister Andrej Babiš. The incumbent president Miloš Zeman criticized Drahoš and compared his actions to Hillary Clinton's when she lost to Donald Trump.
Drahoš received criticism when he published a status on social media about Václav Klaus' amnesty, when it was revealed that he had copied a similar status by his fellow presidential candidate Michal Horáček. Drahoš apologised and attributed the mistake to an external member of staff.
The first round was held on 12 and 13 January 2018. Drahoš received 1,369,601 (26.6%) votes, and advanced to the second round against the incumbent president Miloš Zeman. In the second round, held on 26–27 January 2018, Drahoš received 48.63% of the vote and thus lost to Zeman. Drahoš conceded defeat to Zeman, telling a crowd of his supporters that "I would like to congratulate election winner Miloš Zeman".
Senate
Following the 2018 presidential election, Drahoš vowed to remain in public life, and in March 2018 announced his bid for the Prague 4 Senate seat in the 2018 election, nominated by Mayors and Independents and supported by TOP 09, KDU–ČSL and the Green Party. He won the election outright in the first round, with 52.65% of the vote.
Political views
Drahoš considers himself a centrist politician. As a candidate, Jiří Drahoš has presented himself as someone who can unite society, and as a respectable person who would act according to the constitution. Drahoš emphasises the importance of Czech science and education and has called for solidarity with those "who cannot take care of themselves". He has called for a "responsible approach" to the landscape and environment and has described human reason, creativity and ingeniousness as the only "renewable resource" of the wealth of the Czech Republic.
Drahoš wants the Czech Republic to play an active role in discussions over the future of the European Union, and wants the country to be a part of the Western world. He supports European integration but has said that he believes that the European Union should not impose unnecessary regulations on member states. He also said that he would not rush into Czech adoption of the Euro. Drahoš opposes a referendum about Czech membership of the European Union, and said that important geopolitical questions should not be decided by referendum. He supports the Czech Republic's membership of NATO.
In August 2015, Drahoš signed a petition named "scientists against fear and apathy" in opposition to both anti-Islamic radicalism and anti-immigrant populism.
Drahoš suggested that the Catalan independence referendum was "not legal", supporting the position of the Spanish government.
Drahoš says he supports the anti-Russian sanctions imposed by the United States and the EU. However he also said that having good relations with Russia is in the interest of the Czech Republic and European Union. Drahoš supports trade and economic relations with China, arguing that "China is a superpower" and "many countries are doing business with China."
In 2017, Drahoš rejected the European Union's proposal of compulsory migrant quotas, saying, "there is no successful model of Muslim integration in Europe". Drahoš also said that "Europe can't feed 100 million Africans, it is necessary to help them at home."
Drahoš described the pre-war German minority in Czechoslovakia as "Adolf Hitler's fifth column", and said that he agreed with the post-war expulsion of Germans from Czechoslovakia.
Drahoš has described himself as a sympathizer with Israel.
References
External links
Scientific publications
1949 births
Czech chemists
KDU-ČSL presidential candidates
Candidates in the 2018 Czech presidential election
Living people
Mayors and Independents presidential candidates
Presidents of the Czech Academy of Sciences
People from Český Těšín
People from Jablunkov
Recipients of Medal of Merit (Czech Republic)
Czech people of Polish descent
Czechoslovak people of Polish descent
Mayors and Independents politicians
Slovak University of Technology in Bratislava alumni
Physical chemists
Mayors and Independents senators | Jiří Drahoš | Chemistry | 1,725 |
8,949,082 | https://en.wikipedia.org/wiki/Of%20the%20form | In mathematics, the phrase "of the form" indicates that a mathematical object, or (more frequently) a collection of objects, follows a certain pattern of expression. It is frequently used to reduce the formality of mathematical proofs.
Example of use
Here is a proof which should be appreciable with limited mathematical background:
Statement:
The product of any two even natural numbers is also even.
Proof:
Any even natural number is of the form 2n, where n is a natural number. Therefore, let us assume that we have two even numbers which we will denote by 2k and 2l. Their product is (2k)(2l) = 4(kl) = 2(2kl). Since 2kl is also a natural number, the product is even.
Note:
In this case, both exhaustivity and exclusivity were needed. That is, it was not only necessary that every even number is of the form 2n (exhaustivity), but also that every expression of the form 2n is an even number (exclusivity). This will not be the case in every proof, but normally, at least exhaustivity is implied by the phrase of the form.
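The proof pattern can also be checked mechanically. The following is a minimal sketch in Lean 4 with Mathlib (an illustrative formalization, not part of the original article: "of the form 2n" is rendered as an existential statement, and the variable names are arbitrary):

import Mathlib.Tactic.Ring

-- "a is of the form 2 * k" is expressed as an existential statement.
example (a b : Nat) (ha : ∃ k, a = 2 * k) (hb : ∃ l, b = 2 * l) :
    ∃ m, a * b = 2 * m := by
  obtain ⟨k, hk⟩ := ha
  obtain ⟨l, hl⟩ := hb
  -- (2 * k) * (2 * l) = 2 * (2 * k * l)
  exact ⟨2 * k * l, by rw [hk, hl]; ring⟩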
References
External links
Mathematical proofs
Mathematical terminology | Of the form | Mathematics | 259 |
2,311,390 | https://en.wikipedia.org/wiki/WS-Policy | WS-Policy is a specification that allows web services to use XML to advertise their policies (on security, quality of service, etc.) and for web service consumers to specify their policy requirements.
WS-Policy is a W3C recommendation as of September 2007.
WS-Policy represents a set of specifications that describe the capabilities and constraints of the security (and other business) policies on intermediaries and end points (for example, required security tokens, supported encryption algorithms, and privacy rules) and how to associate policies with services and end points.
Policy Assertion
Assertions can either be requirements put upon a web service or an advertisement for the policies of a web service.
Operator tags
Two "operators" (XML tags) are used to make statements about policy combinations:
wsp:ExactlyOne - asserts that only one child node must be satisfied.
wsp:All - asserts that all child nodes must be satisfied.
Logically, an empty wsp:All tag makes no assertions.
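As an illustrative sketch, a policy offering two alternatives, one carrying two security assertions and one empty, could look as follows (the sp assertion names and their namespace are placeholders, not taken from any real profile; only the wsp operators follow the recommendation):

<wsp:Policy xmlns:wsp="http://www.w3.org/ns/ws-policy"
            xmlns:sp="http://example.com/assertions">
  <wsp:ExactlyOne>
    <!-- Alternative 1: both assertions in this wsp:All must be satisfied -->
    <wsp:All>
      <sp:TransportBinding/>
      <sp:RequireTimestamp/>
    </wsp:All>
    <!-- Alternative 2: an empty wsp:All, which makes no assertions -->
    <wsp:All/>
  </wsp:ExactlyOne>
</wsp:Policy>

A consumer whose own requirements match either alternative can interact with the service.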
Policy Intersection
If both provider and consumer specify a policy, an effective policy will be computed, which usually consists of the intersection of both policies. The new policy contains those assertions made by both sides which do not contradict each other. However, synonymous assertions are considered incompatible by a policy intersection. This can easily be explained by the fact that policy intersection is a syntactic approach, which does not incorporate the semantics of the assertions. Furthermore, it ignores the assertion parameters.
Contrary to what the name might suggest, a policy intersection is (although quite similar) not a set intersection.
Associated specifications
WS-Policy - Attachment specifies how to add policies to WSDL and UDDI.
WS-SecurityPolicy specifies security policy assertions for WS-Security, WS-Trust and WS-SecureConversation.
WS-Policy4MASC specifies management policies for Web services and their compositions.
External links
The latest Web Services Policy - Framework recommendation at W3C
The latest Web Services Policy - Primer introduction at W3C
The Web Services Policy Working Group page at W3C
WS-Policy specification
Policy
XML-based standards
Policy | WS-Policy | Technology | 444 |
32,767,589 | https://en.wikipedia.org/wiki/Polysilicon%20halide | Polysilicon halides are silicon-backbone polymeric solids. At room temperature, the polysilicon fluorides are colorless to yellow solids while the chlorides, bromides, and iodides are, respectively, yellow, amber, and red-orange. Polysilicon dihalides (perhalo-polysilenes) have the general formula (SiX2)n while the polysilicon monohalides (perhalo-polysilynes) have the formula (SiX)n, where X is F, Cl, Br, or I and n is the number of monomer units in the polymer.
Macromolecular structure
The polysilicon halides can be considered structural derivatives of the polysilicon hydrides, in which the side-group hydrogen atoms are substituted with halogen atoms. In the monomeric silicon dihalide (aka dihalo-silylene and dihalosilene) molecule, which is analogous to carbene molecules, the silicon atom is divalent (forms two bonds). By contrast, in both the polysilicon dihalides and the polysilicon monohalides, as well as the polysilicon hydrides, the silicon atom is tetravalent with a local coordination geometry that is tetrahedral, even though the stoichiometry of the monohalides ([SiX]n = SinXn) might erroneously imply a structural analogy between perhalopolysilynes and [linear] polyacetylenes with the similar formula (C2H2)n. The carbon atoms in the polyacetylene polymer are sp2-hybridized and thus have a local coordination geometry that is trigonal planar. However, this is not observed in the polysilicon halides or hydrides because the Si=Si double bond in disilene compounds is much more reactive than a C=C double bond. Only when the substituent groups on silicon are very large are disilene compounds kinetically non-labile.
Synthesis
The first indication that the reaction of SiX4 and Si yields a higher halide SinX2n+2 (n > 1) was in 1871 for the comproportionation reaction of SiCl4 vapor and Si at white heat to give Si2Cl6. This was discovered by the French chemists Louis Joseph Troost (1825–1911) and Paul Hautefeuille (1836–1902). Since that time, it has been shown that gaseous silicon dihalide molecules (SiX2) are formed as intermediates in the Si/SiX4 reactions. The silicon dihalide gas molecules can be condensed at low temperatures. For example, if the gaseous SiF2 (difluorosilylene) produced from SiF4 (g) and Si (s) at 1100-1400°C is condensed at temperatures below -80°C and subsequently allowed to warm to room temperature, (SiF2)n is obtained. That reaction was first observed by Donald C. Pease, a DuPont scientist, in 1958. The polymerization is believed to occur via paramagnetic di-radical oligomeric intermediates like Si2F4 (•SiF2-F2Si•) and Si3F6 (•SiF2-SiF2-F2Si•).
The polysilicon dihalides also form from the thermally-induced disproportionation of perhalosilanes (according to: x SinX2n+2 → x SiX4 + (n-1) (SiX2)x where n ≥ 2). For example, SiCl4 and Si forms SinCl2n cyclic oligomers (with n = 12-16) at 900-1200°C. Under conditions of high vacuum and fast pumping, SiCl2 may be isolated by rapidly quenching the reaction products or, under less stringent vacuum conditions, (SiCl2)n polymer is deposited just beyond the hot zone while the perchlorosilanes SinCl2n+2 are trapped farther downstream. The infrared multiphoton dissociation of trichlorosilane (HSiCl3) also yields polysilicon dichloride, (SiCl2)n, along with HCl. SiBr4 and SiI4 react with Si at high temperatures to produce SiBr2 and SiI2, which polymerize on quenching.
Reactivity
The polysilicon dihalides are generally stable under vacuum up to about 150-200°C, after which they decompose to perhalosilanes, SinX2n+2 (where n = 1 to 14), and to polysilicon monohalides. However, they are sensitive to air and moisture. Polysilicon difluoride is more reactive than the heavier polysilicon dihalides. In stark contrast to its carbon analog, polytetrafluoroethylene, (SiF2)n ignites spontaneously in air, whereas (SiCl2)n inflames in dry air only when heated to 150°C. The halogen atoms in polysilicon dihalides can be substituted with organic groups. For example, (SiCl2)n undergoes substitution by alcohols to give poly(dialkoxysilylene)s. The polysilicon monohalides are all stable to 400°C, but are also water and air sensitive. Polysilicon monofluoride reacts more vigorously than the heavier polysilicon monohalides. For example, (SiF)n decomposes [to SiF4 and Si] above 400°C explosively.
See also
Polysilicon hydride
References
External links
Inorganic Chemistry (Holleman and Wiberg)
Inorganic silicon compounds
Halides | Polysilicon halide | Chemistry | 1,201 |
69,313,236 | https://en.wikipedia.org/wiki/State%20violence | State violence is the use of force, intimidation, or oppression by a government or ruling body against the citizens within the jurisdiction of said state. This can be seen in a variety of forms, including military violence, settler colonialism, surveillance, immigration law, and other tactics used to express authority over a certain group. State violence can happen through law enforcement or military force, as well as through other branches of government and bureaucracy. State violence is typically justified under the pretense of maintaining law and order, or protecting borders. State violence can include prolonged conditions imposed on individuals that are upheld, unaddressed, or furthered by the state. For example, structural violence that lead to Flint, Michigan having lead-contaminated water may be considered state violence. U.S immigration laws are an additional example of structural violence.
Immigration policy
The effects of US immigration enforcement policies create difficulties for transnational families. The immigration policies also create a state of anxiety within immigrant communities. Individuals with familial ties and long-term residency in the US are forced to leave the country. Additionally, immigration policing policies are intended to capture criminals, yet they do not always target serious offenders. Many immigrants are arrested without warrants by local police, often due to status violations or minor traffic violations. Harm caused by immigration policies necessarily involves the state. The nature of the state's involvement in structural violence is being critically evaluated.
Violence through policy
The passing of the Patriot Act (2001) and the subsequent formation of the Department of Homeland Security expanded the definition (status) of individuals deemed worthy for detention. Policymakers enact laws that reduce individuals to a status. The fear of family separation due to legal status causes ongoing stress for undocumented people, regardless of how long they have lived with their families.
Judicial violence and policing
The involvement of the state in law enforcement is frequently linked to the perpetration of violent behaviors, both on a systemic and a personal level. Instances of such behaviors range from the application of police force by officers to extended periods of pretrial detention, excessively long prison sentences, and insufficient care provided to those who are incarcerated. These concerns tend to have a greater impact on communities of color. The policies implemented by law enforcement agencies, and the resulting imprisonment, can have a significant impact on various aspects of one's life.
Mass incarceration
The United States' high rate of imprisonment represents a form of structural violence that disproportionately affects black Americans. Approximately 33% of black men in the U.S. have felony convictions, which leads to disenfranchisement from the voting process. The more someone interacts with law enforcement, jail, and prison, the more they tend to believe that their place in society is predetermined. The consequences of continued imprisonment for them and other members of their community shape their views on the social structure.
Excessive use of force
The issue of excessive police violence is being critically examined as a public health concern. The location where excessive police violence occurs plays a significant role in policing. Police officers tend to use excessive force more frequently in low-income neighborhoods that are predominantly inhabited by people of color. Several factors influence the use of force, including gender, social status, and actual or perceived involvement in criminal activity.
Settler colonialism
Unlike colonialism, settler colonialism seeks to claim land that is already occupied by an indigenous group. Typically, settlers will establish settlements, displace the indigenous groups, and initiate governmental control over the region. During the 18th and 19th centuries, the United States advanced their settler colonial project with forced conversion, residential schools, and displacement of various indigenous communities. Residential schools, also referred to as boarding schools, were state funded and typically managed by churches. These schools took a central role in perpetrating state violence against the native population. While indigenous children were in these schools, they were discouraged from participating in their culture and were given Anglo names. The children were also subjected to abuse, exposed to illness, and isolated from their families. These practices were funded by the Indian Civilization Act Fund of March 3, 1819 and the Compulsory Indian Education Act approved by Congress in 1887.
Violence against indigenous women in the United States
Historically, native women's bodies have been destroyed to further the colonial project in the United States. Because of women's ability to reproduce, native women have been killed in attempts to extinguish indigenous populations. This reproductive state violence continued into the 1970s, when the state performed forced sterilizations on unknowing indigenous women.
State surveillance
Government surveillance is a tool used by government agencies to protect citizens from potential attacks from terrorists, extremists, or dissidents. Surveillance methods can include monitoring phone calls, video surveillance, or tracking internet usage. Although surveillance was designed to protect national security, it has the potential to perpetuate state violence.
While surveilling as an action is not inherently violent, it can encroach upon citizens' civil liberties and right to privacy. After the terrorist attacks on September 11, 2001, President George W. Bush signed the Patriot Act; this Act allowed for an expansion of surveillance by the government and law enforcement. In 2008, U.S. Congress passed the FISA Amendment Act that gave government agencies, such as the NSA, unfettered access to private communications of foreigners. Section 702 of the FISA Amendment Act allows for government agencies to collect information from private companies like AT&T, Google, and Facebook to target non- U.S. citizens. In some instances, this permission includes communications between a non-citizen and a U.S. citizen. The FBI has been known to use these databases to search for information on U.S. citizens in a process called “backdoor searches”. Although it is unclear who these searches have been used on, they could potentially be used to control populations, target activists, or profile minority groups. The misuse of surveillance to target civilians can amplify existing power imbalances and reinforce state violence.
References
Further reading
Violence
Human rights abuses | State violence | Biology | 1,229 |
40,286,840 | https://en.wikipedia.org/wiki/MYF6 | Myogenic factor 6 (also known as Mrf4 or herculin) is a protein that in humans is encoded by the MYF6 gene.
This gene is also known in the biomedical literature as MRF4 and herculin. MYF6 is a myogenic regulatory factor (MRF) involved in the process known as myogenesis.
Function
MYF6/Mrf4 is a member of the myogenic factor (MRF) family of transcription factors that regulate skeletal muscle myogenesis and muscle regeneration. Myogenic factors are basic helix-loop-helix (bHLH) transcription factors.
MYF6 is a gene that encodes a protein involved in the regulation of myogenesis. The precise role(s) of Myf6/Mrf4 in myogenesis are unclear, although in mice it is able to initiate myogenesis in the absence of Myf5 and MyoD, two other MRFs. The portion of the protein integral to myogenesis regulation requires the basic helix-loop-helix (bHLH) domain that is conserved among all of the genes in the MRF family.
MYF6 is expressed exclusively in skeletal muscle, and it is expressed at higher levels in adult skeletal muscle than all of the other MRF family genes. In mouse, Myf6/Mrf4 differs somewhat from the other MRF genes due to its two-phase expression. Initially, Myf6 is transiently expressed along with Myf-5 in the somites during the early stages of myogenesis. However, it is more noticeably expressed postnatally. This suggests that it serves an important role in the maintenance and repair of adult skeletal muscle.
The MYF6 gene is physically linked to the MYF5 gene on chromosome 12, and similar linkage is observed in all vertebrates. Mutations in the mouse Myf6 gene typically exhibit reduced levels of Myf5. Despite reductions in muscle mass of the back and defective rib formation, Myf6 mutants still exhibit fairly normal skeletal muscle. This demonstrates that Myf6 is not essential for the formation of most myofibers, at least in the strains of mice tested.
In zebrafish, Myf6/Mrf4 is expressed in all terminally differentiated muscle examined, but expression has not been reported in muscle precursor cells.
Clinical significance
Mutations in the MYF6 gene are associated with autosomal dominant centronuclear myopathy (ADCNM) and Becker's muscular dystrophy.
References
Further reading
Transcription factors | MYF6 | Chemistry,Biology | 525 |
9,733,137 | https://en.wikipedia.org/wiki/Promela | PROMELA (Process or Protocol Meta Language) is a verification modeling language introduced by Gerard J. Holzmann. The language allows for the dynamic creation of concurrent processes to model, for example, distributed systems. In PROMELA models, communication via message channels can be defined to be synchronous (i.e., rendezvous), or asynchronous (i.e., buffered). PROMELA models can be analyzed with the SPIN model checker, to verify that the modeled system produces the desired behavior. An implementation verified with Isabelle/HOL is also available, as part of the Computer Aided Verification of Automata (CAVA) project. Files written in Promela traditionally have a .pml file extension.
Introduction
PROMELA is a process-modeling language whose intended use is to verify the logic of parallel systems. Given a program in PROMELA, Spin can verify the model for correctness by performing random or iterative simulations of the modeled system's execution, or it can generate a C program that performs a fast exhaustive verification of the system state space. During simulations and verifications, SPIN checks for the absence of deadlocks, unspecified receptions, and unexecutable code. The verifier can also be used to prove the correctness of system invariants and it can find non-progress execution cycles. Finally, it supports the verification of linear time temporal constraints; either with Promela never-claims or by directly formulating the constraints in temporal logic. Each model can be verified with SPIN under different types of assumptions about the environment. Once the correctness of a model has been established with SPIN, that fact can be used in the construction and verification of all subsequent models.
PROMELA programs consist of processes, message channels, and variables. Processes are global objects that represent the concurrent entities of the distributed system. Message channels and variables can be declared either globally or locally within a process. Processes specify behavior, channels and global variables define the environment in which the processes run.
Language reference
Data types
The basic data types used in PROMELA are bit, bool, byte, short, and int; the sizes given here are for a PC i386/Linux machine.
The names bit and bool are synonyms for a single bit of information. A byte is an unsigned 8-bit quantity that can store a value between 0 and 255. shorts (16 bits) and ints (32 bits) are signed quantities that differ only in the range of values they can hold.
Variables can also be declared as arrays. For example, the declaration:
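int x[10];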
declares an array of 10 integers that can be accessed in array subscript expressions like:
x[0] = x[1] + x[2];
But arrays cannot be initialised with an enumerated list of values on creation, so the elements must be assigned individually, as follows:
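int x[3];
x[0] = 1;
x[1] = 2;
x[2] = 3;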
The index to an array can be any expression that determines a unique integer value. The effect of an index outside the range is undefined. Multi-dimensional arrays can be defined indirectly with the help of the typedef construct (see below).
Processes
The state of a variable or of a message channel can only be changed or inspected by processes. The behavior of a process is defined by a proctype declaration. For example, the following declares a process type A with one variable state:
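proctype A()
{
  byte state;
  state = 3
}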
The proctype definition only declares process behavior; it does not execute it. Initially, in the PROMELA model, just one process will be executed: a process of type init, which must be declared explicitly in every PROMELA specification.
New processes can be spawned using the run statement, which takes as argument the name of a proctype, from which a process is then instantiated. The run operator can be used in the body of any proctype definition, not only in the initial process. This allows for the dynamic creation of processes in PROMELA.
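For instance, an initial process can spawn two copies of the proctype A declared above:

init {
  run A();
  run A()
}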
An executing process disappears when it terminates—that is, when it reaches the end of the body in the definition, and all child processes that it started have terminated.
A proctype may also be declared active (see below).
Atomic construct
By prefixing a sequence of statements enclosed in curly braces with the keyword atomic, the user can indicate that the sequence is to be executed as one indivisible unit, non-interleaved with any other processes.
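For example, the following swap of previously declared variables x, y, and tmp cannot be observed half-done by any other process:

atomic {
  tmp = x;
  x = y;
  y = tmp
}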
Atomic sequences can be an important tool in reducing the complexity of verification models. Note that atomic sequences restrict the amount of interleaving that is allowed in a distributed system. Intractable models can be made tractable by labeling all manipulations of local variables with atomic sequences.
Message passing
Message channels are used to model the transfer of data from one process to another. They are declared either locally or globally, for instance as follows:
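chan qname = [16] of { short };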
This declares a buffered channel that can store up to 16 messages of type short (the capacity is 16 here).
The statement:
qname ! expr;
sends the value of the expression expr to the channel with name qname; that is, it appends the value to the tail of the channel.
The statement:
qname ? msg;
receives the message, retrieves it from the head of the channel, and stores it in the variable msg. The channels pass messages in first-in-first-out order.
A rendezvous port can be declared as a message channel with the store length zero. For example, the following:
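chan port = [0] of { byte };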
defines a rendezvous port that can pass messages of type byte. Message interactions via such rendezvous ports are by definition synchronous, i.e. the sender or receiver (the one that arrives first at the channel) will block waiting for the contender that arrives second (receiver or sender).
When a buffered channel has been filled to its capacity (sending is "capacity" number of outputs ahead of receiving inputs), the default behavior of the channel is to become synchronous, and the sender will block on the next send. Observe that there is no common message buffer shared between channels. At the cost of increased complexity, compared to using a channel as unidirectional and point-to-point, it is possible to share channels between multiple receivers or multiple senders, and to merge independent data streams into a single shared channel. It follows that a single channel may also be used for bidirectional communication.
Control flow constructs
There are three control flow constructs in PROMELA. They are the case selection, the repetition and the unconditional jump.
Case selection
The simplest construct is the selection structure. Using the relative values of two variables a and b, for example, one can write:
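if
:: (a > b) -> max = a
:: (a <= b) -> max = b
fi

(The variables a, b, and max are assumed to have been declared earlier.)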
The selection structure contains two execution sequences, each preceded by a double colon. One sequence from the list will be executed. A sequence can be selected only if its first statement is executable. The first statement of a control sequence is called a guard.
In the example above, the guards are mutually exclusive, but they need not be. If more than one guard is executable, one of the corresponding sequences is selected non-deterministically. If all guards are unexecutable, the process will block until one of them can be selected. (By contrast, the occam programming language would stop or be unable to proceed when no guard is executable.)
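For example, given a boolean variable A, more than one guard may be executable at once (the variable names here are illustrative):

if
:: A -> c = 1
:: A -> c = 2
:: else -> c = 3
fi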
The consequence of the non-deterministic choice is that, in the example above, if A is true, both of the first two choices may be taken. In "traditional" programming, one would understand an if structure sequentially. Here, the if – double colon – double colon must be understood as "any one being ready", and if none is ready, only then would the else be taken.
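As another example:

if
:: value = 3
:: value = 4
fi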
In the example above, the variable value is non-deterministically assigned the value 3 or 4.
There are two pseudo-statements that can be used as guards: the timeout statement and the else statement. The timeout statement models a special condition that allows a process to abort waiting for a condition that may never become true. The else statement can be used as the initial statement of the last option sequence in a selection or iteration statement. The else is only executable if all other options in the same selection are not executable. Also, the else may not be used together with channels.
Repetition (loop)
A logical extension of the selection structure is the repetition structure. For example:
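byte count;
do
:: count = count + 1
:: count = count - 1
:: (count == 0) -> break
od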
describes a repetition structure in PROMELA. Only one option can be selected at a time. After the option completes, the execution of the structure is repeated. The normal way to terminate the repetition structure is with a break statement. It transfers the control to the instruction that immediately follows the repetition structure.
Unconditional jumps
Another way to break a loop is the statement. For example, one can modify the example above as follows:
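byte count;
do
:: count = count + 1
:: count = count - 1
:: (count == 0) -> goto done
od;
done:
  skip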
The goto in this example jumps to a label named done. A label can only appear before a statement. To jump to the end of the program, for example, a dummy skip statement is useful: it is a place-holder that is always executable and has no effect.
Assertions
An important language construct in PROMELA that needs a little explanation is the assert statement. Statements of the form:
assert(any_boolean_condition)
are always executable. If a boolean condition specified holds, the statement has no effect. If, however, the condition does not necessarily hold, the statement will produce an error during verifications with SPIN.
Complex data structures
A PROMELA typedef definition can be used to introduce a new name for a list of data objects of predefined or earlier defined types. The new type name can be used to declare and instantiate new data objects, which can be used in any context in an obvious way:
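typedef MyStruct {
  short Field1;
  byte Field2
}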
The access to the fields declared in a typedef construction is done in the same manner as in the C programming language. For example:
MyStruct x;
x.Field1 = 1;
is a valid PROMELA sequence that assigns to the field Field1 of the variable x the value 1.
Active proctypes
The keyword active can be prefixed to any proctype definition. If the keyword is present, an instance of that proctype will be active in the initial system state. Multiple instantiations of that proctype can be specified with an optional array suffix of the active keyword. Example:
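active [3] proctype Worker()
{
  printf("hello, world\n")
}

Here three instances of the (arbitrarily named) proctype Worker run from the initial system state.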
Executability
The semantics of executability provides the basic means in Promela for modeling process synchronizations.
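A minimal sketch of such a model (the channel names are arbitrary; P1 and P2 are the processes discussed below):

chan a2b = [0] of { bit };
chan b2a = [0] of { bit };

active proctype P1()
{
  do
  :: a2b ! 0   /* output to P2 */
  :: b2a ? _   /* input from P2 */
  od
}

active proctype P2()
{
  do
  :: b2a ! 0   /* output to P1 */
  :: a2b ? _   /* input from P1 */
  od
}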
In the example, the two processes P1 and P2 have non-deterministic choices of (1) input from the other or (2) output to the other. Two rendezvous handshakes are possible, or executable, and one of them is chosen. This repeats forever. Therefore, this model will not deadlock.
When Spin analyzes a model like the above, it will verify the choices with a non-deterministic algorithm, where all executable choices will be explored. However, when Spin's simulator visualizes possible non-verified communication patterns, it may use a random generator to resolve the "non-deterministic" choice. Therefore, the simulator may fail to show a bad execution (in the example, there is no bad trail). This illustrates a difference between verification and simulation.
In addition, it is also possible to generate executable code from Promela models using Refinement.
Keywords
The following identifiers are reserved for use as keywords.
References
External links
Spin homepage
Spin Tutorials and Online References
Concise Promela Reference
Computer Aided Verification of Automata: "The CAVA Project" (2010-2019) and "Verified Model Checkers" (2016-curr) website
Specification languages
Model checkers | Promela | Mathematics,Engineering | 2,335 |
2,761,068 | https://en.wikipedia.org/wiki/Retroreflective%20sheeting | Retroreflective sheeting is flexible retroreflective material primarily used to increase the nighttime conspicuity of traffic signs, high-visibility clothing, and other items so they are safely and effectively visible in the light of an approaching driver's headlamps. They are also used as a material to increase the scanning range of barcodes in factory settings. The sheeting consists of retroreflective glass beads, microprisms, or encapsulated lenses sealed onto a fabric or plastic substrate. Many different colors and degrees of reflection intensity are provided by numerous manufacturers for various applications. As with any retroreflector, sheeting glows brightly when there is a small angle between the observer's eye and the light source directed toward the sheeting but appears nonreflective when viewed from other directions.
Applications
Retroreflective sheeting is widely used in a variety of applications today, after early widespread use on road signs in the 1960s.
High-visibility clothing
High-visibility clothing frequently combines retroreflective sheeting with fluorescent fabrics in order to significantly increase the wearer's visibility from a distance, which in turn reduces the risk of traffic-related accidents. Such clothing is commonly worn as (often mandatory) PPE by professionals who work near road traffic or heavy machinery, often at night or in low-visibility weather conditions, such as construction workers, road workers and emergency service personnel. It is also commonly worn by cyclists or joggers to increase their nighttime visibility to road traffic. High-visibility clothing typically comes in fluorescent colors like yellow, orange, and red, as these shades are highly visible in various lighting conditions and are internationally recognized for safety use. It is designed according to specific standards to ensure effectiveness. In Canada, these requirements are outlined in the CSA Standard Z96-15 (R2020), while in the United States, they follow ANSI/ISEA 107-2020.
For road signs
Retroreflective sheeting for road signs is categorized by construction and performance specified by technical standards such as ASTM D4956-11a.; various types give differing levels of retroreflection, effective view angles, and lifespan. Sheeting has replaced button copy as the predominant type of retroreflector used in roadway signs.
There are several grades of retroreflective sheeting which include the three major grades: engineer grade, high intensity prismatic (HIP) and diamond grade. Within these categories are further delineations based on material used and visibility distance. Diamond grade typically has the greatest distance for visibility of the three major categories.
For barcode labels
Barcodes can be printed onto retroreflective sheeting to enable scanning up to 50 feet away.
In motion pictures
The special effects technique of front projection uses retroreflective screens to create false backgrounds for scenes shot in studios. Front projection was used in 2001: A Space Odyssey during the "Dawn of Man" sequence. Other films that have used front projection techniques include Silent Running, Where Eagles Dare and Superman.
Star Wars episodes IV, V and VI used retroreflective sheeting for the lightsaber blades.
Autonomous vehicle navigation
Reflective tape is used to provide an explicit way to do optical navigation of autonomous vehicles. For example, strips of retroreflective tape are used to provide navigation inputs to the prototype Hyperloop pod vehicles on the SpaceX Hypertube test track.
References
Glass compositions
Glass engineering and science
Nonwoven fabrics
Traffic signs | Retroreflective sheeting | Chemistry,Materials_science,Engineering | 706 |
62,638,447 | https://en.wikipedia.org/wiki/Medicinal%20Chemistry%20Research | Medicinal Chemistry Research is a peer-reviewed scientific journal of medicinal chemistry emphasizing the structure-activity relationships of biologically active compounds. It was founded in 1991 by Alfred Burger (University of Virginia), who also founded the Journal of Medicinal Chemistry. The journal is currently edited by Longqin Hu.
Editors in chief
Alfred Burger served as its first editor-in-chief before passing on the mantle to Richard Glennon (Virginia Commonwealth University). Stephen J. Cutler (University of South Carolina) then took over and served between 2002 and 2019. Longqin Hu (Rutgers University–New Brunswick) became editor in 2020.
Abstracting and indexing
The journal is abstracted and indexed in the following bibliographic databases:
References
External links
Medicinal chemistry journals
Academic journals established in 1991
Monthly journals
English-language journals
Springer Science+Business Media academic journals | Medicinal Chemistry Research | Chemistry | 171 |
1,887,714 | https://en.wikipedia.org/wiki/Large%20Plasma%20Device | The Large Plasma Device (often stylized as LArge Plasma Device or LAPD) is an experimental physics device located at UCLA. It is designed as a general purpose laboratory for experimental plasma physics research. The device began operation in 1991 and was upgraded in 2001 to its current version. The modern LAPD is operated as the primary device for a national collaborative research facility, the Basic Plasma Science Facility (or BaPSF), which is supported by the US Department of Energy, Fusion Energy Sciences and the National Science Foundation. Half of the operation time of the device is available to scientists at other institutions and facilities who can compete for time through a yearly solicitation.
History
The first version of the LAPD was a 10 meter long device constructed by a team led by Walter Gekelman in 1991. The construction took 3.5 years to complete and was funded by the Office of Naval Research (ONR). A major upgrade to a 20 meter version was funded by ONR and an NSF Major Research Instrumentation award in 1999. Following the completion of that major upgrade, the award of a $4.8 million grant by the US Department of Energy and the National Science Foundation in 2001 enabled the creation of the Basic Plasma Science Facility and the operation of the LAPD as part of this national user facility. Gekelman was director of the facility until 2016, when Troy Carter became BaPSF director.
Machine overview
The LAPD is a linear pulsed-discharge device operated at a high (1 Hz) repetition rate, producing a strongly magnetized background plasma which is physically large enough to support Alfvén waves. Plasma is produced from a barium oxide (BaO) cathode-anode discharge at one end of a 20-meter long, 1 meter diameter cylindrical vacuum vessel. The resulting plasma column is roughly 16.5 meters long and 60 cm in diameter. The background magnetic field, produced by a series of large electromagnets surrounding the chamber, can be varied from 400 gauss to 2.5 kilogauss (40 to 250 mT).
Plasma parameters
Because the LAPD is a general-purpose research device, the plasma parameters are carefully selected to make diagnostics simple without the problems associated with hotter (e.g. fusion-level) plasmas, while still providing a useful environment in which to do research. The typical operational parameters are:
Density: n = 1–4 × 1012 cm−3
Temperature: Te = 6 eV, Ti = 1 eV
Background field: B = 400–2500 gauss (40–250 mT)
In principle, a plasma may be generated from any kind of gas, but inert gases are typically used to prevent the plasma from destroying the coating on the barium oxide cathode. Examples of gases used are helium, argon, nitrogen and neon. Hydrogen is sometimes used for short periods of time. Multiple gases can also be mixed in varying ratios within the chamber to produce multi-species plasmas.
At these parameters, the ion Larmor radius is a few millimeters, and the Debye length is tens of micrometres. Importantly, it also implies that the Alfvén wavelength is a few meters, and in fact shear Alfvén waves are routinely observed in the LAPD. This is the main reason for the 20-meter length of the device.
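As a rough cross-check of these scales, the standard textbook formulas can be evaluated at illustrative mid-range values (a helium plasma at n = 2 × 1012 cm−3 and B = 1 kG is assumed here; these are not official facility specifications):

```python
import math

# Assumed illustrative parameters (helium plasma, mid-range LAPD values)
n = 2e18              # density, m^-3 (2e12 cm^-3)
Te_eV, Ti_eV = 6.0, 1.0
B = 0.1               # field, tesla (1 kilogauss)

e, eps0 = 1.602e-19, 8.854e-12
mu0 = 4e-7 * math.pi
m_i = 4 * 1.661e-27   # helium ion mass, kg

# Debye length: sqrt(eps0 * kB*Te / (n e^2)), with kB*Te = Te_eV * e joules
debye = math.sqrt(eps0 * Te_eV * e / (n * e**2))   # ~13 micrometres

# Ion Larmor radius: rho_i = m_i * v_ti / (e B), with v_ti = sqrt(kB*Ti / m_i)
v_ti = math.sqrt(Ti_eV * e / m_i)
rho_i = m_i * v_ti / (e * B)                       # ~2 mm

# Alfven speed, ion cyclotron frequency, and wavelength at ~f_ci/2
v_A = B / math.sqrt(mu0 * n * m_i)                 # ~8e5 m/s
f_ci = e * B / (2 * math.pi * m_i)                 # ~0.4 MHz
wavelength = v_A / (0.5 * f_ci)                    # ~4 m

print(debye, rho_i, wavelength)
```

These values reproduce the quoted orders of magnitude: a Debye length of tens of micrometres, a millimetre-scale ion Larmor radius, and an Alfvén wavelength of a few meters.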
Plasma sources
The main source of plasma within the LAPD is produced via discharge from the barium oxide (BaO) coated cathode, which emits electrons via thermionic emission. The cathode is located near the end of the LAPD and is made from a thin nickel sheet, uniformly heated to roughly 900 °C. The circuit is closed by a molybdenum mesh anode a short distance away. Typical discharge currents are in the range of 3-8 kiloamperes at 60-90 volts, supplied by a custom-designed transistor switch backed by a 4-farad capacitor bank.
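The size of the capacitor bank can be motivated by simple arithmetic: the bank voltage droops by ΔV = IΔt/C over a pulse, so sustaining a multi-kiloampere discharge for tens of milliseconds requires farads of capacitance. A short sketch with assumed mid-range values (not exact operating figures):

```python
C = 4.0       # capacitor bank, farads
I = 5e3       # assumed discharge current, amperes
dt = 15e-3    # assumed pulse length, seconds

dV = I * dt / C   # voltage droop during one pulse
print(f"droop ~ {dV:.0f} V")   # ~19 V on a 60-90 V discharge; a much
                               # smaller bank could not hold the voltage
```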
A secondary cathode source made of lanthanum hexaboride (LaB6) was developed in 2010 to provide a hotter and denser plasma when required. It consists of four square tiles joined to form a 20 × 20 cm2 area and is located at the other end of the LAPD. The circuit is also closed by a molybdenum mesh anode, which may be placed further down the machine, and is slightly smaller than the one used to close the BaO cathode source. The LaB6 cathode is typically heated to temperatures above 1750 °C by a graphite heater, and produces discharge currents of 2.2 kiloamperes at 150 volts.
The plasma in the LAPD is usually pulsed at 1 Hz, with the background BaO source on for 10-20 milliseconds at a time. If the LaB6 source is being utilized, it typically discharges together with the BaO cathode, but for a shorter period of time (about 5–8 ms) near the end of each discharge cycle. The use of an oxide-cathode plasma source, along with a well-designed transistor switch for the discharge, allows for a plasma environment which is extremely reproducible shot-to-shot.
One interesting aspect of the BaO plasma source is its ability to act as an "Alfvén Maser", a source of large-amplitude, coherent shear Alfvén waves. The resonant cavity is formed by the highly reflective nickel cathode and the semitransparent grid anode. Since the source is located at the end of the solenoid which generates the main LAPD background field, there is a gradient in the magnetic field within the cavity. As shear waves do not propagate above the ion cyclotron frequency, the practical effect of this is to act as a filter on the modes which may be excited. Maser activity occurs spontaneously at certain combinations of magnetic field strength and discharge current, and in practice may be activated (or avoided) by the machine user.
Diagnostic access and probes
Probes
The main diagnostic is the movable probe. The relatively low electron temperature makes probe construction straightforward and does not require the use of exotic materials. Most probes are constructed in-house within the facility and include magnetic field probes, Langmuir probes, Mach probes (to measure flow), electric dipole probes and many others. Standard probe design also allows external users to bring their own diagnostics with them, if they desire. Each probe is inserted through its own vacuum interlock, which allows probes to be added and removed while the device is in operation.
A 1 Hz rep-rate, coupled with the high reproducibility of the background plasma, allows the rapid collection of enormous datasets. An experiment on LAPD is typically designed to be repeated once per second, for as many hours or days as is necessary to assemble a complete set of observations. This makes it possible to diagnose experiments using a small number of movable probes, in contrast to the large probe arrays used in many other devices.
The entire length of the device is fitted with "ball joints," vacuum-tight angular couplings (invented by a LAPD staff member) which allow probes to be inserted and rotated, both vertically and horizontally. In practice, these are used in conjunction with computer-controlled motorized probe drives to sample "planes" (vertical cross-sections) of the background plasma with whatever probe is desired. Since the only limitation on the amount of data to be taken (number of points in the plane) is the amount of time spent recording shots at 1 Hz, it is possible to assemble large volumetric datasets consisting of many planes at different axial locations.
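The resulting acquisition pattern can be sketched schematically. The function names below are hypothetical placeholders standing in for the real motor-control and digitizer calls, not the facility's actual software interface:

```python
import numpy as np

def acquire_plane(xs, ys, n_shots, move_probe_to, record_shot):
    """Collect a shot-averaged signal at every (x, y) point of one plane.

    move_probe_to and record_shot are hypothetical stand-ins for the
    motorized probe drive and the digitizer, respectively.
    """
    plane = {}
    for x in xs:
        for y in ys:
            move_probe_to(x, y)  # ball joint + motorized probe drive
            # Average repeated 1 Hz shots; this exploits the high
            # shot-to-shot reproducibility of the background plasma.
            plane[(x, y)] = np.mean([record_shot() for _ in range(n_shots)], axis=0)
    return plane

# A volumetric dataset is then a stack of such planes at different
# axial locations:
# volume = {z: acquire_plane(...) for z in z_locations}
```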
Visualizations composed from such volumetric measurements can be seen at the LAPD gallery.
Including the ball joints, there are a total of 450 access ports on the machine, some of which are fitted with windows for optical or microwave observation.
Other diagnostics
A variety of other diagnostics are also available at the LAPD to complement probe measurements. These include photodiodes, microwave interferometers, a high speed camera (3 ns/frame) and laser-induced fluorescence.
See also
Enormous Toroidal Plasma Device (ETPD), a toroidal plasma device housed in the same facility as the LAPD
References
External links
Basic Plasma Science Facility website
Physics laboratories
Plasma physics facilities
University of California, Los Angeles buildings and structures | Large Plasma Device | Physics | 1,720 |
51,008,123 | https://en.wikipedia.org/wiki/Agrocybe%20rivulosa | Agrocybe rivulosa (wrinkled fieldcap) is a species of mushroom in the genus Agrocybe. The first recorded sighting of the mushroom was in 2003, and the species was first found in Britain in 2004. It is a relatively large mushroom, with a stem 5 to 10 centimeters tall and a cap which reaches 4 to 10 centimeters across. The colour of the cap ranges from yellow to pale orange-brown. It has been eaten, and is reasonably tasty with no obvious toxicity.
References
Strophariaceae
Fungi described in 2004
Fungus species | Agrocybe rivulosa | Biology | 118 |
793,295 | https://en.wikipedia.org/wiki/Mapping%20class%20group | In mathematics, in the subfield of geometric topology, the mapping class group is an important algebraic invariant of a topological space. Briefly, the mapping class group is a certain discrete group corresponding to symmetries of the space.
Motivation
Consider a topological space, that is, a space with some notion of closeness between points in the space. We can consider the set of homeomorphisms from the space into itself, that is, continuous maps with continuous inverses: functions which stretch and deform the space continuously without breaking or gluing the space. This set of homeomorphisms can be thought of as a space itself. It forms a group under functional composition. We can also define a topology on this new space of homeomorphisms. The open sets of this new function space will be made up of sets of functions that map compact subsets K into open subsets U as K and U range throughout our original topological space, completed with their finite intersections (which must be open by definition of topology) and arbitrary unions (again which must be open). This gives a notion of continuity on the space of functions, so that we can consider continuous deformation of the homeomorphisms themselves: called homotopies. We define the mapping class group by taking homotopy classes of homeomorphisms, and inducing the group structure from the functional composition group structure already present on the space of homeomorphisms.
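In symbols, this is the compact-open topology on the group of homeomorphisms: a subbasis is given by the sets

```latex
V(K, U) = \{\, f \in \operatorname{Homeo}(X) \mid f(K) \subseteq U \,\},
\qquad K \subseteq X \text{ compact},\; U \subseteq X \text{ open},
```

and the open sets are arbitrary unions of finite intersections of such sets.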
Definition
The term mapping class group has a flexible usage. Most often it is used in the context of a manifold M. The mapping class group of M is interpreted as the group of isotopy classes of automorphisms of M. So if M is a topological manifold, the mapping class group is the group of isotopy classes of homeomorphisms of M. If M is a smooth manifold, the mapping class group is the group of isotopy classes of diffeomorphisms of M. Whenever the group of automorphisms of an object X has a natural topology, the mapping class group of X is defined as Aut(X)/Aut0(X), where Aut0(X) is the path-component of the identity in Aut(X). (Notice that in the compact-open topology, path components and isotopy classes coincide, i.e., two maps f and g are in the same path-component iff they are isotopic). For topological spaces, this is usually the compact-open topology. In the low-dimensional topology literature, the mapping class group of X is usually denoted MCG(X), although it is also frequently denoted π0(Aut(X)), where one substitutes for Aut the appropriate group for the category to which X belongs. Here π0 denotes the 0-th homotopy group of a space.
So in general, there is a short exact sequence of groups:
1 → Aut0(X) → Aut(X) → MCG(X) → 1.
Frequently this sequence is not split.
If working in the homotopy category, the mapping class group of X is the group of homotopy classes of homotopy equivalences of X.
There are many subgroups of mapping class groups that are frequently studied. If M is an oriented manifold, one may restrict to the orientation-preserving automorphisms of M, and so the mapping class group of M (as an oriented manifold) would be of index two in the mapping class group of M (as an unoriented manifold), provided M admits an orientation-reversing automorphism. Similarly, the subgroup that acts as the identity on all the homology groups of M is called the Torelli group of M.
Examples
Sphere
In any category (smooth, PL, topological, homotopy), the mapping class group of the 2-sphere is Z/2Z, corresponding to maps of degree ±1.
Torus
In the homotopy category, MCG(Tn) ≅ GL(n, Z). This is because the n-dimensional torus is an Eilenberg–MacLane space K(Zn, 1).
For other categories, if n ≥ 5, one has the following split-exact sequences:
In the category of topological spaces
In the PL-category
(⊕ representing direct sum).
In the smooth category
where the Γi are the Kervaire–Milnor finite abelian groups of homotopy spheres and Z2 is the group of order 2.
Surfaces
The mapping class groups of surfaces have been heavily studied, and are sometimes called Teichmüller modular groups (note the special case of the torus above), since they act on Teichmüller space and the quotient is the moduli space of Riemann surfaces homeomorphic to the surface. These groups exhibit features similar both to hyperbolic groups and to higher rank linear groups. They have many applications in Thurston's theory of geometric three-manifolds (for example, to surface bundles). The elements of this group have also been studied by themselves: an important result is the Nielsen–Thurston classification theorem, and a generating family for the group is given by Dehn twists which are in a sense the "simplest" mapping classes. Every finite group is a subgroup of the mapping class group of a closed, orientable surface; in fact one can realize any finite group as the group of isometries of some compact Riemann surface (which immediately implies that it injects in the mapping class group of the underlying topological surface).
Non-orientable surfaces
Some non-orientable surfaces have mapping class groups with simple presentations. For example, every homeomorphism of the real projective plane is isotopic to the identity, so the mapping class group is trivial:
MCG(RP2) = 1.
The mapping class group of the Klein bottle K is:
MCG(K) = Z2 ⊕ Z2.
The four elements are the identity, a Dehn twist on a two-sided curve which does not bound a Möbius strip, the y-homeomorphism of Lickorish, and the product of the twist and the y-homeomorphism. It is a nice exercise to show that the square of the Dehn twist is isotopic to the identity.
We also remark that the closed genus three non-orientable surface N3 (the connected sum of three projective planes) has:
MCG(N3) ≅ GL(2, Z).
This is because the surface N3 has a unique class of one-sided curves such that, when N3 is cut open along such a curve C, the resulting surface is a torus with a disk removed. As an unoriented surface, its mapping class group is GL(2, Z) (Lemma 2.1).
3-Manifolds
Mapping class groups of 3-manifolds have received considerable study as well, and are closely related to mapping class groups of 2-manifolds. For example, any finite group can be realized as the mapping class group (and also the isometry group) of a compact hyperbolic 3-manifold.
Mapping class groups of pairs
Given a pair of spaces (X,A), the mapping class group of the pair is the group of isotopy classes of automorphisms of the pair, where an automorphism of (X,A) is defined as an automorphism of X that preserves A, i.e. f: X → X is invertible and f(A) = A.
Symmetry group of knot and links
If K ⊂ S3 is a knot or a link, the symmetry group of the knot (resp. link) is defined to be the mapping class group of the pair (S3, K). The symmetry group of a hyperbolic knot is known to be dihedral or cyclic; moreover every dihedral and cyclic group can be realized as symmetry groups of knots. The symmetry group of a torus knot is known to be Z2, of order two.
Torelli group
Notice that there is an induced action of the mapping class group on the homology (and cohomology) of the space X. This is because (co)homology is functorial and Homeo0 acts trivially (because all elements are isotopic, hence homotopic to the identity, which acts trivially, and action on (co)homology is invariant under homotopy). The kernel of this action is the Torelli group, named after the Torelli theorem.
In the case of orientable surfaces, this is the action on first cohomology H1(Σ) ≅ Z^2g. Orientation-preserving maps are precisely those that act trivially on top cohomology H2(Σ) ≅ Z. H1(Σ) has a symplectic structure, coming from the cup product; since these maps are automorphisms, and maps preserve the cup product, the mapping class group acts as symplectic automorphisms, and indeed all symplectic automorphisms are realized, yielding the short exact sequence:
1 → Tor(Σ) → MCG(Σ) → Sp(2g, Z) → 1.
One can extend this sequence to the extended mapping class group, which also allows orientation-reversing classes. The symplectic group is well understood. Hence understanding the algebraic structure of the mapping class group often reduces to questions about the Torelli group.
Note that for the torus (genus 1) the map to the symplectic group is an isomorphism, and the Torelli group vanishes.
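Concretely, for the torus this identification can be written out with Dehn twist generators (a standard fact, stated here for illustration; signs depend on orientation conventions):

```latex
\mathrm{MCG}(T^2) \;\cong\; \mathrm{Sp}(2,\mathbb{Z}) = \mathrm{SL}(2,\mathbb{Z}),
\qquad
T_a \mapsto \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\quad
T_b \mapsto \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix},
```

where T_a and T_b are the Dehn twists along the two standard curves whose homology classes generate H1(T^2) ≅ Z^2; these two matrices generate SL(2, Z).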
Stable mapping class group
One can embed the surface Σg,1 of genus g and 1 boundary component into Σg+1,1 by attaching an additional hole on the end (i.e., gluing together Σg,1 and Σ1,2), and thus automorphisms of the small surface fixing the boundary extend to the larger surface. Taking the direct limit of these groups and inclusions yields the stable mapping class group, whose rational cohomology ring was conjectured by David Mumford (one of the conjectures called the Mumford conjectures). The integral (not just rational) cohomology ring was computed in 2002 by Ib Madsen and Michael Weiss, proving Mumford's conjecture.
See also
Braid groups, the mapping class groups of punctured discs
Homotopy groups
Homeotopy groups
Lantern relation
References
Stable mapping class group
External links
Madsen-Weiss MCG Seminar; many references
Geometric topology
Homeomorphisms | Mapping class group | Mathematics | 1,961 |
504,509 | https://en.wikipedia.org/wiki/Probabilistically%20checkable%20proof | In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way.
Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n), O(1)] = NP.
Definition
Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x with length n, which might be false, the prover produces a proof π (a string over Σ) which states that x solves the problem (i.e., x ∈ L). The verifier is a randomized oracle Turing machine V that checks the proof π for the statement that x solves the problem (or x ∈ L) and decides whether to accept the statement. The system has the following properties:
Completeness: For any x ∈ L, given the proof π produced by the prover of the system, the verifier accepts the statement with probability at least c(n).
Soundness: For any x ∉ L and any proof π, the verifier mistakenly accepts the statement with probability at most s(n).
Regarding the computational complexity of the verifier, the randomness complexity r(n) measures the maximum number of random bits that V uses over all x of length n, and the query complexity q(n) is the maximum number of queries that V makes to π over all x of length n.
In the above definition, the length of the proof is not mentioned, since usually it includes the alphabet set and all the witnesses. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language.
The verifier is said to be non-adaptive if it makes all its queries before it receives any of the answers to previous queries.
The complexity class PCPc(n), s(n)[r(n), q(n)] is the class of all decision problems having probabilistically checkable proof systems over a binary alphabet of completeness c(n) and soundness s(n), where the verifier is nonadaptive, runs in polynomial time, and has randomness complexity r(n) and query complexity q(n).
The shorthand notation PCP[r(n), q(n)] is sometimes used for PCP1, 1/2[r(n), q(n)]. The complexity class PCP is defined as PCP[O(log n), O(1)].
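To illustrate the flavor of a verifier that uses few queries and bounded randomness, consider the Blum–Luby–Rubinfeld linearity test. This is only a toy example (a property test, not a PCP system for an NP-complete language): the proof is read as the truth table of a function f : {0,1}^n → {0,1}, and each trial uses 2n random bits and exactly three queries:

```python
import random

def blr_verifier(proof, n, trials=50):
    """Accept if the proof looks like the truth table of a GF(2)-linear
    function. Each trial: 2n random bits, 3 queries to the proof."""
    for _ in range(trials):
        x = random.randrange(2 ** n)
        y = random.randrange(2 ** n)
        # Index XOR corresponds to addition of the input vectors over GF(2).
        if proof[x] ^ proof[y] != proof[x ^ y]:
            return False   # a far-from-linear proof is rejected w.h.p.
    return True

# Honest proof: truth table of f(v) = <a, v> mod 2, which always passes.
n, a = 4, 0b1011
table = [bin(a & v).count("1") % 2 for v in range(2 ** n)]
assert blr_verifier(table, n)
```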
History and significance
The theory of probabilistically checkable proofs studies the power of probabilistically checkable proof systems under various restrictions of the parameters (completeness, soundness, randomness complexity, query complexity, and alphabet size). It has applications to computational complexity (in particular hardness of approximation) and cryptography.
The definition of a probabilistically checkable proof was explicitly introduced by Arora and Safra in 1992, although their properties were studied earlier. In 1990 Babai, Fortnow, and Lund proved that PCP[poly(n), poly(n)] = NEXP, providing the first nontrivial equivalence between standard proofs (NEXP) and probabilistically checkable proofs. The PCP theorem proved in 1992 states that .
The theory of hardness of approximation requires a detailed understanding of the role of completeness, soundness, alphabet size, and query complexity in probabilistically checkable proofs.
Properties
From a computational complexity point of view, for extreme settings of the parameters, the definition of probabilistically checkable proofs is easily seen to be equivalent to standard complexity classes. For example, we have the following for different settings of PCP[r(n), q(n)]:
PCP[0, 0] = P (P is defined to have no randomness and no access to a proof.)
PCP[O(log(n)), 0] = P (A logarithmic number of random bits doesn't help a polynomial time Turing machine, since it could try all possible random strings of logarithmic length in polynomial time.)
PCP[0,O(log(n))] = P (Without randomness, the proof can be thought of as a fixed logarithmic sized string. A polynomial time machine could try all possible logarithmic sized proofs in polynomial time.)
PCP[poly(n), 0] = coRP (By definition of coRP.)
PCP[0, poly(n)] = NP (By the verifier-based definition of NP.)
The PCP theorem and MIP = NEXP can be characterized as follows:
PCP[O(log n), O(1)] = NP (the PCP theorem)
PCP[poly(n), poly(n)] = NEXP (equivalent to MIP = NEXP).
It is also known that PCP[r(n), q(n)] ⊆ NTIME(2^O(r(n)) q(n) + poly(n)): a nondeterministic machine can guess the proof and then deterministically enumerate all 2^O(r(n)) random strings of the verifier.
In particular, PCP[O(log n), poly(n)] = NP.
On the other hand, if NP ⊆ PCP[o(log n), o(log n)], then P = NP.
Linear PCP
A Linear PCP is a PCP in which the proof is a vector of elements of a finite field F, and such that the PCP oracle is only allowed to do linear operations on the proof. Namely, the response from the oracle to a verifier query q is the inner product ⟨q, π⟩, a linear function of the proof π. Linear PCPs have important applications in proof systems that can be compiled into SNARKs.
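A minimal sketch of such an oracle follows; the field size and proof values are arbitrary illustrative choices. The query is itself a vector over the field, and the only information returned is the inner product with the proof:

```python
P = 101  # assumed prime modulus; the proof lives in F_p^m

def linear_pcp_oracle(proof):
    """Return a query function restricted to linear operations on the proof."""
    def answer(q):
        assert len(q) == len(proof)
        return sum(qi * pi for qi, pi in zip(q, proof)) % P
    return answer

oracle = linear_pcp_oracle([3, 5, 7])
print(oracle([1, 0, 0]))  # 3  -- reading one coordinate is one linear query
print(oracle([2, 2, 2]))  # 30 -- so is any other linear combination
```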
References
External links
Holographic proof at the Encyclopedia of Mathematics
PCP course notes by Subhash Khot at the New York University, 2008.
PCP course notes and A history of the PCP theorem by Ryan O'Donnell and Venkatesan Guruswami from the University of Washington, 2005.
Mathematical proofs
Randomized algorithms | Probabilistically checkable proof | Mathematics | 1,400 |
68,513 | https://en.wikipedia.org/wiki/Surface%20science | Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives. Surface science is closely related to interface and colloid science. Interfacial chemistry and physics are common subjects for both. The methods are different. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces.
History
The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process. Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir, bears his name. The Langmuir adsorption equation is used to model monolayer adsorption where all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. Gerhard Ertl in 1974 described for the first time the adsorption of hydrogen on a palladium surface using a novel technique called LEED. Similar studies with platinum, nickel, and iron followed. The most recent developments in surface science include the advancements of Gerhard Ertl, winner of the 2007 Nobel Prize in Chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces.
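The Langmuir adsorption equation mentioned above expresses the fractional surface coverage θ in terms of the gas pressure P and an adsorption equilibrium constant K:

```latex
\theta = \frac{KP}{1 + KP}
```

so coverage grows linearly with pressure when KP ≪ 1 and saturates toward a complete monolayer (θ → 1) when KP ≫ 1.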
Chemistry
Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry.
Catalysis
The adhesion of gas or liquid molecules to the surface is known as adsorption. This can be due to either chemisorption or physisorption, and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle). However, it is difficult to study these phenomena in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface.
Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy, low energy electron diffraction, and Auger electron spectroscopy. Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements.
Electrochemistry
Electrochemistry is the study of processes driven through an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface forming the electrical double layer. Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time and solution conditions using spectroscopy, scanning probe microscopy and surface X-ray scattering. These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes.
Geochemistry
Geological phenomena such as iron cycling and soil contamination are controlled by the interfaces between minerals and their environment. The atomic-scale structure and chemical properties of mineral–solution interfaces are studied using in situ synchrotron X-ray techniques such as X-ray reflectivity, X-ray standing waves, and X-ray absorption spectroscopy as well as scanning probe microscopy. For example, studies of heavy metal or actinide adsorption onto mineral surfaces reveal molecular-scale details of adsorption, enabling more accurate predictions of how these contaminants travel through soils or disrupt natural dissolution–precipitation cycles.
Physics
Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction, surface states, surface diffusion, surface reconstruction, surface phonons and plasmons, epitaxy, the emission and tunneling of electrons, spintronics, and the self-assembly of nanostructures on surfaces. Techniques to investigate processes at surfaces include surface X-ray scattering, scanning probe microscopy, surface-enhanced Raman spectroscopy and X-ray photoelectron spectroscopy.
Analysis techniques
The study and analysis of surfaces involves both physical and chemical analysis techniques.
Several modern methods probe the topmost 1–10 nm of surfaces exposed to vacuum. These include angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), electron energy loss spectroscopy (EELS), thermal desorption spectroscopy (TPD), ion scattering spectroscopy (ISS), secondary ion mass spectrometry, dual-polarization interferometry, and other surface analysis methods included in the list of materials analysis methods. Many of these techniques require vacuum as they rely on the detection of electrons or ions emitted from the surface under study. Moreover, ultra-high vacuum, in the range of 10−7 pascal pressure or better, is generally necessary to reduce surface contamination by residual gas, by reducing the number of molecules reaching the sample over a given time period. At 0.1 mPa (10−6 torr) partial pressure of a contaminant and standard temperature, it only takes on the order of 1 second to cover a surface with a one-to-one monolayer of contaminant to surface atoms, so much lower pressures are needed for measurements. This is found by an order-of-magnitude estimate for the (number) specific surface area of materials and the impingement rate formula from the kinetic theory of gases.
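That order-of-magnitude estimate can be reproduced from the Hertz–Knudsen impingement rate Φ = P / √(2π m kB T); the contaminant species, temperature, and surface site density below are illustrative assumptions:

```python
import math

P = 1e-4            # contaminant partial pressure, Pa (0.1 mPa ~ 1e-6 torr)
T = 300.0           # temperature, K
m = 28 * 1.661e-27  # assumed N2 contaminant molecular mass, kg
kB = 1.381e-23      # Boltzmann constant, J/K

flux = P / math.sqrt(2 * math.pi * m * kB * T)  # molecules per m^2 per s

sites = 1e19        # typical surface atom density, m^-2 (1e15 cm^-2)
t_monolayer = sites / flux   # assumes a sticking probability of 1
print(f"~{t_monolayer:.0f} s to form one monolayer")  # order of seconds
```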
Purely optical techniques can be used to study interfaces under a wide variety of conditions. Reflection-absorption infrared spectroscopy, dual-polarisation interferometry, surface-enhanced Raman spectroscopy and sum frequency generation spectroscopy can be used to probe solid–vacuum as well as solid–gas, solid–liquid, and liquid–gas surfaces. Multi-parametric surface plasmon resonance works at solid–gas, solid–liquid, and liquid–gas surfaces and can detect even sub-nanometer layers. It probes the interaction kinetics as well as dynamic structural changes such as liposome collapse or the swelling of layers at different pH. Dual-polarization interferometry is used to quantify the order and disruption in birefringent thin films. This has been used, for example, to study the formation of lipid bilayers and their interaction with membrane proteins.
Acoustic techniques, such as the quartz crystal microbalance with dissipation monitoring, are used for time-resolved measurements of solid–vacuum, solid–gas and solid–liquid interfaces. The method allows for analysis of molecule–surface interactions as well as structural changes and viscoelastic properties of the adlayer.
X-ray scattering and spectroscopy techniques are also used to characterize surfaces and interfaces. While some of these measurements can be performed using laboratory X-ray sources, many require the high intensity and energy tunability of synchrotron radiation. X-ray crystal truncation rods (CTR) and X-ray standing wave (XSW) measurements probe changes in surface and adsorbate structures with sub-Ångström resolution. Surface-extended X-ray absorption fine structure (SEXAFS) measurements reveal the coordination structure and chemical state of adsorbates. Grazing-incidence small angle X-ray scattering (GISAXS) yields the size, shape, and orientation of nanoparticles on surfaces. The crystal structure and texture of thin films can be investigated using grazing-incidence X-ray diffraction (GIXD, GIXRD).
X-ray photoelectron spectroscopy (XPS) is a standard tool for measuring the chemical states of surface species and for detecting the presence of surface contamination. Surface sensitivity is achieved by detecting photoelectrons with kinetic energies of about 10–1000 eV, which have corresponding inelastic mean free paths of only a few nanometers. This technique has been extended to operate at near-ambient pressures (ambient pressure XPS, AP-XPS) to probe more realistic gas–solid and liquid–solid interfaces. Performing XPS with hard X-rays at synchrotron light sources yields photoelectrons with kinetic energies of several keV (hard X-ray photoelectron spectroscopy, HAXPES), enabling access to chemical information from buried interfaces.
Modern physical analysis methods include scanning-tunneling microscopy (STM) and a family of methods descended from it, including atomic force microscopy (AFM). These microscopies have considerably increased the ability of surface scientists to measure the physical structure of many surfaces. For example, they make it possible to follow reactions at the solid–gas interface in real space, if those proceed on a time scale accessible by the instrument.
See also
References
Further reading
External links
"Ram Rao Materials and Surface Science", a video from the Vega Science Trust
Surface Chemistry Discoveries
Surface Metrology Guide
Physical chemistry | Surface science | Physics,Chemistry,Materials_science | 2,022 |
57,185,095 | https://en.wikipedia.org/wiki/Prix%20Francoeur | The Prix Francoeur, or Francoeur Prize, was an award granted by the Institut de France, Academie des Sciences, Fondation Francoeur to authors of works useful to the progress of pure and applied mathematics. Preference was given to young scholars or to geometricians not yet established. It was established in 1882 and has been discontinued.
Prize winners
1882–1888 — Emile Barbier
1889–1890 — Maximilien Marie
1891–1892 — Augustin Mouchot
1893 — Guy Robin
1894 — J. Collet
1895 — Jules Andrade
1896 — Alphonse Valson
1897 — Guy Robin
1898 — Aimé Vaschy
1899 — Le Cordier
1900 — Edmond Maillet
1901 — Léonce Laugel
1902–1904 — Emile Lemoine
1905 — Xavier Stouff
1906–1912 — Emile Lemoine
1913–1914 — A. Claude
1915 — Joseph Marty
1916 — René Gateaux
1917 — Henri Villat
1918 — Paul Montel
1919 — Georges Giraud
1920–1921 — René Baire
1922 — Louis Antoine
1923 — Gaston Bertrand
1924 — Ernest Malo
1925 — Georges Valiron
1926 — Gaston Julia
1927 — Georges Cerf
1928 — Szolem Mandelbrojt
1929 — Paul Noaillon
1930 — Eugène Fabry
1931 — Jacques Herbrand
1932 — Henri Milloux
1933 — Paul Mentre
1934 — Jean Favard
1935 — André Weil
1936 — Claude Chevalley
1937 — Jean Leray
1938 — Jean Dieudonné
1939 — Marcel Brelot
1940 — Charles Ehresmann
1941 — Paul Vincensini
1942 — Paul Dubreil
1943 — René de Possel
1944 — No award
1945 — No award
1946 — Laurent Schwartz
1952 — No award
1957 — Jean-Pierre Serre
1962 — Jean-Louis Koszul
1967 — Jacques Neveu
1972 — Pierre Gabriel
1977 — Jean-Claude Tougeron
1982 — François Laudenbach
1987 — Jean-Louis Loday
1992 — Georges Skandalis
See also
List of mathematics awards
References
Mathematics awards
French awards
1882 establishments in France | Prix Francoeur | Technology | 408 |
7,339,649 | https://en.wikipedia.org/wiki/Preventive%20action | A preventive action is a change implemented to address a weakness in a management system that has not yet caused nonconforming product or service.
Candidates for preventive action generally result from suggestions from customers or participants in the process, but preventive action is a proactive process to identify opportunities for improvement rather than a simple reaction to identified problems or complaints. Apart from the review of the operational procedures, the preventive action might involve analysis of data, including trend and risk analyses and proficiency-testing results.
The focus for preventive actions is to avoid creating nonconformances, but also commonly includes improvements in efficiency. Preventive actions can address technical requirements related to the product or service supplied or to the internal management system.
Many organizations require that when opportunities to improve are identified or if preventive action is required, action plans are developed, implemented and monitored to reduce the likelihood of nonconformities and to take advantage of the opportunities for improvement. Additionally, a thorough preventive action process will include the application of controls to ensure that the preventive actions are effective.
In some settings, corrective action is used as an encompassing term that includes remedial actions, corrective actions and preventive actions.
Risk and decision making
Preventive actions rely upon an analysis of the consequences of change. Once a change is made, the risks it introduces should be taken into consideration. In this case preventive actions aim to minimize or, where possible, eliminate those risks.
Risks arise when little is known and understood about a particular situation. The chances of risk are minimized when one has better knowledge of the opportunities and consequences that could follow a situation. In order to reduce risk, a full analysis of potential best and worst results is required. Before considering any plan, people should be aware of the consequences of both success and failure. Not only the internal aspects of an organisation - the capability, expertise and willingness of staff - but also the external aspects - stakeholders, customers, clients - should be assessed.
Strategic risk management defines an organisation's approach to risk in terms of conditions, attitudes and expertise. It identifies the possible areas of risk and ensures that the proper approach is used. Operational risk management then ensures that the steps for minimizing or eliminating the risk are followed. A strategic approach to risk management includes studying the environment and being aware of the issues that must be considered in any situation.
Risks can arise from a range of unexpected events outside of the organisation's control, such as political instability, currency fluctuations, or changes in the weather that could lead to a change in customer behavior.
Therefore, in an organisation it is important to know and understand what events could take place, where and why. Managers should prioritize preventive actions in order to anticipate these kinds of issues, focusing especially on:
Patterns of behavior
Accidents
Single events and errors
"Patterns of behavior" relates to the morale and motivation of people. The effects of human behavior (such as victimization, bullying, harassment and discrimination) could affect confidence, weakening the relationships meant to lead to performance.
Accidents can happen anytime and anywhere, so an organisation has to ensure that accidents are kept to a minimum. In this situation, preventive actions should focus on the nature and quality of the working environment, safety aspects and technology.
Single events and errors are very hard to manage and impossible to eliminate entirely. The risk should be kept to a minimum through supervision systems, regular inspections and procedures.
In order to implement a change, an organisation has to forecast, deeply understanding where that change could lead and what its consequences might be. Thus, the risk of a particular event and its probability of occurring should be made clear. Using this information, one can better understand the situation and make future decisions, proposals and initiatives.
Examples in management
Preventive actions differ from one organisation to another. They are numerous, and include:
Assessing business trends
Monitoring processes
Notifications regarding any situation
Perform risk analysis (see the sketch after this list)
Assessing new technology
Regular training and checking
Recovery planning
Safety and security policies
Audit analysis
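One common way to make the risk-analysis step concrete (an FMEA-style convention shown purely as an illustration, not a requirement of any particular standard) is to score each potential nonconformity for severity, occurrence and detectability on 1–10 scales, and rank candidates for preventive action by the resulting risk priority number:

```python
# Illustrative FMEA-style ranking; issues and scores are invented examples.
candidates = [
    {"issue": "supplier material drift",   "severity": 7, "occurrence": 4, "detection": 6},
    {"issue": "operator training gap",     "severity": 5, "occurrence": 6, "detection": 3},
    {"issue": "calibration schedule slip", "severity": 8, "occurrence": 2, "detection": 7},
]

for c in candidates:
    c["rpn"] = c["severity"] * c["occurrence"] * c["detection"]  # risk priority number

# Highest RPN first: these get preventive action plans and follow-up checks.
for c in sorted(candidates, key=lambda c: c["rpn"], reverse=True):
    print(f'{c["issue"]:26s} RPN = {c["rpn"]}')
```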
Technology safety and security
Nowadays, due to fast changes in engineering, there is a large emphasis on enhancing the safety and security of technology. More powerful safety analysis techniques are constantly being developed in order to avoid such issues. As safety and security incidents can occur at any time, intentionally or not, preventive strategies against loss or hacking are continually enhanced. These actions aim to focus on the possible causes of a problem, rather than on solving an already critical situation.
Computing
Computer security tries to defend computers by assuring that their networks are not accessed or disrupted. Defenders employ different tactics to protect against attackers, creating barriers or lines of defense through firewalls or encryption. However, losses also result from actions not executed properly (such as human errors) or from system errors among components.
Losses can be prevented through preventive strategies and tactics. Security analysts can profile possible attackers, highlighting their motives, capabilities and purposes. With this knowledge, security experts can assess their own system and identify the most suitable defense strategy. Tracing is one of the methods used to find issues or deficiencies in a system.
Focusing first on strategy rather than tactics can be achieved by adopting a new system-theoretic causality model recently developed to provide a more powerful approach to engineering for safety. Causality models used in accident analysis are either traditional, attributing accidents to human error, or more complex, attributing them to flawed interactions between components and to system errors.
STAMP (System-Theoretic Accident Model and Processes) is a model of accident causality used to investigate potential accidents before they occur. Under this model, issues are seen as the results of inadequate control of the safety components involved.
More powerful systems for analysing safety have since been created. STPA (System-Theoretic Process Analysis) uses such techniques, being based on the STAMP model of causality. Once the cause is identified, STPA examines the system and constructs a scenario that could resolve the issue.
Information systems
Regarding technology, not only the safety and security of computers and isolated devices can be threatened, but also those of entire complex information systems. As not all decisions made in an organisation are based on known rules, an analytical manager will examine the situation in detail and anticipate potential issues that can occur. However, many decisions can have a great impact on some aspect of the organisation and cannot be easily reversed.
Thus, modelling and simulation play the role of preventive actions, being applied early in the design of a process, where real factual data is not yet available. A model is an abstract representation that includes all aspects of a process, so its potential impact can be better analysed. Such a representation can be produced before implementation through business process modeling (BPM).
On one hand, there are deterministic systems, which rely on the input data and are capable of predicting accurate output. On the other hand, there are probabilistic systems, which do not forecast with complete accuracy. However, both deterministic and probabilistic systems require earlier actions that can prevent issues.
Analysis and design are among the most important activities performed before starting up a business. During analysis, one gets a better understanding of the potential of the business, with a diagrammatic model ensuring agreement between IT professionals and system users. System design then specifies the way in which the system will work, and is eventually followed by system building.
In society
Preventive healthcare
Preventive healthcare or preventive medicine refers to the measures taken in order to prevent diseases rather than treat them. As there is a wide range of diseases in the world, there is also a wide variety of factors that influence those health disorders, such as environment, genetics and lifestyle. Preventive healthcare relies on the anticipation of diseases before they occur. Preventive methods include:
regular check-ups with a doctor, in order to identify risk factors or to monitor different diseases
screening for chronic diseases (cancer, diabetes, heart disease)
vaccinations
trying to maintain a healthy lifestyle, through healthy eating and regular exercise
avoiding some harmful habits, such as tobacco or alcohol
life insurance
However, these traditional healthcare strategies are not the only actions that can prevent disease. A very important step is recognizing and being aware of certain health changes that can turn into real health threats. Examples of minor problems that people usually do not take seriously are numerous, such as involuntary weight loss, persistent coughs, body changes and other aches and pains. Upon noticing a disorder, people can take action by consulting a specialist in order to keep the situation from getting worse.
Crime prevention
Crime prevention relies on the actions that defend against and fight criminals and crimes, such as murders, robberies, burglaries, blackmail, hijacking or smuggling.
Criminologists focus on preventing the risks that can cause crime rather than reacting to crimes that have already occurred.
There is a great number of techniques used in reducing crime. These can be split into large-scale techniques, such as strategies implemented by a society or community, and smaller-scale ones, such as personal security.
Examples of collective strategies preventing criminality:
Increasing capacity of the police in an area
Investing in jails
Monitoring areas
Support exchange of information regarding violent activities and events
Enforcing security
Introducing violence-prevention behavior in education
However, in most cases people tend to rely on their own personal skills and capabilities to help them prevent and defend against criminal attacks. For example:
Self-defense training
Securing goods
Avoiding wilderness
Anti-terrorism operation
Preventive actions taken against acts of terrorism could either be preventive lockdown (preemptive lockdown to mitigate the risk) or an emergency lockdown (during or after the occurrence of the risk).
The August 2019 clampdown in Jammu and Kashmir is an example of preventive lockdown to eliminate the risk to the lives of civilians from the militants, violent protesters and stonepelters.
See also
Corrective and Preventive Action (CAPA)
Preventive diplomacy
Risk management
Preventive lockdown
References
Quality management
Prevention | Preventive action | Engineering | 2,056 |