id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
1,304,321 | https://en.wikipedia.org/wiki/Mucoid%20plaque | Mucoid plaque (or mucoid cap or rope) is a pseudoscientific term used by some alternative medicine advocates to describe what is claimed to be a combination of harmful mucus-like material and food residue that they say coats the gastrointestinal tract of most people. The term was coined by Richard Anderson, a naturopath and entrepreneur, who sells a range of products that claim to "cleanse" the body of such purported plaques.
Many such "colon cleansing" products are promoted to the public on websites that have been described as making misleading medical claims. The presence of laxatives, bentonite clay, and fibrous thickening agents in some of these "cleansing agents" has led to suggestions that the products themselves produce the excreted matter regarded as the plaque.
The concept of a 'mucoid plaque' has been dismissed by medical experts as having no anatomical or physiological basis.
History
Various forms of colon cleansing were popular in the 19th and early 20th century. In 1932, Bastedo wrote in the Journal of the American Medical Association about his observation of mucus masses being removed during a colon irrigation procedure: "When one sees the dirty gray, brown or blackish sheets, strings and rolled up wormlike masses of tough mucus with a rotten or dead-fish odor that are obtained by colon irrigations, one does not wonder that these patients feel ill and that they obtain relief and show improvement as the result of the irrigation."
While colonic irrigation enjoyed a vogue in the early 20th century as a possible cure for numerous diseases, subsequent research showed that it was useless and potentially harmful. With the scientific rationale for "colon cleansing" disproven, the idea fell into disrepute as a form of quackery, with a 2005 medical review stating that "there is no evidence to support this ill-conceived theory that has been long abandoned by the scientific community." Similarly, in response to claims that colon cleansing removes "toxins", Bennett Roth, a gastroenterologist at the University of California, stated that "there is absolutely no science to this whatsoever. There is no such thing as getting rid of quote-unquote 'toxins.' The colon was made to carry stool. This is total baloney." The preoccupation with such bowel management products has been described as a "quaint and amusing chapter in the history of weird medical beliefs." Nevertheless, interest in colonic "autointoxication" as a cause of illness, and in colonic irrigation as a cure, enjoyed a revival in alternative medicine at the end of the 20th century.
The term "mucoid plaque" was coined and popularized by naturopath and entrepreneur Richard Anderson, who sells a range of products that claim to cleanse the body of such purported plaques by causing them to be eliminated. Anderson describes a mucoid plaque as a rubbery, ropey, generally green gel-like mucus film that covers the epithelial cells of the hollow organs, particularly of the alimentary canal. Anderson also claims the plaque can impair digestion and the absorption of nutrients, hold pathogens, and cause illnesses such as diarrhea, bowel cancer, allergies and skin conditions. Based on these claims, he promotes efforts to remove the plaque, and sells a range of products to this end.
Though Anderson argues that his beliefs are backed by scientific research, his claims are primarily supported by anecdotal evidence rather than empirical data, and doctors have noted the absence of mucoid plaques. Anderson claims this is due to medical textbooks failing to cover the concept, which results in doctors not knowing what to look for.
Medical evaluation
Practicing physicians have dismissed the concept of mucoid plaque as a hoax and a "non-credible concept". A pathologist at the University of Texas School of Medicine addressed Anderson's claims directly, saying that he has "seen several thousand intestinal biopsies and have never seen any 'mucoid plaque.' This is a complete fabrication with no anatomic basis."
Another pathologist, Edward Friedlander, noted that, in his experience, he has never observed anything resembling a "toxic bowel settlement", and that some online photographs actually depict what he recognises as a blood clot. Commenting on claims that waste material can adhere to the colon, Douglas Pleskow, a gastroenterologist at Beth Israel Deaconess Medical Center, stated, "that is the urban legend. In reality, most people clear their GI tract within three days."
In a review of websites promoting products that claim to remove 'mucoid rope' or plaque from consumers' intestines, Howard Hochster of New York University wrote that these websites are "abundant, quasi-scientific, and unfortunately convincing to a biologically uneducated public." He noted that although such sites are entertaining, they are disturbing in that they promote a belief that has no basis in physiology.
Hochster also noted that a preparation marketed to remove mucoid plaque contains laxatives and bulky fibrous ingredients. Thus, the rope-like fecal material expelled from people who consume this product "certainly is a result of the figs and senna in this preparation," rather than any sort of pathologic 'plaque'. Other 'colon cleanser' products contain bentonite clay that, when ingested, would also result in production of bulky stools.
In many cases, customers purchase supplement products that are said to help the body excrete the so-called 'mucoid plaque'. The customer may consume a number of pills, and then within 12–48 hours, will pass a rope-like fecal material in their subsequent bowel movements. This fecal material is said to be the 'mucoid plaque'. However, analysis of supplements consumed by the customer shows that the active ingredient is very similar to that of clay used in clumping cat litter. This clay takes a negative mould of the large intestine which is then excreted during the customer's next bowel movement.
References
Alternative detoxification
Naturopathy
Digestive system
Pseudoscience | Mucoid plaque | Biology | 1,267 |
2,878,457 | https://en.wikipedia.org/wiki/Lambda%20Arae | Lambda Arae (λ Ara, λ Arae) is the Bayer designation for a star in the southern constellation of Ara. It is at a distance of from Earth. The apparent visual magnitude of this star is 4.77, making it bright enough to be seen with the naked eye.
The spectrum of this star matches a stellar classification of F4 V, which places it among the category of F-type main sequence stars. It shines with 4.7 times the luminosity of the Sun. The outer atmosphere is radiating this energy at an effective temperature of 6,495 K, giving it the yellow-white hue of an F-type star. There is some evidence that this may be a binary star system consisting of two stars with identical masses.
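As a rough consistency check (an illustrative sketch, not a figure from the article), the quoted luminosity and effective temperature can be combined through the Stefan–Boltzmann law, L ∝ R²T⁴, to give an implied radius; the solar effective temperature of about 5,772 K used below is an assumed reference value.

```python
# Implied radius from L ∝ R^2 T^4, using the article's quoted values.
T_SUN = 5772.0        # K, assumed solar effective temperature (reference value)
L = 4.7               # luminosity in solar units (from the article)
T = 6495.0            # effective temperature in K (from the article)

radius_solar = L**0.5 / (T / T_SUN)**2
print(round(radius_solar, 2), "solar radii")   # roughly 1.7
```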
Examination of Lambda Arae with the Spitzer Space Telescope shows an excess of infrared emission at a wavelength of 70 μm. This suggests it may be orbited by a disk of dust at a radius of more than 15 astronomical units.
References
External links
HR 6569
Image Lambda Arae
160032
Arae, Lambda
Ara (constellation)
F-type subgiants
086486
9597
6569
Durchmusterung objects | Lambda Arae | Astronomy | 243 |
1,140,043 | https://en.wikipedia.org/wiki/Squeeze%20mapping | In linear algebra, a squeeze mapping, also called a squeeze transformation, is a type of linear map that preserves Euclidean area of regions in the Cartesian plane, but is not a rotation or shear mapping.
For a fixed positive real number a, the mapping
(x, y) ↦ (ax, y/a)
is the squeeze mapping with parameter a. Since
{(u, v) : uv = constant}
is a hyperbola, if u = ax and v = y/a, then uv = xy and the points of the image of the squeeze mapping are on the same hyperbola as (x, y) is. For this reason it is natural to think of the squeeze mapping as a hyperbolic rotation, as did Émile Borel in 1914, by analogy with circular rotations, which preserve circles.
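A brief numerical illustration (a sketch in Python, not part of the article's text) of the two defining properties just described: points stay on their hyperbola xy = constant, and area is preserved.

```python
# Sketch: the squeeze mapping (x, y) -> (a*x, y/a) keeps each point on its
# hyperbola xy = constant and preserves the area of a region such as the unit square.
import numpy as np

def squeeze(points, a):
    """Apply the squeeze mapping with parameter a to an array of (x, y) points."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack((a * x, y / a))

a = 2.0
pts = np.array([[1.0, 1.0], [2.0, 0.5], [0.25, 4.0]])   # all satisfy x*y = 1
out = squeeze(pts, a)
print(np.allclose(out[:, 0] * out[:, 1], pts[:, 0] * pts[:, 1]))  # True: same hyperbola

# Area preservation: the unit square maps to a 2 x 1/2 rectangle of area 1.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
image = squeeze(square, a)
width = image[:, 0].max() - image[:, 0].min()
height = image[:, 1].max() - image[:, 1].min()
print(width * height)  # 1.0
```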
Logarithm and hyperbolic angle
The squeeze mapping sets the stage for development of the concept of logarithms. The problem of finding the area bounded by a hyperbola (such as xy = 1) is one of quadrature. The solution, found by Grégoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647, required the natural logarithm function, a new concept. Some insight into logarithms comes through hyperbolic sectors that are permuted by squeeze mappings while preserving their area. The area of a hyperbolic sector is taken as a measure of a hyperbolic angle associated with the sector. The hyperbolic angle concept is quite independent of the ordinary circular angle, but shares a property of invariance with it: whereas circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping. Both circular and hyperbolic angle generate invariant measures but with respect to different transformation groups. The hyperbolic functions, which take hyperbolic angle as argument, perform the role that circular functions play with the circular angle argument.
Group theory
In 1688, long before abstract group theory, the squeeze mapping was described by Euclid Speidell in the terms of the day: "From a Square and an infinite company of Oblongs on a Superficies, each Equal to that square, how a curve is begotten which shall have the same properties or affections of any Hyperbola inscribed within a Right Angled Cone."
If r and s are positive real numbers, the composition of their squeeze mappings is the squeeze mapping of their product. Therefore, the collection of squeeze mappings forms a one-parameter group isomorphic to the multiplicative group of positive real numbers. An additive view of this group arises from consideration of hyperbolic sectors and their hyperbolic angles.
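The group law can be checked numerically with 2×2 diagonal matrices (an illustrative sketch, not taken from the article): composing squeezes multiplies their parameters, and every squeeze has determinant one.

```python
# Sketch: squeeze mappings as matrices diag(r, 1/r); composition multiplies
# parameters, so r -> diag(r, 1/r) is an isomorphism from (R+, *) onto this group.
import numpy as np

def squeeze_matrix(r):
    return np.diag([r, 1.0 / r])

r, s = 2.0, 5.0
composed = squeeze_matrix(r) @ squeeze_matrix(s)
print(np.allclose(composed, squeeze_matrix(r * s)))        # True
print(np.isclose(np.linalg.det(squeeze_matrix(r)), 1.0))   # True: area-preserving
```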
From the point of view of the classical groups, the group of squeeze mappings is SO⁺(1,1), the identity component of the indefinite orthogonal group of 2×2 real matrices preserving the quadratic form u² − v². This is equivalent to preserving the form xy via the change of basis
x = u + v, y = u − v,
and corresponds geometrically to preserving hyperbolae. The perspective of the group of squeeze mappings as hyperbolic rotation is analogous to interpreting the group SO(2) (the connected component of the definite orthogonal group) preserving the quadratic form x² + y² as being circular rotations.
Note that the "SO⁺" notation corresponds to the fact that the reflections u ↦ −u and v ↦ −v
are not allowed, though they preserve the form (in terms of x and y these are x ↦ y, y ↦ x and x ↦ −y, y ↦ −x); the additional "⁺" in the hyperbolic case (as compared with the circular case) is necessary to specify the identity component because the group O(1,1) has 4 connected components, while the group O(2) has 2 components: SO(1,1) has 2 components, while SO(2) has only 1. The fact that the squeeze transforms preserve area and orientation corresponds to the inclusion of subgroups SO ⊂ SL – in this case SO⁺(1,1) ⊂ SL(2,ℝ) – of the subgroup of hyperbolic rotations in the special linear group of transforms preserving area and orientation (a volume form). In the language of Möbius transformations, the squeeze transformations are the hyperbolic elements in the classification of elements.
A geometric transformation is called conformal when it preserves angles. Hyperbolic angle is defined using area under y = 1/x. Since squeeze mappings preserve areas of transformed regions such as hyperbolic sectors, the angle measure of sectors is preserved. Thus squeeze mappings are conformal in the sense of preserving hyperbolic angle.
Applications
Here some applications are summarized with historic references.
Relativistic spacetime
Spacetime geometry is conventionally developed as follows: Select (0,0) for a "here and now" in a spacetime. Light radiant left and right through this central event tracks two lines in the spacetime, lines that can be used to give coordinates to events away from (0,0). Trajectories of lesser velocity track closer to the original timeline (0,t). Any such velocity can be viewed as a zero velocity under a squeeze mapping called a Lorentz boost. This insight follows from a study of split-complex number multiplications and the diagonal basis which corresponds to the pair of light lines.
Formally, a squeeze preserves the hyperbolic metric expressed in the form xy, or t² − x² in a different coordinate system. This application in the theory of relativity was noted in 1912 by Wilson and Lewis, by Werner Greub, and by Louis Kauffman. Furthermore, the squeeze mapping form of Lorentz transformations was used by Gustav Herglotz (1909/10) while discussing Born rigidity, and was popularized by Wolfgang Rindler in his textbook on relativity, who used it in his demonstration of their characteristic property.
The term squeeze transformation was used in this context in an article connecting the Lorentz group with Jones calculus in optics.
Corner flow
In fluid dynamics one of the fundamental motions of an incompressible flow involves bifurcation of a flow running up against an immovable wall.
Representing the wall by the axis y = 0 and taking the parameter r = exp(t) where t is time, then the squeeze mapping with parameter r applied to an initial fluid state produces a flow with bifurcation left and right of the axis x = 0. The same model gives fluid convergence when time is run backward. Indeed, the area of any hyperbolic sector is invariant under squeezing.
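A small numerical sketch of this picture (the velocity field (dx/dt, dy/dt) = (x, −y) is one simple way to realise r = exp(t); the specific numbers are illustrative only, not from the article):

```python
# Sketch: a fluid particle advected by (dx/dt, dy/dt) = (x, -y) follows the
# squeeze mapping (x, y) -> (x*exp(t), y*exp(-t)) with r = exp(t); it stays on
# its streamline xy = constant while approaching the wall y = 0.
import math

def advect(x0, y0, t):
    r = math.exp(t)              # squeeze parameter at time t
    return x0 * r, y0 / r

x0, y0 = 0.5, 2.0                 # initial particle position (x0*y0 = 1)
for t in (0.0, 0.5, 1.0, 2.0):
    x, y = advect(x0, y0, t)
    print(f"t={t}: x={x:.3f}, y={y:.3f}, xy={x*y:.3f}")   # xy stays 1.000
```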
For another approach to a flow with hyperbolic streamlines, see .
In 1989 Ottino described the "linear isochoric two-dimensional flow" as the velocity field
(vx, vy) = (G y, G K x),
where K lies in the interval [−1, 1]. The streamlines follow the curves
K x² − y² = constant,
so negative K corresponds to an ellipse and positive K to a hyperbola, with the rectangular case of the squeeze mapping corresponding to K = 1.
Stocker and Hosoi described their approach to corner flow as follows:
we suggest an alternative formulation to account for the corner-like geometry, based on the use of hyperbolic coordinates, which allows substantial analytical progress towards determination of the flow in a Plateau border and attached liquid threads. We consider a region of flow forming an angle of π/2 and delimited on the left and bottom by symmetry planes.
Stocker and Hosoi then recall Moffatt's consideration of "flow in a corner between rigid boundaries, induced by an arbitrary disturbance at a large distance." According to Stocker and Hosoi,
For a free fluid in a square corner, Moffatt's (antisymmetric) stream function ... [indicates] that hyperbolic coordinates are indeed the natural choice to describe these flows.
Bridge to transcendentals
The area-preserving property of squeeze mapping has an application in setting the foundation of the transcendental functions natural logarithm and its inverse the exponential function:
Definition: Sector(a,b) is the hyperbolic sector obtained with central rays to (a, 1/a) and (b, 1/b).
Lemma: If bc = ad, then there is a squeeze mapping that moves the sector(a,b) to sector(c,d).
Proof: Take parameter r = c/a so that (u,v) = (rx, y/r) takes (a, 1/a) to (c, 1/c) and (b, 1/b) to (d, 1/d).
Theorem (Gregoire de Saint-Vincent 1647) If bc = ad, then the quadrature of the hyperbola xy = 1 against the asymptote has equal areas between a and b compared to between c and d.
Proof: An argument adding and subtracting triangles of area ½, one triangle being {(0,0), (0,1), (1,1)}, shows the hyperbolic sector area is equal to the area along the asymptote. The theorem then follows from the lemma.
Theorem (Alphonse Antonio de Sarasa 1649) As area measured against the asymptote increases in arithmetic progression, the projections upon the asymptote increase in geometric sequence. Thus the areas form logarithms of the asymptote index.
For instance, for a standard position angle which runs from (1, 1) to (x, 1/x), one may ask "When is the hyperbolic angle equal to one?" The answer is the transcendental number x = e.
A squeeze with r = e moves the unit angle to one between (e, 1/e) and (e², 1/e²) which subtends a sector also of area one. The geometric progression
e, e², e³, ..., eⁿ, ...
corresponds to the asymptotic index achieved with each sum of areas
1, 2, 3, ..., n, ...
which is a prototypical arithmetic progression A + nd where A = 0 and d = 1.
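A short numerical check of this bridge to the logarithm (an illustrative sketch, not part of the article): the area under y = 1/x from a to b depends only on the ratio b/a, and the ratio e gives a hyperbolic angle of exactly one.

```python
# Sketch: midpoint-rule quadrature of the hyperbola y = 1/x.  Equal ratios give
# equal areas (Saint-Vincent), and the ratio e gives a sector of area 1.
import math

def area_under_hyperbola(a, b, n=100000):
    """Approximate the area under y = 1/x between a and b."""
    h = (b - a) / n
    return sum(1.0 / (a + (i + 0.5) * h) for i in range(n)) * h

print(area_under_hyperbola(1, 2), math.log(2))   # both about 0.6931
print(area_under_hyperbola(3, 6))                # same ratio 2 -> same area
print(area_under_hyperbola(1, math.e))           # about 1.0: unit hyperbolic angle
```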
Lie transform
Following Pierre Ossian Bonnet's (1867) investigations on surfaces of constant curvature, Sophus Lie (1879) found a way to derive new pseudospherical surfaces from a known one. Such surfaces satisfy the sine-Gordon equation
∂²φ/(∂u ∂v) = sin φ,
where (u, v) are asymptotic coordinates of the two principal tangent curves and φ their respective angle. Lie showed that if φ(u, v) is a solution to the sine-Gordon equation, then the following squeeze mapping (now known as the Lie transform) indicates other solutions of that equation:
φ(λu, v/λ), for λ ≠ 0.
Lie (1883) noticed its relation to two other transformations of pseudospherical surfaces: The Bäcklund transform (introduced by Albert Victor Bäcklund in 1883) can be seen as the combination of a Lie transform with a Bianchi transform (introduced by Luigi Bianchi in 1879.) Such transformations of pseudospherical surfaces were discussed in detail in the lectures on differential geometry by Gaston Darboux (1894), Luigi Bianchi (1894), or Luther Pfahler Eisenhart (1909).
It is known that the Lie transforms (or squeeze mappings) correspond to Lorentz boosts in terms of light-cone coordinates, as pointed out by Terng and Uhlenbeck (2000):
Sophus Lie observed that the SGE [Sinus-Gordon equation] is invariant under Lorentz transformations. In asymptotic coordinates, which correspond to light cone coordinates, a Lorentz transformation is .
This can be represented as the mapping (u, v) ↦ (k u, v/k), where k corresponds to the Doppler factor in Bondi k-calculus and η = ln k is the rapidity.
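A numerical sketch of this correspondence (the light-cone convention u = t − x, v = t + x is an assumption chosen here, not from the article): a squeeze with parameter k = e^η in light-cone coordinates reproduces a standard Lorentz boost and leaves t² − x² unchanged.

```python
# Sketch: boost of rapidity eta acts as the squeeze (u, v) -> (k*u, v/k)
# in light-cone coordinates u = t - x, v = t + x, with Doppler factor k = exp(eta).
import math

def boost(t, x, beta):
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return g * (t - beta * x), g * (x - beta * t)

beta = 0.6
eta = math.atanh(beta)            # rapidity
k = math.exp(eta)                 # Bondi k (Doppler) factor, equals 2 here

t, x = 3.0, 1.0
u, v = t - x, t + x
u2, v2 = k * u, v / k             # squeeze in light-cone coordinates
t2, x2 = (u2 + v2) / 2, (v2 - u2) / 2

print(boost(t, x, beta))          # same event coordinates as (t2, x2)
print((t2, x2))
print(math.isclose(t2**2 - x2**2, t**2 - x**2))   # True: interval preserved
```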
See also
Indefinite orthogonal group
Isochoric process
References
HSM Coxeter & SL Greitzer (1967) Geometry Revisited, Chapter 4 Transformations, A genealogy of transformation.
P. S. Modenov and A. S. Parkhomenko (1965) Geometric Transformations, volume one. See pages 104 to 106.
Affine geometry
Conformal mappings
Linear algebra
Articles containing proofs
Minkowski spacetime | Squeeze mapping | Mathematics | 2,332 |
73,561,616 | https://en.wikipedia.org/wiki/Wind%20setup | Wind setup, also known as wind effect or storm effect, refers to the rise in water level in seas, lakes, or other large bodies of water caused by winds pushing the water in a specific direction. As the wind moves across the water’s surface, it applies shear stress to the water, generating a wind-driven current. When this current encounters a shoreline, the water level increases due to the accumulation of water, which creates a hydrostatic counterforce that balances the shear force applied by the wind.
During storms, wind setup forms part of the overall storm surge. For example, in the Netherlands, wind setup during a storm surge can raise water levels by as much as 3 metres above normal tidal levels. In tropical regions, such as the Caribbean, wind setup during cyclones can elevate water levels by up to 5 metres. This phenomenon becomes especially significant when water is funnelled into shallow or narrow areas, leading to higher storm surges.
Examples of the effects of wind setup include Hurricanes Gamma and Delta in 2020, during which wind setup was a major factor when strong winds and atmospheric pressure drops caused higher-than-expected coastal flooding across the Yucatán Peninsula in Mexico. Similarly, in California’s Suisun Marsh, wind setup has been shown to be a significant factor affecting local water levels, with strong winds pushing water into levees, contributing to frequent breaches and flooding.
Observation
In lakes, wind setup often leads to noticeable fluctuations in water levels. This effect is particularly clear in lakes with well-regulated water levels, such as the IJsselmeer, where the relationship between wind speed, water depth, and fetch length can be accurately measured and observed.
At sea, however, wind setup is typically masked by other factors, such as tidal variations. To measure the wind setup effect in coastal areas, the (calculated) astronomical tide is subtracted from the observed water level. For instance, during the North Sea flood of 1953, the highest water level along the Dutch coast was recorded at 2.79 metres at the Vlissingen tidal station, while the highest wind setup—measuring 3.52 metres—was observed at Scheveningen.
The highest wind setup ever recorded in the Netherlands, reaching 3.63 metres, occurred in Dintelsas, Steenbergen during the 1953 flood. However, globally, tropical regions like the Gulf of Mexico and the Caribbean often experience even higher wind setups during hurricane events, underscoring the importance of this phenomenon in coastal and flood management strategies.
Calculation of wind setup
Based on the equilibrium between the shear stress due to the wind on the water and the hydrostatic back pressure, the following equation is used:
in which:
h = water depth
x = distance
u = wind speed
a constant, for which Ippen suggests a value of 3.3×10⁻⁶
the angle of the wind relative to the coast
g = acceleration of gravity
cw has a value between 0.8×10⁻³ and 3.0×10⁻³
Application at open coasts
For an open coast, the equation becomes:
in which
Δh = wind setup
F = fetch length; this is the distance over which the wind blows across the water
However, this formula is not always applicable, particularly when dealing with open coasts or varying water depths. In such cases, a more complex approach is needed, which involves solving the differential equation using a one- or two-dimensional grid. This method, combined with real-world data, is used in countries like the Netherlands to predict wind setup along the coast during potential storms.
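As an illustrative sketch (the article's formula is not reproduced in this copy, so the commonly used open-coast approximation setup ≈ constant · u² · F · cos(angle) / (g · h) is assumed here, with Ippen's value used only as a placeholder constant):

```python
# Sketch under stated assumptions: open-coast wind setup estimate.
import math

def wind_setup_open_coast(u, h, F, phi_deg, kappa=3.3e-6, g=9.81):
    """Estimate wind setup (m) for wind speed u (m/s), water depth h (m),
    fetch F (m) and wind angle phi relative to the coast (degrees).
    kappa is the dimensionless constant mentioned above (placeholder value)."""
    return kappa * (u**2 / (g * h)) * F * math.cos(math.radians(phi_deg))

# Example: a 25 m/s wind blowing straight onshore over 100 km of 20 m deep water.
print(round(wind_setup_open_coast(u=25, h=20, F=100_000, phi_deg=0), 2), "m")  # ~1 m
```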
Application at (shallow) lakes and confined small-fetch areas
To calculate the wind setup in a lake, the following solution for the differential equation is used:
In 1966 the Delta Works Committee recommended using a value of 3.8×10⁻⁶ for this constant under Dutch conditions. However, an analysis of measurement data from the IJsselmeer between 2002 and 2013 led to a more reliable value, specifically 2.2×10⁻⁶.
This study also found that the formula underestimated wind setup at higher wind speeds. As a result, it has been suggested to increase the exponent of the wind speed from 2 to 3 and to further adjust the constant to 1.7×10⁻⁷. This modified formula can predict the wind setup on the IJsselmeer with an accuracy of approximately 15 centimetres.
For confined environments such as marshes or small fetches, a simplified empirical model for wind setup has been proposed by Algra et al. (2023). This model was designed to estimate wind setup in the Suisun Marsh, where fetch lengths are smaller and shallow water depth conditions apply. The equation is expressed as:
where:
Δh = wind setup (water level rise),
u = wind speed measured 10 metres above the water surface,
g = acceleration due to gravity,
h = average water depth,
F = fetch length,
together with a constant (typically derived empirically) and the angle between the wind direction and the fetch.
This equation assumes that the fetch is small and simplifies the wind setup process by making the wind setup directly proportional to the square of the wind speed. In their 2023 analysis of Van Sickle Island, Algra et al. found this model effective for environments with limited fetch and shallow depth, where the more complex approaches used for open coasts are unnecessary. Unlike the more detailed differential equation formulations used for larger open coasts or lakes, the Van Sickle model provides a practical approximation for confined areas where wind setup may still be significant but where spatial constraints simplify the overall water movement dynamics.
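A minimal sketch of the simplified small-fetch relation described above (the published constant is not reproduced in this copy, so the 1966 Delta Works Committee value is used below purely as a stand-in, and the functional form setup = c · u² · F · cos(angle) / (g · h) is an assumption):

```python
# Sketch under stated assumptions: simplified wind-setup estimate for a
# confined, shallow area with a small fetch.
import math

G = 9.81  # gravitational acceleration, m/s^2

def wind_setup_small_fetch(u10, h, F, phi_deg, c=3.8e-6):
    """u10: wind speed 10 m above the water (m/s), h: average depth (m),
    F: fetch (m), phi_deg: angle between wind and fetch (degrees),
    c: empirical constant (placeholder value, not the published one)."""
    return c * (u10**2 / (G * h)) * F * math.cos(math.radians(phi_deg))

# Example: a 20 m/s wind aligned with a 5 km fetch over 2 m of water.
print(round(wind_setup_small_fetch(u10=20, h=2, F=5_000, phi_deg=0), 2), "m")  # ~0.4 m
```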
Note
Wind setup should not be mistaken for wave run-up, which refers to the height which a wave reaches on a slope, or wave setup which is the increase in water level caused by breaking waves.
See also
Storm surge
Coastal flooding
Coastal Engineering
References
Coastal engineering
Civil engineering
Hydraulic engineering
Physical oceanography
Water waves | Wind setup | Physics,Chemistry,Engineering,Environmental_science | 1,172 |
4,320,269 | https://en.wikipedia.org/wiki/Grapefruit%20mercaptan | Grapefruit mercaptan is a natural organic compound found in grapefruit. It is a monoterpenoid that contains a thiol (also known as a mercaptan) functional group. Structurally a hydroxy group of terpineol is replaced by the thiol in grapefruit mercaptan, so it also called thioterpineol. Volatile thiols typically have very strong, often unpleasant odors that can be detected by humans in very low concentrations. Grapefruit mercaptan has a very potent, but not unpleasant, odor, and it is the chemical constituent primarily responsible for the aroma of grapefruit. This characteristic aroma is a property of only the R enantiomer.
Pure grapefruit mercaptan, or citrus-derived oils rich in grapefruit mercaptan, are sometimes used in perfumery and the flavor industry to impart citrus aromas and flavors. However, both industries actively seek substitutes for grapefruit mercaptan for use as a grapefruit flavorant, since its decomposition products are often highly disagreeable to the human sense of smell.
The detection threshold for the (+)-(R) enantiomer of grapefruit mercaptan is 2×10⁻⁵ ppb, or equivalently a mass fraction of 2×10⁻¹⁴. This corresponds to being able to detect 2×10⁻⁵ mg in one metric ton of water, one of the lowest detection thresholds ever recorded for a naturally occurring compound.
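A quick arithmetic check of the equivalence of these figures (an illustrative sketch, not from the article):

```python
# 2e-5 ppb expressed as a mass fraction, and the corresponding mass in one tonne of water.
ppb = 1e-9
fraction = 2e-5 * ppb                  # 2e-14, as stated
tonne_in_mg = 1_000_000_000            # 1 metric ton = 1e9 mg
print(fraction)                        # 2e-14
print(fraction * tonne_in_mg, "mg")    # 2e-05 mg in one metric ton of water
```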
See also
Nootkatone, another aroma compound in grapefruit
Terpineol, where a hydroxyl is in place of the thiol
References
Thiols
Flavors
Perfume ingredients
Monoterpenes
Cyclohexenes | Grapefruit mercaptan | Chemistry | 355 |
26,789,633 | https://en.wikipedia.org/wiki/Cadet%20%28genealogy%29 | In genealogy, a cadet is a younger son, as opposed to the firstborn heir.
Etymology
The word has been recorded in English since 1634, originally for a young son, identical to the French, which is itself derived from the Gascon Occitan (spoken in Gascony in southwest France) capdet "captain, chief", in turn from the Late Latin capitellum, the diminutive of Latin caput "head" (hence also chief).
Younger sons from Gascon families were apparently commonly sent to the French court to serve as officers; as a rule, non-heirs from the European nobility sought careers in the military or the clergy.
Usage
As an adjective, "cadet" is used to signify a junior branch of a family. Thus, the Orléans line was a cadet branch of the Bourbon family, which itself was a cadet branch of the House of Capet.
For the status as such, the noun cadency exists, as in the heraldic term mark of cadency, for a feature which distinguishes a cadet son's coat of arms from the father's which is passed on unaltered only to the (usually firstborn) heir.
Military has been the traditional career choice of the nobility throughout the centuries, and it has been customary that the firstborn son inherited the title, lands and possessions, while the younger sons of a noble family went to the military, often to be trained as officers. Hence the meaning "cadet branch" for a junior branch of a family and the term "cadet" for an officer trainee.
Genealogy
Kinship and descent | Cadet (genealogy) | Biology | 318 |
11,799,986 | https://en.wikipedia.org/wiki/Colletotrichum%20mangenotii | Colletotrichum mangenotii is a fungal plant pathogen.
References
mangenotii
Fungal plant pathogens and diseases
Fungi described in 1952
Fungus species | Colletotrichum mangenotii | Biology | 34 |
12,037,194 | https://en.wikipedia.org/wiki/Adrenosterone | Adrenosterone, also known as Reichstein's substance G , as well as 11-ketoandrostenedione (11-KA4), 11-oxoandrostenedione (11-OXO), and androst-4-ene-3,11,17-trione, is a steroid hormone with an extremely weak androgenic effect, and an intermediate/prohormone of 11-ketotestosterone. It was first isolated in 1936 from the adrenal cortex by Tadeus Reichstein at the Pharmaceutical Institute in the University of Basel. Originally, adrenosterone was called Reichstein's substance G. Adrenosterone occurs in trace amounts in humans as well as most mammals and in larger amounts in fish, where it is a precursor to the primary androgen, 11-ketotestosterone.
Adrenosterone has been sold since 2007 as a dietary supplement for fat loss and muscle gain. It is thought to be a competitive, selective 11βHSD1 inhibitor; this enzyme is responsible for the activation of cortisol from cortisone, so inhibiting it is thought to prevent muscle breakdown and to account for the majority of the supplement's claimed effects.
See also
11β-Hydroxyandrostenedione
11-Ketodihydrotestosterone
References
Anabolic–androgenic steroids
Androstanes
Hormones of the suprarenal cortex
Sex hormones
Steroid hormones
Triketones | Adrenosterone | Biology | 311 |
28,246,199 | https://en.wikipedia.org/wiki/BlackBerry%20Torch%209800 | The BlackBerry Torch 9800 is a 2010 model in the BlackBerry line of smartphones. It combines a physical QWERTY keyboard with a sliding multi-touch screen display and runs on BlackBerry OS 6. Introduced on August 3, 2010, the phone became available exclusively on AT&T on August 12, 2010.
The device looks similar to existing BlackBerry devices but, thanks to the sliding keyboard, features a bigger 3.2-inch 480×360 screen (the same resolution as the BlackBerry Storm and BlackBerry Storm 2), and these features give the BlackBerry Torch 9800 a distinctive look. The software is seen by most to be an improvement over the previous version. The device also features far greater social network integration, a powerful universal search feature, and a WebKit browser comparable to those found on iOS and Android devices.
On August 12, 2011, the updated Torch 9810 was released on Bell Mobility and Telus Mobility. The updated version includes a faster processor and more memory, as well as the new BlackBerry OS 7.
History
Speculation about the Torch began in April 2010 when RIM CEO Mike Lazaridis introduced the BlackBerry 6 operating system during his keynote address at WES 2010. A new touchscreen device was widely anticipated as the OS 6 update seemed to be touch/gesture based. Images of a BlackBerry device prototype with a touchscreen and slide-out QWERTY keyboard started emerging in late spring and early summer of 2010. The device was tentatively named the "Bold 9800" or simply the "9800 Slider". The 9800 eventually was officially named the "Torch" by RIM during its August announcement. It can be assumed that the device name was derived from Torch Mobile, the company that RIM purchased in 2009 to aid in the development of its WebKit-based browser.
Official Specifications
Size:
111mm x 62mm x 14.6mm (closed)
148mm x 62mm x 14.6mm (open)
Weight: 5.68 ounces (161.1 grams)
Processor: Marvell PXA940 running at 624 MHz
Image System Processor (ISP): STmicroelectronics STV0987
Display: 3.2 inch HVGA+(480x360) Synaptics controlled touch screen
Camera: 5.0 MP camera (JPEG encoding) with flash, 2x digital zoom, image stabilization and auto-focus
Video recorder: up to 480p resolution
Battery: 1300 mAhr removable/rechargeable lithium-ion cell
Battery life: 18 days (GSM) or 14 days (UMTS) standby; 5.5 hours (GSM) or 5.8 hours (UMTS) talk
GPS using A-GPS with extended ephemeris and maps
Input: trackpad, touch screen with on-screen keyboard (QWERTY and SureType), slide-out backlit QWERTY keyboard
Video format support: MPEG4, H.263, H.264, WMV3
Audio format support: MP3, AMR-NB, AAC-LC, AAC+, eAAC+, WMA, WMV, FLAC, Ogg Vorbis
Ringtones: MIDI, MP3
Connectivity: 3G; Bluetooth 2.1 + EDR; 802.11 b/g/n Wi-Fi; 3.5mm stereo headset, Micro-USB
Networks:
Tri-band 3G UMTS/HSDPA networks: 2100/1900/850/800 MHz
Quad-band GSM/GPRS/EDGE networks: 850/900/1800/1900 MHz
Reception
The BlackBerry Torch 9800 was marketed as "the best BlackBerry ever". However, upon release of the device's technical details, critics, such as PC World's Ginny Mies, were not impressed with the specifications which lagged behind new-generation devices such as the iPhone 4 and Droid X. One reviewer did not find enough difference over earlier BlackBerrys to recommend the device to new users. Key complaints were the 624 MHz processor included in the Torch, whereas the HTC Evo 4G and Motorola Droid X (among others of their class) featured a 1 GHz processor. The Torch's 3.2 inch screen with 480x360 screen resolution was also criticized as being smaller than iPhone 4's 3.5 inch screen with 960x640 resolution and the Motorola Droid X's 4.3 inch with 854 x 480. The Torch has a screen that is the same size as the Storm and Storm 2 with a resolution the same as the Bold 9700, Bold 9650 and Tour 9630. Some critics also noted the lack of HD video recording and the lack of a front-facing camera.
CNET's Bonnie Cha found improvements such as the better user interface, universal search and the much improved browser and multimedia experience. On the other hand, she also found that the smartphone can be sluggish, and could stand for some hardware upgrades, although her review was generally positive.
RIM has stated that the processor in the Torch is "of a newer generation"; when boot times of the Torch and the BlackBerry Bold 9700 (both running OS 6) are compared, the Torch boots up 1.8 times faster than the Bold with the "older processor".
UBM TechInsights confirmed the claims of Research In Motion by performing a teardown and hardware analysis, discovering a PXA940 processor. This processor is built on a 45 nm process, as opposed to the processor found in the BlackBerry Bold 1 and 2. The 45 nm process keeps heat production and power usage down, and, as previously stated in this article, the processor brings significant performance gains.
Anandtech praised the screen of the phone as being "one of the most readable outside that I've encountered in a while, with text and webpages being easy to make out even in intense daylight. Alongside the iPhone 4, the difference is pretty immediate, especially in how good white appears on the Torch compared to the iPhone 4." Anandtech also noted that the contrast ratio was exceptionally good.
Crackberry's Kevin Michaluk gave the BlackBerry Torch an overwhelmingly positive review stating that the Torch is a "worthy device for any smartphone owner". Crackberry praised OS 6.0, the form factor, touchscreen, and the Webkit browser. However, Michaluk criticized the lack of HD video recording and lack of OpenGL support for 3D graphics.
Sales
Initial sales of the BlackBerry Torch were slow to moderate, with AT&T Wireless Operations president expressing some disappointment in the sales stating that he was "surprised there hasn't been a faster adoption" of the smart phone by the public. Estimates put sales at somewhere between 100,000 and 150,000 devices sold during the first week of release. However, sales reportedly improved in the months following the release and RIM shipped a record amount of smart phones in the final quarter of 2010. The BlackBerry Torch placed 6th place on Wirefly's annual top ten selling smart phones list for 2010, selling more than Motorola's Droid 2 and Samsung's Galaxy S Fascinate, but behind devices like the Evo 4G and Droid Incredible.
References
Personal digital assistants
Information appliances
Touch 9800
Mobile phones with an integrated hardware keyboard
Touchscreen portable media players
Mobile phones introduced in 2010
Slider phones | BlackBerry Torch 9800 | Technology | 1,502 |
72,477,621 | https://en.wikipedia.org/wiki/Diversity%20%28mathematics%29 | In mathematics, a diversity is a generalization of the concept of metric space. The concept was introduced in 2012 by Bryant and Tupper,
who call diversities "a form of multi-way metric". The concept finds application in nonlinear analysis.
Given a set X, let Fin(X) be the set of finite subsets of X.
A diversity is a pair (X, δ) consisting of a set X and a function δ : Fin(X) → ℝ satisfying
(D1) δ(A) ≥ 0, with δ(A) = 0 if and only if |A| ≤ 1,
and
(D2) if B ≠ ∅ then δ(A ∪ C) ≤ δ(A ∪ B) + δ(B ∪ C).
Bryant and Tupper observe that these axioms imply monotonicity; that is, if A ⊆ B, then δ(A) ≤ δ(B). They state that the term "diversity" comes from the appearance of a special case of their definition in work on phylogenetic and ecological diversities. They give the following examples:
Diameter diversity
Let (X, d) be a metric space. Setting δ(A) = max{d(a, b) : a, b ∈ A} for all finite A ⊆ X defines a diversity.
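A numerical spot check of axiom (D2) for the diameter diversity on random points in the plane (an illustrative sketch, not from the paper):

```python
# Sketch: diameter diversity on planar point sets, with a random spot check of
# delta(A ∪ C) <= delta(A ∪ B) + delta(B ∪ C) for non-empty B.
import itertools, math, random

def diam(points):
    """Diameter diversity: 0 on empty/singleton sets, else the largest pairwise distance."""
    return max((math.dist(p, q) for p, q in itertools.combinations(points, 2)), default=0.0)

random.seed(0)
cloud = [(random.random(), random.random()) for _ in range(12)]
for _ in range(1000):
    A, B, C = (random.sample(cloud, random.randint(1, 4)) for _ in range(3))
    assert diam(A + C) <= diam(A + B) + diam(B + C) + 1e-12
print("axiom (D2) held on all sampled triples")
```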
L diversity
For all finite A ⊆ ℝⁿ, if we define δ(A) = Σᵢ max{|aᵢ − bᵢ| : a, b ∈ A}, then (ℝⁿ, δ) is a diversity.
Phylogenetic diversity
Let T be a phylogenetic tree with taxon set X. For each finite A ⊆ X, define δ(A)
as the length of the smallest subtree of T connecting the taxa in A. Then (X, δ) is a (phylogenetic) diversity.
Steiner diversity
Let (X, d) be a metric space. For each finite A ⊆ X, let δ(A) denote
the minimum length of a Steiner tree within X connecting the elements of A. Then (X, δ) is a
diversity.
Truncated diversity
Let (X, δ) be a diversity. For all finite A ⊆ X define
δ⁽ᵏ⁾(A) = max{δ(B) : B ⊆ A, |B| ≤ k}. Then if k ≥ 2, (X, δ⁽ᵏ⁾) is a diversity.
Clique diversity
If G is a graph, and δ(A) is defined for any finite A as the largest clique of A, then (V(G), δ) is a diversity.
References
Metric spaces | Diversity (mathematics) | Mathematics | 303 |
51,080,253 | https://en.wikipedia.org/wiki/Toilet%20plume | A toilet plume is the cloud-like dispersal of potentially infectious microscopic sewage particles & water vapor as a result of flushing a toilet. Day to day use of a toilet by healthy individuals is considered to be of a lower health risk. However this dynamic completely changes if an individual is fighting an illness and currently shedding out large quantities of an infectious virulent pathogen (virus or bacteria) in their urine, feces or vomitus. Aerosolization of the toilet bowl contents allows these particles to be inhaled or land on surfaces. There is evidence that specific pathogens such as norovirus or SARS coronavirus could potentially be spread by toilet aerosols. It has been hypothesized that dispersal of pathogens may be reduced by closing the toilet lid before flushing, and by using toilets with lower flush energy. A 2024 study empirically built on this theory, illustrating that the viruses that toilet plume contains still spreads out the gaps in the seat onto the walls and concentrating on the surrounding floors.
Effects on disease transmission
There is evidence that toilet aerosols generated by flushing can be a vector for diseases that involve acute gastroenteritis with the shedding of large numbers of pathogens through feces and vomit. For example, some epidemiological studies demonstrate transmission of norovirus in passenger airplanes and ships, and of SARS coronavirus through a contaminated building sewage system, from flushing contaminated toilets and aerosolizing pathogens, rather than through other routes. The feces and vomit of infected people can contain high concentrations of viruses and bacteria, many of which are known to survive on surfaces for days, weeks or even months. Toilets have been shown to continue producing contaminated plumes over multiple successive flushes. Some other pathogens speculatively identified as being of potential concern for these reasons include gram-positive MRSA, Mycobacterium tuberculosis, and the pandemic H1N1/09 virus commonly known as "swine flu".
There are more than 70 years of experimental evidence on disease transmission by toilet aerosols. Toilet aerosols have been found to contain norovirus, SARS coronavirus, Salmonella and many other pathogens. The combination of cleaning and disinfecting surfaces is usually effective at removing contamination, although some pathogens such as norovirus and Salmonella have an apparent resistance to these techniques.
Mechanism
Aerosol droplets produced by flushing the toilet can mix with the air of the room. Larger droplets will settle on surfaces or objects, such as a countertop or toothbrush, before they can dry, creating fomites (contaminated objects that can transmit infection), and can contaminate surfaces such as the toilet seat and handle for hours, which can then be contacted by the hands of the next user of that toilet. Smaller aerosol particles can become droplet nuclei as a result of evaporation of the water in the droplet; these have negligible settling velocity and are carried by natural air currents. Disease transmission through droplet nuclei is a concern for many pathogens, because they are excreted in feces or vomit. The critical size dividing these dispersal modes depends on the evaporation rate and the vertical distance between the toilet and the surface in question.
Experiments to test bioaerosol production usually involve seeding a toilet with bacteria or virus particles, or fluorescent microparticles, and then testing for their presence on nearby surfaces and in the air, after varying amounts of time. The amount of bioaerosol varies with the type of flush toilet. Older wash-down toilet designs produce more bioaerosol than modern siphoning toilets. Among modern toilets, bioaerosol production increases as qualitative flush energy increases, from low-flush gravity-flow toilets common in residences, to pressure-assisted toilets, to vigorous flushometer toilets often found in public restrooms.
Lowering the toilet lid helps prevent the dispersion of large droplets; however, a January 2024 study authored by Gerba showed that viruses still escape in the toilet plume even with the lid down. The study recommended discouraging the use of lidless toilets, which contradicts the US Uniform Plumbing Code specifications for public toilets.
History
Experiments on the bioaerosol content of toilet plumes were first performed in the 1950s. A 1975 study by Charles P. Gerba popularized the concept of disease transmission through toilet plumes. The term "toilet plume" was in use before 1999.
References
Toilets
Hygiene | Toilet plume | Biology | 902 |
1,274,203 | https://en.wikipedia.org/wiki/Alpha%20Pegasi | Alpha Pegasi (α Pegasi, abbreviated Alpha Peg, α Peg), formally named Markab , is the third-brightest star in the constellation of Pegasus and one of the four stars in the asterism known as the Great Square of Pegasus.
Properties
Alpha Pegasi has a stellar classification of A0 IV, indicating that it is an A-type subgiant star that has exhausted the hydrogen at its core and has evolved beyond the main sequence. Its spectrum has also been classified as B9V and B9.5III.
It is rotating rapidly, with a projected rotational velocity of 130 km/s giving a lower bound on the azimuthal velocity along the star's equator. The effective temperature of the photosphere is about 10,000 K and the star has expanded to nearly five times the radius of the Sun, emitting 165 times as much energy as the sun.
Nomenclature
α Pegasi (Latinised to Alpha Pegasi) is the star's Bayer designation. It bore the traditional name Markab (or Marchab), which derived from an Arabic word مركب markab "the saddle of the horse", or is a mistranscription of Mankib, which itself comes from an Arabic phrase منكب الفرس Mankib al-Faras "(the Star of) the Shoulder (of the Constellation) of the Horse" for Beta Pegasi.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Markab for this star.
In Chinese, the name meaning Encampment refers to an asterism consisting of α Pegasi and β Pegasi. Consequently, the Chinese name for α Pegasi itself is "the First Star of Encampment".
References
Pegasi, Alpha
A-type subgiants
Pegasus (constellation)
Markab
Pegasi, 54
113963
8781
218045
BD+14 4926
Suspected variables | Alpha Pegasi | Astronomy | 448 |
24,529,994 | https://en.wikipedia.org/wiki/Tasimeter | The tasimeter, or microtasimeter, or measurer of infinitesimal pressure, is a device designed by Thomas Edison to measure infrared radiation. In 1878, Samuel Langley, Henry Draper, and other American scientists needed a highly sensitive instrument that could be used to measure minute temperature changes in heat emitted from the Sun's corona during the July 29 solar eclipse, due to occur along the Rocky Mountains. To satisfy those needs Edison devised a microtasimeter employing a carbon button.
Description of operation
The value of the instrument lies in its ability to detect small variations of temperature. This is accomplished indirectly. The change of temperature causes expansion or contraction of a rod of vulcanite, which changes the resistance of an electric circuit by varying the pressure it exerts upon a carbon-button included in the circuit. During the total eclipse of the sun in 1878, it successfully demonstrated the existence of heat in the corona. It is also of service in ascertaining the relative expansion of substances due to a rise of temperature.
The functional parts are represented in the partial cross section, which shows its construction and mode of operation. The substance whose expansion is to be measured is shown at A. It is firmly clamped at B, its lower end fitting into a slot in the metal plate, M, which rests upon the carbon-button. The latter is in an electric circuit, which includes also a delicate galvanometer. Any variation in the length of the rod changes the pressure upon the carbon, and alters the resistance of the circuit. This causes a deflection of the galvanometer-needle—a movement in one direction denoting expansion of A, while an opposite motion signifies contraction. To avoid any deflection which might arise from change in strength of battery, the tasimeter is inserted in an arm of a Wheatstone bridge.
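A generic illustration of the Wheatstone-bridge principle mentioned above (the resistance and voltage values are invented for the sketch and are not Edison's): at balance the galvanometer branch sees zero voltage regardless of battery strength, while a small change in one arm, such as the carbon button, produces a deflection.

```python
# Sketch: the galvanometer reads the difference of two voltage dividers.
def bridge_output(v_batt, r1, r2, r3, r4):
    """Voltage across the galvanometer branch of a Wheatstone bridge."""
    return v_batt * (r2 / (r1 + r2) - r4 / (r3 + r4))

R = 100.0                                   # all four arms equal: balanced bridge
for v_batt in (1.0, 1.5, 2.0):              # battery strength varies...
    print(bridge_output(v_batt, R, R, R, R))        # ...output stays 0.0
print(bridge_output(1.5, R, R + 0.5, R, R))         # small change in one arm -> deflection
```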
In order to ascertain the exact amount of expansion in decimals of an inch, the screw S, seen in front of the dial, is turned until the deflection previously caused by the change of temperature is reproduced. The screw works a second screw, causing the rod to ascend or descend, and the exact distance through which the rod moves is indicated by the needle, N, on the dial.
The instrument can also be advantageously used to measure changes in the humidity of the atmosphere. In this case the strip of vulcanite is replaced by one of gelatin, which changes its volume by absorbing moisture.
Other uses
1878 was a time when great advances were being made in electric arc lighting, and during the solar eclipse expedition, which Edison accompanied, the men discussed the practicality of subdividing the intense arc lights so that electricity could be used for lighting in the same fashion as with small, individual gas burners. The basic problem seemed to be to keep the burner, or bulb, from being consumed by preventing it from overheating. Edison thought he would be able to solve this by fashioning a microtasimeter-like device to control the current. He announced that he would invent a safe, mild, and inexpensive electric light that would replace the gaslight.
Abandonment
Edison declined to patent the device, saying it was only of interest to scientists, and allowed companies in London and Philadelphia to manufacture tasimeters royalty-free. The scientists who tested it found it too erratic to be of use for quantitative measurement purposes, and it was soon abandoned.
See also
Bolometer
Infrared telescope
References
External links
Eclipse Vicissitudes:Thomas Edison and the Chickens, J. Donald Fernie, American Scientist.
Edison the scientist, John A. Eddy, Applied Optics (may require subscription).
Observational astronomy
American inventions
19th-century inventions | Tasimeter | Astronomy | 750 |
26,051,183 | https://en.wikipedia.org/wiki/Automation%20Anywhere | Automation Anywhere is an American global software company that develops robotic process automation (RPA) software.
Founded in 2003, the company is headquartered in San Jose, California.
History
Automation Anywhere was originally founded as Tethys Solutions, LLC in San Jose, by Mihir Shukla, Neeti Mehta Shukla, Ankur Kothari and Rushabh Parmani. The company rebranded itself as Automation Anywhere, Inc. in 2010.
As of early 2021, the company reported some 2,800 client firms around the world. Customers cited in 2020 included Volkswagen, Whirlpool, and other organizations.
The company's 2,100+ partner relationships include collaborations with Microsoft, Google, and Amazon Web Services to advance intelligent automation, and with Salesforce, to help its customers automate their front-office business processes. In early 2021, the company had 110+ customers who are also partners.
Between 2018 and 2019, Automation Anywhere received a total of $840 million in Series A and Series B investments at a post-money valuation of $6.9 billion. In 2018 the company announced a total of Series A investments of $550 million from General Atlantic, Goldman Sachs, NEA, World Innovation Lab, SoftBank Investment Advisers, and Workday Ventures. In late 2019, a Series B round, led by Salesforce Ventures, raised $290 million.
In 2019, the company acquired Klevops, a privately owned company based in Paris that works in the finance, banking and telecommunications industries.
In December 2021, Automation Anywhere announced it intends to acquire process discovery startup FortressIQ.
Automation Anywhere's co-founder, Neeti Mehta Shukla, received the Women's Entrepreneurship Day Organization’s Technology Pioneer Award at the United Nations in 2022.
References
Software testing tools
Automation software
Software companies based in California
Web scraping
Software companies of the United States
2003 establishments in the United States
2003 establishments in California
Software companies established in 2003
Companies established in 2003 | Automation Anywhere | Engineering | 395 |
46,445,426 | https://en.wikipedia.org/wiki/Samsung%20Electronics | Samsung Electronics Co., Ltd. (SEC; stylised as SΛMSUNG; ) is a South Korean multinational major appliance and consumer electronics corporation founded on 13 January 1969 and headquartered in Yeongtong District, Suwon, South Korea. It is currently the pinnacle of the Samsung chaebol, accounting for 70% of the group's revenue in 2012, and has played a key role in the group's corporate governance due to cross ownership. It is majority-owned by foreign investors.
Samsung Electronics is the world's second-largest technology company by revenue, and its market capitalization stood at US$520.65 billion, the 12th largest in the world. It has been the world's largest manufacturer of smartphones since 2011, most notably through its Samsung Galaxy brand and its flagship Galaxy S series, and has also been the largest television manufacturer since 2006; both product lines include related software and services such as Samsung Pay and TV Plus. The company pioneered the phablet form factor with the Galaxy Note family. Samsung is also a major vendor of washing machines, refrigerators, computer monitors and soundbars.
Samsung Electronics is a major manufacturer of electronic components such as lithium-ion batteries, semiconductors, image sensors, camera modules, and displays for clients such as Apple, Sony, HTC, and Nokia. It is also the world's largest semiconductor memory manufacturer and from 2017 to 2018, was the largest semiconductor company in the world, briefly dethroning Intel, the decades-long champion. Samsung Electronics has assembly plants and sales networks in 76 countries and employs more than 260,000 people. In 2024, Samsung Electronics lost its #1 position in global smartphone shipments and semiconductor sales.
History
1969–1987: early years
Samsung Electric Industries was established as an industrial part of Samsung Group on 13 January 1969 in Suwon, South Korea. At the time, Samsung Group was known to the South Korean public as a trading company specializing in fertilizers and sweeteners. Despite lacking technology and resources, and falling short even of its domestic competitors, Samsung Group improved its footing in the manufacturing industry by cooperating with Japanese companies, a decision that led to a significant amount of anti-Japanese public outcry and a strong backlash from competitors who feared the outright subordination of the industry to Japan. The strategy was able to take off only after the government and Samsung declared that the company would exclusively focus on exports. Toshio Iue, the founder of Sanyo, played a role as an advisor to Lee Byung-chul, Samsung's founder, who was a novice in the electronics business. In December of the same year, Samsung Electric established a joint venture named Samsung-Sanyo Electric with Sanyo and Sumitomo Corporation. This is the direct predecessor of today's Samsung Electronics.
The joint venture's early products were electronic and electrical appliances including televisions, calculators, refrigerators, air conditioners, and washing machines. In 1970, Samsung established the joint venture Samsung-NEC with Japan's NEC Corporation and Sumitomo Corporation to manufacture home appliances and audiovisual devices. Samsung-NEC later became Samsung SDI, the group's display and battery business unit. In 1973, Samsung and Sanyo created Samsung-Sanyo Parts, the predecessor of Samsung Electro-Mechanics. By 1981, Samsung Electric had manufactured over 10 million black-and-white televisions.
In 1974, Samsung Group expanded into the semiconductor business by acquiring Korea Semiconductor, which was on the verge of bankruptcy while building one of the first chip-making facilities in the country at the time. Soon after, Korea Telecommunications, an electronic switching system producer and a Samsung Group company, took over the semiconductor business and became Samsung Semiconductor & Communications.
In February 1983, Lee made an announcement later dubbed the "Tokyo declaration", in which he declared that Samsung intended to become a dynamic random-access memory (DRAM) vendor. One year later, Samsung announced that it had successfully developed a 64 kb DRAM, reducing the technological gap between companies from first-world countries and the young electronics maker from more than a decade to approximately four years. In the process, Samsung used technologies imported from Micron Technology of the U.S. for the development of DRAM, and from Sharp Corporation of Japan for its SRAM and ROM. In 1988, Samsung Electric Industries merged with Samsung Semiconductor & Communications to form Samsung Electronics; until then the two had been separate, non-rival companies that had been in merger talks for some time before finally combining.
In the 1980s and early 1990s, Samsung sold personal computers under the Leading Technology brand. However, the equipment was manufactured by Samsung, and the FCC filings from this period typically refer to Samsung products.
1988–1995: consumer struggles
In 1988, Samsung Electronics launched its first mobile phone in the South Korean market. Sales were initially poor, and by the early 1990s, Motorola held a market share of over 60 percent in the country's mobile phone market compared to just 10 percent for Samsung. Samsung's mobile phone division also struggled with poor quality and inferior products until the mid-1990s, and exit from the sector was a frequent topic of discussion within the company.
1995–2008: component manufacturing and design strategy
Lee Kun-Hee decided that Samsung needed to change its strategy. The company shelved the production of many under-selling product lines and instead pursued a process of designing and manufacturing components and investing in new technologies for other companies. In addition, Samsung outlined a 10-year plan to shrug off its image as a "budget brand" and to challenge Sony as the world's largest consumer electronics manufacturer. It was hoped that, in this way, Samsung would gain an understanding of how products are made and give a technological lead sometime in the future. This patient vertical integration strategy of manufacturing components has borne fruit for Samsung in the late 2000s.
A complementary brand leadership strategy was also initiated by chairman Lee when he declared 1996 to be the "Year of Design Revolution" at Samsung. His objective was to build Samsung design capabilities as a competitive asset and transform the company into a global brand-design leader. However, this effort required major changes in corporate culture, processes, and systems. By integrating a comprehensive design management system and strategy into the corporate culture, Samsung was successful in developing an award-winning product design portfolio by the late 1990s, resulting in significant brand equity growth.
As Samsung shifted away from consumer markets, the company devised a plan to sponsor major sporting events. One such sponsorship was for the 1998 Winter Olympics held in Nagano, Japan.
As a chaebol, Samsung Group wielded wealth that allowed the company to invest and develop new technology rather than build products at a level that would not have a detrimental impact on Samsung's finances.
Samsung has had a number of technological breakthroughs, particularly in the field of memory, that are now commonplace in most electronic products. These include the world's first 64 Mb DRAM in 1992, 256 Mb DRAM in 1994, and 1 Gb DRAM in 1996. In 2004, Samsung developed the world's first 8 Gb NAND flash memory chip, and a deal to supply Apple with memory chips was sealed in 2005. Samsung remains a key supplier of Apple components as of October 2013, manufacturing the A7 processors inside the iPhone 5S model.
2008–present
From 2000 to 2003, Samsung maintained net earnings growth of over 5%, even as 16 of South Korea's 30 largest companies collapsed following a financial crisis. In 2005, Samsung surpassed its Japanese rival, Sony, for the first time, becoming the 20th most popular global consumer brand according to Interbrand rankings. In 2007, Samsung overtook Motorola to become the world's second-largest mobile phone manufacturer. By 2009, Samsung achieved $117bn in revenue, overtaking Hewlett-Packard to become the world's largest technology company by sales.
However, Samsung faced legal challenges in 2009 and 2010 when the U.S. and the EU fined the company—along with other memory chip manufacturers—for involvement in a price-fixing scheme that occurred between 1999 and 2002. In 2010, Samsung was granted immunity from prosecution by the EU for acting as an informant during the investigation into the LCD price-fixing cartel, leading to the implication of other companies, including LG Display and HannStar.
Despite its continuous growth, Samsung has been portrayed as financially insecure. In 2010, after returning from temporary retirement, chairman Lee Kun-hee expressed concern about the company's future, stating, "Samsung Electronics' future is not guaranteed, as most of our flagship products will be obsolete within ten years." Samsung has since set ambitious goals, aiming for $400bn in annual revenue within a decade, with 24 global research and development centers contributing to these efforts.
In 2011, Samsung sold its hard disk drive (HDD) operations to Seagate Technology for $1.4 billion in cash and stock. In 2012, Kwon Oh-hyun was appointed CEO of Samsung Electronics. He announced his resignation in 2017, citing an "unprecedented crisis." His departure signaled the transition to a leadership structure with three co-CEOs, which lasted until 2021, when Kyung Kye-Hyun and Han Jong-hee were appointed as new CEOs after a company-wide reorganization.
In 2013, Samsung made headlines by introducing the Galaxy S4, a new entry in its Galaxy smartphone series, and successfully tested enhanced 5G technology. From 2014 onward, Samsung expanded its presence in technology markets. In April 2014, Samsung launched the Galaxy S5, followed by the Galaxy S6 and Galaxy S6 Edge in 2015, both of which featured a significant redesign and introduced the concept of curved screens. Also in 2014, Samsung entered the rapidly growing Internet of Things (IoT) market by acquiring the smart home company SmartThings.
In 2016, Samsung faced one of its most publicized crises when its Galaxy Note 7 devices began to overheat and catch fire due to defective batteries. This led to a global recall of the product and a temporary halt in production. Despite the setback, Samsung recovered by launching successful products such as the Galaxy S8 and Galaxy Note 8 in 2017, which helped restore consumer confidence. During this time, Samsung continued its push into new markets. In November 2016, it announced its acquisition of Harman International Industries for $8bn, marking a major step into the automotive technology sector, particularly in connected car solutions.
In 2017, Samsung reported record profits driven by its semiconductor business, particularly memory chips. By 2018, the company had solidified its position as one of the leading global manufacturers of semiconductors, overtaking Intel as the world's largest semiconductor supplier. In 2021, Samsung announced plans to invest $17bn to build a new semiconductor manufacturing facility in Taylor, Texas, part of its strategy to expand its chip production capabilities amid the global semiconductor shortage.
At CES 2024, Samsung demonstrated Ballie, an AI-powered home robot designed to assist with daily tasks, monitor pets, and integrate with smart home appliances.
Samsung has been working to meet sustainability goals and reduce its environmental impact. In 2023, the company announced a partnership with British Gas to integrate its services into Samsung's SmartThings app, helping users reduce energy consumption through smarter home management. As part of the partnership, British Gas began offering Samsung's energy-efficient heat pumps to support the UK's 2050 net-zero goals. In 2024, Samsung continues to be a leader in consumer electronics, semiconductors, and AI development, shaping technology through its innovations in smart homes, connected devices, and sustainable energy solutions.
In 2025, Samsung Electronics became the largest shareholder of South Korea's Rainbow Robotics.
Corporate governance
As of January 2024
Global reputation
In mid-November 2021, Samsung Electronics was ranked second in the 'Best Global Brands' ranking by YouGov, a market research firm, after placing fourth in the 2020 ranking.
In June 2022, PricewaterhouseCoopers ranked Samsung Electronics 22nd on their global top 100 companies by market capitalization. The company slid seven notches from the 2021 rankings due to global inflation, the Russian invasion of Ukraine, and global monetary tightening.
Logo history
Corporate affairs
Business trends
The key trends for Samsung Electronics are (as of the financial year ending 31 December):
Operations
The company focuses on four areas: digital media, semiconductors, telecommunication networks, and LCD digital appliances.
The digital-media business area covers computer devices such as laptop computers and laser printers; digital displays such as televisions and computer monitors; consumer entertainment devices such as DVD players, MP3 players, and digital camcorders; home appliances such as refrigerators, air conditioners, air purifiers, washing machines, microwave ovens, vacuum cleaners and robot vacuum cleaners.
The semiconductor-business area includes semiconductor chips such as SDRAM, SRAM, NAND flash memory; smart cards; mobile application development, mobile application processors; mobile TV receivers; RF transceivers; CMOS Image sensors, Smart Card IC, MP3 IC, DVD/Blu-ray Disc/HD DVD Player SOC, and multi-chip package (MCP).
The telecommunication-network-business area includes multi-service DSLAMs and fax machines; cellular devices such as mobile phones, PDA phones, and hybrid devices called mobile intelligent terminals (MITs); and satellite receivers.
The LCD business area focuses on producing TFT-LCD and organic light-emitting diode (OLED) panels for laptops, desktop monitors, and televisions.
Samsung Print was established in 2009 as a separate entity to focus on B2B sales and released a broad range of multifunctional devices, printers, and more. In 2017, Samsung sold its printing business to HP.
Products
Samsung Electronics produces LCD and LED panels, mobile phones, memory chips, NAND flash, solid-state drives, televisions, digital cinema screens, laptops and many more products. The company previously produced hard-drives and printers.
Samsung consistently invests in innovation. In 2021, the World Intellectual Property Organization (WIPO)'s annual World Intellectual Property Indicators report ranked Samsung's number of patent applications published under the PCT System as 2nd in the world, with 3,093 patent applications being published during 2020. This position is up from their previous ranking as 3rd in 2019 with 2,334 applications.
LCD and OLED panels
By 2004 Samsung was the world's-largest manufacturer of OLEDs, with a 40 percent market share worldwide and as of 2018 has a 98% share of the global AMOLED market. The company generated $100.2 million out of the total $475 million revenues in the global OLED market in 2006. As of 2006, it held more than 600 American patents and more than 2,800 international patents, making it the largest owner of AMOLED technology patents.
Samsung's current AMOLED smartphones use its Super AMOLED trademark, with the Samsung Wave S8500 and Samsung i9000 Galaxy S being launched in June 2010. In January 2011, it announced its Super AMOLED Plus displays – which offer several advances over the older Super AMOLED displays – real stripe matrix (50 percent more sub pixels), thinner form factor, brighter image and an 18 percent reduction in energy consumption.
In October 2007, Samsung introduced a ten-millimeter-thick, 40-inch LCD television panel, followed in October 2008 by the world's first 7.9-mm panel. Samsung developed panels for 24-inch LCD monitors (3.5 mm) and 12.1-inch laptops (1.64 mm). In 2009, Samsung succeeded in developing a panel for forty-inch LED televisions with a thickness of 3.9 millimeters (0.15 inch). Dubbed the "Needle Slim", the panel is as thick (or thin) as two coins put together. This is about a twelfth of the thickness of a conventional LCD panel, which is approximately 50 millimeters (1.97 inches).
While reducing the thickness substantially, the company maintained the performance of previous models, including Full HD 1080p resolution, 120 Hz refresh rate, and 5000:1 contrast ratio. On 6 September 2013, Samsung launched its 55-inch curved OLED TV (model KE55S9C) in the United Kingdom with John Lewis.
In October 2013, Samsung disseminated a press release for its curved display technology with the Galaxy Round smartphone model. The press release described the product as the "world's first commercialized full HD Super AMOLED flexible display". The manufacturer explains that users can check information such as time and battery life when the home screen is off, and can receive information from the screen by tilting the device.
Mobile and smart phones
Samsung's mobile phone business began with a car phone in 1984, while its first handheld mobile phone, the SH-100, was made in 1988. It expanded to other markets during the 1990s. Its first smartphone was the Samsung SPH-i300 in 2001. During the early 2000s Samsung popularised the clamshell ("flip phone") design, and the SGH-T100 was the first ever "true color" mobile phone and the firm's first to sell over 10 million handsets. In the mid-2000s the SGH-D500 popularised the slider form factor, and later slider products such as the E250 were hits. In 2006 Samsung's X820, with a depth of 6.9 mm, was the thinnest phone, and for many years its successor the U100 would remain the thinnest at just 5.9 mm. In 2007 it launched the slate-style touchscreen phone F700, which would precede its increasingly relevant touch phones such as the Tocco and Omnia. Samsung overtook the declining Motorola to become the world's second-largest mobile phone maker during 2007.
Presently, Samsung's flagship mobile handset line is the Samsung Galaxy S series of smartphones, which many consider a direct competitor of the Apple iPhone. It was initially launched in Singapore, Malaysia and South Korea in June 2010, followed by the United States in July. It sold more than one million units within the first 45 days on sale in the United States.
While many other handset manufacturers focused on one or two operating systems, Samsung for a time used several of them: Symbian, Windows Phone, Linux-based LiMo, and Samsung's proprietary TouchWiz, Bada and Tizen. By 2013 Samsung had dropped all operating systems except Android and Windows Phone. That year Samsung released at least 43 Android phones or tablets and two Windows Phones.
At the end of the third quarter of 2010, the company had surpassed the 70 million unit mark in shipped phones, giving it a global market share of 22 percent, trailing Nokia by 12 percent. Overall, the company sold 280 million mobile phones in 2010, corresponding to a market share of 20.2 percent. The company overtook Apple in worldwide smartphone sales during the third quarter 2011, with a total market share of 23.8 percent, compared to Apple's 14.6 percent share. Samsung became the world's largest smartphone manufacturer in 2012, with the sales of 95 million in the first quarter.
During the third quarter of 2013, Samsung's smartphone sales improved in emerging markets such as India and the Middle East, where cheaper handsets were most popular. As of October 2013, the company offers 40 smartphone models on its US website.
In 2019, Samsung announced that it had ended production of mobile phones in China, due to a lack of Chinese demand. As of 2019, Samsung employs over 200,000 people in the Hanoi area of Vietnam to produce smartphones, while outsourcing some manufacturing to China and manufacturing large portions of its phones in India.
In May 2022, Samsung Electronics announced the company had expanded the Samsung Knox enterprise mobile security platform with the introduction of Samsung Knox Guard. It allows companies to quickly make phones unusable to potentially deter theft and reduce risk of fraud and data breaches.
Semiconductors
Samsung Electronics has been the world's largest memory chip manufacturer since 1993, and the largest semiconductor company since 2017. Samsung Semiconductor division manufactures various semiconductor devices, including semiconductor nodes, MOSFET transistors, integrated circuit chips, and semiconductor memory.
Since the early 1990s, Samsung Electronics has commercially introduced a number of new memory technologies. They commercially introduced SDRAM (synchronous dynamic random-access memory) in 1992, and later DDR SDRAM (double data rate SDRAM) and GDDR (graphics DDR) SGRAM (synchronous graphics RAM) in 1998. In 2009, Samsung started mass-producing 30 nm-class NAND flash memory, and in 2010 succeeded in mass-producing 30 nm class DRAM and 20 nm class NAND flash, both of which were for the first time in the world. They also commercially introduced TLC (triple-level cell) NAND flash memory in 2010, V-NAND flash in 2013, LPDDR4 SDRAM in 2013, HBM2 in 2016, GDDR6 in January 2018, and LPDDR5 in June 2018.
Another area in which the company has had significant business for years is the foundry segment. It began investing in the foundry business in 2006 and positioned it as one of the strategic pillars for semiconductor growth. Since then, Samsung has been a leader in semiconductor device fabrication. Samsung began mass production of a 20 nm class semiconductor manufacturing process in 2010, followed by a 10 nm class FinFET process in 2013 and 7 nm FinFET nodes in 2018. It also began production of the first 5 nm nodes in late 2018, with plans to introduce 3 nm GAAFET nodes by 2021.
According to market research firm Gartner, during the second quarter of 2010, Samsung Electronics took the top position in the DRAM segment due to brisk sales of the item on the world market. Gartner analysts said in their report, "Samsung cemented its leading position by taking a 35-percent market share. All the other suppliers had minimal change in their shares." The company took the top slot in the ranking, followed by Hynix, Elpida, and Micron, said Gartner.
In 2010, market researcher IC Insights predicted that Samsung would become the world's-biggest semiconductor chip supplier by 2014, surpassing Intel. For the ten-year period from 1999 to 2009, Samsung's compound annual growth rate in semiconductor revenues was 13.5 percent, compared with 3.4 percent for Intel. For 2015, IC Insights and Gartner announced that Samsung was the fourth largest chip manufacturer in the world. Samsung eventually surpassed Intel to become the world's largest semiconductor company in 2017.
By the second quarter of 2020 the company had planned to start mass production of 5 nm chips using Extreme ultraviolet lithography (EUV) and aimed to become a leader in EUV process use.
On 30 November 2021, it was announced that the company would be producing new auto chips for Volkswagen vehicles. The logic chips will be used in entertainment systems to provide 5G telecommunications to meet the increased demand for high-definition video while traveling.
The Xi'an China facility, which has been running since 2014, produces approximately 40 percent of Samsung Electronics NAND flash memory chips.
Solid-state drives
In 2016, Samsung also launched to market a 15.36 TB SSD with a price tag of US$10,000 using a SAS interface, using a 2.5-inch form factor but with the thickness of 3.5-inch drives. This was the first time a commercially available SSD had more capacity than the largest currently available HDD. In 2018, Samsung introduced to market a 30.72 TB SSD using a SAS interface. Samsung introduced an M.2 NVMe SSD with read speeds of 3500 MB/s and write speeds of 3300 MB/s in the same year. In 2019, Samsung introduced SSDs capable of 8 GB/s sequential read and write speeds and 1.5 million IOPS, capable of moving data from damaged chips to undamaged chips, to allow the SSD to continue working normally, albeit at a lower capacity.
Samsung's consumer SSD lineup currently consists of the 980 PRO, 970 PRO, 970 EVO plus, 970 EVO, 960 PRO, 960 EVO, 950 PRO, 860 QVO, 860 PRO, 860 EVO, 850 PRO, 850 EVO, and the 750 EVO. The SSDs models beginning with a 9 use an NVM Express interface and the rest use a Serial ATA interface. Samsung also produces consumer portable SSDs using a USB-C USB 3.1 Gen 2 connector. The drives offer read speeds of 1,050 MB/s and write speeds of 1,000 MB/s and are available as 500 GB, 1 TB and 2 TB models.
Like many other SSD producers, Samsung's SSDs use NAND flash memory produced by Samsung Electronics.
Hard-drives
In the area of storage media, in 2009 Samsung achieved a ten percent world market share, driven by the introduction of a new hard disk drive capable of storing 250 GB per 2.5-inch disk. In 2010, the company started marketing the 320 GB-per-disk HDD, the largest in the industry. In addition, it was focusing more on selling external hard disk drives. Following financial losses, the hard disk division was sold to Seagate in 2011 in return for a 9.6% ownership stake in Seagate.
Televisions
In 2009, Samsung sold around 31 million flat-panel televisions, enabling it to maintain the world's largest market share for a fourth consecutive year.
Samsung launched its first full HD 3D LED television in March 2010. Samsung had showcased the product at the 2010 International Consumer Electronics Show (CES 2010) held in Las Vegas.
Samsung sold more than one million 3D televisions within six months of their launch. This figure is close to what many market researchers forecast for the year's worldwide 3D television sales (1.23 million units). It also debuted the 3D Home Theater (HT-C6950W), which allows the user to enjoy 3D images and surround sound at the same time. With the launch of the 3D Home Theater, Samsung became the first company in the industry to have a full line of 3D offerings, including a 3D television, 3D Blu-ray player, 3D content, and 3D glasses.
In 2007, Samsung introduced the "Internet TV", enabling the viewer to receive information from the Internet while at the same time watching conventional television programming. Samsung later developed "Smart LED TV" (now renamed to "Samsung Smart TV"), which additionally supports downloaded smart television apps. In 2008, the company launched the Power Infolink service, followed in 2009 by a whole new Internet@TV. In 2010, it started marketing the 3D television while unveiling the upgraded Internet@TV 2010, which offers free (or for-fee) download of applications from its Samsung Apps Store, in addition to existing services such as news, weather, stock market, YouTube videos, and movies.
Samsung Apps offers for-fee premium services in a few countries including Korea and the United States. The services will be custom-tailored for each region. Samsung plans to offer family-oriented applications such as health care programs and digital picture frames as well as games. Samsung's range of smart TVs include the apps ITV Player and motion controlled games such as Angry Birds. Since 2015, Samsung's proprietary FAST streaming service Samsung TV Plus was pre-installed to the smart TVs.
Monitors
The company started as a budget display monitor brand in the 1980s, producing cathode ray tube (CRT) monitors for computers, from which it then evolved. By 1989, Samsung had become the world's largest monitor manufacturer.
From the 1990s into the 2000s, Samsung produced LCD monitors using TFT technology, continuing to emphasize the budget market while also beginning to cater to the mid-range and premium markets through joint ventures with brands such as NEC and Sony. As it grew and became more advanced, Samsung later acquired the joint-venture corporations, which became the current Samsung OLED and S-LCD Corporation respectively.
Tizen
As of 2015, Samsung smart televisions and smart monitors run an operating system customized from the open-source Linux-based Tizen OS. Given Samsung's high market share in the smart television market, approximately 20% of smart televisions sold worldwide in 2018 run Tizen.
In 2019, Samsung announced that it would bring the Apple TV app (formerly the iTunes Movies and TV Shows app) and AirPlay 2 support to its 2019 and 2018 smart TVs via a firmware update.
Odyssey
Samsung's Odyssey gaming monitors are designed for professional gamers and gaming enthusiasts. As of 2022, the Odyssey range consists of 4 main series, each with different resolutions, refresh rates and aspect ratios.
At the CES 2022, Samsung showed the Odyssey Neo G8, the world's first 4K monitor with a refresh rate of 240 Hz. It features a 32-inch mini LED 1000R curved display with 1,196 local dimming zones that supports HDR10+ with a peak brightness of up to 2,000 nits, and is G-Sync- and FreeSync-certified. It was released on 6 June 2022, at an MSRP of $1,500.
Printers
In the past, Samsung produced printers for both consumers and business use, including mono-laser printers, color laser printers, multifunction printers, and enterprise-use high-speed digital multi-function printer models. They exited the printer business and sold their printer division to HP in Fall 2017. In 2010, the company introduced the world's smallest mono-laser printer ML-1660 and color laser multifunction printer CLX-3185.
Speakers
In 2017, Samsung acquired Harman International. Harman makes audio products under many brand names such as AKG, AMX, Becker, Crown, Harman Kardon, Infinity, JBL, Lexicon, dbx, DigiTech, Mark Levinson, Martin, Revel, Soundcraft, Studer, Arcam, Bang & Olufsen and BSS Audio.
Cameras
Samsung has introduced several models of digital cameras and camcorders including the WB550 camera, the ST550 dual-LCD-mounted camera, and the HMX-H106 (64 GB SSD-mounted full HD camcorder). In 2014, the company took the second place in the mirrorless camera segment. Since then, the company has focused more on higher-priced items. In 2010, the company launched the NX10, the next-generation interchangeable lens camera.
Other
Samsung entered the MP3 player (digital audio player, DAP) market in 1999 with its Yepp line. In the initial years the company struggled to gain a foothold because of emerging Korean startups iRiver, Cowon and Mpio. However by 2006, it had gained a significant share in the domestic market as well as Russia and parts of the Middle East, South East Asia and Europe. It was also starting to increase penetration in the U.S. (albeit significantly lower than the market leader, Apple). Samsung launched the world's-smallest DivX MP3 player, the R1, in 2009.
In 2014, the company announced that it was exiting the laptop market in Europe.
In 2015, Samsung announced a proposal for a constellation of 4,600 satellites orbiting Earth that could bring 200 gigabytes per month of internet data to "each of the world's 5 billion people". The proposal has not yet advanced to full development. If built, such a constellation would compete with previously announced satellite constellations under development by OneWeb and SpaceX.
On 13 July 2017, an LED screen for digital cinema developed by Samsung Electronics with GDC Technology Limited was publicly demonstrated on one screen at Lotte Cinema World Tower in Seoul.
Stores
Samsung runs Samsung Experience Store retail locations throughout the world. These locations primarily sell Samsung Galaxy devices, though they can feature other Samsung-owned brands as well.
Korea
Samsung has various service stores throughout all of South Korea, which have showcases of various Samsung products available for purchase, and also have repair centers for those items. It also has stores dedicated to the installation of large household appliances such as TVs, dishwashers, and refrigerators. It also has stores just for the sale and repair of its memory products, such as SSDs. These stores do not feature Samsung's own Samsung Experience Store name and branding.
Management and board of directors
In December 2010, Samsung switched its management system from a single CEO-system under Choi Gee-sung to a two-person management team with Choi Gee-sung, CEO and vice chairman, and Lee Jae-Yong, chief operating officer and president. In June 2012, Samsung appointed Kwon Oh-Hyun as CEO of the company. Samsung also reorganized its overseas marketing bases in line with changes in the market, including a combined Britain/Continental Europe regional subsidiary, and a combined China/Taiwan regional subsidiary.
In 2012, Samsung appointed director of mobile products, J. K. Shin, to president/CEO of Samsung Electronics for Mobile Consumer Products.
The company added a new digital imaging business division in 2010 and now consists of eight divisions: the new digital imaging division plus the existing display, IT solutions, consumer electronics, wireless, networking, semiconductor, and LCD divisions.
It merged consumer electronics and air conditioners in 2010 under the consumer electronics business division. The set-top boxes business was merged with the Visual Display Business division.
The company's 2023 reorganization was as follows: Among the eight divisions, the network division and the digital imaging division experienced new appointments, while the remaining divisions were maintained in accordance with their results.
Executive Chairman: Lee Jae-yong
Vice chairman and co-CEO of Samsung Electronics' device experience division: Han Jong Hee
Co-CEO of the device solutions division: Kyung Kye-hyun
Vice chairman and head of the Future Business Planning division: Jun Young-hyun
The following are the names of the board of directors' members:
Ownership
Around 44% of Samsung Electronics' shares are held by the general public, around 38% by institutions, and around 4% by insiders. The largest shareholders in early 2024 were:
Samsung Life Insurance (7.6%)
National Pension Service (6.46%)
BlackRock (4.83%)
Samsung C&T Corporation (4.4%)
The Vanguard Group (3.03%)
Norges Bank (1.76%)
Hong Ra-hee (1.45%)
Lee Jae-yong (1.45%)
Samsung Fire & Marine Insurance (1.31%)
Fidelity Investments (1.27%)
JP Morgan Asset Management (1%)
Samsung Asset Management (0.91%)
Lee Boo-jin (0.78%)
Capital Research and Management Company (0.77%)
Lee Seo-hyun (0.7%)
Market share for major products
Major clients
Relationship with Apple Inc.
Despite recent litigation activity, Samsung and Apple have been described as "frenemies" who share a love-hate relationship. Samsung is a major supplier for Apple, first providing memory for the early iPod devices in 2005, and Apple is a key customer for Samsung: in 2012, its component sales to Apple were thought to be worth around $8 billion in revenue to Samsung. Apple CEO Tim Cook originally opposed litigation against Samsung, wary of the company's critical role in Apple's component supply chain.
In April 2011, Apple Inc. announced that it was suing Samsung over the design of its Galaxy range of mobile phones. The lawsuit was filed on 15 April 2011 and alleged that Samsung infringed on Apple's trademarks and patents for the iPhone and iPad. Samsung issued a counterclaim against Apple for patent infringement. In August 2011, at the Regional Court of Düsseldorf, Apple was granted a preliminary injunction against the sale and marketing of the Samsung Galaxy Tab 10.1 across the whole of Europe excluding the Netherlands. The ban was temporarily lifted in the European Union, with the exclusion of Germany, while it was investigated whether or not the original injunction was appropriate.
On 31 August 2012, the Tokyo District Court ruled Samsung Electronics' mobile devices did not violate an Apple patent. The case only addressed Apple's patent that allows mobile devices and personal computers to synchronize or share data with each other and is not comparable with the U.S. court case ruled on 24 August. On 18 October 2012, the U.K. High Court ruled that Samsung did not infringe Apple's design patents. Apple was forced to issue a court-ordered apology to Samsung on its official U.K. website.
Relationship with Best Buy Co, Inc.
Best Buy and Samsung joined together to create the Samsung Experience Shop, a store that allows customers to test the company's products and get training in mobile products they already own. By summer 2013, more than 1,400 Best Buy and Best Buy Mobile stores had established a Samsung Experience Shop. About 460 square feet of space is dedicated to each shop, placed near Best Buy's entrance with its sign visible from any part of the store. The purpose of the Samsung Experience Shop is to make Samsung's products, i.e. the Galaxy, more accessible to customers.
The first Samsung Experience Shops began appearing across Best Buy locations in the United States in May 2013. In May 2014, Best Buy announced its plans to add 500 new Samsung Entertainment Experience Shops. While the previous Samsung Experience locations focused primarily on showcasing and providing support for Samsung's Galaxy smartphones, cameras, and tablets, these new locations will showcase and support the company's home theater products.
Unlike the Samsung Experience Shop, the Samsung Entertainment Experience will be run by Samsung-trained Best Buy associates. The new centers were expected to be completed in the U.S. by January 2015.
Design
In the early 1990s, Samsung began considering the importance of physical design in its products. When chairman Lee declared 1996 'The Year of Design Revolution', a comprehensive global design program was initiated with the goal of design being a strategic asset and competitive advantage for the company. Located in the company's high-rise headquarters in Gangnam (south of Seoul) the corporate design center includes more than 900 full-time designers. In 1971 there were only a few designers in the whole company, whose number rose to 1,600 by 2015. In addition to the corporate design center in Seoul, there are design centers located in Tokyo, San Francisco and London.
The company overhauls its design over a two-year cycle. For the first year, it scrutinizes design trends of the world, followed by product strategies. It then maps out new design plans during the second year.
Since 2006, it has won as many as 210 awards from international design institutions, including the iF (International Forum) and IDEA design awards. Working with partners, Samsung won in eight categories of the 2009 IDEA awards, the most of any company that year.
In the 2010 iF Material Awards, the company won the Gold Award for five of its products including the external hard disk drive. The iF Material Awards are given by the International Forum Design GmbH of Hannover, a design award for design materials and process technologies. In 2010, the German company selected a total of 42 products in the areas of home appliance, furniture, and industrial design. Samsung won the awards in five categories including external hard disk, full-touch screen phone, "side-by-side" refrigerator, compact digital camera, and laser printer toner.
Criticism and controversies
Environmental record
All Samsung mobile phones and MP3 players introduced on the market after April 2010 are free from polyvinyl chloride (PVC) and brominated flame retardants (BFRs).
The company is listed in Greenpeace's Guide to Greener Electronics, which rates electronics companies on policies and practices to reduce their impact on the climate, produce greener products, and make their operations more sustainable. In November 2011, Samsung was ranked seventh out of 15 leading electronics manufacturers with a score of 4.1/10. In the newly re-launched guide, Samsung moved down two places (occupying fifth position in October 2010), but scored maximum points for providing verified data and its greenhouse gas emissions. It also scored well for its Sustainable Operations, with the guide praising its relatively good e-waste take-back programme and information. However, the company was criticized for not setting an ambitious target to increase its use of renewable energy and for belonging to a trade association which has commented against energy efficiency standards.
In June 2004, Samsung was one of the first major electronics companies to publicly commit to eliminate PVC and BFRs from new models of all their products. However, the company failed to meet its deadlines to be PVC- and BFRs-free, and published new phase out dates. In March 2010, Greenpeace activists protested at the company's Benelux headquarters for what they called Samsung's "broken promises".
The company has been recognized as one of the global top-ten companies in the Carbon Disclosure Leadership Index (CDLI), and was the only Asian company among the top ten. In addition, the company is listed in the Dow Jones Sustainability Index (DJSI).
The company's achievement ratio of products approaching the Global Ecolabel level ("Good Eco-Products" within the company) is 11 percentage points above the 2010 goal (80 percent). In the first half of 2010, Samsung earned the Global Ecolabel for its 2,134 models, thereby becoming the world's number-one company in terms of the number of products meeting Global Ecolabel standards.
The company is also improving its efforts to recover and recycle electronic waste. The amount of waste salvaged across 60 countries during 2009 reached 240,000 tons. The "Samsung Recycling Direct" program, the company's voluntary recycling program under way in the United States, was expanded to Canada.
In 2008, the company was praised for its recycling effort by the U.S. advocacy group Electronics Take Back Coalition as the "best eco-friendly recycling program". In 2023, the U.S. Environmental Protection Agency (EPA) awarded the company its 10th consecutive Sustainable Excellence Award in the manufacturer's category.
Litigation and safety issues
Worker safety
Many employees working in Samsung's semiconductor facilities have developed various forms of cancers. Initially, Samsung denied being responsible for the illnesses. Although Samsung is known to disfavor trade unions, these sick workers organized in the group SHARPS (Supporters for the Health And Rights of People in the Semiconductor Industry). The crowdfunded film Another Promise was produced in 2013 to depict the fight for compensation of the victims, as well as the documentary The Empire of Shame. In May 2014, Samsung offered an apology and compensation to workers who became ill. The company subsequently did not follow all the recommendations of a specially appointed mediation committee, paid several families outside of a scheme to be agreed on and required them to drop all further charges, prompting SHARPS to continue legal and public action. The quarrel was mostly resolved upon a public apology issued by Samsung in November 2018.
DRAM price fixing
In December 2010, the European Commission fined six LCD panel producers, including Samsung, a total of €648 million for operating as a cartel. The company received a full reduction of the potential fine for being the first firm to assist EU anti-trust authorities.
On 19 October 2011, Samsung was fined €145.73 million for being part of a price cartel of ten companies for DRAM that lasted from 1 July 1998 to 15 June 2002. Like most of the other members of the cartel, the company received a 10% reduction for acknowledging the facts to investigators. Samsung had to pay 90% of its share of the settlement, while Micron avoided payment entirely as a result of having initially revealed the case to investigators.
In Canada, the price fixing was first investigated in 2002; a recession began that year and the price fixing ended. In 2014, following the EU's success, the Canadian government quietly reopened the case. Sufficient evidence was found and presented to Samsung and two other manufacturers during a class-action lawsuit hearing. The companies reached a $120 million settlement, with $40 million paid as a fine and $80 million to be paid back to Canadian citizens who purchased a computer, printer, MP3 player, gaming console or camera between April 1999 and June 2002.
Apple lawsuit
On 15 April 2011, Apple sued Samsung in the United States District Court for the Northern District of California, alleging that several of Samsung's Android phones and tablets, including the Nexus S, Epic 4G, Galaxy S 4G, and Galaxy Tab, infringed on Apple's intellectual property: its patents, trademarks, user interface and style. Apple's complaint included specific federal claims for patent infringement, false designation of origin, unfair competition, and trademark infringement, as well as state-level claims for unfair competition, common law trademark infringement, and unjust enrichment.
On 24 August 2012, the jury returned a verdict largely favorable to Apple. It found that Samsung had willfully infringed on Apple's design and utility patents, and had also diluted Apple's trade dresses related to the iPhone. The jury awarded Apple $1.049 billion in damages and Samsung zero damages in its countersuit. The jury found that Samsung infringed Apple's patents on iPhone's "Bounce-Back Effect" (US Patent No.7,469,381), "On-screen Navigation" (US Patent No.7,844,915), and "Tap To Zoom" (US Patent No.7,864,163), and design patents that cover iPhone's features such as the "home button, rounded corners and tapered edges" (US D593087) and "On-Screen Icons" (US D604305).
Product safety
Despite the popularity of Samsung's phones, numerous explosions have been reported. A Swiss teenager was left with second- and third-degree burns on her thigh after her Galaxy S3 exploded, and two more Galaxy S3 explosions were reported in Switzerland and Ireland. A South Korean student's Galaxy S2 battery exploded in 2012.
Samsung's Galaxy S4 also led to several accidents. A house in Hong Kong was allegedly set on fire by an S4 in July 2013, followed by minor S4 burn incidents in Pakistan and Russia. A minor fire was also reported in Newbury, United Kingdom in October 2013.
Some users of the phone have also reported swelling batteries and overheating; Samsung has offered affected customers new batteries, free of charge. In December 2013, a Canadian uploaded a YouTube video describing his S4 combusting. Samsung then asked the uploader to sign a legal document requiring him to remove the video, remain silent about the agreement, and surrender any future claims against the company to receive a replacement. No further response from Samsung was received afterwards. There were a few more reported Galaxy S4 explosions in India and the UAE.
Galaxy Note 7
On 31 August 2016, it was reported that Samsung was delaying shipments of the Galaxy Note 7 in some regions to perform "additional tests being conducted for product quality"; this came alongside user reports of batteries exploding while charging. On 2 September, Samsung suspended sales of the Note 7 and announced a worldwide "product exchange program" in which customers would be able to exchange their Note 7 for another Note 7, a Galaxy S7, or an S7 Edge (the price difference being refunded). They would also receive a gift card from a participating carrier. On 1 September, the company released a statement saying it had received 35 reports of battery failure, which, according to an unnamed Samsung official, "account for less than 0.2 percent of the entire volume sold". Although it has been referred to as a product recall by the media, it was not an official government-issued recall by an organization such as the U.S. Consumer Product Safety Commission (CPSC), and only a voluntary measure. The CPSC did issue an official recall notice on 15 September 2016, and stated that Samsung received at least 92 reports of the batteries overheating in the U.S., including 26 reports of burns and 55 reports of property damage.
After some replacement Note 7 phones also caught fire, Samsung announced on 11 October 2016 that it would permanently end production of the Note 7 in the interest of customer safety. However, Samsung was hoping to recover from the lost sales from the Note 7 with the introduction of new colors such as the Blue Coral and Black Pearl color for the Galaxy S7 edge.
On 14 October 2016, the U.S. Federal Aviation Administration and the Department of Transportation's Pipeline and Hazardous Materials Safety Administration banned the Note 7 from being taken aboard any airline flight, even if powered off. Qantas, Virgin Australia and Singapore Airlines also banned the carriage of Note 7s on their aircraft with effect from midnight on 15 October. Mexico's largest airlines Aeromexico, Interjet, Volaris and VivaAerobus all banned the handset.
Washing machines
On 4 November 2016, Samsung recalled 2.8 million top-load washing machines sold at home appliance stores between 2011 and 2016 because the machine's top could unexpectedly detach from the chassis during use due to excessive vibration.
Advertisements on smart televisions
In 2015, users on the website Reddit began reporting that some Samsung Smart TVs would display advertisements for Pepsi products during movies when viewed through the Plex application. Plex denied responsibility for the ads and Samsung told blog Gigaom that they were investigating the matter.
In March 2016, soccer star Pelé filed a lawsuit against Samsung in the United States District Court for the Northern District of Illinois, seeking $30 million in damages, claiming violations under the Lanham Act for false endorsement and a state law claim for violation of his right of publicity. The suit alleged that, at one point, Samsung and Pelé came close to entering into a licensing agreement for Pelé to appear in a Samsung advertising campaign; Samsung abruptly pulled out of the negotiations. The October 2015 Samsung ad in question included a partial face shot of a man who allegedly "very closely resembles" Pelé, and also a superimposed ultra-high-definition television screen next to the image of the man featuring a "modified bicycle or scissors-kick", perfected and famously used by Pelé.
In December 2016, Samsung forced an update to their Smart TV line, which resulted in advertisements being displayed in menus on the updated devices.
Viral marketing
On 1 April 2013, several documents were shown on TaiwanSamsungLeaks.org saying that the advertising company OpenTide (Taiwan) and its parent company Samsung were hiring students to attack its competitors by spreading harmful comments and biased opinions/reviews about the products of other phone manufacturers, such as Sony and HTC, in several famous forums and websites in Taiwan to improve its brand image. Hacker "0xb", the uploader of the documents, said that they were intercepted from an email between OpenTide and Samsung. Four days later, the Taiwan division of Samsung Electronics made an announcement stating it would "stop all online marketing strategies which involves publishing and replying in online forums". It was widely reported by the Taiwanese media. Taiwan later fined Samsung Electronics for the smear campaign.
Response to the Russian market after the 2022 invasion of Ukraine
After Russia's 2022 invasion of Ukraine, Samsung's response to the Russian market was inconsistent, revealing mixed signals. Initially, the company halted shipments to Russia, seemingly aligning with international pressure. However, Samsung maintained a presence via gray imports through other Customs Union countries like Armenia and Belarus.
Despite donating $6 million for humanitarian aid, Samsung continued sourcing Russian metals and considered leasing its Kaluga factory to local businesses instead of leaving. By 2023, Samsung had resumed marketing activities in Russia, indicating instability and raising doubts about the company's commitment to international sanctions.
National Samsung Electronics Union 2024 worker strikes
On 5 June 2024, the National Samsung Electronics Union announced its first strike in the company's history, involving roughly 28,000 workers, to take place on 7 June. Negotiations had failed to satisfy workers, who were asking for a 6.5% raise. On 1 July 2024, the union announced that it would launch a three-day strike from 8–10 July after negotiations again fell short, with the majority of the striking workers coming from manufacturing and product development. The strike was converted into an indefinite strike due to a lack of response from management. The strike ended on 1 August, amid institutional pressure and falling participation, though the union said it intended to continue fighting for its demands with other tactics.
References
External links
Samsung Electronics official global blog
Samsung Members Community
South Korean companies established in 1969
1970s initial public offerings
British royal warrant holders
Companies based in Suwon
Companies listed on the Korea Exchange
Companies listed on the London Stock Exchange
Companies listed on the Frankfurt Stock Exchange
Companies listed on the Luxembourg Stock Exchange
Companies in the KOSPI 200
Computer companies of South Korea
Computer hardware companies
Computer memory companies
Computer peripheral companies
Computer storage companies
Computer systems companies
Consumer electronics brands
Display technology companies
Electronics companies established in 1969
Foundry semiconductor companies
Home appliance brands
Home appliance manufacturers of South Korea
Heating, ventilation, and air conditioning companies
HSA Foundation founding members
Mobile phone companies of South Korea
Multinational companies headquartered in South Korea
Netbook manufacturers
Photography equipment manufacturers of South Korea
Point of sale companies
Portable audio player manufacturers
Electronics
Semiconductor companies of South Korea
South Korean brands
Technology companies of South Korea
Vacuum cleaner manufacturers
Video equipment manufacturers
Videotelephony
Companies in the S&P Asia 50
Companies in the Dow Jones Global Titans 50 | Samsung Electronics | Technology | 11,103 |
9,982,439 | https://en.wikipedia.org/wiki/Graph%20enumeration | In combinatorics, an area of mathematics, graph enumeration describes a class of combinatorial enumeration problems in which one must count undirected or directed graphs of certain types, typically as a function of the number of vertices of the graph. These problems may be solved either exactly (as an algebraic enumeration problem) or asymptotically.
The pioneers in this area of mathematics were George Pólya, Arthur Cayley and J. Howard Redfield.
Labeled vs unlabeled problems
In some graphical enumeration problems, the vertices of the graph are considered to be labeled in such a way as to be distinguishable from each other, while in other problems any permutation of the vertices is considered to form the same graph, so the vertices are considered identical or unlabeled. In general, labeled problems tend to be easier. As with combinatorial enumeration more generally, the Pólya enumeration theorem is an important tool for reducing unlabeled problems to labeled ones: each unlabeled class is considered as a symmetry class of labeled objects.
The number of unlabeled graphs with n vertices is still not known in closed form, but as almost all graphs are asymmetric this number is asymptotic to 2^{n(n−1)/2}/n!.
Exact enumeration formulas
Some important results in this area include the following.
The number of labeled n-vertex simple undirected graphs is 2^{n(n−1)/2}.
The number of labeled n-vertex simple directed graphs is 2^{n(n−1)}.
The number C_n of connected labeled n-vertex undirected graphs satisfies the recurrence relation
C_n = 2^{n(n−1)/2} − (1/n) Σ_{k=1}^{n−1} k C(n, k) 2^{(n−k)(n−k−1)/2} C_k,
from which one may easily calculate, for n = 1, 2, 3, ..., that the values for C_n are
1, 1, 4, 38, 728, 26704, 1866256, ...
The number of labeled n-vertex free trees is n^{n−2} (Cayley's formula).
The number of unlabeled n-vertex caterpillars is 2^{n−4} + 2^{⌊(n−4)/2⌋}.
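As an illustrative check (an editorial sketch, not drawn from the cited literature; the function names are arbitrary), the recurrence for C_n above can be evaluated with a few lines of Python and compared with the values listed:

```python
from math import comb

def num_labeled_graphs(n):
    # Number of labeled simple undirected graphs on n vertices: 2^(n(n-1)/2)
    return 2 ** (n * (n - 1) // 2)

def connected_labeled_graphs(n_max):
    # C[n] = number of connected labeled graphs on n vertices, from the recurrence
    # C_n = 2^(n(n-1)/2) - (1/n) * sum_{k=1}^{n-1} k * C(n,k) * 2^((n-k)(n-k-1)/2) * C_k
    C = {1: 1}
    for n in range(2, n_max + 1):
        s = sum(k * comb(n, k) * num_labeled_graphs(n - k) * C[k] for k in range(1, n))
        C[n] = num_labeled_graphs(n) - s // n   # the sum is always divisible by n
    return C

print(connected_labeled_graphs(7))
# {1: 1, 2: 1, 3: 4, 4: 38, 5: 728, 6: 26704, 7: 1866256}
```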
Graph database
Various research groups have provided searchable databases that list graphs with certain properties and of small sizes. For example:
The House of Graphs
Small Graph Database
References
Enumerative combinatorics | Graph enumeration | Mathematics | 475 |
24,018,403 | https://en.wikipedia.org/wiki/C7H14N2 | {{DISPLAYTITLE:C7H14N2}}
The molecular formula C7H14N2 (molar mass: 126.20 g/mol) may refer to:
Bispidine (3,7-diazabicyclo[3.3.1]nonane)
N,N'-Diisopropylcarbodiimide
Molecular formulas | C7H14N2 | Physics,Chemistry | 82 |
19,983 | https://en.wikipedia.org/wiki/Central%20moment | In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterized. Central moments are used in preference to ordinary moments, computed in terms of deviations from the mean instead of from zero, because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location.
Sets of central moments can be defined for both univariate and multivariate distributions.
Univariate moments
The nth moment about the mean (or nth central moment) of a real-valued random variable X is the quantity μ_n := E[(X − E[X])^n], where E is the expectation operator. For a continuous univariate probability distribution with probability density function f(x), the nth moment about the mean μ is μ_n = E[(X − μ)^n] = ∫ (x − μ)^n f(x) dx, with the integral taken over the whole real line.
For random variables that have no mean, such as the Cauchy distribution, central moments are not defined.
The first few central moments have intuitive interpretations:
The "zeroth" central moment μ0 is 1.
The first central moment μ1 is 0 (not to be confused with the first raw moment or the expected value μ).
The second central moment μ2 is called the variance, and is usually denoted σ2, where σ represents the standard deviation.
The third and fourth central moments are used to define the standardized moments which are used to define skewness and kurtosis, respectively.
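As an illustration (an editorial sketch, not part of the original article; the data values are arbitrary), the central moments of a finite, equally weighted sample, and the standardized moments built from them, can be computed directly from the definition:

```python
import numpy as np

def central_moment(x, n):
    """n-th sample central moment: the mean of (x_i - mean(x))**n."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** n)

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]       # arbitrary sample

mu2 = central_moment(x, 2)                          # variance (dividing by N, not N-1)
sigma = mu2 ** 0.5                                  # standard deviation
skewness = central_moment(x, 3) / sigma**3          # standardized third moment
kurtosis = central_moment(x, 4) / sigma**4          # standardized fourth moment

print(mu2, skewness, kurtosis)                      # 4.0  0.65625  2.78125
```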
Properties
For all n, the nth central moment is homogeneous of degree n: μ_n(cX) = c^n μ_n(X).
Only for n such that n equals 1, 2, or 3 do we have an additivity property for random variables X and Y that are independent: μ_n(X + Y) = μ_n(X) + μ_n(Y), provided n ∈ {1, 2, 3}.
A related functional that shares the translation-invariance and homogeneity properties with the nth central moment, but continues to have this additivity property even when n ≥ 4, is the nth cumulant κ_n(X). For n = 1, the nth cumulant is just the expected value; for n = 2 or 3, the nth cumulant is just the nth central moment; for n ≥ 4, the nth cumulant is an nth-degree monic polynomial in the first n moments (about zero), and is also a (simpler) nth-degree polynomial in the first n central moments.
Relation to moments about the origin
Sometimes it is convenient to convert moments about the origin to moments about the mean. The general equation for converting the nth-order moment about the origin to the moment about the mean is
μ_n = E[(X − E[X])^n] = Σ_{j=0}^{n} C(n, j) (−1)^{n−j} μ'_j μ^{n−j},
where μ is the mean of the distribution, and the moment about the origin is given by
μ'_j = ∫ x^j f(x) dx = E[X^j].
For the cases n = 2, 3, 4 — which are of most interest because of the relations to variance, skewness, and kurtosis, respectively — this formula becomes (noting that μ = μ'_1 and μ'_0 = 1):
μ_2 = μ'_2 − μ^2, which is commonly referred to as the variance, Var(X) = E[X^2] − (E[X])^2;
μ_3 = μ'_3 − 3μμ'_2 + 2μ^3;
μ_4 = μ'_4 − 4μμ'_3 + 6μ^2 μ'_2 − 3μ^4;
... and so on, following Pascal's triangle, i.e.
μ_5 = μ'_5 − 5μμ'_4 + 10μ^2 μ'_3 − 10μ^3 μ'_2 + 4μ^5,
because 5μ^4 μ'_1 − μ^5 μ'_0 = 5μ^5 − μ^5 = 4μ^5.
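These identities can be checked numerically; the following sketch (an illustrative example using an arbitrary three-point distribution, not from the original article) compares central moments computed directly with those obtained from the raw moments:

```python
import numpy as np

# An arbitrary small discrete distribution: values and their probabilities
x = np.array([0.0, 1.0, 3.0])
p = np.array([0.2, 0.5, 0.3])

def raw(j):
    """j-th moment about the origin, mu'_j = E[X^j]."""
    return float(np.sum(p * x**j))

mu = raw(1)   # the mean

# Central moments computed directly from the definition ...
mu2 = float(np.sum(p * (x - mu)**2))
mu3 = float(np.sum(p * (x - mu)**3))
mu4 = float(np.sum(p * (x - mu)**4))

# ... and from the raw moments via the conversion formulas above
assert np.isclose(mu2, raw(2) - mu**2)
assert np.isclose(mu3, raw(3) - 3*mu*raw(2) + 2*mu**3)
assert np.isclose(mu4, raw(4) - 4*mu*raw(3) + 6*mu**2*raw(2) - 3*mu**4)
```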
The following sum is a stochastic variable having a compound distribution:
W = Σ_{i=1}^{M} Y_i,
where the Y_i are mutually independent random variables sharing the same common distribution and M is a random integer variable independent of the Y_i with its own distribution. The moments of W can be expressed in terms of the moments of the Y_i and of M, with the convention that the sum is defined as zero for M = 0.
Symmetric distributions
In distributions that are symmetric about their means (unaffected by being reflected about the mean), all odd central moments equal zero whenever they exist, because in the formula for the nth moment, each term involving a value of X less than the mean by a certain amount exactly cancels out the term involving a value of X greater than the mean by the same amount.
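This cancellation can be seen concretely for a small distribution that is symmetric about its mean (an illustrative example; the values and probabilities below are arbitrary):

```python
import numpy as np

# A discrete distribution symmetric about its mean (which is 0 here)
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

mu = np.sum(p * x)                       # mean = 0
for n in (1, 3, 5):
    print(n, np.sum(p * (x - mu)**n))    # every odd central moment is 0
```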
Multivariate moments
For a continuous bivariate probability distribution with probability density function f(x,y), the (j,k) moment about the mean μ = (μ_X, μ_Y) is
μ_{j,k} = E[(X − μ_X)^j (Y − μ_Y)^k] = ∫∫ (x − μ_X)^j (y − μ_Y)^k f(x, y) dx dy.
Central moment of complex random variables
The nth central moment for a complex random variable X is defined as α_n = E[(X − E[X])^n].
The absolute nth central moment of X is defined as β_n = E[|X − E[X]|^n].
The 2nd-order absolute central moment β_2 is called the variance of X, whereas the 2nd-order central moment α_2 is the pseudo-variance of X.
See also
Standardized moment
Image moment
Complex random variable
References
Statistical deviation and dispersion
Moment (mathematics)
fr:Moment (mathématiques)#Moment centré | Central moment | Physics,Mathematics | 921 |
22,555,068 | https://en.wikipedia.org/wiki/PstI | PstI is a type II restriction endonuclease isolated from the Gram negative species, Providencia stuartii.
Function
PstI cleaves DNA at the recognition sequence 5′-CTGCA/G-3′, generating fragments with 3′-cohesive termini. This cleavage yields sticky ends four bases long. PstI is catalytically active as a dimer. The two subunits are related by a 2-fold symmetry axis which, in the complex with the substrate, coincides with the dyad axis of the recognition sequence. It has a molecular weight of 69,500 and contains 54 positively and 41 negatively charged residues.
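As a rough illustration of this cut geometry (an editorial sketch, not taken from the cited sources; the example sequence and function names are hypothetical), the following Python snippet locates CTGCAG sites on the top strand of a linear sequence and splits it at the position between the A and the G:

```python
import re

SITE = "CTGCAG"      # PstI recognition sequence, 5'->3'
CUT_OFFSET = 5       # top strand is cut after CTGCA, i.e. CTGCA / G

def pstI_cut_positions(seq):
    """Top-strand cut positions (0-based) for every PstI site in a linear sequence."""
    seq = seq.upper()
    return [m.start() + CUT_OFFSET for m in re.finditer(SITE, seq)]

def pstI_fragments(seq):
    """Top-strand fragments produced by cutting at every PstI site."""
    cuts = pstI_cut_positions(seq)
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

# hypothetical sequence containing a single PstI site
print(pstI_cut_positions("AAACTGCAGTTT"))   # [8]
print(pstI_fragments("AAACTGCAGTTT"))       # ['AAACTGCA', 'GTTT']
# The bottom strand is cut at the mirror-image position, leaving the
# four-base 3' overhangs described above.
```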
The PstI restriction/modification (R/M) system has two components: a restriction enzyme that cleaves foreign DNA, and a methyltransferase that protects native DNA strands by methylation of the adenine base inside the recognition sequence. The combination of the two provides a defense mechanism against invading viruses. The methyltransferase and endonuclease are encoded as two separate proteins and act independently. In the PstI system, the genes are encoded on opposite strands and hence must be transcribed divergently from separate promoters. The transcription initiation sites are separated by only 70 base pairs. A delay in the expression of the endonuclease relative to the methylase is due to the inherent differences between the two proteins: the endonuclease is a dimer, requiring a second step for assembly, whereas the methylase is a monomer.
PstI is functionally equivalent to BsuBI. Both enzymes recognize the target sequence 5'CTGCAG. The enzyme systems have similar methyltransferases (41% amino acid identity), restriction endonucleases (46% amino acid identity), and genetic makeup (58% nucleotide identity). These observations suggest a shared evolutionary history.
In studies of the preferential double-strand cleavage of DNA, the restriction endonuclease PstI has been shown to bind pSM1 plasmid DNA.
DNA cloning
PstI is a useful enzyme for DNA cloning as it provides a selective system for generating hybrid DNA molecules. These hybrid DNA molecules can then be cleaved at the regenerated PstI sites. Its use is not limited to molecular cloning; it is also used in restriction site mapping, genotyping, Southern blotting, restriction fragment length polymorphism (RFLP) analysis and SNP analysis. It is an isoschizomer of the restriction enzyme SalPI from Streptomyces albus P.
Cleavage
PstI preferentially cleaves purified pSM1 DNA without being influenced by the superhelicity of the substrate. However, it is not known whether the effects of this preference arise upon binding to the recognition site or upon DNA scission. Its differential cleavage rates at different restriction sites are attributed to several features of the duplex structure: proximity to the ends of a linear DNA molecule, variation in DNA sequence within the recognition sites, short distances between regions of unusual DNA sequence and the recognition sites, and special structures such as loops and hairpins. The collective effect of these factors could affect the accessibility of the restriction enzyme to its recognition sites.
References
Restriction enzymes | PstI | Biology | 662 |
20,593,858 | https://en.wikipedia.org/wiki/True%20vapor%20pressure | True vapor pressure (TVP) is a common measure of the volatility of petroleum distillate fuels. It is defined as the equilibrium partial pressure exerted by a volatile organic liquid as a function of temperature, as determined by the test method ASTM D 2879.
The true vapor pressure (TVP) at 100 °F differs slightly from the Reid vapor pressure (RVP), which is also defined at 100 °F, as it excludes dissolved fixed gases such as air. Conversions between the two can be found in AP 42, Fifth Edition, Volume I, Chapter 7: Liquid Storage Tanks (p. 7.1-54 and onwards).
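The sketch below is illustrative only; it is not the AP 42 correlation, which uses empirically fitted constants for each stock. It merely shows the kind of Clausius-Clapeyron-type temperature extrapolation that can be used to estimate a vapor pressure at a storage temperature from a value known at 100 °F, under an assumed effective heat of vaporization.

```python
# Illustrative only: extrapolate a vapor pressure known at a reference
# temperature to another temperature using an assumed heat of vaporization.
import math

R = 8.314          # J/(mol K)
DH_VAP = 30_000.0  # J/mol, assumed effective heat of vaporization (hypothetical)

def tvp_estimate(p_ref_psia, t_ref_f, t_f):
    """Extrapolate vapor pressure from a reference temperature (deg F) to t_f (deg F)."""
    t_ref_k = (t_ref_f - 32) * 5 / 9 + 273.15
    t_k = (t_f - 32) * 5 / 9 + 273.15
    return p_ref_psia * math.exp(-DH_VAP / R * (1 / t_k - 1 / t_ref_k))

print(round(tvp_estimate(5.0, 100.0, 60.0), 2))  # pressure drops at cooler storage temps
```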
References
External links
ASTM D2879 - 97(2007) Standard Test Method for Vapor Pressure-Temperature Relationship and Initial Decomposition Temperature of Liquids by Isoteniscope
USA's Environmental Protection Agency (EPA) publication AP-42, Compilation of Air Pollutant Emissions. Chapter 7
Chemical properties
Physical chemistry
Engineering thermodynamics
Natural gas
Oil refining
Petroleum production | True vapor pressure | Physics,Chemistry,Engineering | 206 |
8,604,538 | https://en.wikipedia.org/wiki/Grand%20600-cell | In geometry, the grand 600-cell or grand polytetrahedron is a regular star 4-polytope with Schläfli symbol {3, 3, 5/2}. It is one of 10 regular Schläfli-Hess polytopes. It is the only one with 600 cells.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
The grand 600-cell can be seen as the four-dimensional analogue of the great icosahedron (which in turn is analogous to the pentagram); both of these are the only regular n-dimensional star polytopes derived by performing stellation operations on the pentagonal polytope with simplex faces. It can be constructed analogously to the pentagram, its two-dimensional analogue: the (n−1)-dimensional simplex faces of the core n-dimensional polytope (tetrahedra for the grand 600-cell, equilateral triangles for the great icosahedron, and line segments for the pentagram) are extended until the figure regains regular faces.
The grand 600-cell is also dual to the great grand stellated 120-cell, mirroring the great icosahedron's duality with the great stellated dodecahedron (which in turn is also analogous to the pentagram); all of these are the final stellations of the n-dimensional "dodecahedral-type" pentagonal polytope.
Related polytopes
It has the same edge arrangement as the great stellated 120-cell, and grand stellated 120-cell, and same face arrangement as the great icosahedral 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess (1883), Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder.
H. S. M. Coxeter, Regular Polytopes, 3rd ed., Dover Publications, 1973.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
The Great 600-cell, a Zome Model
Regular 4-polytopes | Grand 600-cell | Mathematics | 576 |
973,601 | https://en.wikipedia.org/wiki/Charles%20Bassett | Charles Arthur "Charlie" Bassett II (December 30, 1931 – February 28, 1966), (Major, USAF), was an American electrical engineer and United States Air Force test pilot. He went to Ohio State University for two years and later graduated from Texas Tech University with a Bachelor of Science degree in Electrical Engineering. He joined the Air Force as a pilot and graduated from both the Air Force's Experimental Test Pilot School and the Aerospace Research Pilot School. Bassett was married and had two children.
He was selected as a NASA astronaut in 1963 and was assigned to Gemini 9. He died in an airplane crash during training for his first spaceflight. He is memorialized on the Space Mirror Memorial; The Astronaut Monument; and the Fallen Astronaut memorial plaque, which was placed on the Moon during the Apollo 15 mission.
Early life and education
Bassett was born on December 30, 1931, in Dayton, Ohio, to Charles Arthur "Pete" Bassett (1897–1957) and Fannie Belle Milby Bassett (née James; 1907–1993). Bassett was active in the Boy Scouts of America, where he achieved its second highest rank, Life Scout. During high school, Bassett was a model plane aficionado. He belonged to a club that built gasoline-powered models and flew them in the school gym. Bassett's interest in model airplanes translated to real aircraft; he made his first solo flight at age 16. He worked odd jobs at the airport to earn money for flying lessons and earned his private pilot license at age seventeen.
After graduating from Berea High School, in Berea, in 1950, he attended Ohio State University, in Columbus, from 1950 to 1952. Midway through college in 1952, Bassett enrolled in Air Force ROTC; he entered the U.S. Air Force as an aviation cadet in October of that year. He attended Texas Technological College, now Texas Tech University, from 1958 to 1960. He received a Bachelor of Science degree with honors in electrical engineering from Texas Tech and did graduate work at University of Southern California (USC) in Los Angeles.
Military service
He started his career with training at Stallings Air Base, North Carolina, and Bryan Air Force Base, Texas. Bassett graduated from Bryan in December 1953 and was commissioned in the Air Force. He arrived for additional training in Nellis Air Force Base, Nevada, as a second lieutenant. There, he flew trainer aircraft, such as the T-6, the T-28, and the T-33, and flew the jet fighter F-86 Sabre in 1954.
He went to Korea with the 8th Fighter Bomber Group and flew an F-86 Sabre. Bassett was too late to fly any combat missions, and said, "If you don't have any challenge, you never know how good you are." Bassett was promoted to first lieutenant in May 1955. He returned from Korea in 1955 and was assigned to Suffolk County Air Force Base, in New York, flying aircraft such as the F-86D, the F-102, and the C-119.
In November 1960, Bassett went to Maxwell Air Force Base, in Alabama, to attend Squadron Officer School. He also graduated from the Air Force Experimental Test Pilot School (Class 62A) and the Aerospace Research Pilot School (Class III) and was promoted to captain. Bassett was an experimental test pilot and engineering test pilot in the Fighter Projects Office at Edwards Air Force Base, California, and logged over 3,600 hours of flying time, including over 2,900 hours in a jet aircraft.
NASA career
Bassett was one of NASA's third group of astronauts, named in October 1963. In addition to participating in the overall astronaut training program, he had specific responsibilities related to training and simulators. On November 8, 1965, he was selected as pilot of the Gemini 9 mission with Elliot See as command pilot. Bassett was scheduled to make an untethered ninety-minute spacewalk, which was undertaken by Gene Cernan on Gemini 9A.
According to chief astronaut Deke Slayton's autobiography, he chose Bassett for Gemini 9 because he was "strong enough to carry" both himself and See. Slayton had also assigned Bassett as command module pilot for the second backup Apollo crew, alongside Frank Borman and William Anders.
Personal life
On June 22, 1955, Bassett married Jeannie Martin. They had two children.
Death
Bassett and Elliot See died on February 28, 1966, when their T-38 trainer jet, piloted by See, crashed into McDonnell Aircraft Building 101, known as the McDonnell Space Center, from Lambert Field airport in St. Louis, Missouri. Building 101 was where the Gemini spacecraft was built, and the two astronauts were going there that Monday morning to train for two weeks in a simulator. They died a short distance from their spacecraft.
Both astronauts died instantly from trauma sustained in the crash. See was thrown clear of the cockpit and was found in the parking lot still strapped to his ejection seat with the parachute partially open. Bassett was decapitated on impact; his severed head was found later in the day in the rafters of the damaged assembly building.
Both men's remains were buried in Arlington National Cemetery on Friday, March 4. During funeral services in Texas two days earlier, astronauts Jim McDivitt and Jim Lovell and civilian pilot Jere Cobb flew the missing man formation in Bassett's honor, while Buzz Aldrin, Bill Anders, and Walter Cunningham did the same to honor See.
A NASA investigative panel later concluded that pilot error, caused by poor visibility due to bad weather, was the principal cause of the accident. The panel concluded that See was flying too low to the ground during his second approach, probably because of the poor visibility.
Memorials
Bassett is honored at the Kennedy Space Center Visitor Center's Space Mirror Memorial, alongside 24 other NASA astronauts who died in the pursuit of space exploration.
His name also appears on the Fallen Astronaut memorial plaque at Hadley Rille on the Moon, placed by the Apollo 15 mission in 1971. Texas Tech University dedicated an Electrical Engineering Research Laboratory building in Bassett's honor in November 1996.
See also
List of spaceflight-related accidents and incidents
References
Bibliography
External links
Astronauts memorial foundation website (a different archived version from 2011)
Astronautix biography of Charles Bassett
Arlington National Cemetery
1931 births
1966 deaths
Accidental deaths in Missouri
American electrical engineers
American test pilots
Aviators from Ohio
Aviators killed in aviation accidents or incidents in the United States
Burials at Arlington National Cemetery
Deaths by decapitation
Engineers from Ohio
Military personnel from Dayton, Ohio
Ohio State University alumni
Space program fatalities
Texas Tech University alumni
20th-century American engineers
United States Air Force astronauts
United States Air Force officers
U.S. Air Force Test Pilot School alumni
USC Viterbi School of Engineering alumni
Victims of aviation accidents or incidents in 1966 | Charles Bassett | Engineering | 1,367 |
52,095,463 | https://en.wikipedia.org/wiki/Rees%20decomposition | In commutative algebra, a Rees decomposition is a way of writing a ring in terms of polynomial subrings. They were introduced by .
Definition
Suppose that a ring R is a quotient of a polynomial ring k[x1,...] over a field by some homogeneous ideal. A Rees decomposition of R is a representation of R as a direct sum (of vector spaces)
R = ⊕α ηαk[θfα+1,...,θd]
where each ηα is a homogeneous element and the d elements θi are a homogeneous system of parameters for R and
ηαk[θfα+1,...,θd] ⊆ k[θ1,...,θfα].
See also
Stanley decomposition
Hironaka decomposition
References
Commutative algebra | Rees decomposition | Mathematics | 146 |
6,757,063 | https://en.wikipedia.org/wiki/Cervarix | Cervarix is a vaccine against certain types of cancer-causing human papillomavirus (HPV).
Cervarix is designed to prevent infection from HPV types 16 and 18, which cause about 70% of cervical cancer cases. These types also cause most HPV-induced genital and head and neck cancers. Additionally, some cross-reactive protection against virus strains 45 and 31 was shown in clinical trials. Cervarix also contains AS04, a proprietary adjuvant that has been found to boost the immune system response for a longer period of time.
Cervarix is manufactured by GlaxoSmithKline. An alternative product, from Merck & Co., is known as Gardasil. Cervarix was voluntarily taken off of the market in the US in 2016 due to low demand.
Medical uses
HPV is a virus, usually transmitted sexually, which can cause cervical cancer in a small percentage of the women who become genitally infected. Cervarix is a preventative HPV vaccine, not a therapeutic one. HPV immunity is type-specific, so a successful series of Cervarix shots will not block infection by cervical cancer-causing HPV types other than types 16 and 18 and some related types, so experts continue to recommend routine cervical Pap smears even for women who have been vaccinated. Vaccination alone, without continued screening, would prevent fewer cervical cancers than regular screening alone.
Cervarix is indicated for the prevention of the following diseases caused by oncogenic HPV types 16 and 18: cervical cancer, cervical intraepithelial neoplasia (CIN) grade 2 or worse and adenocarcinoma in situ, and CIN grade 1. In the United States, Cervarix is approved for use in females 10 through 25 years of age while in some other countries the age limit is at least 45.
Cervarix has been shown to remain effective 7.3 years after vaccination.
Administration
Immunization with Cervarix consists of 3 doses of 0.5-mL each, by intramuscular injection according to the following schedule: 0, 1, and 6 months. The preferred site of administration is the deltoid region of the upper arm. Cervarix is available in 0.5-mL single-dose vials and prefilled TIP-LOK syringes.
Limitations of effectiveness
Cervarix does not provide protection against disease due to all HPV types, nor against disease if a woman has previously been exposed through sexual activity and protection may not be obtained by all recipients. It is therefore recommended that women continue to adhere to cervical cancer screening procedures.
Adverse effects
The most common local adverse reactions in ≥20% of patients were pain, redness, and swelling at the injection site.
The most common general adverse events in ≥20% of subjects were fatigue, headache, muscle pain (myalgia), gastrointestinal symptoms, and joint pain (arthralgia).
In common with some other prefilled syringe vaccination products, the tip cap and the rubber plunger of the needleless prefilled syringes contain dry natural latex rubber that may cause allergic reactions in latex sensitive individuals. The vial stopper does not contain latex.
Ingredients
The active components of the vaccine are:
Human papillomavirus type 16 L1 protein 20 micrograms
Human papillomavirus type 18 L1 protein 20 micrograms
AS04 adjuvant, containing: 3-O-desacyl-4'-monophosphoryl lipid A (MPL), 50 micrograms, adsorbed on hydrated aluminium hydroxide (Al(OH)3), 0.5 milligrams Al3+ in total
Biotechnology
Cervarix is created using the L1 protein of the viral capsid. L1 protein is in the form of non-infectious virus-like particles (VLPs) produced by recombinant DNA technology using a Baculovirus expression system which uses High Five Rix4446 cells derived from the insect Trichoplusia ni. The vaccine contains no live virus and no DNA, so it cannot infect the patient.
History
The research findings that pioneered the development of the vaccine began in 1991 with The University of Queensland investigators Jian Zhou and Ian Frazer in Australia. Researchers at UQ found a way to form non-infectious virus-like particles (VLP), which could also strongly activate the immune system. Subsequently, the final form of the vaccine was developed in parallel by researchers at Georgetown University Medical Center, the University of Rochester, the University of Queensland in Australia, and the U.S. National Cancer Institute.
Clinical trials
Phase III trials have been conducted, including over 18,000 women from 14 countries in Asia-Pacific, Europe, Latin America and North America.
As of 2009, the manufacturer was conducting a trial to compare the immunogenicity and safety of Cervarix with Gardasil. Subsequent studies showed Cervarix generated higher antibody levels than Gardasil, the other commercially available HPV vaccine, upon testing seven months later, with twice the level for HPV type 16 and six times for HPV type 18.
Society and culture
Legal status
Australia - Cervarix received approval in May 2007 in Australia for women ages 10 to 45.
Philippines - On 25 August 2007, GlaxoSmithKline launched Cervarix in the Philippines after approval by the local Bureau of Food and Drugs.
European Union - Cervarix was approved in September 2007 in the European Union.
United States - The FDA approved Cervarix on 16 October 2009.
On 29 March 2007, GlaxoSmithKline submitted a Biologic License Application (BLA) for Cervarix (human papillomavirus vaccine, AS04 adjuvant-adsorbed), to the U.S. Food and Drug Administration (FDA) which included data from clinical trials in almost 30,000 females 10 to 55 years of age and contains data from the largest Phase III cervical cancer vaccine efficacy trial to that date.
GSK had awaited results of further trials to submit to the FDA. Approval had not been expected before late 2009.
In the United Kingdom it was included in the national vaccination programme for teenage and pre-teenage girls aged 12–13 and 17–18 from September 2008 to August 2012. This caused some controversy since Cervarix was chosen over Gardasil, even though Gardasil protects against additional HPV types 6 and 11 (which cause genital warts). However, the efficacy of Cervarix is higher.
References
Further reading
External links
Drugs developed by GSK plc
Papillomavirus
Protein subunit vaccines
Cancer vaccines
Cervical cancer | Cervarix | Biology | 1,387 |
2,147,428 | https://en.wikipedia.org/wiki/SOG%20Knife | The SOG Knife was designed for, and issued to, covert Studies and Observations Group personnel during the Vietnam War. It was unmarked and supposedly untraceable to country of origin or manufacture in order to maintain plausible deniability of covert operators in the event of their death or capture.
Design
The SOG Knife was designed by Benjamin Baker, the Deputy Chief of the U.S. Counterinsurgency Support Office (CISO). A chrome-moly steel known as SKS-3 was chosen for the blade and hardened to a Rockwell hardness of 55-57. The blade pattern featured a convex false edge on the clip point of a Bowie knife. The stacked leather handle, into which finger grooves were molded, was inspired by a 1920s Marbles Gladstone Skinning Knife owned by Baker. The blade was typically parkerized or blackened to reduce glare; this was done by applying a dark gun-blue finish, similar to those used on firearms, to the SK-3 carbon steel blade. The knife was carried in a leather sheath which contained a sharpening steel or whetstone.
The first contract was awarded to the Japanese trading company Yogi Shokai, Okinawa, for 1,300 seven-inch blades designated "Knife, indigenous, RECON, blade, w/scabbard & whetstone" at $9.85 each. In 1966, SOG ordered 1,200 sterile knives with six-inch blades and black sheaths, and in March of the following year an additional lot of 3,700 was ordered. This second lot was serial numbered for accountability purposes and was designated "Knife, indigenous, hunting, blade, w/black sheath and whetstone". Further knives were ordered from Japan Sword, Tokyo, as well. The orders were actually fulfilled by a number of knifemakers and, as a result, the various lots had minor differences such as blade bluing color and guard color or shape. Although the SOG office based at Kadena and Yogi Shokai were in Okinawa, it is believed that only a major knifemaking source like Seki could have fulfilled all these orders.
In 1986, a company named SOG Specialty Knives based in Santa Monica, California marketed a knife manufactured in Seki City, Japan very similar to the original SOG knife. It had a blued SK5 carbon steel blade, was marked with the US Army Special Forces Crest, and named the "S1 Bowie". It was a replica of the commemorative versions of the original MACV-SOG knives, rather than the actual sterile unmarked knives used in combat. SOG made a version with an Aus8 stainless steel blade and black micarta handle in commemoration of the U.S. Navy SEALs, known as the "SOG S2 Trident".
The other Vietnam replica knife is known as the "Recon Bowie" by SOG with a distinctive banana-shaped blade. This type of knife was actually the first to go into service in Vietnam.
The last replica knife is the "SCUBA/Demo" which was introduced in 2001, the rarest knife in this group as only one true original is reported to exist. It was created for and assigned to the USN Advisory Detachment, which operated coastal gunboats. The S1 and S2 knives were manufactured by Hattori of Seki under contract to SOG Knives USA from 1986 to 2005, after which SOG shifted to manufacturing in Taiwan.
Hattori also manufactured the three commemorative SOG bowies for Boker, for sale in the European market.
Replicas of the SOG knife have also been made by Al Mar Knives, Ek Knives, Tak Fukuta for Parker, and Strider Knives.
SOG also contracted with Kinryu Co. Ltd of Seki Japan to manufacture the Recon Bowie and the Scuba Demo until 2007. None of these knives are currently in official use by any branch of the US Military. Original models are extremely valuable collector's items among both knife and militaria collectors. The later replicas are also in high demand by collectors, especially the early ones made in Seki.
References
External links
Gallery of SOG knives
SOG S1 Bowie - Information on the replica of the SOG Knife (Manufactured by SOG Specialty Knives) used by MACVSOG Forces in Vietnam
Mechanical hand tools
Camping equipment
Military knives | SOG Knife | Physics | 875 |
54,321,473 | https://en.wikipedia.org/wiki/Chalconatronite | Chalconatronite is a carbonate mineral and a rare secondary copper mineral containing copper, sodium, carbon, oxygen, and hydrogen; its chemical formula is Na2Cu(CO3)2•3(H2O). Chalconatronite is partially soluble in water; in dilute acids it is soluble when cold but otherwise only decomposes. The name comes from the mineral's components: copper ("chalcos" in Greek) and natron, naturally occurring sodium carbonate. The mineral is thought to form when water carrying alkali carbonates (possibly from soil) reacts with bronze. Similar minerals include malachite, azurite, and other copper carbonates. Chalconatronite has also been found and recorded in Australia, Germany, and Colorado.
Bronze Disease
Most chalconatronite has formed on bronze and silver objects treated with either sodium sesquicarbonate or sodium cyanide to prevent corrosion and bronze disease. The mineral has also been shown to form on the surface of copper artifacts treated with aqueous sodium carbonate. Formation caused by sodium sesquicarbonate treatment is considered undesirable by many antique collectors, as the mineral changes the patinas of copper artifacts. When the mineral forms, it can replace copper salts within the patina and turn the color from a rich green to a blue-green or even black.
Historical Occurrence
The mineral was recorded in 1955 on three bronze artifacts from ancient Egypt, which were being held in the Fogg Art Museum at Harvard. Chalconatronite was found inside of two bronze figures (one depicting a seated Sekhmet, and another one depicting a group of cats and kittens) from around the late Nubian Dynasty or early Saite Period. Another chalconatronite specimen was found under a bronze censer from the late Coptic Period. The chalconatronite found on the censer formed over cuprite and some atacamite crystals, which are associated minerals.
Chalconatronite was also found on iron and copper Roman armor in 1982 at a site in Chester, England. Some of the mineral was found on a copper pin in St. Mark's Basilica, Venice and in two different Mayan paintings. Along with pseudomalachite, chalconatronite was found on an illuminated manuscript from the sixteenth century. Synthetic chalconatronite could have possibly been made in ancient China as a form of pigment, named "synthetic malachite". It was made by taking copper oxide and boiling it with white alum in a "sufficient amount of water". After the result is cooled, a natron solution would be added to precipitate a synthetic form of chalconatronite, as sodium copper carbonate.
See also
Atacamite
Cuprite
Patina
Botallackite
Bronze disease
References
Carbonate minerals
Copper(II) minerals
Corrosion
Minerals | Chalconatronite | Chemistry,Materials_science | 601 |
26,726,570 | https://en.wikipedia.org/wiki/Frost%20flower%20%28sea%20ice%29 | Frost flowers are ice crystals commonly found growing on young sea ice and thin lake ice in cold, calm conditions. The ice crystals are similar to hoar frost, and are commonly seen to grow in patches around 3–4 cm in diameter. Frost flowers growing on sea ice have extremely high salinities and concentrations of other sea water chemicals and, because of their high surface area, are efficient releasers of these chemicals into the atmosphere.
Formation
Frost flowers form when a layer of relatively warm ice is exposed to still, cold air that is at least 15 °C colder. For example, this would occur when freshly-formed ice at 0 °C underlies cold air at -30 °C. In this situation, water vapor sublimates from the surface of the ‘warm’ ice. As this moist air rises into the colder overlying air, the temperature drops, and the air becomes supersaturated. The final result is a layer of supersaturated air, lying directly above the ice (just like how steam forms above the surface of a hot mug of water on a cold day). Any protrusions from the ice surface stick up into the supersaturated air, and end up being covered in hoar-frost like crystals (i.e. frost flowers) due to condensation. This only occurs when there is little wind: in high winds the supersaturated layer is scrubbed from the surface and blowing snow obscures the ice surface.
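The temperature contrast described above can be made concrete with a rough numerical sketch (my own, using approximate Magnus-type coefficients for saturation over ice, not values from this article): vapor in equilibrium with new ice near 0 °C far exceeds what air at −30 °C can hold at saturation.

```python
# Rough estimate of the supersaturation that develops over warm new ice under
# much colder air, using an approximate Magnus-type formula for ice.
import math

def es_ice_hpa(t_c):
    """Approximate saturation vapor pressure over ice (hPa); coefficients are approximate."""
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

e_surface = es_ice_hpa(-0.5)   # vapor leaving the 'warm' young ice surface
e_air_cap = es_ice_hpa(-30.0)  # what the cold overlying air can hold at saturation

print(round(e_surface, 2), round(e_air_cap, 2))
print("supersaturation ratio ~", round(e_surface / e_air_cap, 1))
```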
Typically, frost flowers are only found on new ice, when the air temperature is very low. This is because thin, new ice has a temperature close to that of the underlying, warm water. As ice thickens, its surface becomes much colder, and it is harder to achieve the ice/air temperature difference needed for frost flower growth. Over fresh water, these conditions are found only when the air temperature drops dramatically below zero in a short amount of time, leading to a sudden freezing event; thus, fresh-water frost flowers are relatively rare. By contrast, salt-water frost flowers are more common, especially in cold, polar regions. Sea ice is often broken apart by winds, tides and currents, leading to open water 'leads' that are exposed to extremely cold air. When these leads freeze over, forming thin ice, frost flowers often form in dense concentrations.
Frost flowers on sea ice are extremely saline. When sea water freezes, it forms porous ice that consists mostly of fresh-water ice, run through with brine channels filled with very briny water containing the salt rejected from the ice during the freezing process. These brine channels extend to the ice surface and coat it with a wet, highly saline surface skim. This brine is then wicked up onto the frost flowers, increasing their salinity. The tips of mature frost flowers are less saline due to vapor deposition, and the bulk salinity decreases at night as hoarfrost accumulates when the temperature drops; new snow, which frost flowers are very good at collecting, also reduces their bulk salinity over time. In one study of frost flowers in the ocean near Barrow, Alaska, Alvarez-Aviles et al. (2008) found that bulk salinities ranged from 16 ppt to 105 ppt, with an average of about 62 ppt (approximately three times saltier than sea water).
Morphology
Temperature, specifically the temperature at the surface of the ice away from the frost flowers, has a direct impact on the morphology as well as on the thickness and absorbency of the ice, the snow coverage, and the blanket of frost flowers. The shape of frost flowers changes when the air temperature or the degree of supersaturation changes during growth, by altering the crystal tips. The level of supersaturation determines the general formation, size and shape of the frost flower. At lower supersaturation, the tip of the frost flower is faceted and side branches form, creating a branched, tree-like crystal; at higher supersaturation the tip of the main branch is rounded, forming a star-like crystal without side branches. The ice crystals in frost flowers are usually dendritic but, like hoar frost, can grow in rod-like morphologies. When warm brine is wicked up onto the ice crystals, it can also give the frost flower a 'clumped' appearance as the facets of the ice crystals partly melt.
Chemistry
Frost flowers are chemically complex at the microstructural level because many different conditions (air, temperature, chemical concentrations in the water, the surface skim, humidity, and precipitation) influence their formation and growth. An important part of their formation is the fractionation of sodium and sulfate relative to chloride during precipitation of the salts. As the temperature decreases, brine rejection increases and the channels become more and more concentrated, especially at the surface. When salts begin to precipitate out of the ice, the relative ion concentrations available in the liquid water and in the frost flowers change. At temperatures below -8 °C, sodium and sulfate are increasingly lost as the temperature decreases, so that at such temperatures aerosol derived from frost flowers is depleted in these ions relative to others. Frost flower aerosol has a higher sodium to sulfate ratio than aerosol from seawater, because a greater proportion of sulfate than of sodium is removed when mirabilite (Na2SO4 · 10H2O) precipitates. Frost flowers have a concentration of bromide ions typically 2 to 3 times greater than that found in seawater, in proportion to their salinity. If the temperature were low enough for the sodium chloride present in the brine or frost flowers to freeze out, the bromide might become readily available; however, sodium chloride starts to precipitate only at ice surface temperatures below -22 °C, and other ions at even lower temperatures, and at such low surface temperatures frost flowers cannot form, so significant depletion of sodium chloride is unlikely.
Aerosol release
Frost flowers have attracted interest as a possible source of polar atmospheric aerosol. Their high chemical concentrations and extended surface area may facilitate efficient release of material into the atmosphere. In particular, studies have shown that an abundance of frost flowers can be linked to high concentrations of tropospheric bromine monoxide, causing tropospheric ozone depletion events, and to higher quantities of airborne sea-salt particles. Addressing the concern that bromine may be causing the ozone depletion, the study by Obbard et al. (2009) showed no conclusive evidence that frost flower aerosol contributes significant bromine enrichment to the atmosphere. Furthermore, the study showed both bromine depletion and enrichment relative to chloride in frost flowers.
See also
Frost
References
Frost and rime
Hydrology
Water ice | Frost flower (sea ice) | Chemistry,Engineering,Environmental_science | 1,419 |
6,894,506 | https://en.wikipedia.org/wiki/Hydrophobic%20collapse | Hydrophobic collapse is a proposed process for the production of the 3-D conformation adopted by polypeptides and other molecules in polar solvents. The theory states that the nascent polypeptide forms initial secondary structure (ɑ-helices and β-strands) creating localized regions of predominantly hydrophobic residues. The polypeptide interacts with water, thus placing thermodynamic pressures on these regions which then aggregate or "collapse" into a tertiary conformation with a hydrophobic core. Incidentally, polar residues interact favourably with water, thus the solvent-facing surface of the peptide is usually composed of predominantly hydrophilic regions.
Hydrophobic collapse may also reduce the affinity of conformationally flexible drugs to their protein targets by reducing the net hydrophobic contribution to binding by self association of different parts of the drug while in solution. Conversely rigid scaffolds (also called privileged structures) that resist hydrophobic collapse may enhance drug affinity.
Partial hydrophobic collapse is an experimentally accepted model for the folding kinetics of many globular proteins, such as myoglobin, alpha-lactalbumin, barstar, and staphylococcal nuclease. However, because experimental evidence of early folding events is difficult to obtain, hydrophobic collapse is often studied in silico via molecular dynamics and Monte Carlo simulations of the folding process. Globular proteins that are thought to fold by hydrophobic collapse are particularly amenable to complementary computational and experimental study using phi value analysis.
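As a toy illustration of the kind of in silico study mentioned above (a minimal sketch of an HP lattice model of my own, not the methodology of the cited works; the sequence is hypothetical), one can enumerate short self-avoiding chains on a 2D lattice and score how well each conformation buries its hydrophobic residues.

```python
# Toy HP lattice-model search: enumerate short self-avoiding walks and score
# conformations by the number of non-bonded hydrophobic-hydrophobic contacts.
SEQ = "HPHHPPHH"          # hypothetical toy sequence: H = hydrophobic, P = polar
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def enumerate_walks(n):
    """Yield self-avoiding walks of n points starting at the origin."""
    def extend(path):
        if len(path) == n:
            yield list(path)
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:
                path.append(nxt)
                yield from extend(path)
                path.pop()
    yield from extend([(0, 0)])

def hh_contacts(seq, walk):
    """Count H-H contacts between residues that are not chain neighbours."""
    pos = {p: i for i, p in enumerate(walk)}
    contacts = 0
    for (x, y), i in pos.items():
        for dx, dy in MOVES:
            j = pos.get((x + dx, y + dy))
            if j is not None and j > i + 1 and seq[i] == "H" and seq[j] == "H":
                contacts += 1
    return contacts

best = max(enumerate_walks(len(SEQ)), key=lambda w: hh_contacts(SEQ, w))
print("max H-H contacts:", hh_contacts(SEQ, best))
```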
Biological significance
Correct protein folding is integral to proper functionality within biological systems. Hydrophobic collapse is one of the main events necessary for reaching a protein's stable and functional conformation. Proteins perform extremely specific functions which are dependent on their structure. Proteins that do not fold correctly are nonfunctional and contribute nothing to a biological system.
Hydrophobic aggregation can also occur between unrelated polypeptides. If two locally hydrophobic regions of two unrelated structures are left near each other in aqueous solution, aggregation will occur. In this case, this can have drastic effects on the health of the organism. The formation of amyloid fibrils, insoluble aggregates of hydrophobic protein can lead to a myriad of diseases including Parkinson's and Alzheimer's disease.
Energetics
The driving force behind protein folding is not well understood; hydrophobic collapse is one of many theories thought to influence how a nascent polypeptide will fold into its native state. Hydrophobic collapse can be visualized as part of the folding funnel model, which leads a protein to its lowest kinetically accessible energy state. In this model, the interactions of the peptide backbone are not considered, as the backbone maintains its stability in non-polar and polar environments as long as there is sufficient hydrogen bonding within it; only the thermodynamic contributions of the side chains to protein stability are considered.
When placed in a polar solvent, polar side chains can form weak intermolecular interactions with the solvent, specifically hydrogen bonds. The solvent is able to maintain hydrogen bonding with itself as well as with the polypeptide, which maintains the stability of the structure within localized segments of the protein. Non-polar side chains, however, are unable to participate in hydrogen bonding. The inability of the solvent to interact with these side chains leads to a decrease in the entropy of the system: the solvent can interact with itself, but the portion of the solvent in proximity to a non-polar side chain is unable to form any significant interactions, so the dissociative degrees of freedom available to the molecule decrease and entropy decreases. Aggregation of the hydrophobic regions reduces the surface area of non-polar side chains exposed to the solvent, thus reducing the localized areas of decreased entropy. While the entropy of the polypeptide decreases as it enters a more ordered state, the overall entropy of the system increases, contributing to the thermodynamic favourability of a folded polypeptide.
As can be seen in the folding funnel diagram, the polypeptide is at its highest energy state when unfolded in aqueous solution. As it forms localized folding intermediates, or molten globules, the energy of the system decreases. The polypeptide will continue folding into lower energy states as long as these conformations are kinetically accessible. In this case, a native conformation does not have to be at the lowest energy trough of the diagram as shown; it must simply exist in its natural and kinetically accessible conformation in biological systems.
Surface structures
The formation of a hydrophobic core requires the surface structures of the aggregate to maintain contact with both the polar solvent and the internal structures. To do this, these surface structures are usually amphipathic. A surface-exposed alpha helix may have nonpolar residues at the N+3 and N+4 positions, allowing the helix to present nonpolar properties on one side when split longitudinally along its axis. Non-polar (gold in the accompanying diagram) amino acids lie along one face of the helix when viewed down the longitudinal axis, with charged or polar amino acids along the other face. This gives the structure the longitudinal amphipathic character necessary for hydrophobic aggregation along the non-polar side. Similarly, beta strands can adopt this property with a simple alternation of polar and nonpolar residues; each N+1 side chain occupies space on the opposite side of the beta strand.
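One common way to quantify the amphipathicity described above is the helical hydrophobic moment; the following sketch (my own, with approximate Kyte-Doolittle hydrophobicity values and hypothetical sequences) sums per-residue hydrophobicities as vectors spaced 100° apart, the angular step between residues in an alpha helix.

```python
# Estimate the mean helical hydrophobic moment of a sequence; a large value
# indicates that hydrophobic residues cluster on one face of the helix.
import math

KD = {"A": 1.8, "L": 3.8, "K": -3.9, "E": -3.5}  # approximate Kyte-Doolittle values

def hydrophobic_moment(seq, step_deg=100.0):
    """Return the magnitude of the helical hydrophobic moment per residue."""
    x = sum(KD[aa] * math.cos(math.radians(i * step_deg)) for i, aa in enumerate(seq))
    y = sum(KD[aa] * math.sin(math.radians(i * step_deg)) for i, aa in enumerate(seq))
    return math.hypot(x, y) / len(seq)

amphipathic = "LKELLKALLEAL"   # hypothetical helix with a nonpolar face
scrambled   = "LLLLKKEEAALL"   # same composition, faces mixed
print(round(hydrophobic_moment(amphipathic), 2),
      round(hydrophobic_moment(scrambled), 2))   # the amphipathic pattern scores much higher
```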
References
Protein structure | Hydrophobic collapse | Chemistry | 1,133 |
70,348,606 | https://en.wikipedia.org/wiki/Cutaneotrichosporon%20curvatum | Cutaneotrichosporon curvatum is a species of fungus in the family Trichosporonaceae. It is an extremophile found in cold-seep sites. It is oleaginous, and uses the sugars in cellulose for growth and the production of storage triglycerides. This species has been extensively studied in relation to lipids. It can take up both glucose and xylose simultaneously. When grown in old oil with high levels of polymerized triglyceride, the cell wall transforms from being smooth to having hairy or wart-like protuberances, which are believed to assist in lipid uptake.
References
Tremellomycetes
Fungus species | Cutaneotrichosporon curvatum | Biology | 148 |
54,333,425 | https://en.wikipedia.org/wiki/List%20of%20Bangladeshi%20engineers | This is a list of notable Bangladeshi engineers.
F
Fazlur Rahman Khan
I
Iqbal Mahmud
J
Jamilur Reza Choudhury
K
Khondkar Siddique-e-Rabbani
M
Muhammad M. Hussain
M. Rezwan Khan
Mahmudur Rahman
Muhammad Shahid Sarwar
S
Sunny Sanwar
engineers
Bangladeshi | List of Bangladeshi engineers | Technology | 69 |
46,838,883 | https://en.wikipedia.org/wiki/Raffaelea%20subfusca | Raffaelea subfusca is a mycangial fungus, first isolated from female adults of the redbay ambrosia beetle, Xyleborus glabratus.
References
Further reading
Dreaden, Tyler J., et al. "Phylogeny of ambrosia beetle symbionts in the genus Raffaelea." Fungal biology 118.12 (2014): 970–978.
Harrington, Thomas C., et al. "Isolations from the redbay ambrosia beetle, Xyleborus glabratus, confirm that the laurel wilt pathogen, Raffaelea lauricola, originated in Asia." Mycologia 103.5 (2011): 1028–1036.
Inácio, M. Lurdes, et al. "Ophiostomatoid fungi, a new threat to cork oak stands." Present and Future of Cork Oak in Portugal (eds. Oliveira, M., Matos, J., Saibo, N., Miguel, C., Gil, L.) (2012): 87-92.
Harrington, T. C., and S. W. Fraedrich. "Quantification of propagules of the laurel wilt fungus and other mycangial fungi from the redbay ambrosia beetle, Xyleborus glabratus." Phytopathology 100.10 (2010): 1118–1123.
External links
MycoBank
Fungi described in 2010
Ophiostomatales
Fungus species | Raffaelea subfusca | Biology | 324 |
41,254 | https://en.wikipedia.org/wiki/Improved-definition%20television | Improved-definition television (IDTV) or enhanced-quality television transmitters and receivers exceed the performance requirements of the NTSC standard, while remaining within the general parameters of NTSC emissions standards.
IDTV improvements may be made at the television transmitter or receiver. Improvements include enhancements in encoding, digital filtering, scan interpolation, interlaced line scanning, and ghost cancellation.
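As a toy example of one of the improvements listed above (a sketch of my own, not an IDTV specification), simple scan interpolation can synthesize the missing lines of an interlaced field by averaging the neighbouring transmitted lines.

```python
# Toy scan interpolation: rebuild a progressive frame from one interlaced field
# by averaging the lines above and below each missing line.
import numpy as np

field = np.arange(20, dtype=float).reshape(5, 4)   # 5 lines of one field (hypothetical)

frame = np.empty((9, 4))
frame[0::2] = field                                 # keep the transmitted lines
frame[1::2] = 0.5 * (field[:-1] + field[1:])        # interpolate the missing lines

print(frame)
```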
IDTV improvements must allow the TV signal to be transmitted and received in the standard 4:3 aspect ratio.
The only relevant implementation of IDTV for NTSC-based broadcasts before the introduction of full-digital TV distribution (DTV) was the Japanese Clear-Vision. In European countries, PALplus and MAC had a similar role.
The more commonly used term for advanced display technology before the advent of high-definition television (HDTV) was enhanced-definition television (EDTV), used for instance for plasma TV sets with a 16:9 aspect ratio in the early 2000s.
See also
Comb filter
Federal Standard 1037C
Video scaler
References
Television technology | Improved-definition television | Technology | 212 |
36,821,570 | https://en.wikipedia.org/wiki/Riemann%E2%80%93Silberstein%20vector | In mathematical physics, in particular electromagnetism, the Riemann–Silberstein vector or Weber vector, named after Bernhard Riemann, Heinrich Martin Weber and Ludwik Silberstein (or sometimes ambiguously called the "electromagnetic field"), is a complex vector that combines the electric field E and the magnetic field B.
History
Heinrich Martin Weber published the fourth edition of "The partial differential equations of mathematical physics according to Riemann's lectures" in two volumes (1900 and 1901). However, Weber pointed out in the preface of the first volume (1900) that this fourth edition was completely rewritten based on his own lectures, not Riemann's, and that the reference to "Riemann's lectures" only remained in the title because the overall concept remained the same and because he continued the work in Riemann's spirit. In the second volume (1901, §138, p. 348), Weber demonstrated how to consolidate Maxwell's equations using a complex combination of the electric and magnetic fields. The real and imaginary components of the resulting equation are an interpretation of Maxwell's equations without charges or currents. It was independently rediscovered and further developed by Ludwik Silberstein in 1907.
Definition
Given an electric field E and a magnetic field B defined on a common region of spacetime, the Riemann–Silberstein vector is
F = E + icB,
where c is the speed of light, with some authors preferring to multiply the right-hand side by an overall constant √(ε0/2), where ε0 is the permittivity of free space. It is analogous to the electromagnetic tensor F, a 2-vector used in the covariant formulation of classical electromagnetism.
In Silberstein's formulation, i was defined as the imaginary unit, and F was defined as a complexified 3-dimensional vector field, called a bivector field.
Application
The Riemann–Silberstein vector is used as a point of reference in the geometric algebra formulation of electromagnetism. Maxwell's four equations in vector calculus reduce to one equation in the algebra of physical space:
((1/c) ∂/∂t + ∇)F = (1/ε0)(ρ − J/c).
Expressions for the fundamental invariants, the energy density and the momentum density also take on simple forms:
F · F = E² − c²B² + 2ic E · B, whose real and imaginary parts are the two fundamental invariants,
u = (ε0/2) F* · F = (ε0/2)(E² + c²B²),
g = (ε0/(2ic)) F* × F = ε0 E × B = S/c²,
where S is the Poynting vector.
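These expressions are easy to check numerically; the following sketch (my own, assuming the unnormalized convention F = E + icB used above) compares them with the textbook forms for random field values.

```python
# Numerical sanity check of the energy density and momentum density written
# in terms of the Riemann-Silberstein vector F = E + i c B.
import numpy as np

c = 299_792_458.0
eps0 = 8.8541878128e-12
rng = np.random.default_rng(1)

E = rng.normal(size=3)
B = rng.normal(size=3) * 1e-8
F = E + 1j * c * B

u_classic = 0.5 * eps0 * (E @ E + c**2 * (B @ B))
u_rs = 0.5 * eps0 * np.real(np.vdot(F, F))            # (eps0/2) F* . F

g_classic = eps0 * np.cross(E, B)                      # momentum density, S / c^2
g_rs = np.real(eps0 / (2j * c) * np.cross(np.conj(F), F))

print(np.isclose(u_classic, u_rs), np.allclose(g_classic, g_rs))
```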
The Riemann–Silberstein vector is used for exact matrix representations of Maxwell's equations in an inhomogeneous medium with sources.
Photon wave function
In a 1996 contribution to quantum electrodynamics, Iwo Bialynicki-Birula used the Riemann–Silberstein vector as the basis for an approach to the photon, noting that it is a "complex vector-function of space coordinates r and time t that adequately describes the quantum state of a single photon". To put the Riemann–Silberstein vector in contemporary parlance, a transition is made:
With the advent of spinor calculus that superseded the quaternionic calculus, the transformation properties of the Riemann-Silberstein vector have become even more transparent ... a symmetric second-rank spinor.
Bialynicki-Birula acknowledges that the photon wave function is a controversial concept and that it cannot have all the properties of Schrödinger wave functions of non-relativistic wave mechanics. Yet defense is mounted on the basis of practicality: it is useful for describing quantum states of excitation of a free field, electromagnetic fields acting on a medium, vacuum excitation of virtual positron-electron pairs, and presenting the photon among quantum particles that do have wave functions.
Schrödinger equation for the photon and the Heisenberg uncertainty relations
Multiplying the two time-dependent Maxwell equations by ℏ, the Schrödinger equation for the photon in vacuum is given by
iℏ ∂F/∂t = c (S · p̂) F,
where S is the vector built from the spin-1 matrices generating full infinitesimal rotations of a 3-spinor particle and p̂ = −iℏ∇ is the momentum operator. One may therefore notice that the Hamiltonian in the Schrödinger equation of the photon is the projection of its spin 1 onto its momentum, since the normal momentum operator appears there from combining parts of rotations.
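The spin-momentum structure of this Hamiltonian can be illustrated numerically (a sketch of my own, not from Bialynicki-Birula's paper): building the spin-1 matrices (S_k)_{ij} = −iε_{kij} and diagonalizing S · n for a unit vector n gives eigenvalues +1, 0, −1, so c S · p̂ acts as c|p| times the photon helicity.

```python
# Build the spin-1 matrices in the Cartesian basis and check that S . n has
# eigenvalues -1, 0, +1 for a unit vector n.
import numpy as np

def levi_civita():
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
        eps[i, k, j] = -1.0
    return eps

eps = levi_civita()
S = [-1j * eps[k] for k in range(3)]      # (S_k)_{ij} = -i eps_{kij}

n = np.array([1.0, 2.0, 2.0]) / 3.0       # arbitrary unit vector (|n| = 1)
S_dot_n = sum(nk * Sk for nk, Sk in zip(n, S))

print(np.round(np.linalg.eigvalsh(S_dot_n), 6))   # [-1.  0.  1.]
```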
In contrast to the electron wave function, the modulus squared of the wave function of the photon (the Riemann–Silberstein vector) is not dimensionless; it must be multiplied by the "local photon wavelength" to the proper power to give a dimensionless expression for normalization, i.e. it is normalized in an exotic way, with a nonlocal integral kernel.
The two residual Maxwell equations are only constraints, i.e.
∇ · F = 0,
and they are automatically fulfilled at all times if they are fulfilled at the initial time t = 0, i.e.
F(r, 0) = ∇ × W(r),
where W is any complex vector field with non-vanishing curl, i.e. a vector potential for the Riemann–Silberstein vector.
Given the wave function of the photon, one can estimate its uncertainty relations. It turns out that photons are "more quantum" than the electron, in that their uncertainties of position and momentum are higher. The natural candidates for the estimate are the natural momentum (simply the projection, or ℏk from the Einstein formula for the photoelectric effect and the simplest theory of quanta) and the uncertainty of the length of the position vector.
We will use the general uncertainty relation for two operators A and B,
ΔA ΔB ≥ ½ |⟨[A, B]⟩|.
We want the uncertainty relation for the position and the momentum, i.e. for the corresponding operators. The first step is to find an auxiliary operator such that this relation can be used directly. We use the same trick that Dirac used to calculate the square root of the Klein-Gordon operator to obtain the Dirac equation:
where the matrices are those appearing in the Dirac equation:
Therefore, we have
Because the spin-1 matrices enter only through the commutator in the same space, we approximate them by the angular momentum matrices of a particle of appropriate length while dropping the multiplying factor, since the resulting Maxwell equations in 4 dimensions would look too artificial compared to the original (alternatively, we can keep the original factors but normalize the new 4-spinor to 2, as 4 scalar particles normalized to 1/2):
We can now readily calculate the commutator by computing the commutators of the scaled matrices and noticing that the symmetric Gaussian state annihilates, on average, the terms containing mixed variables. Calculating the 9 commutators (the mixed ones may vanish for the Gaussian example, and because those matrices are counter-diagonal), estimating the terms from the norm of the resulting matrix containing four factors, which gives the square of the most natural norm of this matrix, and using the norm inequality for the estimate, we obtain the uncertainty bound.
The result is much larger than the corresponding bound for a massive particle in 3 dimensions, and therefore photons turn out to be almost 3 times "more quantum" than particles with mass, such as electrons.
See also
Matrix representation of Maxwell's equations
References
Electromagnetism
Geometric algebra
Bernhard Riemann | Riemann–Silberstein vector | Physics | 1,379 |
70,425 | https://en.wikipedia.org/wiki/Inflammation | Inflammation (from Latin inflammatio) is part of the biological response of body tissues to harmful stimuli, such as pathogens, damaged cells, or irritants. The five cardinal signs are heat, pain, redness, swelling, and loss of function (Latin calor, dolor, rubor, tumor, and functio laesa).
Inflammation is a generic response, and therefore is considered a mechanism of innate immunity, whereas adaptive immunity is specific to each pathogen.
Inflammation is a protective response involving immune cells, blood vessels, and molecular mediators. The function of inflammation is to eliminate the initial cause of cell injury, clear out damaged cells and tissues, and initiate tissue repair. Too little inflammation could lead to progressive tissue destruction by the harmful stimulus (e.g. bacteria) and compromise the survival of the organism. However inflammation can also have negative effects. Too much inflammation, in the form of chronic inflammation, is associated with various diseases, such as hay fever, periodontal disease, atherosclerosis, and osteoarthritis.
Inflammation can be classified as acute or chronic. Acute inflammation is the initial response of the body to harmful stimuli, and is achieved by the increased movement of plasma and leukocytes (in particular granulocytes) from the blood into the injured tissues. A series of biochemical events propagates and matures the inflammatory response, involving the local vascular system, the immune system, and various cells in the injured tissue. Prolonged inflammation, known as chronic inflammation, leads to a progressive shift in the type of cells present at the site of inflammation, such as mononuclear cells, and involves simultaneous destruction and healing of the tissue.
Inflammation has also been classified as Type 1 and Type 2 based on the type of cytokines and helper T cells (Th1 and Th2) involved.
Meaning
The earliest known reference for the term inflammation is around the early 15th century. The word root comes from Old French inflammation around the 14th century, which then comes from Latin inflammatio or inflammationem. Literally, the term relates to the word "flame", as the property of being "set on fire" or "to burn".
The term inflammation is not a synonym for infection. Infection describes the interaction between the action of microbial invasion and the reaction of the body's inflammatory response—the two components are considered together in discussion of infection, and the word is used to imply a microbial invasive cause for the observed inflammatory reaction. Inflammation, on the other hand, describes just the body's immunovascular response, regardless of cause. But, because the two are often correlated, words ending in the suffix -itis (which means inflammation) are sometimes informally described as referring to infection: for example, the word urethritis strictly means only "urethral inflammation", but clinical health care providers usually discuss urethritis as a urethral infection because urethral microbial invasion is the most common cause of urethritis. However, the inflammation–infection distinction is crucial in situations in pathology and medical diagnosis that involve inflammation that is not driven by microbial invasion, such as cases of atherosclerosis, trauma, ischemia, and autoimmune diseases (including type III hypersensitivity).
Causes
Types
Appendicitis
Bursitis
Colitis
Cystitis
Dermatitis
Epididymitis
Encephalitis
Gingivitis
Meningitis
Myelitis
Myocarditis
Nephritis
Neuritis
Pancreatitis
Periodontitis
Pharyngitis
Phlebitis
Prostatitis
RSD/CRPS
Rhinitis
Sinusitis
Tendonitis
Tonsillitis
Urethritis
Vasculitis
Vaginitis
Acute
Acute inflammation is a short-term process, usually appearing within a few minutes or hours and begins to cease upon the removal of the injurious stimulus. It involves a coordinated and systemic mobilization response locally of various immune, endocrine and neurological mediators of acute inflammation. In a normal healthy response, it becomes activated, clears the pathogen and begins a repair process and then ceases.
Acute inflammation occurs immediately upon injury, lasting only a few days. Cytokines and chemokines promote the migration of neutrophils and macrophages to the site of inflammation. Pathogens, allergens, toxins, burns, and frostbite are some of the typical causes of acute inflammation. Toll-like receptors (TLRs) recognize microbial pathogens. Acute inflammation can be a defensive mechanism to protect tissues against injury. Inflammation lasting 2–6 weeks is designated subacute inflammation.
Cardinal signs
Inflammation is characterized by five cardinal signs, (the traditional names of which come from Latin):
Dolor (pain)
Calor (heat)
Rubor (redness)
Tumor (swelling)
Functio laesa (loss of function)
The first four (classical signs) were described by Celsus (–38 AD).
Pain is due to the release of chemicals such as bradykinin and histamine that stimulate nerve endings. Acute inflammation of the lung (usually in response to pneumonia) does not cause pain unless the inflammation involves the parietal pleura, which does have pain-sensitive nerve endings. Heat and redness are due to increased blood flow at body core temperature to the inflamed site. Swelling is caused by accumulation of fluid.
Loss of function
The fifth sign, loss of function, is believed to have been added later by Galen, Thomas Sydenham or Rudolf Virchow. Examples of loss of function include pain that inhibits mobility, severe swelling that prevents movement, having a worse sense of smell during a cold, or having difficulty breathing when bronchitis is present. Loss of function has multiple causes.
Acute process
The process of acute inflammation is initiated by resident immune cells already present in the involved tissue, mainly resident macrophages, dendritic cells, histiocytes, Kupffer cells and mast cells. These cells possess surface receptors known as pattern recognition receptors (PRRs), which recognize (i.e., bind) two subclasses of molecules: pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs). PAMPs are compounds that are associated with various pathogens, but which are distinguishable from host molecules. DAMPs are compounds that are associated with host-related injury and cell damage.
At the onset of an infection, burn, or other injuries, these cells undergo activation (one of the PRRs recognize a PAMP or DAMP) and release inflammatory mediators responsible for the clinical signs of inflammation. Vasodilation and its resulting increased blood flow causes the redness (rubor) and increased heat (calor). Increased permeability of the blood vessels results in an exudation (leakage) of plasma proteins and fluid into the tissue (edema), which manifests itself as swelling (tumor). Some of the released mediators such as bradykinin increase the sensitivity to pain (hyperalgesia, dolor). The mediator molecules also alter the blood vessels to permit the migration of leukocytes, mainly neutrophils and macrophages, to flow out of the blood vessels (extravasation) and into the tissue. The neutrophils migrate along a chemotactic gradient created by the local cells to reach the site of injury. The loss of function (functio laesa) is probably the result of a neurological reflex in response to pain.
In addition to cell-derived mediators, several acellular biochemical cascade systems—consisting of preformed plasma proteins—act in parallel to initiate and propagate the inflammatory response. These include the complement system activated by bacteria and the coagulation and fibrinolysis systems activated by necrosis (e.g., burn, trauma).
Acute inflammation may be regarded as the first line of defense against injury. Acute inflammatory response requires constant stimulation to be sustained. Inflammatory mediators are short-lived and are quickly degraded in the tissue. Hence, acute inflammation begins to cease once the stimulus has been removed.
Chronic
Chronic inflammation is inflammation that lasts for months or years. Macrophages, lymphocytes, and plasma cells predominate in chronic inflammation, in contrast to the neutrophils that predominate in acute inflammation. Diabetes, cardiovascular disease, allergies, and chronic obstructive pulmonary disease are examples of diseases mediated by chronic inflammation. Obesity, smoking, stress and insufficient diet are some of the factors that promote chronic inflammation.
Cardinal signs
Common signs and symptoms that develop during chronic inflammation are:
Body pain, arthralgia, myalgia
Chronic fatigue and insomnia
Depression, anxiety and mood disorders
Gastrointestinal complications such as constipation, diarrhea, and acid reflux
Weight gain or loss
Frequent infections
Vascular component
Vasodilation and increased permeability
As defined, acute inflammation is an immunovascular response to inflammatory stimuli, which can include infection or trauma. This means acute inflammation can be broadly divided into a vascular phase that occurs first, followed by a cellular phase involving immune cells (more specifically myeloid granulocytes in the acute setting). The vascular component of acute inflammation involves the movement of plasma fluid, containing important proteins such as fibrin and immunoglobulins (antibodies), into inflamed tissue.
Upon contact with PAMPs, tissue macrophages and mastocytes release vasoactive amines such as histamine and serotonin, as well as eicosanoids such as prostaglandin E2 and leukotriene B4 to remodel the local vasculature. Macrophages and endothelial cells release nitric oxide. These mediators vasodilate and permeabilize the blood vessels, which results in the net distribution of blood plasma from the vessel into the tissue space. The increased collection of fluid into the tissue causes it to swell (edema). This exuded tissue fluid contains various antimicrobial mediators from the plasma such as complement, lysozyme, antibodies, which can immediately deal damage to microbes, and opsonise the microbes in preparation for the cellular phase. If the inflammatory stimulus is a lacerating wound, exuded platelets, coagulants, plasmin and kinins can clot the wounded area using vitamin K-dependent mechanisms and provide haemostasis in the first instance. These clotting mediators also provide a structural staging framework at the inflammatory tissue site in the form of a fibrin lattice – as would construction scaffolding at a construction site – for the purpose of aiding phagocytic debridement and wound repair later on. Some of the exuded tissue fluid is also funneled by lymphatics to the regional lymph nodes, flushing bacteria along to start the recognition and attack phase of the adaptive immune system.
Acute inflammation is characterized by marked vascular changes, including vasodilation, increased permeability and increased blood flow, which are induced by the actions of various inflammatory mediators. Vasodilation occurs first at the arteriole level, progressing to the capillary level, and brings about a net increase in the amount of blood present, causing the redness and heat of inflammation. Increased permeability of the vessels results in the movement of plasma into the tissues, with resultant stasis due to the increase in the concentration of the cells within blood – a condition characterized by enlarged vessels packed with cells. Stasis allows leukocytes to marginate (move) along the endothelium, a process critical to their recruitment into the tissues. Normal flowing blood prevents this, as the shearing force along the periphery of the vessels moves cells in the blood into the middle of the vessel.
Plasma cascade systems
The complement system, when activated, creates a cascade of chemical reactions that promotes opsonization, chemotaxis, and agglutination, and produces the membrane attack complex (MAC).
The kinin system generates proteins capable of sustaining vasodilation and other physical inflammatory effects.
The coagulation system, or clotting cascade, forms a protective protein mesh over sites of injury.
The fibrinolysis system acts in opposition to the coagulation system to counterbalance clotting and generate several other inflammatory mediators.
Plasma-derived mediators
Cellular component
The cellular component involves leukocytes, which normally reside in blood and must move into the inflamed tissue via extravasation to aid in inflammation. Some act as phagocytes, ingesting bacteria, viruses, and cellular debris. Others release enzymatic granules that damage pathogenic invaders. Leukocytes also release inflammatory mediators that develop and maintain the inflammatory response. In general, acute inflammation is mediated by granulocytes, whereas chronic inflammation is mediated by mononuclear cells such as monocytes and lymphocytes.
Leukocyte extravasation
Various leukocytes, particularly neutrophils, are critically involved in the initiation and maintenance of inflammation. These cells must be able to move to the site of injury from their usual location in the blood, therefore mechanisms exist to recruit and direct leukocytes to the appropriate place. The process of leukocyte movement from the blood to the tissues through the blood vessels is known as extravasation and can be broadly divided up into a number of steps:
Leukocyte margination and endothelial adhesion: The white blood cells within the vessels, which are generally centrally located, move peripherally towards the walls of the vessels. Activated macrophages in the tissue release cytokines such as IL-1 and TNFα, which in turn leads to the production of chemokines that bind to proteoglycans, forming a gradient in the inflamed tissue and along the endothelial wall. Inflammatory cytokines induce the immediate expression of P-selectin on endothelial cell surfaces, and P-selectin binds weakly to carbohydrate ligands on the surface of leukocytes and causes them to "roll" along the endothelial surface as bonds are made and broken. Cytokines released from injured cells induce the expression of E-selectin on endothelial cells, which functions similarly to P-selectin. Cytokines also induce the expression of integrin ligands such as ICAM-1 and VCAM-1 on endothelial cells, which mediate the adhesion and further slow leukocytes down. These weakly bound leukocytes are free to detach if not activated by chemokines produced in injured tissue after signal transduction via their respective G protein-coupled receptors, which activates integrins on the leukocyte surface for firm adhesion. Such activation increases the affinity of bound integrin receptors for ICAM-1 and VCAM-1 on the endothelial cell surface, firmly binding the leukocytes to the endothelium.
Migration across the endothelium, known as transmigration, via the process of diapedesis: Chemokine gradients stimulate the adhered leukocytes to move between adjacent endothelial cells. The endothelial cells retract and the leukocytes pass through the basement membrane into the surrounding tissue using adhesion molecules such as ICAM-1.
Movement of leukocytes within the tissue via chemotaxis: Leukocytes reaching the tissue interstitium bind to extracellular matrix proteins via expressed integrins and CD44 to prevent them from leaving the site. A variety of molecules behave as chemoattractants, for example, C3a or C5a (the anaphylatoxins), and cause the leukocytes to move along a chemotactic gradient towards the source of inflammation.
Phagocytosis
Extravasated neutrophils in the cellular phase come into contact with microbes at the inflamed tissue. Phagocytes express cell-surface endocytic pattern recognition receptors (PRRs) that have affinity and efficacy against non-specific pathogen-associated molecular patterns (PAMPs). Most PAMPs that bind to endocytic PRRs and initiate phagocytosis are cell wall components, including complex carbohydrates such as mannans and β-glucans, lipopolysaccharides (LPS), peptidoglycans, and surface proteins. Endocytic PRRs on phagocytes reflect these molecular patterns, with C-type lectin receptors binding to mannans and β-glucans, and scavenger receptors binding to LPS.
Upon endocytic PRR binding, actin-myosin cytoskeletal rearrangement adjacent to the plasma membrane occurs in a way that endocytoses the plasma membrane containing the PRR-PAMP complex, and the microbe. Phosphatidylinositol and Vps34-Vps15-Beclin1 signalling pathways have been implicated in trafficking the endocytosed phagosome to intracellular lysosomes, where fusion of the phagosome and the lysosome produces a phagolysosome. Reactive oxygen species, superoxide, and hypochlorite within the phagolysosome then kill microbes inside the phagocyte.
Phagocytic efficacy can be enhanced by opsonization. Plasma derived complement C3b and antibodies that exude into the inflamed tissue during the vascular phase bind to and coat the microbial antigens. As well as endocytic PRRs, phagocytes also express opsonin receptors Fc receptor and complement receptor 1 (CR1), which bind to antibodies and C3b, respectively. The co-stimulation of endocytic PRR and opsonin receptor increases the efficacy of the phagocytic process, enhancing the lysosomal elimination of the infective agent.
Cell-derived mediators
Morphologic patterns
Specific patterns of acute and chronic inflammation are seen during particular situations that arise in the body, such as when inflammation occurs on an epithelial surface, or pyogenic bacteria are involved.
Granulomatous inflammation: Characterised by the formation of granulomas, they are the result of a limited but diverse number of diseases, which include among others tuberculosis, leprosy, sarcoidosis, and syphilis.
Fibrinous inflammation: Inflammation resulting in a large increase in vascular permeability allows fibrin to pass through the blood vessels. If an appropriate procoagulative stimulus is present, such as cancer cells, a fibrinous exudate is deposited. This is commonly seen in serous cavities, where the conversion of fibrinous exudate into a scar can occur between serous membranes, limiting their function. The deposit sometimes forms a pseudomembrane sheet. During inflammation of the intestine (pseudomembranous colitis), pseudomembranous tubes can be formed.
Purulent inflammation: Inflammation resulting in large amount of pus, which consists of neutrophils, dead cells, and fluid. Infection by pyogenic bacteria such as staphylococci is characteristic of this kind of inflammation. Large, localised collections of pus enclosed by surrounding tissues are called abscesses.
Serous inflammation: Characterised by the copious effusion of non-viscous serous fluid, commonly produced by mesothelial cells of serous membranes, but may be derived from blood plasma. Skin blisters exemplify this pattern of inflammation.
Ulcerative inflammation: Inflammation occurring near an epithelium can result in the necrotic loss of tissue from the surface, exposing lower layers. The subsequent excavation in the epithelium is known as an ulcer.
Disorders
Inflammatory abnormalities are a large group of disorders that underlie a vast variety of human diseases. The immune system is often involved with inflammatory disorders, as demonstrated in both allergic reactions and some myopathies, with many immune system disorders resulting in abnormal inflammation. Non-immune diseases with causal origins in inflammatory processes include cancer, atherosclerosis, and ischemic heart disease.
Examples of disorders associated with inflammation include:
Acne vulgaris
Asthma
Autoimmune diseases
Autoinflammatory diseases
Celiac disease
Chronic prostatitis
Colitis
Diverticulitis
Familial Mediterranean Fever
Glomerulonephritis
Hidradenitis suppurativa
Hypersensitivities
Inflammatory bowel diseases
Interstitial cystitis
Lichen planus
Mast Cell Activation Syndrome
Mastocytosis
Otitis
Pelvic inflammatory disease
Peripheral ulcerative keratitis
Pneumonia
Reperfusion injury
Rheumatic fever
Rheumatoid arthritis
Rhinitis
Sarcoidosis
Transplant rejection
Vasculitis
Atherosclerosis
Atherosclerosis, formerly considered a lipid storage disorder, is now understood as a chronic inflammatory condition involving the arterial walls. Research has established a fundamental role for inflammation in mediating all stages of atherosclerosis from initiation through progression and, ultimately, the thrombotic complications from it. These new findings reveal links between traditional risk factors like cholesterol levels and the underlying mechanisms of atherogenesis.
Clinical studies have shown that this emerging biology of inflammation in atherosclerosis applies directly to people. For instance, elevation in markers of inflammation predicts outcomes of people with acute coronary syndromes, independently of myocardial damage. In addition, low-grade chronic inflammation, as indicated by levels of the inflammatory marker C-reactive protein, prospectively defines risk of atherosclerotic complications, thus adding to prognostic information provided by traditional risk factors, such as LDL levels.
Moreover, certain treatments that reduce coronary risk also limit inflammation. Notably, lipid-lowering medications such as statins have shown anti-inflammatory effects, which may contribute to their efficacy beyond just lowering LDL levels. This emerging understanding of inflammation's role in atherosclerosis has had significant clinical implications, influencing both risk stratification and therapeutic strategies.
Emerging treatments
Recent developments in the treatment of atherosclerosis have focused on addressing inflammation directly. New anti-inflammatory drugs, such as monoclonal antibodies targeting IL-1β, have been studied in large clinical trials, showing promising results in reducing cardiovascular events. These drugs offer a potential new avenue for treatment, particularly for patients who do not respond adequately to statins. However, concerns about long-term safety and cost remain significant barriers to widespread adoption.
Allergy
An allergic reaction, formally known as type 1 hypersensitivity, is the result of an inappropriate immune response triggering inflammation, vasodilation, and nerve irritation. A common example is hay fever, which is caused by a hypersensitive response by mast cells to allergens. Pre-sensitised mast cells respond by degranulating, releasing vasoactive chemicals such as histamine. These chemicals propagate an excessive inflammatory response characterised by blood vessel dilation, production of pro-inflammatory molecules, cytokine release, and recruitment of leukocytes. Severe inflammatory response may mature into a systemic response known as anaphylaxis.
Myopathies
Inflammatory myopathies are caused by the immune system inappropriately attacking components of muscle, leading to signs of muscle inflammation. They may occur in conjunction with other immune disorders, such as systemic sclerosis, and include dermatomyositis, polymyositis, and inclusion body myositis.
Leukocyte defects
Due to the central role of leukocytes in the development and propagation of inflammation, defects in leukocyte functionality often result in a decreased capacity for inflammatory defense with subsequent vulnerability to infection. Dysfunctional leukocytes may be unable to correctly bind to blood vessels due to surface receptor mutations, digest bacteria (Chédiak–Higashi syndrome), or produce microbicides (chronic granulomatous disease). In addition, diseases affecting the bone marrow may result in abnormal or few leukocytes.
Pharmacological
Certain drugs or exogenous chemical compounds are known to affect inflammation. Vitamin A deficiency, for example, causes an increase in inflammatory responses, and anti-inflammatory drugs work specifically by inhibiting the enzymes that produce inflammatory eicosanoids. Additionally, certain illicit drugs such as cocaine and ecstasy may exert some of their detrimental effects by activating transcription factors intimately involved with inflammation (e.g. NF-κB).
Cancer
Inflammation orchestrates the microenvironment around tumours, contributing to proliferation, survival and migration. Cancer cells use selectins, chemokines and their receptors for invasion, migration and metastasis. On the other hand, many cells of the immune system contribute to cancer immunology, suppressing cancer.
Molecular intersection between receptors of steroid hormones, which have important effects on cellular development, and transcription factors that play key roles in inflammation, such as NF-κB, may mediate some of the most critical effects of inflammatory stimuli on cancer cells. This capacity of a mediator of inflammation to influence the effects of steroid hormones in cells is very likely to affect carcinogenesis. On the other hand, due to the modular nature of many steroid hormone receptors, this interaction may offer ways to interfere with cancer progression, through targeting of a specific protein domain in a specific cell type. Such an approach may limit side effects that are unrelated to the tumor of interest, and may help preserve vital homeostatic functions and developmental processes in the organism.
There is some evidence, as of 2009, suggesting that cancer-related inflammation (CRI) may lead to the accumulation of random genetic alterations in cancer cells.
Role in cancer
In 1863, Rudolf Virchow hypothesized that the origin of cancer was at sites of chronic inflammation. As of 2012, chronic inflammation was estimated to contribute to approximately 15% to 25% of human cancers.
Mediators and DNA damage in cancer
An inflammatory mediator is a messenger that acts on blood vessels and/or cells to promote an inflammatory response. Inflammatory mediators that contribute to neoplasia include prostaglandins, inflammatory cytokines such as IL-1β, TNF-α, IL-6 and IL-15 and chemokines such as IL-8 and GRO-alpha. These inflammatory mediators, and others, orchestrate an environment that fosters proliferation and survival.
Inflammation also causes DNA damage due to the induction of reactive oxygen species (ROS) by various intracellular inflammatory mediators. In addition, leukocytes and other phagocytic cells attracted to the site of inflammation induce DNA damage in proliferating cells through their generation of ROS and reactive nitrogen species (RNS). ROS and RNS are normally produced by these cells to fight infection. ROS alone cause more than 20 types of DNA damage. Oxidative DNA damage causes both mutations and epigenetic alterations. RNS also cause mutagenic DNA damage.
A normal cell may undergo carcinogenesis to become a cancer cell if it is frequently subjected to DNA damage during long periods of chronic inflammation. DNA damage may cause genetic mutations due to inaccurate repair. In addition, mistakes in the DNA repair process may cause epigenetic alterations. Mutations and epigenetic alterations that are replicated and provide a selective advantage during somatic cell proliferation may be carcinogenic.
Genome-wide analyses of human cancer tissues reveal that a single typical cancer cell may possess roughly 100 mutations in coding regions, 10–20 of which are "driver mutations" that contribute to cancer development. However, chronic inflammation also causes epigenetic changes, such as DNA methylation, that are often more common than mutations. Typically, several hundreds to thousands of genes are methylated in a cancer cell (see DNA methylation in cancer). Sites of oxidative damage in chromatin can recruit complexes that contain DNA methyltransferases (DNMTs), a histone deacetylase (SIRT1), and a histone methyltransferase (EZH2), and thus induce DNA methylation. DNA methylation of a CpG island in a promoter region may cause silencing of its downstream gene (see CpG site and regulation of transcription in cancer). DNA repair genes, in particular, are frequently inactivated by methylation in various cancers (see hypermethylation of DNA repair genes in cancer). A 2018 report evaluated the relative importance of mutations and epigenetic alterations in progression to two different types of cancer. This report showed that epigenetic alterations were much more important than mutations in generating gastric cancers (associated with inflammation). However, mutations and epigenetic alterations were of roughly equal importance in generating esophageal squamous cell cancers (associated with tobacco chemicals and acetaldehyde, a product of alcohol metabolism).
HIV and AIDS
It has long been recognized that infection with HIV is characterized not only by development of profound immunodeficiency but also by sustained inflammation and immune activation. A substantial body of evidence implicates chronic inflammation as a critical driver of immune dysfunction, premature appearance of aging-related diseases, and immune deficiency. Many now regard HIV infection not only as an evolving virus-induced immunodeficiency, but also as a chronic inflammatory disease. Even after the introduction of effective antiretroviral therapy (ART) and effective suppression of viremia in HIV-infected individuals, chronic inflammation persists. Animal studies also support the relationship between immune activation and progressive cellular immune deficiency: SIVsm infection of its natural nonhuman primate host, the sooty mangabey, causes high-level viral replication but limited evidence of disease. This lack of pathogenicity is accompanied by a lack of inflammation, immune activation and cellular proliferation. In sharp contrast, experimental SIVsm infection of rhesus macaques produces immune activation and AIDS-like disease with many parallels to human HIV infection.
Delineating how CD4 T cells are depleted and how chronic inflammation and immune activation are induced lies at the heart of understanding HIV pathogenesis, and is one of the top priorities for HIV research by the Office of AIDS Research, National Institutes of Health. Recent studies demonstrated that caspase-1-mediated pyroptosis, a highly inflammatory form of programmed cell death, drives CD4 T-cell depletion and inflammation by HIV. These are the two signature events that propel HIV disease progression to AIDS. Pyroptosis appears to create a pathogenic vicious cycle in which dying CD4 T cells and other immune cells (including macrophages and neutrophils) release inflammatory signals that recruit more cells into the infected lymphoid tissues to die. The feed-forward nature of this inflammatory response produces chronic inflammation and tissue injury. Identifying pyroptosis as the predominant mechanism that causes CD4 T-cell depletion and chronic inflammation provides novel therapeutic opportunities, namely targeting caspase-1, which controls the pyroptotic pathway. In this regard, pyroptosis of CD4 T cells and secretion of pro-inflammatory cytokines such as IL-1β and IL-18 can be blocked in HIV-infected human lymphoid tissues by addition of the caspase-1 inhibitor VX-765, which has already proven to be safe and well tolerated in phase II human clinical trials. These findings could propel development of an entirely new class of "anti-AIDS" therapies that act by targeting the host rather than the virus. Such agents would almost certainly be used in combination with ART. By promoting "tolerance" of the virus instead of suppressing its replication, VX-765 or related drugs may mimic the evolutionary solutions occurring in multiple monkey hosts (e.g. the sooty mangabey) infected with species-specific lentiviruses that have led to a lack of disease, no decline in CD4 T-cell counts, and no chronic inflammation.
Resolution
The inflammatory response must be actively terminated when no longer needed to prevent unnecessary "bystander" damage to tissues. Failure to do so results in chronic inflammation and cellular destruction. Resolution of inflammation occurs by different mechanisms in different tissues.
Mechanisms that serve to terminate inflammation include:
Short half-life of inflammatory mediators in the tissue
Production and release of anti-inflammatory cytokines such as transforming growth factor beta (TGF-β) and interleukin 10 (IL-10)
Production of anti-inflammatory lipid mediators, such as lipoxins and resolvins, that promote resolution
Apoptosis of pro-inflammatory cells such as neutrophils, followed by their clearance by macrophages
Connection to depression
There is evidence for a link between inflammation and depression. Inflammatory processes can be triggered by negative cognitions or their consequences, such as stress, violence, or deprivation. Thus, negative cognitions can cause inflammation that can, in turn, lead to depression. A 2019 meta-analysis found that chronic inflammation is associated with a 30% increased risk of developing major depressive disorder, supporting the link between inflammation and mental health.
In addition, there is increasing evidence that inflammation can cause depression because of the increase of cytokines, setting the brain into a "sickness mode".
Classical symptoms of being physically sick, such as lethargy, show a large overlap in behaviors that characterize depression. Levels of cytokines tend to increase sharply during the depressive episodes of people with bipolar disorder and drop off during remission. Furthermore, it has been shown in clinical trials that anti-inflammatory medicines taken in addition to antidepressants not only significantly improve symptoms but also increase the proportion of subjects responding positively to treatment.
Inflammation that leads to serious depression can be caused by common infections, such as those caused by viruses, bacteria, or even parasites.
Connection to delirium
There is evidence for a link between inflammation and delirium, based on the results of a longitudinal study investigating C-reactive protein (CRP) levels in COVID-19 patients.
Systemic effects
An infectious organism can escape the confines of the immediate tissue via the circulatory system or lymphatic system, where it may spread to other parts of the body. If an organism is not contained by the actions of acute inflammation, it may gain access to the lymphatic system via nearby lymph vessels. An infection of the lymph vessels is known as lymphangitis, and infection of a lymph node is known as lymphadenitis. When lymph nodes cannot destroy all pathogens, the infection spreads further. A pathogen can gain access to the bloodstream through lymphatic drainage into the circulatory system.
When inflammation overwhelms the host, systemic inflammatory response syndrome is diagnosed. When it is due to infection, the term sepsis is applied, with the term bacteremia applied specifically to bacterial sepsis and viremia to viral sepsis. Vasodilation and organ dysfunction are serious problems associated with widespread infection that may lead to septic shock and death.
Acute-phase proteins
Inflammation also is characterized by high systemic levels of acute-phase proteins. In acute inflammation, these proteins prove beneficial; however, in chronic inflammation, they can contribute to amyloidosis. These proteins include C-reactive protein, serum amyloid A, and serum amyloid P, which cause a range of systemic effects including:
Fever
Increased blood pressure
Decreased sweating
Malaise
Loss of appetite
Somnolence
Leukocyte numbers
Inflammation often affects the numbers of leukocytes present in the body:
Leukocytosis is often seen during inflammation induced by infection, where it results in a large increase in the number of leukocytes in the blood, especially immature cells. Leukocyte numbers usually increase to between 15,000 and 20,000 cells per microliter, but extreme cases can see them approach 100,000 cells per microliter. Bacterial infection usually results in an increase of neutrophils, creating neutrophilia, whereas diseases such as asthma, hay fever, and parasite infestation result in an increase in eosinophils, creating eosinophilia.
Leukopenia can be induced by certain infections and diseases, including viral infection, Rickettsia infection, some protozoa, tuberculosis, and some cancers.
Interleukins and obesity
With the discovery of interleukins (IL), the concept of systemic inflammation developed. Although the processes involved are identical to tissue inflammation, systemic inflammation is not confined to a particular tissue but involves the endothelium and other organ systems.
Chronic inflammation is widely observed in obesity. Obese people commonly have many elevated markers of inflammation, including:
IL-6 (interleukin 6)
TNF-α (tumor necrosis factor alpha)
CRP (C-reactive protein)
Low-grade chronic inflammation is characterized by a two- to threefold increase in the systemic concentrations of cytokines such as TNF-α, IL-6, and CRP. Waist circumference correlates significantly with systemic inflammatory response.
Loss of white adipose tissue reduces levels of inflammation markers. As of 2017 the association of systemic inflammation with insulin resistance and type 2 diabetes, and with atherosclerosis was under preliminary research, although rigorous clinical trials had not been conducted to confirm such relationships.
C-reactive protein (CRP) is generated at a higher level in obese people, and may increase the risk for cardiovascular diseases.
Outcomes
The outcome in a particular circumstance will be determined by the tissue in which the injury has occurred and the injurious agent that is causing it. The possible outcomes of inflammation are:
Resolution: The complete restoration of the inflamed tissue back to a normal status. Inflammatory measures such as vasodilation, chemical production, and leukocyte infiltration cease, and damaged parenchymal cells regenerate. Such is usually the outcome when limited or short-lived inflammation has occurred.
Fibrosis: Large amounts of tissue destruction, or damage in tissues unable to regenerate, cannot be regenerated completely by the body. Fibrous scarring occurs in these areas of damage, forming a scar composed primarily of collagen. The scar will not contain any specialized structures, such as parenchymal cells, hence functional impairment may occur.
Abscess formation: A cavity is formed containing pus, an opaque liquid containing dead white blood cells and bacteria with general debris from destroyed cells.
Chronic inflammation: In acute inflammation, if the injurious agent persists then chronic inflammation will ensue. This process, marked by inflammation lasting many days, months or even years, may lead to the formation of a chronic wound. Chronic inflammation is characterised by the dominating presence of macrophages in the injured tissue. These cells are powerful defensive agents of the body, but the toxins they release, including reactive oxygen species, are injurious to the organism's own tissues as well as invading agents. As a consequence, chronic inflammation is almost always accompanied by tissue destruction.
Examples
Inflammation is usually indicated by adding the suffix "itis", as shown below. However, some conditions, such as asthma and pneumonia, do not follow this convention. More examples are available at List of types of inflammation.
See also
Notes
References
External links
Immunology
Animal physiology
Inflammations
Human physiology | Inflammation | Biology | 8,095 |
2,824,202 | https://en.wikipedia.org/wiki/International%20Code%20of%20Nomenclature%20of%20Prokaryotes | The International Code of Nomenclature of Prokaryotes (ICNP) or Prokaryotic Code, formerly the International Code of Nomenclature of Bacteria (ICNB) or Bacteriological Code (BC), governs the scientific names for Bacteria and Archaea. It denotes the rules for naming taxa of bacteria, according to their relative rank. As such it is one of the nomenclature codes of biology.
Originally the International Code of Botanical Nomenclature dealt with bacteria, and this kept references to bacteria until these were eliminated at the 1975 International Botanical Congress. An early Code for the nomenclature of bacteria was approved at the 4th International Congress for Microbiology in 1947, but was later discarded.
The latest version to be printed in book form is the 1990 Revision, but the book does not represent the current rules. The 2008 and 2022 Revisions have been published in the International Journal of Systematic and Evolutionary Microbiology (IJSEM). Rules are maintained by the International Committee on Systematics of Prokaryotes (ICSP; formerly the International Committee on Systematic Bacteriology, ICSB).
The baseline for bacterial names is the Approved Lists with a starting point of 1980. New bacterial names are reviewed by the ICSP as being in conformity with the Rules of Nomenclature and published in the IJSEM.
Cyanobacteria
Since 1975, most bacteria have been covered by the bacteriological code. However, cyanobacteria were still covered by the botanical code. Starting in 1999, cyanobacteria were covered by both the botanical and bacteriological codes. This situation has caused nomenclatural problems for the cyanobacteria. By 2020, there were three proposals for how to resolve the situation:
Exclude cyanobacteria from the bacteriological code.
Apply the bacteriological code to all cyanobacteria.
Treat valid publication under the botanical code as valid publication under the bacteriological code.
In 2021, the ICSP held a formal vote on the three proposals and the third option was chosen.
Type strain
Since 2001, when a new bacterial or archaeal species is described, a type strain must be designated. The type strain is a living culture to which the scientific name of that organism is formally attached. For a new species name to be validly published, the type strain must be deposited in a public culture collection in at least two different countries. Before 2001, a species could also be typified using a description, a preserved specimen, or an illustration. There is a single type strain for each prokaryotic species, but different culture collections may designate a unique name for the same strain. For example, the type strain of E. coli (originally strain U5/41) is called ATCC 11775 by the American Type Culture Collection, DSM 30083 by the German Collection of Microorganisms and Cell Cultures, JCM 1649 by the Japan Collection of Microorganisms, and LMG 2092 by the Belgian Coordinated Collections of Microorganisms. When a prokaryotic species cannot be cultivated in the laboratory (and therefore cannot be deposited in a culture collection), it may be given a provisional candidatus name, but is not considered validly published. Species that are not cultivated in a laboratory but are known only from their DNA sequences are covered by the Code of Nomenclature of Prokaryotes Described from Sequence Data.
Versions
Buchanan, R. E., and Ralph St. John-Brooks. (1947, June) (Editors). Proposed Bacteriological Code of Nomenclature. Developed from proposals approved by International Committee on Bacteriological Nomenclature at the Meeting of the Third International Congress for Microbiology. Publication authorized in Plenary Session, pp. 61. Iowa State College Press, Ames, Iowa. U.S.A. Hathi Trust.
Reprinted 1949, Journal of General Microbiology 3, 444–462.
International Committee on Bacteriological Nomenclature. (1958, June). International code of nomenclature of bacteria and viruses. Ames, Iowa State College Press. BHL.
Lapage, S.P., Sneath, P.H.A., Lessel, E.F., Skerman, V.B.D., Seeliger, H.P.R. & Clark, W.A. (1975). International Code of Nomenclature of Bacteria. 1975 Revision. American Society of Microbiology, Washington, D.C.
Lapage, S.P., Sneath, P.H.A., Lessel, E.F., Skerman, V.B.D., Seeliger, H.P.R. & Clark, W.A. (1992). International Code of Nomenclature of Bacteria. Bacteriological Code. 1990 Revision. American Society for Microbiology, Washington, D.C. link.
Parker, C.T., Tindall, B.J. & Garrity, G.M., eds. (2019). International Code of Nomenclature of Prokaryotes. Prokaryotic Code (2008 Revision). International Journal of Systematic and Evolutionary Microbiology 69(1A): S1–S111. doi: 10.1099/ijsem.0.000778
See also
Glossary of scientific naming
International Committee on Taxonomy of Viruses
Microbiology Society
Code of Nomenclature of Prokaryotes Described from Sequence Data – separate system
References
External links
International Journal of Systematic and Evolutionary Microbiology Online
List of Prokaryotic Names with Standing in Nomenclature
Search of Prokaryotic Nomenclature provided by NamesforLife
International standards
Bacterial nomenclature
Nomenclature codes
International classification systems | International Code of Nomenclature of Prokaryotes | Biology | 1,168 |
74,990,551 | https://en.wikipedia.org/wiki/Competence%20stimulating%20peptide | Competence stimulating peptides (CSP) are chemical messengers that assist the initiation of quorum sensing, and exist in many bacterial genera. Bacterial transformation of DNA is driven by CSP-coupled quorum sensing.
Competence stimulating peptides are a subset of proteins that promote quorum sensing in numerous bacterial genera, including Streptococcus and Bacillus. Quorum sensing contributes to the regulation of specific gene expression in response to fluctuations in cell population density. Streptococcus pneumoniae, a highly studied gram-positive bacterium, is capable of quorum sensing and can release autoinducers, chemical signals whose concentration increases with cell density. CSPs are part of a unique form of regulation involved in DNA processing. This form of DNA processing starts abruptly, and at the same time in all cells, in a constantly or exponentially growing culture, and then declines rapidly after about 12 minutes of exponential growth.
Background
Competence is the ability of bacteria to take up DNA fragments from the environment and integrate them into their chromosome. The competence stimulating peptide (CSP) is a 17-amino-acid signal peptide that triggers quorum sensing, which aids competence, biofilm formation, and virulence. The propensity of S. pneumoniae to become competent is critical to the bacterium's development of antibiotic resistance.
In species in which the appearance of competence has been studied, a substantial fraction of cells in a culture develops competence under specific growth conditions (e.g., growth-limiting conditions). S. pneumoniae is unique in the sense that virtually all cells of a culture develop the ability to become competent at the same time. The density that the cells have reached during exponential growth plays a role in determining when competence is triggered. This competence period lasts only a short time, and studies indicate that it does not affect the growth rate of the culture. S. pneumoniae strains can be divided into two main specificity groups based on the CSP signal they produce and their compatible receptors. The CSP1 signal is received by the receptor ComD1 and the CSP2 signal is received by ComD2.
Physiology and biochemistry
Streptococcus pneumoniae is one of the most highly studied bacterial species containing CSP, though other genera and species also utilize the hormone-like peptide. Variations in structure, receptor specificity, and codon sequence occur even between different strains of the same species. However, homologous CSPs retain a negatively charged N-terminus, an arginine residue at position three, and a positively charged C-terminus. Signal-receptor specificity is demonstrated in Streptococcal species through the relationship between the CSP1 and CSP2 signals and the receptors ComD1 and ComD2. Variations in receptor specificity and composition can be estimated based on nuclear magnetic resonance (NMR) spectroscopy analysis.
Alterations in the structure of CSP signals, such as CSP1 and CSP2, are shown to inhibit the cellular response to these peptides, often resulting in reduced biofilm production. Replacement of the first glutamate residue in CSP1 inhibits receptor activation of competence genes, and hydrophobic regions on the CSP1 molecule play key roles in effective ComD1 and ComD2 binding. Interspecies interactions between biofilm-producing organisms induce the release of chemical signals that inhibit binding or receptor activation in competence stimulating processes.
DNA transformation is initiated once a threshold concentration of CSP is reached within a bacterial cell; CSP concentration is proportional to cellular density. After the threshold concentration is met, transmembrane histidine kinases are activated by binding of the corresponding peptides. Regulator proteins are in turn phosphorylated by the activated kinases, thereby inducing competence gene expression. Such genes produce proteins responsible for inducing DNA transformation.
Implications in health and industry
Quorum sensing bacteria within the human microbiome are responsible for many diseases, including sinusitis, otitis media, pneumonia, bacteraemia, osteomyelitis, septic arthritis, and meningitis. In the United States alone, S. pneumoniae causes more than 22,000 deaths a year. S. pneumoniae uses the competence stimulating peptide and quorum sensing to initiate its attack, establish an infection, and acquire antibiotic resistance genes. Overall, the competence stimulating peptide allows S. pneumoniae to mount a more pervasive attack on the human host.
Current studies in health and industry center on explaining and intercepting the competence region within S. pneumoniae. The goal is to limit cell–cell communication in the hope of attenuating S. pneumoniae infectivity. Inhibiting the competence stimulating peptide shows potential as a way to combat pneumococcal infections.
References
Peptides
Microbial population biology | Competence stimulating peptide | Chemistry | 987 |
60,876 | https://en.wikipedia.org/wiki/Markov%20chain | In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov.
Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing.
The adjectives Markovian and Markov are used to describe something that is related to a Markov process.
Principles
Definition
A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent.
A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).
Types of Markov chains
The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time versus continuous time:

| | Countable state space | Continuous or general state space |
|---|---|---|
| Discrete-time | (Discrete-time) Markov chain on a countable or finite state space | Markov chain on a measurable state space (for example, a Harris chain) |
| Continuous-time | Continuous-time Markov process or Markov jump process | Any continuous stochastic process with the Markov property (for example, the Wiener process) |
Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term.
While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
Transitions
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
History
Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov processes in continuous time were discovered long before his work in the early 20th century in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.
In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.
Examples
Mark V. Shaney is a third-order Markov chain program, and a Markov text generator. It ingests the sample text (the Tao Te Ching, or the posts of a Usenet group) and creates a massive list of every sequence of three successive words (triplet) which occurs in the text. It then chooses two words at random, and looks for a word which follows those two in one of the triplets in its massive list. If there is more than one, it picks at random (identical triplets count separately, so a sequence which occurs twice is twice as likely to be picked as one which only occurs once). It then adds that word to the generated text. Then, in the same way, it picks a triplet that starts with the second and third words in the generated text, and that gives a fourth word. It adds the fourth word, then repeats with the third and fourth words, and so on.
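The triplet mechanism described above can be sketched in a few lines of code. The following is a minimal illustration rather than the original Mark V. Shaney program; the function names and the sample string are invented for the example.

```python
import random
from collections import defaultdict

def build_triplets(text):
    """Map each pair of successive words to the list of words that follow it.
    Duplicate successors are kept, so more frequent continuations are
    proportionally more likely to be chosen."""
    words = text.split()
    table = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        table[(a, b)].append(c)
    return table

def generate(table, length=30):
    """Start from a random pair and repeatedly sample a word that follows
    the last two generated words."""
    pair = random.choice(list(table.keys()))
    output = list(pair)
    for _ in range(length):
        successors = table.get(pair)
        if not successors:  # the pair only occurs at the very end of the text
            break
        nxt = random.choice(successors)
        output.append(nxt)
        pair = (pair[1], nxt)
    return " ".join(output)

sample = "the quick brown fox jumps over the lazy dog and the quick red fox sleeps"
print(generate(build_triplets(sample), length=10))
```

Because the next word is sampled using only the current pair of words, the sequence of pairs produced by such a generator forms a Markov chain on the set of word pairs.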
Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.
A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.
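As a minimal sketch (not taken from any particular source), the drunkard's walk can be simulated directly; the update rule makes the Markov property explicit, since the next position is drawn using only the current one.

```python
import random

def drunkards_walk(steps, start=0):
    """Random walk on the integers: move +1 or -1 with equal probability."""
    position = start
    path = [position]
    for _ in range(steps):
        position += random.choice([+1, -1])  # depends only on the current state
        path.append(position)
    return path

print(drunkards_walk(10))  # e.g. [0, 1, 0, -1, -2, -1, 0, 1, 2, 1, 0]
```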
A series of independent states (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next state depends on the current one.
A non-Markov example
Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and one by one, coins are randomly drawn from the purse and are set on a table. If X_n represents the total value of the coins set on the table after n draws, with X_0 = 0, then the sequence {X_n : n ≥ 0} is not a Markov process.
To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus X_6 = $0.50. If we know not just X_6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X_7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X_6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X_7 are impacted by our knowledge of values prior to X_6.
However, it is possible to model this scenario as a Markov process. Instead of defining X_n to represent the total value of the coins on the table, we could define X_n to represent the count of the various coin types on the table. For instance, X_6 = 1,0,5 could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by 6 × 6 × 6 = 216 possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state X_1 = 0,1,0. The probability of achieving X_2 now depends on X_1; for example, the state X_2 = 1,0,1 is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the state X_n = i,j,k depends exclusively on the outcome of the state X_{n−1} = ℓ,m,p.
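The coin-counting version of the example can be simulated with a short sketch (the representation below is an illustrative choice, not part of the original example): the state is the triple of counts on the table, and the distribution of the next draw is determined by that state alone, since the purse holds five of each type minus whatever is already on the table.

```python
import random

def draw_sequence(n_draws=6):
    """State = (quarters, dimes, nickels) currently on the table."""
    state = (0, 0, 0)
    for _ in range(n_draws):
        remaining = [5 - c for c in state]          # coins left in the purse, per type
        coin = random.choices([0, 1, 2], weights=remaining)[0]
        state = tuple(c + (1 if i == coin else 0) for i, c in enumerate(state))
    return state

print(draw_sequence())  # e.g. (1, 0, 5): one quarter, no dimes, five nickels
```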
Formal definition
Discrete-time Markov chain
A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n),

if both conditional probabilities are well defined, that is, if Pr(X_1 = x_1, ..., X_n = x_n) > 0.
The possible values of Xi form a countable set S called the state space of the chain.
Variations
Time-homogeneous Markov chains are processes where Pr(X_{n+1} = x | X_n = y) = Pr(X_n = x | X_{n−1} = y) for all n. The probability of the transition is independent of n.
Stationary Markov chains are processes where Pr(X_0 = x_0, X_1 = x_1, ..., X_k = x_k) = Pr(X_n = x_0, X_{n+1} = x_1, ..., X_{n+k} = x_k) for all n and k. Every stationary chain can be proved to be time-homogeneous by Bayes' rule. A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of X_0 is a stationary distribution of the Markov chain.
A Markov chain with memory (or a Markov chain of order m), where m is finite, is a process satisfying Pr(X_n = x_n | X_{n−1} = x_{n−1}, X_{n−2} = x_{n−2}, ..., X_1 = x_1) = Pr(X_n = x_n | X_{n−1} = x_{n−1}, X_{n−2} = x_{n−2}, ..., X_{n−m} = x_{n−m}) for n > m. In other words, the future state depends on the past m states. It is possible to construct a chain (Y_n) from (X_n) which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e., Y_n = (X_n, X_{n−1}, ..., X_{n−m+1}).
Continuous-time Markov chain
A continuous-time Markov chain (X_t)_{t ≥ 0} is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements q_ij are non-negative and describe the rate of the process transitions from state i to state j. The elements q_ii are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one.
There are three equivalent definitions of the process.
Infinitesimal definition
Let X_t be the random variable describing the state of the process at time t, and assume the process is in a state i at time t.
Then, knowing X_t = i, X_{t+h} = j is independent of previous values (X_s : s < t), and as h → 0 for all j and for all t,

Pr(X_{t+h} = j | X_t = i) = δ_ij + q_ij h + o(h),

where δ_ij is the Kronecker delta, using the little-o notation.
The q_ij can be seen as measuring how quickly the transition from i to j happens.
Jump chain/holding time definition
Define a discrete-time Markov chain Y_n to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states, where S_i follows the exponential distribution with rate parameter −q_{Y_i Y_i}.
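A minimal sketch of this jump chain/holding time view follows; the two-state rate matrix is invented for illustration. The chain waits in state i for an exponential time with rate −q_ii and then jumps to state j with probability proportional to q_ij.

```python
import random

# Hypothetical rate matrix Q: off-diagonal entries are non-negative, rows sum to zero.
Q = [[-1.0, 1.0],
     [ 2.0, -2.0]]

def simulate_ctmc(Q, state, t_end):
    """Simulate a continuous-time Markov chain via its jump chain and holding times."""
    t, trajectory = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state][state]
        if rate == 0.0:                 # absorbing state: no further jumps
            break
        t += random.expovariate(rate)   # holding time ~ Exponential(-q_ii)
        if t >= t_end:
            break
        # jump: choose j != i with probability q_ij / (-q_ii)
        weights = [Q[state][j] if j != state else 0.0 for j in range(len(Q))]
        state = random.choices(range(len(Q)), weights=weights)[0]
        trajectory.append((t, state))
    return trajectory

print(simulate_ctmc(Q, state=0, t_end=5.0))
```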
Transition probability definition
For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t_0, t_1, t_2, ... and all states recorded at these times i_0, i_1, i_2, i_3, ... it holds that

Pr(X_{t_n} = i_n | X_{t_{n−1}} = i_{n−1}, ..., X_{t_0} = i_0) = p_{i_{n−1} i_n}(t_n − t_{n−1}),

where p_ij is the solution of the forward equation (a first-order differential equation)

P'(t) = P(t) Q,

with initial condition P(0) equal to the identity matrix.
Finite state space
If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to p_ij = Pr(X_{n+1} = j | X_n = i).
Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.
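A small numerical illustration (the matrix values are invented) shows a transition matrix for a two-state chain; each row sums to one, and the (i, j) entry is the probability of moving from state i to state j in one step.

```python
import numpy as np

# Hypothetical two-state chain, e.g. 0 = "sunny", 1 = "rainy".
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

assert np.allclose(P.sum(axis=1), 1.0)  # rows of a right stochastic matrix sum to 1
print(P[0, 1])                           # one-step probability of going from state 0 to state 1
print(np.linalg.matrix_power(P, 3))      # three-step transition probabilities (time-homogeneous case)
```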
Stationary distribution relation to eigenvectors and simplices
A stationary distribution π is a (row) vector whose entries are non-negative and sum to 1, that is unchanged by the operation of the transition matrix P on it, and so is defined by

π P = π.

By comparing this definition with that of an eigenvector we see that the two concepts are related and that

π = e / Σ_i e_i

is a normalized (Σ_i π_i = 1) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.
The values of a stationary distribution are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as Σ_i 1 · π_i = 1, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.
Time-homogeneous Markov chain with a finite state space
If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, Pk.
If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π:
lim_{k→∞} P^k = 1 π,
where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, lim_{k→∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.
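Continuing the sketch above with the same made-up P, this convergence of P^k to the rank-one matrix 1π can be observed directly:

    import numpy as np

    P = np.array([[0.90, 0.075, 0.025],
                  [0.15, 0.80,  0.05 ],
                  [0.25, 0.25,  0.50 ]])

    Pk = np.linalg.matrix_power(P, 50)
    print(Pk)     # every row is (approximately) the same stationary distribution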
For some stochastic matrices P, the limit lim_{k→∞} P^k does not exist while the stationary distribution does, as shown by this example: the two-state matrix P with P_{12} = P_{21} = 1 and P_{11} = P_{22} = 0 satisfies P^{2k} = I and P^{2k+1} = P, so P^k does not converge even though the stationary distribution (1/2, 1/2) exists.
(This example illustrates a periodic Markov chain.)
Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define Q = lim_{k→∞} P^k.
It is always true that QP = Q.
Subtracting Q from both sides and factoring then yields Q(P − I_n) = 0_{n,n},
where I_n is the identity matrix of size n, and 0_{n,n} is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I_n)]^{−1} exists then Q = f(0_{n,n}) · [f(P − I_n)]^{−1}.
Explanation: the original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix, each of whose rows sums to 1. So it needs any n×n independent linear equations of the (n×n + n) equations to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P − I_n)" have been replaced by the n stochastic ones.
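A sketch of this method (Python with NumPy, reusing the made-up P from the earlier sketch); replacing the right-most column of (P − I_n) by ones encodes the normalisation constraint:

    import numpy as np

    P = np.array([[0.90, 0.075, 0.025],
                  [0.15, 0.80,  0.05 ],
                  [0.25, 0.25,  0.50 ]])
    n = len(P)

    def f(A):
        """Return A with its right-most column replaced by all 1's."""
        B = A.copy()
        B[:, -1] = 1.0
        return B

    # Solve pi (P - I) = 0 together with sum(pi) = 1.
    pi = np.linalg.solve(f(P - np.eye(n)).T, np.eye(n)[-1])
    Q = np.tile(pi, (n, 1))        # the limit matrix: every row equals pi

    print(pi)
    print(np.allclose(np.linalg.matrix_power(P, 200), Q))   # True for this P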
One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.
Convergence speed to the stationary distribution
As stated earlier, from the equation π = π P (if it exists), the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then assuming that P is diagonalizable, or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a somewhat more involved set of arguments in a similar way.)
Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P, and let Σ be the diagonal matrix of eigenvalues of P, that is, Σ = diag(λ1, λ2, λ3, ..., λn). Then, by eigendecomposition, U^T P = Σ U^T, that is, P = (U^T)^{−1} Σ U^T.
Let the eigenvalues be enumerated such that 1 = |λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|.
Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let u_i be the i-th column of the U matrix, that is, u_i is the left eigenvector of P corresponding to λ_i. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors u_i span ℝ^n, we can write x = a_1 u_1^T + a_2 u_2^T + ... + a_n u_n^T for some coefficients a_i.
If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = a_1 u_1^T = lim_{k→∞} x P^k. That means
x P^k = a_1 λ_1^k u_1^T + a_2 λ_2^k u_2^T + ... + a_n λ_n^k u_n^T.
Since π is parallel to u_1 (normalized by L2 norm) and π^(k) = x P^k is a probability vector, π^(k) approaches a_1 u_1^T = π as k → ∞ with a speed on the order of (λ2/λ1)^k, i.e. exponentially. This follows because |λ2| ≥ |λ3| ≥ ... ≥ |λn|, hence λ2/λ1 is the dominant term. The smaller the ratio |λ2/λ1| is, the faster the convergence. Random noise in the state distribution π can also speed up this convergence to the stationary distribution.
General state space
Harris chains
Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains.
The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.
Locally interacting Markov chains
"Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form.
See interacting particle system and stochastic cellular automata (probabilistic cellular automata).
See, for instance, Interaction of Markov Processes.
Properties
Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is one communicating class, the state space.
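As a quick illustrative sketch (Python with NumPy; the matrix P is a made-up example), communicating classes can be found from the zero/positive pattern of P by computing mutual reachability:

    import numpy as np

    def communicating_classes(P):
        """Group the states of the chain with transition matrix P into communicating classes."""
        n = len(P)
        reach = ((np.eye(n) + P) > 0).astype(float)
        for _ in range(n):                       # transitive closure by repeated squaring
            reach = ((reach @ reach) > 0).astype(float)
        mutual = (reach > 0) & (reach.T > 0)     # i and j reach each other
        classes, seen = [], set()
        for i in range(n):
            if i not in seen:
                cls = {j for j in range(n) if mutual[i, j]}
                classes.append(cls)
                seen |= cls
        return classes

    P = np.array([[0.5, 0.5, 0.0],
                  [0.5, 0.5, 0.0],
                  [0.3, 0.3, 0.4]])
    print(communicating_classes(P))    # [{0, 1}, {2}]: two classes, so not irreducible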
A state i has period k if k is the greatest common divisor of the numbers of transitions by which i can be reached, starting from i. That is:
k = gcd{ n > 0 : Pr(X_n = i | X_0 = i) > 0 }.
The state is periodic if k > 1; otherwise k = 1 and the state is aperiodic.
A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise. For a recurrent state i, the mean hitting time is defined as M_i = E[T_i], the expected number of steps until the chain, started at i, first returns to i.
State i is positive recurrent if M_i is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties; that is, if one state has the property then all states in its communicating class have the property.
A state i is called absorbing if there are no outgoing transitions from the state.
Irreducibility
Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.
If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by π_i = 1/M_i, the reciprocal of the mean recurrence time of state i.
Ergodicity
A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time.
If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer k such that all entries of P^k are positive.
It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in a number of steps less than or equal to N. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.
A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.
Terminology
Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones. In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory.
Some authors call a matrix primitive iff there exists some integer k such that all entries of A^k are positive. Some authors call it regular.
Index of primitivity
The index of primitivity, or exponent, of a regular matrix A is the smallest k such that all entries of A^k are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of A is zero or positive, and therefore can be found on a directed graph with sign(A) as its adjacency matrix.
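An illustrative sketch (Python with NumPy; the matrix A is a made-up example) that finds the exponent by checking successive powers of the zero/positive pattern, stopping at Wielandt's bound:

    import numpy as np

    def primitivity_index(A):
        """Smallest k such that every entry of A**k is positive, or None if A is not primitive."""
        n = len(A)
        bound = (n - 1) ** 2 + 1                 # Wielandt's bound for primitive matrices
        B = (np.asarray(A) > 0).astype(int)      # only the zero/positive pattern matters
        M = B.copy()
        for k in range(1, bound + 1):
            if M.min() > 0:
                return k
            M = ((M @ B) > 0).astype(int)
        return None

    # Made-up example: a directed 3-cycle with a self-loop added at one state.
    A = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 1]])
    print(primitivity_index(A))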
There are several combinatorial results about the exponent when there are finitely many states. Let n be the number of states; then:
The exponent is at most (n − 1)^2 + 1 (Wielandt's bound); equality holds only for Wielandt's example, a directed cycle through all n states with a single additional edge.
If has diagonal entries, then its exponent is .
If is symmetric, then has positive diagonal entries, which by previous proposition means its exponent is .
(Dulmage-Mendelsohn theorem) The exponent is where is the girth of the graph. It can be improved to , where is the diameter of the graph.
Measure-preserving dynamical system
If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: let the probability space be Ω = Σ^ℕ, where Σ is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let T : Ω → Ω be the shift operator: T(X_0, X_1, X_2, ...) = (X_1, X_2, X_3, ...). Similarly we can construct such a dynamical system with Ω = Σ^ℤ instead.
Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains.
In ergodic theory, a measure-preserving dynamical system is called "ergodic" iff for any measurable subset S, T^{−1}(S) = S implies S = ∅ or S = Ω (up to a null set).
The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible iff its corresponding measure-preserving dynamical system is ergodic.
Markovian representations
In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X; mathematically, each state of Y is the collection of values taken by X over the corresponding time interval.
If Y has the Markov property, then it is a Markovian representation of X.
An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.
Hitting times
The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.
Expected hitting times
For a subset of states A ⊆ S, the vector k^A of hitting times (where element k_i^A represents the expected time, starting in state i, until the chain enters one of the states in the set A) is the minimal non-negative solution to
k_i^A = 0 for i ∈ A, and
−Σ_{j∈S} q_{ij} k_j^A = 1 for i ∉ A.
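A sketch of solving this system for a finite chain (Python with NumPy; the rate matrix is a made-up example, and A is assumed reachable from every state so that the linear system has a unique solution):

    import numpy as np

    def expected_hitting_times(Q, A):
        """Expected hitting times of the set A for a CTMC with rate matrix Q.

        Solves k_i = 0 for i in A and -sum_j q_ij k_j = 1 for i not in A.
        """
        n = len(Q)
        M = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(n):
            if i in A:
                M[i, i] = 1.0       # enforce k_i = 0
            else:
                M[i] = -Q[i]        # -sum_j q_ij k_j = 1
                b[i] = 1.0
        return np.linalg.solve(M, b)

    # Made-up 3-state rate matrix with state 2 absorbing; expected time to reach state 2.
    Q = np.array([[-2.0,  1.0, 1.0],
                  [ 3.0, -5.0, 2.0],
                  [ 0.0,  0.0, 0.0]])
    print(expected_hitting_times(Q, A={2}))     # [6/7, 5/7, 0]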
Time reversal
For a CTMC Xt, the time-reversed process over a fixed horizon T is defined to be X̂_t = X_{T−t}. By Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
Embedded Markov chain
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_ij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by s_ij = q_ij / Σ_{k≠i} q_ik for i ≠ j, with s_ii = 0.
From this, S may be written as
S = I − (diag(Q))^{−1} Q,
where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.
To find the stationary probability distribution vector, we must next find φ such that
φ S = φ,
with φ being a row vector, such that all elements in φ are greater than 0 and sum to 1. From this, π may be found as
π = −φ (diag(Q))^{−1}, normalized so that its entries sum to one.
(S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.)
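A sketch of this procedure (Python with NumPy; the rate matrix Q is a made-up example):

    import numpy as np

    Q = np.array([[-2.0,  1.0,  1.0],
                  [ 3.0, -5.0,  2.0],
                  [ 1.0,  4.0, -5.0]])

    d = -np.diag(Q)                      # exit rates -q_ii
    S = Q / d[:, None]                   # s_ij = q_ij / -q_ii for i != j ...
    np.fill_diagonal(S, 0.0)             # ... and s_ii = 0 (the jump chain)

    # Stationary distribution phi of the embedded chain: phi S = phi.
    w, v = np.linalg.eig(S.T)
    phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    phi /= phi.sum()

    # Recover the CTMC stationary distribution: pi_i proportional to phi_i / exit rate.
    pi = phi / d
    pi /= pi.sum()
    print(pi, pi @ Q)                    # pi Q is (numerically) zero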
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
Special types of Markov chains
Markov model
Markov models are used to model changing systems. There are four main types of models that generalize Markov chains, depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: the Markov chain itself (fully observable, autonomous), the hidden Markov model (partially observable, autonomous), the Markov decision process (fully observable, controlled), and the partially observable Markov decision process (partially observable, controlled).
Bernoulli scheme
A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.
Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example.
Subshift of finite type
When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type. A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems.
Applications
Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends, wind power, stochastic terrorism, and solar irradiance. The Markov chain forecasting models utilize a variety of settings, from discretizing the time series, to hidden Markov models combined with wavelets, and the Markov chain mixture distribution model (MCM).
Physics
Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description. For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects.
Markov chains are used in lattice QCD simulations.
Chemistry
A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state.
The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.
An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.
Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains.
Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.
Biology
Markov chains are used in various areas of biology. Notable examples include:
Phylogenetics and bioinformatics, where most models of DNA evolution use continuous-time Markov chains to describe the nucleotide present at a given site in the genome.
Population dynamics, where Markov chains are in particular a central tool in the theoretical study of matrix population models.
Neurobiology, where Markov chains have been used, e.g., to simulate the mammalian neocortex.
Systems biology, for instance with the modeling of viral infection of single cells.
Compartmental models for disease outbreak and epidemic modeling.
Testing
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.
Solar irradiance variability
Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains, including modeling the two states of clear and cloudy skies as a two-state Markov chain.
Speech recognition
Hidden Markov models have been used in automatic speech recognition systems.
Information theory
Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning.
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection).
The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios.
Queueing theory
Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).
Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i – 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.
Internet applications
The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability to be at page i in the stationary distribution on the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i links from it, then it has transition probability (1 − α)/k_i + α/N to each page it links to, and α/N to each page it does not link to. The parameter α is taken to be about 0.15.
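An illustrative sketch of this chain and its stationary distribution (Python with NumPy; the four-page link structure is invented, and α = 0.15 as in the text):

    import numpy as np

    # Made-up tiny web graph: links[i] is the set of pages that page i links to.
    links = {0: {1, 2}, 1: {2}, 2: {0}, 3: {2}}
    N, alpha = 4, 0.15                   # alpha is the teleportation parameter

    # Build the transition matrix: follow a random outgoing link with probability
    # 1 - alpha, otherwise jump to a uniformly random page.
    G = np.full((N, N), alpha / N)
    for i, out in links.items():
        for j in out:
            G[i, j] += (1 - alpha) / len(out)

    # PageRank is the stationary distribution, found here by power iteration.
    pi = np.full(N, 1.0 / N)
    for _ in range(100):
        pi = pi @ G
    print(pi)                            # page 2, with the most in-links, scores highest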
Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.
Statistics
Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.
Conflict and combat
In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws.
In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."
Economics and finance
Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953. Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes. Louis Bachelier was the first to observe that stock prices followed a random walk. The random walk was later seen as evidence in favor of the efficient-market hypothesis and random walk models were popular in the literature of the 1960s. Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.
Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.
Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.
Social sciences
Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from authoritarian to democratic regime.
Music
Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix. An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.
A second-order Markov chain can be introduced by considering the current state and also the previous state. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system.
Markov chains can be used structurally, as in Xenakis's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input.
Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.
Games and sports
Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team.
He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf.
Markov text generators
Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V. Shaney, and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
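A minimal sketch of such a generator (in Python; the training text and the restart-on-dead-end behaviour are illustrative choices, not any particular library's API):

    import random
    from collections import defaultdict

    def build_chain(text, order=1):
        """Markov chain over words: maps each state (tuple of words) to observed successors."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=1, length=20, seed=0):
        rng = random.Random(seed)
        out = list(rng.choice(list(chain)))          # start from a random observed state
        for _ in range(length):
            options = chain.get(tuple(out[-order:]))
            if not options:                          # dead end: restart from a random state
                options = chain[rng.choice(list(chain))]
            out.append(rng.choice(options))
        return " ".join(out)

    sample = "the cat sat on the mat and the cat ran after the dog"
    print(generate(build_chain(sample)))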
See also
Dynamics of Markovian particles
Gauss–Markov process
Markov chain approximation method
Markov chain geostatistics
Markov chain mixing time
Markov chain tree theorem
Markov decision process
Markov information source
Markov odometer
Markov operator
Markov random field
Master equation
Quantum Markov chain
Semi-Markov process
Stochastic cellular automaton
Telescoping Markov chain
Variable-order Markov model
Notes
References
A. A. Markov (1906) "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135–156.
A. A. Markov (1971). "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons.
Classical Text in Translation:
Leo Breiman (1992) [1968] Probability. Original edition published by Addison-Wesley; reprinted by Society for Industrial and Applied Mathematics . (See Chapter 7)
J. L. Doob (1953) Stochastic Processes. New York: John Wiley and Sons .
S. P. Meyn and R. L. Tweedie (1993) Markov Chains and Stochastic Stability. London: Springer-Verlag . online: MCSS . Second edition to appear, Cambridge University Press, 2009.
(NB. This was originally published in Russian as Markovskiye protsessy by Fizmatgiz in 1963 and translated to English with the assistance of the author.)
S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007. . Appendix contains abridged Meyn & Tweedie. online: CTCN
Extensive, wide-ranging book meant for specialists, written for both theoretical computer scientists as well as electrical engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines, Markov processes, and undecidability. Excellent treatment of Markov processes pp. 449ff. Discusses Z-transforms, D transforms in their context.
Classical text. cf Chapter 6 Finite Markov Chains pp. 384ff.
John G. Kemeny & J. Laurie Snell (1960) Finite Markov Chains, D. van Nostrand Company
E. Nummelin. "General irreducible Markov chains and non-negative operators". Cambridge University Press, 1984, 2004.
Seneta, E. Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973)
Kishor S. Trivedi, Probability and Statistics with Reliability, Queueing, and Computer Science Applications, John Wiley & Sons, Inc. New York, 2002. .
K. S. Trivedi and R.A.Sahner, SHARPE at the age of twenty-two, vol. 36, no. 4, pp. 52–57, ACM SIGMETRICS Performance Evaluation Review, 2009.
R. A. Sahner, K. S. Trivedi and A. Puliafito, Performance and reliability analysis of computer systems: an example-based approach using the SHARPE software package, Kluwer Academic Publishers, 1996. .
G. Bolch, S. Greiner, H. de Meer and K. S. Trivedi, Queueing Networks and Markov Chains, John Wiley, 2nd edition, 2006. .
External links
Markov Chains chapter in American Mathematical Society's introductory probability book
A visual explanation of Markov Chains
Original paper by A.A Markov (1913): An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains (translated from Russian)
Markov processes
Markov models
Graph theory
Random text generation | Markov chain | Mathematics | 10,430 |
655,358 | https://en.wikipedia.org/wiki/Standard%20time | Standard time is the synchronization of clocks within a geographical region to a single time standard, rather than a local mean time standard. Generally, standard time agrees with the local mean time at some meridian that passes through the region, often near the centre of the region. Historically, standard time was established during the 19th century to aid weather forecasting and train travel. Applied globally in the 20th century, the geographical regions became time zones. The standard time in each time zone has come to be defined as an offset from Universal Time. A further offset is applied for part of the year in regions with daylight saving time.
The adoption of standard time, because of the inseparable correspondence between longitude and time, solidified the concept of halving the globe into the Eastern Hemisphere and the Western Hemisphere, with one Prime Meridian replacing the various prime meridians that had previously been used.
History of standard time
During the 19th century, scheduled steamships and trains required time standardisation in the industrialized world.
Great Britain
A standardised time system was first used by British railways on 1 December 1847, when they switched from local mean time, which varied from place to place, to Greenwich Mean Time (GMT). It was also given the name railway time, reflecting the important role the railway companies played in bringing it about. The vast majority of Great Britain's public clocks were standardised to GMT by 1855.
North America
Until 1883, each United States railroad chose its own time standards. The Pennsylvania Railroad used the "Allegheny Time" system, an astronomical timekeeping service which had been developed by Samuel Pierpont Langley at the University of Pittsburgh's Allegheny Observatory (then known as the Western University of Pennsylvania, located in Pittsburgh, Pennsylvania). Instituted in 1869, the Allegheny Observatory's service is believed to have been the first regular and systematic system of time distribution to railroads and cities as well as the origin of the modern standard time system. By 1870 the Allegheny Time service extended over 2,500 miles with 300 telegraph offices receiving time signals.
However, almost all railroads out of New York ran on New York time, and railroads west from Chicago mostly used Chicago time, but between Chicago and Pittsburgh/Buffalo the norm was Columbus time, even on railroads such as the PFtW&C and LS&MS, which did not run through Columbus. The Santa Fe Railroad used Jefferson City (Missouri) time all the way to its west end at Deming, New Mexico, as did the east–west lines across Texas; Central Pacific and Southern Pacific Railroads used San Francisco time all the way to El Paso. The Northern Pacific Railroad had seven time zones between St. Paul and the 1883 west end of the railroad at Wallula Jct; the Union Pacific Railway was at the other extreme, with only two time zones between Omaha and Ogden.
In 1870, Charles F. Dowd proposed four time zones based on the meridian through Washington, DC, for North American railroads. In 1872 he revised his proposal to base it on the Greenwich meridian. Sandford Fleming, a Scottish-born Canadian engineer, proposed worldwide Standard Time at a meeting of the Royal Canadian Institute on February 8, 1879. Cleveland Abbe advocated standard time to better coordinate international weather observations and resultant weather forecasts, which had been coordinated using local solar time. In 1879 he recommended four time zones across the contiguous United States, based upon Greenwich Mean Time. The General Time Convention (renamed the American Railway Association in 1891), an organization of US railroads charged with coordinating schedules and operating standards, became increasingly concerned that if the US government adopted a standard time scheme it would be disadvantageous to its member railroads. William F. Allen, the Convention secretary, argued that North American railroads should adopt a five-zone standard, similar to the one in use today, to avoid government action. On October 11, 1883, the heads of the major railroads met in Chicago at the Grand Pacific Hotel and agreed to adopt Allen's proposed system.
The members agreed that on Sunday, November 18, 1883, all United States and Canadian railroads would readjust their clocks and watches to reflect the new five-zone system on a telegraph signal from the Allegheny Observatory in Pittsburgh at exactly noon on the 90th meridian. Although most railroads adopted the new system as scheduled, some did so early on October 7 and others late on December 2. The Intercolonial Railway serving the Canadian maritime provinces of New Brunswick and Nova Scotia just east of Maine decided not to adopt Intercolonial Time based on the 60th meridian west of Greenwich, instead adopting Eastern Time, so only four time zones were actually adopted by American and Canadian railroads in 1883. Major American observatories, including the Allegheny Observatory, the United States Naval Observatory, the Harvard College Observatory, and the Yale University Observatory, agreed to provide telegraphic time signals at noon Eastern Time.
Standard time was not enacted into US law until the 1918 Standard Time Act established standard time in time zones; the law also instituted daylight saving time (DST). The daylight saving time portion of the law was repealed in 1919 over a presidential veto, but was re-established nationally during World War II. In 2007 the US enacted a federal law formalising the use of Coordinated Universal Time as the basis of standard time, and the role of the Secretary of Commerce (effectively, the National Institute of Standards and Technology) and the Secretary of the Navy (effectively, the US Naval Observatory) in interpreting standard time.
In 1999, standard time was inducted into the North America Railway Hall of Fame in the category "National: Technical Innovations."
The Dominion of Newfoundland, whose capital St. John's falls almost exactly midway between the meridians anchoring the Atlantic Time Zone and the Greenland Time Zone, voted in 1935 to create a half-hour offset time zone known as the Newfoundland Time Zone, at three and a half hours behind Greenwich time.
The Netherlands
In the Netherlands, introduction of the railways made it desirable to create a standard time. On 1 May 1909, Amsterdam Time or Dutch Time was introduced. Before that, time was measured in different cities; in the east of the country, this was a few minutes earlier than in the west. After that, all parts of the country had the same local time—that of the Wester Tower in Amsterdam (Westertoren/4°53'01.95" E). This time was indicated as GMT +0h 19m 32.13s until 17 March 1937, after which it was simplified to GMT+0h20m. This time zone was also known as the Loenen time or Gorinchem time, as this was the exact time in both Loenen and Gorinchem. At noon in Amsterdam, it was 11:40 in London and 12:40 in Berlin.
The shift to the current Central European Time zone took place on 16 May 1940. The German occupiers ordered the clock to be moved an hour and forty minutes forward. This time was kept in summer and winter throughout 1941 and 1942. It was only in November 1942 that a different Winter time was introduced, and the time was adjusted one hour backwards. This lasted for only three years; after the liberation of the Netherlands in 1945, Summer time was abolished for over thirty years, so during those years, standard time was 40 minutes ahead of the original Amsterdam Time. As of 2017, the Netherlands is in line with Central European Time (GMT+1 in the winter, GMT+2 in the summer, which is significantly different from Amsterdam Time).
New Zealand
In 1868, New Zealand was the first country in the world to establish a nationwide standard time.
A telegraph cable between New Zealand's two main islands became the instigating factor for the establishment of "New Zealand time". In 1868, the Telegraph Department adopted "Wellington time" as the standard time across all their offices so that opening and closing times could be synchronised. The Post Office, which usually shared the same building, followed suit. However, protests that time was being dictated by one government department led to a resolution in parliament to establish a standard time for the whole country.
The director of the Geological Survey, James Hector, selected New Zealand time to be at the meridian 172°30′E. This was very close to the country's mean longitude and exactly 11.5 hours in advance of Greenwich Mean Time. It came into effect on 2 November 1868.
For over fifty years, the Colonial Time Service Observatory in Wellington determined the correct time each morning. At 9 a.m. each day, it was transmitted by Morse code to post offices and railway stations around the country. In 1920, radio time signals began broadcasting, greatly increasing the accuracy of the time nationwide.
See also
Daylight saving time
International Meridian Conference of 1884
Mecca Time
Time standard
Time zone
Universal Time
References
Further reading
Time scales | Standard time | Physics,Astronomy | 1,795 |
41,164,606 | https://en.wikipedia.org/wiki/Interfungin | Interfungins are a group of chemical compounds isolated from fungi in the genus Phellinus which have NF-κB inhibitory activities.
References
2-Pyrones
Phellinus
Catechols | Interfungin | Chemistry | 44 |
15,498,897 | https://en.wikipedia.org/wiki/Chief%20Scientist%20Office | The Chief Scientist Office is part of the Health and Wellbeing Directorate of the Scottish Government. The Chief Scientist (Health) is currently Professor Dame Anna Dominiczak.
See also
Health Science Scotland
References
External links
Scotland
Health
Health in Scotland
Scotland
Healthcare science in the United Kingdom
NHS Scotland | Chief Scientist Office | Technology | 57 |
3,815,666 | https://en.wikipedia.org/wiki/Lattice%20model%20%28finance%29 | In quantitative finance, a lattice model is a numerical approach to the valuation of derivatives in situations requiring a discrete time model. For dividend paying equity options, a typical application would correspond to the pricing of an American-style option, where a decision to exercise is allowed at the closing of any calendar day up to the maturity. A continuous model, on the other hand, such as the standard Black–Scholes one, would only allow for the valuation of European options, where exercise is limited to the option's maturity date. For interest rate derivatives lattices are additionally useful in that they address many of the issues encountered with continuous models, such as pull to par. The method is also used for valuing certain exotic options, because of path dependence in the payoff. Traditional Monte Carlo methods for option pricing fail to account for optimal decisions to terminate the derivative by early exercise, but some methods now exist for solving this problem.
Equity and commodity derivatives
In general the approach is to divide time between now and the option's expiration into N discrete periods. At the specific time n, the model has a finite number of outcomes at time n + 1 such that every possible change in the state of the world between n and n + 1 is captured in a branch. This process is iterated until every possible path between n = 0 and n = N is mapped. Probabilities are then estimated for every n to n + 1 path. The outcomes and probabilities flow backwards through the tree until a fair value of the option today is calculated.
For equity and commodities the application is as follows. The first step is to trace the evolution of the option's key underlying variable(s), starting with today's spot price, such that this process is consistent with its volatility; log-normal Brownian motion with constant volatility is usually assumed. The next step is to value the option recursively: stepping backwards from the final time-step, where we have exercise value at each node; and applying risk neutral valuation at each earlier node, where option value is the probability-weighted present value of the up- and down-nodes in the later time-step. See Binomial options pricing model for more detail, as well as for the logic and formulae derivation.
As stated above, the lattice approach is particularly useful in valuing American options, where the choice whether to exercise the option early, or to hold the option, may be modeled at each discrete time/price combination; this is also true for Bermudan options. For similar reasons, real options and employee stock options are often modeled using a lattice framework, though with modified assumptions. In each of these cases, a third step is to determine whether the option is to be exercised or held, and to then apply this value at the node in question. Some exotic options, such as barrier options, are also easily modeled here; for other Path-Dependent Options, simulation would be preferred.
(Although tree-based methods have been developed for these as well.)
The simplest lattice model is the binomial options pricing model; the standard ("canonical") method is that proposed by Cox, Ross and Rubinstein (CRR) in 1979. Over 20 other methods have been developed, with each "derived under a variety of assumptions" as regards the development of the underlying's price. In the limit, as the number of time-steps increases, these converge to the log-normal distribution, and hence produce the "same" option price as Black-Scholes: to achieve this, these will variously seek to agree with the underlying's central moments, raw moments and / or log-moments at each time-step, as measured discretely. Further enhancements are designed to achieve stability relative to Black-Scholes as the number of time-steps changes. More recent models, in fact, are designed around direct convergence to Black-Scholes.
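As an illustrative sketch of the CRR scheme just described (Python; the contract parameters are made up and no dividends are assumed), an American put can be valued by backward induction with an early-exercise check at each node:

    import math

    def crr_american_put(S0, K, r, sigma, T, steps):
        """Price an American put on a CRR binomial lattice (no dividends assumed)."""
        dt = T / steps
        u = math.exp(sigma * math.sqrt(dt))       # CRR up factor
        d = 1.0 / u                               # CRR down factor
        p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
        disc = math.exp(-r * dt)

        # Exercise values at maturity (j = number of up moves).
        values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

        # Step backwards: compare continuation value with immediate exercise at each node.
        for n in range(steps - 1, -1, -1):
            for j in range(n + 1):
                cont = disc * (p * values[j + 1] + (1 - p) * values[j])
                exercise = max(K - S0 * u**j * d**(n - j), 0.0)
                values[j] = max(cont, exercise)
        return values[0]

    print(crr_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))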
A variant on the Binomial, is the Trinomial tree, developed by Phelim Boyle in 1986.
Here, the share price may remain unchanged over the time-step, and option valuation is then based on the value of the share at the up-, down- and middle-nodes in the later time-step.
As for the binomial, a similar (although smaller) range of methods exist. The trinomial model is considered to produce more accurate results than the binomial model when fewer time steps are modelled, and is therefore used when computational speed or resources may be an issue. For vanilla options, as the number of steps increases, the results rapidly converge, and the binomial model is then preferred due to its simpler implementation. For exotic options the trinomial model (or adaptations) is sometimes more stable and accurate, regardless of step-size.
Various of the Greeks can be estimated directly on the lattice, where the sensitivities are calculated using finite differences. Delta and gamma, being sensitivities of option value w.r.t. price, are approximated given differences between option prices – with their related spot – in the same time step. Theta, sensitivity to time, is likewise estimated given the option price at the first node in the tree and the option price for the same spot in a later time step. (Second time step for trinomial, third for binomial. Depending on method, if the "down factor" is not the inverse of the "up factor", this method will not be precise.) For rho, sensitivity to interest rates, and vega, sensitivity to input volatility, the measurement is indirect, as the value must be calculated a second time on a new lattice built with these inputs slightly altered – and the sensitivity here is likewise returned via finite difference. See also Fugit, the estimated time to exercise for a non-European option, which is typically calculated using a lattice.
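A sketch of these finite-difference estimates on a CRR tree (Python; a European call with made-up parameters, following the node-based formulas described above):

    import math

    def crr_call_greeks(S0, K, r, sigma, T, steps):
        """European call value plus delta, gamma and theta read off the first CRR tree nodes."""
        dt = T / steps
        u = math.exp(sigma * math.sqrt(dt))
        d = 1.0 / u
        p = (math.exp(r * dt) - d) / (u - d)
        disc = math.exp(-r * dt)

        V = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
        saved = {}                                 # option values at time-steps 0, 1, 2
        for n in range(steps - 1, -1, -1):
            V = [disc * (p * V[j + 1] + (1 - p) * V[j]) for j in range(n + 1)]
            if n <= 2:
                saved[n] = V[:]

        S = lambda n, j: S0 * u**j * d**(n - j)    # spot at step n after j up moves
        delta = (saved[1][1] - saved[1][0]) / (S(1, 1) - S(1, 0))
        d_up = (saved[2][2] - saved[2][1]) / (S(2, 2) - S(2, 1))
        d_dn = (saved[2][1] - saved[2][0]) / (S(2, 1) - S(2, 0))
        gamma = (d_up - d_dn) / (0.5 * (S(2, 2) - S(2, 0)))
        theta = (saved[2][1] - saved[0][0]) / (2 * dt)   # same spot, two steps later
        return saved[0][0], delta, gamma, theta

    print(crr_call_greeks(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))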
For multiple underlyers – e.g. Rainbow- and Basket options – "multinomial lattices" can be built.
Each underlyer will have its own tree, and the per node option-value will be a function of the corresponding nodes on all underlying trees. In the case of two assets, the tree will then be referred to as a "binomial pyramid".
Two additional complications exist here.
First, the number of nodes increases exponentially with the number of underlyers.
Second, in these products, correlations between assets play a significant role, and these must also inhere in the modelling.
Beginning with the 1987 crash, and especially since the 2007–2008 financial crisis, it has become important to incorporate the volatility smile / surface into pricing models. This recognizes the fact that the underlying price-change distribution displays a term structure and is non-normal, unlike that assumed by Black-Scholes. To do so, banks typically apply stochastic- or local volatility models. In the lattice framework, implied trees can be constructed; these essentially discretize the latter.
Here, the tree is solved such that it successfully reproduces selected (all) market prices, across various strikes and expirations. These trees thus "ensure that all European standard options (with strikes and maturities coinciding with the tree nodes) will have theoretical values which match their market prices". Using the calibrated lattice one can then price options with strike / maturity combinations not quoted in the market, such that these prices are consistent with observed volatility patterns. For risk management, the Greeks returned will reflect sensitivities more appropriately.
There exist both implied binomial trees, often Rubinstein IBTs (R-IBT), and implied trinomial trees, often Derman-Kani-Chriss (DKC; superseding the DK-IBT). The former is easier built, but is consistent with one maturity only; the latter will be consistent with, but at the same time requires, known (or interpolated) prices at all time-steps and nodes.
As regards the construction, for an R-IBT the first step is to recover the "Implied Ending Risk-Neutral Probabilities" of spot prices. Then by the assumption that all paths which lead to the same ending node have the same risk-neutral probability, a "path probability" is attached to each ending node. Thereafter "it's as simple as One-Two-Three", and a three step backwards recursion allows for the node probabilities to be recovered for each time step. Option valuation then proceeds as standard, with these probabilities replacing above. For DKC, the first step is to recover the state prices corresponding to each node in the tree, such that these are consistent with observed option prices (i.e. with the volatility surface). Thereafter the up-, down- and middle-probabilities are found for each node such that: these sum to 1; spot prices adjacent time-step-wise evolve risk neutrally, incorporating dividend yield; state prices similarly "grow" at the risk free rate. (The solution here is iterative per time step as opposed to simultaneous.) As for R-IBTs, option valuation is then by standard backward recursion.
As an alternative, Edgeworth binomial trees may be employed, as these allow for an analyst-specified skew and kurtosis in spot-price returns (see Edgeworth series). Here, options with differing strikes will return differing implied volatilities, and the tree may then be calibrated to the volatility smile by a "judicious choice" of parameter values. For pricing American options, the valuation will be on an R-IBT as combined with the calibrated maturity distribution.
The Edgeworth approach is limited as to the set of skewness and kurtosis pairs for which valid distributions are available; the more recent Johnson binomial trees, then, use the Johnson "family" of distributions, as this is capable of accommodating all possible pairs.
Edgeworth (or Johnson) trees are also useful for other applications where the underlying's behavior departs (markedly) from normal. As an example, these trees can be applied to multinomial options: Basket options, for instance, can be priced using an "approximating distribution" which provides the end-nodes, and skew and kurtosis, on which the tree is then built.
Regarding the modelling of CVA / XVA via lattice, see below.
Interest rate derivatives
Lattices are commonly used in valuing bond options, swaptions, and other interest rate derivatives. In these cases the valuation is largely as above, but requires an additional, zeroeth, step of constructing an interest rate tree, on which the price of the underlying is then based. The next step also differs: the underlying price here is built via "backward induction", i.e. flows backwards from maturity, accumulating the present value of scheduled cash flows at each node, as opposed to flowing forwards from the valuation date as above. The final step, option valuation, then proceeds as standard.
The initial lattice is built by discretizing either a short-rate model, such as Hull–White or Black Derman Toy, or a forward rate-based model, such as the LIBOR market model or HJM. As for equity, trinomial trees may also be employed for these models; this is usually the case for Hull-White trees.
Under HJM, the condition of no arbitrage implies that there exists a martingale probability measure, as well as a corresponding restriction on the "drift coefficients" of the forward rates. These, in turn, are functions of the volatility(s) of the forward rates. A "simple" discretized expression for the drift then allows for forward rates to be expressed in a binomial lattice. For these forward rate-based models, dependent on volatility assumptions, the lattice might not recombine. (This means that an "up-move" followed by a "down-move" will not give the same result as a "down-move" followed by an "up-move".) In this case, the Lattice is sometimes referred to as a "bush", and the number of nodes grows exponentially as a function of number of time-steps. A recombining binomial tree methodology is also available for the Libor Market Model.
As regards the short-rate models, these are, in turn, further categorized: these will be either equilibrium-based (Vasicek and CIR) or arbitrage-free (Ho–Lee and subsequent).
This distinction: for equilibrium-based models the yield curve is an output from the model, while for arbitrage-free models the yield curve is an input to the model. In the former case, the approach is to "calibrate" the model parameters, such that bond prices produced by the model, in its continuous form, best fit observed market prices. The tree is then built as a function of these parameters.
In the latter case, the calibration is directly on the lattice: the fit is to both the current term structure of interest rates (i.e. the yield curve), and the corresponding volatility structure.
Here, calibration means that the interest-rate-tree reproduces the prices of the zero-coupon bonds—and any other interest-rate sensitive securities—used in constructing the yield curve; note the parallel to the implied trees for equity above, and compare Bootstrapping (finance).
For models assuming a normal distribution (such as Ho-Lee), calibration may be performed analytically, while for log-normal models the calibration is via a root-finding algorithm; see for example, the boxed-description under Black–Derman–Toy model.
The volatility structure (i.e. vertical node-spacing) here reflects the volatility of rates during the quarter, or other period, corresponding to the lattice time-step. (Some analysts use "realized volatility", i.e. of the rates applicable historically for the time-step; to be market-consistent, analysts generally prefer to use current interest rate cap prices, and the implied volatility for the Black-76 prices of each component caplet.)
Given this functional link to volatility, note now the resultant difference in the construction relative to equity implied trees:
for interest rates, the volatility is known for each time-step, and the node-values (i.e. interest rates) must be solved for specified risk neutral probabilities;
for equity, on the other hand, a single volatility cannot be specified per time-step, i.e. we have a "smile", and the tree is built by solving for the probabilities corresponding to specified values of the underlying at each node.
Once calibrated, the interest rate lattice is then used in the valuation of various of the fixed income instruments and derivatives.
For bond options, the underlying bond is first valued at each node by backward induction (step 1), and the option is then valued against those bond prices (step 2); note that this approach addresses the problem of pull to par experienced under closed-form approaches.
For swaptions the logic is almost identical, substituting swaps for bonds in step 1, and swaptions for bond options in step 2.
For caps (and floors) step 1 and 2 are combined: at each node the value is based on the relevant nodes at the later step, plus, for any caplet (floorlet) maturing in the time-step, the difference between its reference-rate and the short-rate at the node (and reflecting the corresponding day count fraction and notional-value exchanged).
For callable- and putable bonds a third step would be required: at each node in the time-step incorporate the effect of the embedded option on the bond price and / or the option price there before stepping-backwards one time-step. (And noting that these options are not mutually exclusive, and so a bond may have several options embedded; hybrid securities are treated below.)
For other, more exotic interest rate derivatives, similar adjustments are made to steps 1 and onward.
For the "Greeks", largely as for equity, see under next section.
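As a sketch of steps 1 and 2 above, the following values a European (or American) option on a coupon bond by backward induction over a calibrated short-rate lattice such as the one produced by the illustrative bdt_lattice sketch earlier; the coupon, strike and 50/50 probabilities are assumptions for the example only.

```python
import math

def bond_option_on_lattice(rates, coupon, strike, option_steps,
                           dt=1.0, q=0.5, call=True, american=False):
    """Value an option on a coupon bond using a calibrated short-rate lattice.

    rates[i][j] is the short rate at step i, node j (e.g. from the earlier sketch);
    the bond matures at the last lattice step, paying `coupon` per step plus 100
    at maturity; the option expires after `option_steps` steps.
    """
    n = len(rates)

    # Step 1: bond values by backward induction (each node's value is ex-coupon)
    bond_at = {n: [100.0] * (n + 1)}
    for i in range(n - 1, -1, -1):
        nxt = bond_at[i + 1]
        bond_at[i] = [math.exp(-rates[i][j] * dt) *
                      (q * nxt[j + 1] + (1 - q) * nxt[j] + coupon)
                      for j in range(i + 1)]

    # Step 2: option values by backward induction on the bond lattice
    sign = 1.0 if call else -1.0
    opt = [max(sign * (bond_at[option_steps][j] - strike), 0.0)
           for j in range(option_steps + 1)]
    for i in range(option_steps - 1, -1, -1):
        new = []
        for j in range(i + 1):
            cont = math.exp(-rates[i][j] * dt) * (q * opt[j + 1] + (1 - q) * opt[j])
            if american:
                cont = max(cont, sign * (bond_at[i][j] - strike))
            new.append(cont)
        opt = new
    return opt[0]

# Example: 2-step European call, struck at par, on the 5-step bond above
print(bond_option_on_lattice(tree, coupon=3.5, strike=100.0, option_steps=2))
```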
An alternative approach to modeling (American) bond options, particularly those struck on yield to maturity (YTM), employs modified equity-lattice methods. Here the analyst builds a CRR tree of YTM, applying a constant volatility assumption, and then calculates the bond price as a function of this yield at each node; prices here are thus pulling-to-par. The second step is to then incorporate any term structure of volatility by building a corresponding DKC tree (based on every second time-step in the CRR tree: as DKC is trinomial whereas CRR is binomial) and then using this for option valuation.
Since the 2007–2008 financial crisis, swap pricing is (generally) under a "multi-curve framework", whereas previously it was off a single, "self discounting", curve. Here, payoffs are set as a function of the reference rate or forecast rate (LIBOR) specific to the tenor in question, while discounting is at the OIS rate.
To accommodate this in the lattice framework, the OIS rate and the relevant reference rate are jointly modeled in a three-dimensional tree, constructed so as to return the input OIS- and LIBOR-swap prices, while also incorporating any correlation between the two rate sets.
With the zeroeth step thus accomplished, the valuation will proceed largely as previously, using steps 1 and onwards, but here – similar to the above "pyramid" – with cashflows based on the LIBOR tree, and discounting using the corresponding nodes from the OIS tree.
A related development is that banks will make a credit valuation adjustment, CVA – as well as various of the other XVA – when assessing the value of derivative contracts that they have entered into. The purpose of these is twofold: primarily to hedge for possible losses due to the other parties' failures to pay amounts due on the derivative contracts; but also to determine (and hedge) the amount of capital required under the bank capital adequacy rules. Although usually calculated under a simulation framework, tree-based methods can be applied here also.
In the case of a swap, for example, the potential future exposure, PFE, facing the bank on each date is the probability-weighted average of the positive settlement payments and swap values over the lattice-nodes at the date; each node's probability is in turn a function of the tree's cumulative up- and down-probabilities. This PFE is combined with the counterparty's (tree-exogenous) probability of default and recovery rate to derive the expected loss for the date. Finally, the aggregated present value of these is the CVA for the counterparty on that position.
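A highly simplified sketch of this lattice-based CVA calculation is below. The swap-value lattice, default probabilities, recovery rate and discount factors are placeholder assumptions; the point is only the mechanics described above: binomial node probabilities, a probability-weighted positive exposure per date, and an expected loss per date that is discounted and summed.

```python
from math import comb

def lattice_cva(swap_values, q, default_prob, recovery, discount):
    """CVA from a lattice of future swap values.

    swap_values[i][j] : value of the swap to the bank at date i, node j
                        (j = number of up-moves, so len(swap_values[i]) == i + 1)
    q                 : risk-neutral up-move probability
    default_prob[i]   : counterparty probability of default in period i (exogenous)
    recovery          : assumed recovery rate on default
    discount[i]       : discount factor from date i back to today
    """
    cva = 0.0
    for i, values in enumerate(swap_values):
        if i == 0:
            continue
        # Cumulative binomial weight of reaching node j at date i
        node_prob = [comb(i, j) * q**j * (1 - q)**(i - j) for j in range(i + 1)]
        # Potential future exposure: probability-weighted positive values
        pfe = sum(p * max(v, 0.0) for p, v in zip(node_prob, values))
        expected_loss = pfe * default_prob[i] * (1.0 - recovery)
        cva += discount[i] * expected_loss
    return cva

# Toy 3-date swap-value lattice (currency units), purely illustrative
swap_values = [[0.0], [-40.0, 55.0], [-80.0, 5.0, 110.0]]
print(lattice_cva(swap_values,
                  q=0.5,
                  default_prob=[0.0, 0.01, 0.012],
                  recovery=0.4,
                  discount=[1.0, 0.97, 0.94]))
```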
Hybrid securities
Hybrid securities, incorporating both equity- and bond-like features, are also valued using trees. For convertible bonds (CBs) the approach of Tsiveriotis and Fernandes (1998) is to divide the value of the bond at each node into an "equity" component, arising from situations where the CB will be converted, and a "debt" component, arising from situations where the CB is redeemed. Correspondingly, twin trees are constructed where discounting is at the risk free and credit risk adjusted rate respectively, with the sum being the value of the CB. There are other methods which similarly combine an equity-type tree with a short-rate tree. An alternate approach, originally published by Goldman Sachs (1994), does not decouple the components; rather, discounting is at a conversion-probability-weighted risk-free and risky interest rate within a single tree. See Contingent convertible bond.
More generally, equity can be viewed as a call option on the firm: where the value of the firm is less than the value of the outstanding debt, shareholders would choose not to repay the firm's debt; they would choose to repay—and not to liquidate (i.e. exercise their option)—otherwise. Lattice models have been developed for equity analysis here, particularly as relates to distressed firms. Relatedly, as regards corporate debt pricing, the relationship between equity holders' limited liability and potential Chapter 11 proceedings has also been modelled via lattice.
The calculation of "Greeks" for interest rate derivatives proceeds as for equity. There is however an additional requirement, particularly for hybrid securities: that is, to estimate sensitivities related to overall changes in interest rates. For a bond with an embedded option, the standard yield to maturity based calculations of duration and convexity do not consider how changes in interest rates will alter the cash flows due to option exercise. To address this, effective duration and effective convexity are introduced. Here, similar to rho and vega above, the interest rate tree is rebuilt for an upward and then downward parallel shift in the yield curve and these measures are calculated numerically given the corresponding changes in bond value.
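The effective measures just described reduce to a simple finite difference once the tree has been rebuilt for the shifted curves; a sketch follows, with made-up revaluation numbers for a callable bond.

```python
def effective_duration_convexity(v_base, v_up, v_down, dy):
    """Effective duration and convexity from three lattice revaluations.

    v_base : bond value from the original calibrated tree
    v_down : value after a downward parallel shift of the yield curve by dy
    v_up   : value after an upward parallel shift by dy
    dy     : size of the parallel shift (e.g. 0.0025 = 25 basis points)
    """
    duration = (v_down - v_up) / (2.0 * v_base * dy)
    convexity = (v_down + v_up - 2.0 * v_base) / (v_base * dy * dy)
    return duration, convexity

# Example: negative effective convexity is typical of a callable bond
print(effective_duration_convexity(v_base=101.20, v_up=99.90, v_down=102.35, dy=0.0025))
```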
References
Bibliography
Options (finance)
Short-rate models
Mathematical finance
Models of computation
Trees (data structures)
Financial models | Lattice model (finance) | Mathematics | 4,384 |
2,701,625 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20life%20expectancy | This list of countries by life expectancy provides a comprehensive list of countries alongside their respective life expectancy figures. The data is differentiated by sex, presenting life expectancies for males, females, and a combined average. In addition to sovereign nations, the list encompasses several non-sovereign entities and territories. The figures serve as an indicator of the quality of healthcare in the respective countries and are influenced by various factors, including the prevalence of diseases such as HIV/AIDS. This article introduces the concept of Healthy life expectancy (HALE), which denotes the average number of years a person is expected to live in "full health". There are challenges in comparing life expectancies across countries due to disparities in data reporting and collection standards. The primary source of the most recent data presented is the World Bank Group's 2022 report.
Methodology
The life expectancy is shown separately for males and for females, as well as a combined figure. Several non-sovereign entities and territories are also included in this list.
The figures reflect the quality of healthcare in the countries listed as well as other factors including HIV infections.
From the beginning of the current century there is a tendency to also estimate healthy life expectancy (HALE), the average number of years that a person can expect to live in "full health".
Comparing life expectancies across countries can be problematic, for example because of poor reporting in some countries and differing local standards for collecting statistics. This is especially true for healthy life expectancy, whose defining criteria may change over time, even within a country. For example, Canada is a country with a fairly high overall life expectancy at 81.63 years; however, this number decreases to 75.5 years for Indigenous people in the country. This discrepancy is echoed in most quality of life metrics across Canada.
United Nations (2023)
Estimates by the analytical agency of the UN. Only countries with populations over 50,000 are listed. Due to this criterion, the table does not include such countries as Monaco (LE 86.37 years, population 39,000), San Marino (LE 85.71 years, population 34,000), and Saint Barthélemy (LE 84.29 years, population 11,000). The values are rounded; all calculations were done on raw data.
These values are used to calculate the Human Development Index.
World Bank Group (2022)
Estimates by the World Bank Group for 2022. Only countries with populations over 50,000 are listed. The values in the World Bank Group tables are rounded, while all calculations were done on raw data; due to the nuances of rounding, apparent inconsistencies of 0.01 year therefore arise in some places.
In 2014, some of the world's leading countries had a local peak in life expectancy, so this year is chosen for comparison with 2019 and 2022.
World Health Organization (2019)
According to data published by the World Health Organization in December 2020. By default, data is sorted by life expectancy at birth for the total population, and in case of equal values, by HALE for the total population.
CIA World Factbook (2022)
The US CIA published the following life expectancy data in its World Factbook.
OECD (2022)
Estimation of OECD for 2022. By default, the list is sorted by 2019.
Gallery
See also
References
External links
Global Agewatch has the latest internationally comparable statistics on life expectancy from 195 countries.
Life expectancy trends interactive graph
Life expectancy interactive world map
Global Life Expectancy (Infographic), LiveScience
U.S. States life expectancy compared to the world (Infographic & Study), Simply Insurance
Life expectancy
Life expectancy
Demographic lists | List of countries by life expectancy | Biology | 772 |
12,069,928 | https://en.wikipedia.org/wiki/Gaps%20and%20gores | Gaps and gores are portions of land areas that do not conform to boundaries found in cadastre and other land surveys based upon imprecise measurements and other ambiguities of metes and bounds. A gap, also known as a hiatus, occurs where the descriptions in deeds describing adjacent properties (unintentionally) overlook a space or "gap" between them. A gore occurs where descriptions in larger administrative boundaries (towns, counties) of adjacent jurisdictions or, large parcels, all fail to include some portion of land between them, forming an unclaimed, characteristically triangular "sliver" of land.
Disputes often arise regarding the ownership of gaps and gores when they are discovered, usually when developers detect sufficient value in the local land. Local laws will determine whether they are considered abandoned or rather adhere to (or may be absorbed by) one adjacent parcel or another. For example, in Tennessee law, tax map boundaries can become property boundaries (notwithstanding a survey and deed to the contrary) merely by paying the taxes on the land for twenty years in the belief that it was part of the ownership, even if it encompasses adjacent gaps and gores. See adverse possession.
See also
Gore (surveying)
Gore (segment)
Land survey
Surveying
Real property law | Gaps and gores | Engineering | 257 |
30,407,344 | https://en.wikipedia.org/wiki/Fort%20de%20Chillon | The Fort de Chillon is a twentieth-century fortification directly adjacent to the medieval Château de Chillon on the edge of Lake Geneva in Switzerland. The fort secures the road and rail lines that pass along the lakeshore running east from Lausanne to the mountainous interior of Switzerland. The position is an advanced work protecting the approaches to Fortress Saint-Maurice, part of the Swiss National Redoubt. Deactivated as a military post in 1995, it is privately owned and has been converted into a museum.
Description
The Fort de Chillon lies directly adjacent to the Château de Chillon, a major tourist attraction. The site has been fortified since at least the 12th century. The Fort de Chillon is a mixed infantry-artillery fort, located almost entirely underground in the steep slope rising to Veytaux above the rail line. The location is spanned by the Viaduc de Chillon. Built at the opening of the Second World War (1941), it was armed with a mixture of 75mm guns and 90mm anti-tank guns. Initial work was completed in 1942. The fort was designated A390 in the Swiss fortification nomenclature. The ensemble included permanent and rapidly deployable anti-tank obstacles, designed to stall or trap an enemy while the fort's weapons fired on them.
The fort was initially armed with two 75mm anti-tank guns for direct fire, replaced with 90mm guns in 1962. Two other 75mm guns were retained until 1978. Two additional 90mm guns and five machine guns completed the offensive armament. Defensive arms included two machine guns each at the Rocher de Veytaux and Montagnette, with three at Champ-Babaud, along with two 81mm mortars. Infantry bunkers command the road and rail lines adjacent to the castle, linked to the main fort by underground galleries.
The Fort de Chillon was manned by units of Mountain Brigade 10. The garrison comprised 131 men. Nine detached bunkers were provided for infantry forces defending the surface of the installation.
This position was covered from the direction of Saint-Maurice by the Fort de Champillon, an artillery fort. The Chablais plain to the south was covered by additional fortifications. The Chablais and Chillon ensembles were not considered part of Fortress Saint-Maurice proper, but were important advance works to delay and weaken an attacker before they reached the Saint-Maurice stopping line, or fort d'arrêt. The entire region is fortified with anti-tank barriers, permanent minefields and other barriers, while tunnels, bridges and retaining walls are mined or prepared for demolition.
Present status
As a result of the reorganization of Swiss fortifications under the Army 95 program, the fort was deactivated and declared surplus. Chillon was considered for purchase by the Canton of Vaud in 1998 and was briefly opened for public tours, with the intention of making it into a museum. However, the transaction fell through. The town of Chillon waived its rights in 2004, followed by the neighboring town of Veytaux. In 2010 the property was purchased by investors who planned to make the site a wine storage facility and tasting facility. Tours are available.
References
External links
Fort de Chillon Official website of the Fort de Chillon
Article on the Fort de Chillon with diagrams
Association Fort de Litroz at Association Fort de Litroz
Visite humoristique du fort de Chillon at APSF
Fort de Chillon (facebook) Official Facebook page of the Fort de Chillon
Fortifications of Switzerland built in the 20th century
Government buildings completed in 1942
Military installations established in 1942
Forts in Switzerland
1942 establishments in Switzerland
20th-century architecture in Switzerland | Fort de Chillon | Engineering | 730 |
14,439,807 | https://en.wikipedia.org/wiki/P2RY13 | P2Y purinoceptor 13 is a protein that in humans is encoded by the P2RY13 gene.
The product of this gene, P2Y13, belongs to the family of G-protein coupled receptors. This family has several receptor subtypes with different pharmacological selectivity, which overlaps in some cases, for various adenosine and uridine nucleotides. This receptor is activated by ADP. Two transcript variants encoding the same protein have been identified for this gene.
See also
P2Y receptor
References
Further reading
External links
G protein-coupled receptors | P2RY13 | Chemistry | 125 |
506,906 | https://en.wikipedia.org/wiki/Density%20of%20air | The density of air or atmospheric density, denoted ρ, is the mass per unit volume of Earth's atmosphere. Air density, like air pressure, decreases with increasing altitude. It also changes with variations in atmospheric pressure, temperature and humidity. At 101.325 kPa (abs) and 20 °C (68 °F), air has a density of approximately , according to the International Standard Atmosphere (ISA). At 101.325kPa (abs) and , air has a density of approximately , which is about that of water, according to the International Standard Atmosphere (ISA). Pure liquid water is .
Air density is a property used in many branches of science, engineering, and industry, including aeronautics; gravimetric analysis; the air-conditioning industry; atmospheric research and meteorology; agricultural engineering (modeling and tracking of Soil-Vegetation-Atmosphere-Transfer (SVAT) models); and the engineering community that deals with compressed air.
Depending on the measuring instruments used, different sets of equations for the calculation of the density of air can be applied. Air is a mixture of gases and the calculations always simplify, to a greater or lesser extent, the properties of the mixture.
Temperature
Other things being equal (most notably the pressure and humidity), hotter air is less dense than cooler air and will thus rise while cooler air tends to fall. This can be seen by using the ideal gas law as an approximation.
Dry air
The density of dry air can be calculated using the ideal gas law, expressed as a function of temperature and pressure:
ρ = p M / (R T) = p m / (kB T)
where:
ρ, air density (kg/m3)
p, absolute pressure (Pa)
T, absolute temperature (K)
R is the gas constant, approximately 8.31446 J⋅K−1⋅mol−1
M is the molar mass of dry air, approximately 0.0289652 kg⋅mol−1.
kB is the Boltzmann constant, approximately 1.380649×10−23 J⋅K−1
m is the molecular mass of dry air, approximately 4.81×10−26 kg.
Rspecific = R/M, the specific gas constant for dry air, which using the values presented above would be approximately 287.05 J⋅kg−1⋅K−1.
Therefore:
ρ = p / (Rspecific T)
At IUPAC standard temperature and pressure (0°C and 100kPa), dry air has a density of approximately 1.2754kg/m3.
At 20°C and 101.325kPa, dry air has a density of 1.2041 kg/m3.
At 70°F and 14.696psi, dry air has a density of 0.074887lb/ft3.
The following table illustrates the air density–temperature relationship at 1 atm or 101.325 kPa:
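A small worked example of the dry-air relation above, reproducing the quoted figures (the function name and the sample conditions are chosen only for illustration):

```python
R_SPECIFIC_DRY = 287.05  # J/(kg·K), specific gas constant for dry air

def dry_air_density(pressure_pa, temp_c):
    """Density of dry air (kg/m^3) from the ideal gas law."""
    return pressure_pa / (R_SPECIFIC_DRY * (temp_c + 273.15))

print(dry_air_density(100_000, 0))    # ~1.2754 kg/m^3 (IUPAC STP)
print(dry_air_density(101_325, 20))   # ~1.2041 kg/m^3
print(dry_air_density(101_325, 15))   # ~1.225  kg/m^3 (ISA sea level)
```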
Humid air
The addition of water vapor to air (making the air humid) reduces the density of the air, which may at first appear counter-intuitive. This occurs because the molar mass of water vapor (18g/mol) is less than the molar mass of dry air (around 29g/mol). For any ideal gas, at a given temperature and pressure, the number of molecules is constant for a particular volume (see Avogadro's Law). So when water molecules (water vapor) are added to a given volume of air, the dry air molecules must decrease by the same number, to keep the pressure from increasing or temperature from decreasing. Hence the mass per unit volume of the gas (its density) decreases.
The density of humid air may be calculated by treating it as a mixture of ideal gases. In this case, the partial pressure of water vapor is known as the vapor pressure. Using this method, error in the density calculation is less than 0.2% in the range of −10 °C to 50 °C. The density of humid air is found by:
ρhumid air = pd / (Rd T) + pv / (Rv T)
where:
ρhumid air, density of the humid air (kg/m3)
pd, partial pressure of dry air (Pa)
Rd, specific gas constant for dry air, 287.058 J/(kg·K)
T, temperature (K)
pv, pressure of water vapor (Pa)
Rv, specific gas constant for water vapor, 461.495 J/(kg·K)
Md, molar mass of dry air, 0.0289652 kg/mol
Mv, molar mass of water vapor, 0.018016 kg/mol
R, universal gas constant, 8.31446 J/(K·mol)
The vapor pressure of water may be calculated from the saturation vapor pressure and relative humidity. It is found by:
pv = φ psat
where:
pv, vapor pressure of water
φ, relative humidity (0.0–1.0)
psat, saturation vapor pressure
The saturation vapor pressure of water at any given temperature is the vapor pressure when relative humidity is 100%. One formula used to find the saturation vapor pressure is Tetens' equation:
psat = 0.61078 exp(17.27 (T − 273.15) / (T − 35.85))
where:
psat, saturation vapor pressure (kPa)
T, temperature (K)
See vapor pressure of water for other equations.
The partial pressure of dry air pd is found considering partial pressure, resulting in:
pd = p − pv
where p simply denotes the observed absolute pressure.
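Putting the humid-air steps together (Tetens' approximation for the saturation vapor pressure, scaling by relative humidity, then the two partial-pressure terms), a sketch might look like the following; the constants are those listed above and the function names are assumptions:

```python
import math

R_DRY, R_VAPOR = 287.058, 461.495   # J/(kg·K)

def saturation_vapor_pressure_pa(temp_c):
    """Tetens' approximation, returned in Pa (input in deg C)."""
    return 1000 * 0.61078 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def humid_air_density(pressure_pa, temp_c, rel_humidity):
    """Density of humid air (kg/m^3); rel_humidity in [0, 1]."""
    t_k = temp_c + 273.15
    p_vapor = rel_humidity * saturation_vapor_pressure_pa(temp_c)
    p_dry = pressure_pa - p_vapor
    return p_dry / (R_DRY * t_k) + p_vapor / (R_VAPOR * t_k)

print(humid_air_density(101_325, 20, 0.0))   # dry air: ~1.204 kg/m^3
print(humid_air_density(101_325, 20, 1.0))   # saturated air is slightly lighter, ~1.194 kg/m^3
```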
Variation with altitude
Troposphere
To calculate the density of air as a function of altitude, one requires additional parameters. For the troposphere, the lowest part (~10 km) of the atmosphere, they are listed below, along with their values according to the International Standard Atmosphere, using for calculation the universal gas constant instead of the air specific constant:
p0, sea level standard atmospheric pressure, 101325 Pa
T0, sea level standard temperature, 288.15 K
g, earth-surface gravitational acceleration, 9.80665 m/s2
L, temperature lapse rate, 0.0065 K/m
R, ideal (universal) gas constant, 8.31446 J/(mol·K)
M, molar mass of dry air, 0.0289652 kg/mol
Temperature at altitude h meters above sea level is approximated by the following formula (only valid inside the troposphere, no more than ~18 km above Earth's surface (and lower away from Equator)):
T = T0 − L h
The pressure at altitude h is given by:
p = p0 (1 − L h / T0)^(g M / (R L))
Density can then be calculated according to a molar form of the ideal gas law:
ρ = p M / (R T)
where:
M, molar mass
R, ideal gas constant
T, absolute temperature
p, absolute pressure
Note that the density close to the ground is ρ0 = 1.2250 kg/m3.
It can be easily verified that the hydrostatic equation holds:
dp/dh = −g ρ
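The troposphere formulas above combine into a short density-versus-altitude calculation; the constants are the International Standard Atmosphere values listed earlier, and the sample altitudes are arbitrary:

```python
P0, T0 = 101_325.0, 288.15   # sea-level standard pressure (Pa) and temperature (K)
G, L = 9.80665, 0.0065       # gravitational acceleration (m/s^2) and lapse rate (K/m)
R, M = 8.31446, 0.0289652    # universal gas constant, molar mass of dry air (kg/mol)

def troposphere_density(h):
    """Air density (kg/m^3) at altitude h metres, valid within the troposphere."""
    t = T0 - L * h                                   # temperature at altitude
    p = P0 * (1 - L * h / T0) ** (G * M / (R * L))   # pressure at altitude
    return p * M / (R * t)                           # molar form of the ideal gas law

for h in (0, 1000, 5000, 11000):
    print(h, round(troposphere_density(h), 4))
# 0 m -> ~1.225 kg/m^3, 11000 m -> ~0.364 kg/m^3
```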
Exponential approximation
As the temperature varies with height inside the troposphere by less than 25%, Lh/T0 < 0.25 and one may approximate (1 − Lh/T0)^k ≈ exp(−k L h / T0).
Thus:
ρ ≈ ρ0 exp(−h/Hn)
which is identical to the isothermal solution, except that Hn, the height scale of the exponential fall for density (as well as for number density n), is not equal to RT0/gM as one would expect for an isothermal atmosphere, but rather:
1/Hn = gM/(RT0) − L/T0
which gives Hn = 10.4 km.
Note that for different gasses, the value of Hn differs, according to the molar mass M: It is 10.9 for nitrogen, 9.2 for oxygen and 6.3 for carbon dioxide. The theoretical value for water vapor is 19.6, but due to vapor condensation the water vapor density dependence is highly variable and is not well approximated by this formula.
The pressure can be approximated by another exponent:
p ≈ p0 exp(−h/Hp)
which is identical to the isothermal solution, with the height scale Hp = RT0/gM. Note that the hydrostatic equation no longer holds for the exponential approximation (unless L is neglected).
Hp is 8.4 km, but for different gasses (measuring their partial pressure), it is again different and depends upon molar mass, giving 8.7 for nitrogen, 7.6 for oxygen and 5.6 for carbon dioxide.
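Under the same assumptions, both scale heights follow directly from the constants above; the short check below reproduces the figures quoted for air and for the individual gases (the molar masses used are approximate):

```python
T0, G, L, R = 288.15, 9.80665, 0.0065, 8.31446

def scale_heights(molar_mass):
    """Return (Hn, Hp) in km for a gas of the given molar mass (kg/mol)."""
    h_p = R * T0 / (G * molar_mass)          # pressure scale height
    h_n = T0 / (G * molar_mass / R - L)      # density scale height
    return h_n / 1000, h_p / 1000

for name, m in [("air", 0.0289652), ("N2", 0.028014),
                ("O2", 0.031999), ("CO2", 0.04401)]:
    h_n, h_p = scale_heights(m)
    print(f"{name}: Hn = {h_n:.1f} km, Hp = {h_p:.1f} km")
# air: Hn ~10.4, Hp ~8.4;  N2: ~10.9 / ~8.7;  O2: ~9.2 / ~7.6;  CO2: ~6.3 / ~5.6
```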
Total content
Further note that since g, Earth's gravitational acceleration, is approximately constant with altitude in the atmosphere, the pressure at height h is proportional to the integral of the density in the column above h, and therefore to the mass in the atmosphere above height h. Therefore, the mass fraction of the troposphere out of all the atmosphere is given, using the approximated formula for p, by 1 − p(U)/p0 ≈ 1 − exp(−U/Hp), where U is the height of the tropopause.
For nitrogen, it is 75%, while for oxygen this is 79%, and for carbon dioxide, 88%.
Tropopause
Higher than the troposphere, at the tropopause, the temperature is approximately constant with altitude (up to ~20 km) and is 220 K. This means that in this layer the lapse rate is zero and T = 220 K, so that the exponential drop is faster, with a smaller scale height (about 6.5 km for nitrogen, 5.7 km for oxygen and 4.2 km for carbon dioxide). Both the pressure and density obey this law, so, denoting the height of the border between the troposphere and the tropopause as U:
p = p(U) exp(−(h − U) g M / (R T))
ρ = ρ(U) exp(−(h − U) g M / (R T))
Composition
See also
Air
Atmospheric drag
Lighter than air
Density
Atmosphere of Earth
International Standard Atmosphere
U.S. Standard Atmosphere
NRLMSISE-00
Notes
References
External links
Conversions of density units ρ by Sengpielaudio
Air density and density altitude calculations and by Richard Shelquist
Air density calculations by Sengpielaudio (section under Speed of sound in humid air)
Air density calculator by Engineering design encyclopedia
Atmospheric pressure calculator by wolfdynamics
Air iTools - Air density calculator for mobile by JSyA
Revised formula for the density of moist air (CIPM-2007) by NIST
Atmospheric thermodynamics
A
Meteorological quantities | Density of air | Physics,Mathematics | 1,891 |
47,856,659 | https://en.wikipedia.org/wiki/Penicillium%20tropicum | Penicillium tropicum is a species of fungus in the genus Penicillium which was isolated from soil beneath a Coffea arabica plant in Karnataka in India.
References
Further reading
tropicum
Fungi described in 2010
Fungus species | Penicillium tropicum | Biology | 51 |
16,859,900 | https://en.wikipedia.org/wiki/VAX1 | Ventral anterior homeobox 1 is a protein that in humans is encoded by the VAX1 gene.
Function
This gene appears to influence the development in humans of the forebrain. It is also present in mice and xenopus frogs, which suggests a long evolutionary history, and in those organisms its expression is confined to the forebrain, optic and olfactory areas.
The VAX1 protein is a transcription factor with a homeodomain located at amino acid positions 100–159 and an Ala-rich region at positions 216–253. Expression studies in mice show that it is expressed in the palate, in the visual system (where mutations are associated with coloboma), and in the basal telencephalon, optic stalk, and visual eye fields, where it is expressed along with the Shh and Bmp4 genes.
Clinical significance
Mice with homozygous VAX1 mutations have been reported to display craniofacial malformations including cleft palate.
Genome-wide association studies (GWAS) reported significant associations between non-syndromic clefts and SNPs in the VAX1 gene. Replication studies have confirmed these associations in different population groups.
References
Further reading
External links
Transcription factors | VAX1 | Chemistry,Biology | 251 |
42,158,570 | https://en.wikipedia.org/wiki/Amplitude%20of%20low%20frequency%20fluctuations | Amplitude of Low Frequency Fluctuations (ALFF) and fractional Amplitude of Low Frequency Fluctuations (f/ALFF) are neuroimaging methods used to measure spontaneous fluctuations in BOLD-fMRI signal intensity for a given region in the resting brain. Electrophysiological studies suggest that low-frequency oscillations arise from spontaneous neuronal activity. Though ALFFs have been researched extensively in fMRI based theoretical models of brain function, their actual significance is still unknown.
Default mode network
Whole-brain ALFF shows greater signal in posterior cingulate, precuneus, and medial prefrontal areas of the default mode network, but also in non-cortical areas near the ventricles, cisterns and large blood vessels. f/ALFF reduces the sensitivity of ALFF to physiological noise by taking the ratio of the amplitude within the low-frequency band (0.01–0.08 Hz) to that of the entire detectable frequency range (0–0.25 Hz). Both measures have been investigated as part of reliable biomarkers for many neurological conditions including schizophrenia, anorexia nervosa, and ADHD.
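A minimal sketch of how ALFF and f/ALFF are typically computed for a single voxel's resting-state BOLD time series, using the band limits mentioned above; the NumPy-based details, the repetition time and the synthetic data are assumptions, not a reference implementation:

```python
import numpy as np

def alff_falff(timeseries, tr, low=0.01, high=0.08, total_high=0.25):
    """ALFF: summed spectral amplitude in [low, high] Hz.
    fALFF: that sum divided by the amplitude over (0, total_high] Hz."""
    ts = np.asarray(timeseries, dtype=float)
    ts = ts - ts.mean()                        # remove the mean (0 Hz component)
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amps = np.abs(np.fft.rfft(ts)) / ts.size   # amplitude spectrum
    band = (freqs >= low) & (freqs <= high)
    total = (freqs > 0) & (freqs <= total_high)
    alff = amps[band].sum()
    falff = alff / amps[total].sum()
    return alff, falff

# Example: 200 volumes at TR = 2 s of synthetic noise (Nyquist = 0.25 Hz)
rng = np.random.default_rng(0)
print(alff_falff(rng.standard_normal(200), tr=2.0))
```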
References
Magnetic resonance imaging
Medical imaging
Neuroscience | Amplitude of low frequency fluctuations | Chemistry,Biology | 237 |
41,548,363 | https://en.wikipedia.org/wiki/Aquascope | An aquascope (also called bathyscope) is an underwater viewing device. It is used to view the underwater world often from dry land or a boat. It eliminates the water surface glare and allows viewing as far as water clarity and light permit. The underwater viewer can be used for observing reefs, checking boat moorings, secchi disks and other survey work. It is also used as educational tool to watch plants, creatures and habitats underneath the surface of rivers, lakes and seas.
A more advanced version, an underwater telescope was patented by Sarah Mather in 1845 - U.S. Patent No. 3,995; it permitted sea-going vessels to survey the depths of the ocean. It used a camphine lamp in a glass globe that was sunk into the water. The device allowed the examination of a ship's hull and other details from the deck of a boat. In 1864, Mather added an improvement - U.S. Patent No. 43,465 to her previous invention to detect Southern underwater warships.
Construction
The instrument which has been popularly named the Water, or Marine Telescope, from the power given by its use to see into the water, consists of a tube of metal or wood, of a convenient length, to enable a person looking over the gunnel of a boat to rest the head on the one end, while the other is below the surface of the water; the upper end is so formed, that the head may rest on it, both eyes seeing freely into the tube. Into the lower end is fixed (water-tight) a plate of glass, which, when used, is to be kept under the surface of the water.
A very convenient size for the instrument represented in the above figure, is to make the length AC, 3 feet, and the mouth A, where the face is applied, of an irregular oval form, that both eyes may see freely into the tube, with an indentation on one side, that the nose may breathe freely, not throwing the moisture of the breath into the tube. B is a round plate of glass, 8 inches diameter, over which is the rim or edge C; this rim is best formed of lead, J of an inch thick, and 3 inches deep; the weight of the lead serves to sink the tube a little into the water. Holes must be provided at the junction of B to C, for the purpose of allowing the air to escape, and bring the water into contact with the glass; on each side there is a handle for holding the instrument.
Media
References
Bibliography
External links
Exploring-the-ocean-with-a-bathyscope
Optical devices | Aquascope | Materials_science,Engineering | 536 |
2,305,020 | https://en.wikipedia.org/wiki/Optode | An optode or optrode is an optical sensor device that optically measures a specific substance usually with the aid of a chemical transducer.
Construction
An optode requires three components to function: a chemical that responds to an analyte, a polymer to immobilise the chemical transducer and instrumentation (optical fibre, light source, detector and other electronics). Optodes usually have the polymer matrix coated onto the tip of an optical fibre, but in the case of evanescent wave optodes the polymer is coated on a section of fibre that has been unsheathed.
Operation
Optodes can apply various optical measurement schemes such as reflection, absorption, evanescent wave, luminescence (fluorescence and phosphorescences), chemiluminescence, surface plasmon resonance. By far the most popular methodology is luminescence.
Luminescence in solution obeys the linear Stern–Volmer relationship. Fluorescence of a molecule is quenched by specific analytes, e.g., ruthenium complexes are quenched by oxygen. When a fluorophore is immobilised within a polymer matrix myriad micro-environments are created. The micro-environments reflect varying diffusion co-efficients for the analyte. This leads to a non-linear relationship between the fluorescence and the quencher (analyte). This relationship is modelled in various ways, the most popular model is the two site model created by James Demas (University of Virginia).
The signal (fluorescence) to oxygen ratio is not linear, and an optode is most sensitive at low oxygen concentration, i.e., the sensitivity decreases as oxygen concentration increases. The optode sensors can however work in the whole region 0–100% oxygen saturation in water, and the calibration is done the same way as with the Clark type sensor. No oxygen is consumed and hence the sensor is stirring insensitive, but the signal will stabilize more quickly if the sensor is stirred after being put into the sample.
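To make the quenching relationships concrete, the sketch below implements the linear Stern–Volmer relation and the two-site model mentioned above, and inverts the latter to recover an oxygen level from a measured fluorescence ratio; the constants and the bisection solver are illustrative assumptions, not calibration values for any real optode:

```python
def stern_volmer_ratio(o2, k_sv):
    """Linear Stern-Volmer model: F0/F as a function of quencher (O2) level."""
    return 1.0 + k_sv * o2

def two_site_intensity(o2, k1, k2, f1):
    """Two-site model: F/F0 for a fluorophore split between two micro-environments."""
    return f1 / (1.0 + k1 * o2) + (1.0 - f1) / (1.0 + k2 * o2)

def o2_from_intensity(ratio, k1, k2, f1, lo=0.0, hi=1.0, iters=60):
    """Invert the two-site model by bisection: find O2 giving the measured F/F0."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if two_site_intensity(mid, k1, k2, f1) > ratio:
            lo = mid   # intensity still too high, so more quencher is needed
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative constants: quenching is strongest (most sensitive) at low O2
K1, K2, F1 = 8.0, 0.5, 0.85      # per unit O2 (e.g. fraction of air saturation)
measured = two_site_intensity(0.2, K1, K2, F1)
print(round(o2_from_intensity(measured, K1, K2, F1), 3))   # recovers ~0.2
```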
Popularity
Optical sensors are growing in popularity due to their low cost, low power requirements and long-term stability. They provide viable alternatives to electrode-based sensors or more complicated analytical instrumentation, especially in the field of environmental monitoring, although in the case of oxygen optrodes they do not have the same resolution as the most recent cathodic microsensors.
See also
Oxygen sensor
References
Optical devices
Sensors
Spectroscopy
Fluorescence | Optode | Physics,Chemistry,Materials_science,Technology,Engineering | 506 |
67,834,099 | https://en.wikipedia.org/wiki/May%202107%20lunar%20eclipse | A penumbral lunar eclipse will occur at the Moon’s ascending node of orbit on Saturday, May 7, 2107, with an umbral magnitude of −0.9356. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. The Moon's apparent diameter will be near the average diameter because it will occur 6.8 days after perigee (on April 30, 2107, at 10:00 UTC) and 6.9 days before apogee (on May 14, 2107, at 1:50 UTC).
This eclipse will be too small to be visually perceptible.
Visibility
The eclipse will be completely visible over much of North and South America, western Europe, west and southern Africa, and Antarctica.
Eclipse details
Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. The first and last eclipse in this sequence is separated by one synodic month.
Related eclipses
Eclipses in 2107
A penumbral lunar eclipse on April 7.
An annular solar eclipse on April 23.
A penumbral lunar eclipse on May 7.
A penumbral lunar eclipse on October 2.
A total solar eclipse on October 16.
Metonic
Preceded by: Lunar eclipse of July 19, 2103
Tzolkinex
Followed by: Lunar eclipse of June 18, 2114
Lunar Saros 152
Followed by: Lunar eclipse of May 17, 2125
Inex
Followed by: Lunar eclipse of April 16, 2136
Triad
Preceded by: Lunar eclipse of July 5, 2020
Followed by: Lunar eclipse of March 7, 2194
Lunar eclipses of 2103–2107
This eclipse is a member of a semester series. An eclipse in a semester series of lunar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit.
The penumbral lunar eclipses on January 23, 2103 and July 19, 2103 occur in the previous lunar year eclipse set, and the penumbral lunar eclipses on April 7, 2107 and October 2, 2107 occur in the next lunar year eclipse set.
Metonic series
See also
List of lunar eclipses
List of 22nd-century lunar eclipses
References
2107-05
2107-05 | May 2107 lunar eclipse | Astronomy | 628 |
1,808,430 | https://en.wikipedia.org/wiki/Pensonic%20Group | Pensonic Holdings Berhad (stylized as PENSONIC) () is a Malaysian company that sells electrical appliances. It was founded by Chew Weng Khak in 1965 as a shop in Penang selling electrical appliances trading under the name of Keat Radio Co. with Chew as a sole proprietor. In 1982, Chew started the Pensonic brand name to produce locally manufactured electrical appliances.
Dispute over name
In some markets, Panasonic has accused Pensonic of being a knockoff of the former through the similar brand name and of affecting its market share, leading to lawsuits. Pensonic says that the name was created by combining "Penang" and "Sonic" to mean the "sound of Penang".
In 2008, Panasonic won an injunction against Pensonic in Singapore. In Sri Lanka, Pensonic successfully won a judgement to register its trademark in January 2009 against opposition by Panasonic. The judgment observed that Pensonic was previously registered in Malaysia, Japan, and many other countries without the objection of Panasonic.
References
External links
Official website
1965 establishments in Malaysia
Electronics companies of Malaysia
Retail companies of Malaysia
Electronics companies established in 1965
Retail companies established in 1965
Malaysian companies established in 1965
Companies based in Penang
Companies listed on Bursa Malaysia
Malaysian brands
Radio manufacturers | Pensonic Group | Engineering | 250 |
36,836,014 | https://en.wikipedia.org/wiki/Jennifer%20Doudna | Jennifer Anne Doudna (; born February 19, 1964) is an American biochemist who has pioneered work in CRISPR gene editing, and made other fundamental contributions in biochemistry and genetics. She received the 2020 Nobel Prize in Chemistry, with Emmanuelle Charpentier, "for the development of a method for genome editing." She is the Li Ka Shing Chancellor's Chair Professor in the department of chemistry and the department of molecular and cell biology at the University of California, Berkeley. She has been an investigator with the Howard Hughes Medical Institute since 1997.
In 2012, Doudna and Emmanuelle Charpentier were the first to propose that CRISPR-Cas9 (enzymes from bacteria that control microbial immunity) could be used for programmable editing of genomes, which has been called one of the most significant discoveries in the history of biology. Since then, Doudna has been a leading figure in what is referred to as the "CRISPR revolution" for her fundamental work and leadership in developing CRISPR-mediated genome editing.
Doudna's awards and fellowships include the 2000 Alan T. Waterman Award for her research on the structure of a ribozyme, as determined by X-ray crystallography and the 2015 Breakthrough Prize in Life Sciences for CRISPR-Cas9 genome editing technology, with Charpentier. She has been a co-recipient of the Gruber Prize in Genetics (2015), the Tang Prize (2016), the Canada Gairdner International Award (2016), and the Japan Prize (2017). She was named one of the Time 100 most influential people in 2015, and in 2023 was inducted into the National Inventors Hall of Fame.
Early life and education
Jennifer Doudna was born February 19, 1964, in Washington, D.C., as the daughter of Dorothy Jane (Williams) and Martin Kirk Doudna. Her father received his PhD in English literature from the University of Michigan, and her mother held a master's degree in education. When Doudna was seven years old, the family moved to Hawaii so her father could accept a teaching position in American literature at the University of Hawaii at Hilo. Doudna's mother earned a second master's degree in Asian history from the university and taught history at a local community college.
Growing up in Hilo, Hawaii, Doudna was fascinated by the environmental beauty of the island and its flora and fauna. Nature built her sense of curiosity and her desire to understand the underlying biological mechanisms of life. This was coupled with the atmosphere of intellectual pursuit that her parents encouraged at home. Her father enjoyed reading about science and filled the home with many books on popular science. When Doudna was in the sixth grade, he gave her a copy of James Watson's 1968 book on the discovery of the structure of DNA, The Double Helix, which was a major inspiration. Doudna also developed her interest in science and mathematics in school. Even though Doudna was told that "Women don't go into science," she knew that she wanted to be a scientist no matter what. Nothing said to her made her doubt it, Doudna said, "When someone tells me I can't do something and I know that I can, it just makes me more resolved to do it."
While she attended Hilo High School, Doudna's interest in science was nurtured by her 10th-grade chemistry teacher, Jeanette Wong, whom she has routinely cited as a significant influence in sparking her nascent scientific curiosity. A visiting lecturer on cancer cells further encouraged her pursuit of science as a career choice. She spent a summer working in the University of Hawaii at Hilo lab of noted mycologist Don Hemmes and graduated from Hilo High School in 1981.
Doudna was an undergraduate student at Pomona College in Claremont, California, where she studied biochemistry. During her freshman year, while taking a course in general chemistry, she questioned her own ability to pursue a career in science, and considered switching her major to French as a sophomore. However, her French teacher suggested she stick with science. Chemistry professors Fred Grieman and Corwin Hansch at Pomona had a major impact on her. She started her first scientific research in the lab of professor Sharon Panasenko. She earned her Bachelor of Arts degree in biochemistry in 1985. She chose Harvard Medical School for her doctoral study and earned a PhD in Biological Chemistry and Molecular Pharmacology in 1989. Her Ph.D. dissertation was on a system that increased the efficiency of a self-replicating catalytic RNA and was supervised by Jack W. Szostak.
Career and research
After her PhD, she held research fellowships in molecular biology at the Massachusetts General Hospital and in genetics at Harvard Medical School. From 1991 to 1994, she was Lucille P. Markey Postdoctoral Scholar in Biomedical Science at the University of Colorado Boulder, where she worked with Thomas Cech. Doudna has an h-index of 141 according to Google Scholar and of 111 according to Scopus.
Research on ribozyme structure and function
Early in her scientific career, Doudna worked to uncover the structure and biological function of RNA enzymes or ribozymes. While in the Szostak lab, Doudna re-engineered the self-splicing Tetrahymena Group I catalytic intron into a true catalytic ribozyme that copied RNA templates. Her focus was on engineering ribozymes and understanding their underlying mechanisms; however, she came to realize that not being able to see the molecular mechanisms of ribozymes was a major problem. Doudna went to the lab of Thomas Cech at the University of Colorado Boulder to crystallize and determine the three-dimensional structure of a ribozyme for the first time, so ribozyme structure could be compared with that of enzymes, the catalytic proteins. She started this project at the Cech lab in 1991 and finished it at Yale University in 1996. Doudna joined Yale's Department of Molecular Biophysics and Biochemistry as an assistant professor in 1994.
X-ray diffraction-based structure of active site of a ribozyme at Yale
At Yale, Doudna's group was able to crystallize and solve the three-dimensional structure of the catalytic core of the Tetrahymena Group I ribozyme. They showed that a core of five magnesium ions clustered in one region of the P4-P6 domain of the ribozyme, forming a hydrophobic core around which the rest of the structure could fold. This is analogous to, but chemically distinct from, the way proteins typically have a core of hydrophobic amino acids. Her group has crystallized other ribozymes, including the Hepatitis Delta Virus ribozyme. This initial work to solve large RNA structures led to further structural studies on an internal ribosome entry site (IRES) and protein-RNA complexes such as the signal recognition particle.
Doudna was promoted to the position of Henry Ford II Professor of Molecular Biophysics and Biochemistry at Yale in 2000. In 2000–2001, she was Robert Burns Woodward Visiting Professor of Chemistry at Harvard University.
Move to Berkeley
In 2002, she joined her husband, Jamie Cate, at Berkeley, accepting a position as professor of biochemistry and molecular biology. Doudna also gained access to the synchrotron at Lawrence Berkeley National Laboratory for her experiments with high powered x-ray diffraction.
In 2009, she took a leave of absence from Berkeley to work at Genentech to lead discovery research. She left Genentech after two months and returned to Berkeley with the help of colleague Michael Marletta, canceling all of her obligations to study CRISPR.
As of 2023, Doudna was located at the University of California, Berkeley, where she directs the Innovative Genomics Institute, a collaboration between Berkeley and UCSF founded by Doudna to develop genome editing technology and apply it to some of society's greatest problems in human health, agriculture and climate change. Doudna holds the Li Ka Shing Chancellor's Professorship in Biomedicine and Health, and is the chair of the Chancellor's Advisor Committee on Biology. Her lab now focuses on the structure and function of CRISPR-Cas systems, developing new genome editing technology and delivery mechanisms for CRISPR therapeutics, and novel techniques for precisely editing microbiomes.
CRISPR-Cas9 genome editing discovery
Doudna was introduced to CRISPR by Jillian Banfield in 2006 who had found Doudna by way of a Google search, having typed "RNAi and UC Berkeley" into her browser, and Doudna's name came up at the top of the list. In 2012, Doudna and her colleagues made a new discovery that reduces the time and work needed to edit genomic DNA. Their discovery relies on a protein named Cas9 found in the Streptococcus bacterial "CRISPR" immune system that cooperates with guide RNA and works like scissors. The protein attacks its prey, the DNA of viruses, and slices it up, preventing it from infecting the bacterium. This system was first discovered by Yoshizumi Ishino and colleagues in 1987 and later characterized by Francisco Mojica, but Doudna and Emmanuelle Charpentier showed for the first time that they could use different RNAs to program it to cut and edit different DNAs.
As CRISPR becomes increasingly used to edit multicellular organisms, Doudna continues to be called upon to serve as a thought-leader on the ethics of changing an organism's function using CRISPR technology. Their discovery has since been further developed by many research groups for applications ranging from fundamental cell biology, plant, and animal research to treatments for diseases including sickle cell anemia, cystic fibrosis, Huntington's disease, and HIV. Doudna and several other leading biologists called for a worldwide moratorium on any clinical application of gene editing using CRISPR. Doudna supports the usage of CRISPR in somatic gene editing, gene alterations which do not get passed to the next generation, but not germline gene editing.
The CRISPR system created a new straightforward way to edit DNA and there was a rush to patent the technique. Doudna and UC Berkeley collaborators applied for a patent and so did a group at the Broad Institute affiliated with the Massachusetts Institute of Technology and Harvard. Feng Zhang at the Broad Institute had shown that CRISPR-Cas9 could edit genes in cultured human cells a few months after Doudna and Charpentier published their method. Before the UC Berkeley patent application was decided, a patent was granted to the Broad investigators and UC Berkeley filed a lawsuit against the decision. In 2017, the court decided in favor of the Broad Institute, who claimed that they had initiated the research earliest and had first applied it to human cell engineering thus supporting editing in human cells with evidence but that the UC Berkeley group had only suggested this application. UC Berkeley appealed on grounds that they had clearly discussed and spelled out how to do the application the Broad had pursued. In September 2018, the appeals court decided in favor of the Broad Institute's patent. Meanwhile, UC Berkeley and co-applicants' patent to cover the general technique was also granted. To further cloud the issue, in Europe the claim of the Broad Institute, to have initiated the research first, was disallowed. The rejection was due to a procedural flaw in the application involving a different set of personnel listed in the lawsuit and the patent application, leading to speculation that the UC Berkeley group would prevail in Europe. Doudna cofounded Caribou Biosciences, a company to commercialize CRISPR technology, in 2011. In September 2013, Doudna cofounded Editas Medicine with Zhang and others despite their legal battles, but she quit in June 2014; Charpentier then invited her to join CRISPR Therapeutics, but she declined following the "divorce"-like experience at Editas. Doudna is also a cofounder of Caribou spin-off Intellia Therapeutics and Scribe Therapeutics, which pioneered CasX, a more compact, next-generation Cas9 which can efficiently cut DNA.
In 2017, she co-authored A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution, a rare case of the first-person account of a major scientific breakthrough, aimed at the general public.
In addition to the CRISPR breakthrough, Doudna has discovered that the hepatitis C virus utilizes an unusual strategy to synthesize viral proteins. This work could lead to new drugs to stop infections without causing harm to the tissues of the body.
"I have so much optimism about what CRISPR can do to help cure unaddressed genetic diseases and improve sustainable agriculture, but I'm also concerned that the benefits of the technology might not reach those who need it most if we're not thoughtful and deliberate about how we develop the technology," Doudna said.
Mammoth Biosciences
In 2017, Doudna co-founded Mammoth Biosciences, a San Francisco-based bioengineering tech startup. Initial funding raised $23 million, with a series B round of funding in 2020 raising $45 million. The business is focused on improving access to bio sensing tests which address "challenges across healthcare, agriculture, environmental monitoring, biodefense, and more."
COVID-19 response
Beginning in March 2020, Doudna organized an effort to use CRISPR-based technologies to address the COVID-19 pandemic along with Dave Savage, Robert Tjian, and other colleagues at the Innovative Genomics Institute (IGI), where they created a testing center. This center processed over 500,000 patient samples from UC Berkeley students, staff and faculty as well as members of the surrounding community and farm workers in the Salinas area. Mammoth Biosciences announced a peer-reviewed validation of a rapid, CRISPR-based point of need COVID-19 diagnostic which is faster and less expensive than qRT-PCR based tests.
Other activities
She is also the founder and chair of the governance board of the Innovative Genomics Institute, which she co-founded in 2014. Doudna is also a faculty scientist at Lawrence Berkeley National Laboratory, a senior investigator at the Gladstone Institutes, and an adjunct professor of cellular and molecular pharmacology at the University of California, San Francisco (UCSF).
Doudna is on the scientific advisory boards of the companies that she cofounded, such as Caribou, Intellia, Mammoth, and Scribe; as well as others such as Altos Labs, Isomorphic Labs, Johnson & Johnson, Synthego, Tempus AI, and Welch Foundation. She joined Sixth Street Partners in 2022 as their chief science advisor, to guide investment decisions related to CRISPR.
Personal life
Doudna's first marriage was in 1988 to a fellow graduate student at Harvard named Tom Griffin, but his interests were more broad and less focused on research than hers and they divorced a few years later. Griffin wanted to move to Boulder, Colorado, where Doudna was also interested in working with Thomas Cech. As a postdoctoral researcher at the University of Colorado, Doudna met Jamie Cate, then a graduate student. They worked together on the project to crystallize and determine the structure of the Tetrahymena Group I intron P4-P6 catalytic region. Doudna brought Cate with her to Yale, and they married in Hawaii in 2000. Cate later became a professor at the Massachusetts Institute of Technology and Doudna followed him to Boston at Harvard, but in 2002 they both accepted faculty positions at Berkeley and moved there together; Cate preferred the less formal environment on the West Coast from his earlier experiences at the University of California, Santa Cruz and the Lawrence Berkeley National Laboratory, and Doudna liked that Berkeley is a public university. Cate is a Berkeley professor and works on gene-editing yeast to increase their cellulose fermentation for biofuel production. Doudna and Cate have a son born in 2002 who attends UC Berkeley, studying electrical engineering and computer science. They live in Berkeley.
Awards and honors
Doudna was a Searle Scholar and received the 1996 Beckman Young Investigators Award. In 2000, she was awarded the Alan T. Waterman Award, the National Science Foundation's highest honor that annually recognizes an outstanding researcher under the age of 35, for her structure determination of a ribozyme. In 2001, she received the Eli Lilly Award in Biological Chemistry of the American Chemical Society.
In 2015, together with Emmanuelle Charpentier, she received the Breakthrough Prize in Life Sciences for her contributions to CRISPR/Cas9 genome editing technology. In 2016, together with Charpentier, Feng Zhang, Philippe Horvath and Rodolphe Barrangou, she received the Canada Gairdner International Award. Also in 2016, she received the Heineken Prize for Biochemistry and Biophysics. She has also been a co-recipient of the Gruber Prize in Genetics (2015), the Tang Prize (2016), the Japan Prize (2017) and the Albany Medical Center Prize (2017). In 2018, Doudna was awarded the NAS Award in Chemical Sciences, the Pearl Meister Greengard Prize from the Rockefeller University, and a Medal of Honor from the American Cancer Society. Also in 2018, she was awarded the Kavli Prize in Nanoscience (jointly with Emmanuelle Charpentier and Virginijus Šikšnys). In 2019 she received the Harvey Prize of the Technion/Israel for the year 2018 (jointly with Emmanuelle Charpentier and Feng Zhang) and the LUI Che Woo Prize in the category of Welfare Betterment. In 2020, she received the Wolf Prize in Medicine (jointly with Emmanuelle Charpentier). Also in 2020, Doudna and Charpentier were awarded the Nobel Prize in Chemistry "for the development of a method for genome editing." In 2025 she was awarded the National Medal of Technology and Innovation
She was elected to the National Academy of Sciences in 2002, the American Academy of Arts and Sciences in 2003, the National Academy of Medicine in 2010 and the National Academy of Inventors in 2014. In 2015, together with Charpentier, she became a fellow of the American Academy of Microbiology. She was elected a Foreign Member of the Royal Society (ForMemRS) in 2016. In 2017, Doudna was awarded the Golden Plate Award of the American Academy of Achievement. In 2020, she was awarded a Guggenheim Fellowship. In 2021 she received the Award for Excellence in Molecular Diagnostics from the Association for Molecular Pathology. In 2021, Pope Francis appointed Doudna, and two other women Nobel laureates Donna Strickland and Emmanuelle Charpentier, as members of the Pontifical Academy of Sciences.
She along with Charpentier was named one of the Time 100 most influential people in 2015, and she was a runner-up for Time Person of the Year in 2016 alongside other CRISPR researchers.
In 2018 and 2023, she received honorary Doctor of Science degrees from USC and Harvard, respectively.
References
Bibliography
External links
CRISPR Scientist's Biography Explores Ethics Of Rewriting The Code Of Life. Author interview, audio and transcript. Fresh Air, NPR, March 8, 2021.
1964 births
20th-century American chemists
20th-century American women scientists
21st-century American chemists
21st-century American women scientists
American Nobel laureates
American women biochemists
American crystallographers
Fellows of the American Academy of Microbiology
Foreign members of the Royal Society
Genome editing
Harvard Medical School alumni
Howard Hughes Medical Investigators
Kavli Prize laureates in Nanoscience
Living people
Members of the National Academy of Medicine
Members of the Pontifical Academy of Sciences
Members of the United States National Academy of Sciences
Nobel laureates in Chemistry
Pomona College alumni
Pomona College trustees
University of California, Berkeley College of Letters and Science faculty
University of California, San Francisco faculty
Winners of the Heineken Prize
Wolf Prize in Medicine laureates
Women Nobel laureates
Yale Department of Molecular Biophysics & Biochemistry faculty | Jennifer Doudna | Technology,Engineering,Biology | 4,135 |
70,196,876 | https://en.wikipedia.org/wiki/Gromia%20dubia | Gromia dubia is a species of testate rhizarian animal in the family Gromiidae. It is known from a single specimen discovered in 1884 by Gruber, and no other specimens have been found. Gruber did not actually make a proper description of the species itself.
See also
Gromia
Testate amoeba
References
Amoeboids
Rhizaria species
Protists described in 1884 | Gromia dubia | Biology | 87 |
12,341,831 | https://en.wikipedia.org/wiki/Maximum%20parcel%20level | The maximum parcel level (MPL) is the highest level in the atmosphere that a moist convectively rising air parcel will reach after ascending from the level of free convection (LFC) through the free convective layer (FCL) and reaching the equilibrium level (EL), near the tropopause. As the parcel rises through the FCL it expands adiabatically causing its temperature to drop, often below the temperature of its surroundings, and eventually lose buoyancy. Because of this, the EL is approximately the region where the distinct flat tops (called anvil clouds), often observed around the upper portions of cumulonimbus clouds. If the air parcel ascended quickly enough then it retains momentum after it has cooled and continues rising past the EL, ceasing at the MPL (visually represented by the overshooting top, above the anvil).
Dynamic processes within and between convective cells, such as updraft merging and cloud base areal size, factor into the actual ultimate cloud top height, in addition to the atmospheric thermodynamics of the MPL. Updraft merging can lead to higher cloud tops, so one implication is that organized convection tends to be taller convection.
See also
Atmospheric convection
Skew-T log-P diagram
External links
Skew-T: A Look at MPL
The Difference between the Equilibrium Level and Maximum Parcel Level
Skew-T and Maximum Parcel Level (San Francisco State University)
References
Atmospheric thermodynamics
Severe weather and convection
fr:Niveau d'équilibre convectif#Notions dérivées | Maximum parcel level | Physics,Chemistry | 330 |
16,008,681 | https://en.wikipedia.org/wiki/OGLE-2006-BLG-109Lb | OGLE-2006-BLG-109Lb is an extrasolar planet approximately 4,920 light-years away in the constellation of Sagittarius. The planet was detected orbiting the star OGLE-2006-BLG-109L in 2008 by a research team using gravitational microlensing.
See also
Optical Gravitational Lensing Experiment or OGLE
47 Ursae Majoris b
OGLE-2005-BLG-390Lb
OGLE-2006-BLG-109Lc
References
External links
Sagittarius (constellation)
Exoplanets discovered in 2008
Giant planets
Exoplanets detected by microlensing | OGLE-2006-BLG-109Lb | Astronomy | 132 |
53,496,177 | https://en.wikipedia.org/wiki/Cable%20protection%20system | A cable protection system (CPS) protects subsea power cables against various factors that could reduce the cable's lifetime, when entering an offshore structure.
When a subsea power cable is laid, there is an area where the cable can be subjected to increased dynamic forces that the cable is not necessarily designed to withstand over its lifetime.
Cable protection systems allow the specification, and thus cost, of a subsea power cable to be reduced by removing the need for additional armoring. Cables can be produced more cheaply, whilst still providing the 20-plus-year lifetime required.
Offshore windfarm developers have widely adopted cable protection systems due to the dynamic areas where the cable leaves the seabed and enters the monopile/J-tube, in part due to the potential for localised scouring to occur near the structure.
A CPS generally consists of three sections: a Centraliser or Monopile interface, a protection system for the dynamic area, and a protection system for the static area.
The installation of J-tubes for offshore renewable monopiles was viewed as a costly approach. A 'latching' type of cable protection system, which penetrates the outer wall of the monopile via a specifically designed angled aperture, simplifies monopile design and removes the need for additional works after pile driving, which usually involved the use of divers. This approach has become an industry standard in monopile design, helping developers reduce their construction costs.
History
Articulated half-pipe cable protection systems have traditionally been used for the protection of cables at shore landings and other areas where cable damage could be envisaged and burial was not practical. Patents for variations of articulated-pipe cable protection date back to 1929. One such system was described as a cable armor shield
"adapted to protect the cable from damage and wear occasioned by rubbing on rocks, contacting with ships, anchors or other objects, and has for its object to provide a practical flexible armor shield of this class which can be readily applied to the cable at any point along its length."
From their outset cable protection systems were designed to be simple, effective, and easy to assemble. The systems consisted of a series of half shells which had a convex flange at one end and a larger socket flange at the other allowing the sections to form a flexible universal joint connection between them. Due to the intended use of heavy cast or forged metals they also had the added advantage of increasing the weight of the cable being installed, thus reducing movement on the seabed.
Over the years innovations have occurred improving the articulation of the joints with modern articulated pipes being more akin to ball-joints, and some manufacturers providing 'boltless' articulated pipes, thus saving assembly time.
Changes in the metallurgy have also happened, leading to most half shell articulated pipe now being made from ductile iron, due to its improved strength and elasticity characteristics.
Today these articulated pipes are also utilised for their bend restriction properties, allowing them to be utilised as bend restrictors for the protected cable.
Design considerations
Cable protection systems are predominantly designed to protect the system from damage throughout the lifetime of the cable caused by fatigue, overbending of the cable, and to provide protection of the cable until it reaches an area of burial.
Design life
The cable protection system will be designed to provide protection for a specific lifetime, the 'design life' of the system, which may vary dependent upon the conditions encountered.
Fatigue of CPS/cable within
Subsea cable protection systems can encounter wear due to movement, and general changes in composition due to being submerged for a prolonged period of time, such as corrosion or changes in polymer based compounds. Consideration should be given to the induced effects on the CPS resulting from the dynamic elements in the environment. Simple changes such as changes in temperature, current or salinity can result in changes in the ability of the CPS to offer protection for the life of the cable. It is advisable to carefully assess the potential effects of movement of the CPS in relation to the dynamic capabilities of the cable: the CPS may withstand the worst conditions seen over a 100-year period, but would the cable inside the CPS survive these movements? In some instances, such as shore ends for fibre optic cables where rocky outcrops are present, dynamic influences can be reduced by securing the articulated pipe to the seabed rock, thus reducing the degree of movement remaining.
Some manufacturers have performed independent empirical testing, utilising the DMZC facility at the University of Exeter in the UK, to provide a simulated 25-year life cycle of the dynamic forces applicable to their product in order to provide customers with improved confidence in the survivability of the system.
Another cause of failure of subsea power cables is overheating, which can occur where a cable is contained within a CPS without adequate ability to dissipate the heat produced by the cable. This leads to early fatigue of the cable insulation, necessitating the replacement of the cable.
Subsea cable incidents account for around 77% of the total global cost of wind farm losses. Since 2007 this percentage has consistently been reported at between 70% and 80% year after year.
Seabed stability
Seabed stability is an important factor associated with cable protection systems. Should the cable protection system be too buoyant, it is less likely to remain in contact with the seabed, thus the CPS is more likely to require additional remedial stability measures, such as installation of concrete mattresses, rockbags, or rockdumping.
Suspension strength
When a CPS is being installed to interface with a monopile structure, there is likely to be seabed scouring to some degree. Should the scouring become excessive, the CPS may be suspended within a scour hole, and needs to be capable of supporting its own weight, and that of the cable within. Failure to sustain this loading scenario will lead to failure of the CPS, which will in turn allow the forces to act upon the cable within, ultimately leading to cable damage.
Installation
Within the renewables market in particular, installation of CPSs is preferred to be completely diverless, as this reduces the developer's cost and removes the risk to human life from diving in a hazardous area.
Removal/Reinstallation
A final consideration for CPS is that of removal of the cable should a failure occur. Some designs require diver intervention to recover the cable with the CPS. Due consideration should also be given to the removal of a CPS should the CPS itself fail. The costs associated with CPS replacement during the operational lifetime of an offshore wind farm are not insignificant, as the cable will most likely require repair/replacement as part of the process.
Bend restrictors
Various innovative systems have been developed to provide restriction of bending, including ductile iron articulated pipe, and polymer or metal based vertebrae systems. Vertebrae bend restrictors are available in both metal and polymer based forms. Some cable protection systems include a polymer based vertebrae system which restricts the bend radius to a maximum of a few degrees per segment. These systems are lighter (in water) than their metal equivalents and often more expensive to produce but must be carefully assessed for longevity in the proposed application. Due to the use of polymers these systems tend to be of a larger diameter than their metal counterparts, which presents a larger surface area for drag induced forces caused by currents.
Bend stiffeners
Bend stiffeners are conically shaped polymer mouldings designed to add local stiffness to the product contained within, limiting bending stresses and curvature to acceptable levels. Bend stiffeners are generally suitable for water depths of 35 metres or less, and their suitability is highly dependent on currents and seabed conditions at site. Extreme care must be taken when selecting a stiffener, especially relating to the lifespan of the system, as stiffeners themselves can become fatigued or fragile. As the stiffness of these products is dependent upon the nature of the plastic used, careful testing and QA of the plastic should be considered, since flaws can be introduced during material manufacture, processing, machining and molding.
Other systems
Various other polymer based systems have been developed which provide a flexible 'tube' which can be attached to the structure in advance of the cable being installed, although these are relatively new to the industry, and considered by some as unproven.
Applicable Standards
Although there are no specific standards for cable protections systems, DNVGL-RP-0360 Subsea power cables in shallow water includes a section on Cable Protection at the interface to a structure (Section 4.7).
Offshore Wind - CPS Issues
The use of CPS systems to protect offshore wind power cables has suffered from various CPS failures, resulting in costly repair of CPS systems and the power cables they were meant to protect. The exact extent, cost and frequency of these failures is generally not disclosed; however, there have been exceptions, including announcements from developer/operator companies such as Orsted about the extent and anticipated repair costs.
References
Coastal construction
Submarine communications cables
Telecommunications equipment | Cable protection system | Engineering | 1,839 |
38,503,539 | https://en.wikipedia.org/wiki/Silent%20Circle%20%28software%29 | Silent Circle is an encrypted communications firm based in Washington DC. Silent Circle provides multi-platform secure communication services for mobile devices and desktops. Launched October 16, 2012, the company operates under a subscription business model. The encryption part of the software used is free software/open source and peer-reviewed. For the remaining parts of Silent Phone and Silent Text, the source code is available on GitHub, but under proprietary software licenses.
History
In November 2011, Mike Janke called Phil Zimmermann with an idea for a new kind of private, secure version of Skype. Zimmermann agreed to the project and called Jon Callas, co-founder of PGP Corporation and Vincent Moscaritolo. Janke brought in security expert Vic Hyder, and the founding team was established. The company was founded in the Caribbean island of Nevis, but moved its headquarters to Le Grand-Saconnex near Geneva, Switzerland in 2014 in search of a country with "stronger privacy laws to protect its customers' information."
On August 9, 2013, through their website, Silent Circle announced that the Silent Mail service would be shut down, because the company could "see the writing on the wall" and felt it was not possible to sufficiently secure email data with the looming threat of government compulsion and precedent set by the Lavabit shutdown the day before.
In January 2015, Silent Text had a serious vulnerability that allowed an attacker to remotely take control of a Blackphone device. A potential attacker only needed to know the target’s Silent Circle ID number or phone number. Blackphone and Silent Circle patched the vulnerability shortly after it had been disclosed.
In March 2015, a controversy arose when information security specialist and hacker Khalil Sehnaoui noted that Silent Circle's warrant canary had been removed from its site.
In January 2017 Gregg Smith was named CEO with a renewed focus on serving the large business space as well as Government entities. At the same time Tony Cole, VP and Global Government CTO of FireEye, was named to the Board of Directors. Shortly after Smith became CEO, the company moved back from Switzerland to the United States.
Reception
In November 2014, Silent Phone and Silent Text received top scores on the Electronic Frontier Foundation's secure messaging scorecard, along with "ChatSecure + Orbot", Cryptocat, TextSecure, and "Signal / RedPhone". They received points for having communications encrypted in transit, having communications encrypted with keys the providers don't have access to (end-to-end encryption), making it possible for users to independently verify their correspondent's identities, having past communications secure if the keys are stolen (forward secrecy), having their code open to independent review (open source), having their security designs well-documented, and having recent independent security audits.
However, as of August 2020, the page for the secure messaging scorecard states that it is out of date and should not be used in privacy- and security-related decision-making.
Products
The company's products enable encrypted mobile phone calls, text messaging, and video chat.
Current
Its current products include the following:
Silent Phone: Encrypted voice calls, video calls and text messages on mobile devices. Currently available for iOS, Android, and Silent Circle’s Silent OS on Blackphone. It can be used with Wi-Fi, EDGE, 3G or 4G cellular anywhere in the world.
Discontinued
Its discontinued products include the following:
Blackphone: A smartphone designed for privacy created by Silent Circle and built by SGP Technologies, a joint venture between Silent Circle and Geeksphone. There have been no more news or updates since 2018. PrivatOS was Discontinued on June 30, 2016.
GoSilent: Personal Firewall with integrated VPN and Cloud Analytics. The product was introduced after Silent Circle acquired Maryland start-up Kesala. It was sold by Silent Circle's new owner in 2018.
Silent Text: Discontinued September 28, 2015. A stand-alone application for encrypted text messaging and secure cloud content transfer with a “burn notice” feature for permanently deleting messages from devices. Its features were merged into Silent Phone.
Silent Mail: Discontinued August 9, 2013. Silent Mail used to offer encrypted email on Silent Circle’s private, secure network and compatibility with popular email client software.
Silent Circle Instant Message Protocol
Silent Circle Instant Message Protocol (SCIMP) was an encryption scheme that was developed by Vincent Moscaritolo. It enabled private conversation over instant message transports such as XMPP (Jabber).
SCIMP provided encryption, perfect forward secrecy and message authentication. It also handled negotiating the shared secret keys.
History
The protocol was used in Silent Text. Silent Text was discontinued on September 28, 2015, when its features were merged into Silent Circle's encrypted voice calling application called Silent Phone. At the same time, Silent Circle transitioned to using a protocol that uses the Double Ratchet Algorithm instead of SCIMP.
Business model
The company is privately funded and operates under a subscription business model.
See also
Comparison of instant messaging clients
Comparison of VoIP software
Silent Circle Instant Messaging Protocol
References
Further reading
External links
Cryptographic software
Cryptography companies
Swiss brands | Silent Circle (software) | Mathematics | 1,074 |
27,108,098 | https://en.wikipedia.org/wiki/Einstein%20function | In mathematics, Einstein function is a name occasionally used for one of the functions
$E_1(x) = \frac{x^2 e^x}{(e^x - 1)^2}, \qquad E_2(x) = \frac{x}{e^x - 1}, \qquad E_3(x) = \ln\left(1 - e^{-x}\right), \qquad E_4(x) = \frac{x}{e^x - 1} - \ln\left(1 - e^{-x}\right).$
These functions arise, for example, in the Einstein model of the heat capacity of solids and in thermodynamic equations of state.
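As a numerical illustration, the following short Python sketch (the function name and variable names are invented for this example; NumPy is assumed to be available, and x is taken to be positive) evaluates the four functions listed above:

    import numpy as np

    def einstein_functions(x):
        """Return the four Einstein functions E1(x)..E4(x) for x > 0."""
        x = np.asarray(x, dtype=float)
        ex = np.exp(x)
        e1 = x**2 * ex / (ex - 1.0)**2      # E1(x) = x^2 e^x / (e^x - 1)^2
        e2 = x / (ex - 1.0)                 # E2(x) = x / (e^x - 1)
        e3 = np.log(1.0 - np.exp(-x))       # E3(x) = ln(1 - e^-x)
        e4 = e2 - e3                        # E4(x) = x/(e^x - 1) - ln(1 - e^-x)
        return e1, e2, e3, e4

    print(einstein_functions(1.0))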
References
E W Lemmon, R Span, 2006, Short Fundamental Equations of State for 20 Industrial Fluids, J. Chem. Eng. Data 51 (3), 785–850 .
Wolfram MathWorld: http://mathworld.wolfram.com/EinsteinFunctions.html
Special functions | Einstein function | Mathematics | 81 |
75,113,634 | https://en.wikipedia.org/wiki/Marie%20Lopez%20del%20Puerto | Marie Lopez del Puerto is a condensed matter physicist whose research concerns the computational study of the electronic, optical, and quantum properties of nanocrystals and nanostructures. As a physics educator, she has worked to integrate computational physics into the undergraduate physics curriculum. Educated in Mexico and the US, she works in the US as a professor of physics and chair of the physics department at the University of St. Thomas, a private Catholic university in Minnesota.
Education and career
Lopez del Puerto earned a licenciatura in Physics from the Universidad de las Américas Puebla in Mexico in 2002. She began her work in physics education that year, teaching physics at a local college for a term between the end of her undergraduate and the beginning of her graduate program. She went to the University of Minnesota for graduate study in physics, earning a master's degree in 2004 and completing her Ph.D. there in 2008, supervised by James R. Chelikowsky. Chelikowsky moved to the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin in 2005, and Lopez del Puerto continued to work with him there, but earned her degree through the University of Minnesota.
She became a faculty member at the University of St. Thomas in 2008.
Recognition
Lopez del Puerto was a 2023 recipient of the Excellence in Physics Education Award of the American Physical Society (APS), for her work with the Partnership for Integrating Computation into Undergraduate Physics, a multi-university physics education project which she joined in 2012. She was named a Fellow of the American Physical Society in 2023, after a nomination from the APS Forum on Education, "for impactful work on integrating computation into the physics curriculum, for leadership in the Partnership for Integrating Computation into Undergraduate Physics, and for service to the American Physical Society and the American Association of Physics Teachers".
References
Year of birth missing (living people)
Living people
Mexican emigrants to the United States
Mexican physicists
Mexican women physicists
American physicists
American women physicists
American condensed matter physicists
Hispanic and Latino American physicists
Computational physicists
Physics educators
University of Minnesota alumni
Fellows of the American Physical Society | Marie Lopez del Puerto | Physics | 433 |
2,289,914 | https://en.wikipedia.org/wiki/Fr%C3%A9my%27s%20salt | Frémy's salt is a chemical compound with the formula (K4[ON(SO3)2]2), sometimes written as (K2[NO(SO3)2]). It is a bright yellowish-brown solid, but its aqueous solutions are bright violet. The related sodium salt, disodium nitrosodisulfonate (NDS, Na2ON(SO3)2, CAS 29554-37-8) is also referred to as Frémy's salt.
Regardless of the cations, the salts are distinctive because aqueous solutions contain the radical [ON(SO3)2]2−.
Applications
Frémy's salt, being a long-lived free radical, is used as a standard in electron paramagnetic resonance (EPR) spectroscopy, e.g. for quantitation of radicals. Its intense EPR spectrum is dominated by three lines of equal intensity, arising from hyperfine coupling to the 14N nucleus (nuclear spin I = 1), with a spacing of about 13 G (1.3 mT).
The inorganic aminoxyl group is a persistent radical, akin to TEMPO.
It has been used in some oxidation reactions, such as for oxidation of some anilines and phenols allowing polymerization and cross-linking of peptides and peptide-based hydrogels.
It can also be used as a model for peroxyl radicals in studies that examine the antioxidant mechanism of action in a wide range of natural products.
Preparation
Frémy's salt is prepared from hydroxylaminedisulfonic acid. Oxidation of the conjugate base gives the purple dianion:
HON(SO3H)2 → [HON(SO3)2]2− + 2 H+
2 [HON(SO3)2]2− + PbO2 → 2 [ON(SO3)2]2− + PbO + H2O
The synthesis can be performed by combining nitrite and bisulfite to give the hydroxylaminedisulfonate. Oxidation is typically conducted at low-temperature, either chemically or by electrolysis.
Other reactions:
HNO2 + 2 HSO3− → [HON(SO3)2]2− + H2O
3 [HON(SO3)2]2− + MnO4− + H+ → 3 [ON(SO3)2]2− + MnO2 + 2 H2O
2 [ON(SO3)2]2− + 4 K+ → K4[ON(SO3)2]2
History
Frémy's salt was discovered in 1845 by Edmond Frémy (1814–1894). Its use in organic synthesis was popularized by Hans Teuber, such that an oxidation using this salt is called the Teuber reaction.
References
Further reading
Free radicals
Oxidizing agents
Sodium compounds
Potassium compounds
Nitrogen–oxygen compounds
Reagents for organic chemistry | Frémy's salt | Chemistry,Biology | 543 |
11,671,030 | https://en.wikipedia.org/wiki/Walking%20Stewart | John "Walking" Stewart (19 February 1747 – 20 February 1822) was an English philosopher and traveller. Stewart developed a unique system of materialistic pantheism.
Travels
Known as "Walking" Stewart to his contemporaries for having travelled on foot from Madras, India (where he had worked as a clerk for the East India Company) back to Europe between 1765 and the mid-1790s, Stewart is thought to have walked alone across Persia, Abyssinia, Arabia, and Africa before wandering into every European country as far east as Russia.
Over the next three decades Stewart wrote prolifically, publishing nearly thirty philosophical works, including The Opus Maximum (London, 1803) and the long verse-poem The Revelation of Nature (New York, 1795). In 1796, George Washington's portrait-painter, James Sharples, executed a pastel likeness of Stewart for a series of portraits which included such sitters as William Godwin, Joseph Priestley, and Humphry Davy, suggesting the intellectual esteem in which Stewart was once held.
After his travels in East India, Stewart became a vegetarian. He was also a teetotaler.
Philosophy
During his journeys, he developed a unique system of materialist philosophy which combines elements of Spinozistic pantheism with yogic notions of a single indissoluble consciousness. Stewart began to promote his ideas publicly in 1790 with the publication in two volumes of his works Travels over the most interesting parts of the Globe and The Apocalypse of Nature (London, 1790).
Historian David Fairer has written that "Stewart expounds what might be described as a panbiomorphic universe (it deserves an entirely new term just for itself), in which human identity is no different in category from a wave, flame, or wind, having an entirely modal existence.". According to Henry Stephens Salt, writing for the Temple Bar in 1893, Stewart repeatedly insisted upon "The immortality of matter and the sympathy that exists between all forms of nature". Stewart declared that if he were about to die, these should be his last words: "The only measure to save mankind and all sensitive life is to educate the judgment of man and not the memory, that he may be able through reflection to calculate the golden mean of good and evil".
Retirement
After retiring from travelling, Stewart eventually settled in London where he held philosophical soirées and earned a reputation as one of the city's celebrated eccentrics. He was often seen in public wearing a threadbare Armenian military uniform. John Timbs described Stewart as one of London's famous eccentrics.
Death
On 20 February 1822, the morning after his seventy-fifth birthday, 'Walking' Stewart's body was found in a rented room in Northumberland Place, near present-day Trafalgar Square, London. An empty bottle of laudanum lay beside him.
Literary influence
After Walking Stewart's travels came to an end around the turn of the nineteenth century, he became close friends with the English essayist and fellow-Londoner Thomas De Quincey, with the radical pamphleteer Thomas Paine, and with the Platonist Thomas Taylor (1758-1835).
In 1792, while residing in Paris in the weeks following the September Massacres, he made the acquaintance of the young Romantic poet William Wordsworth, who later concurred with De Quincey in describing Stewart as the most eloquent man on the subject of Nature that either had ever met. Recent scholarship by Kelly Grovier has suggested that Stewart's persona and philosophical writings had a major influence on Wordsworth's poetry.
References
Further reading
The life and adventures of the celebrated Walking Stewart: including his travels in the East Indies, Turkey, Germany, & America. By a relative, London, E. Wheatley, 1822.
Bertrand Harris Bronson, "Walking Stewart", Essays & Studies, xiv (University of California Press, 1943), pp. 123–55.
Gregory Claeys. "'The Only Man of Nature That Ever Appeared in the World'": 'Walking' John Stewart and the Trajectories of Social Radicalism, 1790-1822", Journal of British Studies, 53 (2014), 1–24.
Thomas De Quincey, The Works of Thomas De Quincey, ed. Grevel Lindop (London: Pickering & Chatto, 2000-), vol. xi, p. 247.
Kelly Grovier, 'Dream Walker: A Wordsworth Mystery Solved', Times Literary Supplement, 16 February 2007
Kelly Grovier, '"Shades of the Prison House": "Walking" Stewart and the making of Wordsworth's "two consciousnesses", Studies in Romanticism, Fall 2005 (Boston University), pp. 341–66.
Barry Symonds, 'Stewart, John (1747–1822)’, Oxford Dictionary of National Biography, Oxford University Press, 2004
John Taylor, "Walking Stewart", Record of My Life, pp. 163–68
External links
John Stewart's "Sensate Matter" in the Early Republic
The Most Unlikely Man to Influence A Generation of Writers: Walking Stewart
1747 births
1822 deaths
18th-century English writers
18th-century English philosophers
19th-century English writers
19th-century English philosophers
Drug-related deaths in London
English philosophers
Materialists
Pantheists | Walking Stewart | Physics | 1,081 |
3,838,003 | https://en.wikipedia.org/wiki/Seismic%20anisotropy | Seismic anisotropy is the directional dependence of the velocity of seismic waves in a medium (rock) within the Earth.
Description
A material is said to be anisotropic if the value of one or more of its properties varies with direction. Anisotropy differs from the property called heterogeneity in that anisotropy is the variation in values with direction at a point while heterogeneity is the variation in values between two or more points.
Seismic anisotropy can be defined as the dependence of seismic velocity on direction or upon angle. General anisotropy is described by a 4th order elasticity tensor with 21 independent elements. However, in practice observational studies are unable to distinguish all 21 elements, and anisotropy is usually simplified. In the simplest form, there are two main types of anisotropy, both of which are called transverse isotropy (it is called transverse isotropy because there is isotropy in either the horizontal or vertical plane) or polar anisotropy. The difference between them is in their axis of symmetry, which is an axis of rotational invariance such that if we rotate the formation about the axis, the material is still indistinguishable from what it was before. The symmetry axis is usually associated with regional stress or gravity.
Vertical transverse isotropy (VTI), transverse isotropy with a vertical axis of symmetry, is associated with layering and shale and is found where gravity is the dominant factor.
Horizontal transverse isotropy (HTI), transverse isotropy with a horizontal axis of symmetry, is associated with cracks and fractures and is found where regional stress is the dominant factor.
The transverse anisotropic matrix has the same form as the isotropic matrix, except that it has five non-zero values distributed among 12 non-zero elements.
Transverse isotropy is sometimes called transverse anisotropy or anisotropy with hexagonal symmetry. In many cases the axis of symmetry will be neither horizontal nor vertical, in which case it is often called "tilted".
History of the recognition of anisotropy
Anisotropy was first recognised in the 19th century following the development of the theory of elastic wave propagation. George Green (1838) and Lord Kelvin (1856) took anisotropy into account in their articles on wave propagation. Anisotropy entered seismology in the late 19th century and was introduced by Maurycy Rudzki. From 1898 till his death in 1916, Rudzki attempted to advance the theory of anisotropy: he attempted to determine the wavefront of a transversely isotropic (TI) medium in 1898, and in 1912 and 1913 he wrote on surface waves in a transversely isotropic half-space and on Fermat's principle in anisotropic media, respectively.
With all these, the advancement of anisotropy was still slow and in the first 30 years (1920–1950) of exploration seismology only a few papers were written on the subject. More work was done by several scientists such as Helbig (1956) who observed while doing seismic work on Devonian schists that velocities along the foliation were about 20% higher than those across the foliation. However the appreciation of anisotropy increased with the proposition of a new model for the generation of anisotropy in an originally isotropic background and a new exploration concept by Crampin (1987). One of the main points by Crampin was that the polarization of three component shear waves carries unique information about the internal structure of the rock through which they pass, and that shear wave splitting may contain information about the distribution of crack orientations.
With these new developments and the acquisition of better and new types of data such as three component 3D seismic data, which clearly show the effects of shear wave splitting, and wide azimuth 3D data which show the effects of azimuthal anisotropy, and the availability of more powerful computers, anisotropy began to have great impact in exploration seismology in the past three decades.
Concept of seismic anisotropy
Since the understanding of seismic anisotropy is closely tied to the shear wave splitting, this section begins with a discussion of shear wave splitting.
Shear waves have been observed to split into two or more fixed polarizations which can propagate in the particular ray direction when entering an anisotropic medium. These split phases propagate with different polarizations and velocities. Crampin (1984), amongst others, gives evidence that many rocks are anisotropic for shear wave propagation. In addition, shear wave splitting is almost routinely observed in three-component VSPs. Such shear wave splitting can be directly analyzed only on three-component geophones recording either in the subsurface, or within the effective shear window at the free surface if there are no near-surface low-velocity layers. Observations of these shear waves show that the orientation and polarization of the first arrival and the delay between the split shear waves reveal the orientation of cracks and the crack density. This is particularly important in reservoir characterization.
In a linearly elastic material, which can be described by Hooke's law as one in which each component of stress is dependent on every component of strain, the following relationship exists:
$\sigma_{ij} = c_{ijkl}\,\varepsilon_{kl}$
where $\sigma_{ij}$ is the stress, $c_{ijkl}$ is the elastic modulus or stiffness constant, and $\varepsilon_{kl}$ is the strain.
The elastic modulus matrix for an anisotropic case is (in Voigt notation)
$C_{VTI} = \begin{pmatrix} C_{11} & C_{11}-2C_{66} & C_{13} & 0 & 0 & 0 \\ C_{11}-2C_{66} & C_{11} & C_{13} & 0 & 0 & 0 \\ C_{13} & C_{13} & C_{33} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{44} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{66} \end{pmatrix}$
The above is the elastic modulus matrix for a vertical transverse isotropic medium (VTI), which is the usual case. The elastic modulus matrix for a horizontal transverse isotropic medium (HTI), with the symmetry axis along $x_1$, is:
$C_{HTI} = \begin{pmatrix} C_{11} & C_{13} & C_{13} & 0 & 0 & 0 \\ C_{13} & C_{33} & C_{33}-2C_{44} & 0 & 0 & 0 \\ C_{13} & C_{33}-2C_{44} & C_{33} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{55} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{55} \end{pmatrix}$
For an anisotropic medium, the directional dependence of the three phase velocities can be obtained by applying the elastic moduli in the wave equation. The direction-dependent wave speeds for elastic waves through the material can be found by using the Christoffel equation and are given by
$V_{qP}(\theta) = \sqrt{\frac{C_{11}\sin^2\theta + C_{33}\cos^2\theta + C_{44} + \sqrt{M(\theta)}}{2\rho}}$
$V_{qSV}(\theta) = \sqrt{\frac{C_{11}\sin^2\theta + C_{33}\cos^2\theta + C_{44} - \sqrt{M(\theta)}}{2\rho}}$
$V_{SH}(\theta) = \sqrt{\frac{C_{66}\sin^2\theta + C_{44}\cos^2\theta}{\rho}}$
$M(\theta) = \left[(C_{11}-C_{44})\sin^2\theta - (C_{33}-C_{44})\cos^2\theta\right]^2 + (C_{13}+C_{44})^2\sin^2 2\theta$
where $\theta$ is the angle between the axis of symmetry and the wave propagation direction, $\rho$ is mass density and the $C_{ij}$ are elements of the elastic stiffness matrix. The Thomsen parameters are used to simplify these expressions and make them easier to understand.
Seismic anisotropy has been observed to be weak, and Thomsen (1986) rewrote the velocities above in terms of their deviation from the vertical velocities as follows:
$V_{qP}(\theta) \approx V_{P0}\,(1 + \delta \sin^2\theta \cos^2\theta + \epsilon \sin^4\theta)$
$V_{qSV}(\theta) \approx V_{S0}\left[1 + \frac{V_{P0}^2}{V_{S0}^2}(\epsilon - \delta)\sin^2\theta \cos^2\theta\right]$
$V_{SH}(\theta) \approx V_{S0}\,(1 + \gamma \sin^2\theta)$
where
$V_{P0} = \sqrt{C_{33}/\rho}, \qquad V_{S0} = \sqrt{C_{44}/\rho}$
are the P and S wave velocities in the direction of the axis of symmetry ($\theta = 0$) (in geophysics, this is usually, but not always, the vertical direction). Note that $\delta$ may be further linearized, but this does not lead to further simplification.
The approximate expressions for the wave velocities are simple enough to be physically interpreted, and sufficiently accurate for most geophysical applications. These expressions are also useful in some contexts where the anisotropy is not weak.
The Thomsen parameters $\epsilon$, $\delta$ and $\gamma$ are three non-dimensional combinations of stiffnesses which reduce to zero in the isotropic case, and are defined as
$\epsilon = \frac{C_{11} - C_{33}}{2 C_{33}}, \qquad \gamma = \frac{C_{66} - C_{44}}{2 C_{44}}, \qquad \delta = \frac{(C_{13} + C_{44})^2 - (C_{33} - C_{44})^2}{2 C_{33}(C_{33} - C_{44})}$
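As an illustration of how these quantities relate, the following Python sketch (hypothetical helper names; NumPy assumed; the stiffnesses and density are invented example values, not measurements) computes the Thomsen parameters from the five VTI stiffnesses and evaluates the weak-anisotropy phase velocities given above:

    import numpy as np

    def thomsen_parameters(c11, c13, c33, c44, c66):
        """Thomsen (1986) parameters from the five VTI stiffnesses (Voigt notation)."""
        eps = (c11 - c33) / (2.0 * c33)
        gamma = (c66 - c44) / (2.0 * c44)
        delta = ((c13 + c44)**2 - (c33 - c44)**2) / (2.0 * c33 * (c33 - c44))
        return eps, delta, gamma

    def weak_vti_velocities(theta, vp0, vs0, eps, delta, gamma):
        """Weak-anisotropy phase velocities vs. angle theta from the symmetry axis."""
        s, c = np.sin(theta), np.cos(theta)
        vp = vp0 * (1.0 + delta * s**2 * c**2 + eps * s**4)
        vsv = vs0 * (1.0 + (vp0 / vs0)**2 * (eps - delta) * s**2 * c**2)
        vsh = vs0 * (1.0 + gamma * s**2)
        return vp, vsv, vsh

    # Illustrative numbers only (stiffnesses in Pa, density in kg/m^3).
    rho = 2500.0
    c11, c13, c33, c44, c66 = 40e9, 12e9, 30e9, 10e9, 14e9
    eps, delta, gamma = thomsen_parameters(c11, c13, c33, c44, c66)
    vp0, vs0 = np.sqrt(c33 / rho), np.sqrt(c44 / rho)
    print(weak_vti_velocities(np.radians(30.0), vp0, vs0, eps, delta, gamma))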
Origin of anisotropy
Anisotropy has been reported to occur in the Earth's three main layers: the crust, mantle and the core.
The origin of seismic anisotropy is non-unique; a range of phenomena may cause Earth materials to display seismic anisotropy. The anisotropy may be strongly dependent on wavelength if it is due to the average properties of aligned or partially aligned heterogeneity. A solid has intrinsic anisotropy when it is homogeneously anisotropic down to the smallest particle size, which may be due to crystalline anisotropy. Relevant crystallographic anisotropy can be found in the upper mantle. When an otherwise isotropic rock contains a distribution of dry or liquid-filled cracks which have a preferred orientation, this is named crack-induced anisotropy. The presence of aligned cracks, open or filled with some different material, is an important mechanism at shallow depth, in the crust. It is well known that the small-scale, or microstructural, factors include (e.g. Kern & Wenk 1985; Mainprice et al. 2003): (1) crystal lattice preferred orientation (LPO) of constituent mineral phases; (2) variations in spatial distribution of grains and minerals; (3) grain morphology and (4) aligned fractures, cracks and pores, and the nature of their infilling material (e.g. clays, hydrocarbons, water, etc.). Because of the overall microstructural control on seismic anisotropy, it follows that anisotropy can be diagnostic for specific rock types, and may thus serve as an indicator of specific sedimentary lithologies within the Earth's crust.
In sedimentary rocks, anisotropy develops during and after deposition. For anisotropy to develop, there needs to be some degree of homogeneity or uniformity from point to point in the deposited clastics. During deposition, anisotropy is caused by the periodic layering associated with changes in sediment type which produces materials of different grain size, and also by the directionality of the transporting medium which tends to order the grains under gravity by grain sorting. Fracturing and some diagenetic processes such as compaction and dewatering of clays, and alteration etc. are post depositional processes that can cause anisotropy.
The importance of anisotropy in hydrocarbon exploration and production
In the past two decades, seismic anisotropy has been gaining attention from academia and industry, due to advances in anisotropy parameter estimation, the transition from post-stack imaging to prestack depth migration, and the wider offset and azimuthal coverage of 3D surveys. Currently, many seismic processing and inversion methods utilize anisotropic models, thus providing a significant enhancement in seismic imaging quality and resolution. The integration of anisotropic velocity models with seismic imaging has reduced uncertainty on internal and bounding-fault positions, thus greatly reducing the risk of investment decisions based heavily on seismic interpretation.
In addition, the establishment of correlations between anisotropy parameters, fracture orientation, and fracture density leads to practical reservoir characterization techniques. With such information on fracture spatial distribution and density, the drainage area of each producing well can be dramatically increased by taking the fractures into account during the drilling decision process. The increased drainage area per well will result in fewer wells, greatly reducing the drilling cost of exploration and production (E&P) projects.
The application of the anisotropy in petroleum exploration and production
Among several applications of seismic anisotropy, the following are the most important: anisotropic parameter estimation, prestack depth anisotropy migration, and fracture characterization based on anisotropy velocity models.
Anisotropy parameter estimation
Anisotropy parameter estimation is fundamental to all other anisotropy applications in the E&P area. In the early days of seismic petroleum exploration, geophysicists were already aware of anisotropy-induced distortion in P-wave imaging (the majority of petroleum exploration seismic surveys), although the distortion is less significant there since the post-stack processing of narrow-azimuth data is not sensitive to velocity. The advancement of seismic anisotropy is largely attributable to Thomsen's work on anisotropy notation and to the discovery of the P-wave time-processing parameter η. These fundamental works made it possible to parametrize transverse isotropic (TI) models with only three parameters, whereas there are five independent stiffness tensor elements in transverse isotropic (VTI or HTI) models. This simplification made the measurement of seismic anisotropy a practical approach.
Most anisotropy parameter estimation work is based on shale and silts, which may be due to the fact that shale and silts are the most abundant sedimentary rocks in the Earth's crust. Also in the context of petroleum geology, organic shale is the source rock as well as seal rocks that trap oil and gas. In seismic exploration, shales represent the majority of the wave propagation medium overlying the petroleum reservoir. In conclusion, seismic properties of shale are important for both exploration and reservoir management.
Seismic velocity anisotropy in shale can be estimated by several methods, including deviated-well sonic logs, walkway VSP, and core measurement. These methods have their own advantages and disadvantages: the walkway VSP method suffers from scaling issues, and core measurement is often impractical for shale, since shale is difficult to core during drilling.
Walkway VSP
The walkway VSP arranges several seismic surface sources at different offsets from the well. Meanwhile, a vertical receiver array with a constant interval between receivers is mounted in a vertical well. The sound arrival times between the multiple surface sources and the receivers at multiple depths are recorded during the measurement. These arrival times are used to derive the anisotropy parameters based on the following equations
$t^2(x) = t_0^2 + \frac{x^2}{V_{nmo}^2} - \frac{2\eta x^4}{V_{nmo}^2\left[t_0^2 V_{nmo}^2 + (1+2\eta)x^2\right]}, \qquad V_{nmo} = V_{P0}\sqrt{1+2\delta}, \qquad \eta = \frac{\epsilon-\delta}{1+2\delta}$
where $t(x)$ is the arrival time from a source at offset $x$, $t_0$ is the arrival time at zero offset, $V_{nmo}$ is the NMO velocity, and $\eta$ is the anellipticity anisotropy parameter.
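A minimal Python sketch of this moveout relation (assuming the nonhyperbolic form given above; the function name and all numerical values are illustrative only):

    import numpy as np

    def nonhyperbolic_traveltime(x, t0, vnmo, eta):
        """Nonhyperbolic P-wave moveout time for a VTI medium (form given above)."""
        t2 = (t0**2
              + x**2 / vnmo**2
              - 2.0 * eta * x**4
                / (vnmo**2 * (t0**2 * vnmo**2 + (1.0 + 2.0 * eta) * x**2)))
        return np.sqrt(t2)

    offsets = np.linspace(0.0, 3000.0, 7)                        # offsets in metres
    print(nonhyperbolic_traveltime(offsets, 1.5, 2200.0, 0.1))   # t0 = 1.5 s, Vnmo = 2200 m/s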
Core measurement
Another technique used to estimate the anisotropy parameters is to measure them directly on core, which is extracted through a special hollow drill bit during the drilling process. Since coring a sample generates a large extra cost, only a limited number of core samples can be obtained for each well. Thus the anisotropy parameters obtained through the core measurement technique only represent the anisotropy of the rock near the borehole at a few specific depths, so this technique often provides little help for field seismic survey applications. The measurements on each shale plug require at least one week.
As described above, wave propagation in a vertically transverse isotropic medium can be described with five elastic constants, and ratios among these parameters define the rock anisotropy. These anisotropy parameters can be obtained in the laboratory by measuring wave propagation velocities with ultrasonic transducer systems at variable saturation and pressure conditions. Usually, three directions of wave propagation on core samples are the minimum requirement to estimate the five elastic coefficients of the stiffness tensor. Each direction in a core plug measurement yields three velocities (one P and two S).
The variation of wave propagation direction can be achieved by either cutting three samples at 0°, 45° and 90° from the cores or by using one core plug with transducers attached at these three angles. Since most shales are very friable and fissured, it is often difficult to cut shale core plug. Its edges break off easily. Thus the cutting sample method can only be used for hard, competent rocks.
Another way to get the wave propagation velocity in three directions is to arrange ultrasonic transducers at several specific locations on the core sample. This method avoids the difficulties encountered during the cutting of shale core samples. It also reduces the time of measurement by two thirds, since three pairs of ultrasonic transducers work at the same time.
Once the velocities in the three directions are measured by one of the above two methods, the five independent elastic constants are given by the following equations:
$C_{33} = \rho V_P^2(0°), \qquad C_{11} = \rho V_P^2(90°), \qquad C_{44} = \rho V_S^2(0°), \qquad C_{66} = \rho V_{SH}^2(90°)$
$C_{13} = -C_{44} + \sqrt{4\rho^2 V_P^4(45°) - 2\rho V_P^2(45°)\left(C_{11} + C_{33} + 2C_{44}\right) + \left(C_{11} + C_{44}\right)\left(C_{33} + C_{44}\right)}$
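The following Python sketch (hypothetical helper name; the velocities and density are invented laboratory-style values, not real measurements) recovers the five stiffnesses from velocities measured at 0°, 45° and 90° to the symmetry axis, assuming the expressions above:

    import numpy as np

    def vti_constants_from_core(rho, vp0, vp45, vp90, vs0, vsh90):
        """Five VTI stiffnesses from ultrasonic velocities at 0, 45 and 90 degrees."""
        c33 = rho * vp0**2
        c11 = rho * vp90**2
        c44 = rho * vs0**2
        c66 = rho * vsh90**2
        m = (4.0 * rho**2 * vp45**4
             - 2.0 * rho * vp45**2 * (c11 + c33 + 2.0 * c44)
             + (c11 + c44) * (c33 + c44))
        c13 = -c44 + np.sqrt(m)
        return c11, c13, c33, c44, c66

    # Illustrative values only (velocities in m/s, density in kg/m^3).
    print(vti_constants_from_core(2500.0, 3400.0, 3550.0, 3900.0, 1900.0, 2100.0))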
The P-wave anisotropy of a VTI medium can be described using Thomsen's parameters $\epsilon$ and $\delta$. The parameter $\epsilon$ quantifies the velocity difference for wave propagation along and perpendicular to the symmetry axis, while $\delta$ controls the P-wave propagation for angles near the symmetry axis.
Deviated well sonic log
The last technique that can be used to measure seismic anisotropy relies on the sonic logging information of a deviated well. In a deviated well, the wave propagation velocity is higher than the wave propagation velocity in a vertical well at the same depth. This difference in velocity between the deviated well and a vertical well reflects the anisotropy parameters of the rocks near the borehole. The details of this technique are shown in an example below.
Anisotropic prestack depth migration
In the situation of complex geology, e.g. faulting, folding, fracturing, salt bodies, and unconformities, pre-stack migration (PreSM) is used due to its better resolution in such settings. In PreSM, all traces are migrated before being moved to zero-offset. As a result, much more information is used, which results in a much better image, along with the fact that PreSM honours velocity changes more accurately than post-stack migration. PreSM is extremely sensitive to the accuracy of the velocity field, so inadequate isotropic velocity models are not suitable for prestack depth migration. P-wave anisotropic prestack depth migration (APSDM) can produce a seismic image that is very accurate in depth and space. As a result, unlike isotropic PSDM, it is consistent with well data and provides an ideal input for reservoir characterization studies. However, this accuracy can only be achieved if correct anisotropy parameters are used. These parameters cannot be estimated from seismic data alone. They can only be determined with confidence through analysis of a variety of geoscientific material – borehole data and geological history.
During recent years, the industry has started to see the practical use of anisotropy in seismic imaging. Case studies illustrate this integration of the geosciences and show that much better accuracy is being achieved. The logical conclusion is that this integrated approach should extend the use of anisotropic depth imaging from complex geology only to routine application on all reservoirs.
Fracture characterization
After considering applications of anisotropy that improve seismic imaging, two approaches for exploiting anisotropy for the analysis of fractures in the formation are worth discussing. One uses azimuthal variations in the amplitude versus offset (AVO) signature when the wave is reflected from the top or base of an anisotropic material, and a second exploits the polarizing effect that the fractures have on a transmitted shear-wave. In both cases, the individual fractures are below the resolving power of the seismic signal and it is the cumulative effect of the fracturing that is recorded. Based on the idea behind them, both approaches can be divided into two steps. The first step is to obtain the anisotropy parameters from the seismic signals, and the second step is to retrieve the information about the fractures from the anisotropy parameters based on a fracture-induced anisotropy model.
Fractures-azimuthal variations
Aligned subseismic-scale fracturing can produce seismic anisotropy (i.e., seismic velocity varies with direction) and leads to measurable directional differences in traveltimes and reflectivity.
If the fractures are vertically aligned, they will produce azimuthal anisotropy (the simplest case being horizontal transverse isotropy, or HTI) such that reflectivity of an interface depends on azimuth as well as offset. If either of the media bounding the interface is azimuthally anisotropic, the AVO will have an azimuthal dependence. The P-P wave reflection coefficient has the following relation with azimuth if anisotropy exists in the layers:
$R(i, \phi) \approx A + \left[B^{iso} + B^{ani}\cos^2\left(\phi - \phi_{sym}\right)\right]\sin^2 i$
where $i$ is the angle of incidence, $\phi$ is the azimuth from the data acquisition grid, $\phi_{sym}$ is the azimuth of the symmetry plane, and the terms $A$, $B^{iso}$ and $B^{ani}$ are coefficients describing the isotropic and anisotropic parts of the AVO response.
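A small Python sketch evaluating this azimuth-dependent reflectivity (the functional form follows the approximation above; the function name and all coefficient values are illustrative assumptions):

    import numpy as np

    def rpp_azimuthal(inc, azim, a, b_iso, b_ani, phi_sym):
        """P-P reflectivity R = A + [B_iso + B_ani cos^2(phi - phi_sym)] sin^2(i)."""
        return a + (b_iso + b_ani * np.cos(azim - phi_sym)**2) * np.sin(inc)**2

    azimuths = np.radians(np.arange(0.0, 180.0, 30.0))
    # Incidence angle 25 degrees; coefficients chosen only for illustration.
    print(rpp_azimuthal(np.radians(25.0), azimuths, 0.08, -0.15, 0.06, np.radians(40.0)))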
Fractures: shear-wave splitting
The behavior of shear waves as they pass through anisotropic media has been recognized for many years, with laboratory and field observations demonstrating how the shear wave splits into two polarized components with their planes aligned parallel and perpendicular to the anisotropy. For a fractured medium, the faster shear wave is generally aligned with the strike direction and the time delay between the split shear waves related to the fracture density and path length traveled. For layered medium, the shear wave polarized parallel to the layering arrives first.
Examples of the application of anisotropy
Example of anisotropy in petroleum E&P
Two examples are discussed here to show the application of anisotropy in the petroleum E&P area. The first relates to anisotropy parameter estimation via a deviated-well sonic logging tool, and the second reflects the image quality improvement achieved by prestack depth migration technology.
Example of deviated well sonic logging
In this case, the sonic velocity in a deviated well is obtained with a dipole sonic logging tool. The formation is mostly composed of shale. In order to use the TI model, several assumptions are made:
Rock should be in normally pressured regime.
Rock should have similar burial history.
Satisfying the above conditions, the following equation holds for a TI model:
$V_P(\theta) = V_{P0}\,(1 + \delta \sin^2\theta \cos^2\theta + \epsilon \sin^4\theta)$
where $\theta$ is the deviation angle of the well, and $\delta$, $\epsilon$ are the anisotropy parameters.
The following plot shows a typical velocity distribution versus density in a deviated well. The color of each data point represents the frequency of this data point. The red color means a high frequency while the blue color represents a low frequency. The black line shows a typical velocity trend without the effect of anisotropy. Because of the anisotropy effect, the sonic velocity is higher than the trend line.
From the well logging data, a plot of velocity versus deviation angle can be drawn. On the basis of this plot, a non-linear regression gives estimates of $\delta$ and $\epsilon$. The following plot shows the non-linear regression and its result.
Substituting the estimated $\delta$ and $\epsilon$ into the following equation, the corrected vertical velocity $V_{P0}$ can be obtained:
$V_{P0} = \frac{V_P(\theta)}{1 + \delta \sin^2\theta \cos^2\theta + \epsilon \sin^4\theta}$
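A minimal Python sketch of this workflow (the deviation angles and velocities are invented sample values, and SciPy's curve_fit is used here as one possible choice for the non-linear regression), fitting the anisotropy parameters and then applying the correction:

    import numpy as np
    from scipy.optimize import curve_fit

    def vp_deviated(theta, vp0, delta, eps):
        """Thomsen weak-anisotropy P velocity along a well deviated by theta (radians)."""
        s, c = np.sin(theta), np.cos(theta)
        return vp0 * (1.0 + delta * s**2 * c**2 + eps * s**4)

    # Hypothetical sonic-log samples: deviation angles (degrees) and velocities (m/s).
    theta = np.radians(np.array([20.0, 35.0, 50.0, 65.0, 80.0]))
    vp_meas = np.array([3060.0, 3160.0, 3290.0, 3420.0, 3510.0])

    # Non-linear regression for vp0, delta and eps, then the correction step.
    (vp0, delta, eps), _ = curve_fit(vp_deviated, theta, vp_meas, p0=(3000.0, 0.1, 0.2))
    vp_corrected = vp_meas / (1.0 + delta * np.sin(theta)**2 * np.cos(theta)**2
                              + eps * np.sin(theta)**4)
    print(vp0, delta, eps, vp_corrected)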
After the above correction calculation, the corrected velocity is plotted against density in the following plot. As can be seen in the plot, most of the data points fall on the trend line, which validates the estimates of the anisotropy parameters.
Example of prestack depth migration Imaging
In this case, the operator conducted several seismic surveys on a gas field in the North Sea over the period 1993–1998. The early surveys did not take anisotropy into account, while the later survey employed prestack depth migration imaging. This PSDM was done with a commercial seismic package developed by Total. The following two plots clearly reveal the resolution improvement of the PSDM method. The top plot is a conventional 3D survey without the anisotropy effect. The bottom plot used the PSDM method. As can be seen in the bottom plot, more small structural features are revealed due to the reduction of error and improved resolution.
Limitations of seismic anisotropy
Seismic anisotropy analysis relies on shear waves; shear waves carry rich information, but this can sometimes impede their utilization. Shear-wave surveys for anisotropy require multicomponent (usually three-component) geophones oriented at angles, which are more expensive than the widely used vertically oriented single-component geophones. However, while expensive, three-component seismometers are much more powerful in their ability to collect valuable information about the Earth that vertical-component seismometers simply cannot. While seismic waves do attenuate, large earthquakes (moment magnitude > 5) have the ability to produce observable shear waves. The second law of thermodynamics ensures a higher attenuation of reflected shear-wave energy, which tends to impede the utilization of shear-wave information for smaller earthquakes.
Crustal anisotropy
In the Earth's crust, anisotropy may be caused by preferentially aligned joints or microcracks, by layered bedding in sedimentary formations, or by highly foliated metamorphic rocks. Crustal anisotropy resulting from aligned cracks can be used to determine the state of stress in the crust, since in many cases, cracks are preferentially aligned with their flat faces oriented in the direction of minimum compressive stress. In active tectonic areas, such as near faults and volcanoes, anisotropy can be used to look for changes in preferred orientation of cracks that may indicate a rotation of the stress field.
Both seismic P-waves and S-waves may exhibit anisotropy. For both, the anisotropy may appear as a (continuous) dependence of velocity upon the direction of propagation. For S-waves, it may also appear as a (discrete) dependence of velocity upon the direction of polarization. For a given direction of propagation in any homogeneous medium, only two polarization directions are allowed, with other polarizations decomposing trigonometrically into these two. Hence, shear waves naturally "split" into separate arrivals with these two polarizations; in optics this is called birefringence.
Crustal anisotropy is very important in the production of oil reservoirs, as the seismically fast directions can indicate preferred directions of fluid flow.
In crustal geophysics, the anisotropy is usually weak; this enables a simplification of the expressions for seismic velocities and reflectivities, as functions of propagation (and polarization) direction. In the simplest geophysically plausible case, that of polar anisotropy, the analysis is most conveniently done in terms of Thomsen Parameters.
Mantle anisotropy
In the mantle, anisotropy is normally associated with crystals (mainly olivine) aligned with the mantle flow direction called lattice preferred orientation (LPO). Due to their elongate crystalline structure, olivine crystals tend to align with the flow due to mantle convection or small scale convection. Anisotropy has long been used to argue whether plate tectonics is driven from below by mantle convection or from above by the plates, i.e. slab pull and ridge push.
The favored methods for detecting seismic anisotropy are shear wave splitting, seismic tomography of surface waves and body waves, and converted-wave scattering in the context of a receiver function. In shear-wave splitting, the S wave splits into two orthogonal polarizations, corresponding to the fastest and slowest wavespeeds in that medium for that propagation direction. The period range for mantle splitting studies is typically 5–25 s. In seismic tomography, one must have a spatial distribution of seismic sources (earthquakes or man-made blasts) to generate waves at multiple wave-propagation azimuths through a 3-D medium. For receiver functions, the P-to-S converted wave displays harmonic variation with earthquake back azimuth when the material at depth is anisotropic. This method allows determination of layers of anisotropic material at depth beneath a station.
In the transition zone, wadsleyite and/or ringwoodite could be aligned in LPO. Below the transition zone, the three main minerals, periclase, silicate perovskite (bridgmanite), and post-perovskite are all anisotropic and could be generating anisotropy observed in the D" region (a couple hundred kilometer thick layer about the core-mantle boundary).
References
Sources
Helbig, K., Thomsen, L., 75-plus years of anisotropy in exploration and reservoir seismics: A historical review of concepts and methods: Geophysics. VOL. 70, No. 6 (November–December 2005): p. 9ND–23ND http://www.geo.arizona.edu/geo5xx/geo596f/Readings/Helbig%20and%20Thomsen,%202005,%20historical%20review%20anisotropy%201.pdf
Crampin, S., 1984, Evaluation of anisotropy by shear wave splitting: Applied Seismic Anisotropy: Theory, Background, and Field Studies, Geophysics Reprint series, 20, 23–33.
Ikelle, L.T., Amundsen, L., Introduction to Petroleum Seismology, Investigations in Geophysics series No.12.
Thomsen, L., 1986, Weak elastic anisotropy: Applied Seismic Anisotropy: Theory, Background, and Field Studies, Geophysics Reprint series, 20, 34–46
Anderson et al., Oilfield Anisotropy: Its Origins and Electrical Characteristics: Oil field review, 48–56. https://www.slb.com/~/media/Files/resources/oilfield_review/ors94/1094/p48_56.pdf
Thomsen, L., 1986, Weak elastic anisotropy: Geophysics, 51, 1954–1966.
Tsvankin, I., 1997, Anisotropic parameters and P-wave velocity for orthorhombic media: Geophysics, 62, 1292–1309.
Tsvankin, I., 2001, Seismic signatures and analysis of reflection data in anisotropic media: Elsevier Science Publ.
Stephen A. H. and J-Michael K. GEOPHYSICS, VOL. 68, NO. 4, P1150–1160. Fracture characterization at Valhall: Application of P-wave amplitude variation with offset and azimuth (AVOA) analysis to a 3D ocean-bottom data set
Tushar P. and Robert V. SPE 146668. Improved Reservoir Characterization through Estimation of Velocity Anisotropy in Shales.
Jeffrey S., Rob R., Jean A., et al. www.cgg.com/technicalDocuments/cggv_0000000409.pdf Reducing Structural Uncertainties Through Anisotropic Prestack Depth Imaging: Examples from the Elgin/Franklin/Glenelg HP/HT Fields Area, Central North Sea
Helbig, K., 1984, Shear waves – what they are and how they are and how they can be used: Applied Seismic Anisotropy: Theory, Background, and Field Studies, Geophysics Reprint series, 20, 5–22.
External links
http://www1.gly.bris.ac.uk/~wookey/MMA/index.htm
https://web.archive.org/web/20050909171919/http://geophysics.asu.edu/anisotropy/
http://www.geo.arizona.edu/geo5xx/geo596f/Readings/Helbig%20and%20Thomsen,%202005,%20historical%20review%20anisotropy%201.pdf
https://www.slb.com/~/media/Files/resources/oilfield_review/ors94/1094/p48_56.pdf
Elasticity (physics)
Petroleum geology
Geophysics | Seismic anisotropy | Physics,Chemistry,Materials_science | 6,318 |
3,689,668 | https://en.wikipedia.org/wiki/Protein-fragment%20complementation%20assay | Within the field of molecular biology, a protein-fragment complementation assay, or PCA, is a method for the identification and quantification of protein–protein interactions. In the PCA, the proteins of interest ("bait" and "prey") are each covalently linked to fragments of a third protein (e.g. DHFR, which acts as a "reporter"). Interaction between the bait and the prey proteins brings the fragments of the reporter protein in close proximity to allow them to form a functional reporter protein whose activity can be measured. This principle can be applied to many different reporter proteins and is also the basis for the yeast two-hybrid system, an archetypical PCA assay.
Split protein assays
Any protein that can be split into two parts and reconstituted non-covalently to form a functional protein may be used in a PCA. The two fragments, however, have low affinity for each other and must be brought together by other interacting proteins fused to them (often called "bait" and "prey", since the bait protein can be used to identify a prey protein). The protein that produces a detectable readout is called the "reporter". Usually, enzymes that confer resistance to nutrient deprivation or antibiotics, such as dihydrofolate reductase or beta-lactamase respectively, or proteins that give colorimetric or fluorescent signals, are used as reporters. When fluorescent proteins are reconstituted, the PCA is called a bimolecular fluorescence complementation assay. The following proteins have been used in split protein PCAs:
Beta-lactamase
Dihydrofolate reductase (DHFR)
Focal adhesion kinase (FAK)
Gal4, a yeast transcription factor (as in the classical yeast two-hybrid system)
GFP (split-GFP), e.g. EGFP (enhanced green fluorescent protein)
Horseradish peroxidase
Infrared fluorescent protein IFP1.4, an engineered chromophore-binding domain (CBD) of a bacteriophytochrome from Deinococcus radiodurans
LacZ (beta-galactosidase)
Luciferase, including ReBiL (recombinase enhanced bimolecular luciferase) and Gaussia princeps luciferase. Commercial products using luciferase include NanoLuc and NanoBIT. A modification has also been developed for lipid droplet-associated interactions.
TEV (Tobacco etch virus protease)
Ubiquitin
Genome-wide applications
The methods mentioned above have been applied to whole genomes, e.g. those of yeast or the syphilis bacterium.
References
Further reading
Molecular biology
Laboratory techniques | Protein-fragment complementation assay | Chemistry,Biology | 556 |
78,329 | https://en.wikipedia.org/wiki/AmigaGuide | AmigaGuide is a hypertext document file format designed for the Amiga. It was developed in the early 1990s and released to the market in 1992. Files are stored in ASCII so it is possible to read and edit a file without the need for special software.
With Workbench 2.1, an AmigaGuide system for OS inline help files and for reading manuals with hypertext formatting elements was introduced in AmigaOS, based on a viewer called simply "AmigaGuide", which was included as a standard feature of the system. Users with earlier versions of Workbench could view the files by obtaining the AmigaGuide 34 program and library, distributed in public domain collections of floppy disks (for example the Fred Fish collection) or downloadable directly from the Aminet software repository. Starting from AmigaOS 3.0, the AmigaGuide tool was replaced by the more complete and flexible MultiView.
AmigaGuide and MultiView
AmigaGuide is the default tool for viewing AmigaGuide files in AmigaOS 2.1, and it also works as a basic text viewer for ASCII documents. It can handle multiple files thanks to cross-reference tables called XREF.
MultiView is essentially an empty container providing a common GUI for the various datatypes: it is opened as the default tool when a media file (including an AmigaGuide file) is invoked by mouse click and recognized by the corresponding installed datatype.
AmigaGuide readers on various platforms
Although the AmigaGuide format is almost solely used for documenting Amiga programs, viewers are available for several other platforms:
Java - JAGUaR
DOS - AGView
Windows - AGWViewer, WinGuide (LHA archive), WinGuide (ZIP archive)
Linux - AGReader
Macintosh - Grotag, a small free open-source viewer of AmigaGuide files for Macintosh computers.
Syntax
An AmigaGuide document is a simple ASCII-formatted document, so it can be edited by any normal text editor and viewed by any text reader software.
AmigaGuide commands all begin with the '@' (pronounced 'at') symbol. To be recognized as an AmigaGuide document, the first line should include this text:
@database Amigaguide.guide
There are three categories of commands: Global, Node, and Attributes. Global commands are usually specified at the beginning of the document, before any nodes are defined, and apply to all the nodes in the document. Technically, they can be anywhere. Many commands can be used both globally and in nodes.
Node commands are usable inside a node (after an '@NODE' and before an "@ENDNODE"), and affect only the node in which they are used.
Attributes may be specified anywhere in a normal line. In addition to the '@' symbol, attributes always use a pair of braces ('{' and '}') to enclose the attribute name and possibly additional arguments.
Main commands
The commands "INDEX", "HELP", "NEXT", "PREV", and "TOC" and the all-purpose hypertext link specify other nodes to jump to. They all support the naming of nodes within the current document, but they also all support a path along with that name which lets the node be located in any AmigaGuide document.
These will be shown as simple embossed, squared text buttons on the page in MultiView.
External links
To access a node in another document, it is simply required to put an AmigaDOS file path before the node name. From this point of view, AmigaGuide is a very simple hypertext language.
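As a rough illustration of the syntax described above (the file name, node names, and link targets below are invented for this example and are not part of any standard distribution), a minimal two-node document might look like this:
@database example.guide
@node Main "Example Guide"
Welcome. See @{"the next page" link "Page2"} for details,
or jump to @{"another file" link "Docs:other.guide/Main"}.
@endnode
@node Page2 "Second Page"
@{b}Bold text@{ub} and a link back to the @{"main page" link "Main"}.
@endnode
Each @{"..." link "..."} attribute is rendered as a clickable button, and the second link shows the AmigaDOS path form described above for reaching a node in another document.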
Multimedia
Since AmigaOS 3.0, the user can jump to any file that is supported by Amiga datatypes – pictures, animations, anything. The MultiView program handles this through its internal support for the Amiga datatypes standard. The user must still give a "node" name, even though the file is not an AmigaGuide file and has no "nodes" – so the syntax is:
"main": "name_of_picture.iff/main".
References
External links
TAWS online viewer by drag'n'dropping an AmigaGuide file onto its window.
AmigaGuide Tech Sheet AmigaGuide overview by the inventor.
AmigaGuide reference AmigaGuide tags and commands explained, organised by introduction version; open source AmigaGuide to HTML converter, and security inspector
AmigaGuide manual (in AmigaGuide format)
AmigaGuide V34 distribution archive (Aminet download for AmigaOS)
How to create an AmigaGuide (on EmuUnlimited site)
AmigaGuide AmigaDOS command in Guida rapida all'AmigaDOS from Amiga Magazine Italia, 1996 (in Italian), on a site preserving online all issues of this magazine.
Computer file formats
AmigaOS
Hypertext
MorphOS | AmigaGuide | Technology | 970 |
46,871,975 | https://en.wikipedia.org/wiki/Phlegmacium%20brunneiaurantium | Phlegmacium brunneiaurantium is a species of fungus in the family Cortinariaceae.
Taxonomy
It was originally described in 2014 by the mycologists Ilkka Kytövuori, Kare Liimatainen and Tuula Niskanen, who classified it as Cortinarius brunneiaurantium. It was placed in the subgenus Phlegmacium of the large mushroom genus Cortinarius.
In 2022 the species was transferred from Cortinarius and reclassified as Phlegmacium brunneiaurantium based on genomic data.
Habitat and distribution
Found only in southern Finland, it grows in deciduous forests; it was described as new to science in 2014. Close relatives include C. sobrius, C. balteatus, C. brunneoviolaceus, C. pseudonaeovosus and C. clarobaltoides var. longispermus.
See also
List of Cortinarius species
References
External links
brunneiaurantius
Fungi described in 2014
Fungi of Finland
Fungus species | Phlegmacium brunneiaurantium | Biology | 227 |
49,575,179 | https://en.wikipedia.org/wiki/Noroxymorphone | Noroxymorphone is an opioid that is a metabolite of both oxymorphone and oxycodone and is manufactured specifically as an intermediate in the production of narcotic antagonists such as naltrexone. It is a potent agonist of the μ-opioid receptor, but it crosses the blood-brain barrier into the central nervous system poorly and, for this reason, has only minimal analgesic activity.
In the United States, noroxymorphone is controlled as a Schedule II narcotic controlled substance with an ACSCN of 9637; in 2014 the DEA set annual aggregate manufacturing quotas of 17,500 kg for conversion and 1,262.5 kg for sale. In other countries, it may be similarly controlled, controlled at a lower level, or regulated in another way.
See also
Oxymorphone hydrazone
Oxymorphol - a metabolite of oxymorphone and an intermediate in the creation of hydromorphone
Hydromorphone
Oxycodone
Norbuprenorphine
Norbinaltorphimine
References
4,5-Epoxymorphinans
Semisynthetic opioids
German inventions
Hydroxyarenes
Ketones
Tertiary alcohols
Ethers
Mu-opioid receptor agonists
Euphoriants | Noroxymorphone | Chemistry | 280 |
39,316,600 | https://en.wikipedia.org/wiki/Manson%20Medal | The Manson Medal (full name Sir Patrick Manson Medal, originally the Manson Memorial Medal), named in honour of Sir Patrick Manson, is the highest accolade the Royal Society of Tropical Medicine and Hygiene awards. Started in 1923, it is awarded triennially to an individual whose contribution to tropical medicine or hygiene is deemed worthy by the council.
Patrick Manson was a pioneer of the medical science of tropical medicine. His discoveries concerning parasitic infections such as lymphatic filariasis, schistosomiasis (the parasite Schistosoma mansoni) and sparganosis, and his contribution to malaria research, earned him the title "father of tropical medicine." Soon after his death in 1922, the Royal Society of Tropical Medicine and Hygiene decided to create a new medal in his honour. The first Manson Medal was awarded to Sir David Bruce in 1923. The 2022 Manson Medal was awarded to Sir Alimuddin Zumla, the first time in a hundred years that it was awarded to an ethnic minority scientist.
History
Background
Sir Patrick Manson was a Scottish physician who made important discoveries in parasitology. Working as a medical officer to the Chinese Imperial Maritime Customs at Amoy, in 1887, he discovered that the disease lymphatic filariasis (notably manifesting as elephantiasis) was due to a tiny roundworm (now called Wuchereria bancrofti) that was transmitted by the bite of a mosquito (Culex fatigans, now Culex quinquefasciatus). This was the first discovery that certain diseases could be transmitted by insects, establishing the field of vector biology. In 1902, he discovered the species of blood fluke, Schistosoma, that caused intestinal bilharziasis. The first Schistosoma species, S. haematobium, which causes urinary bilharziasis, was discovered by the German physician Theodor Bilharz in 1851. Louis Westenra Sambon gave the name of the second species, Schistosomum mansoni, in 1907 in honour of the discoverer. In 1882, Manson discovered sparganosis, a parasitic infection caused by the tapeworm Spirometra.
In 1894, Manson formulated the mosquito-malaria theory to explain the hitherto unknown process of the transmission of malaria, one of the deadliest parasitic diseases in humans. Based on his experiences in parasitic infections, he predicted that malarial parasites were protozoans and that they were transmitted by mosquitos. The theory was experimentally proved by Ronald Ross in India who received the Nobel Prize in Physiology or Medicine in 1902 for the discovery. For his contributions, Manson had been recognised as the "father of tropical medicine."
Establishment
The Royal Society of Tropical Medicine and Hygiene (RSTMH) was founded in 1907 by Sir James Cantlie and George Carmichael Low. Manson became the first elected president of the society, serving from 1907 to 1909. Sir William Boog Leishman, Major-General of the Army Medical Services, felt that the London School of Hygiene and Tropical Medicine, an institute Manson had established, should contain a respectable portrait of its founder. In 1921, Leishman collected donations from friends and admirers as the Portrait Fund. There was a surplus of funds after completion of the project. After Manson's death in 1922, the surplus money was given to the RSTMH to institute an award for scientists with outstanding contributions to tropical medicine and hygiene.
On 26 September 1922, the first Manson Memorial Medal (as an honorary award) was given to Lady Manson (Henrietta Isabella Manson) in recognition of her support to Manson throughout the latter's career. The medal was made in bronze having Manson's portrait on one side and the reverse an inscription, "London School of Tropical Medicine."
Sir Charles Scott Sherrington, the president of the Royal Society, announced on 30 November 1922: "The Manson Memorial Medal, this year instituted there for triennial award to work of special distinction in Tropical Medicine, is a tribute to Manson's work of example and leadership in that field of medical science."
First medal and modifications
After the first medal to Lady Manson, RSTMH decided to change the inscription to "Tropical Medicine and Hygiene." The first official medal was given to Sir David Bruce in 1923. Bruce had made pioneering studies and discoveries in tropical medicine. In 1886 he led the Malta Fever Commission that investigated an outbreak of Malta fever (later eponymously called brucellosis) in Malta. He discovered that the disease was due to a bacterium, later named Brucella. In 1894, he discovered a protozoan parasite (later named after him as Trypanosoma brucei) that caused animal sleeping sickness (nagana) in Zululand. Then he led Sleeping Sickness Commission in 1902 to investigate the cause of human sleeping sickness. His team discovered that the infection was transmitted by the tsetse fly (Glossina palpalis).
The inscription had been changed to "Tropical Medicine. A.D. 1922" to commemorate the death year of Manson.
Recipients
The Manson Medal has been awarded every three years since 1923.
See also
List of medicine awards
List of prizes named after people
References
British science and technology awards
Medicine awards
Awards established in 1923
1923 establishments in the United Kingdom
Royal Society of Tropical Medicine and Hygiene | Manson Medal | Technology | 1,079 |
3,522,161 | https://en.wikipedia.org/wiki/Askaryan%20radiation | Askaryan radiation, also known as the Askaryan effect, is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which contains a charge anisotropy and thus emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. The signal is a result of the Cherenkov radiation from the individual particles in the shower. Wavelengths longer than the extent of the shower interfere constructively and thus create a radio or microwave signal which is strongest at the Cherenkov angle. The effect is named after Gurgen Askaryan, a Soviet-Armenian physicist who postulated it in 1962.
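For orientation, the coherent emission peaks near the ordinary Cherenkov angle θc; for a particle with speed ratio β = v/c in a medium of refractive index n, the standard Cherenkov relation (a general result, not specific to Askaryan's work) gives
cos θc = 1 / (n β)
so in ice, with n ≈ 1.78 at radio frequencies, a relativistic shower (β ≈ 1) radiates most strongly at roughly 56° from the shower axis.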
The radiation was first observed experimentally in 2000, 38 years after its theoretical prediction. So far the effect has been observed in silica sand, rock salt, ice, and Earth's atmosphere.
The effect is of primary interest for using bulk matter to detect ultra-high-energy neutrinos. The Antarctic Impulsive Transient Antenna (ANITA) experiment uses antennas attached to a balloon flying over Antarctica to detect the Askaryan radiation produced by showers of particles when cosmic neutrinos interact in the ice. Several experiments have also used the Moon as a neutrino detector based on detection of Askaryan radiation.
See also
Cherenkov radiation
References
External links
RADHEP-2000 Write-ups
Physical phenomena
Particle physics | Askaryan radiation | Physics | 297 |
36,236,625 | https://en.wikipedia.org/wiki/Metal%20carbonyl%20hydride | Metal carbonyl hydrides are complexes of transition metals with carbon monoxide and hydride as ligands. These complexes are useful in organic synthesis as catalysts in homogeneous catalysis, such as hydroformylation.
Preparation
Walter Hieber et al. prepared the first metal carbonyl hydride in 1931 by the so-called Hieber base reaction of metal carbonyls. In this reaction a hydroxide ion attacks the carbon monoxide ligand of a metal carbonyl such as iron pentacarbonyl nucleophilically to form a metallacarboxylic acid. This intermediate releases carbon dioxide in a second step, giving the iron tetracarbonyl hydride anion. The synthesis of cobalt tetracarbonyl hydride (HCo(CO)4) proceeds in the same way.
Fe(CO)5 + NaOH → Na[Fe(CO)4CO2H]
Na[Fe(CO)4CO2H] → Na[HFe(CO)4] + CO2
A further synthetic route is the reaction of the metal carbonyl with hydrogen. The protonation of metal carbonyl anions, e.g. [Co(CO)4]−, also leads to the formation of metal carbonyl hydrides.
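For example, written schematically in the same style as the equations above, protonation of the tetracarbonylcobaltate anion gives cobalt tetracarbonyl hydride:
[Co(CO)4]− + H+ → HCo(CO)4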
Properties
The neutral metal carbonyl hydrides are often volatile and can be quite acidic. The hydrogen atom is bonded directly to the metal. In the cobalt complex, the metal-hydrogen bond length is 114 pm, while the metal-carbon bond lengths are 176 pm for the axial ligands and 182 pm for the equatorial ligands.
A direct metal-hydrogen bond was suspected by Hieber for H2Fe(CO)4. A number of metal carbonyl hydrides have been characterized by X-ray crystallography and neutron diffraction. Nuclear magnetic resonance spectroscopy has also proved to be a useful characterization tool.
Applications and occurrence
Metal carbonyl hydrides are used as catalysts in the hydroformylation of olefins. The catalyst is usually formed in situ in a reaction of a metal salt precursor with the syngas. The hydroformylation starts with the generation of a coordinatively unsaturated 16-electron metal carbonyl hydride complex like HCo(CO)3 or HRh(CO)(PPh3)2 by dissociation of a ligand. Such complexes bind olefins in a first step via π-complexation, thus beginning the transformation of the alkene to the aldehyde.
Iron carbonyl hydrides occur in nature at the active sites of hydrogenase enzymes.
Further reading
References
External links
Carbonyl complexes
Organometallic chemistry
Transition metals
Hydrido complexes | Metal carbonyl hydride | Chemistry | 566 |
42,666 | https://en.wikipedia.org/wiki/Zerg | The Zerg are a fictional race of insectoid aliens obsessed with assimilating other races into their swarm in pursuit of genetic perfection, and the overriding antagonists for much of the StarCraft franchise. Unlike the fictional universe's other primary races, the Protoss and Terrans, the Zerg lack technological inclination. Instead, they "force-evolve" genetic traits by directed mutation in order to match such technology. Operating as a hive mind-linked "chain of command", the Zerg strive for "genetic perfection" by assimilating the unique genetic code of advanced species deemed "worthy" into their own gene pool, creating numerous variations of specialized strains of Zerg gifted with unique adaptations. Despite being notoriously cunning and ruthlessly efficient, the majority of Zerg species have low intelligence, becoming mindless beasts if not connected to a "hive-cluster" or a "command entity".
As with the other races, the Zerg are the subject of a full single-player campaign in each of the series' real-time strategy video games. Zerg units are designed to be cost-efficient and fast to produce, encouraging players to overwhelm their opponents with sheer numerical advantage. Since the release of StarCraft, the Zerg have become a gaming icon, described by PC Gamer UK as "the best race in strategy history". The term "Zerg Rush", or "zerging", is now commonly used to describe sacrificing economic development in favor of using many cheap, yet weak units to overwhelm an enemy by attrition or sheer numbers. The tactic is infamous; most experienced real-time strategy players are familiar with the tactic in one form or another.
Attributes
Biology
Zerg have two cell types at birth — one which creates random mutations, and another that hunts these mutations — the result being that any mutations that survive are the strongest of all. Despite being a hive-minded species, the Zerg understand the principles of evolution and incorporate this into the development of their species. Zerg purposely situate themselves in harsh climates in order to further their own evolution through natural selection. Only the strongest with the best mutations survive, and they assimilate only the strongest species into their gene pool.
Society
It was stated in an interview by Blizzard employees that the Zerg Swarm are universally feared, hated, and hunted by all the sapient species of the Milky Way. The Zerg are a collective consciousness of a variety of different species assimilated into the Zerg genome. The Swarm's organization, psychological and economic, is more like that of ants or termites: they are communal entities, a people efficiently adapted by evolution. Every thousand Zerg warriors killed at a cost of one Terran soldier was a net victory for the Zerg; the Zerg Swarm didn't care any more about expending warriors than the Terrans cared about expending ammunition.
The Zerg were originally commanded by and unified by their absolute obedience to the Zerg collective sentience known as the Zerg Overmind, a manifestation of this hive mind, and under the Overmind's control the Zerg strove for genetic perfection by assimilating the favorable traits of other species. Zerg creatures are rapidly and selectively evolved into deadly and efficient killers to further the driving Zerg imperative of achieving absolute domination. After a species has been assimilated into the Swarm, it is mutated toward a different function within its hierarchy, from being a hive worker to a warrior strain. StarCraft's manual notes that some species bear little resemblance to their original forms after just a short time into assimilation (an example would be the formerly peaceful Slothien species, which was assimilated and mutated into the vicious Hydralisk strain and so on). The Overmind controls the Swarm through secondary agents called cerebrates. Cerebrates command an individual brood of Zerg, each with a distinct tactical role within the hierarchy. Cerebrates further delegate power through the use of overlords for battlefield direction and queens for hive watch.
The quest for 'genetic perfection' is a pseudo-religious concept for the Zerg that drives them into a constant state of evolution and conflict; the Zerg believed there was a state they could reach where they no longer needed to evolve, where their evolutionary form would never have to change again because they could already adapt to any situation. Abathur, an evolution master, doubted that this was possible, but reasoned that "chasing the illusion of perfection" was, regardless, tactically sound.
The vast majority of the Zerg do not have any free will as they are genetically forced to obey the commands of those further up the Zerg hierarchy, although they are sufficiently intelligent to form strategies and work as a team on the battlefield. Despite this, the average Zerg has no sense of self preservation. Along with the Overmind, the cerebrates are the only Zerg with full sapience, each with its own personality and methods, although they too are genetically incapable of disobeying the Overmind. The Overmind also possesses the ability to reincarnate its cerebrates should their bodies be killed, although Protoss dark templar energies are capable of disrupting this process. If a cerebrate is completely dead and cannot be reincarnated, the Overmind loses control of the cerebrate's brood, causing it to mindlessly rampage and attack anything. As a result of the Overmind's death in StarCraft and the subsequent destruction of a new Overmind in Brood War, the remaining cerebrates perished, as they could not survive without an Overmind. Sarah Kerrigan replaced the cerebrates with "brood mothers". These creatures fulfil much the same purpose, but are loyal to Kerrigan and could survive her temporary departure during the events of Starcraft 2.
An exception to all of this would be the Primal Zerg, who inhabit the original Zerg homeworld of Zerus (as seen in Heart of the Swarm). The Zerg Hive Mind was created to control the Zerg, and eventually put them under the control of the main antagonist of the series, the fallen Xel'Naga Amon. Some Zerg, however, managed to avoid being subsumed. These are the Primal Zerg, who have much the same genetic abilities but are not bound to the Overmind. These creatures are each independently sapient, and if they follow a leader it is because they choose to. Their lack of a Hive Mind also shields them from specific psionic attacks engineered to counter the Zerg Hive Mind.
Zerg worlds
Zerg have conquered and/or infested many worlds, but only two of them are important:
Zerus: Jungle world, located in the Theta quadrant near the galactic core of the Milky Way. It is the birthworld of the Zerg. According to the game manual, Zerus is situated near the galactic core; since the galactic disc has a radius of about 49,000 light-years, the Zerg covered some 30,000–40,000 light-years during their search for the Protoss homeworld.
Char: Volcanic world and the current Zerg capital, formerly a Terran Dominion world. The new Overmind grew here, but it was later enslaved by the United Earth Directorate (UED); eventually the planet was regained by the Zerg under Kerrigan's control. Char lies on the border of the Terran sector, and during their search the Zerg settled there and infested other Terran worlds. Because the Zerg ran across the Terrans first, this implies that the Koprulu sector lies approximately on the line between Zerus, Char, and Aiur.
Depiction
The Zerg were created from the native lifeforms of Zerus, who had the natural ability to absorb the "essence" of creatures they killed, transforming their bodies to gain new adaptations. The Xel'Naga created the Overmind and bound the primal Zerg to its will. They gave the Overmind a powerful desire to travel across the stars and absorb useful lifeforms into the Swarm, particularly the Protoss, their previous creation, so as to become the ultimate lifeform.
The Zerg are a completely organic race, making no use of lifeless technology and instead using specialized organisms for every function efficiently fulfilled through biological adaptation and planned mutation of the Zerg strains. Their buildings are specialized organs within the living, growing organism of a Zerg nest, as are the Leviathans "space ships" that carry them across space. Zerg colonies produce a carpet of bio-matter referred to as the "creep", which essentially provides nourishment for Zerg structures and creatures. The visual aesthetic of the Zerg greatly resembles that of invertebrates such as crustaceans and insects (and certainly draws inspiration from the creatures from the Alien movies). The Zerg are shown to be highly dependent on their command structure: if a Zerg should lose its connection to the hive mind, it may turn passive and incapable of action, or become completely uncontrollable and attack allies and enemies alike.
Zerg buildings and units are entirely organic in-game, and all Zerg can regenerate slowly without assistance (though not as quickly as Protoss shields or Terran medivac). Zerg production is far more centralized than with the Terrans and Protoss; a central hatchery must be utilized to create new Zerg, with other structures providing the necessary technology tree assets, whereas the other two races can produce units from several structures. Zerg units tend to be weaker than those of the other two races, but are also cheaper, allowing for rush tactics to be used. Some Zerg units are capable of infesting enemies with various parasites that range from being able to see what an enemy unit sees to spawning Zerg inside an enemy unit. In addition, Zerg can infest some Terran buildings, allowing for the production of special infested Terran units.
Appearances
In StarCraft, the Zerg are obsessed with the pursuit of genetic purity, and are the focus of the game's second episode. With the Xel'Naga–empowered Protoss targeted as the ultimate lifeform, the Zerg invade the Terran colonies in the Koprulu Sector to assimilate the Terrans' psionic potential and give the Zerg an edge over the Protoss. Through the actions of the Sons of Korhal, the Zerg are lured to the Confederate capital Tarsonis, where they capture the psionic ghost agent Sarah Kerrigan and infest her. Returning to the Zerg base of operations on Char, the Zerg are attacked by the dark templar Zeratul, who accidentally gives the location of the Protoss homeworld Aiur to the Zerg Overmind. With victory in sight, the Overmind launches an invasion of Aiur and manifests itself on the planet. However, at the end of the game, the Protoss high templar Tassadar sacrifices himself to destroy the Overmind, leaving the Zerg to run rampant and leaderless across the planet.
The Zerg return in Brood War initially as uncontrolled indiscriminate killers without the will of the Overmind to guide them. Through the early portions of Brood War, Sarah Kerrigan is at odds with the surviving cerebrates, who have formed a new Overmind to restore control of the Swarm. Through allying herself with the Protoss, Kerrigan strikes at the cerebrates, causing disruption of their plans. Eventually, the UED fleet takes control of Char and pacifies the new Overmind with drugs, putting the cerebrates and most of the Zerg under their control. Kerrigan retaliates by forming a tenuous alliance with the remnants of the Dominion and the forces of Jim Raynor and Fenix, their subsequent victories turning the tide against the UED. However, she later betrays the alliance by dealing long-term damage to the infrastructures of her allies and killing Fenix. Proceeding to blackmail Zeratul into killing the new Overmind, Kerrigan's forces destroy the remnants of the UED fleet, giving her full control of the Zerg and establishing the Swarm as the most powerful faction in the sector.
In StarCraft II: Wings of Liberty, Jim Raynor and the rebel forces who oppose both the Dominion and the Zerg, manage to secure an ancient Xel'Naga artifact and after successfully infiltrating Char, they use it to subjugate the Zerg and restore Kerrigan's human form. Once again without a unified leadership, the Zerg get divided into multiple broods feuding over control of the Swarm. This situation persists until the events of StarCraft II: Heart of the Swarm. Kerrigan, believing Raynor to have been killed in a Dominion surprise attack, enters the original Zerg spawning pool to become the Queen of Blades again. This time she is no longer motivated to destroy humanity, having kept more of her original mindset due to the non-interference of the Zerg Hive Mind, and by extension, the Dark Voice, Amon.
Kerrigan is the protagonist and player character of StarCraft II: Heart of the Swarm. After being deinfested, she was taken to Valerian Mengsk's research facility for study until the Dominion attacked it; she escaped along with the rest of its personnel, except for Raynor, who was captured by Nova. Believing Raynor to have been executed, she sought revenge on Arcturus Mengsk. Boarding a Leviathan, she took control of the Swarm aboard it and began rebuilding her forces from scratch. A confrontation with Zeratul led her to Zerus and the origins of the Zerg, where she evolved into a Primal Zerg by absorbing the original spawning pool and killing Primal leaders to collect their essence. With her newfound power, and with numerous broodmothers brought back under her control, she took the fight to the Dominion. She was later shocked to learn that Raynor had survived and was being held by the Dominion as a bargaining chip; she organized a raid to rescue him, but Raynor was dismayed that the woman he had saved had returned to being a monster. At the urging of her former rival Alexei Stukov, she also confronted an ancient shapeshifter who was creating hybrids. She then prepared to end Arcturus Mengsk's reign by killing him in his palace on Korhal. She later left to confront the shapeshifter's master, who could be defeated only through an allied effort, leaving the Swarm under the control of her broodmother Zagara.
Critical reception
One of the main factors responsible for StarCraft's positive reception is the attention paid to the three unique playable races, for each of which Blizzard developed completely different characteristics, graphics, backstories, and styles of gameplay, while keeping them balanced in performance against each other. Previous to this, most real-time strategy games consisted of factions and races with the same basic "chess" play styles and units with only superficial differences. The use of unique sides and asymmetric warfare in StarCraft has been credited with popularizing the concept within the real-time strategy genre. Contemporary reviews of the game have mostly praised the attention to the gameplay balance between the species, as well as the fictional narratives built around them.
In their review for StarCraft, IGN's Tom Chick stated that the balance and difference between the races was "remarkable", continuing to praise the game's "radical" approach to different races and its high degree of success when compared with other games in the genre. IGN was also positive about the unit arrangements for the three races, crediting Blizzard Entertainment for not letting units become obsolete during extended play and for showing an "extraordinary amount of patience in balancing them." GameSpot was complimentary of the species in its review for StarCraft, describing the races as being full of personality. Stating that the use of distinct races allowed for the game "to avoid the problem [of equal sides] that has plagued every other game in the genre", GameSpot praised Blizzard Entertainment for keeping it "well balanced despite the great diversity."
Other reviews have echoed much of this positive reception. The site The Gamers' Temple described the species as "very diverse but well-balanced," stating that this allowed for "a challenging and fun gaming experience." Allgame stated that the inclusion of three "dynamic" species "raises the bar" for real-time strategy games, complimenting the game for forcing the player to "learn how [the aliens'] minds work and not think like a human". Commentators have also praised the aesthetic design of the three races; in particular, the powered armor worn by the Terran Marine was rated eleventh in a Maxim feature on the top armor suits in video games, and ninth in a similar feature by Machinima.com.
This positive view, however, is not universally held. For example, Computer and Video Games, while describing the game as "highly playable," nevertheless described a "slight feeling of déjà vu" between the three races.
References
Fictional extraterrestrial species and races
Fictional superorganisms
StarCraft characters
Video game species and races
Video game characters introduced in 1998 | Zerg | Biology | 3,518 |
468,915 | https://en.wikipedia.org/wiki/While%20loop | In most computer programming languages, a while loop is a control flow statement that allows code to be executed repeatedly based on a given Boolean condition. The while loop can be thought of as a repeating if statement.
Overview
The while construct consists of a block of code and a condition/expression. The condition/expression is evaluated, and if the condition/expression is true, the code within the block is executed. This repeats until the condition/expression becomes false. Because the while loop checks the condition/expression before the block is executed, the control structure is often also known as a pre-test loop. Compare this with the do while loop, which tests the condition/expression after the loop has executed.
For example, in the languages C, Java, C#, Objective-C, and C++, (which use the same syntax in this case), the code fragment
int x = 0;
while (x < 5) {
printf ("x = %d\n", x);
x++;
}
first checks whether x is less than 5, which it is, so the loop body is entered, where the printf function is run and x is incremented by 1. After completing all the statements in the loop body, the condition, (x < 5), is checked again, and the loop is executed again, this process repeating until the variable x has the value 5.
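For comparison, a post-test (do while) version of the same fragment in C, shown here purely as an illustration of the difference, executes the body before checking the condition, so the body runs at least once even if x started at 5 or more:
int x = 0;
do {
printf ("x = %d\n", x); /* runs before the condition is first tested */
x++;
} while (x < 5);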
It is possible, and in some cases desirable, for the condition to always evaluate to true, creating an infinite loop. When such a loop is created intentionally, there is usually another control structure (such as a break statement) that controls termination of the loop.
For example:
while (true) {
// do complicated stuff
if (someCondition)
break;
// more stuff
}
Demonstrating while loops
These while loops will calculate the factorial of the number 5:
ActionScript 3
var counter: int = 5;
var factorial: int = 1;
while (counter > 1) {
factorial *= counter;
counter--;
}
trace("Factorial = " + factorial);
Ada
with Ada.Integer_Text_IO;
procedure Factorial is
Counter : Integer := 5;
Factorial : Integer := 1;
begin
while Counter > 0 loop
Factorial := Factorial * Counter;
Counter := Counter - 1;
end loop;
Ada.Integer_Text_IO.Put (Factorial);
end Factorial;
APL
counter ← 5
factorial ← 1
:While counter > 0
factorial ×← counter
counter -← 1
:EndWhile
⎕ ← factorial
or simply
!5
AutoHotkey
counter := 5
factorial := 1
While counter > 0
factorial *= counter--
MsgBox % factorial
Small Basic
counter = 5 ' Counter = 5
factorial = 1 ' initial value of variable "factorial"
While counter > 0
factorial = factorial * counter
counter = counter - 1
EndWhile
TextWindow.WriteLine(factorial)
Visual Basic
Dim counter As Integer = 5 ' init variable and set value
Dim factorial As Integer = 1 ' initialize factorial variable
Do While counter > 0
factorial = factorial * counter
counter = counter - 1
Loop ' program goes here, until counter = 0
'Debug.Print factorial ' Console.WriteLine(factorial) in Visual Basic .NET
Bourne (Unix) shell
counter=5
factorial=1
while [ $counter -gt 0 ]; do
factorial=$((factorial * counter))
counter=$((counter - 1))
done
echo $factorial
C, C++
#include <stdio.h>
int main() {
int count = 5;
int factorial = 1;
while (count > 1)
factorial *= count--;
printf("%d", factorial);
}
ColdFusion Markup Language (CFML)
Script syntax
counter = 5;
factorial = 1;
while (counter > 1) {
factorial *= counter--;
}
writeOutput(factorial);
Tag syntax
<cfset counter = 5>
<cfset factorial = 1>
<cfloop condition="counter GT 1">
<cfset factorial *= counter-->
</cfloop>
<cfoutput>#factorial#</cfoutput>
Fortran
program FactorialProg
integer :: counter = 5
integer :: factorial = 1
do while (counter > 0)
factorial = factorial * counter
counter = counter - 1
end do
print *, factorial
end program FactorialProg
Go
Go has no while statement; instead, its for statement functions as a while loop when some of its elements are omitted.
counter, factorial := 5, 1
for counter > 1 {
counter, factorial = counter-1, factorial*counter
}
Java, C#, D
The code for the loop is the same for Java, C# and D:
int counter = 5;
int factorial = 1;
while (counter > 1)
factorial *= counter--;
JavaScript
let counter = 5;
let factorial = 1;
while (counter > 1)
factorial *= counter--;
console.log(factorial);
Lua
counter = 5
factorial = 1
while counter > 0 do
factorial = factorial * counter
counter = counter - 1
end
print(factorial)
MATLAB, Octave
counter = 5;
factorial = 1;
while (counter > 0)
factorial = factorial * counter; %Multiply
counter = counter - 1; %Decrement
end
factorial
Mathematica
Block[{counter=5,factorial=1}, (*localize counter and factorial*)
While[counter>0, (*While loop*)
factorial*=counter; (*Multiply*)
counter--; (*Decrement*)
];
factorial
]
Oberon, Oberon-2, Oberon-07, Component Pascal
MODULE Factorial;
IMPORT Out;
VAR
Counter, Factorial: INTEGER;
BEGIN
Counter := 5;
Factorial := 1;
WHILE Counter > 0 DO
Factorial := Factorial * Counter;
DEC(Counter)
END;
Out.Int(Factorial,0)
END Factorial.
Maya Embedded Language
int $counter = 5;
int $factorial = 1;
while ($counter > 0) {
$factorial = $factorial * $counter; // accumulate the factorial
$counter -= 1;
print("Counter is: " + $counter + ", factorial so far is: " + $factorial + "\n");
}
Nim
var
counter = 5 # Set counter value to 5
factorial = 1 # Set factorial value to 1
while counter > 0: # While counter is greater than 0
factorial *= counter # Set new value of factorial to counter.
dec counter # Set the counter to counter - 1.
echo factorial
Non-terminating while loop:
while true:
echo "Help! I'm stuck in a loop!"
Pascal
Pascal has two condition-controlled loops, while and repeat. The while loop repeats one statement (unless enclosed in a begin-end block) as long as the condition is true. The repeat statement repetitively executes a block of one or more statements terminated by an until clause, and keeps repeating until the condition becomes true. The main difference between the two is that the while loop may execute zero times if the condition is initially false, whereas the repeat-until loop always executes at least once.
program Factorial1;
var
Fv: integer;
procedure fact(counter:integer);
var
Factorial: integer;
begin
Factorial := 1;
while Counter > 0 do
begin
Factorial := Factorial * Counter;
Counter := Counter - 1
end;
WriteLn(Factorial)
end;
begin
Write('Enter a number to return its factorial: ');
readln(fv);
repeat
fact(fv);
Write('Enter another number to return its factorial (or 0 to quit): ');
until fv=0;
end.
Perl
my $counter = 5;
my $factorial = 1;
while ($counter > 0) {
$factorial *= $counter--; # Multiply, then decrement
}
print $factorial;
While loops are frequently used for reading data line by line (as defined by the $/ line separator) from open filehandles:
open IN, "<test.txt";
while (<IN>) {
print;
}
close IN;
PHP
$counter = 5;
$factorial = 1;
while ($counter > 0) {
$factorial *= $counter--; // Multiply, then decrement.
}
echo $factorial;
PL/I
declare counter fixed initial(5);
declare factorial fixed initial(1);
do while(counter > 0);
factorial = factorial * counter;
counter = counter - 1;
end;
Python
counter = 5 # Set the value to 5
factorial = 1 # Set the value to 1
while counter > 0: # While counter(5) is greater than 0
factorial *= counter # Set new value of factorial to counter.
counter -= 1 # Set the counter to counter - 1.
print(factorial) # Print the value of factorial.
Non-terminating while loop:
while True:
print("Help! I'm stuck in a loop!")
Racket
In Racket, as in other Scheme implementations, a named-let is a popular way to implement loops:
#lang racket
(define counter 5)
(define factorial 1)
(let loop ()
(when (> counter 0)
(set! factorial (* factorial counter))
(set! counter (sub1 counter))
(loop)))
(displayln factorial)
Using a macro system, implementing a while loop is a trivial exercise (commonly used to introduce macros):
#lang racket
(define-syntax-rule (while test body ...) ; implements a while loop
(let loop () (when test body ... (loop))))
(define counter 5)
(define factorial 1)
(while (> counter 0)
(set! factorial (* factorial counter))
(set! counter (sub1 counter)))
(displayln factorial)
However, an imperative programming style is often discouraged in Scheme and Racket.
Ruby
# Calculate the factorial of 5
i = 1
factorial = 1
while i <= 5
factorial *= i
i += 1
end
puts factorial
Rust
fn main() {
let mut counter = 5;
let mut factorial = 1;
while counter > 1 {
factorial *= counter;
counter -= 1;
}
println!("{}", factorial);
}
Smalltalk
Contrary to other languages, in Smalltalk a while loop is not a language construct but defined in the class BlockClosure as a method with one parameter, the body as a closure, using self as the condition.
Smalltalk also has a corresponding whileFalse: method.
| count factorial |
count := 5.
factorial := 1.
[count > 0] whileTrue:
[factorial := factorial * count.
count := count - 1].
Transcript show: factorial
Swift
var counter = 5 // Set the initial counter value to 5
var factorial = 1 // Set the initial factorial value to 1
while counter > 0 { // While counter(5) is greater than 0
factorial *= counter // Set new value of factorial to factorial x counter.
counter -= 1 // Set the new value of counter to counter - 1.
}
print(factorial) // Print the value of factorial.
Tcl
set counter 5
set factorial 1
while {$counter > 0} {
set factorial [expr $factorial * $counter]
incr counter -1
}
puts $factorial
VEX
int counter = 5;
int factorial = 1;
while (counter > 1)
factorial *= counter--;
printf("%d", factorial);
PowerShell
$counter = 5
$factorial = 1
while ($counter) {
$factorial *= $counter--
}
$factorial
While (language)
While is a simple programming language constructed from assignments, sequential composition, conditionals, and while statements, used in the theoretical analysis of imperative programming language semantics.
C := 5;
F := 1;
while (C > 1) do
F := F * C;
C := C - 1;
See also
Do while loop
For loop
Foreach
Primitive recursive function
General recursive function
LOOP (programming language) – a programming language with the property that the functions it can compute are exactly the primitive recursive functions
References
Control flow
Iteration in programming
Programming language comparisons
Articles with example Ada code
Articles with example BASIC code
Articles with example C code
Articles with example C++ code
Articles with example C Sharp code
Articles with example D code
Articles with example Fortran code
Articles with example Java code
Articles with example JavaScript code
Articles with example MATLAB/Octave code
Articles with example Pascal code
Articles with example Perl code
Articles with example PHP code
Articles with example Python (programming language) code
Articles with example Racket code
Articles with example Ruby code
Articles with example Rust code
Articles with example Smalltalk code
Articles with example Swift code
Articles with example Tcl code | While loop | Technology | 2,816 |
75,766,808 | https://en.wikipedia.org/wiki/Penoxsulam | Penoxsulam is sulfonamide and triazolopyrimidine herbicide that acts as an acetolactate synthase inhibitor. It is primarily used for rice production.
References
Herbicides
Triazolopyrimidines
Sulfonamides
Methoxy compounds
Trifluoromethyl compounds | Penoxsulam | Chemistry,Biology | 67 |
10,090 | https://en.wikipedia.org/wiki/Erythromycin | Erythromycin is an antibiotic used for the treatment of a number of bacterial infections. This includes respiratory tract infections, skin infections, chlamydia infections, pelvic inflammatory disease, and syphilis. It may also be used during pregnancy to prevent Group B streptococcal infection in the newborn, and to improve delayed stomach emptying. It can be given intravenously and by mouth. An eye ointment is routinely recommended after delivery to prevent eye infections in the newborn.
Common side effects include abdominal cramps, vomiting, and diarrhea. More serious side effects may include Clostridioides difficile colitis, liver problems, prolonged QT, and allergic reactions. It is generally safe in those who are allergic to penicillin. Erythromycin also appears to be safe to use during pregnancy. While generally regarded as safe during breastfeeding, its use by the mother during the first two weeks of life may increase the risk of pyloric stenosis in the baby. This risk also applies if taken directly by the baby during this age. It is in the macrolide family of antibiotics and works by decreasing bacterial protein production.
Erythromycin was first isolated in 1952 from the bacteria Saccharopolyspora erythraea. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 271st most commonly prescribed medication in the United States, with more than 800,000 prescriptions.
Medical uses
Erythromycin can be used to treat bacteria responsible for causing infections of the skin and upper respiratory tract, including Streptococcus, Staphylococcus, Haemophilus and Corynebacterium genera. The following represents MIC susceptibility data for a few medically significant bacteria:
Haemophilus influenzae: 0.015 to 256 μg/ml
Staphylococcus aureus: 0.023 to 1024 μg/ml
Streptococcus pyogenes: 0.004 to 256 μg/ml
Corynebacterium minutissimum: 0.015 to 64 μg/ml
It may be useful in treating gastroparesis due to its promotility effect. It has been shown to improve feeding intolerance in those who are critically ill. Intravenous erythromycin may also be used in endoscopy to help clear stomach contents to enhance endoscopic visualization, potentially improving diagnostic accuracy and subsequent management.
Available forms
Erythromycin is available in enteric-coated tablets, slow-release capsules, oral suspensions, ophthalmic solutions, ointments, gels, enteric-coated capsules, non enteric-coated tablets, non enteric-coated capsules, and injections.
The following erythromycin combinations are available for oral dosage:
erythromycin base (capsules, tablets)
erythromycin estolate (capsules, oral suspension, tablets), contraindicated during pregnancy
erythromycin ethylsuccinate (oral suspension, tablets)
erythromycin stearate (oral suspension, tablets)
For injection, the available combinations are:
erythromycin gluceptate
erythromycin lactobionate
For ophthalmic use:
erythromycin base (ointment)
Adverse effects
Gastrointestinal disturbances, such as diarrhea, nausea, abdominal pain, and vomiting, are very common because erythromycin is a motilin agonist.
More serious side effects include arrhythmia with prolonged QT intervals, including torsades de pointes, and reversible deafness. Allergic reactions range from urticaria to anaphylaxis. Cholestasis and Stevens–Johnson syndrome are some other rare side effects that may occur.
Studies have shown evidence both for and against the association of pyloric stenosis and exposure to erythromycin prenatally and postnatally. Exposure to erythromycin (especially long courses at antimicrobial doses, and also through breastfeeding) has been linked to an increased probability of pyloric stenosis in young infants. Erythromycin used for feeding intolerance in young infants has not been associated with hypertrophic pyloric stenosis.
Erythromycin estolate has been associated with reversible hepatotoxicity in pregnant women in the form of elevated serum glutamic-oxaloacetic transaminase and is not recommended during pregnancy. Some evidence suggests similar hepatotoxicity in other populations.
It can also affect the central nervous system, causing psychotic reactions, nightmares, and night sweats.
Interactions
Erythromycin is metabolized by enzymes of the cytochrome P450 system, in particular, by isozymes of the CYP3A superfamily. The activity of the CYP3A enzymes can be induced or inhibited by certain drugs (e.g., dexamethasone), which can cause it to affect the metabolism of many different drugs, including erythromycin. If other CYP3A substrates — drugs that are broken down by CYP3A — such as simvastatin (Zocor), lovastatin (Mevacor), or atorvastatin (Lipitor) — are taken concomitantly with erythromycin, levels of the substrates increase, often causing adverse effects. A noted drug interaction involves erythromycin and simvastatin, resulting in increased simvastatin levels and the potential for rhabdomyolysis. Another group of CYP3A4 substrates are drugs used for migraine such as ergotamine and dihydroergotamine; their adverse effects may be more pronounced if erythromycin is associated.
Earlier case reports on sudden death prompted a study on a large cohort that confirmed a link between erythromycin, ventricular tachycardia, and sudden cardiac death in patients also taking drugs that prolong the metabolism of erythromycin (like verapamil or diltiazem) by interfering with CYP3A4. Hence, erythromycin should not be administered to people using these drugs, or drugs that also prolong the QT interval. Other examples include terfenadine (Seldane, Seldane-D), astemizole (Hismanal), cisapride (Propulsid, withdrawn in many countries for prolonging the QT time) and pimozide (Orap). Interactions with theophylline, which is used mostly in asthma, were also shown.
Erythromycin and doxycycline can have a synergistic effect when combined, killing bacteria (E. coli) with a higher potency than the sum of the two drugs acting separately. This synergistic relationship is only temporary: after approximately 72 hours, the relationship shifts to become antagonistic, whereby a 50/50 combination of the two drugs kills fewer bacteria than if the two drugs were administered separately.
It may alter the effectiveness of combined oral contraceptive pills because of its effect on the gut flora. A review found that when erythromycin was given with certain oral contraceptives, there was an increase in the maximum serum concentrations and AUC of estradiol and dienogest.
Erythromycin is an inhibitor of the cytochrome P450 system, which means it can have a rapid effect on levels of other drugs metabolised by this system, e.g., warfarin.
Pharmacology
Mechanism of action
Erythromycin displays bacteriostatic activity or inhibits growth of bacteria, especially at higher concentrations. By binding to the 50S subunit of the bacterial rRNA complex, protein synthesis and the subsequent structural and functional processes critical for life or replication are inhibited. Erythromycin interferes with aminoacyl translocation, preventing the transfer of the tRNA bound at the A site of the rRNA complex to the P site of the rRNA complex. Without this translocation, the A site remains occupied, thus the addition of an incoming tRNA and its attached amino acid to the nascent polypeptide chain is inhibited. This interferes with the production of functionally useful proteins, which is the basis of this antimicrobial action.
Erythromycin increases gut motility by binding to motilin receptor, thus it is a motilin receptor agonist in addition to its antimicrobial properties. It can be therefore administered intravenously as a stomach emptying stimulant.
Pharmacokinetics
Erythromycin is easily inactivated by gastric acid; therefore, all orally administered formulations are given as either enteric-coated or more-stable salts or esters, such as erythromycin ethylsuccinate. Erythromycin is very rapidly absorbed, and diffuses into most tissues and phagocytes. Due to the high concentration in phagocytes, erythromycin is actively transported to the site of infection, where, during active phagocytosis, large concentrations of erythromycin are released.
Metabolism
Most of erythromycin is metabolised by demethylation in the liver by the hepatic enzyme CYP3A4. Its main elimination route is in the bile with little renal excretion, 2%–15% unchanged drug. Erythromycin's elimination half-life ranges between 1.5 and 2.0 hours and is between 5 and 6 hours in patients with end-stage renal disease. Erythromycin levels peak in the serum 4 hours after dosing; ethylsuccinate peaks 0.5–2.5 hours after dosing, but can be delayed if digested with food.
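As a rough back-of-the-envelope illustration of what these half-life figures mean (this assumes idealized first-order elimination after a single dose, which oversimplifies real pharmacokinetics), the fraction of the peak serum concentration remaining a time t after the peak is
C(t) / C_peak = (1/2)^(t / t½)
so with a half-life of 2 hours, about (1/2)^3 = 12.5% of the peak concentration would remain 6 hours after the peak.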
Erythromycin crosses the placenta and enters breast milk. The American Association of Pediatrics determined erythromycin is safe to take while breastfeeding. Absorption in pregnant patients has been shown to be variable, frequently resulting in levels lower than in nonpregnant patients.
Chemistry
Composition
Standard-grade erythromycin is primarily composed of four related compounds known as erythromycins A, B, C, and D. Each of these compounds can be present in varying amounts and can differ by lot. Erythromycin A has been found to have the most antibacterial activity, followed by erythromycin B. Erythromycins C and D are about half as active as erythromycin A. Some of these related compounds have been purified and can be studied and researched individually.
Synthesis
Over the three decades after the discovery of erythromycin A and its activity as an antimicrobial, many attempts were made to synthesize it in the laboratory. The presence of 10 stereogenic carbons and several points of distinct substitution has made the total synthesis of erythromycin A a formidable task. Complete syntheses of erythromycins’ related structures and precursors such as 6-deoxyerythronolide B have been accomplished, giving way to possible syntheses of different erythromycins and other macrolide antimicrobials. Woodward successfully completed the synthesis of erythromycin A, which was published in 1981.
History
In 1949 Abelardo B. Aguilar, a Filipino scientist, sent some soil samples to his employer at Eli Lilly. Aguilar managed to isolate erythromycin from the metabolic products of a strain of Streptomyces erythreus (designation changed to Saccharopolyspora erythraea) found in the samples. Aguilar received no further credit or compensation for his discovery.
The scientist was allegedly promised a trip to the company's manufacturing plant in Indianapolis, but it was never fulfilled. In a letter to the company's president, Aguilar wrote: “A leave of absence is all I ask as I do not wish to sever my connection with a great company which has given me wonderful breaks in life.” The request was not granted.
Aguilar reached out to Eli Lilly again in 1993, requesting royalties from sales of the drug over the years, intending to use them to put up a foundation for poor and sickly Filipinos. This request was also denied. He died in September of the same year.
Lilly filed for patent protection on the compound, which was granted in 1953. The product was launched commercially in 1952 under the brand name Ilosone (after the Philippine region of Iloilo where it was originally collected). Erythromycin was formerly also called Ilotycin.
The antibiotic clarithromycin was invented by scientists at the Japanese drug company Taisho Pharmaceutical in the 1970s as a result of their efforts to overcome the acid instability of erythromycin.
Society and culture
Economics
It is available as a generic medication.
In the United States, in 2014, the price increased to seven dollars per 500mg tablet.
The US price of erythromycin was raised three times between 2010 and 2015, rising from 24 cents per 500mg tablet in 2010 to $8.96 in 2015. In 2017, a Kaiser Health News study found that the per-unit cost of dozens of generics doubled or even tripled from 2015 to 2016, increasing spending by the Medicaid program. Due to price increases by drug manufacturers, Medicaid paid on average $2,685,330 more for erythromycin in 2016 compared to 2015 (not including rebates). In the US by 2018, generic drug prices had climbed another 5% on average.
The UK price listed in the BNF for erythromycin 500mg tablets was £36.40 for 100 tablets (36.4 pence each). This price is not paid by NHS patients: there is no NHS prescription charge in Scotland, Wales, and Northern Ireland, while NHS patients in England without an exemption are liable for a flat-rate prescription charge. That charge was £9.90 for each prescribed medicine.
Brand names
Brand names include Robimycin, E-Mycin, E.E.S. Granules, E.E.S.-200, E.E.S.-400, E.E.S.-400 Filmtab, Erymax, Ery-Tab, Eryc, Ranbaxy, Erypar, EryPed, Eryped 200, Eryped 400, Erythrocin Stearate Filmtab, Erythrocot, E-Base, Erythroped, Ilosone, MY-E, Pediamycin, Zineryt, Abboticin, Abboticin-ES, Erycin, PCE Dispertab, Stiemycine, Acnasol, and Tiloryth.
Veterinary uses
Erythromycin is also used in fishcare for the "broad spectrum treatment and control of bacterial disease". Body slime, mouth fungus, furunculosis, bacterial gill illness, and hemorrhagic septicaemia are all examples of bacterial diseases in fish that may be treated and controlled with this therapy. The use of erythromycin in fishcare is mainly limited to treatments targeting Gram-positive bacteria.
References
Tertiary alcohols
CYP3A4 inhibitors
Dimethylamino compounds
Ethers
Hepatotoxins
HERG blocker
Lactones
Drugs developed by Pfizer
Drugs developed by Eli Lilly and Company
Macrolide antibiotics
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Erythromycin | Chemistry | 3,311 |
53,546,903 | https://en.wikipedia.org/wiki/Pavel%20Jungwirth | Pavel Jungwirth (born 20 May 1966 in Prague, Czech Republic) is a Czech physical chemist. Since 2004, he has been the head of the Senior Research Group at the Institute of Organic Chemistry and Biochemistry of the Czech Academy of Sciences. He has also been a professor in the Faculty of Mathematics and Physics at Charles University since 2000. He has also been a senior editor of the Journal of Physical Chemistry since 2009. He is popularly known for studying the explosive reaction between alkali metals, such as sodium and potassium, and water; his research on this subject indicates that these reactions result from a Coulomb explosion. He and his colleagues have also discovered a way to slow down this reaction, which they used to determine the source of a blue flash that is briefly produced during the reaction.
Pavel Jungwirth is a coordinator of the Dream Chemistry Award, an international science competition.
References
Czech chemists
Living people
1966 births
Charles University alumni
Academic staff of Charles University
Scientists from Prague
Organic chemists
Institute of Organic Chemistry and Biochemistry of the CAS | Pavel Jungwirth | Chemistry | 207 |
1,604,631 | https://en.wikipedia.org/wiki/Joseph%20Oesterl%C3%A9 | Joseph Oesterlé (born 1954) is a French mathematician who, along with David Masser, formulated the abc conjecture which has been called "the most important unsolved problem in diophantine analysis".
He is a member of Bourbaki.
References
External links
The ABC conjecture
Oesterlé on the origin of the abc Conjecture
1954 births
Living people
People from Alsace
École Normale Supérieure alumni
20th-century French mathematicians
University of Paris alumni
French number theorists
Nicolas Bourbaki
Abc conjecture | Joseph Oesterlé | Mathematics | 105 |
13,418,956 | https://en.wikipedia.org/wiki/Arabicodium | Arabicodium is a fossil genus of green algae in the family Codiaceae.
References
Ulvophyceae genera
Bryopsidales
Fossil algae | Arabicodium | Biology | 31 |
28,747,453 | https://en.wikipedia.org/wiki/Southern%20and%20Eastern%20Serbia | The Southern and Eastern Serbia () is one of five statistical regions of Serbia. It is also a Level-2 statistical region according to the Nomenclature of Territorial Units for Statistics (NUTS).
History
In 2009, the National Assembly adopted a law which divided Serbia into seven statistical regions. At first, it was decided that in the territory of the current statistical region of Southern and Eastern Serbia there would be two statistical regions – the Eastern Region and the Southern Region. However, in 2010, the law was changed, and the Eastern and Southern regions were merged into a single statistical region named Southern and Eastern Serbia.
Administrative districts
The statistical region of Southern and Eastern Serbia is composed of 9 administrative districts:
Demographics
The region is heavily affected by depopulation. The most critical situation is in the municipalities of Gadžin Han, Crna Trava, Ražanj, Trgovište, Dimitrovgrad, and Bosilegrad. A stark example of depopulation is Crna Trava, which had 13,614 inhabitants in 1948, while in 2022 only 1,063 people were registered.
Cities and towns
The following list include cities and towns with over 20,000 inhabitants.
Ethnic structure
See also
Statistical regions of Serbia
Šumadija and Western Serbia
References
External links
Usvojene izmene i dopune Zakona o regionalnom-razvoju (in Serbian)
Statistical regions of Serbia | Southern and Eastern Serbia | Mathematics | 292 |
19,751,291 | https://en.wikipedia.org/wiki/Transfer%20of%20the%20S%C3%A3o%20Francisco%20River | The Transfer of the São Francisco River is a large-scale interbasin transfer to the dry sertão in the four northeastern states of Ceará, Rio Grande do Norte, Paraíba and Pernambuco in Brazil. The project, which was given the green light to go ahead by Brazil's government in 2005, is estimated to cost US$2 billion and is expected to improve the lives of 12 million people. After legal challenges were brought against the project, the Supreme Court allowed it to go ahead in December 2007.
Technical aspects
The project would divert 1.4% of the river's water for municipal water supply, industry and irrigation. Municipal water supply would receive priority over other uses, which would be catered for only when the Sobradinho Reservoir behind the Sobradinho Dam on the São Francisco River, which produces much of the region's electricity, is nearly full, which is the case about 40% of the time. The project actually consists of two transfers: the East axis would transfer water to the Paraíba do Norte River, while the North axis would transfer water to the Jaguaribe and Piranhas rivers. The project includes 700 km of canals and tunnels, as well as several dams. It is expected to displace almost a million people and construction is expected to take 20 years to complete.
Criticism and controversy
Critics of the project argue that beneficiary states should improve management of their own water before importing it from outside the region. Bishop Luiz Flávio Cappio from Bahia also wonders why water is being exported when 3 million poor people live along the São Francisco River's course, many of them without running water and proper sanitation. He argues that the transfer “will demand huge resources that could be spent on other projects much closer to the reality of the people”. It is also being said that the project will mainly benefit richer farmers who already have irrigation infrastructure in place and not rainfed farmers that are hardest hit by drought. The alleged insufficiency of water in the São Francisco River itself during the dry season, and its consequent impact on aquatic ecosystems, is another argument of critics. For example, João Alves Filho, governor of the state of Sergipe, says that there are already “signs of mortality” where the river joins the sea. Marco Antônio Tavares Coelho, a prominent opponent, says that "aridity is the natural state of the sertão" and that soaking it would be like "removing ice from the North Pole". In 2001 the World Bank reportedly refused to finance the project because of its limited impact in combating poverty and drought.
See also
Water resources management in Brazil
Water supply and sanitation in Brazil
References
Water resource management in Brazil
Interbasin transfer
Water supply and sanitation in Brazil
Irrigation in Brazil
Droughts in Brazil | Transfer of the São Francisco River | Environmental_science | 565 |
42,055,006 | https://en.wikipedia.org/wiki/Clitocine | Clitocine is a mushroom nucleoside isolate with anticancer activity in vitro.
References
Nucleosides | Clitocine | Chemistry | 26 |
3,624,902 | https://en.wikipedia.org/wiki/Position%20weight%20matrix | A position weight matrix (PWM), also known as a position-specific weight matrix (PSWM) or position-specific scoring matrix (PSSM), is a commonly used representation of motifs (patterns) in biological sequences.
PWMs are often derived from a set of aligned sequences that are thought to be functionally related and have become an important part of many software tools for computational motif discovery.
Background
Creation
Conversion of sequence to position probability matrix
A PWM has one row for each symbol of the alphabet (4 rows for nucleotides in DNA sequences or 20 rows for amino acids in protein sequences) and one column for each position in the pattern. In the first step in constructing a PWM, a basic position frequency matrix (PFM) is created by counting the occurrences of each nucleotide at each position. From the PFM, a position probability matrix (PPM) can now be created by dividing that former nucleotide count at each position by the number of sequences, thereby normalising the values. Formally, given a set X of N aligned sequences of length l, the elements of the PPM M are calculated:
M_{k,j} = (1/N) Σ_{i=1}^{N} I(X_{i,j} = k),
where i ∈ (1,...,N), j ∈ (1,...,l), k is a symbol in the alphabet, and I(a=k) is an indicator function that equals 1 if a=k and 0 otherwise.
For example, given the following DNA sequences:
GAGGTAAAC
TCCGTAAGT
CAGGTTGGA
ACAGTCAGT
TAGGTCATT
TAGGTACTG
ATGGTAACT
CAGGTATAC
TGTGTGAGT
AAGGTAAGT
The corresponding PFM is:
      1   2   3   4   5   6   7   8   9
A:    3   6   1   0   0   6   7   2   1
C:    2   2   1   0   0   2   1   1   2
G:    1   1   7  10   0   1   1   5   1
T:    4   1   1   0  10   1   1   2   6
Therefore, the resulting PPM is:
      1    2    3    4    5    6    7    8    9
A:  0.3  0.6  0.1  0.0  0.0  0.6  0.7  0.2  0.1
C:  0.2  0.2  0.1  0.0  0.0  0.2  0.1  0.1  0.2
G:  0.1  0.1  0.7  1.0  0.0  0.1  0.1  0.5  0.1
T:  0.4  0.1  0.1  0.0  1.0  0.1  0.1  0.2  0.6
Both PPMs and PWMs assume statistical independence between positions in the pattern, as the probabilities for each position are calculated independently of other positions. From the definition above, it follows that the sum of values for a particular position (that is, summing over all symbols) is 1. Each column can therefore be regarded as an independent multinomial distribution. This makes it easy to calculate the probability of a sequence given a PPM, by multiplying the relevant probabilities at each position. For example, the probability of the sequence S = GAGGTAAAC (the first sequence in the set above) given the above PPM M can be calculated as p(S|M) = 0.1 × 0.6 × 0.7 × 1.0 × 1.0 × 0.6 × 0.7 × 0.2 × 0.2 ≈ 0.0007.
Pseudocounts (or Laplace estimators) are often applied when calculating PPMs if based on a small dataset, in order to avoid matrix entries having a value of 0. This is equivalent to multiplying each column of the PPM by a Dirichlet distribution and allows the probability to be calculated for new sequences (that is, sequences which were not part of the original dataset). In the example above, without pseudocounts, any sequence which did not have a G in the 4th position or a T in the 5th position would have a probability of 0, regardless of the other positions.
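As a concrete illustration of the construction described above, the following minimal Python sketch (an illustration added here, not part of the original article) builds the PFM and PPM for the ten example sequences and evaluates the probability of one sequence; the pseudocount convention shown is one common choice among several.

```python
# Minimal sketch: position frequency matrix (PFM) and position probability
# matrix (PPM) for the aligned example sequences above.
sequences = [
    "GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA", "ACAGTCAGT", "TAGGTCATT",
    "TAGGTACTG", "ATGGTAACT", "CAGGTATAC", "TGTGTGAGT", "AAGGTAAGT",
]
alphabet = "ACGT"
length = len(sequences[0])

def position_frequency_matrix(seqs):
    # Count how often each symbol occurs at each position.
    return {s: [sum(seq[j] == s for seq in seqs) for j in range(length)]
            for s in alphabet}

def position_probability_matrix(pfm, n_seqs, pseudocount=0.0):
    # Normalise counts to probabilities; a pseudocount avoids zero entries.
    denom = n_seqs + pseudocount * len(alphabet)
    return {s: [(c + pseudocount) / denom for c in counts]
            for s, counts in pfm.items()}

def sequence_probability(seq, ppm):
    # Multiply per-position probabilities (positions assumed independent).
    p = 1.0
    for j, s in enumerate(seq):
        p *= ppm[s][j]
    return p

pfm = position_frequency_matrix(sequences)
ppm = position_probability_matrix(pfm, len(sequences))
print(sequence_probability("GAGGTAAAC", ppm))  # ≈ 0.0007056
```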
Conversion of position probability matrix to position weight matrix
Most often the elements in PWMs are calculated as log odds. That is, each element of the PPM is transformed using a background model, so that the corresponding PWM element is
log2( M_{k,j} / b_k ),
where M_{k,j} is the PPM entry for symbol k at position j and b_k is the background probability of symbol k.
The simplest background model assumes that each letter appears equally frequently in the dataset. That is, b_k = 1/|alphabet| for all symbols in the alphabet (0.25 for nucleotides and 0.05 for amino acids). Applying this transformation to the PPM M from above (with no pseudocounts added) gives the PWM:
        1      2      3      4      5      6      7      8      9
A:   0.26   1.26  −1.32    −∞     −∞    1.26   1.49  −0.32  −1.32
C:  −0.32  −0.32  −1.32    −∞     −∞   −0.32  −1.32  −1.32  −0.32
G:  −1.32  −1.32   1.49   2.00    −∞   −1.32  −1.32   1.00  −1.32
T:   0.68  −1.32  −1.32    −∞    2.00  −1.32  −1.32  −0.32   1.26
The entries in the matrix make clear the advantage of adding pseudocounts, especially when using small datasets to construct M. The background model need not have equal values for each symbol: for example, when studying organisms with a high GC-content, the values for and may be increased with a corresponding decrease for the and values.
When the PWM elements are calculated using log likelihoods, the score of a sequence can be calculated by adding (rather than multiplying) the relevant values at each position in the PWM. The sequence score gives an indication of how different the sequence is from a random sequence. The score is 0 if the sequence has the same probability of being a functional site and of being a random site. The score is greater than 0 if it is more likely to be a functional site than a random site, and less than 0 if it is more likely to be a random site than a functional site. The sequence score can also be interpreted in a physical framework as the binding energy for that sequence.
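Continuing the sketch above (it reuses sequences, pfm and position_probability_matrix from the previous block), the snippet below converts a pseudocount-smoothed PPM into a log2-odds PWM against a uniform background and scores a sequence by summing the relevant weights; the pseudocount of 0.1 is purely illustrative.

```python
import math

background = {s: 0.25 for s in "ACGT"}  # uniform nucleotide background

def position_weight_matrix(ppm, background):
    # Log-odds of the motif model against the background model.
    return {s: [math.log2(p / background[s]) for p in probs]
            for s, probs in ppm.items()}

def score(seq, pwm):
    # Sum of log-odds; 0 means equally likely under motif and background.
    return sum(pwm[s][j] for j, s in enumerate(seq))

ppm_smoothed = position_probability_matrix(pfm, len(sequences), pseudocount=0.1)
pwm = position_weight_matrix(ppm_smoothed, background)
print(round(score("GAGGTAAAC", pwm), 2))  # positive: more motif-like than random
```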
Information content
The information content (IC) of a PWM is sometimes of interest, as it says something about how different a given PWM is from a uniform distribution.
The self-information of observing a particular symbol at a particular position of the motif is:
−log( M_{k,j} )
The expected (average) self-information of a particular element in the PWM is then:
−M_{k,j} · log( M_{k,j} )
Finally, the IC of the PWM is then the sum of the expected self-information of every element:
IC = −Σ_j Σ_k M_{k,j} · log( M_{k,j} )
Often, it is more useful to calculate the information content with the background letter frequencies of the sequences you are studying rather than assuming equal probabilities of each letter (e.g., the GC-content of DNA of thermophilic bacteria ranges from 65.3% to 70.8%; thus a motif of ATAT would contain much more information than a motif of CCGG). The equation for information content thus becomes
IC = Σ_j Σ_k M_{k,j} · log( M_{k,j} / b_k )
where b_k is the background frequency for letter k. This corresponds to the Kullback–Leibler divergence or relative entropy. However, it has been shown that when using PSSM to search genomic sequences (see below) this uniform correction can lead to overestimation of the importance of the different bases in a motif, due to the uneven distribution of n-mers in real genomes, leading to a significantly larger number of false positives.
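As a small stand-alone illustration of the background-corrected information content (relative entropy), the sketch below scores a single hypothetical motif column against a uniform background and against a GC-rich one; the frequencies are invented for the example.

```python
import math

def column_ic(column, background):
    # Kullback–Leibler divergence of one motif column from the background.
    return sum(p * math.log2(p / background[s])
               for s, p in column.items() if p > 0)

column  = {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1}   # A-rich column
uniform = {s: 0.25 for s in "ACGT"}
gc_rich = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}

print(round(column_ic(column, uniform), 2))  # ≈ 0.64 bits
print(round(column_ic(column, gc_rich), 2))  # ≈ 1.14 bits: an A-rich column is more surprising in a GC-rich genome
```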
Uses
There are various algorithms to scan for hits of PWMs in sequences. One example is the MATCH algorithm which has been implemented in the ModuleMaster. More sophisticated algorithms for fast database searching with nucleotide as well as amino acid PWMs/PSSMs are implemented in the possumsearch software.
The basic PWM/PSSM is unable to deal with insertions and deletions. A PSSM with additional probabilities for insertion and deletion at each position can be interpreted as a hidden Markov model. This is the approach used by Pfam.
See also
ScerTF
References
External links
3PFDB – a database of Best Representative PSSM Profiles (BRPs) of Protein Families generated using a novel data mining approach.
UGENE – PSS matrices design, integrated interface to JASPAR, UniPROBE and SITECON databases.
Bioinformatics
Evaluation methods | Position weight matrix | Engineering,Biology | 1,426 |
20,776,857 | https://en.wikipedia.org/wiki/Reptilase%20time | Reptilase time (RT) is a blood test used to detect deficiency or abnormalities in fibrinogen, especially in cases of heparin contamination.
Reptilase, an enzyme found in the venom of Bothrops snakes, has activity similar to thrombin. Unlike thrombin, reptilase is resistant to inhibition by antithrombin III. Thus, the reptilase time is not prolonged in blood samples containing heparin, hirudin, or direct thrombin inhibitors, whereas the thrombin time will be prolonged in these samples. Reptilase also differs from thrombin by releasing fibrinopeptide A, but not fibrinopeptide B, in its cleavage of fibrinogen.
References
Blood tests | Reptilase time | Chemistry | 160 |
3,720,236 | https://en.wikipedia.org/wiki/Boundary%20conformal%20field%20theory | In theoretical physics, boundary conformal field theory (BCFT) is a conformal field theory defined on a spacetime with a boundary (or boundaries). Different kinds of boundary conditions may be imposed on the fundamental fields; for example, Neumann or Dirichlet boundary conditions are acceptable for free bosonic fields. BCFT was developed by John Cardy.
In the context of string theory, physicists are often interested in two-dimensional BCFTs. The specific types of boundary conditions in a specific CFT describe different kinds of D-branes.
BCFT is also used in condensed matter physics, where it can be used to study boundary critical behavior and to solve quantum impurity models.
See also
Conformal field theory
Operator product expansion
Critical point
References
Further reading
Conformal field theory | Boundary conformal field theory | Physics | 164 |
187,750 | https://en.wikipedia.org/wiki/Large%20numbers | Large numbers, far beyond those encountered in everyday life—such as simple counting or financial transactions—play a crucial role in various domains. These expansive quantities appear prominently in mathematics, cosmology, cryptography, and statistical mechanics. While they often manifest as large positive integers, they can also take other forms in different contexts (such as P-adic number). Googology delves into the naming conventions and properties of these immense numerical entities.
Since the customary, traditional (non-technical) decimal format of large numbers can be lengthy, other systems have been devised that allow for shorter representation. For example, a billion is represented as 13 characters (1,000,000,000) in decimal format, but is only 3 characters (10⁹) when expressed in exponential format. A trillion is 17 characters in decimal, but only 4 (10¹²) in exponential. Values that vary dramatically can be represented and compared graphically via logarithmic scale.
Natural language numbering
A natural language numbering system allows for representing large numbers using names that more clearly distinguish numeric scale than a series of digits. For example, "billion" may be easier to comprehend for some readers than "1,000,000,000". But, written out as names, a numeric value can be lengthy. For example, "2,345,789" is "two million, three hundred forty five thousand, seven hundred and eighty nine".
Standard notation
Standard notation is a variation of English's natural language numbering in which the scale word is shortened to a suffix. An example is 2,345,678,900 ≈ 2.35 B (B = billion).
Scientific notation
Scientific notation was devised to represent the vast range of values encountered in scientific research in a format that is more compact than traditional formats yet allows for high precision when called for. A value is represented as a decimal fraction times a power of 10. The factor is intended to make reading comprehension easier than a lengthy series of zeros. For example, 1.0 × 10^9 expresses one billion, a 1 followed by nine zeros. The reciprocal, one billionth, is 1.0 × 10^−9.
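A tiny illustration in Python (purely as an example of the formats, not taken from any source above):

```python
one_billion = 1_000_000_000
print(f"{one_billion:,}")        # 1,000,000,000  (decimal format)
print(f"{one_billion:.1e}")      # 1.0e+09        (exponential format)
print(f"{1 / one_billion:.1e}")  # 1.0e-09        (one billionth)
```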
Examples
Examples of large numbers describing real-world things:
The number of cells in the human body (estimated at 3.72 × 10^13), or 37.2 trillion/37.2 T
The number of bits on a computer hard disk (typically about 10^13, for a 1–2 TB drive), or 10 trillion/10 T
The number of neuronal connections in the human brain (estimated at 10^14), or 100 trillion/100 T
The Avogadro constant is the number of "elementary entities" (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12, approximately 6.022 × 10^23, or 602.2 sextillion/602.2 Sx.
The total number of DNA base pairs within the entire biomass on Earth, as a possible approximation of global biodiversity, is estimated at (5.3 ± 3.6) × 10^37, or 53±36 undecillion/17–89 UDc
The mass of Earth consists of about 4 × 10^51, or 4 sexdecillion/4 SxDc, nucleons
The estimated number of atoms in the observable universe (10^80), or 100 quinvigintillion/100 QiVg
The lower bound on the game-tree complexity of chess, also known as the "Shannon number" (estimated at around 10^120), or 1 novemtrigintillion/1 NTg. Note that this value of the Shannon number is for standard chess. It has even larger values for larger-board chess variants such as Grant Acedrex, Tai Shogi, and Taikyoku Shogi.
Astronomical
In the vast expanse of astronomy and cosmology, we encounter staggering numbers related to length and time. For instance, according to the prevailing Big Bang model, our universe is approximately 13.8 billion years old (equivalent to about 4.4 × 10^17 seconds). The observable universe spans an incredible 93 billion light years (approximately 8.8 × 10^26 meters) and hosts on the order of 10^22 to 10^24 stars, organized into roughly 125 billion galaxies (as observed by the Hubble Space Telescope). As a rough estimate, there are about 10^80 atoms within the observable universe.
According to Don Page, physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is about 10^(10^(10^(10^(10^1.1)))) years, which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10^−6 Planck masses. This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where the universe's history repeats itself arbitrarily many times due to properties of statistical mechanics; this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.
Combinatorial processes give rise to astonishingly large numbers. The factorial function, which quantifies permutations of a fixed set of objects, grows exponentially as the number of objects increases. Stirling's formula provides a precise asymptotic expression for this rapid growth.
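As a rough illustration of how Stirling's formula tracks factorial growth, the following sketch (assuming the common form ln n! ≈ n ln n − n + ½ ln(2πn)) compares exact and approximate values of log10(n!):

```python
import math

def log10_factorial_exact(n):
    return sum(math.log10(k) for k in range(1, n + 1))

def log10_factorial_stirling(n):
    # Stirling: ln n! ≈ n*ln(n) - n + 0.5*ln(2*pi*n), converted to base 10.
    return (n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)) / math.log(10)

for n in (10, 100, 1000):
    print(n, round(log10_factorial_exact(n), 4),
             round(log10_factorial_stirling(n), 4))
# The two columns agree to several decimal places already for modest n;
# for example, log10(1000!) ≈ 2567.6, so 1000! has about 2568 digits.
```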
In statistical mechanics, combinatorial numbers reach such immense magnitudes that they are often expressed using logarithms.
Gödel numbers, along with similar representations of bit-strings in algorithmic information theory, are vast—even for mathematical statements of moderate length. Remarkably, certain pathological numbers surpass even the Gödel numbers associated with typical mathematical propositions.
Logician Harvey Friedman has made significant contributions to the study of very large numbers, including work related to Kruskal's tree theorem and the Robertson–Seymour theorem.
"Billions and billions"
To help viewers of Cosmos distinguish between "millions" and "billions", astronomer Carl Sagan stressed the "b". Sagan never did, however, say "billions and billions". The public's association of the phrase and Sagan came from a Tonight Show skit. Parodying Sagan's effect, Johnny Carson quipped "billions and billions". The phrase has, however, now become a humorous fictitious number—the Sagan. Cf., Sagan Unit.
Examples
googol = 10^100/10 DTg
centillion = 10^303/1Ce or 10^600, depending on number naming system
millinillion = 10^3003/1MI or 10^6000, depending on number naming system
The largest known Smith number = (10^1031 − 1) × (10^4594 + 3 × 10^2297 + 1)^1476
The largest known Mersenne prime = 2^136,279,841 − 1
googolplex = 10^googol = 10^(10^100)
Skewes's numbers: the first is approximately 10^(10^(10^34)), the second 10^(10^(10^964))
Graham's number, larger than what can be represented even using power towers (tetration). However, it can be represented using layers of Knuth's up-arrow notation.
Kruskal's tree theorem is a sequence relating to graphs. TREE(3) is larger than Graham's number.
Rayo's number is a large number named after Agustín Rayo which has been claimed to be the largest named number. It was originally defined in a "big number duel" at MIT on 26 January 2007.
Standardized system of writing
A standardized way of writing very large numbers allows them to be easily sorted in increasing order, and one can get a good idea of how much larger a number is than another one.
To compare numbers in scientific notation, say 5×10^4 and 2×10^5, compare the exponents first, in this case 5 > 4, so 2×10^5 > 5×10^4. If the exponents are equal, the mantissa (or coefficient) should be compared, thus 5×10^4 > 2×10^4 because 5 > 2.
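The comparison rule just described can be sketched in a few lines of Python (an illustrative helper, not a standard library function; mantissas are assumed to be normalised to 1 ≤ m < 10):

```python
def compare_sci(m1, e1, m2, e2):
    """Compare m1*10**e1 with m2*10**e2; return -1, 0 or 1."""
    if e1 != e2:                       # exponents decide first
        return -1 if e1 < e2 else 1
    if m1 != m2:                       # equal exponents: compare mantissas
        return -1 if m1 < m2 else 1
    return 0

print(compare_sci(5, 4, 2, 5))  # -1, because 5*10**4 < 2*10**5
print(compare_sci(5, 4, 2, 4))  #  1, because 5*10**4 > 2*10**4
```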
Tetration with base 10 gives the sequence 10 ↑↑ n = (10 ↑)^n 1, the power towers of numbers 10, where (10 ↑)^n denotes a functional power of the function f(x) = 10^x (the function also expressed by the suffix "-plex" as in googolplex, see the googol family).
These are very round numbers, each representing an order of magnitude in a generalized sense. A crude way of specifying how large a number is, is specifying between which two numbers in this sequence it is.
More precisely, numbers in between can be expressed in the form , i.e., with a power tower of 10s, and a number at the top, possibly in scientific notation, e.g. , a number between and (note that if ). (See also extension of tetration to real heights.)
Thus googolplex is 10^(10^100) = (10 ↑)^2 100 = (10 ↑)^3 2.
Another example:
(between and )
Thus the "order of magnitude" of a number (on a larger scale than usually meant), can be characterized by the number of times (n) one has to take the to get a number between 1 and 10. Thus, the number is between and . As explained, a more precise description of a number also specifies the value of this number between 1 and 10, or the previous number (taking the logarithm one time less) between 10 and 1010, or the next, between 0 and 1.
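The characterisation just described (count how many times log10 must be applied until the result falls between 1 and 10) can be sketched as follows; it works on floats only, so it merely illustrates the idea for values that fit in a double:

```python
import math

def order_of_magnitude(x):
    """Return (n, v): apply log10 n times until v lies in [1, 10)."""
    assert x >= 1
    n = 0
    while x >= 10:
        x = math.log10(x)
        n += 1
    return n, x

print(order_of_magnitude(4.829e9))    # (1, 9.68...): between 10 and 10^10
print(order_of_magnitude(10.0**100))  # (2, 2.0): a googol is between 10^10 and 10^10^10
```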
Note that
I.e., if a number x is too large for a representation the power tower can be made one higher, replacing x by log10x, or find x from the lower-tower representation of the log10 of the whole number. If the power tower would contain one or more numbers different from 10, the two approaches would lead to different results, corresponding to the fact that extending the power tower with a 10 at the bottom is then not the same as extending it with a 10 at the top (but, of course, similar remarks apply if the whole power tower consists of copies of the same number, different from 10).
If the height of the tower is large, the various representations for large numbers can be applied to the height itself. If the height is given only approximately, giving a value at the top does not make sense, so the double-arrow notation (e.g. ) can be used. If the value after the double arrow is a very large number itself, the above can recursively be applied to that value.
Examples:
(between and )
(between and )
Similarly to the above, if the exponent of is not exactly given then giving a value at the right does not make sense, and instead of using the power notation of , it is possible to add to the exponent of , to obtain e.g. .
If the exponent of is large, the various representations for large numbers can be applied to this exponent itself. If this exponent is not exactly given then, again, giving a value at the right does not make sense, and instead of using the power notation of it is possible use the triple arrow operator, e.g. .
If the right-hand argument of the triple arrow operator is large the above applies to it, obtaining e.g. (between and ). This can be done recursively, so it is possible to have a power of the triple arrow operator.
Then it is possible to proceed with operators with higher numbers of arrows, written .
Compare this notation with the hyper operator and the Conway chained arrow notation:
a ↑^n b = ( a → b → n ) = hyper(a, n + 2, b)
An advantage of the first is that when considered as function of b, there is a natural notation for powers of this function (just like when writing out the n arrows): . For example:
(10 ↑↑)^3 b = ( 10 → ( 10 → ( 10 → b → 2 ) → 2 ) → 2 )
and only in special cases the long nested chain notation is reduced; for b = 1 one obtains:
(10 ↑↑)^3 1 = ( 10 → 3 → 3 )
Since the b can also be very large, in general it can be written instead a number with a sequence of powers with decreasing values of n (with exactly given integer exponents ) with at the end a number in ordinary scientific notation. Whenever a is too large to be given exactly, the value of is increased by 1 and everything to the right of is rewritten.
For describing numbers approximately, deviations from the decreasing order of values of n are not needed. For example, , and . Thus is obtained the somewhat counterintuitive result that a number x can be so large that, in a way, x and 10x are "almost equal" (for arithmetic of large numbers see also below).
If the superscript of the upward arrow is large, the various representations for large numbers can be applied to this superscript itself. If this superscript is not exactly given then there is no point in raising the operator to a particular power or to adjust the value on which it acts; instead it is possible to simply use a standard value at the right, say 10, and the expression reduces to 10 ↑^n 10 with an approximate n. For such numbers the advantage of using the upward arrow notation no longer applies, so the chain notation can be used instead.
The above can be applied recursively for this n, so the notation is obtained in the superscript of the first arrow, etc., or a nested chain notation, e.g.:
(10 → 10 → (10 → 10 → ) ) =
If the number of levels gets too large to be convenient, a notation is used where this number of levels is written down as a number (like using the superscript of the arrow instead of writing many arrows). Introducing a function f(n) = (10 → 10 → n), these levels become functional powers of f, allowing us to write a number in the form f^m(n) where m is given exactly and n is an integer which may or may not be given exactly. If n is large, any of the above can be used for expressing it. The "roundest" of these numbers are those of the form f^m(1) = (10→10→m→2). For example,
Compare the definition of Graham's number: it uses numbers 3 instead of 10 and has 64 arrow levels and the number 4 at the top; thus , but also .
If m in is too large to give exactly, it is possible to use a fixed n, e.g. n = 1, and apply the above recursively to m, i.e., the number of levels of upward arrows is itself represented in the superscripted upward-arrow notation, etc. Using the functional power notation of f this gives multiple levels of f. Introducing a function these levels become functional powers of g, allowing us to write a number in the form where m is given exactly and n is an integer which may or may not be given exactly. For example, if (10→10→m→3) = gm(1). If n is large any of the above can be used for expressing it. Similarly a function h, etc. can be introduced. If many such functions are required, they can be numbered instead of using a new letter every time, e.g. as a subscript, such that there are numbers of the form where k and m are given exactly and n is an integer which may or may not be given exactly. Using k=1 for the f above, k=2 for g, etc., obtains (10→10→n→k) = . If n is large any of the above can be used to express it. Thus is obtained a nesting of forms where going inward the k decreases, and with as inner argument a sequence of powers with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.
When k is too large to be given exactly, the number concerned can be expressed as =(10→10→10→n) with an approximate n. Note that the process of going from the sequence =(10→n) to the sequence =(10→10→n) is very similar to going from the latter to the sequence =(10→10→10→n): it is the general process of adding an element 10 to the chain in the chain notation; this process can be repeated again (see also the previous section). Numbering the subsequent versions of this function a number can be described using functions , nested in lexicographical order with q the most significant number, but with decreasing order for q and for k; as inner argument yields a sequence of powers with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.
For a number too large to write down in the Conway chained arrow notation, its size can be described by the length of that chain, for example only using elements 10 in the chain; in other words, one could specify its position in the sequence 10, 10→10, 10→10→10, ... If even the position in the sequence is a large number, the same techniques can be applied again.
Examples
Numbers expressible in decimal notation:
2^2 = 4
2^2^2 = 2 ↑↑ 3 = 16
3^3 = 27
4^4 = 256
5^5 = 3,125
6^6 = 46,656
2^2^2^2 = 2 ↑↑ 4 = 2 ↑↑↑ 3 = 65,536
7^7 = 823,543
10^6 = 1,000,000 = 1 million
8^8 = 16,777,216
9^9 = 387,420,489
10^9 = 1,000,000,000 = 1 billion
10^10 = 10,000,000,000
10^12 = 1,000,000,000,000 = 1 trillion
3^3^3 = 3 ↑↑ 3 = 7,625,597,484,987 ≈ 7.63 × 10^12
10^15 = 1,000,000,000,000,000 = 1 million billion = 1 quadrillion
10^18 = 1,000,000,000,000,000,000 = 1 billion billion = 1 quintillion
Numbers expressible in scientific notation:
Approximate number of atoms in the observable universe = 10^80 = 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
googol = 10^100 = 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
4^4^4 = 4 ↑↑ 3 = 2^512 ≈ 1.34 × 10^154 ≈ (10 ↑)^2 2.2
Approximate number of Planck volumes composing the volume of the observable universe = 8.5 × 10^184
5^5^5 = 5 ↑↑ 3 = 5^3125 ≈ 1.91 × 10^2184 ≈ (10 ↑)^2 3.3
6^6^6 = 6 ↑↑ 3 ≈ 2.66 × 10^36,305 ≈ (10 ↑)^2 4.6
7^7^7 = 7 ↑↑ 3 ≈ 3.76 × 10^695,974 ≈ (10 ↑)^2 5.8
8^8^8 = 8 ↑↑ 3 ≈ 6.01 × 10^15,151,335 ≈ (10 ↑)^2 7.2
2^136,279,841 − 1, the 52nd and the largest known Mersenne prime.
9^9^9 = 9 ↑↑ 3 ≈ 4.28 × 10^369,693,099 ≈ (10 ↑)^2 8.6
10^10^10 = 10 ↑↑ 3 = 10^10,000,000,000 = (10 ↑)^3 1
Numbers expressible in (10 ↑)^n k notation:
googolplex = 10^(10^100) = (10 ↑)^2 100
10 ↑↑ 5 = (10 ↑)^5 1
3 ↑↑ 6 ≈ (10 ↑)^5 1.10
2 ↑↑ 8 ≈ (10 ↑)^5 4.3
10 ↑↑ 6 = (10 ↑)^6 1
10 ↑↑↑ 2 = 10 ↑↑ 10 = (10 ↑)^10 1
2 ↑↑↑↑ 3 = 2 ↑↑↑ 4 = 2 ↑↑ 65,536 ≈ (10 ↑)^65,533 4.3 is between 10 ↑↑ 65,533 and 10 ↑↑ 65,534
Bigger numbers:
3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3) ≈ 3 ↑↑ 7.6 × 10^12 ≈ 10 ↑↑ 7.6 × 10^12 is between (10 ↑↑)^2 2 and (10 ↑↑)^2 3
10 ↑↑↑ 3 = ( 10 → 3 → 3 )
10 ↑↑↑ 4 = ( 10 → 4 → 3 )
10 ↑↑↑ 5 = ( 10 → 5 → 3 )
10 ↑↑↑ 6 = ( 10 → 6 → 3 )
10 ↑↑↑ 7 = ( 10 → 7 → 3 )
10 ↑↑↑ 8 = ( 10 → 8 → 3 )
10 ↑↑↑ 9 = ( 10 → 9 → 3 )
10 ↑↑↑↑ 2 = ( 10 → 2 → 4 ) = ( 10 → 10 → 3 )
The first term in the definition of Graham's number, g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) ≈ 3 ↑↑↑ (10 ↑↑ 7.6 × 10^12) ≈ 10 ↑↑↑ (10 ↑↑ 7.6 × 10^12) is between (10 ↑↑↑)^2 2 and (10 ↑↑↑)^2 3 (See Graham's number#Magnitude)
10 ↑↑↑↑ 3 = (10 → 3 → 4)
4 ↑↑↑↑ 4 = ( 4 → 4 → 4 )
10 ↑↑↑↑ 4 = ( 10 → 4 → 4 )
10 ↑↑↑↑ 5 = ( 10 → 5 → 4 )
10 ↑↑↑↑ 6 = ( 10 → 6 → 4 )
10 ↑↑↑↑ 7 = ( 10 → 7 → 4 )
10 ↑↑↑↑ 8 = ( 10 → 8 → 4 )
10 ↑↑↑↑ 9 = ( 10 → 9 → 4 )
10 ↑↑↑↑↑ 2 = ( 10 → 2 → 5 ) = ( 10 → 10 → 4 )
( 2 → 3 → 2 → 2 ) = ( 2 → 3 → 8 )
( 3 → 2 → 2 → 2 ) = ( 3 → 2 → 9 ) = ( 3 → 3 → 8 )
( 10 → 10 → 10 ) = ( 10 → 2 → 11 )
( 10 → 2 → 2 → 2 ) = ( 10 → 2 → 100 )
( 10 → 10 → 2 → 2 ) = ( 10 → 2 → ) =
The second term in the definition of Graham's number, g2 = 3 ↑^g1 3 > 10 ↑^(g1 − 1) 10.
( 10 → 10 → 3 → 2 ) = (10 → 10 → (10 → 10 → ) ) =
g3 = (3 → 3 → g2) > (10 → 10 → g2 – 1) > (10 → 10 → 3 → 2)
g4 = (3 → 3 → g3) > (10 → 10 → g3 – 1) > (10 → 10 → 4 → 2)
...
g9 = (3 → 3 → g8) is between (10 → 10 → 9 → 2) and (10 → 10 → 10 → 2)
( 10 → 10 → 10 → 2 )
g10 = (3 → 3 → g9) is between (10 → 10 → 10 → 2) and (10 → 10 → 11 → 2)
...
g63 = (3 → 3 → g62) is between (10 → 10 → 63 → 2) and (10 → 10 → 64 → 2)
( 10 → 10 → 64 → 2 )
Graham's number, g64
( 10 → 10 → 65 → 2 )
( 10 → 10 → 10 → 3 )
( 10 → 10 → 10 → 4 )
( 10 → 10 → 10 → 10 )
( 10 → 10 → 10 → 10 → 10 )
( 10 → 10 → 10 → 10 → 10 → 10 )
( 10 → 10 → 10 → 10 → 10 → 10 → 10 → ... → 10 → 10 → 10 → 10 → 10 → 10 → 10 → 10 ) where there are ( 10 → 10 → 10 ) "10"s
Other notations
Some notations for extremely large numbers:
Knuth's up-arrow notation/hyperoperators/Ackermann function, including tetration
Conway chained arrow notation
Steinhaus-Moser notation; apart from the method of construction of large numbers, this also involves a graphical notation with polygons. Alternative notations, like a more conventional function notation, can also be used with the same functions.
Fast-growing hierarchy
These notations are essentially functions of integer variables, which increase very rapidly with those integers. Ever-faster-increasing functions can easily be constructed recursively by applying these functions with large integers as argument.
A function with a vertical asymptote is not helpful in defining a very large number, although the function increases very rapidly: one has to define an argument very close to the asymptote, i.e. use a very small number, and constructing that is equivalent to constructing a very large number, e.g. the reciprocal.
Comparison of base values
The following illustrates the effect of a base different from 10, base 100. It also illustrates representations of numbers and the arithmetic.
, with base 10 the exponent is doubled.
, ditto.
, the highest exponent is very little more than doubled (increased by log10(2)).
(thus if n is large it seems fair to say that is "approximately equal to" )
(compare ; thus if n is large it seems fair to say that is "approximately equal to" )
(compare )
(compare )
(compare ; if n is large this is "approximately" equal)
Accuracy
For a number 10^n, one unit change in n changes the result by a factor of 10. In a number like 10^(6.2 × 10^3), with the 6.2 the result of proper rounding using significant figures, the true value of the exponent may be 50 less or 50 more. Hence the result may be a factor 10^50 too large or too small. This seems like extremely poor accuracy, but for such a large number it may be considered fair (a large error in a large number may be "relatively small" and therefore acceptable).
For very large numbers
In the case of an approximation of an extremely large number, the relative error may be large, yet there may still be a sense in which one wants to consider the numbers as "close in magnitude". For example, consider 10^10 and 10^9. The relative error is
1 − 10^9 / 10^10 = 0.9 = 90%,
a large relative error. However, one can also consider the relative error in the logarithms; in this case, the logarithms (to base 10) are 10 and 9, so the relative error in the logarithms is only 10%.
The point is that exponential functions magnify relative errors greatly – if a and b have a small relative error, then 10^a and 10^b have a larger relative error, and 10^(10^a) and 10^(10^b) will have an even larger relative error. The question then becomes: on which level of iterated logarithms to compare two numbers? There is a sense in which one may want to consider 10^(10^10) and 10^(10^9) to be "close in magnitude". The relative error between these two numbers is large, and the relative error between their logarithms is still large; however, the relative error in their second-iterated logarithms is small:
log10(log10(10^(10^10))) = 10 and log10(log10(10^(10^9))) = 9, a relative error of only 10%.
Such comparisons of iterated logarithms are common, e.g., in analytic number theory.
Classes
One solution to the problem of comparing large numbers is to define classes of numbers, such as the system devised by Robert Munafo, which is based on different "levels" of perception of an average person. Class 0 – numbers between zero and six – is defined to contain numbers that are easily subitized, that is, numbers that show up very frequently in daily life and are almost instantly comparable. Class 1 – numbers between six and 1,000,000 = 10^6 – is defined to contain numbers whose decimal expressions are easily subitized, that is, numbers which are easily comparable not by cardinality, but "at a glance" given the decimal expansion.
Each class after these is defined in terms of iterating this base-10 exponentiation, to simulate the effect of another "iteration" of human indistinguishability. For example, class 5 is defined to include numbers between 10 and 10, which are numbers where becomes humanly indistinguishable from (taking iterated logarithms of such yields indistinguishability firstly between log() and 2log(), secondly between log(log()) and 1+log(log()), and finally an extremely long decimal expansion whose length can't be subitized).
Approximate arithmetic
There are some general rules relating to the usual arithmetic operations performed on very large numbers:
The sum and the product of two very large numbers are both "approximately" equal to the larger one.
Hence:
A very large number raised to a very large power is "approximately" equal to the larger of the following two values: the first value and 10 to the power the second. For example, for very large there is (see e.g. the computation of mega) and also . Thus , see table.
Systematically creating ever-faster-increasing sequences
Given a strictly increasing integer sequence/function f0(n) (n ≥ 1), it is possible to produce a faster-growing sequence f1(n) = f0^n(n) (where the superscript n denotes the nth functional power). This can be repeated any number of times by letting fk+1(n) = fk^n(n), each sequence growing much faster than the one before it. Thus it is possible to define fω(n) = fn(n), which grows much faster than any fk for finite k (here ω is the first infinite ordinal number, representing the limit of all finite numbers k). This is the basis for the fast-growing hierarchy of functions, in which the indexing subscript is extended to ever-larger ordinals.
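A toy illustration of this construction in Python (hypothetical helper names; only tiny arguments are feasible, since the real values grow astronomically fast):

```python
def iterate(f, times, x):
    for _ in range(times):
        x = f(x)
    return x

def speed_up(f):
    # Return g with g(n) = f^n(n), the n-th functional power of f applied to n.
    return lambda n: iterate(f, n, n)

f0 = lambda n: n + 1
f1 = speed_up(f0)   # f1(n) = 2n
f2 = speed_up(f1)   # f2(n) = n * 2**n
f3 = speed_up(f2)   # grows roughly like iterated exponentiation

print(f1(5), f2(5), f3(2))  # 10 160 2048
```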
For example, starting with f0(n) = n + 1:
f1(n) = f0^n(n) = n + n = 2n
f2(n) = f1^n(n) = 2^n · n > 2 ↑ n for n ≥ 2 (using Knuth up-arrow notation)
f3(n) = f2^n(n) > (2 ↑)^n n ≥ 2 ↑^2 n for n ≥ 2
fk+1(n) > 2 ↑^k n for n ≥ 2, k < ω
fω(n) = fn(n) > 2 ↑^(n − 1) n > 2 ↑^(n − 2) (n + 3) − 3 = A(n, n) for n ≥ 2, where A is the Ackermann function (of which fω is a unary version)
fω+1(64) > fω^64(6) > Graham's number (= g64 in the sequence defined by g0 = 4, gk+1 = 3 ↑^gk 3)
This follows by noting fω(n) > 2 ↑^(n − 1) n > 3 ↑^(n − 2) 3 + 2, and hence fω(gk + 2) > gk+1 + 2
fω(n) > 2 ↑^(n − 1) n = (2 → n → n−1) = (2 → n → n−1 → 1) (using Conway chained arrow notation)
fω+1(n) = fω^n(n) > (2 → n → n−1 → 2) (because if gk(n) = X → n → k then X → n → k+1 = gk^n(1))
fω+k(n) > (2 → n → n−1 → k+1) > (n → n → k)
fω·2(n) = fω+n(n) > (n → n → n) = (n → n → n → 1)
fω·2+k(n) > (n → n → n → k)
fω·3(n) > (n → n → n → n)
fω·k(n) > (n → n → ... → n → n) (chain of k+1 n's)
fω²(n) = fω·n(n) > (n → n → ... → n → n) (chain of n+1 n's)
In some noncomputable sequences
The busy beaver function Σ is an example of a function which grows faster than any computable function. Its value for even relatively small input is huge. The values of Σ(n) for n = 1, 2, 3, 4, 5 are 1, 4, 6, 13, 4098. Σ(6) is not known but is at least 10↑↑15.
Infinite numbers
Although all the numbers discussed above are very large, they are all still decidedly finite. Certain fields of mathematics define infinite and transfinite numbers. For example, aleph-null is the cardinality of the infinite set of natural numbers, and aleph-one is the next greatest cardinal number. is the cardinality of the reals. The proposition that is known as the continuum hypothesis.
See also
References
Mathematical notation | Large numbers | Mathematics | 6,794 |
3,393,903 | https://en.wikipedia.org/wiki/Pushback%20%28aviation%29 | In aviation, pushback is an airport procedure during which an aircraft is pushed backwards away from its parking position, usually at an airport gate by external power. Pushbacks are carried out by special, low-profile vehicles called pushback tractors or tugs.
Although many aircraft are capable of moving themselves backwards on the ground using reverse thrust (a procedure referred to as a powerback), the resulting jet blast or prop wash would cause increased noise, damage to the terminal building or equipment, and injury to airport staff from flying debris. Debris would also be sucked into the engines, as it is in normal use, causing excessive wear; ground operation is a major source of wear on aircraft engines. A pushback is therefore the preferred method when ground-handling aircraft.
Definition
IATA defines aircraft pushback as "rearward moving of an aircraft from a parking position to a taxi position by use of specialized ground support equipment."
Procedure
Pushbacks at busy aerodromes are usually subject to ground control clearance to facilitate ground movement on taxiways. Once clearance is obtained, the pilot will communicate with the tractor driver (or a ground handler walking alongside the aircraft in some cases) to start the pushback. To communicate, a headset may be connected near the nose gear.
Since the pilots cannot see what is behind the aircraft, steering is done by the pushback tractor driver and not by the pilots. Depending on the aircraft type and airline procedure, a bypass pin may be temporarily installed into the nose gear to disconnect it from the aircraft's normal steering mechanism.
Once the pushback is completed, the towbar is disconnected, and any bypass pin removed. The ground handler will show the bypass pin to the pilots to make it clear that it has been removed. The pushback is then complete, and the aircraft can taxi forward under its own power.
Equipment
Moving light aircraft
Very small airplanes may be moved by human power alone. The airplane may be pushed or pulled by landing gear or wing struts since they're known to be strong enough to drag the airplane through the air. To allow for turns, a person may either pick up or push down on the tail to raise either the nose wheel or tail wheel off the ground, then rotate the airplane by hand. A less cumbersome method involves attaching a short tow bar to either the nose wheel or tail wheel, which provides a solid handhold and leverage to steer with, as well as eliminates the danger of handling the propeller. These tow bars are usually a lightweight aluminum alloy construction which allows them to be carried on board the airplane. Other small tow bars have a powered wheel to help move the airplane, with power sources as diverse as lawnmower engines or battery-operated electric drills. However, powered tow bars are usually too large and heavy to be practically carried on small airplanes.
Tractors and towbars
Large aircraft cannot be moved by hand and must have a tractor or tug. Pushback tractors use a low profile design to fit under the aircraft nose. For sufficient traction the tractor must be heavy, and most models can have extra ballast added. A typical tractor for large aircraft weighs up to and has a drawbar pull of . Often the driver's cabin can be raised for increased visibility when reversing and lowered to fit under aircraft. There are two types of pushback tractors: conventional and towbarless (TBL).
Conventional tugs use a tow bar to connect the tug to the nose landing gear of the aircraft. The tow bar is fixed laterally at the nose landing gear, but may move slightly vertically for height adjustment. At the end that attaches to the tug, the tow bar may pivot freely laterally and vertically. In this manner the tow bar acts as a large lever to rotate the nose landing gear. Each aircraft type has a unique tow fitting so the towbar also acts as an adapter between the standard-sized tow pin on the tug and the type-specific fitting on the aircraft's landing gear. The tow bar must be long enough to place the tug far away enough to avoid hitting the aircraft and to provide sufficient leverage to facilitate turns. On heavy tow bars for large aircraft the towbar rides on its own wheels when not connected to an aircraft. The wheels are attached to a hydraulic jacking mechanism which can lift the towbar to the correct height to mate to both the airplane and the tug, and once this is accomplished the same mechanism is used in reverse to raise the tow bar wheels from the ground during the pushback process. The tow bar can be connected at the front or the rear of the tractor, depending on whether the aircraft will be pushed or pulled. The towbar has a shear pin which prevents the aircraft from being mishandled by the tug; when overstressed the shear pin will snap, disconnecting the bar from the nose gear to prevent damage to the aircraft and tug.
Towbarless (TBL) tractors do not use a towbar; they scoop up the nose landing gear and lift it off the ground. This avoids the time penalty of connecting/disconnecting a towbar, and entirely removes the cost/complexity of maintaining towbars on the ramp. The tug itself does not need to be particularly massive - the aircraft's nosewheel weight provides the necessary downward force. Lastly, a TBL tug is much shorter (compared to a tug+towbar system) and has only a single pivot point instead of one at either end of the towbar, so it has much simpler and precise control of the aircraft. This is very useful in general aviation settings with a wider variety of aircraft in more confined spaces than their airline counterparts.
Manufacturers of electric TBL tugs offer models capable of moving any aircraft from the smallest single-engine type to narrow-body airliners, military cargo and airline-sized business jets. Just as specialized towbars are required for a wide range of aircraft, many TBL tugs use adapters which enable the movement of many unique aircraft. The majority of aircraft do not require adapters and can be moved without any special adjustments to the tug. This is in contrast to conventional tugs which often use so-called "universal" towbars which must be adjustable to suit many aircraft types. Electric TBL tugs are gaining popularity among general aviation operators and FBOs as an alternative to gas or diesel-powered conventional tugs. Being electric rather than internal combustion-powered, electric tugs are low-emission which is a major advantage for environmentally-conscious operators; this also enables the tug to be safely operated inside a closed hangar.
Robotic tractor/tug
The Lahav Division of Israel Aerospace Industries has developed a semi-robotic towbarless tractor it calls TaxiBot that can tow an aircraft from the terminal gate to the take-off point (taxi-out phase) and return it to the gate after landing (taxi-in phase). The TaxiBot eliminates the use of airplane engines during taxi-in and until immediately prior to take-off during taxi-out, potentially saving airlines billions of dollars in fuel. The TaxiBot is controlled by the pilot from the cockpit using the regular pilot controls.
British Airways has also used a similar type of tug.
Other equipment applications
While the vehicle is referred to as a pushback tug, it is also used to tow aircraft in areas where taxiing the aircraft is not practical or is unsafe, such as moving aircraft in and out of maintenance hangars, or moving aircraft that are not under their own power.
Some airlines, notably Virgin Atlantic, advocated towing aircraft to the holding point of the runway to save fuel and reduce environmental impact. However, the practice was discontinued after landing gear maintenance costs increased due to the stress put on the landing gear during the towing process.
Some fuel must still be burned to operate the auxiliary power unit to provide electrical and pneumatic power to run lighting, environmental and communications systems, unless the tug itself provides these sources of power, which some do. This method also places a larger workload on ground crews and equipment, especially if the aircraft and tow tractor ends up having to wait in a long line of aircraft.
In media
In an advertising campaign, also documented on the television show Fifth Gear, a Volkswagen Touareg was used to pull a Boeing 747. As cited before, the "tractor" needs to be heavy to aid traction. The Touareg carried 4.3 tons worth of cement bags, and the tires were inflated to twice the normal pressure to handle the extra weight. Tractor tires have high sidewall ratios for that reason. This is the world record for the heaviest load towed by a production car.
Gallery
See also
Aircraft ground handling
Ground support equipment
Visual Guidance Docking Systems (VGDS)
References
Sources
Operations Manual Bucher aircraft tractor Kp20FlApt13
Schweizerische Militärmuseum Full
Aircraft ground handling
Tractors
Articles containing video clips | Pushback (aviation) | Engineering | 1,805 |
17,910,574 | https://en.wikipedia.org/wiki/Digital%20ecosystem | A digital ecosystem is a distributed, adaptive, open socio-technical system with properties of self-organization, scalability and sustainability inspired from natural ecosystems. Digital ecosystem models are informed by knowledge of natural ecosystems, especially for aspects related to competition and collaboration among diverse entities. The term is used in the computer industry, the entertainment industry, and the World Economic Forum.
History
The concept of Digital Business Ecosystem was put forward in 2002 by a group of European researchers and practitioners, including Francesco Nachira, Paolo Dini and Andrea Nicolai, who applied the general notion of digital ecosystems to model the process of adoption and development of ICT-based products and services in competitive, highly fragmented markets like the European one.
Elizabeth Chang, Ernesto Damiani and Tharam Dillon started the IEEE Digital EcoSystems and Technologies Conference (IEEE DEST) in 2007. Richard Chbeir, Youakim Badr, Dominique Laurent, and Hiroshi Ishikawa started the ACM Conference on Management of Digital EcoSystems (MEDES) in 2009.
Perspectives
The digital ecosystem metaphor and models have been applied to a number of business areas related to the production and distribution of knowledge-intensive products and services, including higher education. The perspective of this research is to provide methods and tools to achieve a set of objectives of the ecosystem (e.g. sustainability, fairness, bounded information asymmetry, risk control and gracious failure). These objectives are seen as desirable properties whose emergence should be fostered by the digital ecosystem's self-organization, rather than as explicit design goals as in conventional IT.
See also
Oikos
Ecology
Ecosystem
Information ecosystem
Software ecosystem
Platform ecosystem
Knowledge commons
Knowledge ecosystem
Digital distribution
Media ecology
Social system
References
External links
International ACM Conference on Management of Digital EcoSystems (MEDES) (since 2009) - computer science Conference for interdisciplinary studies on Digital Ecosystems and Analysis
IEEE International Conference on Digital Ecosystems and Technologies (IEEE-DEST 2012) - computer science Conference for Digital Ecosystems and related Technologies. Held from 2007 to 2013.
Computing and society
Information systems
Systems
Social systems | Digital ecosystem | Technology | 408 |
31,285,090 | https://en.wikipedia.org/wiki/Cantharellus%20minor | Cantharellus minor is a fungus native to eastern North America. It is one of the smallest of the genus Cantharellus, which includes other edible chanterelles.
Description
Cantharellus minor is colored bright yellow to yellowish-orange. The cap ranges from wide and is convex and umbonate, often shallowly depressed, becoming funnel-shaped in some. The yellowish gills are decurrent, fade to yellowish white in maturity, and may seem large in proportion to the small fruiting body. The stipe is tall and thick.
Similar species
Lookalikes include the Gulf Coast's C. tabernensis which has a darker center, Craterellus ignicolor which has shallower ridges and usually a depression in the cap, and Gloioxanthomyces nitidus which has a very circular margin, fairly straight stem and non-forking gills.
Distribution and habitat
Native to eastern North America, the fungus fruits from June to September.
It is suspected of being mycorrhizal, found in association with oaks and moss. Recently, C. minor has been reported from semi-evergreen to evergreen forests in the Western Ghats, Kerala, India forming ectomycorrhizal associations with tree species like Vateria indica, Diospyros malabarica, Hopea parviflora, and Myristica species.
Uses
Although insubstantial, the mushrooms are edible.
References
External links
minor
Fungi described in 1872
Fungi of Asia
Fungi of North America
Edible fungi
Taxa named by Charles Horton Peck
Fungus species | Cantharellus minor | Biology | 320 |
26,919,476 | https://en.wikipedia.org/wiki/Jeff%20Kahn%20%28mathematician%29 | Jeffry Ned Kahn is a professor of mathematics at Rutgers University notable for his work in combinatorics.
Education
Kahn received his Ph.D. from Ohio State University in 1979 after completing his dissertation under his advisor Dijen K. Ray-Chaudhuri.
Research
In 1980 he showed the importance of the bundle theorem for ovoidal Möbius planes. In 1993, together with Gil Kalai, he disproved Borsuk's conjecture. In 1996 he was awarded the Pólya Prize (SIAM).
Awards and honors
He was an invited speaker at the 1994 International Congress of Mathematicians in Zurich.
In 2012, he was awarded the Fulkerson Prize (jointly with Anders Johansson and Van H. Vu) for determining the threshold of edge density above which a random graph can be covered by disjoint copies of a given smaller graph. Also in 2012, he became a fellow of the American Mathematical Society.
References
Living people
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
Rutgers University faculty
Fellows of the American Mathematical Society
Year of birth missing (living people) | Jeff Kahn (mathematician) | Mathematics | 223 |
9,517,883 | https://en.wikipedia.org/wiki/Spongivore | A spongivore is an animal anatomically and physiologically adapted to eating animals of the phylum Porifera, commonly called sea sponges, for the main component of its diet. As a result of their diet, spongivore animals like the hawksbill turtle have developed sharp, narrow bird-like beak that allows them to reach within crevices on the reef to obtain sponges.
Examples
The hawksbill turtle is one of the few animals known to feed primarily on sponges, and the only known spongivorous reptile. Sponges of various select species constitute up to 95% of the diets of Caribbean hawksbill turtle populations.
Pomacanthus imperator, the emperor angelfish; Lactophrys bicaudalis, the spotted trunkfish; and Stephanolepis hispidus, the planehead filefish are known spongivorous coral reef fish. The rock beauty Holocanthus tricolor is also spongivorous, with sponges making up 96% of their diet.
Certain species of nudibranchs are known to feed selectively on specific species of sponges.
Attacks and counter-attacks
Spongivore offense
The many defenses displayed by sponges mean that spongivores must learn skills to overcome them in order to obtain their food. These skills allow spongivores to increase their feeding on and use of sponges. Spongivores have three primary strategies for dealing with sponge defenses: choice based on colour, tolerance of secondary metabolites, and brain development for memory.
Choice based on colour determines which sponges a spongivore will eat. A spongivore bites a small sample of a sponge and, if unharmed, continues eating that sponge before moving on to another sponge of the same colour.
Spongivores have adapted to tolerate the secondary metabolites that sponges produce, and are therefore able to consume a variety of sponges without being harmed.
Spongivores also have sufficient brain development to remember which species of sponge they have eaten in the past and will continue to eat in the future.
Sponge defense
A sponge defense is a trait that increases a sponge's fitness when faced with a spongivore, measured relative to another sponge that lacks the defensive trait. Sponge defenses increase the survival and/or reproduction (fitness) of sponges under predation pressure from spongivores.
Sponges use structural and chemical strategies to deter predation. One of the most common structural strategies is the possession of spicules: if a sponge contains spicules along with organic compounds, the likelihood of it being consumed by spongivores decreases.
Sponges have also developed aposematism to help avoid predation. Aposematism in sponges involves four elements:
The sponge is poisonous, so some predators will not eat it;
It is conspicuously coloured, or advertises itself by means of some other signal;
Some predators avoid attacking it because of its signals;
These conspicuous signals provide better protection to the individual or to its genes than would other (e.g. cryptic) signals.
Sponges that live in the deep sea, however, gain little advantage from their colour, because most colour is lost at depth.
Impacts
Sponges play an important role in the benthic fauna of temperate, tropical and polar habitats. A high level of predation on them can affect bioerosion, reef creation, multiple habitats, other species, and nitrogen levels.
Bioerosion, which produces reef sediments and affects the structural components of corals, is partly carried out by sponges, which process solid carbonate into smaller fragments and fine sediments. Sponges also increase the survival of live coral on Caribbean reefs by binding fragments together, which is expected to increase the rate of carbonate accretion.
Coral reefs that contain more sponges have a better survival rate than reefs with fewer sponges. Sponges can act as stabilizers during storms, helping to keep reefs intact under strong currents. Sponges also grow between rocks and boulders, providing a more stable environment and lowering disturbance levels, and they provide habitats for other organisms that would otherwise lack a protected home.
Scientists have discovered that sponges play an important role in the nitrogen cycle. There are low amounts of nitrogen in the water around coral reefs, and most of the nitrogen present is bound into particulate or dissolved organic matter. Before this dissolved organic matter can be used by other reef organisms it must undergo a series of microbial transformations. The nitrogen cycling that occurs in sponges returns nitrogen to the water column, where it can be used by other organisms, especially cyanobacteria. The cyanobacteria can then fix atmospheric nitrogen, which the sponges can in turn use. A high abundance of spongivores in an environment can therefore affect aspects of the environment well beyond the sponges themselves.
References
Carnivory
Sponge biology
Animals by eating behaviors | Spongivore | Biology | 1,089 |
25,290,996 | https://en.wikipedia.org/wiki/Elapsed%20real%20time | In computing, elapsed real time, real time, wall-clock time, wall time, or walltime is the actual time taken from the start of a computer program to the end. In other words, it is the difference between the time at which a task finishes and the time at which the task started.
Wall time is thus different from CPU time, which measures only the time during which the processor is actively working on a certain task or process. The difference between the two can arise from architecture and run-time dependent factors, e.g. programmed delays or waiting for system resources to become available. Consider the example of a mathematical program that reports that it has used "CPU time 0m0.04s, Wall time 6m6.01s". This means that while the program was active for six minutes and six seconds, during that time the computer's processor spent only 4/100 of a second performing calculations for the program.
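A minimal Python sketch of the distinction (the exact numbers will vary from run to run):

```python
import time

def busy_work(n):
    # CPU-bound loop: consumes both CPU time and wall-clock time.
    return sum(i * i for i in range(n))

wall_start = time.perf_counter()   # wall-clock (elapsed real) time
cpu_start = time.process_time()    # CPU time used by this process

busy_work(2_000_000)
time.sleep(2)                      # consumes wall time but almost no CPU time

print(f"wall: {time.perf_counter() - wall_start:.2f} s, "
      f"cpu: {time.process_time() - cpu_start:.2f} s")
```

Here the sleeping program accumulates wall time while using essentially no CPU time, mirroring the reported example above.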
Conversely, programs running in parallel on more than one processing unit can spend CPU time many times beyond their elapsed time. Since in concurrent computing the definition of elapsed time is non-trivial, the conceptualization of the elapsed time as measured on a separate, independent wall clock is convenient.
Another definition of "wall time" is the measurement of time via a separate, independent clock as opposed to the local system time (internal), i.e. with regard to the difference between the two.
In simulation
The term wall-clock time has also found widespread adoption in computer simulation, to distinguish between (1) the (often compressed or expanded) simulation time, and (2) the time as it passes for the user of the simulation tool.
References
Computing terminology
Durations | Elapsed real time | Physics,Technology | 354 |
43,576,627 | https://en.wikipedia.org/wiki/International%20Student%20Science%20Fair | The International Student Science Fair (ISSF) is an annual event providing a platform for international interaction to take place in science education. The fair brings together high school students, teachers and school leaders to share and develop the learning and teaching of science research and education.
The ISSF is the major event of its type in the world, bringing together students, teachers, school and university leaders to share and develop their ideas about science in the modern world.
History of ISSF
2005 — Mahidol Wittayanusorn School, Bangkok, Thailand (2005)
2006 — Korea Science Academy, Busan, Republic of Korea (August 2006)
2007 — City Montessori School, Lucknow, India (August 2007)
2008 — Ritsumeikan Senior High School, Kyoto, Japan (October 2008)
2009 — National Junior College, Singapore (May 2009)
2010 — Australian Science and Mathematics School, Adelaide, Australia (September 2010)
2011 — Mahidol Wittayanusorn School, Bangkok, Thailand (September 2011)
2012 — Fort Richmond Collegiate, Winnipeg, Canada (24-30 April 2012)
2013 — Camborne Science & International Academy, Camborne, United Kingdom (11-16 July 2013)
2014 — Moscow Chemical Lyceum, Moscow, Russian Federation (8-12 August 2014)
2015 — John Monash Science School, Melbourne, Australia (December 2015)
2016 — NUS High School of Math and Science, Singapore (May 2016)
2017 — Korea Science Academy, Busan, South Korea (June 2017)
2018 — Illinois Mathematics and Science Academy, Aurora, Illinois, United States (June 2018)
2019 — National Junior College, Singapore (March 2019)
2020 — Kamnoetvidya Science Academy, Rayong, Thailand (January 2020)
2021 — The High School Affiliated to the Beihang University, Beijing, China
2022 — Lewiston-Porter High School, Lewiston, New York, United States
2023 — Queensland Academy for Science, Mathematics and Technology, Australia
Participating schools
Australia
Australian Science and Mathematics School
John Monash Science School
Queensland Academy for Science, Mathematics and Technology
Aberfoyle Park High School
Suzanne Cory High School
Canada
Fort Richmond Collegiate
China
The High School affiliated to BeiHang University
Hong Kong SAR
Ho Yu College and Primary School (Sponsored by Sik Sik Yuen)
India
City Montessori School
Indonesia
Budi Mulia Dua International High School
Center for Young Scientists (affiliated to Surya University)
Iran
Manzoumeh Kherad Institute
Japan
Tokyo Tech High School of Science and Technology
Ritsumeikan Junior and Senior High School
Ritsumeikan Keisho Junior and Senior High School
Ritsumeikan Uji High School
Toyonaka High School
Kenya
Brookhouse School
Macau
Keang Peng School
Malaysia
Alam Shah Science School
Kepala Batas Secondary Science School
Mongolia
New Beginning International School
Netherlands
St.-Odulphuslyceum
Philippines
Philippine Science High School
Republic of Korea
Korean Minjok Leadership Academy
Korea Science Academy of KAIST
Russian Federation
Moscow Chemical Lyceum
Lyceum "Physical-Technical High School"
Singapore
National Junior College
NUS High School of Math and Science
School of Science and Technology, Singapore
Taiwan
Kaohsiung Municipal Kaohsiung Girls' Senior High School
Thailand
Chulalongkorn University Demonstration Secondary School
Mahidol Wittayanusorn School
Kamnoetvidya Science Academy
United Kingdom
Camborne Science and International Academy
Lancaster Girls' Grammar School
United States of America
Illinois Mathematics and Science Academy
West Aurora High School
Lewiston-Porter High School
References
Science competitions | International Student Science Fair | Technology | 712 |
15,730,240 | https://en.wikipedia.org/wiki/Haller%20index | The Haller index, created in 1987 by J. Alex Haller, S. S. Kramer, and S. A. Lietman, is a mathematical relationship that exists in a human chest section observed with a CT scan. It is defined as the ratio of the transverse diameter (the horizontal distance of the inside of the ribcage) and the anteroposterior diameter (the shortest distance between the vertebrae and sternum).
HI = distance 1 / distance 2
where:
HI is the Haller Index
distance 1 is the distance of the inside ribcage (at the level of maximum deformity or at the lower third of the sternum)
distance 2 is the distance between the sternal notch and vertebrae.
More recent studies show that simple chest x-rays are just as effective as CT scans for calculating the Haller index and recommend replacing CT scans with CXR to reduce radiation exposure in all but gross deformities.
A normal Haller index should be about 2.5. Chest wall deformities such as pectus excavatum can cause the sternum to invert, thus increasing the index. In severe asymmetric cases, where the sternum dips below the level of the vertebra, the index can be a negative value.
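An illustrative calculation (a minimal sketch; the measurements below are hypothetical, not taken from any patient data):

```python
def haller_index(transverse, anteroposterior):
    """Ratio of the inner transverse chest diameter to the
    vertebrae-to-sternum (anteroposterior) distance, in the same units."""
    return transverse / anteroposterior

# Hypothetical measurements in millimetres
print(haller_index(280, 112))   # 2.5 -> about normal
print(haller_index(280, 80))    # 3.5 -> consistent with pectus excavatum
```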
See also
Pectus carinatum
Nuss procedure
Sources
Equations | Haller index | Mathematics | 269 |
23,980,804 | https://en.wikipedia.org/wiki/Gravity%20and%20Extreme%20Magnetism%20Small%20Explorer | Gravity and Extreme Magnetism Small Explorer (GEMS or SMEX-13) mission was a NASA space observatory mission. The main scientific goal of GEMS was to be the first mission to systematically measure the polarization of X-ray sources. GEMS would have provided data to help scientists study the shape of spacetime that has been distorted by a spinning black hole's gravity and the structure and effects of the magnetic fields around neutron stars. It was cancelled by NASA in June 2012 for potential cost overruns due to delays in developing the technology and never moved into the development phase.
GEMS was managed by the NASA Goddard Space Flight Center (GSFC). The project was an astrophysics program reporting to NASA's Science Mission Directorate (SMD) in Washington, D.C.
Cancelled missions can be reinstated - for example, NuSTAR was cancelled in 2006 but reinstated a year later and launched in June 2012. However, NuSTAR was not cancelled due to project overruns but rather due to changes in the overall NASA budget, so the circumstances for cancellation were very different. Small missions of the Explorer program offer much flexibility and launch opportunities, and the lessons learned can be applied to the same mission goals on a different mission (compare, for instance, Vanguard 1 to Explorer 1). Several years later, two new X-ray polarimetry mission concepts won NASA awards for development. NASA's IXPE X-ray polarimetry telescope was launched in 2021; its X-ray observational capabilities and mission objectives are very similar to those proposed for GEMS.
Launch
The spacecraft would have been launched in July 2014 on a nine-month mission with a possible 15-month extension for a guest observer phase; but the mission was terminated at the Confirmation Review stage on 10 May 2012 due to expected cost overruns.
Mission
The GEMS X-ray telescope was designed to indirectly measure the regions of distorted space around spinning black holes through a measurement of the polarization of X-rays emitted. It would have also probed the structure and effects of the magnetic fields around magnetars and other star remnants with magnetic fields trillions of times stronger than Earth's.
GEMS could reveal:
How spinning black holes affect space-time and matter as it is drawn in and compressed by strong gravitational fields
What happens in the very strong magnetic fields near pulsars and magnetars
How cosmic rays are accelerated by shocks in supernova remnants
Current missions cannot do this because the required angular resolution is limited and magnetic fields are invisible.
The detector in GEMS would have been a small chamber filled with gas. When an X-ray is absorbed in the gas, an electron carries off most of the energy, and starts out in a direction related to the polarization direction of the X-ray. This electron loses energy by ionizing the gas; the instrument measures the direction of the ionization track, and thereby the polarization of the X-ray. The GEMS detector readout was to employ a time projection chamber to image the track. The GEMS instrument was planned to be about 100 times more sensitive than previous X-ray polarization experiments.
Cancellation
Mission costs were capped at US$105 million (in Fiscal Year 2008 dollars), excluding the launch vehicle, but an independent confirmation review board at NASA claimed it would grow to an estimated US$150 million, leading to cancellation of the mission. The cancellation of GEMS marked the end of a multi-year-long string of cancellations and attempted cancellations of current and future missions: it was at the time the last funded future U.S. space telescope besides the James Webb Space Telescope (JWST). The cancellation of GEMS may have jeopardized the Pegasus XL launcher, although the Pegasus XL has successfully launched other Small Explorer missions.
Project status
GEMS was one of six Small Explorer missions selected in May 2008 for the NASA Small Explorer (SMEX) Program Phase A study. In June 2009, GEMS was chosen to be the second of these missions to go forward into Phase B, starting in October 2010 for a launch in April 2014.
The project completed and successfully passed the Systems Requirements Review (SRR) in December 2010.
GEMS did not pass a confirmation review conducted on 10 May 2012, which effectively cancelled the project. The project team intended to appeal the cancellation.
On 7 June 2012, NASA officially announced the cancellation of the GEMS project. The mission was supposed to launch in July 2014 to study black holes and neutron stars, but external reviews found the project would likely exceed its budget. GEMS was supposed to hold at US$119 million, not counting the launch vehicle. NASA's astrophysics director, Paul Hertz, says the technology needed for the instrument took longer to develop than expected, and that drove up the price.
NASA continued studying X-ray polarimetry missions in 2015 for future Explorer program observatories.
Project and Science Team
The GEMS principal investigator was Dr Jean H. Swank, of NASA's Goddard Space Flight Center, Greenbelt, Maryland.
Project Team
GSFC was responsible for the GEMS instrument, system engineering, spacecraft contract, and the overall program management.
Ames Research Center would have provided co-investigators and performed the Education and Public Outreach (EPO) support.
The satellite would have been built by Orbital Sciences Corporation using its LEOStar-2 satellite bus design, and would also conduct mission operations, under a US$40 million contract.
Alliant Techsystems (ATK) would build a deployable boom to place the X-ray mirrors at the proper distance from the detectors or polarimeters.
University of Iowa would have provided instrument calibration assistance and would have had students prepare a small instrument that could be part of the mission.
Science Team
Co-investigators
NASA Goddard Space Flight Center
NASA Ames Research Center
University of Iowa
Massachusetts Institute of Technology (MIT)
Science collaborators
Other GEMS collaborators are from universities include:
Johns Hopkins University
Cornell University
Rice University
University of Oulu (Finland)
North Carolina State University
Washington University in St. Louis
See also
Explorer program
References
External links
NASA Project Homepage
NASA Science Homepage
GEMS Homepage at Orbital Sciences
GEMS Homepage at University of Iowa
NASA programs
Explorers Program
Space telescopes
X-ray telescopes
Proposed NASA space probes | Gravity and Extreme Magnetism Small Explorer | Astronomy | 1,259 |
15,417,038 | https://en.wikipedia.org/wiki/ZNF34 | Zinc finger protein 34 is a protein that in humans is encoded by the ZNF34 gene.
References
Further reading
External links
Transcription factors | ZNF34 | Chemistry,Biology | 29 |
44,483,753 | https://en.wikipedia.org/wiki/Ikeda%20lift | In mathematics, the Ikeda lift is a lifting of modular forms to Siegel modular forms. The existence of the lifting was conjectured by W. Duke and Ö. Imamoḡlu and also by T. Ibukiyama, and the lifting was constructed by . It generalized the Saito–Kurokawa lift from modular forms of weight 2k to genus 2 Siegel modular forms of weight k + 1.
Statement
Suppose that k and n are positive integers of the same parity. The Ikeda lift takes a Hecke eigenform of weight 2k for SL2(Z) to a Hecke eigenform in the space of Siegel modular forms of weight k+n, degree 2n.
Example
The Ikeda lift takes the Delta function (the weight 12 cusp form for SL2(Z)) to the Schottky form, a weight 8 Siegel cusp form of degree 4. Here k=6 and n=2.
References
Modular forms | Ikeda lift | Mathematics | 197 |
312,229 | https://en.wikipedia.org/wiki/Chiral%20anomaly | In theoretical physics, a chiral anomaly is the anomalous nonconservation of a chiral current. In everyday terms, it is equivalent to a sealed box that contained equal numbers of left and right-handed bolts, but when opened was found to have more left than right, or vice versa.
Such events are expected to be prohibited according to classical conservation laws, but it is known there must be ways they can be broken, because we have evidence of charge–parity non-conservation ("CP violation"). It is possible that other imbalances have been caused by breaking of a chiral law of this kind. Many physicists suspect that the fact that the observable universe contains more matter than antimatter is caused by a chiral anomaly. Research into chiral symmetry breaking laws is a major endeavor in particle physics research at this time.
Informal introduction
The chiral anomaly originally referred to the anomalous decay rate of the neutral pion, as computed in the current algebra of the chiral model. These calculations suggested that the decay of the pion was suppressed, clearly contradicting experimental results. The nature of the anomalous calculations was first explained in 1969 by Stephen L. Adler and John Stewart Bell & Roman Jackiw. This is now termed the Adler–Bell–Jackiw anomaly of quantum electrodynamics. This is a symmetry of classical electrodynamics that is violated by quantum corrections.
The Adler–Bell–Jackiw anomaly arises in the following way. If one considers the classical (non-quantized) theory of electromagnetism coupled to massless fermions (electrically charged Dirac spinors solving the Dirac equation), one expects to have not just one but two conserved currents: the ordinary electrical current (the vector current), built from the Dirac field, as well as an axial current. When moving from the classical theory to the quantum theory, one may compute the quantum corrections to these currents; to first order, these are the one-loop Feynman diagrams. These are famously divergent, and require a regularization to be applied, to obtain the renormalized amplitudes. In order for the renormalization to be meaningful, coherent and consistent, the regularized diagrams must obey the same symmetries as the zero-loop (classical) amplitudes. This is the case for the vector current, but not the axial current: it cannot be regularized in such a way as to preserve the axial symmetry. The axial symmetry of classical electrodynamics is broken by quantum corrections. Formally, the Ward–Takahashi identities of the quantum theory follow from the gauge symmetry of the electromagnetic field; the corresponding identities for the axial current are broken.
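For a single Dirac fermion of charge e coupled to the electromagnetic field, the one-loop result takes the standard textbook form (sign and numerical conventions differ between references):

$$\partial_\mu j^\mu = 0, \qquad \partial_\mu j^\mu_5 = \frac{e^2}{16\pi^2}\,\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma},$$

so the vector current remains conserved while the axial current is not.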
At the time that the Adler–Bell–Jackiw anomaly was being explored in physics, there were related developments in differential geometry that appeared to involve the same kinds of expressions. These were not in any way related to quantum corrections of any sort, but rather were the exploration of the global structure of fiber bundles, and specifically, of the Dirac operators on spin structures having curvature forms resembling that of the electromagnetic tensor, both in four and three dimensions (the Chern–Simons theory). After considerable back and forth, it became clear that the structure of the anomaly could be described with bundles with a non-trivial homotopy group, or, in physics lingo, in terms of instantons.
Instantons are a form of topological soliton; they are a solution to the classical field theory, having the property that they are stable and cannot decay (into plane waves, for example). Put differently: conventional field theory is built on the idea of a vacuum – roughly speaking, a flat empty space. Classically, this is the "trivial" solution; all fields vanish. However, one can also arrange the (classical) fields in such a way that they have a non-trivial global configuration. These non-trivial configurations are also candidates for the vacuum, for empty space; yet they are no longer flat or trivial; they contain a twist, the instanton. The quantum theory is able to interact with these configurations; when it does so, it manifests as the chiral anomaly.
In mathematics, non-trivial configurations are found during the study of Dirac operators in their fully generalized setting, namely, on Riemannian manifolds in arbitrary dimensions. Mathematical tasks include finding and classifying structures and configurations. Famous results include the Atiyah–Singer index theorem for Dirac operators. Roughly speaking, the symmetries of Minkowski spacetime, Lorentz invariance, Laplacians, Dirac operators and the U(1)xSU(2)xSU(3) fiber bundles can be taken to be a special case of a far more general setting in differential geometry; the exploration of the various possibilities accounts for much of the excitement in theories such as string theory; the richness of possibilities accounts for a certain perception of lack of progress.
The Adler–Bell–Jackiw anomaly is seen experimentally, in the sense that it describes the decay of the neutral pion, and specifically, the width of the decay of the neutral pion into two photons. The neutral pion itself was discovered in the 1940s; its decay rate (width) was correctly estimated by J. Steinberger in 1949. The correct form of the anomalous divergence of the axial current is obtained by Schwinger in 1951 in a 2D model of electromagnetism and massless fermions. That the decay of the neutral pion is suppressed in the current algebra analysis of the chiral model is obtained by Sutherland and Veltman in 1967. An analysis and resolution of this anomalous result is provided by Adler and Bell & Jackiw in 1969. A general structure of the anomalies is discussed by Bardeen in 1969.
The quark model of the pion indicates it is a bound state of a quark and an anti-quark. However, the quantum numbers, including parity and angular momentum, taken to be conserved, prohibit the decay of the pion, at least in the zero-loop calculations (quite simply, the amplitudes vanish.) If the quarks are assumed to be massive, not massless, then a chirality-violating decay is allowed; however, it is not of the correct size. (Chirality is not a constant of motion of massive spinors; they will change handedness as they propagate, and so mass is itself a chiral symmetry-breaking term. The contribution of the mass is given by the Sutherland and Veltman result; it is termed "PCAC", the partially conserved axial current.) The Adler–Bell–Jackiw analysis provided in 1969 (as well as the earlier forms by Steinberger and Schwinger), do provide the correct decay width for the neutral pion.
Besides explaining the decay of the pion, it has a second very important role. The one-loop amplitude includes a factor that counts the number of quark colors that can circulate in the loop. In order to get the correct decay width, one must have exactly three colors of quarks, and not four or more. In this way, it plays an important role in constraining the Standard Model. It provides a direct physical prediction of the number of quark colors that exist in nature.
Current day research is focused on similar phenomena in different settings, including non-trivial topological configurations of the electroweak theory, that is, the sphalerons. Other applications include the hypothetical non-conservation of baryon number in GUTs and other theories.
General discussion
In some theories of fermions with chiral symmetry, the quantization may lead to the breaking of this (global) chiral symmetry. In that case, the charge associated with the chiral symmetry is not conserved. The non-conservation happens in a process of tunneling from one vacuum to another. Such a process is called an instanton.
In the case of a symmetry related to the conservation of a fermionic particle number, one may understand the creation of such particles as follows. The definition of a particle is different in the two vacuum states between which the tunneling occurs; therefore a state of no particles in one vacuum corresponds to a state with some particles in the other vacuum. In particular, there is a Dirac sea of fermions and, when such a tunneling happens, it causes the energy levels of the sea fermions to gradually shift upwards for the particles and downwards for the anti-particles, or vice versa. This means particles which once belonged to the Dirac sea become real (positive energy) particles and particle creation happens.
Technically, in the path integral formulation, an anomalous symmetry is a symmetry of the action S, but not of the measure, and therefore not of the generating functional

Z = ∫ Dψ Dψ̄ exp(iS[ψ, ψ̄]/ħ)

of the quantized theory (ħ is Planck's action quantum divided by 2π). The measure consists of a part depending on the fermion field ψ and a part depending on its conjugate field ψ̄. The transformations of both parts under a chiral symmetry do not cancel in general. Note that if ψ is a Dirac fermion, then the chiral symmetry can be written as ψ → exp(iαγ5)ψ, where γ5 is the chiral gamma matrix acting on ψ. From the formula for Z one also sees explicitly that in the classical limit, anomalies don't come into play, since in this limit only the extrema of S remain relevant.
The anomaly is proportional to the instanton number of a gauge field to which the fermions are coupled. (Note that the gauge symmetry is always non-anomalous and is exactly respected, as is required for the theory to be consistent.)
Calculation
The chiral anomaly can be calculated exactly by one-loop Feynman diagrams, e.g. Steinberger's "triangle diagram", contributing to the neutral pion decay into two photons. The amplitude for this process can be calculated directly from the change in the measure of the fermionic fields under the chiral transformation.
Wess and Zumino developed a set of conditions on how the partition function ought to behave under gauge transformations called the Wess–Zumino consistency condition.
Fujikawa derived this anomaly from the correspondence between functional determinants and the partition function, using the Atiyah–Singer index theorem. See Fujikawa's method.
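Schematically, in Fujikawa's approach the fermion measure fails to be invariant under a local chiral rotation; a sketch of the structure (normalization conventions vary between references) is:

$$\psi \to e^{i\alpha(x)\gamma_5}\,\psi, \qquad \mathcal{D}\bar\psi\,\mathcal{D}\psi \;\to\; \mathcal{D}\bar\psi\,\mathcal{D}\psi\,\exp\!\left(-i\int d^4x\,\alpha(x)\,\mathcal{A}(x)\right),$$

where the anomaly density $\mathcal{A}(x)$ is proportional to $\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$, reproducing the anomalous divergence of the axial current quoted above.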
An example: baryon number non-conservation
The Standard Model of electroweak interactions has all the necessary ingredients for successful baryogenesis, although these interactions have never been observed and may be insufficient to explain the total baryon number of the observed universe if the initial baryon number of the universe at the time of the Big Bang is zero. Beyond the violation of charge conjugation and CP violation (charge+parity), baryonic charge violation appears through the Adler–Bell–Jackiw anomaly of the electroweak gauge group.
Baryons are not conserved by the usual electroweak interactions due to the quantum chiral anomaly. The classical electroweak Lagrangian conserves baryonic charge. Quarks always enter in bilinear combinations q̄q, so that a quark can disappear only in collision with an antiquark. In other words, the classical baryonic current j_B^μ is conserved:

∂_μ j_B^μ = 0
However, quantum corrections known as the sphaleron destroy this conservation law: instead of zero on the right hand side of this equation, there is a non-vanishing quantum term,

∂_μ j_B^μ = C ε^{μνρσ} G^a_{μν} G^a_{ρσ}

where C is a numerical constant vanishing for ℏ = 0,
and the gauge field strength G^a_{μν} is given by the expression

G^a_{μν} = ∂_μ A^a_ν − ∂_ν A^a_μ + g f^{abc} A^b_μ A^c_ν
Electroweak sphalerons can only change the baryon and/or lepton number by 3 or multiples of 3 (collision of three baryons into three leptons/antileptons and vice versa).
An important fact is that the anomalous current non-conservation is proportional to the total derivative of a vector operator (this is non-vanishing due to instanton configurations of the gauge field, which are pure gauge at infinity), where the anomalous current is the Hodge dual of the Chern–Simons 3-form.
Geometric form
In the language of differential forms, to any self-dual curvature form F we may assign the abelian 4-form tr(F ∧ F). Chern–Weil theory shows that this 4-form is locally but not globally exact, with potential given by the Chern–Simons 3-form locally:

tr(F ∧ F) = d CS(A), with CS(A) = tr(A ∧ dA + (2/3) A ∧ A ∧ A).
Again, this is true only on a single chart, and is false for the global form unless the instanton number vanishes.
To proceed further, we attach a "point at infinity" onto R⁴ to yield the sphere S⁴, and use the clutching construction to chart principal A-bundles, with one chart on the neighborhood of the point at infinity and a second on the rest of S⁴. The thickening around the equator, where these charts intersect, is trivial, so their intersection is essentially S³. Thus instantons are classified by the third homotopy group π₃(A), which for A = SU(2) is simply the third 3-sphere group π₃(S³) = Z.
The divergence of the baryon number current is (ignoring numerical constants)

∂_μ j_B^μ ∝ tr(F ∧ F),

and the instanton number is

N ∝ ∫_{S⁴} tr(F ∧ F).
See also
Anomaly (physics)
Chiral magnetic effect
Global anomaly
Gravitational anomaly
Strong CP problem
References
Further reading
Published articles
Textbooks
Preprints
Anomalies (physics)
Quantum chromodynamics
Standard Model
Conservation laws | Chiral anomaly | Physics | 2,731 |
224,312 | https://en.wikipedia.org/wiki/Lapse%20rate | The lapse rate is the rate at which an atmospheric variable, normally temperature in Earth's atmosphere, falls with altitude. Lapse rate arises from the word lapse (in its "becoming less" sense, not its "interruption" sense). In dry air, the adiabatic lapse rate (i.e., decrease in temperature of a parcel of air that rises in the atmosphere without exchanging energy with surrounding air) is 9.8 °C/km (5.4 °F per 1,000 ft). The saturated adiabatic lapse rate (SALR), or moist adiabatic lapse rate (MALR), is the decrease in temperature of a parcel of water-saturated air that rises in the atmosphere. It varies with the temperature and pressure of the parcel and is often in the range 3.6 to (2 to ), as obtained from the International Civil Aviation Organization (ICAO). The environmental lapse rate is the decrease in temperature of air with altitude for a specific time and place (see below). It can be highly variable between circumstances.
Lapse rate corresponds to the vertical component of the spatial gradient of temperature. Although this concept is most often applied to the Earth's troposphere, it can be extended to any gravitationally supported parcel of gas.
Definition
A formal definition from the Glossary of Meteorology is:
The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified.
Typically, the lapse rate is the negative of the rate of temperature change with altitude change:
Γ = −dT/dz

where Γ is the lapse rate given in units of temperature divided by units of altitude, T is temperature, and z is altitude.
Environmental lapse rate
The environmental lapse rate (ELR), is the actual rate of decrease of temperature with altitude in the atmosphere at a given time and location.
The ELR is the observed lapse rate, and is to be distinguished from the adiabatic lapse rate which is a theoretical construct. The ELR is forced towards the adiabatic lapse rate whenever air is moving vertically.
As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of 6.5 °C/km from sea level to 11 km. From 11 km up to 20 km, the constant temperature is −56.5 °C, which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture.
Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude.
Cause
The temperature profile of the atmosphere is a result of the interaction between radiative heating from sunlight, cooling to space via thermal radiation, and upward heat transport via natural convection (which carries hot air and latent heat upward). Above the tropopause, convection does not occur and all cooling is radiative.
Within the troposphere, the lapse rate is essentially the consequence of a balance between (a) radiative cooling of the air, which by itself would lead to a high lapse rate; and (b) convection, which is activated when the lapse rate exceeds a critical value; convection stabilizes the environmental lapse rate and prevents it from substantially exceeding the adiabatic lapse rate.
Sunlight hits the surface of the earth (land and sea) and heats them. The warm surface heats the air above it. In addition, nearly a third of absorbed sunlight is absorbed within the atmosphere, heating the atmosphere directly.
Thermal conduction helps transfer heat from the surface to the air; this conduction occurs within the few millimeters of air closest to the surface. However, above that thin interface layer, thermal conduction plays a negligible role in transferring heat within the atmosphere; this is because the thermal conductivity of air is very low.
The air is radiatively cooled by greenhouse gases (water vapor, carbon dioxide, etc.) and clouds emitting longwave thermal radiation to space.
If radiation were the only way to transfer energy within the atmosphere, then the lapse rate near the surface would be roughly 40 °C/km and the greenhouse effect of gases in the atmosphere would keep the ground considerably warmer than its observed temperature.
However, when air gets hot or humid, its density decreases. Thus, air which has been heated by the surface tends to rise and carry internal energy upward, especially if the air has been moistened by evaporation from water surfaces. This is the process of convection. Vertical convective motion stops when a parcel of air at a given altitude has the same density as the other air at the same elevation.
Convection carries hot, moist air upward and cold, dry air downward, with a net effect of transferring heat upward. This makes the air below cooler than it would otherwise be and the air above warmer.
When convection happens, this shifts the environmental lapse rate towards the adiabatic lapse rate, which is a thermal gradient characteristic of vertically moving air packets.
Because convection is available to transfer heat within the atmosphere, the lapse rate in the troposphere is reduced to around 6.5 °C/km and the greenhouse effect is reduced to a point where Earth has its observed surface temperature of around 288 K (15 °C).
Convection and adiabatic expansion
As convection causes parcels of air to rise or fall, there is little heat transfer between those parcels and the surrounding air. Air has low thermal conductivity, and the bodies of air involved are very large; so transfer of heat by conduction is negligibly small. Also, intra-atmospheric radiative heat transfer is relatively slow and so is negligible for moving air. Thus, when air ascends or descends, there is little exchange of heat with the surrounding air. A process in which no heat is exchanged with the environment is referred to as an adiabatic process.
Air expands as it moves upward, and contracts as it moves downward. The expansion of rising air parcels, and the contraction of descending air parcels, are adiabatic processes, to a good approximation.
When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. Since the upward-moving and expanding parcel does work but gains no heat, it loses internal energy so that its temperature decreases. Downward-moving and contracting air has work done on it, so it gains internal energy and its temperature increases.
Adiabatic processes for air have a characteristic temperature-pressure curve. As air circulates vertically, the air takes on that characteristic gradient. When the air contains little water, this lapse rate is known as the dry adiabatic lapse rate: the rate of temperature decrease is 9.8 °C/km (5.4 °F per 1,000 ft, or about 3.0 °C per 1,000 ft). The reverse occurs for a sinking parcel of air.
When the environmental lapse rate is less than the adiabatic lapse rate the atmosphere is stable and convection will not occur.
Only the troposphere (the lowest part of the Earth's atmosphere, extending up to the tropopause) undergoes convection: the stratosphere does not generally convect. However, some exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops associated with severe supercell thunderstorms, may locally and temporarily inject convection through the tropopause and into the stratosphere.
Energy transport in the atmosphere is more complex than the interaction between radiation and dry convection. The water cycle (including evaporation, condensation, precipitation) transports latent heat and affects atmospheric humidity levels, significantly influencing the temperature profile, as described below.
Mathematics of the adiabatic lapse rate
The following calculations derive the temperature as a function of altitude for a packet of air which is ascending or descending without exchanging heat with its environment.
Dry adiabatic lapse rate
Thermodynamics defines an adiabatic process as one in which no heat is exchanged:

δQ = 0

For a unit mass of dry air treated as an ideal gas, the first law of thermodynamics can then be written as

c_p dT − (1/ρ) dP = 0

where c_p is the specific heat at constant pressure, ρ is the density, T is the temperature and P is the pressure. Assuming an atmosphere in hydrostatic equilibrium:

dP = −ρ g dz

where g is the standard gravity. Combining these two equations to eliminate the pressure, one arrives at the result for the dry adiabatic lapse rate (DALR),

Γ_d = −dT/dz = g/c_p ≈ 9.8 °C/km

The DALR (Γ_d) is the temperature gradient experienced in an ascending or descending packet of air that is not saturated with water vapor, i.e., with less than 100% relative humidity.
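As a quick numerical illustration of Γ_d = g/c_p (a minimal sketch; the constant values are rounded, not quoted from this article):

```python
# Dry adiabatic lapse rate from gamma_d = g / c_p
g = 9.80665        # standard gravity, m/s^2
c_p = 1004.0       # specific heat of dry air at constant pressure, J/(kg*K)

gamma_d = g / c_p                          # K/m
print(f"{gamma_d * 1000:.2f} K per km")    # about 9.77 K/km, i.e. roughly 9.8 C/km
```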
Moist adiabatic lapse rate
The presence of water within the atmosphere (usually the troposphere) complicates the process of convection. Water vapor contains latent heat of vaporization. As a parcel of air rises and cools, it eventually becomes saturated; that is, the vapor pressure of water in equilibrium with liquid water has decreased (as temperature has decreased) to the point where it is equal to the actual vapor pressure of water. With further decrease in temperature the water vapor in excess of the equilibrium amount condenses, forming cloud, and releasing heat (latent heat of condensation). Before saturation, the rising air follows the dry adiabatic lapse rate. After saturation, the rising air follows the moist (or wet) adiabatic lapse rate. The release of latent heat is an important source of energy in the development of thunderstorms.
While the dry adiabatic lapse rate is a constant 9.8 °C/km (about 3 °C per 1,000 ft), the moist adiabatic lapse rate varies strongly with temperature. A typical value is around 5 °C/km (about 1.5 °C per 1,000 ft). The formula for the saturated adiabatic lapse rate (SALR) or moist adiabatic lapse rate (MALR) is given by:

Γ_w = g (1 + (H_v r)/(R_sd T)) / (c_pd + (H_v² r ε)/(R_sd T²))

where:

Γ_w = wet adiabatic lapse rate, K/m
g = Earth's gravitational acceleration = 9.8076 m/s²
H_v = heat of vaporization of water
R_sd = specific gas constant of dry air = 287 J/(kg·K)
R_sw = specific gas constant of water vapour = 461.5 J/(kg·K)
ε = the dimensionless ratio of the specific gas constant of dry air to the specific gas constant for water vapour = 0.622
e_s = the water vapour pressure of the saturated air
r = the mixing ratio of the mass of water vapour to the mass of dry air
p = the pressure of the saturated air
T = temperature of the saturated air, K
c_pd = the specific heat of dry air at constant pressure = 1003.5 J/(kg·K)

Here the saturation mixing ratio is r = ε e_s / (p − e_s), and ε = R_sd / R_sw.

The SALR or MALR (Γ_w) is the temperature gradient experienced in an ascending or descending packet of air that is saturated with water vapor, i.e., with 100% relative humidity.
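A rough numerical sketch of the formula above (constants as listed; the latent heat value and the Magnus-type saturation vapour pressure approximation are assumptions for illustration, not part of this article):

```python
import math

g = 9.8076       # m/s^2
H_v = 2.501e6    # J/kg, latent heat of vaporization near 0 C (assumed value)
R_sd = 287.0     # J/(kg*K), specific gas constant of dry air
eps = 0.622      # R_sd / R_sw
c_pd = 1003.5    # J/(kg*K), specific heat of dry air at constant pressure

def e_s(T):
    """Saturation vapour pressure in Pa (Magnus-type approximation, assumed)."""
    t = T - 273.15
    return 610.94 * math.exp(17.625 * t / (t + 243.04))

def moist_lapse_rate(T, p):
    """Saturated adiabatic lapse rate (K/m) at temperature T (K) and pressure p (Pa)."""
    r = eps * e_s(T) / (p - e_s(T))                     # saturation mixing ratio
    numerator = g * (1.0 + H_v * r / (R_sd * T))
    denominator = c_pd + H_v**2 * r * eps / (R_sd * T**2)
    return numerator / denominator

print(moist_lapse_rate(288.15, 101325.0) * 1000)   # roughly 4.7 K per km at 15 C, sea level
```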
Effect on weather
The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds).
As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about 2 °C per 1,000 m. If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters.
The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around 8 °C per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/°C.
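A small worked example of that rule of thumb (a minimal sketch; the surface readings below are made up for illustration):

```python
def lcl_height_m(temperature_c, dew_point_c):
    """Approximate height of the lifting condensation level above ground,
    using the ~125 m per degree C rule of thumb."""
    return 125.0 * (temperature_c - dew_point_c)

# Surface air at 30 C with a dew point of 14 C -> cloud base near 2000 m
print(lcl_height_m(30.0, 14.0))  # 2000.0
```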
If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely.
If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL).
If the environmental lapse rate is larger than the dry adiabatic lapse rate (a superadiabatic lapse rate), the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon, mainly over land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased.
Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals).
The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds" in parts of North America). The phenomenon exists because warm moist air rises through orographic lifting up and over the top of a mountain range or large mountain. The temperature decreases with the dry adiabatic lapse rate, until it hits the dew point, where water vapor in the air begins to condense. Above that altitude, the adiabatic lapse rate decreases to the moist adiabatic lapse rate as the air continues to rise. Condensation is also commonly followed by precipitation on the top and windward sides of the mountain. As the air descends on the leeward side, it is warmed by adiabatic compression at the dry adiabatic lapse rate. Thus, the foehn wind at a certain altitude is warmer than the corresponding altitude on the windward side of the mountain range. In addition, because the air has lost much of its original water vapor content, the descending air creates an arid region on the leeward side of the mountain.
Impact on the greenhouse effect
If the environmental lapse rate were zero, so that the atmosphere had the same temperature at all elevations, then there would be no greenhouse effect. This does not mean the lapse rate and the greenhouse effect are the same thing, just that the lapse rate is a prerequisite for the greenhouse effect.
The presence of greenhouse gases on a planet causes radiative cooling of the air, which leads to the formation of a non-zero lapse rate. So, the presence of greenhouse gases leads to there being a greenhouse effect at a global level. However, this need not be the case at a localized level.
The localized greenhouse effect is stronger in locations where the lapse rate is stronger. In Antarctica, thermal inversions in the atmosphere (so that air at higher altitudes is warmer) sometimes cause the localized greenhouse effect to become negative (signifying enhanced radiative cooling to space instead of inhibited radiative cooling as is the case for a positive greenhouse effect).
Lapse rate in an isolated column of gas
A question has sometimes arisen as to whether a temperature gradient will arise in a column of still air in a gravitational field without external energy flows. This issue was addressed by James Clerk Maxwell in 1902, who established that if any temperature gradient forms, then that temperature gradient must be universal (i.e., the gradient must be the same for all materials) or the second law of thermodynamics would be violated. Maxwell also concluded that the universal result must be one in which the temperature is uniform, i.e., the lapse rate is zero.
Santiago and Visser (2019) confirm the correctness of Maxwell's conclusion (zero lapse rate) provided relativistic effects are neglected. When relativity is taken into account, gravity gives rise to an extremely small lapse rate, the Tolman gradient (derived by R. C. Tolman in 1930). At Earth's surface, the Tolman gradient would be about 10⁻¹⁶ T per metre, where T is the temperature of the gas at the elevation of Earth's surface. Santiago and Visser remark that "gravity is the only force capable of creating temperature gradients in thermal equilibrium states without violating the laws of thermodynamics" and "the existence of Tolman's temperature gradient is not at all controversial (at least not within the general relativity community)."
See also
Adiabatic process
Atmospheric thermodynamics
Fluid dynamics
Foehn wind
Lapse rate climate feedback
Scale height
Notes
References
Further reading
www.air-dispersion.com
External links
Definition, equations and tables of lapse rate from the Planetary Data system.
National Science Digital Library glossary:
Lapse Rate
Environmental lapse rate
Absolute stable air
An introduction to lapse rate calculation from first principles from U. Texas
Atmospheric thermodynamics
Climate change feedbacks
Fluid mechanics
Meteorological quantities
Spatial gradient
Atmospheric temperature
Vertical position | Lapse rate | Physics,Mathematics,Engineering | 3,881 |
11,447,720 | https://en.wikipedia.org/wiki/Excel%20Services | Excel Services is a server technology included in SharePoint 2010 and SharePoint 2007. This shared service enables users to load, calculate, and display Excel 2010 workbooks on Microsoft Office SharePoint Server 2010.
Using Excel Services, users can reuse and share Excel workbooks on Microsoft Office SharePoint Server 2010 portals and dashboards. For example, they can create content in Excel 2010 and share it by using an Office SharePoint Server 2007 portal and dashboard. The entire workbook or just parts of it (such as just a single sheet, chart or table) can be shared.
End-users can view live, interactive workbooks using only a web browser. They can also interact with workbooks to explore data, and analyze Pivot Table reports and charts by using a browser. Excel Services supports workbooks that are connected to external data sources. Users can embed connection strings to external data sources in the workbook or save them centrally in a data connection library file.
Selected cells in worksheets can be made editable by making them named ranges or "parameters". Items which are set as "viewable" will, when the workbook is saved to Excel Services, appear in the Parameters pane in the browser. Users can change the values of these named ranges in the parameters pane and refresh the workbook. They can also use the portal's filter Web Part to filter several Web Parts (Excel Web Access and other types of web parts) together.
References
Asnash et al. (2007). Beginning Excel Services. Wiley.
Prish, S. (2007). Professional Excel Services. Wiley.
External links
Official Excel Services 2007 website
Official Excel Services 2010 website
Excel Services
Spreadsheet software
SharePoint | Excel Services | Mathematics | 346 |
6,394,545 | https://en.wikipedia.org/wiki/Propiomazine | Propiomazine, sold under the brand name Propavan among others, is an antihistamine which is used to treat insomnia and to produce sedation and relieve anxiety before or during surgery or other procedures and in combination with analgesics as well as during labor. Propiomazine is a phenothiazine, but is not used therapeutically as a neuroleptic because it does not block dopamine receptors well.
Medical uses
Propiomazine has been used in the treatment of insomnia.
Side effects
Drowsiness is a usual side effect. Rare, serious side effects include convulsions (seizures); difficult or unusually fast breathing; fast or irregular heartbeat or pulse; fever (high); high or low blood pressure; loss of bladder control; muscle stiffness (severe); unusual increase in sweating; unusually pale skin; and unusual tiredness or weakness.
Pharmacology
Pharmacodynamics
Propiomazine is an antagonist of the dopamine D1, D2, and D4 receptors, the serotonin 5-HT2A and 5-HT2C receptors, the muscarinic acetylcholine receptors M1, M2, M3, M4, and M5 receptors, α1-adrenergic receptor, and histamine H1 receptor.
The antipsychotic effect of propiomazine is thought to be due to antagonism of the dopamine D2 receptor and serotonin 5-HT2A receptor, with greater activity at the 5-HT2A receptor than at the D2 receptor. This may explain the lack of extrapyramidal effects with propiomazine. Propiomazine does not appear to block dopamine within the tuberoinfundibular pathway, which may explain its lower incidence of hyperprolactinemia than with typical antipsychotics or risperidone.
Chemistry
Propiomazine, also known as 10-(2-dimethylaminopropyl)-2-propionylphenothiazine or as propionylpromethazine, is a phenothiazine derivative and is structurally related to promethazine. The compound is provided medically as the hydrochloride and maleate salts.
Society and culture
Brand names
Propiomazine has been sold under the brand names Dorevan, Dorévane, Indorm, Largon, Phenoctyl, Propavan, Propial, and Serentin.
Availability
In 2000, propiomazine continued to be marketed only in Sweden.
References
H1 receptor antagonists
Hypnotics
Ketones
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Phenothiazines
Sedatives | Propiomazine | Chemistry,Biology | 591 |