| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
20,028,453 | https://en.wikipedia.org/wiki/Dhokra | Dhokra (also spelt Dokra) is a form of non-ferrous metal casting that uses the lost-wax casting technique. This sort of metal casting has been used in India for over 4,000 years and is still in use. One of the earliest known lost-wax artifacts is the dancing girl of Mohenjo-daro. The products of Dhokra artisans are in great demand in domestic and foreign markets because of their primitive simplicity, enchanting folk motifs and forceful form. Dhokra horses, elephants, peacocks, owls, religious images, measuring bowls, and lamp caskets are highly appreciated. The lost-wax technique for casting copper-based alloys has also been found in China, Egypt, Malaysia, Nigeria, Central America, and other places.
The process
There are two main processes of lost wax casting: solid casting and hollow casting. While the former is predominant in the south of India the latter is more common in Central and Eastern India. Solid casting does not use a clay core but instead a solid piece of wax to create the mould; hollow casting is the more traditional method and uses the clay core.
The first task in the lost wax hollow casting process consists of developing a clay core which is roughly the shape of the final cast image. Next, the clay core is covered by a layer of wax composed of pure beeswax, resin from the tree Damara orientalis (more properly Agathis dammara), and nut oil. The wax is then shaped and carved in all its finer details of design and decorations. It is then covered with layers of clay, which takes the negative form of the wax on the inside, thus becoming a mould for the metal that will be poured inside it. Drain ducts are left for the wax, which melts away when the clay is cooked. The wax is then replaced by the molten metal, often using brass scrap as basic raw material. The liquid metal poured in hardens between the core and the inner surface of the mould. The metal fills the mould and takes the same shape as the wax. The outer layer of clay is then chipped off and the metal icon is polished and finished as desired.
The name
Dhokra Damar tribes are the main traditional metalsmiths of Odisha and West Bengal. Their technique of lost-wax casting is named after their tribe, hence Dhokra metal casting. The tribe extends from Jharkhand to West Bengal and Odisha; members are distant cousins of the Chhattisgarh Dhokras. A few hundred years ago, the Dhokras of Central and Eastern India traveled south as far as Tamil Nadu and north as far as Rajasthan, and hence are now found all over India. Dhokra, or Dokra, from Dwariapur and Bikna, West Bengal, is extremely popular. Adilabad Dokra from Telangana received a Geographical Indication (GI) tag in 2018.
Images
References
External links
Ancient Metal Casting Art of Dhokra at Dwariapur, West Bengal With Subtitles at YouTube
Lost Wax Process or Dhokra Art of Bastar at YouTube
Dhokra Art | A rare Bronze age craft | Bastar Art & Handicraft | The Tribal Hermit at YouTube
Some other art of Chhattisgarh, baas or bamboo
Artistic techniques
Culture of Bengal
Culture of West Bengal
Bastar district
Indian handicrafts
Indian metalwork
Geographical indications in Chhattisgarh
Metallurgical industry in India
Geographical indications in West Bengal | Dhokra | Chemistry | 704 |
15,071,485 | https://en.wikipedia.org/wiki/SOX8 | Transcription factor SOX-8 is a protein that in humans is encoded by the SOX8 gene.
This gene encodes a member of the SOX (SRY-related HMG-box) family of transcription factors involved in the regulation of embryonic development and in the determination of the cell fate. The encoded protein may act as a transcriptional activator after forming a protein complex with other proteins. This protein may be involved in brain development and function. Haploinsufficiency for this protein may contribute to the mental retardation found in haemoglobin H-related mental retardation (ATR-16 syndrome).
See also
SOX genes
References
Further reading
Transcription factors | SOX8 | Chemistry,Biology | 141 |
39,418,516 | https://en.wikipedia.org/wiki/List%20of%20language%20bindings%20for%20Qt%205 | — Columns detailing the features covered by the binding are missing. —
See also
List of language bindings for Qt 4
List of language bindings for GTK+
List of language bindings for wxWidgets
List of Qt language bindings from the qt-project.org wiki
References
Qt (software)
Lists of software | List of language bindings for Qt 5 | Technology | 71 |
2,902,653 | https://en.wikipedia.org/wiki/106%20Aquarii | 106 Aquarii, abbreviated 106 Aqr, is a single star in the equatorial constellation of Aquarius. 106 Aquarii is the Flamsteed designation, and it also bears the Bayer designation i1 Aquarii. It has an apparent visual magnitude of +5.2, making it bright enough to be viewed from the suburbs according to the Bortle Dark-Sky Scale. An annual parallax shift of 8.61 milliarcseconds yields an estimated distance of around 116 parsecs (roughly 380 light-years) from Earth.
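As a quick check of that figure (an illustrative calculation, not taken from the source), the distance in parsecs is the reciprocal of the parallax in arcseconds:

```latex
d\,[\mathrm{pc}] = \frac{1000}{p\,[\mathrm{mas}]} = \frac{1000}{8.61} \approx 116\ \mathrm{pc} \approx 116 \times 3.26\ \mathrm{ly} \approx 379\ \mathrm{ly}
```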
The spectrum of this star fits a stellar classification of B9 V, indicating that this is a B-type main-sequence star. It is spinning rapidly, with a projected rotational velocity of 328 km/s. The star has 3 times the mass of the Sun and is radiating 152 times the Sun's luminosity from its photosphere at an effective temperature of 11,555 K. X-ray emission has been detected from this star, which is unusual since a B-type star normally does not have any significant X-ray emission. Instead, the emission may come from an undetected lower-mass companion.
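The quoted luminosity and temperature fix the star's radius through the Stefan–Boltzmann law, L = 4πR²σT⁴. In solar units (an illustrative calculation, not a figure from the source, taking T☉ ≈ 5772 K):

```latex
\frac{R}{R_\odot} = \sqrt{\frac{L}{L_\odot}}\,\left(\frac{T_\odot}{T_\mathrm{eff}}\right)^{2} = \sqrt{152}\,\left(\frac{5772}{11555}\right)^{2} \approx 3.1
```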
References
External links
Aladin previewer, image
Aladin sky atlas, image
Aquarius (constellation)
Aquarii, i1
Aquarii, 106
B-type main-sequence stars
117089
222847
8998
BD-19 6500 | 106 Aquarii | Astronomy | 289 |
573,844 | https://en.wikipedia.org/wiki/Aeromancy | Aeromancy (from Greek ἀήρ aḗr, "air", and manteia, "divination") is divination that is conducted by interpreting atmospheric conditions. Alternate terms include "arologie", "aeriology", and "aërology".
Practice
Aeromancy uses cloud formations, wind currents, and cosmological events such as comets, to attempt to divine the past, present, or future. There are sub-types of this practice which are as follows: austromancy (wind divination), ceraunoscopy (observing thunder and lightning), chaomancy (aerial vision), meteormancy (meteors, AKA shooting stars), and nephomancy (cloud divination).
History
Variations on the concept have been used throughout history; the practice is thought to have been used by the ancient Babylonian priests, and is probably alluded to in the Bible.
Damascius, the last of the Neoplatonists, records an account of nephomancy in the 5th century CE, during the reign of Leo I.
Cultural influence
The ancient Etruscans produced guides to brontoscopic and fulgural divination of the future, based upon the omens that were supposedly displayed by thunder or lightning that occurred on particular days of the year, or in particular places.
Divination by clouds was condemned by Moses in Deuteronomy 18:10 and 18:14 in the Hebrew Bible. In contrast, English Christian Bibles typically translate the same Hebrew words as "soothsayers" and "conjurers" or the like.
In Renaissance magic, aeromancy was classified as one of the seven "forbidden arts", along with necromancy, geomancy, hydromancy, pyromancy, chiromancy (palmistry), and spatulamancy (scapulimancy). It was thus condemned by Albertus Magnus in Speculum Astronomiae as a derivative of necromancy. The practice was further debunked by Luis de Valladolid in his 1889 work Historia de vita et doctrina Alberti Magni.
See also
References
Divination
Weather prediction | Aeromancy | Physics | 460 |
78,854,502 | https://en.wikipedia.org/wiki/Voron%202.4 | Voron 2.4 (Russian: ворон, raven) is a CoreXY 3D printer released in May 2020. It has open-source software and hardware, and requires building by the user based on parts sourced individually or in kits from third-party vendors. The printer has been described as a resurgence of the RepRap culture.
An active user community maintains the specification, shares experiences, improvements and modifications. This contributes to continuous improvement, and there are several types of adaptations, extensions and further developments (for example, the StealthBurner interchangeable tool head).
Voron 2.4 has a reputation for being complex to build and requiring considerable effort to operate. In return, its open specification and extensive use of off-the-shelf software makes it highly maintainable, modular, and extensible.
History
The Voron project was started by Russian Maks Zolin (pseudonym russiancatfood, RCF) who wanted a better, faster, and quieter 3D printer. He built a printer and started the company MZ-Bot based on open source ideology.
In 2015, the Voron Geared Extruder was released as the first design to use the Voron name. In 2015, Zolin sold the first 18 printers as kits (Voron 1.0, later renamed Voron Trident, and quite similar to the later Voron Legacy), and marked them with serial numbers. In March 2016, the first Voron printer was publicly released via the company MZ-Bot.
The V24 was an experimental model with a build volume of 24×24×24" (610×610×610 mm). Only two were built, laying the foundation for the later Voron2. By February 2019, over 100 Voron2 printers had been built and serialized, and a year later in 2020, the number had increased to 350 Voron2 printers. The Voron2.0 was never officially launched.
Zolin found that he did not want to run a company and instead decided to release his work freely, inviting others to collaborate with him. The tradition of marking new builds with serial numbers has lived on, and users who build their own Voron printer can be assigned their own serial number as proof of the hard work they have put into sourcing parts, assembling, and configuring the printer.
In May 2020, Voron2.4 was launched, and over 2500 printers were registered with serial numbers before the 2.4R2 version was launched in February 2022.
Design
The Voron 2.4 is available as standard in the 250, 300 and 350 versions, which have build volumes of 250×250×250 mm (~15 L), 300×300×300 mm (~27 L) and 350×350×350 mm (~42 L), respectively. It features a closed build chamber, which provides stable temperatures that are favorable for certain types of 3D printing filament, reduces noise, and allows for controlled exhaust emissions (HEPA filter extensions are available).
The CoreXY design results in less moving mass, allowing for higher accelerations and speeds. The belt is based on the CoreXY pattern, but with the belts stacked on top of each other and without the crossover found in some other CoreXY designs, which allows for favorable motor placement. The build manual emphasizes that the two belts should be of the same make and have exactly the same length to achieve consistent tension.
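For reference, the CoreXY belt arrangement couples both motors to both axes. A common way to write the kinematics (sign conventions vary between firmwares; this is an illustrative formulation, not part of the Voron specification) relates belt movements a and b to carriage motion:

```latex
\Delta a = \Delta x + \Delta y, \qquad \Delta b = \Delta x - \Delta y
```

Driving both belts equally thus produces pure X motion, while driving them in opposition produces pure Y motion, which is why both motors can remain stationary on the frame.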
The frame is constructed from lightweight and rigid 2020 aluminum profiles with 6 mm slots, which must meet certain requirements. Linear-motion guide rails of type MGN7, MGN9 or MGN12 are used along the three axes (alternatively guide rods can be used). The recommended belts are Gates Unitta 6 mm and/or 9 mm. A single stack of F695 flange bearings is often used for belt idlers, as the bearings are much larger than standard GT2 belt idlers.
Voron 2.4 has a flying gantry, which differs from most other "pioneer" CoreXY printers (like RatRig, VzBot and Voron Trident). In other words, the 2.4 model has a stationary print plate and separate belts for moving the print head along the z-axis, while most other CoreXY printers instead have a fixed gantry and a print plate that moves vertically on lead screws. A stationary print plate makes it possible to use a heavier print plate (for example of thick steel instead of thin aluminium) that warps less when heated. It also gives a more space-efficient frame, and makes it easier to calibrate the print to be parallel with the build plate (less need for bed-mesh compensation). A disadvantage is that the z-axis may sag when the printer is not in use, but it straightens itself again when the printer is turned on.
All movement control is done with Klipper software on a Raspberry Pi, which provides great flexibility and extensibility through various parameters that can be programmed in a configuration file. The printer has the option of automatic calibration to compensate for unevenness in the build plate.
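Klipper's configuration lives in a plain-text file of INI-style sections. A minimal, illustrative excerpt for a CoreXY machine might look like the following; the section and key names are standard Klipper options, but the values are placeholders rather than recommended Voron settings.

```ini
# printer.cfg -- illustrative excerpt only, not a complete or recommended config
[printer]
kinematics: corexy        # select Klipper's built-in CoreXY kinematics
max_velocity: 300         # mm/s (example value)
max_accel: 3000           # mm/s^2 (example value)
max_z_velocity: 15
max_z_accel: 350

[bed_mesh]                # automatic compensation for an uneven build plate
speed: 120
mesh_min: 20, 20
mesh_max: 330, 330
probe_count: 5, 5
```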
Construction and operation
The Voron 2.4 can be used for both hobby and professional small-scale production and prototyping. If using high-quality components and taking care to assemble them properly, one can achieve high speed, precision and reliability. Construction of the printer is time-consuming. Examples of things to pay attention to during construction are that the frame is square, using threadlock on screws and proper torque, using precise 3D printed parts, and connecting all the electrical components correctly.
See also
RepRap, project to create affordable 3D printers that can print most of their own components
Prusa i3, Czech open source 3D printer
Bambu Lab, Chinese manufacturer of proprietary CoreXY printers
References
3D printers
Open hardware electronic devices
RepRap project | Voron 2.4 | Engineering,Biology | 1,211 |
1,959,519 | https://en.wikipedia.org/wiki/Attrition%20%28website%29 | Attrition is an information security-related website, created in October, 1998, which used to be updated at least weekly by an all-volunteer staff. Until 21 May 2001, Attrition maintained the largest mirror of defaced (or cracked) websites available on the World Wide Web. The defacement mirror has since ceased updating. The site contains a variety of information, including movie and music reviews, poetry, and security tips covering topics like forensics, data theft, and security advisories.
In 2001, attrition.org was given a cease and desist order by lawyers of MasterCard for posting parodies of its "Priceless" advertising campaign, which they claim violated copyright law. An argument between attrition.org and MasterCard ensued, resulting in their communications and one final "Priceless" parody being posted online.
In 2006, Republican communications aide Todd Shriber attempted to hire Attrition to crack his former university's website. Shriber was then sacked from his job for attempting to solicit a hacker to inflate his GPA.
Attrition formerly hosted several electronic mailing lists relating to information security, such as InfoSec News. It also maintained the Data Loss Database, which recorded data breaches experienced by companies.
References
External links
Attrition
Computer security organizations
Computing websites | Attrition (website) | Technology | 273 |
9,561,628 | https://en.wikipedia.org/wiki/Theology%20of%20creationism%20and%20evolution | The theology of creation and evolution is theology that deals with issues concerning the universe, life, and especially man, in terms of creation or evolution.
Creationism
Creationism is the religious belief that the universe and life originated "from specific acts of divine creation", as opposed to the scientific conclusion that they came about through natural processes such as evolution.
Churches address the theological implications raised by creationism and evolution in different ways.
Evolution
Most contemporary Christian leaders and scholars from many mainstream churches, such as Roman Catholic, Anglican and some Lutheran denominations, reject reading the Bible as though it could shed light on the physics of creation instead of the spiritual meaning of creation. According to the Archbishop of Canterbury, Rowan Williams, "[for] most of the history of Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time."
The Roman Catholic Church now explicitly accepts the theory of evolution (albeit with most conservatives and traditionalists within the Church in dissent), as do Anglican scholars such as John Polkinghorne, arguing that evolution is one of the principles through which God created living beings. Earlier examples of this attitude include Frederick Temple, Asa Gray and Charles Kingsley, who were enthusiastic supporters of Darwin's theories on publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin, who saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories.
Liberal theology assumes that Genesis is a poetic work, and that just as human understanding of God increases gradually over time, so does the understanding of his creation. In fact, both Jews and Christians had considered the creation narrative an allegory (rather than a historical description) long before the development of Darwin's theory of evolution. Two notable examples are Saint Augustine (4th century), who, on theological grounds, argued that everything in the universe was created by God in the same instant (and not in six days, as a plain account of Genesis would require), and the 1st-century Jewish scholar Philo of Alexandria, who wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time.
See also
Anti-intellectualism
Faith and rationality
References
Creationism
Evolution and religion
Intelligent design controversies | Theology of creationism and evolution | Biology | 483 |
654,098 | https://en.wikipedia.org/wiki/Symmetric%20algebra | In mathematics, the symmetric algebra S(V) (also denoted Sym(V)) on a vector space V over a field K is a commutative algebra over K that contains V, and is, in some sense, minimal for this property. Here, "minimal" means that S(V) satisfies the following universal property: for every linear map f from V to a commutative algebra A, there is a unique algebra homomorphism g : S(V) → A such that f = g ∘ i, where i is the inclusion map of V in S(V).
If B is a basis of V, the symmetric algebra S(V) can be identified, through a canonical isomorphism, with the polynomial ring K[B], where the elements of B are considered as indeterminates. Therefore, the symmetric algebra over V can be viewed as a "coordinate free" polynomial ring over V.
The symmetric algebra S(V) can be built as the quotient of the tensor algebra T(V) by the two-sided ideal generated by the elements of the form x ⊗ y − y ⊗ x.
All these definitions and properties extend naturally to the case where V is a module (not necessarily a free one) over a commutative ring.
Construction
From tensor algebra
It is possible to use the tensor algebra T(V) to describe the symmetric algebra S(V). In fact, S(V) can be defined as the quotient algebra of T(V) by the two-sided ideal generated by the commutators v ⊗ w − w ⊗ v.
It is straightforward to verify that the resulting algebra satisfies the universal property stated in the introduction. Because of the universal property of the tensor algebra, a linear map f from V to a commutative algebra A extends to an algebra homomorphism T(V) → A, which factors through S(V) because A is commutative. The extension of f to an algebra homomorphism S(V) → A is unique because V generates S(V) as a K-algebra.
This results also directly from a general result of category theory, which asserts that the composition of two left adjoint functors is also a left adjoint functor. Here, the forgetful functor from commutative algebras to vector spaces or modules (forgetting the multiplication) is the composition of the forgetful functors from commutative algebras to associative algebras (forgetting commutativity), and from associative algebras to vectors or modules (forgetting the multiplication). As the tensor algebra and the quotient by commutators are left adjoint to these forgetful functors, their composition is left adjoint to the forgetful functor from commutative algebra to vectors or modules, and this proves the desired universal property.
From polynomial ring
The symmetric algebra can also be built from polynomial rings.
If V is a K-vector space or a free K-module, with a basis B, let K[B] be the polynomial ring that has the elements of B as indeterminates. The homogeneous polynomials of degree one form a vector space or a free module that can be identified with V. It is straightforward to verify that this makes K[B] a solution to the universal problem stated in the introduction. This implies that K[B] and S(V) are canonically isomorphic, and can therefore be identified. This results also immediately from general considerations of category theory, since free modules and polynomial rings are free objects of their respective categories.
If V is a module that is not free, it can be written V = L/M, where L is a free module and M is a submodule of L. In this case, one has

S(V) = S(L/M) = S(L)/⟨M⟩,

where ⟨M⟩ is the ideal generated by M. (Here, equals signs mean equality up to a canonical isomorphism.) Again this can be proved by showing that one has a solution of the universal property, and this can be done either by a straightforward but boring computation, or by using category theory, and more specifically, the fact that a quotient is the solution of the universal problem for morphisms that map a given subset to zero. (Depending on the case, the kernel is a normal subgroup, a submodule or an ideal, and the usual definition of quotients can be viewed as a proof of the existence of a solution of the universal problem.)
Grading
The symmetric algebra is a graded algebra. That is, it is a direct sum

S(V) = S^0(V) ⊕ S^1(V) ⊕ S^2(V) ⊕ ⋯,

where S^n(V), called the nth symmetric power of V, is the vector subspace or submodule generated by the products of n elements of V. (The second symmetric power S^2(V) is sometimes called the symmetric square of V.)
This can be proved by various means. One follows from the tensor-algebra construction: the tensor algebra is graded, and the symmetric algebra is its quotient by a homogeneous ideal: the ideal generated by all x ⊗ y − y ⊗ x, where x and y are in V, that is, homogeneous of degree one.
In the case of a vector space or a free module, the gradation is the gradation of the polynomials by the total degree. A non-free module can be written as L/M, where L is a free module with basis B; its symmetric algebra is the quotient of the (graded) symmetric algebra of L (a polynomial ring) by the homogeneous ideal generated by the elements of M, which are homogeneous of degree one.
One can also define S^n(V) as the solution of the universal problem for n-linear symmetric functions from V^n into a vector space or a module, and then verify that the direct sum of all S^n(V) satisfies the universal problem for the symmetric algebra.
Relationship with symmetric tensors
As the symmetric algebra of a vector space is a quotient of the tensor algebra, an element of the symmetric algebra is not a tensor, and, in particular, is not a symmetric tensor. However, symmetric tensors are strongly related to the symmetric algebra.
A symmetric tensor of degree n is an element of T^n(V) that is invariant under the action of the symmetric group S_n. More precisely, given σ ∈ S_n, the transformation v_1 ⊗ ⋯ ⊗ v_n ↦ v_{σ(1)} ⊗ ⋯ ⊗ v_{σ(n)} defines a linear endomorphism of T^n(V). A symmetric tensor is a tensor that is invariant under all these endomorphisms. The symmetric tensors of degree n form a vector subspace (or module) Sym^n(V) ⊆ T^n(V). The symmetric tensors are the elements of the direct sum ⊕_{n≥0} Sym^n(V), which is a graded vector space (or a graded module). It is not an algebra, as the tensor product of two symmetric tensors is not symmetric in general.
Let π_n be the restriction to Sym^n(V) of the canonical surjection T^n(V) → S^n(V). If n! is invertible in the ground field (or ring), then π_n is an isomorphism. This is always the case with a ground field of characteristic zero. The inverse isomorphism is the linear map defined (on products of vectors) by the symmetrization

x_1 ⋯ x_n ↦ (1/n!) ∑_{σ ∈ S_n} x_{σ(1)} ⊗ ⋯ ⊗ x_{σ(n)}.
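For instance, in degree two the symmetrization sends a product of two vectors to the averaged tensor (a worked special case of the formula above):

```latex
xy \mapsto \tfrac{1}{2}\,(x \otimes y + y \otimes x),
```

which requires 2 to be invertible; the next paragraph describes what goes wrong when it is not.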
The map π_n is not injective if the characteristic is less than n + 1; for example, π_2(x ⊗ y + y ⊗ x) = 2xy is zero in characteristic two. Over a ring of characteristic zero, π_n can be non-surjective; for example, over the integers, if x and y are two linearly independent elements of V = S^1(V) that are not in 2V, then xy is not in the image of π_2, since its only candidate preimage, (x ⊗ y + y ⊗ x)/2, does not have integer coefficients.
In summary, over a field of characteristic zero, the symmetric tensors and the symmetric algebra form two isomorphic graded vector spaces. They can thus be identified as far as only the vector space structure is concerned, but they cannot be identified as soon as products are involved. Moreover, this isomorphism does not extend to the cases of fields of positive characteristic and rings that do not contain the rational numbers.
Categorical properties
Given a module V over a commutative ring R, the symmetric algebra S(V) can be defined by the following universal property:
For every R-linear map f from V to a commutative R-algebra A, there is a unique R-algebra homomorphism g : S(V) → A such that f = g ∘ i, where i is the inclusion of V in S(V).
As for every universal property, as soon as a solution exists, this defines uniquely the symmetric algebra, up to a canonical isomorphism. It follows that all properties of the symmetric algebra can be deduced from the universal property. This section is devoted to the main properties that belong to category theory.
The symmetric algebra is a functor from the category of R-modules to the category of commutative R-algebras, since the universal property implies that every module homomorphism f : V → W can be uniquely extended to an algebra homomorphism S(f) : S(V) → S(W).
The universal property can be reformulated by saying that the symmetric algebra is a left adjoint to the forgetful functor that sends a commutative algebra to its underlying module.
Symmetric algebra of an affine space
One can analogously construct the symmetric algebra on an affine space. The key difference is that the symmetric algebra of an affine space is not a graded algebra, but a filtered algebra: one can determine the degree of a polynomial on an affine space, but not its homogeneous parts.
For instance, given a linear polynomial on a vector space, one can determine its constant part by evaluating at 0. On an affine space, there is no distinguished point, so one cannot do this (choosing a point turns an affine space into a vector space).
Analogy with exterior algebra
The S^k are functors comparable to the exterior powers; here, though, the dimension grows with k; it is given by

dim S^k(V) = (n + k − 1 choose k) = (n + k − 1)! / (k! (n − 1)!),

where n is the dimension of V. This binomial coefficient is the number of n-variable monomials of degree k.
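As a concrete check of the formula (an illustration, not from the source): for n = 3 variables in degree k = 2,

```latex
\dim S^{2}(V) = \binom{3+2-1}{2} = \binom{4}{2} = 6,
```

matching the six degree-2 monomials x², y², z², xy, xz, yz.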
In fact, the symmetric algebra and the exterior algebra appear as the isotypical components of the trivial and sign representations of the action of the symmetric group S_n acting on the tensor power V^{⊗n} (for example over the complex field).
As a Hopf algebra
The symmetric algebra can be given the structure of a Hopf algebra. See Tensor algebra for details.
As a universal enveloping algebra
The symmetric algebra S(V) is the universal enveloping algebra of an abelian Lie algebra, i.e. one in which the Lie bracket is identically 0.
See also
exterior algebra, the alternating algebra analog
graded-symmetric algebra, a common generalization of a symmetric algebra and an exterior algebra
Weyl algebra, a quantum deformation of the symmetric algebra by a symplectic form
Clifford algebra, a quantum deformation of the exterior algebra by a quadratic form
References
Algebras
Multilinear algebra
Polynomials
Ring theory | Symmetric algebra | Mathematics | 1,936 |
1,686,266 | https://en.wikipedia.org/wiki/Tinnitus%20retraining%20therapy | Tinnitus retraining therapy (TRT) is a form of habituation therapy designed to help people who experience tinnitus—a ringing, buzzing, hissing, or other sound heard when no external sound source is present. Two key components of TRT directly follow from the neurophysiological model of tinnitus: Directive counseling aims to help the sufferer reclassify tinnitus to a category of neutral signals, and sound therapy weakens tinnitus-related neuronal activity.
The goal of TRT is to allow a person to manage their reaction to their tinnitus: habituating themselves to it, and restoring unaffected perception. Neither tinnitus retraining therapy nor any other therapy reduces or eliminates the tinnitus itself.
An alternative to TRT is tinnitus masking: the use of noise, music, or other environmental sounds to obscure or mask the tinnitus. Hearing aids can partially mask the condition. A review of tinnitus retraining therapy trials indicates that it may be more effective than tinnitus masking.
Applicability
Not everyone who experiences tinnitus is significantly bothered by it. However, some experience annoyance, anxiety, panic, loss of sleep, or difficulty concentrating. The distress of tinnitus is strongly associated with various psychological factors; the loudness, duration, and other characteristics of the tinnitus symptoms are secondary.
TRT may offer real though moderate improvement in tinnitus suffering for adults with moderate-to-severe tinnitus, in the absence of hyperacusis, significant hearing loss, or depression. Not everyone is a good candidate for TRT. Those most likely to have a favorable outcome from TRT are those with lower loudness of tinnitus, higher pitch of tinnitus, shorter duration of tinnitus since onset, lower hearing thresholds (i.e. better hearing), a high Tinnitus Handicap Inventory (THI) score, and a positive attitude toward therapy.
Other secondary hearing symptoms
Although no studies have established its efficacy for these conditions, TRT has been used to treat hyperacusis, misophonia, and phonophobia.
Cause
Physiological basis
Tinnitus may be the result of abnormal neural activity caused by discordant damage (dysfunction) of outer and inner hair cells of the cochlea.
Psychological model
The psychological basis for TRT is the hypothesis that the brain can change how it processes auditory stimuli. TRT is imputed to work by interfering with the neural activity causing the tinnitus at its source, in order to prevent it from spreading to other parts of the nervous system, such as the limbic and autonomic nervous systems.
Methodologies
The full TRT program lasts 12 to 24 months and consists of an initial classification of clients for different emphasis during therapy, then a combination of directed counseling and sound therapy.
Classification
Clients are classified into five categories, numbered 0 to 4, based on whether the patient has tinnitus with no hearing loss, tinnitus with hearing loss, tinnitus with hearing loss and hyperacusis, or tinnitus with hearing loss and hyperacusis that is exacerbated for extended periods.
Counseling
The first component of TRT, directive counseling, may change the way tinnitus is perceived. The patient is taught basic knowledge about the auditory system and its function, and how tinnitus and the annoyance associated with tinnitus is generated. The repetition of these points in follow-up visits helps the patient come to perceive the tinnitus signal as a non-danger.
Sound therapy
The second component of TRT uses a sound generator to partially mask the tinnitus. This is done with a device similar to a hearing aid that emits a low level broadband noise so that the ear can hear both the noise and tinnitus. This is intended to acclimate the brain to reducing its emphasis on the tinnitus versus the external sound.
One study found that a full tinnitus masker was just as effective as partial masking, nullifying a key component of habituation therapy. Other review studies have found no value to the sound therapy component of TRT.
Efficacy
Confounding factors make it difficult to measure the efficacy of TRT: tinnitus reporting is entirely subjective, varies over time, and repeated evaluations are not consistent. Researchers note there is a large placebo component to tinnitus management. In many commercial TRT practices, there is a large proportion of dropouts; reported "success" ratios may not take these subjects into account.
There are few available studies, but most show that tinnitus naturally declines over a period of years in a large proportion of subjects surveyed, without any treatment. The annoyance of tinnitus also tends to decline over time. In some people, tinnitus spontaneously disappears.
A Cochrane review found only one sufficiently rigorous study of TRT and noted that while the study suggested benefit in the treatment of tinnitus, the study quality was not good enough to draw firm conclusions. A separate Cochrane review of sound therapy (they called it "masking"), an integral part of TRT, found no convincing evidence of the efficacy of sound therapy in the treatment of tinnitus.
A summary in The Lancet concluded that in the only good study, TRT was more effective than masking; in another study in which TRT was used as a control, TRT showed a small benefit. A study that compared cognitive behavior therapy (CBT) in combination with the counselling part of TRT versus standard care (ENT, audiologist, maskers, hearing aid) found that the specialized care had a positive effect on quality of life as well as on specific tinnitus metrics.
Clinical practice
Tinnitus activities treatment (TAT) is a clinical adaptation of TRT that focuses on four areas: thoughts and emotions, hearing and communication, sleep, and concentration.
Progressive tinnitus management (PTM) is a five-step structured clinical protocol for management of tinnitus that may include tinnitus retraining therapy. The five steps are:
triage – determining appropriate referral, i.e. audiology, ENT, emergency medical intervention, or mental health evaluation;
audiologic evaluation of hearing loss, tinnitus, hyperacusis, and other symptoms;
group education about causes and management of tinnitus;
interdisciplinary evaluation of tinnitus;
individual management of tinnitus.
The U.S. Department of Veterans Affairs (VA) now employs PTM to help patients self-manage their tinnitus.
Research
Sound therapy for tinnitus may be more effective if the sound is patterned (i.e. varying in frequency or amplitude) rather than static.
For people with severe or disabling tinnitus, techniques that are minimally surgical, involving magnetic or electrical stimulation of areas of the brain that are involved in auditory processing, may suppress tinnitus.
Notched music therapy, in which ordinary music is altered by a one octave notch filter centered at the tinnitus frequency, may reduce tinnitus.
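The one-octave notch described above is straightforward to express in code. The following is a minimal sketch (not code from the cited research) using SciPy: it builds a Butterworth band-stop filter spanning half an octave on either side of an assumed tinnitus frequency f0, so the stopband covers one octave in total; the 4 kHz pitch in the example is hypothetical.

```python
import numpy as np
from scipy import signal

def notch_music(audio, fs, f0):
    """Suppress a one-octave band geometrically centered on f0."""
    low, high = f0 / np.sqrt(2), f0 * np.sqrt(2)  # high/low == 2, i.e. one octave
    # 4th-order Butterworth band-stop filter, as second-order sections for stability
    sos = signal.butter(4, [low, high], btype="bandstop", fs=fs, output="sos")
    return signal.sosfilt(sos, audio)

# Example: one second of white noise standing in for music, tinnitus pitch 4 kHz.
fs = 44100
music = np.random.randn(fs)
filtered = notch_music(music, fs, 4000.0)
```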
Alternatives
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT), as a generalized type of psychological and behavioral counselling, has also been used by itself in the management of tinnitus.
Hearing aids
If tinnitus is associated with hearing loss, a tuned hearing aid that amplifies sound in the frequency range of the hearing loss (usually the high frequencies) may effectively mask tinnitus by raising the level of environmental sound, in addition to the benefit of restoring hearing.
Masking
White noise generators or environmental music may provide a background noise level that is of sufficient amplitude that it wholly or partially "masks" the tinnitus. Composite hearing aids that combine amplification and white noise generation are also available.
Other
Numerous non-TRT methods have been suggested for the treatment or management of tinnitus.
pharmacological – No drug has been approved by the U.S. Food and Drug Administration (FDA) for treating tinnitus. However, various pharmacological treatments, including antidepressants, anxiolytics, vasodilators and vasoactive substances, and intravenous lidocaine, have been prescribed for tinnitus.
lifestyle and support – Loud noise, alcohol, caffeine, nicotine, quiet environments, and psychological conditions like stress and depression may exacerbate tinnitus. Reducing or controlling these may help manage the condition.
alternative medicine – Vitamin, antioxidant, and herbal preparations (notably Ginkgo biloba extract, also called EGb761) are advertised as treatments or cures for tinnitus. None are approved by the FDA, and controlled clinical trials on their efficacy are lacking.
See also
Operant conditioning
Safe listening
Hearing loss
References
Literature
Ear procedures
Audiology
Mind–body interventions
Behaviorism
Music therapy
Counseling
Behavior therapy
Cognitive therapy
Alternative medicine | Tinnitus retraining therapy | Biology | 1,829 |
29,677,702 | https://en.wikipedia.org/wiki/Phenolic%20content%20in%20tea | The phenolic content in tea refers to the phenols and polyphenols, natural plant compounds which are found in tea. These chemical compounds affect the flavor and mouthfeel of tea. Polyphenols in tea include catechins, theaflavins, tannins, and flavonoids.
Polyphenols found in green tea include, but are not limited to, epigallocatechin gallate (EGCG), epigallocatechin, epicatechin gallate, and epicatechin; flavanols such as kaempferol, quercetin, and myricitin are also found in green tea.
Catechins
Catechins include epigallocatechin-3-gallate (EGCG), epicatechin (EC), epicatechin-3-gallate (ECg), epigallocatechin (EGC), catechin, and gallocatechin (GC). The content of EGCG is higher in green tea.
Catechins constitute about 25% of the dry mass of a fresh tea leaf, although total catechin content varies widely depending on species, clonal variation, growing location, season, light variation, and altitude. They are present in nearly all teas made from Camellia sinensis, including white tea, green tea, black tea and oolong tea.
A 2011 analysis by the European Food Safety Authority found that a cause and effect relationship could not be shown for a link between tea catechins and the maintenance of normal blood LDL-cholesterol concentration.
4-Hydroxybenzoic acid, 3,4-dihydroxybenzoic acid (protocatechuic acid), 3-methoxy-4-hydroxy-hippuric acid and 3-methoxy-4-hydroxybenzoic acid (vanillic acid) are the main catechin metabolites found in humans after consumption of green tea infusions.
Theaflavins
With increasing oxidation of the tea leaves, catechin monomers are converted into theaflavins (dimers) and thearubigins (oligomers). Theaflavins contribute to the bitterness and astringency of black tea. The mean amount of theaflavins in a cup of black tea (200 ml) is 12.18 mg.
Three main types of theaflavins are found in black tea, namely theaflavin (TF-1), theaflavin-3-gallate (TF-2), and theaflavin-3,3-digallate (TF-3).
Tannins
Tannins are astringent, bitter polyphenolic compounds that bind to and precipitate organic compounds.
Gallic acid conjugates of the catechins, such as EGCG (epigallocatechin gallate), are tannins with astringent qualities.
Flavonoids
Phenols called flavonoids are under preliminary research, as of 2020, but there is no evidence that flavonoids have antioxidant activity in vivo, or affect physical health or diseases. Tea has one of the highest contents of flavonoids among common food and beverage products. Catechins are the largest type of flavonoids in growing tea leaves. According to a report released by USDA, in a 200-ml cup of tea, the mean total content of flavonoids is 266.68 mg for green tea, and 233.12 mg for black tea.
Research
A 2020 review found low- to moderate-quality evidence that daily tea consumption might lower the risk for cardiovascular disease and death.
See also
Health effects of tea
Phenolic content in wine
References
Tea
Natural phenols | Phenolic content in tea | Chemistry | 806 |
3,675,632 | https://en.wikipedia.org/wiki/Wine%20glass | A wine glass is a type of glass that is used for drinking or tasting wine. Most wine glasses are stemware (goblets), composed of three parts: the bowl, stem, and foot. There are a wide variety of slightly different shapes and sizes, some considered especially suitable for particular types of wine.
Some authors recommend holding the glass by the stem, to avoid warming the wine and smudging the bowl; alternatively, for red wine it may be desirable to add some warmth.
Before "glass" became adopted as a word for a glass drinking vessel, a usage first recorded in English c. 1382, wine was drunk from a wine cup, of which there were a huge variety of shapes over history, in many different materials. Wine cups in precious metals remained in use until the Early Modern period, but as glass got better and cheaper, were generally replaced everywhere except in churches, where chalices are still normally in metal. In wealthy homes in England, glasses replaced silver wine cups of very similar size and shape in the 1600s.
Shapes
The effect of glass shape on the taste of wine has not been demonstrated decisively by any scientific study and remains a matter of debate. One study suggests that the shape of the glass is important, as it concentrates the flavour and aroma (or bouquet) to emphasize the varietal's characteristics. One common belief is that the shape of the glass directs the wine itself into the best area of the mouth for the varietal, despite flavour being perceived by olfaction in the upper nasal cavity, not in the mouth. The importance of wine glass shape could also be based on false ideas about the arrangement of different taste buds on the tongue, such as the discredited tongue map.
Most wine glasses are stemware, composed of three parts: the bowl, stem, and foot. In some designs, the opening of the glass is narrower than the widest part of the bowl to concentrate the aroma. Others are more open, like inverted cones. In addition, "stemless" wine glasses (tumblers) are available in a variety of sizes and shapes. The latter are typically used more casually than their traditional counterparts.
According to the wine critic for The New York Times, the bowl of the glass should be large enough that a generous pour fills only a quarter of the glass; it should be transparent, widest at the base and tapering inward to the rim to channel aromas upward.
A 2015 study by Kohji Mitsubayashi of Tokyo Medical and Dental University and colleagues found that different glass shapes and temperatures can bring out completely different bouquets and finishes from the same wine. The scientists developed a camera system that images ethanol vapor escaping from a wine glass.
Some common types of wine glasses are described below.
Red wine glasses
Glasses for red wine are characterized by their rounder, wider bowl, which increases the rate of oxidation. As oxygen from the air chemically interacts with the wine, flavor and aroma are believed to be subtly altered. This process of oxidation is generally considered more compatible with red wines, whose complex flavours are said to be smoothed out after being exposed to air. According to a wine critic for The Observer, the wider opening can help enhance wine flavors and evaporate ethanol. Red wine glasses can have particular styles of their own, such as
Bordeaux glass: tall with a broad bowl, and is designed for full bodied red wines like Cabernet Sauvignon and Syrah as it directs wine to the back of the mouth.
Burgundy glass: broader than the Bordeaux glass, it has a bigger bowl to accumulate aromas of more delicate red wines such as Pinot noir. This style of glass directs wine to the tip of the tongue.
White wine glasses
White wine glasses vary enormously in size and shape, from the delicately tapered Champagne flute, to the wide and shallow glasses used to drink Chardonnay. Different shaped glasses are used to accentuate the unique characteristics of different styles of wine. Wide-mouthed glasses function similarly to red wine glasses discussed above, promoting rapid oxidation which alters the flavor of the wine. White wines which are best served slightly oxidized are generally full-flavored wines, such as oaked chardonnay. For lighter, fresher styles of white wine, oxidation is less desirable as it is seen to mask the delicate nuances of the wine. To preserve a crisp, clean flavored wine, many white wine glasses will have a smaller mouth, which reduces surface area and in turn, the rate of oxidization. In the case of sparkling wine, such as Champagne or Asti, an even smaller mouth is used to keep the wine sparkling longer in the glass.
Champagne flutes
Champagne flutes are characterised by a long stem with a tall, narrow bowl on top. The shape is designed to keep sparkling wine desirable during its consumption. Just as with wine glasses, the flute is designed to be held by the stem to help prevent the heat from the hand from warming the liquid inside. The bowl itself is designed in a manner to help retain the signature carbonation in the beverage. This is achieved by reducing the surface area at the opening of the bowl. Additionally, the flute design adds to the aesthetic appeal of champagne, allowing the bubbles to travel further due to the narrow design, giving a more pleasant visual appeal.
Sherry glass
A sherry glass or schooner is drinkware generally used for serving aromatic alcoholic beverages, such as sherry, port, aperitifs, and liqueurs, and layered shooters. The copita, with its aroma-enhancing narrow taper, is a type of sherry glass.
Materials
High quality wine glasses once were made of lead glass, which has a higher index of refraction and is heavier than ordinary glass, but health concerns regarding the ingestion of lead resulted in their being replaced by lead-free glass. Wine glasses, with the exception of the hock glass, are generally not coloured or frosted as doing so would diminish appreciation of the wine's colour. There used to be an ISO standard (ISO/PAS IWA 8:2009) for glass clarity and freedom from lead and other heavy metals, but it was withdrawn.
Some producers of high-end wine glasses such as Schott Zwiesel have pioneered methods of infusing titanium into the glass to increase its durability and reduce the likelihood of the glass breaking.
Decoration
Cut glass, engraved glass and enamelled glass techniques have been widely used for wine glasses. In the 18th century, glass makers would draw spiral patterns in the stem as they made the glass. If they used air bubbles it was called an airtwist; if they used threads, either white or coloured, it would be called opaque twist.
Modern functional designs focus on aeration, such as glassmaker Kurt Josef Zalto's Josephinenhütte brand.
ISO wine tasting glass
The International Organization for Standardization has a specification (ISO 3591:1977) for a wine-tasting glass. It consists of a cup (an "elongated egg") supported on a stem resting on a base.
The glass of reference is the INAO wine glass, a tool defined by specifications of the French Association for Standardization (AFNOR), which was adopted by INAO as the official glass in 1970, received its AFNOR standard in June 1971 and its ISO 3591 standard in 1972. The INAO did not file the design with the National Institute of Industrial Property; it has therefore been copied en masse and has gradually replaced other tasting glasses around the world.
The glass must be lead crystal (9% lead). Its dimensions give it a total volume between 210 millilitres (mL) and 225 mL; they are defined as follows:
Diameter of the rim: 46 mm
Calyx height: 100 mm
Height of the stem and foot: 55 mm
Shoulder diameter: 65 mm
Stem diameter: 9 mm
Diameter of the base: 65 mm
The opening is narrower than the convex part so as to concentrate the bouquet. The capacity is approximately 215 ml, but it is intended to take a 50 ml pour. Some glasses of a similar shape, but with different capacities, may be loosely referred to as ISO glasses, but they form no part of the ISO specification.
Measures in licensed premises
In the UK, many publicans have moved from serving wine in the standard size of 125mL towards the larger size of 250mL. A code of practice, introduced in 2010 as an extension to the Licensing Act 2003, contains conditions for the sale of alcohol, including a requirement for customers to be informed that smaller measures are available.
In the United States, most laws governing alcohol exist at the state level. Federal law does not provide any guidance on a standard pour size, but 150 ml is seen as typical for restaurants (one fifth of a standard 750 ml wine bottle), with pour sizes for tastings typically being half as large.
Capacity measure
As a supplemental unit of apothecary measure and as a culinary measurement unit, the wine glass (also known as wineglass, wineglassful (pl. wineglassesful), or cyathus vinarius in pharmaceutical Latin) is defined as 2 US customary fluid ounces (1/8 of a US customary pint; about 2·08 British imperial fluid ounces or 59·15mL) in the US and 2 British imperial fluid ounces (1/10 of a British imperial pint; about 1·92 US customary fluid ounces or 56·83mL) in the UK. An older version (before c. 1800) was 1½ fluid ounces. These units bear little relation to the capacity of most contemporary wineglasses (based on 1/6 of a bottle, or 125mL; about 4·40 British imperial fluid ounces or 4·23 US customary fluid ounces) or to the ancient Roman cyathus (about 45mL, 1·58 British imperial fluid ounces, or 1·52 US customary fluid ounces).
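As an arithmetic check of the millilitre figures above (illustrative only, using 1 US fl oz ≈ 29.5735 mL and 1 imperial fl oz ≈ 28.4131 mL):

```latex
2 \times 29.5735\ \text{mL} \approx 59.15\ \text{mL}, \qquad 2 \times 28.4131\ \text{mL} \approx 56.83\ \text{mL}
```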
In the UK, the wine glass, the tumbler (10 British imperial fluid ounces), the breakfast cup (8 British imperial fluid ounces), the cup (6 British imperial fluid ounces), the teacup (5 British imperial fluid ounces), and the coffee cup (2 British imperial fluid ounces) are the traditional British equivalents of the US customary cup and the metric cup, used in situations where a US cook would use the US customary cup and a cook using metric units the metric cup. The breakfast cup is the most similar in size to the US customary cup and the metric cup. Which of these six units is used depends on the quantity or volume of the ingredient: there is division of labour between these six units, like the tablespoon and the teaspoon. British cookery books and recipes, especially those from the days before the UK’s partial metrication, commonly use two or more of the aforesaid units simultaneously: for example, the same recipe may call for a ‘tumblerful’ of one ingredient and a ‘wineglassful’ of another one; or a ‘breakfastcupful’ or ‘cupful’ of one ingredient, a ‘teacupful’ of a second one, and a ‘coffeecupful’ of a third one. Unlike the US customary cup and the metric cup, a tumbler, a breakfast cup, a cup, a teacup, a coffee cup, and a wine glass are not measuring cups: they are simply everyday drinking vessels commonly found in British households and typically having the respective aforementioned capacities; due to long‑term and widespread use, they have been transformed into measurement units for cooking. There is not a British imperial unit–based culinary measuring cup.
See also
Decanter
Wine accessory
Glass harp
Tumbler (glass)#Culinary measurement unit
Breakfast cup
Cup (unit)#British cup
Teacup (unit)
Coffee cup (unit)
Cooking weights and measures
References
External links
Scientific study on the shape of a wine glass and perception
Drinking glasses
Wine accessories
Measurement
Units of volume
Imperial units
Cooking weights and measures | Wine glass | Physics,Mathematics | 2,430 |
20,958,178 | https://en.wikipedia.org/wiki/Noisy%20market%20hypothesis | In finance, the noisy market hypothesis contrasts with the efficient-market hypothesis in that it claims that the prices of securities are not always the best estimate of the true underlying value of the firm. It argues that prices can be influenced by speculators and momentum traders, as well as by insiders and institutions that often buy and sell stocks for reasons unrelated to fundamental value, such as diversification, liquidity and taxes. These temporary shocks, referred to as "noise", can obscure the true value of securities and may result in mispricing of these securities, potentially for many years.
References
See also
Adaptive market hypothesis
Agent-based computational economics
Information cascade
Noise trader
Financial markets
Efficient-market hypothesis
Financial economics
Behavioral finance | Noisy market hypothesis | Biology | 143 |
1,345,219 | https://en.wikipedia.org/wiki/TK%20Solver | TK Solver (originally TK!Solver) is a mathematical modeling and problem solving software system based on a declarative, rule-based language, commercialized by Universal Technical Systems, Inc.
History
Invented by Milos Konopasek in the late 1970s and initially developed in 1982 by Software Arts, the company behind VisiCalc, TK Solver was acquired by Universal Technical Systems in 1984 after Software Arts fell into financial difficulty and was sold to Lotus Software. Konopasek's goal in inventing the TK Solver concept was to create a problem solving environment in which a given mathematical model built to solve a specific problem could be used to solve related problems (with a redistribution of input and output variables) with minimal or no additional programming required: once a user enters an equation, TK Solver can evaluate that equation as is—without isolating unknown variables on one side of the equals sign.
Software Arts also released a series of "Solverpacks" - "ready-made versions of some of the formulas most commonly used in specific areas of application."
The New York Times described TK Solver as doing "for science and engineering what word processing did for corporate communictions [sic] and calc packages did for finance."
Universal Technical Systems
Lotus, which had acquired Software Arts, including TK Solver, in 1984, sold its ownership of the software to Universal Technical Systems less than two years later. Release 5 was still considered "one of the longest-standing mathematical equation solvers on the market today" in 2012.
Core technology
TK Solver's core technologies are a declarative programming language, algebraic equation solver, an iterative equation solver, and a structured, object-based interface, using a command structure. The interface comprises nine classes of objects that can be shared between and merged into other TK files:
Rules: equations, formulas, function calls which may include logical conditions
Variables: a listing of the variables that are used in the rules, along with values (numeric or non-numeric) that have been entered by the user or calculated by the software
Units: all units conversion factors, in a single location, to allow automatic update of values when units are changed
Lists: ranges of numeric and non-numeric values which can be associated with a variable or processed directly by procedure functions
Tables: collections of lists displayed together
Plots: line charts, scatterplots, bar charts, and pie charts
Functions: rule-based, table look-up, and procedural programming components
Formats: settings for displaying numeric and string values
Comments: for explanation and documentation
Each class of object is listed and stored on its own worksheet—the Rule Sheet, Variable Sheet, Unit Sheet, etc. Within each worksheet, each object has properties summarized on subsheets or viewed in a property window. The interface uses toolbars and a hierarchal navigation bar that resembles the directory tree seen on the left side of the Windows Explorer.
The declarative programming structure is embodied in the rules, functions and variables that form the core of a mathematical model.
Rules, variables and units
All rules are entered in the Rule Sheet or in user-defined functions. Unlike a spreadsheet or imperative programming environment, the rules can be in any order or sequence and are not expressed as assignment statements. "A + B = C / D" is a valid rule in TK Solver and can be solved for any of its four variables. Rules can be added and removed as needed in the Rule Sheet without regard for their order and incorporated into other models. A TK Solver model can include up to 32,000 rules, and the library that ships with the current version includes utilities for higher mathematics, statistics, engineering and science, finances, and programming.
Variables in a rule are automatically posted to the Variable Sheet when the rule is entered and the rule is displayed in mathematical format in the MathLook View window at the bottom of the screen. Any variable can operate as an input or an output, and the model will be solved for the output variables depending on the choice of inputs.
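TK Solver's rule language is proprietary, so the following is only an analogy in Python with SymPy rather than actual TK syntax; it illustrates the same declarative idea, where one rule such as "A + B = C / D" can be solved for whichever variable is currently the unknown.

```python
from sympy import symbols, Eq, solve

A, B, C, D = symbols("A B C D")
rule = Eq(A + B, C / D)  # a single rule with no fixed input/output roles

# Treat A, B, D as inputs and solve the rule for C...
print(solve(rule.subs({A: 2, B: 3, D: 4}), C))  # [20]
# ...then reuse the very same rule to solve for B instead.
print(solve(rule.subs({A: 2, C: 20, D: 4}), B))  # [3]
```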
A database of unit conversion factors also ships with TK Solver, and users can add, delete, or import unit conversions in a way similar to that for rules. Each variable is associated with a "calculation" unit, but variables can also be assigned "display" units and TK automatically converts the values. For example, rules may be based upon meters and kilograms, but units of inches and pounds can be used for input and output.
Problem-solving
TK Solver has three ways of solving systems of equations. The "direct solver" solves a system algebraically by the principle of consecutive substitution. When multiple rules contain multiple unknowns, the program can trigger an iterative solver which uses the Newton–Raphson algorithm to successively approximate based on initial guesses for one or more of the output variables. Procedure functions can also be used to solve systems of equations. Libraries of such procedures are included with the program and can be merged into files as needed. A list solver feature allows variables to be associated with ranges of data or probability distributions, solving for multiple values, which is useful for generating tables and plots and for running Monte Carlo simulations. The premium version now also includes a "Solution Optimizer" for direct setting of bounds and constraints in solving models for minimum, maximum, or specific conditions.
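The Newton–Raphson method mentioned above refines an initial guess by repeatedly linearizing the equation at the current estimate. A minimal single-variable sketch in Python (illustrative only; TK Solver's iterative solver works on whole systems and redistributes guesses across variables):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Return a root of f, starting from the guess x0, given the derivative df."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # solve the linearization f(x) + f'(x)*(x_new - x) = 0
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Example: solve x**2 = 2, i.e. find a root of f(x) = x**2 - 2, from x0 = 1.
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ~1.4142135623
```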
TK Solver includes roughly 150 built-in functions: mathematical, trigonometric, Boolean, numerical calculus, matrix operations, database access, and programming functions, including string handling and calls to externally compiled routines. Users may also define three types of functions: declarative rule functions; list functions, for table lookups and other operations involving pairs of lists; and procedure functions, for loops and other procedural operations which may also process or result in arrays (lists of lists). The complete NIST database of thermodynamic and transport properties is included, with built-in functions for accessing it. TK Solver is also the platform for engineering applications marketed by UTS, including Advanced Spring Design, Integrated Gear Software, Interactive Roark’s Formulas, Heat Transfer on TK, and Dynamics and Vibration Analysis.
Data display and sharing
Tables, plots, comments, and the MathLook notation display tool can be used to enrich TK Solver models. Models can be linked to other components with Microsoft Visual Basic and .NET tools, or they can be web-enabled using the RuleMaster product or linked with Excel spreadsheets using the Excel Toolkit product. There is also a DesignLink option linking TK Solver models with CAD drawings and solid models. In the premium version, standalone models can be shared with others who do not have a TK license, opening them in Excel or the free TK Player.
Reception
BYTE's 1982 preview of TK Solver said that it was "an interesting program that does for equation-solving what the pocket calculator does for arithmetic—replaces drudgery and the possibility of error with speed and accuracy". The magazine's 1984 review stated that "TK!Solver is superb for solving almost any kind of equation", but that it did not handle matrices, and that a programming language like Fortran or APL was superior for simultaneous solution of linear equations. The magazine concluded that despite limitations, it was a "powerful tool, useful for scientists and engineers. No similar product exists". By version 5.0, TK Solver had added matrix-handling functionality.
Competitive products appeared by mid-1988: Mathsoft's Mathcad and Borland's Eureka: The Solver.
Dan Bricklin, known for VisiCalc and for his company Software Arts' initial development of TK Solver, was quoted as saying that the market "wasn't as big as we thought it would be because not that many people think in equations."
See also
Optimization (mathematics)
Multidisciplinary design optimization
References
1982 software
Numerical software | TK Solver | Mathematics | 1,646 |
31,823,380 | https://en.wikipedia.org/wiki/Vietnam%20Atomic%20Energy%20Commission |
Vietnam Atomic Energy Institute is a special-ranked scientific organization under the Ministry of Science and Technology whose function is to assist the Minister in performing duties including basic research, the application and deployment of research results in the field of atomic energy, technical support for state management of atomic energy, radiation and nuclear safety, and education and training in the field.
Functions & Duties
Function
Vietnam Atomic Energy Institute is a special-ranked scientific organization under the Ministry of Science and Technology whose function is to assist the Minister in performing duties including basic research, the application and deployment of research results in the field of atomic energy, technical support for state management of atomic energy, radiation and nuclear safety, and education and training in the field.
Duty
1. To provide professional input into the formulation of state directions, policies, strategies, planning and projects for atomic energy development in Vietnam, and to participate in drafting legal and regulatory documents related to atomic energy;
2. To conduct fundamental research in the field of nuclear science and technology;
3. To implement national science and technology projects in the area of nuclear energy; to appraise comprehensively projects and programs in the area of nuclear energy as required;
4. To serve as an independent organization providing national-level technical support in quality control and quality assessment for construction and equipment, nuclear safety and security assurance, and environmental protection in support of the nuclear power plant programme;
5. To study and develop the application of nuclear techniques and radiation technology in various economic and industrial fields of the country;
6. To research, adopt, master and develop science and technology related to nuclear power plant construction and operation;
7. To conduct postgraduate education and training activities for technical staff in the atomic energy field;
8. To provide science and technology services; to transfer research achievements to mass production; and to develop and manufacture products from research results at experimental scale;
9. To organize investment, business, import and export activities in the area of atomic energy;
10. To provide consultancy services on: project planning, supervision and verification, evaluation of design files and cost estimates of investment programs and projects as well as construction works in the field of atomic energy according to the provisions of law.
11. To cooperate and collaborate with organizations and individuals, both domestic and international, in R&D, education and training in the field of nuclear energy.
History
26/4/1976: Establishment of Dalat Nuclear Research Institute under the State Committee of Science and Technology
(According to Decision No. 64-CP dated April 26, 1976 of the Government Council)
23/2/1979: Establishment of Nuclear Research Institute (formerly known as the Da Lat Nuclear Institute) under the direct management of the Prime Minister
(According to Decree No. 59-CP dated February 23, 1979 of the Government Council)
20/3/1984: Inauguration of the restoration and expansion of Dalat Nuclear Reactor –The Reactor was put into operation
11/6/1984: Nuclear Research Institute was renamed National Atomic Energy Commission under the direct direction of the Chairman of the Council of Ministers
(According to Decree 87-HDBT dated June 11, 1984 of the Council of Ministers)
11/3/1986: Establishment of Hanoi Irradiation Center
(According to Decision No. 43/QD dated March 11, 1986 of the National Atomic Energy Commission)
21/1/1991: Establishment of Institute for Technology of Radioactive Waste and Rare Elements and Institute for Nuclear Science and Technology
(According to Decision No. 18-CT dated January 21, 1991 of the Chairman of Council of Ministers)
22/7/1991: Inauguration of Hanoi Irradiation Center
11/6/1991: Establishment of Center for Nuclear Technique Ho Chi Minh City
(According to Decree No. 87/ND-HDBT dated June 11, 1984 of the Council of Ministers)
13/9/1993: National Atomic Energy Commission was renamed Vietnam Atomic Energy Institute under the Ministry of Science, Technology and Environment (now the Ministry of Science and Technology)
(According to Decree No. 59/CP dated September 13, 1993 of the Government)
26/2/1998: Inauguration of Ho Chi Minh City Irradiation Installation
14/2/2000: Establishment of Research and Development Center for Radiation Technology
(According to Decision No. 159/QD-BKHCNMT dated February 14, 2000 of the Ministry of Science, Technology and Environment)
06/5/2002: Establishment of Technology Application and Development Company
(According to Decision No. 25/2002/QD-BKHCNMT dated May 6, 2002 of the Ministry of Science, Technology and Environment)
17/4/2007: Establishment of Center for Application of Nuclear Technique in Industry
(According to Decision No. 591/QD-BKHCN dated April 17, 2007 of the Ministry of Science and Technology)
26/8/2008: Establishment of Center for Non-Destructive Evaluation
(According to Decision No. 1850/QD-BKHCN dated August 26, 2008 of the Ministry of Science and Technology)
02/12/2010: Establishment of Nuclear Training Center
(According to Decision No. 2700/QD-BKHCN dated December 2, 2010 of the Ministry of Science and Technology)
06/01/2016: The Vietnam Atomic Energy Institute is a special-class scientific and technological organization under the Ministry of Science and Technology
(According to Decision No. 30/QD-TTg dated January 6, 2016 of the Prime Minister)
Organizational Structures
Head Quarter
1. Administration and Personnel
2. Department of Planning and R&D Management
3. Department of International Co-Operation
Nuclear Power & Technical Support
1. Institute for Nuclear Science and Technique
2. Institute for Technology of Radioactive and Rare Elements
3. Nuclear Research Institute
4. Nuclear Training Center
Applications of Radioisotopes
1. Center for Nuclear Techniques in Ho Chi Minh City
2. Center for Application of Nuclear Technique in Industry
3. Research and Development Center for Radiation Technology
4. Ha Noi Irradiation Center
5. Center for Non-Destructive Evaluation
See also
Nuclear energy in Vietnam
References
Nuclear power
Radiation protection organizations
Nuclear research institutes
Nuclear technology in Vietnam
Scientific organizations based in Vietnam | Vietnam Atomic Energy Commission | Physics,Engineering | 1,279 |
63,096,410 | https://en.wikipedia.org/wiki/FRB%20180916.J0158%2B65 | FRB 20180916B (previously known as FRB 180916.J0158+65, and less formally known as FRB 180916 or "R3"), is a repeating fast radio burst (FRB) discovered in 2018 by astronomers at the Canadian Hydrogen Intensity Mapping Experiment (CHIME) Telescope. According to a study published in the 9 January 2020 issue of the journal Nature, CHIME astronomers, in cooperation with the radio telescopes at European VLBI Network (VLBI) and the optical telescope Gemini North on Mauna Kea, Hawaii, were able to pinpoint the source of FRB 180916 to a location within a Milky Way-like galaxy named SDSS J015800.28+654253.0. This places the source at redshift 0.0337, approximately 457 million light-years from the Solar System.
Periodicity
Prior to the publication of the study in Nature, only two types of FRBs had been observed: non-repeaters and repeaters. Non-repeaters are 'one-off' FRBs, possibly associated with catastrophic stellar events. In contrast, repeaters are not one-off, but instead manifest recurring unpredictable, sporadic, and irregular radiation bursts; their sources are less well understood. FRB 180916 seems to represent a third and new type of FRB that may be termed periodic repeater. The radiation activity of FRB 180916 repeats over a period of 16.35 ± 0.18 days. Broadly, FRB 180916 emits a burst of radiation for approximately four days followed by an inactive period of about 12 days, then the cycle repeats. Additional follow-up studies of the repeating FRB by the Swift XRT and UVOT instruments were reported on 4 February 2020; by the Sardinia Radio Telescope (SRT) and Medicina Northern Cross Radio Telescope (MNC), on 17 February 2020; and, by the Galileo telescope in Asiago, also on 17 February 2020. Further observations were made by the Chandra X-ray Observatory on 3 and 18 December 2019, with no significant x-ray emissions detected at the FRB 180916 location, or from the host galaxy SDSS J015800.28+654253.0. On 6 April 2020, follow-up studies by the Global MASTER-Net were reported on The Astronomer's Telegram and, on 4 June 2020, further follow-up studies were reported with the Giant Metrewave Radio Telescope (uGMRT). On 7 June 2020, astronomers from Jodrell Bank Observatory reported possible evidence that FRB 121102 exhibits the same radio burst behavior ("radio bursts observed in a window lasting approximately 90 days followed by a silent period of 67 days") every 157 days, suggesting that the bursts may be associated with "the orbital motion of a massive star, a neutron star or a black hole". This behavior is nearly "10 times longer than the 16-day periodicity" exhibited by FRB 180916. In March 2021, another burst from the FRB was reported. On 25 August 2021, further observations were reported.
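To illustrate what a 16.35-day periodicity means in practice, burst arrival times can be folded on the period so that bursts cluster in a narrow phase window; the reference epoch and arrival times below are invented for the example:

P = 16.35                     # reported period, days
t0 = 58369.9                  # hypothetical reference epoch (MJD)
bursts = [58370.1, 58386.5, 58403.0, 58419.2]   # hypothetical arrival times

for t in bursts:
    phase = ((t - t0) % P) / P   # fraction of the activity cycle, in [0, 1)
    print(f"MJD {t}: phase {phase:.2f}")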
Structure
The 4-day radiation burst is not homogeneous but is instead characterized by a pattern of sub-bursts. The pattern of radiation activity within the four-day bursts is never exactly repeated. However, there is enough similarity (i.e. alignment of the sub-bursts from period to period) to suggest that they form part of an original repeating pattern with internal structure of some complexity. In March 2021, astronomers reported that the area producing pulses of FRB 180916 is about in scale, based on studies at extremely short timescales.
References
Notes
Fast radio bursts
Cassiopeia (constellation) | FRB 180916.J0158+65 | Physics,Astronomy | 760 |
42,071,402 | https://en.wikipedia.org/wiki/Transcriptome%20in%20vivo%20analysis%20tag | A transcriptome in vivo analysis tag (TIVA tag) is a multifunctional, photoactivatable mRNA-capture molecule designed for isolating mRNA from a single cell in complex tissues.
Background
A transcript is an RNA molecule that is copied or transcribed from a DNA template. A transcript can be further processed by alternative splicing, which is the retention of different combinations of exons. These unique combinations of exons are termed RNA transcript isoforms. The transcriptome is a set of all RNA, including rRNA, mRNA, tRNA, and non-coding RNA. Specifically mRNA transcripts can be used to investigate differences in gene expression patterns. Transcriptome profiling is determining the composition of transcripts and their relative expression levels in a given reference set of cells. This analysis involves characterization of all functional genomic elements, coding and non-coding.
The current RNA capture methods involve sorting cells in suspension from acutely dissociated tissue, and thus can lose information about cell morphology and microenvironment. Transcript abundance and isoforms are significantly different across tissues and are continually changing throughout an individual’s life. Gene expression is highly tissue specific, therefore with traditional RNA capture methods one must be cautious in the interpretation of gene expression patterns, as they often reflect expression of a heterogeneous mix of cell populations. Even in the same cell type, tissue measurements, where a population of cells is obtained, mask both low-level mRNA expression in single cells and variation in expression between cells. The photoactivatable TIVA tag is engineered to capture the mRNA of a single cell in complex tissues.
Chemical structure
TIVA tags are created initially via solid-phase synthesis with the cell-penetrating peptide conjugated afterwards. The functional components of the tag can be summarized as following:
Biotin: binds to streptavidin beads for tag isolation.
Cy3 fluorophore: used to validate cleavage of the photocleavable linker. If cleaved, the cell will appear green upon exposure to 514 nm light.
Cy5 fluorophore: used to validate uptake into cells. If uptake is successful, and if Cy5 is not yet cleaved from the TIVA tag, energy from a 514 nm light will be absorbed via FRET from Cy3 to Cy5, where cells that have taken up the TIVA will appear red.
PolyU 18-mer oligonucleotide: used to bind mRNA via complementary base pairing of their polyadenylated tails. Before cleavage of photocleavable linkers, it is caged by complementary base pairing to two polyA 7-mer oligonucleotides.
PolyA 7-mer oligonucleotides: before the cleavage of the photocleavable linkers, two polyA 7-mer molecules hybridize to the polyU oligonucleotide to cage the TIVA tag, and thus prevent it from binding mRNA molecules. After the photocleavable linkers are cleaved, the melting temperature decreases from 59 °C to less than 25 °C, leading to the dissociation of the polyA 7-mer oligonucleotides from the TIVA tag (see the illustrative sketch after this list).
Photocleavable linker: links and stabilizes Cy5 fluorophore and PolyA 7-mer oligonucleotides to the TIVA tag. It is cleaved upon photoactivation.
Cell-penetrating peptide CPP: guides the TIVA tag through cell membranes into tissues. It is linked to the TIVA tag by a disulphide bond that is cleaved once exposed to extracellular environment.
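As a rough illustration of why cleaving the linkers releases the cage (duplex melting temperature falls steeply with duplex length), Biopython's nearest-neighbor model can be compared for long and short duplexes; the DNA default parameters used here are only illustrative, since the actual TIVA cage is a modified oligonucleotide whose published 59 °C / <25 °C values come from different chemistry and buffer conditions:

from Bio.SeqUtils import MeltingTemp as mt

print(mt.Tm_NN("A" * 18))  # long tethered duplex: comparatively high Tm
print(mt.Tm_NN("A" * 7))   # free 7-mer after cleavage: far below room temperature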
Methodology of a TIVA Experiment
Tissue preparation
Tissue fixation is performed by chemical fixation using formalin. This prevents the postmortem degeneration of the tissue and hardens soft tissue. The tissue is dehydrated using ethanol and the alcohol is cleared using an organic solvent such as xylene. The tissue is embedded in paraffin which infiltrates the microscopic spaces present throughout the tissue. The embedded tissue is sliced using a microtome and subsequently stained to produce contrast needed to visualize the tissue.
Loading of the TIVA tag into cells and validation
A cell saline buffer containing the TIVA tag is added to the coverslip and incubated. During the incubation period, the TIVA tag penetrates the cell membrane via the CPP that is bound to it. Subsequently, the cytosolic environment cleaves the CPP and the TIVA tag is trapped inside the cell. After incubation, the coverslip is rinsed twice with cell saline buffer and then transferred to an imaging chamber. Using a confocal microscope, loading of the tag is confirmed by detecting the Cy5 signal at a wavelength of 561 nm.
Photoactivation of the TIVA tag in target cell and validation
Photolysis is performed resulting in photoactivation of the TIVA tag in the target cell or cells. Specifically, uncaging of the TIVA tag is accomplished using a 405-nm laser while measuring FRET excited by 514 nm. During this process, the mRNA-capturing moiety is released and subsequently anneals to the poly(A) tail of cellular mRNA. To confirm that the cell is not damaged during photolysis, the cell is imaged with the confocal microscope.
Extraction, lysis of target cell and affinity purification of TIVA tag
Using a glass pipette, the photolysed cell is isolated by aspiration. Cells are lysed and affinity purification is performed using streptavidin-coated beads that bind, immobilize and purify the biotinylated TIVA tag.
RNA-seq analysis
RNA-seq uses reverse transcriptase to convert the mRNA template to cDNA. During library preparation, the cDNA is fragmented into small pieces, which then serve as the template for sequencing. After sequencing RNA-seq analysis can then be performed.
Advantages and Disadvantages
Advantages
Noninvasive method for capturing mRNA from single cells in living, intact tissues for transcriptome analysis.
Though other methods, such as laser capture microdissection and patch-pipette aspiration, can be applied to isolate single cells, TIVA tags cause no damage to the cells and no tissue deformation from penetration of the pipette that might alter components of the transcriptional profile.
Can be performed on various cell types, while existing methods depend on transgenic rodent models to identify cells of interest.
Disadvantages
CPPs have been used to transport a variety of biomolecules into cells both in vitro and in vivo. One must be cautious about which CPPs are used; for example, different CPPs promote movement into different cell types and cellular components.
If the TIVA tag is not used within 3 months of synthesis, the FRET signal is weakened.
Storage of the TIVA tag requires a −80 °C freezer, and the tag should be kept in dried form.
References
RNA
Gene expression | Transcriptome in vivo analysis tag | Chemistry,Biology | 1,424 |
13,341,622 | https://en.wikipedia.org/wiki/Unparticle%20physics | In theoretical physics, unparticle physics is a speculative theory that conjectures a form of matter that cannot be explained in terms of particles using the Standard Model of particle physics, because its components are scale invariant.
Howard Georgi proposed this theory in two 2007 papers, "Unparticle Physics" and "Another Odd Thing About Unparticle Physics". His papers were followed by further work by other researchers into the properties and phenomenology of unparticle physics and its potential impact on particle physics, astrophysics, cosmology, CP violation, lepton flavour violation, muon decay, neutrino oscillations, and supersymmetry.
Background
All particles exist in states that may be characterized by a certain energy, momentum and mass. In most of the Standard Model of particle physics, particles of the same type cannot exist in another state with all these properties scaled up or down by a common factor – electrons, for example, always have the same mass regardless of their energy or momentum. But this is not always the case: massless particles, such as photons, can exist with their properties scaled equally. This immunity to scaling is called "scale invariance".
The idea of unparticles comes from conjecturing that there may be "stuff" that does not necessarily have zero mass but is still scale-invariant, with the same physics regardless of a change of length (or equivalently energy). This stuff is unlike particles, and described as unparticle. The unparticle stuff is equivalent to particles with a continuous spectrum of mass.
Such unparticle stuff has not been observed, which suggests that if it exists, it must couple with normal matter weakly at observable energies. After the Large Hadron Collider (LHC) team announced that it would begin probing a higher-energy frontier in 2009, some theoretical physicists began to consider the properties of unparticle stuff and how it might appear in LHC experiments. One of the great hopes for the LHC is that it might yield discoveries that help update or replace the best current description of the particles that make up matter and the forces that glue them together.
Properties
Unparticles would have properties in common with neutrinos, which have almost zero mass and are therefore nearly scale invariant. Neutrinos barely interact with matter – most of the time physicists can infer their presence only by calculating the "missing" energy and momentum after an interaction. By looking at the same interaction many times, a probability distribution is built up that tells more specifically how many and what sort of neutrinos are involved. They couple very weakly to ordinary matter at low energies, and the effect of the coupling increases as the energy increases.
A similar technique could be used to search for evidence of unparticles. According to scale invariance, a distribution containing unparticles would become apparent because it would resemble a distribution for a fractional number of massless particles.
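The scaling argument can be made concrete (following Georgi's 2007 paper; normalization conventions vary across the literature, so take this as a sketch): unparticle stuff of scaling dimension $d_\mathcal{U}$ carries the phase space

$$d\Phi_\mathcal{U}(P) = A_{d_\mathcal{U}}\,\theta(P^0)\,\theta(P^2)\,\left(P^2\right)^{d_\mathcal{U}-2}\,\frac{d^4P}{(2\pi)^4}, \qquad A_{d_\mathcal{U}} = \frac{16\pi^{5/2}}{(2\pi)^{2d_\mathcal{U}}}\,\frac{\Gamma\!\left(d_\mathcal{U}+\tfrac{1}{2}\right)}{\Gamma\!\left(d_\mathcal{U}-1\right)\Gamma\!\left(2d_\mathcal{U}\right)},$$

which for integer $d_\mathcal{U} = n$ reduces to the phase space of $n$ massless particles; hence a missing-energy distribution would be shaped as if produced by a fractional number of massless particles.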
This scale invariant sector would interact very weakly with the rest of the Standard Model, making it possible to observe evidence for unparticle stuff, if it exists. The unparticle theory is a high-energy theory that contains both Standard Model fields and Banks–Zaks fields, which have scale-invariant behavior at an infrared point. The two fields can interact through the interactions of ordinary particles if the energy of the interaction is sufficiently high.
These particle interactions would appear to have "missing" energy and momentum that would not be detected by the experimental apparatus. Certain distinct distributions of missing energy would signify the production of unparticle stuff. If such signatures are not observed, bounds on the model can be set and refined.
Experimental indications
Unparticle physics has been proposed as an explanation for anomalies in superconducting cuprate materials, where the charge measured by ARPES appears to exceed predictions from Luttinger's theorem for the quantity of electrons.
References
External links
Particle physics
Theoretical physics
Fringe physics | Unparticle physics | Physics | 827 |
1,691,376 | https://en.wikipedia.org/wiki/Amazon%20Web%20Services | Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered, pay-as-you-go basis. Clients will often use this in combination with autoscaling (a process that allows a client to use more computing in times of high application usage, and then scale down to reduce costs when there is less traffic). These cloud computing web services provide various services related to networking, compute, storage, middleware, IoT and other processing capacity, as well as software tools via AWS server farms. This frees clients from managing, scaling, and patching hardware and operating systems.
One of the foundational services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, with extremely high availability, which can be interacted with over the internet via REST APIs, a CLI or the AWS console. AWS's virtual computers emulate most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk (HDD)/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM).
AWS services are delivered to customers via a network of AWS server farms located throughout the world. Fees are based on a combination of usage (known as a "Pay-as-you-go" model), hardware, operating system, software, and networking features chosen by the subscriber requiring various degrees of availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. Amazon provides select portions of security for subscribers (e.g. physical security of the data centers) while other aspects of security are the responsibility of the subscriber (e.g. account management, vulnerability scanning, patching). AWS operates from many global geographical regions including seven in North America.
Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm. All services are billed based on usage, but each service measures usage in varying ways. As of 2023 Q1, AWS has 31% market share for cloud infrastructure while the next two competitors Microsoft Azure and Google Cloud have 25%, and 11% respectively, according to Synergy Research Group.
Services
AWS comprises over 200 products and services including computing, storage, networking, database, analytics, application services, deployment, management, machine learning, mobile, developer tools, RobOps and tools for the Internet of Things. The most popular include Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (Amazon S3), Amazon Connect, and AWS Lambda (a serverless compute service that can run arbitrary code written in any language and can be configured to be triggered by hundreds of events, including HTTP calls).
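As an illustration, a minimal Lambda handler in Python might look like the following; the event shape assumed here is the one API Gateway proxy integrations send, used purely as an example:

import json

def lambda_handler(event, context):
    # Parse the JSON body of an API Gateway-style HTTP event.
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }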
Services expose functionality through APIs for clients to use in their applications. These APIs are accessed over HTTP, using the REST architectural style and SOAP protocol for older APIs and exclusively JSON for newer ones. Clients can interact with these APIs in various ways, including from the AWS console (a website), by using SDKs written in various languages (such as Python, Java, and JavaScript), or by making direct REST calls.
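For example, with the official Python SDK (boto3), listing S3 buckets is a single call, with the signed HTTPS request handled under the hood; credentials are assumed to be configured in the environment:

import boto3

s3 = boto3.client("s3")            # the SDK wraps the REST API for us
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])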
History
Founding (2000–2005)
The genesis of AWS came in the early 2000s. After building Merchant.com, Amazon's e-commerce-as-a-service platform that offers third-party retailers a way to build their own web-stores, Amazon pursued service-oriented architecture as a means to scale its engineering operations, led by then CTO Allan Vermeulen.
Around the same time frame, Amazon was frustrated with the speed of its software engineering, and sought to implement various recommendations put forth by Matt Round, an engineering leader at the time, including maximization of autonomy for engineering teams, adoption of REST, standardization of infrastructure, removal of gate-keeping decision-makers (bureaucracy), and continuous deployment. He also called for increasing the percentage of the time engineers spent building the software rather than doing other tasks. Amazon created "a shared IT platform" so its engineering organizations, which were spending 70% of their time on "undifferentiated heavy-lifting" such as IT and infrastructure problems, could focus on customer-facing innovation instead. In addition, to deal with unusual peak traffic patterns, especially during the holiday season, Amazon's Infrastructure team, led by Tom Killalea, Amazon's first CISO, had already learned to run its data centers and associated services in a "fast, reliable, cheap" way by migrating services to commodity Linux hardware and relying on open source software.
In July 2002 Amazon.com Web Services, managed by Colin Bryar, launched its first web services, opening up the Amazon.com platform to all developers. Over one hundred applications were built on top of it by 2004. This unexpected developer interest took Amazon by surprise and convinced them that developers were "hungry for more".
By the summer of 2003, Andy Jassy had taken over Bryar's portfolio at Rick Dalzell's behest, after Vermeulen, who was Bezos' first pick, declined the offer. Jassy subsequently mapped out the vision for an "Internet OS" made up of foundational infrastructure primitives that alleviated key impediments to shipping software applications faster. By fall 2003, databases, storage, and compute were identified as the first set of infrastructure pieces that Amazon should launch.
Jeff Barr, an early AWS employee, credits Vermeulen, Jassy, Bezos himself, and a few others for coming up with the idea that would evolve into EC2, S3, and RDS; Jassy recalls the idea was the result of brainstorming for about a week with "ten of the best technology minds and ten of the best product management minds" on about ten different internet applications and the most primitive building blocks required to build them. Werner Vogels cites Amazon's desire to make the process of "invent, launch, reinvent, relaunch, start over, rinse, repeat" as fast as it could was leading them to break down organizational structures with "two-pizza teams" and application structures with distributed systems; and that these changes ultimately paved way for the formation of AWS and its mission "to expose all of the atomic-level pieces of the Amazon.com platform". According to Brewster Kahle, co-founder of Alexa Internet, which was acquired by Amazon in 1999, his start-up's compute infrastructure helped Amazon solve its big data problems and later informed the innovations that underpinned AWS.
Jassy assembled a founding team of 57 employees from a mix of engineering and business backgrounds to kick-start these initiatives, with a majority of the hires coming from outside the company; Jeff Lawson, Twilio CEO, Adam Selipsky, Tableau CEO, and Mikhail Seregine, co-founder at Outschool among them.
In late 2003, the concept for compute, which would later launch as EC2, was reformulated when Chris Pinkham and Benjamin Black presented a paper internally describing a vision for Amazon's retail computing infrastructure that was completely standardized, completely automated, and would rely extensively on web services for services such as storage and would draw on internal work already underway. Near the end of their paper, they mentioned the possibility of selling access to virtual servers as a service, proposing the company could generate revenue from the new infrastructure investment. Thereafter Pinkham, Willem van Biljon, and lead developer Christopher Brown developed the Amazon EC2 service, with a team in Cape Town, South Africa.
In November 2004, AWS launched its first infrastructure service for public usage: Simple Queue Service (SQS).
S3, EC2, and other first generation services (2006–2010)
On March 14, 2006, AWS launched Amazon S3 cloud storage followed by EC2 in August 2006. Pi Corporation, a startup Paul Maritz co-founded, was the first beta-user of EC2 outside of Amazon, while Microsoft was among EC2's first enterprise customers. Later that year, SmugMug, one of the early AWS adopters, attributed savings of around US$400,000 in storage costs to S3. According to Vogels, S3 was built with 8 microservices when it launched in 2006, but had over 300 microservices by 2022.
In September 2007, AWS announced its annual Start-up Challenge, a contest with prizes worth $100,000 for entrepreneurs and software developers based in the US using AWS services such as S3 and EC2 to build their businesses. The first edition saw participation from Justin.tv, which Amazon would later acquire in 2014. Ooyala, an online media company, was the eventual winner.
Additional AWS services from this period include SimpleDB, Mechanical Turk, Elastic Block Store, Elastic Beanstalk, Relational Database Service, DynamoDB, CloudWatch, Simple Workflow, CloudFront, and Availability Zones.
Growth (2010–2015)
In November 2010, it was reported that all of Amazon.com's retail sites had migrated to AWS. Prior to 2012, AWS was considered a part of Amazon.com and so its revenue was not delineated in Amazon financial statements. In that year industry watchers for the first time estimated AWS revenue to be over $1.5 billion.
On November 27, 2012, AWS hosted its first major annual conference, re:Invent with a focus on AWS's partners and ecosystem, with over 150 sessions. The three-day event was held in Las Vegas because of its relatively cheaper connectivity with locations across the United States and the rest of the world. Andy Jassy and Werner Vogels presented keynotes, with Jeff Bezos joining Vogels for a fireside chat. AWS opened early registrations at US$1,099 per head for their customers from over 190 countries. On stage with Andy Jassy at the event which saw around 6000 attendees, Reed Hastings, CEO at Netflix, announced plans to migrate 100% of Netflix's infrastructure to AWS.
To support industry-wide training and skills standardization, AWS began offering a certification program for computer engineers, on April 30, 2013, to highlight expertise in cloud computing. Later that year, in October, AWS launched Activate, a program for start-ups worldwide to leverage AWS credits, third-party integrations, and free access to AWS experts to help build their business.
In 2014, AWS launched its partner network, AWS Partner Network (APN), which is focused on helping AWS-based companies grow and scale the success of their business with close collaboration and best practices.
In January 2015, Amazon Web Services acquired Annapurna Labs, an Israel-based microelectronics company for a reported US$350–370M.
In April 2015, Amazon.com reported AWS was profitable, with sales of $1.57 billion in the first quarter of the year and $265 million of operating income. Founder Jeff Bezos described it as a fast-growing $5 billion business; analysts described it as "surprisingly more profitable than forecast". In October, Amazon.com said in its Q3 earnings report that AWS's operating income was $521 million, with operating margins at 25 percent. AWS's 2015 Q3 revenue was $2.1 billion, a 78% increase from 2014's Q3 revenue of $1.17 billion. 2015 Q4 revenue for the AWS segment increased 69.5% y/y to $2.4 billion with a 28.5% operating margin, giving AWS a $9.6 billion run rate. In 2015, Gartner estimated that AWS customers are deploying 10x more infrastructure on AWS than the combined adoption of the next 14 providers.
Current era (2016–present)
In 2016 Q1, revenue was $2.57 billion with net income of $604 million, a 64% increase over 2015 Q1 that resulted in AWS being more profitable than Amazon's North American retail business for the first time. Jassy was thereafter promoted to CEO of the division. Around the same time, Amazon experienced a 42% rise in stock value as a result of increased earnings, of which AWS contributed 56% to corporate profits.
AWS had $17.46 billion in annual revenue in 2017. By the end of 2020, the number had grown to $46 billion. Reflecting the success of AWS, Jassy's annual compensation in 2017 hit nearly $36 million.
In January 2018, Amazon launched an autoscaling service on AWS.
In November 2018, AWS announced customized ARM cores for use in its servers. Also in November 2018, AWS announced that it was developing ground stations to communicate with customers' satellites.
In 2019, AWS reported 37% yearly growth and accounted for 12% of Amazon's revenue (up from 11% in 2018).
In April 2021, AWS reported 32% yearly growth and accounted for 32% of $41.8 billion cloud market in Q1 2021.
In January 2022, AWS joined the MACH Alliance, a non-profit enterprise technology advocacy group.
In June 2022, it was reported that in 2019 Capital One had not secured their AWS resources properly, and was subject to a data breach by a former AWS employee. The employee was convicted of hacking into the company's cloud servers to steal customer data and use computer power to mine cryptocurrency. The ex-employee was able to download the personal information of more than 100 million Capital One customers.
In June 2022, AWS announced they had launched the AWS Snowcone, a small computing device, to the International Space Station on the Axiom Mission 1.
In September 2023, AWS announced it would become AI startup Anthropic's primary cloud provider. Amazon has committed to investing up to $4 billion in Anthropic and will have a minority ownership position in the company. AWS also announced the GA of Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI companies available through a single application programming interface (API)
In April 2024, AWS announced a new service called Deadline Cloud, which lets customers set up, deploy and scale up graphics and visual effects rendering pipelines on AWS cloud infrastructure.
In December 2024, AWS announced Amazon Nova, its own family of foundation models. These models, offered through Amazon Bedrock, are designed for various tasks including content generation, video understanding, and building agentic applications. They are available in six different sizes.
Customer base
Notable customers include NASA and the Obama presidential campaign of 2012.
In October 2013, AWS was awarded a $600M contract with the CIA.
In 2019, it was reported that more than 80% of Germany's listed DAX companies use AWS.
In August 2019, the U.S. Navy said it moved 72,000 users from six commands to an AWS cloud system as a first step toward pushing all of its data and analytics onto the cloud.
In 2021, DISH Network announced it will develop and launch its 5G network on AWS.
In October 2021, it was reported that spy agencies and government departments in the UK such as GCHQ, MI5, MI6, and the Ministry of Defence, have contracted AWS to host their classified materials.
In 2022 Amazon shared a $9 billion contract from the United States Department of Defense for cloud computing with Google, Microsoft, and Oracle.
Multiple financial services firms have shifted to AWS in some form.
Significant service outages
On April 20, 2011, AWS suffered a major outage. Parts of the Elastic Block Store service became "stuck" and could not fulfill read/write requests. It took at least two days for the service to be fully restored.
On June 29, 2012, several websites that rely on Amazon Web Services were taken offline due to a severe storm in Northern Virginia, where AWS's largest data center cluster is located.
On October 22, 2012, a major outage occurred, affecting many sites including Reddit, Foursquare, Pinterest. The cause was a memory leak bug in an operational data collection agent.
On December 24, 2012, AWS suffered another outage causing websites such as Netflix to be unavailable for customers in the Northeastern United States. AWS cited their Elastic Load Balancing service as the cause.
On February 28, 2017, AWS experienced a massive outage of S3 services in its Northern Virginia region. A majority of websites that relied on AWS S3 either hung or stalled, and Amazon reported within five hours that AWS was fully online again. No data has been reported to have been lost due to the outage. The outage was caused by a human error made while debugging, that resulted in removing more server capacity than intended, which caused a domino effect of outages.
On November 25, 2020, AWS experienced several hours of outage on the Kinesis service in North Virginia (US-East-1) region. Other services relying on Kinesis were also impacted.
On December 7, 2021, an outage mainly affected the Eastern United States, disrupting delivery service and streaming.
Availability and topology
AWS has distinct operations in 33 geographical "regions": eight in North America, one in South America, eight in Europe, three in the Middle East, one in Africa, and twelve in Asia Pacific.
Most AWS regions are enabled by default for AWS accounts. Regions introduced after 20 March 2019 are considered to be opt-in regions, requiring a user to explicitly enable them in order for the region to be usable in the account. For opt-in regions, Identity and Access Management (IAM) resources such as users and roles are only propagated to the regions that are enabled.
Each region is wholly contained within a single country and all of its data and services stay within the designated region. Each region has multiple "Availability Zones", which consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Availability Zones do not automatically provide additional scalability or redundancy within a region, since they are intentionally isolated from each other to prevent outages from spreading between zones. Several services can operate across Availability Zones (e.g., S3, DynamoDB) while others can be configured to replicate across zones to spread demand and avoid downtime from failures.
Amazon Web Services operated an estimated 1.4 million servers across 11 regions and 28 availability zones. The global network of AWS Edge locations consists of over 300 points of presence worldwide, including locations in North America, Europe, Asia, Australia, Africa, and South America.
AWS has announced the planned launch of six additional regions in Malaysia, Mexico, New Zealand, Thailand, Saudi Arabia, and the European Union. In mid March 2023, Amazon Web Services signed a cooperation agreement with the New Zealand Government to build large data centers in New Zealand.
In 2014, AWS claimed its aim was to achieve 100% renewable energy usage in the future. In the United States, AWS's partnerships with renewable energy providers include Community Energy of Virginia, to support the US East region; Pattern Development, in January 2015, to construct and operate Amazon Wind Farm Fowler Ridge; Iberdrola Renewables, LLC, in July 2015, to construct and operate Amazon Wind Farm US East; EDP Renewables North America, in November 2015, to construct and operate Amazon Wind Farm US Central; and Tesla Motors, to apply battery storage technology to address power needs in the US West (Northern California) region.
Pop-up lofts
AWS also has "pop-up lofts" in different locations around the world. These market AWS to entrepreneurs and startups in different tech industries in a physical location. Visitors can work or relax inside the loft, or learn more about what they can do with AWS. In June 2014, AWS opened their first temporary pop-up loft in San Francisco. In May 2015 they expanded to New York City, and in September 2015 expanded to Berlin. AWS opened its fourth location, in Tel Aviv from March 1, 2016, to March 22, 2016. A pop-up loft was open in London from September 10 to October 29, 2015. The pop-up lofts in New York and San Francisco are indefinitely closed due to the COVID-19 pandemic while Tokyo has remained open in a limited capacity.
Charitable work
In 2017, AWS launched AWS re/Start in the United Kingdom to help young adults and military veterans retrain in technology-related skills. In partnership with the Prince's Trust and the Ministry of Defence (MoD), AWS will help to provide re-training opportunities for young people from disadvantaged backgrounds and former military personnel. AWS is working alongside a number of partner companies including Cloudreach, Sage Group, EDF Energy, and Tesco Bank.
In April 2022, AWS announced the organization has committed more than $30 million over three years to early-stage start-ups led by Black, Latino, LGBTQIA+, and Women founders as part of its AWS impact Accelerator. The Initiative offers qualifying start-ups up to $225,000 in cash, credits, extensive training, mentoring, technical guidance and includes up to $100,000 in AWS service credits.
Reception
Environmental impact
In 2016, Greenpeace assessed major tech companies—including cloud services providers like AWS, Microsoft, Oracle, Google, IBM, Salesforce and Rackspace—based on their level of "clean energy" usage. Greenpeace evaluated companies on their mix of renewable-energy sources; transparency; renewable-energy commitment and policies; energy efficiency and greenhouse-gas mitigation; renewable-energy procurement; and advocacy. The group gave AWS an overall "C" grade. Greenpeace credited AWS for its advances toward greener computing in recent years and its plans to launch multiple wind and solar farms across the United States. The organization stated that Amazon is opaque about its carbon footprint.
In January 2021, AWS joined an industry pledge to achieve climate neutrality of data centers by 2030, the Climate Neutral Data Centre Pact. As of 2023, Amazon as a whole is the largest corporate purchaser of renewable energy in the world, a position it has held since 2020, and has a global portfolio of over 20 GW of renewable energy capacity. In 2022, 90% of all Amazon operations, including data centers, were powered by renewables.
Denaturalization protest
US Department of Homeland Security has employed the software ATLAS, which runs on Amazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center. The algorithm and the criteria for the algorithm were secret. Amazon faced protests from its own employees and activists for the anti-migrant collaboration with authorities.
Israeli–Palestinian conflict
The contract for Project Nimbus drew rebuke and condemnation from the companies' shareholders as well as their employees, over concerns that the project would lead to abuses of Palestinians' human rights in the context of the ongoing occupation and the Israeli–Palestinian conflict. Specifically, they voice concern over how the technology will enable further surveillance of Palestinians and unlawful data collection on them as well as facilitate the expansion of Israel's illegal settlements on Palestinian land. A government procurement document featuring 'obligatory customers' of Nimbus, including "two of Israel’s leading state-owned weapons manufacturers" Israel Aerospace Industries and Rafael Advanced Defense Systems, was published in 2021 with periodic updates since (up to Oct 2023).
Challenges
Like other cloud computing solutions, applications hosted on Amazon Web Services (AWS) are subject to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.
Issues
Some AWS customers have complained about receiving unexpectedly large bills, commonly referred to as "surprise bills." This can occur due to various reasons, including but not limited to misconfigurations, security breaches, complex pricing—especially when multiple AWS services are used together—and unexpected data transfer charges.
Community-Driven AWS SDK Alternatives
AWS-Lite is an open-source, lightweight alternative to the official AWS SDK for Node.js, created and maintained by the team behind Deno, a runtime environment. It offers a reduced package size, which can lower memory usage and improve performance. However, AWS-Lite does not support the full range of AWS services and features available in the official SDK, limiting its applicability to targeted scenarios.
See also
Tim Bray
Cloud-computing comparison
Comparison of file hosting services
James Gosling
Explanatory notes
References
External links
2006 software
Cloud computing providers
Cloud infrastructure
Cloud platforms
Defense companies of the United States
Web hosting | Amazon Web Services | Technology | 5,219 |
24,231,653 | https://en.wikipedia.org/wiki/Shuttle%20valve | A shuttle valve is a type of valve which allows fluid to flow through it from one of two sources. Generally a shuttle valve is used in pneumatic systems, although sometimes it will be found in hydraulic systems.
Structure and function
The basic structure of a shuttle valve is like a tube with three openings; one on each end, and one in the middle. A ball or other blocking valve element (the shuttle) moves freely within the tube. When pressure of a fluid is exerted through the opening at one end it pushes the shuttle towards the opposite end, closing it. This prevents the fluid from passing through that opening, but allows it to flow out through the middle opening. In this way two different sources can provide pressure to a device without the threat of back flow from one source to the other.
In pneumatic logic a shuttle-valve works as an OR gate.
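A toy model of this behavior (illustrative only): the shuttle seals off the lower-pressure inlet, so the outlet sees the higher of the two inlet pressures, which for on/off pneumatic signals is exactly a logical OR:

def shuttle_valve(p_a: float, p_b: float) -> float:
    # The shuttle blocks the lower-pressure port and passes the higher one.
    return max(p_a, p_b)

on = lambda p: p > 0.0   # treat any positive pressure as logic "1"
assert on(shuttle_valve(6.0, 0.0)) == (on(6.0) or on(0.0))   # OR-gate behavior
assert shuttle_valve(0.0, 0.0) == 0.0                        # no source, no output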
Applications
A shuttle valve has several applications including:
The use of more switches on one machine: by using the shuttle valve, more than one switch can be operated on a single machine for safety, and each switch can be placed at any suitable location. This application is normally used with heavy industrial machinery.
Winch brake circuit: a shuttle valve provides brake control in pneumatic winch applications. When the compressor is operated, the shuttle valves direct air to open the brake shoes. When the control valve is centered, the brake cylinder is vented through the shuttle valve, and the brake shoes are allowed to close.
Air pilot control: converting from air to oil results in locking of the cylinder. Shifting the four-way valve to either extreme position applies the air pilot through the shuttle valve, holding the two air-operated valves open and applying oil under air pressure to the corresponding side of the cylinder. Positioning a manual valve to neutral exhausts the air pilot pressure, closing the two-way valves, and trapping oil on both sides of the cylinder to lock it in position.
Standby and emergency systems: compressor systems requiring standby or purge gases capability are pressure controlled by the shuttle valve. This is used for instrumentation, pressure cables, or any system requiring continuous pneumatic input. If the compressor fails, the standby tank—regulated to slightly under the compressor supply—will shift the shuttle valve and take over the function. When the compressor pressure is re-established, the shuttle valve shifts back and seals off the standby system until needed again.
References
Control devices
Valves | Shuttle valve | Physics,Chemistry,Engineering | 491 |
15,183,374 | https://en.wikipedia.org/wiki/Sp7%20transcription%20factor | Transcription factor Sp7, also called Osterix (Osx), is a protein that in humans is encoded by the SP7 gene. It is a member of the Sp family of zinc-finger transcription factors. It is highly conserved among bone-forming vertebrate species. It plays a major role, along with Runx2 and Dlx5, in driving the differentiation of mesenchymal precursor cells into osteoblasts and eventually osteocytes. Sp7 also plays a regulatory role by inhibiting chondrocyte differentiation, maintaining the balance between differentiation of mesenchymal precursor cells into ossified bone or cartilage. Mutations of this gene have been associated with multiple dysfunctional bone phenotypes in vertebrates. During development, a mouse embryo model with Sp7 expression knocked out had no formation of bone tissue. Through the use of GWAS studies, the Sp7 locus in humans has been strongly associated with bone mass density. In addition there is significant genetic evidence for its role in diseases such as Osteogenesis imperfecta (OI).
Genetics
In humans, Sp7 has been mapped to 12q13.13. It has 78% homology to another Sp family member, Sp1, especially in the regions which code for the three Cys-2 His-2 type DNA-binding zinc fingers. Sp7 consists of three exons, the first two of which are alternatively spliced, encoding a 431-residue isoform and an amino-terminus-truncated 413-residue short protein isoform.
A GWAS study has found that bone mass density (BMD) is associated with the Sp7 locus: adults and children with either low or high BMD were analyzed, showing that several common-variant SNPs within the 12q13 region lie in an area of linkage disequilibrium.
Transcriptional pathway
There are two main pathways that lead to the induction of Sp7/Osx gene expression. Msx2 induces Sp7 directly, whereas bone morphogenetic protein 2 (BMP2) induces it indirectly through either Dlx5 or Runx2. Once Sp7 expression is triggered, it then induces the expression of a slew of mature osteoblast genes such as Col1a1, osteonectin, osteopontin and bone sialoprotein, which are all necessary for productive osteoblasts during the creation of ossified bone.
Negative regulation of this pathway comes in the form of p53, microRNAs and the TNF inflammatory pathway. Dysregulation of the TNF pathway, which blocks appropriate bone growth by osteoblasts, is a partial cause of the abnormal degradation of bone seen in osteoporosis and rheumatoid arthritis.
Mechanism of action
The exact mechanisms of action for Sp7/Osterix are currently in contention and the full protein structure has yet to be solved. As a zinc-finger transcription factor, its relatively high homology with Sp1 seems to indicate that it might act in a similar fashion during gene regulatory processes. Previous studies done on Sp1 have shown that Sp1 utilizes the zinc-finger DNA binding domains in its structure to bind directly to a GC-rich region of the genome known as the GC box, creating downstream regulatory effects. There are a number of studies which support this mechanism as also applicable to Sp7; however, other researchers were unable to replicate the GC box binding seen in Sp1 when looking at Sp7. Another proposed mechanism of action is indirect gene regulation through the protein known as homeobox transcription factor Dlx5. This is plausible because Dlx5 has much higher affinity to AT-rich gene regulatory regions than Sp7 has been shown to have to the GC box, thus providing an alternate methodology through which regulation can occur.
Mass spectrometry and proteomics methods have shown that Sp7 also interacts with RNA helicase A and is possibly negatively regulated by RIOX1, both of which provide evidence for regulatory mechanisms outside of the GC box paradigm.
Function
Sp7 acts as a master regulator of bone formation during both embryonic development and during the homeostatic maintenance of bone in adulthood.
During development
In a developing organism, Sp7 serves as one of the most important regulatory shepherds for bone formation. The creation of ossified bone is preceded by the differentiation of mesenchymal stem cells into chondrocytes and the conversion of some of those chondrocytes into cartilage. Certain populations of that initial cartilage serves as a template for bone cells as skeletogenesis proceeds.
Sp7/Osx null mouse embryos displayed a severe phenotype in which there were unaffected chondrocytes and cartilage but absolutely no formation of bone tissue. Ablation of Sp7 genes also led to decreased expression of various other osteocyte-specific markers such as: Sost, Dkk1, Dmp1, and Phe. The close relationship between Sp7/Osx and Runx2 was also demonstrated through this particular experiment because the Sp7 knockout bone phenotype greatly resembled that of the Runx2 knockout, and further experiments proved that Sp7 is downstream of and very closely associated with Runx2. The important conclusion of this particular series of experiments was the clear regulatory role of Sp7 in the decision process made by mesenchymal stem cells to progress from their original highly Sox9 positive osteoprogenitors into either bone or cartilage. Without sustained Sp7 expression the progenitor cells take the pathway into becoming chondrocytes and eventually cartilage rather than creating ossified bone.
In adult organisms
Outside of the context of development, ablation of Sp7 in adult mice led to a lack of new bone formation, highly irregular cartilage accumulation beneath the growth plate and defects in osteocyte maturation and functionality. Other studies observed that a conditional knockout of Sp7 in adult mouse osteoblasts resulted in osteopenia in the vertebrae of the animals, issues with bone turnover and more porosity in the cortical outer surface of the long bones of the body. Observation of an opposite effect, overproliferation of Sp7+ osteoblasts, further supports the important regulatory effects of Sp7 in vertebrates. A mutation in the zebrafish homologue of Sp7 caused severe craniofacial irregularities in maturing organisms while leaving the rest of the skeleton largely unaffected. Instead of normal suture patterning along the developing skull, the affected organisms displayed a mosaic of sites where bone formation was being initiated but not completed. This caused the appearance of many small irregular bones instead of the normal smooth frontal and parietal bones. These phenotypic shifts corresponded to an overproliferation of Runx2+ osteoblast progenitors, indicating that the phenotype observed was related to an abundance of initiation sites for bone proliferation creating many pseudo-sutures.
Clinical relevance
Osteogenesis imperfecta
The most direct example of the role of Sp7 in human disease has been in recessive osteogenesis imperfecta (OI), a type-I collagen-related disease that causes a heterogeneous set of bone-related symptoms which can range from mild to very severe. Generally this disease is caused by mutations in Col1a1 or Col1a2, which are regulators of collagen growth. OI-causing mutations in these collagen genes are generally heritable in an autosomal-dominant fashion. However, there has been a recent case of a patient with recessive OI with a documented frameshift mutation in Sp7/Osx as the etiological origin of the disease. This patient displayed abnormal fracturing of the bones after relatively minor injuries and markedly delayed motor milestones, requiring assistance to stand at age 6 and being unable to walk at age 8 due to pronounced bowing of the arms and legs. This provides a direct link between the Sp7 gene and the OI disease phenotype.
Osteoporosis
Genome-wide association studies (GWAS) have shown associations between adult and juvenile bone mineral density (BMD) and the Sp7 locus in humans. Though low BMD is a good indicator of susceptibility to osteoporosis in adults, the amount of information currently available from these studies does not allow a direct correlation to be made between osteoporosis and Sp7. Abnormal expression of inflammatory cytokines such as TNF-α, which is present in osteoporosis, can have detrimental effects on the expression of Sp7.
Rheumatoid Arthritis
Adiponectin is a protein hormone that has been shown to be upregulated in rheumatoid arthritis disease pathology, causing the release of inflammatory cytokines and enhancing the breakdown of the bone matrix. In primary human cell cultures, Sp7 was shown to be inhibited by adiponectin, thus contributing to the downregulation of ossified bone formation. These data are backed up by another study in which inflammatory cytokines such as TNF-α and IL-1β were shown to downregulate gene expression of Sp7 in mouse primary mesenchymal stem cells in culture. These studies indicate that an inflammatory environment is detrimental to the creation of ossified bone.
Bone fracture repair
Accelerated bone fracture healing was found when researchers implanted Sp7-overexpressing bone marrow stromal cells at a site of bone fracture. It was found that the mechanism by which Sp7 expression accelerated bone healing was the triggering of new bone formation by inducing neighboring cells to express genes characteristic of bone progenitors. Along similar mechanistic lines to bone repair is the integration of dental implants into alveolar bone, since the insertion of these implants causes bone damage that must be healed before the implant is successfully integrated. Researchers have shown that when bone marrow stromal cells are exposed to artificially elevated levels of Sp7/Osx, mice with dental implants have better outcomes through the promotion of healthy bone regeneration.
Treatment of osteosarcomas
Overall Sp7 expression is decreased in mouse and human osteosarcoma cell lines when compared to endogenous osteoblasts, and this decrease in expression correlates with metastatic potential. Transfection of the SP7 gene into a mouse osteosarcoma cell line to create higher levels of expression reduced overall malignancy in vitro and reduced tumor incidence, tumor volume, and lung metastasis when the cells were injected into mice. Sp7 expression was also found to decrease bone destruction by the sarcoma, likely through supplementing the normal regulatory pathways controlling osteoblasts and osteocytes.
References
Further reading
External links
Transcription factors | Sp7 transcription factor | Chemistry,Biology | 2,213 |
23,636 | https://en.wikipedia.org/wiki/Perimeter | A perimeter is a closed path that encompasses, surrounds, or outlines either a two dimensional shape or a one-dimensional length. The perimeter of a circle or an ellipse is called its circumference.
Calculating the perimeter has several practical applications. A calculated perimeter is the length of fence required to surround a yard or garden. The perimeter of a wheel/circle (its circumference) describes how far it will roll in one revolution. Similarly, the amount of string wound around a spool is related to the spool's perimeter; if the length of the string was exact, it would equal the perimeter.
Formulas
The perimeter is the distance around a shape. Perimeters for more general shapes can be calculated, as any path, with $L = \int_0^L \mathrm{d}s$, where $L$ is the length of the path and $\mathrm{d}s$ is an infinitesimal line element. Both of these must be replaced by algebraic forms in order to be practically calculated. If the perimeter is given as a closed piecewise smooth plane curve $\gamma$ with

$\gamma(t) = \big(x(t), y(t)\big), \qquad t \in [a, b],$

then its length $L$ can be computed as follows:

$L = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\; \mathrm{d}t.$
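When no closed form is available, the arc-length integral above can be evaluated numerically. The following Python sketch approximates the perimeter of an ellipse, a curve whose perimeter has no elementary closed form, using trapezoidal quadrature; the semi-axis values are illustrative.

```python
import numpy as np

# Perimeter of the closed parametric curve x(t) = a*cos(t), y(t) = b*sin(t),
# t in [0, 2*pi], via the arc-length integral  L = ∫ sqrt(x'(t)^2 + y'(t)^2) dt.
a, b = 3.0, 2.0                                  # illustrative semi-axes
t = np.linspace(0.0, 2.0 * np.pi, 200_001)       # fine parameter grid

dx = -a * np.sin(t)                              # x'(t)
dy = b * np.cos(t)                               # y'(t)

perimeter = np.trapz(np.sqrt(dx**2 + dy**2), t)  # trapezoidal quadrature of ds
print(f"ellipse perimeter ≈ {perimeter:.4f}")    # ≈ 15.8654 for a = 3, b = 2
```

For a circle ($a = b = r$) the same code reproduces $2\pi r$ to the accuracy of the quadrature.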
A generalized notion of perimeter, which includes hypersurfaces bounding volumes in -dimensional Euclidean spaces, is described by the theory of Caccioppoli sets.
Polygons
Polygons are fundamental to determining perimeters, not only because they are the simplest shapes but also because the perimeters of many shapes are calculated by approximating them with sequences of polygons tending to these shapes. The first mathematician known to have used this kind of reasoning is Archimedes, who approximated the perimeter of a circle by surrounding it with regular polygons.
The perimeter of a polygon equals the sum of the lengths of its sides (edges). In particular, the perimeter of a rectangle of width $w$ and length $\ell$ equals $2w + 2\ell$.
An equilateral polygon is a polygon which has all sides of the same length (for example, a rhombus is a 4-sided equilateral polygon). To calculate the perimeter of an equilateral polygon, one must multiply the common length of the sides by the number of sides.
A regular polygon may be characterized by the number of its sides and by its circumradius, that is to say, the constant distance between its centre and each of its vertices. The length of its sides can be calculated using trigonometry. If $R$ is a regular polygon's radius and $n$ is the number of its sides, then its perimeter is

$2nR \sin\!\left(\frac{\pi}{n}\right).$
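These perimeter formulas translate directly into code. A minimal Python sketch with illustrative inputs:

```python
import math

def rectangle_perimeter(width: float, length: float) -> float:
    """Perimeter of a rectangle: 2*(width + length)."""
    return 2 * (width + length)

def equilateral_polygon_perimeter(side: float, n_sides: int) -> float:
    """Perimeter of an equilateral polygon: common side length times number of sides."""
    return side * n_sides

def regular_polygon_perimeter(circumradius: float, n_sides: int) -> float:
    """Perimeter of a regular n-gon with circumradius R: 2*n*R*sin(pi/n)."""
    return 2 * n_sides * circumradius * math.sin(math.pi / n_sides)

print(rectangle_perimeter(0.5, 2.0))             # 5.0
print(equilateral_polygon_perimeter(1.0, 4))     # 4.0, e.g. a rhombus with unit sides
print(regular_polygon_perimeter(1.0, 6))         # 6.0: a hexagon's side equals its circumradius
```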
A splitter of a triangle is a cevian (a segment from a vertex to the opposite side) that divides the perimeter into two equal lengths, this common length being called the semiperimeter of the triangle. The three splitters of a triangle all intersect each other at the Nagel point of the triangle.
A cleaver of a triangle is a segment from the midpoint of a side of a triangle to the opposite side such that the perimeter is divided into two equal lengths. The three cleavers of a triangle all intersect each other at the triangle's Spieker center.
Circumference of a circle
The perimeter of a circle, often called the circumference, is proportional to its diameter and its radius. That is to say, there exists a constant number pi, $\pi$ (the Greek p for perimeter), such that if $P$ is the circle's perimeter and $D$ its diameter then

$P = \pi \cdot D.$

In terms of the radius $r$ of the circle, this formula becomes

$P = 2\pi r.$
To calculate a circle's perimeter, knowledge of its radius or diameter and the number suffices. The problem is that is not rational (it cannot be expressed as the quotient of two integers), nor is it algebraic (it is not a root of a polynomial equation with rational coefficients). So, obtaining an accurate approximation of is important in the calculation. The computation of the digits of is relevant to many fields, such as mathematical analysis, algorithmics and computer science.
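Archimedes' polygon method mentioned above can be used to approximate $\pi$ itself: the perimeter of an inscribed regular $n$-gon gives a lower bound on the circumference, that of a circumscribed $n$-gon an upper bound. A Python sketch (the library constant math.pi is used only to construct the polygon angles):

```python
import math

r = 1.0  # unit radius, so the circumference is 2*pi
for n in (6, 12, 24, 48, 96):
    inscribed = 2 * n * r * math.sin(math.pi / n)       # lower bound on 2*pi*r
    circumscribed = 2 * n * r * math.tan(math.pi / n)   # upper bound on 2*pi*r
    print(f"n = {n:2d}:  {inscribed / 2:.6f} < pi < {circumscribed / 2:.6f}")
# With n = 96, the polygon Archimedes used, this gives 3.141032 < pi < 3.142715.
```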
Perception of perimeter
The perimeter and the area are two main measures of geometric figures. Confusing them is a common error, as well as believing that the greater one of them is, the greater the other must be. Indeed, a commonplace observation is that an enlargement (or a reduction) of a shape makes its area grow (or decrease) as well as its perimeter. For example, if a field is drawn at a scale of $1/k$ on a map, the actual field perimeter can be calculated by multiplying the drawing perimeter by $k$. The real area is $k^2$ times the area of the shape on the map. Nevertheless, there is no relation between the area and the perimeter of an ordinary shape. For example, the perimeter of a rectangle of width 0.001 and length 1000 is slightly above 2000, while the perimeter of a rectangle of width 0.5 and length 2 is 5. Both areas are equal to 1.
Proclus (5th century) reported that Greek peasants "fairly" parted fields relying on their perimeters. However, a field's production is proportional to its area, not to its perimeter, so many naive peasants may have gotten fields with long perimeters but small areas (thus, few crops).
If one removes a piece from a figure, its area decreases but its perimeter may not. The convex hull of a figure may be visualized as the shape formed by a rubber band stretched around it. In the animated picture on the left, all the figures have the same convex hull; the big, first hexagon.
Isoperimetry
The isoperimetric problem is to determine a figure with the largest area, amongst those having a given perimeter. The solution is intuitive; it is the circle. In particular, this can be used to explain why drops of fat on a broth surface are circular.
This problem may seem simple, but its mathematical proof requires some sophisticated theorems. The isoperimetric problem is sometimes simplified by restricting the type of figures to be used. In particular, to find the quadrilateral, or the triangle, or another particular figure, with the largest area amongst those with the same shape having a given perimeter. The solution to the quadrilateral isoperimetric problem is the square, and the solution to the triangle problem is the equilateral triangle. In general, the polygon with $n$ sides having the largest area and a given perimeter is the regular polygon, which is closer to being a circle than is any irregular polygon with the same number of sides.
Etymology
The word comes from the Greek περίμετρος perimetros, from περί peri "around" and μέτρον metron "measure".
See also
Arclength
Area
Coastline paradox
Girth (geometry)
Pythagorean theorem
Surface area
Volume
Wetted perimeter
References
External links
Elementary geometry
Length | Perimeter | Physics,Mathematics | 1,383 |
50,242,940 | https://en.wikipedia.org/wiki/1%2C7-Octadiene | 1,7-Octadiene (C8H14) is a light, flammable organic compound.
Researchers have used 1,7-octadiene to assist ethylene in a cross-enyne metathesis Diels–Alder reaction.
Plasma polymerized 1,7-octadiene films deposited on silica can produce particles with tuned hydrophobicity.
It is known to be incompatible with strong oxidizing agents.
References
External links
Alkadienes | 1,7-Octadiene | Chemistry | 93 |
43,929,588 | https://en.wikipedia.org/wiki/Kindness%20UK | Kindness UK is an independent London-based not-for-profit organisation that promotes kindness in United Kingdom.
History
Kindness UK was founded by social entrepreneur David Jamilly in 2011 with the aim of undertaking initiatives to enhance the value and profile of kindness in society. Jamilly previously founded Pod Children's Charity in 1977, the Good Deeds Organisation in 2007 and co-founded Kindness Day UK in 2010.
Educational initiatives
Schools
In 2016 Kindness UK, in collaboration with Coolabi Group, developed a month-long campaign, Clangers for Kindness, designed to promote simple acts of kindness among children and their parents.
Universities
In partnership with the University of Sussex, the Kindness UK Doctoral Conference was launched in 2015. The Kindness UK Doctoral Conference Award is open to all academic disciplines and is dedicated to kindness and its effect on people and communities.
An interdisciplinary University of Sussex Kindness UK Symposium was held in 2016.
Public initiatives
In 2014 on Kindness Day UK, Kindness UK distributed 10,000 chocolate bars at London underground stations as a random act of kindness.
Kindness Day UK
Kindness Day UK encourages the public to recognize the value of kindness and perform at least one act of kindness on the day. Kindness UK is a lead UK organisation that promotes and celebrates Kindness Day UK.
External campaigns
Kindness UK were asked to guest judge Nissan's CARED4 competition, which aimed to find and reward kind people in the community.
References
External links
Official Website
2011 establishments in the United Kingdom
Organizations established in 2011
Kindness
Non-profit organisations based in London | Kindness UK | Biology | 296 |
55,755,758 | https://en.wikipedia.org/wiki/Votive%20column | A votive column (also votive pillar) is the combination of a column (pillar) and a votive image.
The presence of columns supporting votive sculptures in Ancient Greek temples is well attested since at least the Archaic period.
The oldest known example of a Corinthian column is in the Temple of Apollo Epicurius at Bassae in Arcadia, c. 450–420 BC. It is not part of the order of the temple itself, which has a Doric colonnade surrounding the temple and an Ionic order within the cella enclosure. A single Corinthian column stands free, centered within the cella. It is often interpreted as a votive column.
In Imperial Rome, it was the practice to erect a statue of the Emperor atop a column. The last such column was the Column of Phocas, erected in the Roman Forum and dedicated or rededicated in 608.
The Christian adaptation is the Marian column, attested from at least the 10th century (in Clermont-Ferrand in France).
The image of Our Lady of the Pillar in Zaragoza dates to the 15th century.
Marian columns became popular especially in the Counter-Reformation, beginning with the column in Piazza Santa Maria Maggiore in Rome. The column itself was ancient, a leftover of the Basilica of Constantine, which had been destroyed by an earthquake in the 9th century. In 1614 it was transported to Piazza Santa Maria Maggiore and crowned with a bronze statue of the Virgin and Child. Within decades it served as a model for many columns in Italy and other European countries, such as the Mariensäule in Munich (1638).
See also
Cult image
Stele
Asherah pole
Pillar of the Boatmen
Trajan column
Victory column
Zbruch Idol
Irminsul
Ceremonial pole
Totem pole
Axis mundi
References
Ancient Roman religion
Ancient Greek religion
Monumental columns
Religious objects | Votive column | Physics | 382 |
60,437,375 | https://en.wikipedia.org/wiki/Mori-Zwanzig%20formalism | The Mori–Zwanzig formalism, named after the physicists Hajime Mori and Robert Zwanzig, is a method of statistical physics. It allows the splitting of the dynamics of a system into a relevant and an irrelevant part using projection operators, which helps to find closed equations of motion for the relevant part. It is used e.g. in fluid mechanics or condensed matter physics.
Idea
Macroscopic systems with a large number of microscopic degrees of freedom are often well described by a small number of relevant variables, for example the magnetization in a system of spins. The Mori–Zwanzig formalism allows the finding of macroscopic equations that only depend on the relevant variables based on microscopic equations of motion of a system, which are usually determined by the Hamiltonian. The irrelevant part appears in the equations as noise. The formalism does not determine what the relevant variables are, these can typically be obtained from the properties of the system.
The observables describing the system form a Hilbert space. The projection operator then projects the dynamics onto the subspace spanned by the relevant variables. The irrelevant part of the dynamics then depends on the observables that are orthogonal to the relevant variables. A correlation function is used as a scalar product, which is why the formalism can also be used for analyzing the dynamics of correlation functions.
Derivation
A not explicitly time-dependent observable $A$ obeys the Heisenberg equation of motion

$\frac{\mathrm{d}A}{\mathrm{d}t} = iLA,$

where the Liouville operator $L$ is defined using the commutator in the quantum case and using the Poisson bracket in the classical case. We assume here that the Hamiltonian does not have explicit time-dependence. The derivation can also be generalized towards time-dependent Hamiltonians. This equation is formally solved by

$A(t) = e^{iLt}A.$

The projection operator $P$ acting on an observable $X$ is defined as

$PX = (X, A)(A, A)^{-1} A,$

where $A$ is the relevant variable (which can also be a vector of various observables), and $(\cdot,\cdot)$ is some scalar product of operators. The Mori product, a generalization of the usual correlation function, is typically used for this scalar product. For observables $X$ and $Y$, it is defined as

$(X, Y) = \frac{1}{\beta} \int_0^\beta \mathrm{d}\lambda\, \operatorname{Tr}\!\big(\bar{\rho}\, e^{\lambda H} X e^{-\lambda H} Y\big),$

where $\beta$ is the inverse temperature, Tr is the trace (corresponding to an integral over phase space in the classical case) and $H$ is the Hamiltonian. $\bar{\rho}$ is the relevant probability operator (or density operator for quantum systems). It is chosen in such a way that it can be written as a function of the relevant variables only, but is a good approximation for the actual density, in particular such that it gives the correct mean values.

Now, we apply the operator identity

$e^{iLt} = e^{iQLt} + \int_0^t \mathrm{d}s\, e^{iL(t-s)}\, iPL\, e^{iQLs}, \qquad Q = 1 - P,$

to

$\frac{\mathrm{d}}{\mathrm{d}t} e^{iLt} A = e^{iLt}\, iLA = e^{iLt}(P + Q)\, iLA.$

Using the projection operator introduced above and the definitions

$\Omega = (A, iLA)(A, A)^{-1}$ (frequency matrix),

$F(t) = e^{iQLt}\, Q\, iLA$ (random force) and

$K(t) = (F, F(t))(A, A)^{-1}$ (memory function), the result can be written as

$\frac{\mathrm{d}}{\mathrm{d}t} A(t) = \Omega A(t) - \int_0^t \mathrm{d}s\, K(s)\, A(t-s) + F(t).$

This is an equation of motion for the observable $A(t)$, which depends on its value at the current time $t$, the value at previous times (memory term) and the random force $F(t)$ (noise, which depends on the part of the dynamics that is orthogonal to $A$).
Markovian approximation
The equation derived above is typically difficult to solve due to the convolution term. Since we are typically interested in slow macroscopic variables that change on timescales much larger than the correlation time of the microscopic noise, the memory kernel can be treated as sharply peaked: the time integral over the kernel may be extended to infinity while the lag in the convolution is disregarded. Expanding the equation to second order in the coupling gives the Markovian form

$\frac{\mathrm{d}}{\mathrm{d}t} A(t) \approx \Omega A(t) - \lambda A(t) + F(t),$

where

$\lambda = \int_0^\infty K(s)\, \mathrm{d}s.$
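The effect of this approximation can be illustrated numerically. The Python sketch below integrates a scalar toy model with an exponential memory kernel and compares it with its Markovian limit; the frequency and random-force terms are dropped for clarity, and all parameter values are illustrative.

```python
import numpy as np

# Toy model:  dA/dt = -∫_0^t K(s) A(t-s) ds,  with K(s) = (g/tau) * exp(-s/tau).
# For tau much shorter than the relaxation time of A, the convolution may be
# replaced by lambda * A(t), where lambda = ∫_0^∞ K(s) ds = g (Markovian limit).
g, tau = 1.0, 0.05          # kernel weight and (short) memory time
dt, n_steps = 1e-3, 5000
t = np.arange(n_steps) * dt
K = (g / tau) * np.exp(-t / tau)

A_mem = np.empty(n_steps);  A_mem[0] = 1.0    # solution with the full memory term
A_mark = np.empty(n_steps); A_mark[0] = 1.0   # Markovian solution dA/dt = -g*A

for i in range(1, n_steps):
    # Convolution ∫_0^t K(s) A(t-s) ds, with A taken at the previous steps
    conv = np.trapz(K[:i] * A_mem[i - 1::-1], dx=dt)
    A_mem[i] = A_mem[i - 1] - dt * conv        # explicit Euler step
    A_mark[i] = A_mark[i - 1] - dt * g * A_mark[i - 1]

print(f"memory solution at t = {t[-1]:.1f}:    {A_mem[-1]:.5f}")
print(f"Markovian solution at t = {t[-1]:.1f}: {A_mark[-1]:.5f}")
```

Shrinking the memory time tau brings the two solutions progressively closer together, which is the content of the Markovian approximation.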
Generalizations
For larger deviations from thermodynamic equilibrium, the more general form of the Mori–Zwanzig formalism is used, from which the previous results can be obtained through a linearization. In this case, the projection operator carries an explicit time-dependence. The relevant variables are split as

$A_i = a_i(t) + \delta A_i,$

where $a_i(t) = \operatorname{Tr}\!\big(\bar{\rho}(t) A_i\big)$ is the mean value and $\delta A_i$ is the fluctuation about it. The resulting transport equations (written in index notation with summation over repeated indices) retain the structure of the linear theory: an organized drift for the mean values, generalized time-dependent transport coefficients in place of the frequency matrix, a memory term, and a mean random force. In the explicit expressions, the propagator $e^{iLt}$ is replaced by a negatively time-ordered exponential of the Liouvillian, and the projection onto the relevant variables is performed with a time-dependent projection operator $P(t)$ constructed from the relevant density $\bar{\rho}(t)$. These equations can also be re-written using a generalization of the Mori product. Further generalizations can be used to apply the formalism to time-dependent Hamiltonians, general relativity, and arbitrary dynamical systems.
See also
Nakajima–Zwanzig equation
Zwanzig projection operator
Notes
References
Hermann Grabert Projection operator techniques in nonequilibrium statistical mechanics, Springer Tracts in Modern Physics, Band 95, 1982
Robert Zwanzig Nonequilibrium Statistical Mechanics 3rd ed., Oxford University Press, New York, 2001
Statistical mechanics | Mori-Zwanzig formalism | Physics | 910 |
11,470,282 | https://en.wikipedia.org/wiki/Crotonyl-CoA | Crotonyl-coenzyme A is an intermediate in the fermentation of butyric acid, and in the metabolism of lysine and tryptophan. It is important in the metabolism of fatty acids and amino acids.
Crotonyl-coA and reductases
Before a 2007 report by Alber and coworkers, crotonyl-CoA carboxylases and reductases (CCRs) were known for reducing crotonyl-CoA to butyryl-CoA. That report concluded that a specific CCR homolog, from the bacterium Rhodobacter sphaeroides, was able to convert crotonyl-CoA to (2S)-ethylmalonyl-CoA, a favorable reaction.
Role of crotonyl-CoA in transcription
Post-translational modification of histones, either by acetylation or by crotonylation, is important for the active transcription of genes. Histone crotonylation is regulated by the concentration of crotonyl-CoA, which can change based on environmental cell conditions or genetic factors.
References
See also
Crotonic acid
Glutaryl-CoA dehydrogenase
Biomolecules
Metabolism
Thioesters of coenzyme A | Crotonyl-CoA | Chemistry,Biology | 269 |
43,236,838 | https://en.wikipedia.org/wiki/Infrared%20atmospheric%20sounding%20interferometer | The infrared atmospheric sounding interferometer (IASI) is a Fourier transform spectrometer based on the Michelson interferometer, associated with an integrated imaging system (IIS).
As part of the payload of the MetOp series of polar-orbiting meteorological satellites, three IASI instruments have been flown: on MetOp-A (launched 19 October 2006, with end of mission in November 2021), on MetOp-B (launched 17 September 2012) and on MetOp-C (launched in November 2018).
IASI is a nadir-viewing instrument recording infrared emission spectra from 645 to 2760 cm−1 at 0.25 cm−1 resolution (0.5 cm−1 after apodisation). Although primarily intended to provide information in near real-time on atmospheric temperature and water vapour to support weather forecasting, the concentrations of various trace gases can also be retrieved from the spectra.
Origin and development
IASI belongs to the thermal infrared (TIR) class of spaceborne instruments, which are devoted to tropospheric remote sensing. On the operational side, IASI is a replacement for the HIRS instruments, whereas on the scientific side, it continues the mission of instruments dedicated to atmospheric composition, which are also nadir-viewing Fourier transform instruments (e.g. the Atmospheric Chemistry Experiment). Thus, it blends the demands imposed by both meteorology - high spatial coverage - and atmospheric chemistry - accuracy and vertical information for trace gases. Designed by the Centre national d'études spatiales, it now combines good horizontal coverage with a moderate spectral resolution. Its counterpart on the Suomi NPP is the Cross-track Infrared Sounder (CrIS).
Under an agreement between CNES and EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites), the former was responsible for developing the instrument and data processing software. The latter is responsible for archiving and distributing the data to the users, as well as for operating IASI itself. Currently, Alcatel Space is the prime contractor of the project and oversees the production of the recurring models.
Main characteristics
Spectral range
The IASI spectral range has been chosen such that the instrument can record data from the following ranges:
carbon dioxide strong absorption around 15 μm
the ozone absorption band (ν3) around 9.6 μm
the strong water vapour absorption band (ν2) around 6.3 μm
methane absorption up to the edge of TIR
As such, the spectral range of IASI is 645 – 2760 cm−1 (15.5 – 3.62 μm). It has 8461 spectral samples, spaced 0.25 cm−1 apart and distributed over three contiguous bands: band 1 (645–1210 cm−1), band 2 (1210–2000 cm−1) and band 3 (2000–2760 cm−1). Correspondingly, the apodised spectral resolution of the measurements is 0.5 cm−1.
Each band has a specific purpose: broadly, band 1 serves temperature sounding and ozone, band 2 water vapour and minor gases such as methane and nitrous oxide, and band 3 temperature sounding and carbon monoxide.
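As a quick consistency check, the sample count quoted above follows directly from the stated range and sampling interval (a trivial Python sketch):

```python
# IASI spectral samples implied by the 645-2760 cm^-1 range at 0.25 cm^-1 sampling
lo, hi, step = 645.0, 2760.0, 0.25
n_samples = int(round((hi - lo) / step)) + 1
print(n_samples)  # 8461
```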
Sampling parameters
As an across track scanning system, IASI has a scan range of 48°20′ on either side of the nadir direction; the corresponding swath is then around 2×1100 km. Here, with respect to the flight direction of MetOp, the scanning executed by IASI starts on the left.
Also, a nominal scan line has three targets it must cover. First, a scan of the Earth where, within each step, there are 30 positions (15 in each 48°20′ branch) at which measurements are made. In addition to that, there are two views dedicated to calibration, henceforth referred to as reference views. One of the two is directed into deep space (the cold reference), while the other observes the internal blackbody (the hot reference).
The elementary (or effective) field of view (EFOV) is defined as the useful field of view at each scan position. Each such element consists of a 2×2 circular pixel matrix of what is called instantaneous fields of view (IFOV). Each of the four pixels projected on the ground is circular and has a diameter of 12 km at nadir. The shape of the IFOV at the edge of the scan line is no longer circular: across track, it measures 39 km and along track, 20 km.
Lastly, the IIS field of view is a square area, the side of which has an angular width of 59.63 mrad. Within this area, there are 64×64 pixels and they measure the same area as the EFOV above.
Data processing system
The IASI instrument produces around 1,300,000 spectra every day. It takes around 8 seconds for IASI to acquire the data from one complete across-track scan plus the onboard calibration views. The former consists of 120 interferograms, each one corresponding to one pixel. Of course, as researchers are really interested in the spectra, the data gathered by IASI has to pass through several stages of processing.
Furthermore, IASI has an allocated data transmission rate of 1.5 megabits per second (Mbit/s). However, the data production rate is 45 Mbit/s; therefore, a major part of the data processing takes place on board. As such, the transmitted data is an encoded spectrum that is band merged and roughly calibrated.
Additionally, there is an offline processing chain located at the Technical Expertise Centre, also referred to as TEC. Its task is to monitor the instrument performance, to compute the level 0 and 1 initialisation parameters in relation to the preceding point and to compute the long-term varying IASI products, as well as to monitor the Near Real Time (NTR) processing (i.e. levels 0 and 1).
IASI processing levels
There are three such processing levels for the IASI data, numbered from 0 to 2. First, Level 0 data gives the raw output of the detectors, which Level 1 transforms into spectra by applying FFT and the necessary calibrations, and finally, Level 2 executes retrieval techniques so as to describe the physical state of the atmosphere that was observed.
The first two levels are dedicated to transforming the interferograms into spectra that are fully calibrated and independent of the state of the instrument at any given time. By contrast, the third is dedicated to the retrieval of meaningful parameters not only from IASI, but from other instruments from MetOp as well.
For example, since the instrument is expected to be linear in energy, a non linearity correction is applied to the interferograms before the computation of the spectra. Next, the two reference views are used for the first step of radiometric calibration. A second step, performed on ground, is used to compensate for certain physical effects that have been ignored in the first (e.g., incidence correction for the scanning mirror, non-blackness effect etc.).
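The two reference views support a standard two-point (gain and offset) radiometric calibration, in which detector counts are mapped linearly onto radiance between the cold and hot references. The Python sketch below shows only this first, linear step; the function name and all numbers are illustrative, and the operational processing adds the second-step corrections described above.

```python
import numpy as np

def two_point_calibration(c_scene, c_cold, c_hot, rad_cold, rad_hot):
    """Map detector counts to radiance, assuming a linear detector response.

    The gain is fixed by the deep-space (cold) and internal-blackbody (hot)
    reference views; the scene radiance follows by linear interpolation.
    """
    gain = (rad_hot - rad_cold) / (c_hot - c_cold)
    return rad_cold + gain * (np.asarray(c_scene) - c_cold)

# Illustrative counts and reference radiances only:
scene_radiance = two_point_calibration(c_scene=5200.0, c_cold=1000.0,
                                       c_hot=9000.0, rad_cold=0.0, rad_hot=110.0)
print(scene_radiance)  # 57.75, in the same (arbitrary) radiance units as the references
```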
A digital processing subsystem executes a radiometric calibration and an inverse Fourier transform in order to obtain the raw spectra.
Level 0
The central objective of the Level 0 processing is to reduce the transmission rate by calibrating the spectra in terms of radiometry and merging the spectral bands. This is divided into three processing sub-chains:
Interferogram preprocessing that is concerned with:
the non-linearity correction
spike detection that prevents the use of corrupted interferograms during calibration
the computation of the NZPD (the sample number of the Zero Path Difference), which determines the pivot sample for the Fourier transform
the algorithm that applies a Fourier transform to the interferogram to give the spectrum corresponding to the measured interferogram (a minimal sketch of this step appears after this list)
The computation of the radiometric coefficients and filtering
The computation of atmospheric spectra involving applying the calibration coefficients, merging the bands and coding the spectra.
by applying a spectral scaling law, removing the offset and applying a bit mask to the merged spectra, the transmission is done at an average rate of 8.2 bits per spectral sample, without losing useful information
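Below is the minimal sketch of the Fourier-transform step referenced in the list above. It treats an ideal Michelson interferometer, in which the interferogram and the spectrum form a Fourier-transform pair; the grid size and the synthetic Gaussian band are illustrative stand-ins, and the real chain adds the ZPD determination, non-linearity correction, calibration and coding steps described above.

```python
import numpy as np

n = 4096                           # samples per (synthetic) interferogram
dx = 1.0 / (2 * 2760.0)            # path-difference step for a 2760 cm^-1 band limit
sigma = np.fft.fftshift(np.fft.fftfreq(n, d=dx))   # wavenumber axis (cm^-1)

# Toy emission spectrum: a single Gaussian band centred at 1500 cm^-1
spectrum_true = np.exp(-0.5 * ((np.abs(sigma) - 1500.0) / 80.0) ** 2)

# Synthesise the interferogram (inverse transform), then recover the spectrum (FFT)
interferogram = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(spectrum_true))).real
recovered = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(interferogram))).real

print(np.allclose(recovered, spectrum_true, atol=1e-9))  # True: the round trip closes
```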
Level 1
Level 1 is divided into three sublevels. Its main aim is to give the best estimate of the geometry of the interferometer at the time of the measurement. Several of the parameters of the estimation model are computed by the TEC processing chain and serve as input for the Level 1 estimations.
The estimation model is used as a basis to compute a more accurate model by calculating the corresponding spectral calibration and apodisation functions. This allows the removal of all spectral variability of the measurements.
Level 1a
The estimation model is used here to give the correct spectral positions of the spectra samples, since the positions are varying from one pixel to another. Moreover, certain errors ignored in Level 0 are now accounted for, such as the emissivity of the black body not being unity or the dependency of the scanning mirror on temperature.
Also, it estimates the geolocation of IASI using the results from the correlation of AVHRR and the calibrated IIS image.
Level 1b
Here, the spectra are resampled. To perform this operation, the spectra from Level 1a are over-sampled by a factor of 5. These over-sampled spectra are finally interpolated on a new constant wave-number basis (0.25 cm−1), by using a cubic spline interpolation.
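A minimal sketch of this resampling step, using SciPy's cubic spline on a synthetic spectrum defined on a slightly irregular measured grid (all values are illustrative stand-ins):

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
# Synthetic measured grid: nominally uniform, perturbed to mimic pixel-dependent sampling
grid_measured = np.linspace(645.0, 2760.0, 8461) + rng.uniform(-0.02, 0.02, 8461)
grid_measured.sort()
spectrum = np.sin(grid_measured / 50.0) + 2.0        # arbitrary smooth "spectrum"

# Constant 0.25 cm^-1 wavenumber basis, as in Level 1b
grid_uniform = np.linspace(645.0, 2760.0, 8461)
resampled = CubicSpline(grid_measured, spectrum)(grid_uniform)
print(resampled.shape)  # (8461,)
```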
Level 1c
The estimated apodisation functions are applied.
It generates the radiance cluster analysis based on AVHRR within the IASI IFOV using the IASI point spread function.
Level 2
This level is concerned with deriving geophysical parameters from the radiance measurements:
Temperature profiles
Humidity profiles
Columnar ozone amounts in thick layers
Surface temperature
Surface emissivity
Fractional cloud cover
Cloud top temperature
Cloud top pressure
Cloud phase
Total column of N2O
Total column of CO
Total column of CH4
Total column of CO2
Error covariance
Processing and equality flags
The processes here are performed synergistically with the ATOVS instrument suite, AVHRR and forecast data from numerical weather prediction.
Methods of research
Some researchers prefer to use their own retrieval algorithms, which process Level 1 data, while others use the IASI Level 2 data directly. Multiple algorithms exist to produce Level 2 data, which differ in their assumptions and formulation and therefore have different strengths and weaknesses (which can be investigated by intercomparison studies). The choice of algorithm is guided by knowledge of these limitations, the resources available and the specific features of the atmosphere that are to be investigated.
In general, algorithms are based on the optimal estimation method. This essentially involves comparing the measured spectra with an a priori spectrum. Subsequently, the a priori model is perturbed with a certain amount of the quantity one wants to measure (e.g. SO2) and the resulting spectra are once again compared to the measured ones. The process is repeated again and again, the aim being to adjust the amount of the absorber such that the simulated spectrum resembles the measured one as closely as possible. A variety of errors must be taken into consideration while perturbing the a priori, such as the error on the a priori, the instrumental error and the expected error.
Alternatively, the IASI Level 1 data can be processed by least square fit algorithms. Again, the expected error must be taken into consideration.
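The following Python sketch shows the retrieval idea in its simplest form: a one-parameter least-squares fit of an absorber column to a simulated measurement. The forward model, a Beer–Lambert attenuation of a flat background by a Gaussian cross-section, and every number in it are illustrative stand-ins, far simpler than any operational IASI retrieval.

```python
import numpy as np

wavenumber = np.linspace(1300.0, 1400.0, 500)                   # cm^-1 (illustrative)
cross_section = np.exp(-0.5 * ((wavenumber - 1350.0) / 10.0) ** 2)
background = 100.0                                              # baseline radiance

def forward(column):
    """Toy forward model: Beer-Lambert attenuation of a flat background."""
    return background * np.exp(-column * cross_section)

rng = np.random.default_rng(1)
true_column = 0.8
measured = forward(true_column) + rng.normal(0.0, 0.2, wavenumber.size)

# One-parameter grid search minimising the sum of squared residuals
candidates = np.linspace(0.0, 2.0, 2001)
costs = [np.sum((measured - forward(c)) ** 2) for c in candidates]
retrieved = candidates[int(np.argmin(costs))]
print(f"retrieved column: {retrieved:.3f} (true value {true_column})")
```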
Design
IASI's main structure comprises 6 sandwich panels that have an aluminium honeycomb core and carbon cyanate skins. Out of these, the one that supports optical sub-assemblies, electronics and mechanisms is called the main panel.
The instrument's thermal architecture was engineered to split IASI into independent enclosures, optimising the design of each enclosure in particular. For example, the optical components can be found in a closed volume containing only low-dissipation elements, while the cube corners are exterior to this volume. Furthermore, the enclosure which contains the interferometer is almost entirely decoupled from the rest of the instrument by Multi-Layer Insulation (MLI). This provides a very good thermal stability for the optics of the interferometer: the temporal and spatial gradients are less than 1 °C, which is important for the radiometric calibration performance. Furthermore, other equipment is either sealed in specific enclosures (such as the dissipative electronics and laser sources) or thermally controlled through the thermal control section of the main structure (for example, the scan mechanisms and the blackbody).
Upon entering the interferometer, the light will encounter the following instruments:
Scan mirror, which provides the ±48.3° swath symmetrically about the nadir. Moreover, it views the hot and cold calibration targets (the internal blackbody and deep space, respectively). For the step-by-step scene scanning, fluid-lubricated bearings are used.
Off-axis afocal telescope which transfers the aperture stop onto the scan mirror.
Michelson interferometer, which has the general structure of a Michelson interferometer but with two silicon carbide cube-corner mirrors in place of plane mirrors. The advantage of using corner reflectors over plane mirrors is that the latter would impose dynamic alignment.
Folding and off-axis focusing mirrors of which the first directs the recombined beam onto the latter. This results in an image of the Earth forming at the entrance of the cold box.
The cold box, which contains: aperture stops, field stops, a field lens that images the aperture stop on the cube corners, dichroic plates dividing the whole spectral range into the three spectral bands, lenses which produce an image of the field stop onto the detection unit, three focal planes equipped with microlenses (whose role is to image the aperture stop on the detectors), and preamplifiers.
So as to reduce the instrument background and the thermo-electronic detector noise, the temperature of the cold box is maintained at 93 K by a passive cryogenic cooler. This was preferred to a cryogenic machine because the vibration levels of the latter could potentially cause a degradation of the spectral quality.
Measures against ice contamination
Ice accumulation on the optical surfaces causes a loss of transmission. In order to reduce IASI's sensitivity to ice contamination, the emissive cavities have been fitted with two vent holes.
Moreover, it was necessary to ensure protection for the cold optics from residual contamination. To achieve this, sealing improvements have been made (bellows and joints).
External links
IASI at Centre national d'études spatiales
IASI scanning the Earth
IASI at TACT, LATMOS
IASI at EODG, University of Oxford
References
Interferometers
Atmospheric sounding satellite sensors
Satellite meteorology | Infrared atmospheric sounding interferometer | Technology,Engineering | 2,974 |
979,503 | https://en.wikipedia.org/wiki/Graham%20number | The Graham number or Benjamin Graham number is a figure used in securities investing that measures a stock's so-called fair value. Named after Benjamin Graham, the founder of value investing, the Graham number can be calculated as follows:

$\sqrt{22.5 \times (\text{earnings per share}) \times (\text{book value per share})}$
The final number is, theoretically, the maximum price that a defensive investor should pay for the given stock. Put another way, a stock priced below the Graham Number would be considered a good value, if it also meets a number of other criteria.
The number represents the geometric mean of the maximum that one would pay based on earnings and the maximum that one would pay based on book value. Graham's rule was that the product of the price-to-earnings ratio and the price-to-book ratio should not exceed 22.5, corresponding to a maximum price-to-earnings ratio of 15 and a maximum price-to-book ratio of 1.5; this is the origin of the factor 22.5 in the formula.
Alternative calculation
Earnings per share is calculated by dividing net income by shares outstanding. Book value is another way of saying shareholders' equity; therefore, book value per share is calculated by dividing equity by shares outstanding. Consequently, the formula for the Graham number can also be written as follows:

$\sqrt{22.5 \times \dfrac{\text{net income}}{\text{shares outstanding}} \times \dfrac{\text{shareholders' equity}}{\text{shares outstanding}}}$
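A minimal Python sketch of both forms of the calculation (the input figures are illustrative):

```python
import math

def graham_number(eps: float, book_value_per_share: float) -> float:
    """Maximum defensive-investor price: sqrt(22.5 * EPS * book value per share)."""
    return math.sqrt(22.5 * eps * book_value_per_share)

def graham_number_from_totals(net_income: float, equity: float, shares: float) -> float:
    """Equivalent form using company totals instead of per-share figures."""
    return math.sqrt(22.5 * (net_income / shares) * (equity / shares))

# Illustrative figures: EPS of $4 and book value of $30 per share...
print(round(graham_number(4.0, 30.0), 2))                   # 51.96
# ...or the same company stated as totals over 100 million shares
print(round(graham_number_from_totals(4e8, 3e9, 1e8), 2))   # 51.96
```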
See also
Altman Z-score
Beneish M-score
Ohlson O-score
Fundamental analysis
Magic formula investing
Value investing
References
Valuation (finance)
Mathematical finance | Graham number | Mathematics | 210 |
38,207,359 | https://en.wikipedia.org/wiki/Russula%20herrerae | Russula herrerae is an edible mushroom in the genus Russula. Described as new to science in 2002, it is found only in its type locality in Mexico, where it grows in temperate oak forests near the village of San Francisco Temezontla in the state of Tlaxcala. The specific epithet herrerae honors Mexican mycologist Teófilo Herrera. R. herrerae is classified in the section Plorantes, subsection Lactarioideae.
Description
The fruit bodies have a white cap that is in diameter. The cap margin is appendiculate, meaning that there are patches of the partial veil attached to it. The brittle white gills have an adnate to decurrent attachment to the stem and are distantly spaced, with many lamellulae (short gills) interspersed between them. The white to yellowish stem measures long by thick and is equal in width throughout, or tapers towards the base. The color of the spore print ranges from white to pale cream.
The mushrooms are considered edible by most inhabitants of San Francisco Temezontla, who call it hongo blanco (white mushroom) or hongo blanco de ocote (pine white mushroom).
See also
List of Russula species
References
herrerae
Edible fungi
Fungi described in 2002
Fungi of Mexico
Fungi without expected TNC conservation status
Fungus species | Russula herrerae | Biology | 272 |
21,637 | https://en.wikipedia.org/wiki/New%20Year | The New Year is the time or day at which a new calendar year begins and the calendar's year count increments by one. Many cultures celebrate the event in some manner. In the Gregorian calendar, the most widely used calendar system today, New Year occurs on January 1 (New Year's Day, preceded by New Year's Eve). This was also the first day of the year in the original Julian calendar and the Roman calendar (after 153 BC).
Other cultures observe their traditional or religious New Year's Day according to their own customs, typically (though not invariably) because they use a lunar calendar or a lunisolar calendar. Chinese New Year, the Islamic New Year, Tamil New Year (Puthandu), and the Jewish New Year are among well-known examples. India, Nepal, and other countries also celebrate New Year on dates according to their own calendars that are movable in the Gregorian calendar.
During the Middle Ages in Western Europe, while the Julian calendar was still in use, authorities moved New Year's Day, depending upon locale, to one of several other days, including March 1, March 25, Easter, September 1, and December 25. Since then, many national civil calendars in the Western World and beyond have changed to using one fixed date for New Year's Day, January 1most doing so when they adopted the Gregorian calendar.
By type
Depending on the calendar used, New Year celebrations are often categorized as lunar, lunisolar, or solar.
By month or season
January
January 1: The first day of the civil year in the Gregorian calendar used by most countries.
Contrary to common belief in the west, the civil New Year of January 1 is not an Orthodox Christian religious holiday. The Eastern Orthodox liturgical calendar makes no provision for the observance of a New Year. January 1 is itself a religious holiday, but that is because it is the feast of the circumcision of Christ (seven days after His birth), and a commemoration of saints. While the liturgical calendar begins September 1, there is also no particular religious observance attached to the start of the new cycle. Orthodox nations may, however, make civil celebrations for the New Year. Those who adhere to the revised Julian calendar (which synchronizes dates with the Gregorian calendar), including Bulgaria, Cyprus, Egypt, Greece, Romania, Syria, Turkey and Ukraine, observe both the religious and civil holidays on January 1. In other nations and locations where Orthodox churches still adhere to the Julian calendar, including Georgia, Israel, Russia, the Republic of Macedonia, Serbia, Montenegro and Russian-occupied Ukraine, the civil new year is observed on January 1 of the civil calendar, while those same religious feasts occur on January 14 Gregorian (which is January 1 Julian), in accord with the liturgical calendar.
The Japanese New Year (正月, Shōgatsu) is currently celebrated on January 1, with the holiday usually being observed until January 3, while other sources say that Shōgatsu lasts until January 6. In 1873, five years after the Meiji Restoration, Japan adopted the Gregorian calendar. Prior to 1873, Japan used a lunar calendar with twelve months each of 29 or 30 days for a total year of about 354 days.
The Sámi celebrate Ođđajagemánnu.
Winter lunar new years
The Chinese New Year, also known as Spring Festival or Lunar New Year, occurs every year on the new moon of the first lunar month, about the beginning of spring (Lichun). The exact date can fall any time between January 21 and February 21 (inclusive) of the Gregorian Calendar. Traditionally, years were marked by one of twelve Earthly Branches, represented by an animal, and one of ten Heavenly Stems, which correspond to the five elements. This combination cycles every 60 years. It is the most important Chinese celebration of the year.
The Korean New Year is a Seollal or Lunar New Year's Day. Although January 1 is, in fact, the first day of the year, Seollal, the first day of the lunar calendar, is more meaningful for Koreans. A celebration of the Lunar New Year is believed to have started to let in good luck and ward off bad spirits all throughout the year. With the old year out and a new one in, people gather at home and sit around with their families and relatives, catching up on what they have been doing.
The Vietnamese New Year is the Tết Nguyên Đán which most times is the same day as the Chinese New Year due to the Vietnamese using a Lunar calendar similar to the Chinese calendar.
The Tibetan New Year is Losar and falls between January and March.
The Taiwanese New Year is called Kuè-nî and falls between January and March.
March
Babylonian New Year began with the first New Moon after the northward equinox. Ancient celebrations lasted for eleven days.
Nava Varsha is celebrated in India in various regions from March–April.
The Iranian New Year, called Nowruz, is the day containing the exact moment of the Northward equinox, which usually occurs on March 20 or 21, marking the start of the spring season. The Zoroastrian New Year coincides with the Iranian New Year of Nowruz and is celebrated by the Parsis in India and by Zoroastrians and Persians across the world. In the Baháʼí calendar, the new year occurs on the vernal equinox on March 20 or 21 and is called Naw-Rúz. The Iranian tradition was also passed on to Central Asian countries, including Kazakhs, Uzbeks, and Uighurs, and there is known as Nauryz. It is usually celebrated on March 22.
The Balinese New Year, based on the Saka Calendar (Balinese-Javanese Calendar), is called Nyepi, and it falls on Bali's Lunar New Year (around March). It is a day of silence, fasting, and meditation: observed from 6 am until 6 am the next morning, Nyepi is a day reserved for self-reflection and as such, anything that might interfere with that purpose is restricted. Although Nyepi is a primarily Hindu holiday, non-Hindu residents of Bali observe the day of silence as well, out of respect for their fellow citizens. Even tourists are not exempt; although free to do as they wish inside their hotels, no one is allowed onto the beaches or streets, and the only airport in Bali remains closed for the entire day. The only exceptions granted are for emergency vehicles carrying those with life-threatening conditions and women about to give birth.
Ugadi, the Telugu and Kannada New Year, generally falls in the months of March or April. The people of Andhra Pradesh, Telangana and Karnataka states in southern India celebrate the advent of New Year's Day in these months. The first month of the new year is Chaitra Masa.
In the Kashmiri calendar, the holiday Navreh marks the New Year in March–April. This holy day of Kashmiri Brahmins has been celebrated for several millennia.
Gudi Padwa is celebrated as the first day of the Hindu year by the people of Maharashtra, India and Sanskar Padwa is celebrated in Goa. This day falls in March–April and coincides with Ugadi. (see: Deccan)
The Sindhi festival of Cheti Chand is celebrated on the same day as Ugadi/Gudi Padwa to mark the celebration of the Sindhi New Year.
The Thelemic New Year on March 20 (or on April 8 by some accounts) is usually celebrated with an invocation to Ra-Hoor-Khuit, commemorating the beginning of the New Aeon in 1904. It also marks the start of the twenty-two-day Thelemic holy season, which ends on the third day of the writing of The Book of the Law. This date is also known as The Feast of the Supreme Ritual. There are some that believe the Thelemic New Year falls on either March 19, 20, or 21, depending on the vernal equinox, which is The Feast for the Equinox of the Gods on the vernal equinox of each year to commemorate the founding of Thelema in 1904. In 1904 the vernal equinox was on March 21, and it was the day after Aleister Crowley ended his Horus Invocation that brought on the new Æon and Thelemic New Year.
April
The Chaldean-Babylonian New Year, called Kha b'Nissan or Resha d'Sheeta, occurs on April 1.
Thelemic New Year Celebrations usually end on April 10, after an approximately one-month-long period that begins on March 20 (the formal New Year). This one-month period is referred to by many as the High Holy Days, and end with periods of observance on April 8, 9, and 10, coinciding with the three days of the Writing of the Book of the Law by Aleister Crowley in 1904.
Mid-April (Spring in the Northern Hemisphere)
The new year of many South and Southeast Asian calendars falls between April 13–15, marking the beginning of spring.
The Baloch Hindu people in Pakistan and India celebrate their new year called Bege Roch in the month of Daardans according to their Saaldar calendar.
Tamil New Year (Puthandu) is celebrated in the South Indian state of Tamil Nadu, on the first of Chithrai (சித்திரை) (April 13, 14, or 15). In the temple city of Madurai, the Chithrai Thiruvizha is celebrated in the Meenakshi Temple. A huge exhibition is also held, called Chithrai Porutkaatchi. In some parts of Southern Tamil Nadu, it is also called Chithrai Vishu. The day is marked with a feast in Hindu homes and the entrances to the houses are decorated elaborately with kolams.
Punjabi/Sikh Vaisakhi (ਵਿਸਾਖੀ) is celebrated on April 14 in Punjab according to their nanakshahi calendar.
Nepal New Year in Nepal is celebrated on the 1st of Baisakh Baisākh which falls on 12–15 April in the Gregorian calendar. Nepal follows the Bikram Sambat (BS) as an official calendar.
The Dogra of Himachal Pradesh celebrate their new year Chaitti in the month of Chaitra.
Maithili New Year or Jude-Sheetal too fall on these days. It is celebrated by Maithili People all around the world.
Assamese New Year (Rongali Bihu or Bohag Bihu) is celebrated on April 14 or 15 in the Indian state of Assam.
Bengali New Year (Pôhela Boishakh or Bangla Nôbobôrsho) is celebrated on the 1st of Boishakh (April 14 or 15) in Bangladesh and the Indian states of West Bengal and Tripura.
Odia New Year (Vishuva Sankranti) is celebrated on April 14 in the Indian state of Odisha. It is also called Vishuva Sankranti or Pana Sankranti (ପଣା ସଂକ୍ରାନ୍ତି).
Manipuri New Year or Cheirouba is celebrated on April 14 in the Indian State of Manipur with much festivities and feasting.
Sinhalese New Year is celebrated with the harvest festival (in the month of Bak) when the sun moves from the Meena Rashiya (House of Pisces) to the Mesha Rashiya (House of Aries). Sri Lankans begin celebrating their National New Year "Aluth Avurudda (අලුත් අවුරුද්ද)" in Sinhala and "Puththandu (புத்தாண்டு)" in Tamil. However, unlike the usual practice where the new year begins at midnight, the National New Year begins at the time determined by the astrologers by calculating the exact time that sun goes from Meena Rashiya (House of Pisces) to the Mesha Rashiya (House of Aries). Not only the beginning of the new year but the conclusion of the old year is also specified by the astrologers. And unlike the customary ending and beginning of the new year, there is a period of a few hours in between the conclusion of the Old Year and the commencement of the New Year, which is called the "nona gathe" (neutral period) Where part of the sun in House of Pisces and Part is in House of Aries.
Malayali New Year (Vishu) is celebrated in the South Indian state of Kerala in mid-April.
Western parts of Karnataka where Tulu is spoken, the new year is celebrated along with Tamil/ Malayali New year April 14 or 15, although in other parts most commonly celebrated on the day of Gudi Padwa, the Maharashtrian new year. In Kodagu, in Southwestern Karnataka, however, both new year, Yugadi (corresponding to Gudi Padwa in March) and Bisu (corresponding to Vishu in around April 14 or 15), are observed.
The Water Festival is the form of similar new year celebrations taking place in many Southeast Asian countries, on the day of the full moon of the 11th month on the lunisolar calendar each year. The date of the festival is based on the traditional lunisolar calendar which determines the dates of Buddhist festivals and holidays, and is observed from April 13 to 15. Traditionally people gently sprinkled water on one another as a sign of respect, but since the new year falls during the hottest month in Southeast Asia, many people end up dousing strangers and passersby in vehicles in boisterous celebration. The festival has many different names specific to each country:
In Burma it is known as Thingyan
Songkran in Thailand
Pi Mai Lao (Songkan) in Laos
Chaul Chnam Thmey in Cambodia.
It is also the traditional new year of the Dai peoples of Yunnan Province, China. Religious activities in the tradition of Theravada Buddhism are also carried out, a tradition in which all of these cultures share.
June
The New Year of the Kutchi people occurs on Ashadi Beej, that is, the 2nd day of Shukla paksha of the Aashaadha month of the Hindu calendar. For the people of Kutch, this day is associated with the beginning of the rains in Kutch, which is largely a desert area. The Hindu calendar month of Aashaadh usually begins on June 22 and ends on July 22.
Odunde Festival is a celebration on the 2nd Sunday of June, where "Odunde" means "Happy New Year" in the Yoruba language of Nigeria.
The Xooy ceremony of the Serer people of Senegal, Gambia and Mauritania marks the Serer New Year.
In the Dogon religion, the Bulo festival marks the Dogon New Year.
July
The New Year of the Zulu people occurs on the full moon of July.
September
Neyrouz, the Coptic New Year, is the continuation of the ancient Egyptian New Year following the Roman emperor Augustus's reform of its calendar. Its date of Thoth 1 usually occurs on August 29 in the Julian calendar, except in the year before a Julian leap year, when it occurs the next day. The leap years removed from the Gregorian calendar mean that it presently falls on September 11 or 12 but on different days before 1900 or after 2100.
Enkutatash, the Ethiopian New Year, occurs on the same day as Neyrouz.
The New Year of the French Revolutionary Calendar, in force from 1793 to 1805 and briefly under the Paris Commune in 1871, occurred on the Southward equinox (22, 23, or 24 September)
Autumn in the Northern Hemisphere
Rosh Hashanah (Hebrew for 'head of the year') is a Jewish, two day holiday, commemorating the culmination of the seven days of Creation, and marking God's yearly renewal of His world. The day has elements of festivity and introspection, as God is traditionally believed to be assessing His creation and determining the fate of all men and creatures for the coming year. In Jewish tradition, honey is used to symbolize a sweet new year. At the traditional meal for that holiday, apple slices are dipped in honey and eaten with blessings recited for a good, sweet new year. Some Rosh Hashanah greetings show honey and an apple, symbolizing the feast. In some congregations, small straws of honey are given out to usher in the new year.
The Pathans Kalasha celebrate their Chowmus which marks the beginning of their year in Chitral district of Pakistan and parts of India.
The Marwari New Year (Thapna) is celebrated on the day of the festival of Diwali, which is the last day of Krishna Paksha of the Ashvin month and also the last day of the Ashvin month of the Hindu calendar.
The Gujarati New Year (Bestu/Nao Varas) is celebrated the day after the festival of Diwali (which occurs in mid-fall – either October or November, depending on the Lunar calendar). The Gujarati New Year is synonymous with sud ekam, i.e. first day of Shukla paksha of the Kartik month, which is taken as the first day of the first month of the Gujarati lunar calendar. Most other Hindus celebrate the New Year in early spring. The Gujarati community all over the world celebrates the New Year after Diwali to mark the beginning of a new fiscal year.
The Sikkimese celebrate their new year called Losar.
The Nepal Era New year (see Nepal Sambat) is celebrated in regions encompassing original Nepal. The new year occurs on the fourth day of Diwali. The calendar was used as an official calendar until the mid-19th century. However, the new year is still celebrated by the Newars community of Nepal.
Some neo-pagans celebrate their interpretation of Samhain (a festival of the ancient Celts, held around November 1) as a New Year's Day representing the new cycle of the Wheel of the Year, although they do not use a different calendar that starts on this day.
December
The Mizo in northeast India celebrate their Pawl Kut in December.
The Inuit, the Aleut, the Yupik, the Chukchi and the Iñupiat celebrate Quviasukvik as their New Year. It occurs on the same day as Christmas Eve.
Variable
The Islamic New Year occurs on the first day of Muharram. Since the Islamic calendar is based on 12 lunar months amounting to about 354 days, its New Year occurs about eleven days earlier each year in relation to the Gregorian calendar, with two Islamic New Years falling in the Gregorian year 2008.
Satu Suro is the Javanese New Year, which falls on the first day of the month of Suro and corresponds with the first Islamic month of Muharram. Most Javanese in Java, Indonesia, celebrate it by staying at home and refraining from leaving the house.
The "Opening of the Year" (; ), usually transcribed as Wep Renpet, was the ancient Egyptian New Year. It appears to have originally been set to occur upon Sirius's return to the night sky (July 19 proleptic Julian calendar), during the initial stages of former annual flood of the Nile. However the Egyptian calendar's lack of leap years, until its reform by the Roman emperor Augustus, meant that the celebration slowly cycled through the entire solar year over the course of two or three 1460-year Sothic cycles.
Christian liturgical year
The early development of the Christian liturgical year coincided with the Roman Empire (east and west), and later the Byzantine Empire, both of which employed a taxation system labeled the Indiction, the years for which began on September 1. This timing may account for the ancient church's establishment of September 1 as the beginning of the liturgical year, despite the official Roman New Year's Day of January 1 in the Julian calendar, because the Indiction was the principal means for counting years in the empires, apart from the reigns of the Emperors. The September 1 date prevailed throughout all of Christendom for many centuries, until subsequent divisions eventually produced revisions in some places.
After the sack of Rome in 410, communications and travel between east and west deteriorated. Liturgical developments in Rome and Constantinople did not always match, although a rigid adherence to form was never mandated in the church. Nevertheless, the principal points of development were maintained between east and west. The Roman and Constantinopolitan liturgical calendars remained compatible even after the East-West Schism in 1054. Separations between the Catholic General Roman Calendar and Eastern Orthodox liturgical calendar grew only over several centuries' time. During those intervening centuries, the Latin Church Catholic ecclesiastic year was moved to the first day of Advent, the Sunday nearest to St. Andrew's Day (November 30). By the time of the Reformation (early 16th century), the Roman Catholic general calendar provided the initial basis for the calendars for the liturgically oriented Protestants, including the Anglican and Lutheran Churches, who inherited this observation of the liturgical new year.
The present-day Eastern Orthodox liturgical calendar is the virtual culmination of the ancient eastern development cycle, though it includes later additions based on subsequent history and lives of saints. It still begins on September 1, proceeding annually into the Nativity of the Theotokos (September 8) and Exaltation of the Cross (September 14) to the celebration of Nativity of Christ (Christmas), through his death and resurrection (Pascha/Easter), to his Ascension and the Dormition of the Theotokos ("falling asleep" of the Virgin Mary, August 15). This last feast is known in the Roman Catholic church as the Assumption. The dating of "September 1" is according to the "new" (revised) Julian calendar or the "old" (standard) Julian calendar, depending on which is used by a particular Orthodox Church. Hence, it may fall on September 1 on the civil calendar, or on September 14 (between 1900 and 2099 inclusive).
The liturgical calendars of the Coptic and Ethiopian Orthodox churches are unrelated to these systems but instead follow the Alexandrian calendar which fixed the wandering ancient Egyptian calendar to the Julian year. Their New Year celebrations on Neyrouz and Enkutatash were fixed; however, at a point in the Sothic cycle close to the Indiction; between the years 1900 and 2100, they fall on September 11 during most years and September 12 in the years preceding a leap year.
Historical European new year dates
During the Roman Republic and the Roman Empire, years began on the date on which each consul first entered the office. This was probably May 1 before 222 BC, March 15 from 222 BC to 154 BC, and January 1 from 153 BC. In 45 BC, when Julius Caesar's new Julian calendar took effect, the Senate fixed January 1 as the first day of the year. At that time, this was the date on which those who were to hold civil office assumed their official position, and it was also the traditional annual date for the convening of the Roman Senate. This civil new year remained in effect throughout the Roman Empire, east and west, during its lifetime and well after, wherever the Julian calendar continued in use.
In the Middle Ages in Europe a number of significant feast days in the ecclesiastical calendar of the Roman Catholic Church came to be used as the beginning of the Julian year:
In Modern Style or Circumcision Style dating, the new year started on January 1, the Feast of the Circumcision of Christ.
In Annunciation Style or Lady Day Style dating the new year started on March 25, the feast of the Annunciation (traditionally nicknamed Lady Day). This date was used in many parts of Europe during the Middle Ages and beyond.
In Easter Style dating, the new year started on Holy Saturday (the day before Easter), or sometimes on Good Friday. This was used all over Europe, but especially in France, from the eleventh to the sixteenth century. A disadvantage of this system was that because Easter was a movable feast the same date could occur twice in a year; the two occurrences were distinguished as "before Easter" and "after Easter".
In Christmas Style or Nativity Style dating the new year started on December 25. This was used in Germany and England until the eleventh century, and in Spain from the fourteenth to the sixteenth century.
Over the centuries, countries changed between styles until the Modern Style (January 1) prevailed. For example,
In England and Ireland, either Annunciation Style (March 25) or Nativity Style (December 25) was used until the Norman Conquest in 1066, when Modern Style (January 1) was adopted; but Annunciation Style was used again from 1155.
Scotland changed from Annunciation Style (March 25) to Modern Style with effect from January 1, 1600 (by Order of the King's Privy Council on December 17, 1599).
Despite the unification of the Scottish and English royal crowns with the accession of King James VI and I in 1603, and even the union of the kingdoms themselves in 1707, England continued using Annunciation Style while Scotland used Modern Style.
The final change came when Parliament passed the Calendar (New Style) Act 1750. This act had two major elements: it converted all parts of the British Empire to use of the Gregorian calendar and simultaneously it declared the civil new year in England, Wales, Ireland and the Colonies to be January 1 (as was already the case in Scotland). It went into effect on 3 September (Old Style) or 14 September (New Style) 1752.
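The 11-day jump follows from the way the Gregorian reform drops three leap days every 400 years. A minimal sketch of the secular difference between the two calendars (valid for dates from March of the given year onward):

```python
# Days by which the Julian calendar lags the Gregorian in a given year.
# Valid for March onward; the difference changes only in century years.
def julian_gregorian_gap(year: int) -> int:
    century = year // 100
    return century - century // 4 - 2

print(julian_gregorian_gap(1752))  # 11 -> 3 September (OS) became 14 September (NS)
print(julian_gregorian_gap(2024))  # 13 -> the current offset
```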
A more unusual case is France, which observed the Northern autumn equinox day (usually September 22) as "New Year's Day" in the French Republican Calendar, which was in use from 1793 to 1805. This was primidi Vendémiaire, the first day of the first month.
Adoptions of January 1
It took quite a long time before January 1 again became the universal or standard start of the civil year, and the year of its adoption varied widely from country to country.
March 1 was the first day of the numbered year in the Republic of Venice until its destruction in 1797, and in Russia from 988 until 1492 (Anno Mundi 7000 in the Byzantine calendar). September 1 was used in Russia from 1492 (A.M. 7000) until the adoption of both the Anno Domini notation and 1 January as New Year's Day, with effect from 1700, via December 1699 decrees (1735, 1736) of Tsar Peter I.
Time zones
Because of the division of the globe into time zones, the new year moves progressively around the globe as the start of the day ushers in the New Year. The first time zone to usher in the New Year, just west of the International Date Line, is located in the Line Islands, a part of the nation of Kiribati, and has a time zone 14 hours ahead of UTC. All other time zones are 1 to 25 hours behind, most in the previous day (December 31); on American Samoa and Midway, it is still 11 pm on December 30. These are among the last inhabited places to observe the New Year. However, the uninhabited outlying US territories of Howland Island and Baker Island are designated as lying within the time zone 12 hours behind UTC, the last places on Earth to see the arrival of January 1. These small coral islands are found about midway between Hawaii and Australia, about 1,000 miles west of the Line Islands. This is because the International Date Line is a composite of local time zone arrangements, which winds through the Pacific Ocean, allowing each locale to remain most closely connected in time with the nearest, largest, or most convenient political and economic locales with which each associates. By the time Howland Island sees the new year, it is 2 am on January 2 in the Line Islands of Kiribati.
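A toy calculation, assuming fixed whole-hour UTC offsets (real zones also shift with daylight saving and include half-hour offsets), shows how midnight sweeps around the globe:

```python
from datetime import datetime, timedelta, timezone

# When each offset reaches midnight on 1 January, expressed in UTC.
# Offsets are illustrative: UTC+14 (Line Islands) first, UTC-12 (Howland/Baker) last.
new_year_local = datetime(2025, 1, 1, 0, 0)
for hours in (14, 0, -11, -12):
    tz = timezone(timedelta(hours=hours))
    utc_moment = new_year_local.replace(tzinfo=tz).astimezone(timezone.utc)
    print(f"UTC{hours:+03d}: midnight falls at {utc_moment:%Y-%m-%d %H:%M} UTC")
```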
Gallery of celebrations
See also
Old New Year (or Orthodox New Year, Julian New Year)
Notes
References
Sources
External links
Calendars
Kigo | New Year | Physics | 5,772 |
44,750,391 | https://en.wikipedia.org/wiki/Griffiths%20group | In mathematics, more specifically in algebraic geometry, the Griffiths group of a projective complex manifold X measures the difference between homological equivalence and algebraic equivalence, which are two important equivalence relations of algebraic cycles.
More precisely, it is defined as

  Griff^k(X) := Z^k(X)_hom / Z^k(X)_alg

where Z^k(X) denotes the group of algebraic cycles of some fixed codimension k and the subscripts indicate the subgroups that are homologically trivial, respectively algebraically equivalent to zero.
This group was introduced by Phillip Griffiths, who showed that for a general quintic hypersurface in P^4 (projective 4-space), the group Griff^2(X) is not a torsion group.
Notes
References
Algebraic geometry | Griffiths group | Mathematics | 122 |
53,502,028 | https://en.wikipedia.org/wiki/Gambierol | Gambierol is a marine polycyclic ether toxin which is produced by the dinoflagellate Gambierdiscus toxicus. Gambierol is collected from the sea at the Rangiroa Peninsula in French Polynesia. The toxins are accumulated in fish through the food chain and can therefore cause human intoxication. The symptoms of the toxicity resemble those of ciguatoxins, which are extremely potent neurotoxins that bind to voltage-sensitive sodium channels and alter their function. These ciguatoxins cause ciguatera fish poisoning. Because of the resemblance, there is a possibility that gambierol is also responsible for ciguatera fish poisoning. Because the natural source of gambierol is limited, biological studies are hampered. Therefore, chemical synthesis is required.
Structure and reactivity
Gambierol is a ladder-shaped polyether, composed of eight ether rings, 18 stereocenters, and two challenging pyranyl rings having methyl groups that are in a 1,3-diaxial orientation to one another.
Different structural analogues were synthesized to determine which groups and side chains of gambierol are essential for its toxicity. Not only was the fused polycyclic ether core essential; the triene side chain at C51 and the C48-C49 double bond were also indispensable. Within the triene side chain, the double bond between C57 and C58 was essential. The C1 and C8 hydroxy groups were not essential, but they enhance the activity, as does the conjugated diene in the triene side chain.
Synthesis
The synthesis of gambierol proceeds by fusing two tetracyclic precursor molecules, an alcohol and an acetic acid derivative. After the octacyclic backbone is obtained, the tail is added via a Stille coupling. The acetic acid (compound 1) and the alcohol (compound 2) are first converted to compound 3. Reaction of compound 3 with the titanium alkylidene derived from 1,1-dibromoethane provides a cyclic enol ether (compound 4). Oxidation of the alcohols gives mainly compound 5, along with some compound 6; both are ketones, differing only in stereochemistry. Compound 6 can be converted back into compound 5 with reactant c, thereby shifting the equilibrium towards compound 5. This ketone can be converted further towards gambierol: reductive cyclization of the D ring furnishes the octacyclic core structure (compound 7). Oxidation of compound 7 to the aldehyde is followed by formation of the diiodoolefin. Stereoselective reduction, global deprotection and Stille coupling of compound 8 with a dienyl stannane (compound 9) provide gambierol.
Mechanism of action
Gambierol acts as a low-efficacy partial agonist at voltage-gated sodium channels (VGSCs) and as a high-affinity inhibitor of voltage-gated potassium currents. It reduces the current through potassium channels irreversibly by stabilizing some of the channels in the closed state. It acts as an intramembrane anchor, displacing lipids and preventing the voltage sensor domain of the channel from moving during physiologically important changes. This keeps the channel in the closed state and lowers the current. Gambierol also decreases the amplitude of inward sodium currents and shifts the activation of the inward sodium current toward hyperpolarized potentials.
Gambierol has a particularly high affinity for Kv1.1-Kv1.5 channels and for the Kv3.1 channel. Kv1.1-Kv1.5 channels are responsible for repolarization of the membrane potential. The Kv1.3 channel, however, has an additional function in regulating the Ca2+ signaling of T cells: if these channels are blocked, T cells at the site of inflammation are paralysed and are not reactivated. Kv3.1 channels are responsible for the high-frequency firing of action potentials. If the Kv channels are closed, the depolarized membrane cannot repolarize to its resting state, resulting in sustained depolarization. This leads to paralysis of, for example, the respiratory system and therefore suffocation of the organism.
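To see why blocking repolarizing Kv channels prolongs depolarization, consider a minimal single-compartment membrane sketch. This is a toy model with illustrative parameter values, not a quantitative model of gambierol pharmacology; `gK_block` is a hypothetical fraction of Kv channels locked closed.

```python
# Minimal sketch (not a validated neuron model): a leaky membrane
# repolarizing after a spike, with part of the Kv conductance blocked.
def repolarize(gK_block, V0=20.0, dt=0.01, t_max=50.0):
    C, gK, gL = 1.0, 0.5, 0.05        # uF/cm^2 and mS/cm^2 (illustrative values)
    EK, EL = -90.0, -65.0             # reversal potentials, mV
    gK_eff = gK * (1.0 - gK_block)    # blocked channels carry no current
    V = V0
    for step in range(int(t_max / dt)):
        dV = -(gK_eff * (V - EK) + gL * (V - EL)) / C
        V += dV * dt
        if V <= -60.0:                # arbitrary "repolarized" threshold
            return step * dt
    return float("inf")               # did not repolarize within t_max

for block in (0.0, 0.5, 0.9):
    print(f"{block:.0%} Kv block -> repolarization in {repolarize(block):.1f} ms")
```

Increasing the blocked fraction lengthens the time the membrane takes to return below threshold, mirroring the sustained depolarization described above.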
In neurons, gambierol has been reported to induce Ca2+ oscillations as a result of the blockage of voltage-gated potassium channels. The Ca2+ oscillations involve glutamate release and activation of NMDA receptors (NMDARs, a class of glutamate receptors); this is, however, secondary to the blockade of potassium channels. The oscillations reduce the cytoplasmic Ca2+ threshold for the activation of Ras. Ras stimulates MAPKs to phosphorylate ERK1/2, which induces outgrowth of neurites. This effect is, however, dependent on intracellular Ca2+ concentrations and on NMDAR interactions, both of which work bidirectionally.
An increase in intracellular calcium concentration also activates nitric oxide synthase to produce nitric oxide. In combination with superoxide, nitric oxide forms peroxynitrite and causes oxidative stress in various tissues. This contributes to the toxic symptoms that follow intake of gambierol.
Metabolism
The metabolism of gambierol is not yet known, but the expectation is that gambierol behaves much like the ciguatoxins. Ciguatoxins are polycyclic polyether compounds with molecular weights between 1,023 and 1,159 daltons. Gambierol is structurally similar to the ciguatoxins and is produced alongside them. Excretion of the ciguatoxins is largely via the feces and, in smaller amounts, via urine. The compounds are very lipophilic and will therefore diffuse into multiple organs and tissues, for example the liver, fat and the brain. The highest concentration was found in the brain, but after a few days the muscles contained the highest total amount. Because gambierol is lipophilic, it can easily persist and accumulate in the fish food chain. The detoxification pathways are still unknown, but gambierol can eventually be eliminated; this may take several months or years.
Efficacy and side effects
The membrane potential and calcium signaling in T lymphocytes are controlled by ion channels. T cells can be activated when membrane potential and calcium signaling are altered, because both are coupled to signal transduction pathways. If these signal transduction pathways are disrupted, the T cells can be prevented from responding to antigens; this is called immune suppression. Gambierol is a potent blocker of potassium channels, which in part determine the membrane potential. Gambierol is therefore a candidate for the development of drugs for immunotherapy, for example in diseases such as multiple sclerosis, type 1 diabetes mellitus and rheumatoid arthritis.
Gambierol is not yet used as a treatment, because the compound is toxic and also blocks other channels, thereby disrupting important processes. Intake of gambierol can also cause pain, because blockade of Kv1.1 and Kv1.4 channels increases the excitability of central circuits. It also causes illness lasting several weeks, which is explained by the fact that gambierol is very lipophilic. Lipophilic compounds have a high affinity for the lipid bilayer of cell membranes, and gambierol likely remains in the cell membrane for days or a few weeks, accounting for the long-term illness associated with gambierol exposure. There are also other symptoms already explained by the mechanism of action of gambierol, for example difficulty breathing and hypotension.
Gambierol is also an interesting compound for research into treatments of pathologies such as Alzheimer's disease, which are associated with increased expression of β-amyloid and/or tau hyperphosphorylation. Increases in β-amyloid accumulation and/or tau phosphorylation affect neurons most strongly; the neurons then degenerate, and this process therefore affects the nervous system. Gambierol, however, can reduce this overexpression of β-amyloid and/or tau hyperphosphorylation in vitro and in vivo.
Gambierol's ability to induce neurite outgrowth in a bidirectional manner could potentially be exploited after neural injury. After a trauma or a stroke, for example, gambierol could be used to change the structural plasticity of the brain. The ability of gambierol to cross the blood-brain barrier is very important in this respect.
Toxicity
Poisoning by gambierol normally occurs after eating contaminated fish. Gambierol exhibits potent toxicity in mice, at 50-80 μg/kg by intraperitoneal injection or 150 μg/kg when consumed orally. Symptoms resemble those of ciguatera poisoning. The gastrointestinal symptoms are:
Abdominal pain
Nausea
Vomiting
Diarrhea
Painful defecation
The neurological symptoms include:
Paradoxical temperature reversion; cold objects feel hot and vice versa.
Dental pain; teeth feel loose.
Treatment
There is no known antidote for gambierol poisoning.
References
Neurotoxins
Polyether toxins
Ion channel toxins
Non-protein ion channel toxins
Potassium channel blockers
Heterocyclic compounds with 7 or more rings | Gambierol | Chemistry | 1,986 |
78,737,317 | https://en.wikipedia.org/wiki/Imocitrelvir | Imocitrelvir is an investigational new drug being evaluated for the treatment of viral infections. It is an inhibitor of the 3C protease of picornaviruses. Originally developed by Pfizer for treating human rhinovirus infections, this small molecule has shown promise against a broader range of viruses, including polioviruses.
References
Antiviral drugs
Amides
Enoic acids
Esters
Isoxazoles
Propargyl compounds
Pyridones
Pyrrolidones | Imocitrelvir | Chemistry,Biology | 102 |
52,728,545 | https://en.wikipedia.org/wiki/List%20of%20electric%20aircraft | This is a list of electric aircraft, whose primary flight power is electrical.
! Type !! Country !! Class !! Power !! Role !! Date !! Status !! Notes
|-
| ACS-Itaipu Sora-E || Brazil || Propeller || Battery || Experimental || 2015 || Prototype ||
|-
| AgustaWestland Project Zero || Italy || UAV || Battery || Experimental || 2011 || Prototype || First large-scale all-electric tilt-rotor.
|-
| Air Energy AE-1 Silent || Germany || Motor glider || Battery || || 1998 || Production ||
|-
| Airbus A³ Vahana || United States || Propeller || Battery || Experimental || 2018 || Prototype || Retired December 2019 to focus on CityAirbus development.
|-
| Airbus E-Fan || France || Propeller || Battery || Trainer || 2014 || Cancelled || Co-developed with Aero Composite Saintonge.
|-
| Alisport Silent Club || Italy || Motor glider || Battery || || 1997 || Production || First production electric aircraft.
|-
| Ampaire Electric EEL || United States || Propeller || Hybrid || || 2019 || Testing ||
|-
| APEV Demoichelle || France || Propeller || Battery || Experimental || 2010 || Prototype ||
|-
| APEV Pouchelec || France || Propeller || Battery || || 2009 || Prototype || Development of the Pouchel Light.
|-
| AstroFlight Sunrise || United States || UAV || Solar || Experimental || 1974 || Prototype || First solar-powered flight. Sunrise II flew in 1975.
|-
| AutoGyro eCavalon || Germany || Rotorcraft || Battery || Experimental || 2013 || Prototype ||
|-
| Baykar Cezeri || Turkey || eVTOL || Battery || Transport || 2020 || Project || Flown unmanned.
|-
| Beta AVA || United States || Propeller || Battery || Transport || 2019 || Prototype || Testing and preparing serial production.
|-
| Boeing Fuel Cell Demonstrator (FCD) || United States || Motor glider || Fuel cell || Experimental || 2008 || Prototype || Modified Diamond HK-36 Super Dimona.
|-
| Bye Aerospace eFlyer 2 || United States || Propeller || Battery || || 2016 || Project ||
|-
| Bye Aerospace eFlyer 4 || United States || Propeller || Battery || || 2018 || Project ||
|-
| Cessna 172 electric || United States || Propeller || Battery || Experimental || 2010 || Demonstrator only || On 19 October 2012 Beyond Aviation announced that it had flown an electric Cessna 172 Skyhawk.
|-
| Cessna 208 eCaravan || United States || Propeller || Battery || Utility || 2020 || Prototype || Currently undergoing testing prior to certification.
|-
| CityAirbus || Multinational || eVTOL || Battery || Transport || 2019 || Prototype ||
|-
| DigiSky SkySpark || || || Fuel cell || || || || Converted Alpi Pioneer 300.
|-
| Viking Dragonfly (electric conversion) || Netherlands || Propeller || Battery || Experimental || 2019 || Project || Converted Viking Dragonfly.
|-
| Hamilton aEro 1 || Switzerland || Propeller || Battery || Private || 2016 || Prototype || Converted Silence Twister.
|-
| e-Genius || Germany || || Battery || Experimental || 2011 || Prototype ||
|-
| e-Sling || Switzerland || || Battery || Experimental || 2022 || Prototype || Based on the Sling TSi
|-
| EADS Green Cri-Cri || || || Battery || Experimental || 2010 || Prototype || Converted Colomban Cri-cri.
|-
| Electric Aircraft Corporation ElectraFlyer Trike || United States || Motor glider || Battery || Private || 2007 || Production || Ultralight. First commercial offering of an electric aircraft.
|-
| Electric Aircraft Corporation ElectraFlyer-C || United States || Motor glider || Battery || Private || 2008 || Production || Converted Monnett Moni motor glider.
|-
| Electravia E-Fenix || France || Motor glider || Battery || Private || 2001 || Prototype ||
|-
| Electravia ElectroLight2 || France || Motor glider || Battery || Private || 2001 || Production ||
|-
| Electravia BL1E Electra || France || Propeller || Battery || || 2007 || || First registered aircraft in the world powered by an electric engine and batteries.
|-
| Electravia Electro Trike || France || Motor glider || Battery || || 2008 || ||
|-
| ENFICA-FC || || Propeller || Fuel cell || Experimental || || Prototype || Converted Rapid 200FC.
|-
| eUP Aviation Green1 || Canada || Motor glider || Battery || || 2012 || Production ||
|-
| Eviation Alice || Israel || Propeller || Battery || Transport || 2022 || Testing || 2 pilot + 9 passenger, 444 km/h cruise and 1,367 km range
|-
| Flightstar e-Spyder || || Propeller || Battery || || 2009 || || Converted Flightstar Sportstar Spyder. Also offered as the Greenwing GW280 and Yuneec eSpyder.
|-
| Icaro 2000 Trike || || Motor glider || Battery || Private || || Production ||
|-
| Joby Aviation S4 || United States || Propeller || Battery || Experimental || 2017 || Prototype ||
|-
| La France || France || Airship || Battery || Experimental || 1884 || Prototype ||
|-
| LAK-17B FES Self-Launch (mini) || Lithuania || Motor glider || Battery || Private || 2020 || Production ||
|-
| Lange Antares 20E || Germany || Motor glider || Battery || Experimental || 2003 || Production || First electric aircraft to obtain a certificate of airworthiness.
|-
| Lange Antares 23E || Germany || Motor glider || Battery || Experimental || 2012 || Production ||
|-
| Lange LF 20 || Germany || Motor glider || Battery || Experimental || 1999 || Prototype || Modified DG800.
|-
| Lilium Jet || Germany || Ducted fan || Battery || Private || 2017 || Prototype || two-seater (5 planned from 2025) vertical take-off and landing air taxi prototype
|-
| Luxembourg Special Aerotechnics MC30E || || || Battery || Experimental || 2012 || Prototype ||
|-
| MacCready Gossamer Penguin || United States || Propeller || Solar || Experimental || 1980 || Prototype ||
|-
| MacCready Solar Challenger || United States || Propeller || Solar || Experimental || 1981 || Prototype || Flew from Paris to England.
|-
| Matsushita / Tokyo Institute of Technology aircraft || Japan || Propeller || Battery || Experimental || 2006 || Prototype || Powered by 160 AA battery cells.
|-
| Mauro Solar Riser || || Propeller || Solar || Experimental || 1979 || Prototype || First manned, solar-powered airplane. Based on the UFM Easy Riser. Solar cells charged battery for flight.
|-
| MC15E Cri-Cri || || Propeller || Battery || || 2010 || ||
|-
| Militky MB-E1 || West Germany || Propeller || Battery || Experimental || 1973 || Prototype || First manned airplane to fly solely on electric power.
|-
| NASA Centurion || United States || UAV || Solar || Experimental || 1998 || Prototype ||
|-
| NASA Helios || United States || UAV || Battery || Experimental || 1999 || Prototype ||
|-
| NASA Pathfinder || United States || UAV || Solar || Experimental || 1993 || Prototype || Developed by AeroVironment, Inc from the HALSOL prototype. Pathfinder Plus had increased span.
|-
| NASA Puffin || United States || || Battery || Experimental || 2010 || Project ||
|-
| NASA X-57 Maxwell || United States || Propeller || Battery || Experimental || 2016 || Project || Modified Tecnam P2006T.
|-
| New Concept Aircraft (Zhuhai) Green Pioneer Ι || China || || Solar || Experimental || 2002 || Prototype ||
|-
| Opener BlackFly || United States || Propeller || Battery || Utility || 2011 || Prototype ||
|-
| PC-Aero Elektra One || Germany || Propeller || Solar || || 2011 || Project ||
|-
| Petróczy-Kármán-Žurovec PKZ-1 || Hungary || Rotorcraft || Cable || Patrol || 1917 || Prototype || Tethered to the ground.
|-
| Phoenix Air Phoenix || Czech Republic || Propeller || Battery || || 2018 || Project || Testing and preparing serial production.
|-
| Pipistrel Alpha Electro || Slovenia || Propeller || Battery || Experimental || 2011 || Production || Electric version of the Pipistrel Alpha Trainer.
|-
| Pipistrel Taurus Electro G2 || Slovenia || Motor glider || Battery || Private || 2011 || Production ||
|-
| Pipistrel Taurus G4 || Slovenia || Motor glider || Battery || Experimental || 2011 || Prototype || Twin fuselage. The G4 won the NASA Green Flight Challenge in 2011.
|-
| Pipistrel Velis Electro || Slovenia || Propeller || Battery || Trainer || 2020 || Production || Type certified. Based on the Pipistrel Virus
|-
| Pipistrel WATTsUP || Slovenia || || Battery || Experimental || 2014 || Prototype || Led to the Pipistrel Alpha Electro.
|-
| QinetiQ Zephyr || United Kingdom || UAV || Solar || Patrol || 2008 || Prototype || The 2010 redesign holds the UAV endurance record of over 2 weeks (336 hours).
|-
| Rolls-Royce ACCEL || United Kingdom || Propeller || Battery || Private || 2021 || Prototype ||
|-
| Schempp-Hirth Discus-2c FES || Germany || Motor glider || Battery || || 2015 || Production ||
|-
| Schempp-Hirth Ventus-2cxa FES || Germany || Motor glider || Battery || || 2014 || Production ||
|-
| Schempp-Hirth Arcus-E || Germany || Motor glider || Battery || || 2010 || Production ||
|-
| Siemens-FlyEco Magnus eFusion || Germany-Hungary || Propeller || Hybrid diesel-electric || || 2018 || Under development for production ||
|-
| Solair 1 || || Propeller || Solar || Experimental || 1983 || || Developed from a Farner canard design. The Solair II flew 1998.
|-
| Solar Impulse || || Propeller || Solar || Experimental || 2009 || Prototype ||
|-
| Solar Impulse 2 || || Propeller || Solar || Experimental || 2015 || Prototype || First round-the-world flight by an electric aircraft.
|-
| Solar-Powered Aircraft Developments Solar One || United Kingdom || Propeller || Solar || Experimental || 1979 || Prototype || Solar cells charged battery for flight.
|-
| SolarStratos || || Propeller || Solar || Experimental || 2017 || Prototype ||
|-
| Solution F/Chretien Helicopter || || Rotorcraft || Battery || Experimental || 2011 || Prototype || First free-flying manned electric helicopter.
|-
| Sonex Electric Sport Aircraft || || Propeller || Battery || || 2010 || Project ||
|-
| Stuttgart University Icaré II || Germany || || Solar || || 1996 || ||
|-
| Sunseeker I || United States || Propeller || Solar || Experimental || 1990 || Prototype || The Sunseeker II was built in 2002.
|-
| Sunseeker Duo || United States || Propeller || Solar || Experimental || 2013 || Prototype ||
|-
| Tier1 electric Robinson R44 || United States || Rotorcraft || Battery || Experimental || 2016 || Prototype ||
|-
| Tissandier || France || Airship || Battery || Experimental || 1883 || Prototype || First electric-powered aircraft.
|-
| Ultraflight Lazair Electric || || || Battery || Experimental || 2011 || Prototype ||
|-
| Volocopter || Germany || Rotorcraft || || || 2008 || ||
|-
| Volta Volare GT4 || || || Hybrid || || 2012 || Project || Diesel-electric hybrid.
|-
| Yuneec International E430 || China || Propeller || Battery || Private || 2009 || Production || Homebuilt aircraft.
|-
| smartflyer SFX1 || Switzerland || Propeller || Serial Hybrid || Prototype || 2017 || Production || Built by smartflyer AG, Switzerland.
|}
References
Aircraft
Lists of aircraft by power source | List of electric aircraft | Engineering | 3,084 |
1,216,761 | https://en.wikipedia.org/wiki/HD%2049674 | HD 49674 is a solar-type star with an exoplanetary companion in the northern constellation of Auriga. It has an apparent visual magnitude of 8.10 and thus is an eighth-magnitude star that is too faint to be readily visible to the naked eye. The system is located at a distance of 140.6 light-years from the Sun based on parallax, and is drifting further away with a radial velocity of +12 km/s.
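The quoted distance follows directly from the parallax. A minimal sketch of the conversion (the parallax value of about 23.2 mas is back-calculated here from the stated 140.6 light-years, not taken from a catalogue):

```python
# Distance from trigonometric parallax: d [pc] = 1 / p [arcsec].
LY_PER_PC = 3.26156            # light-years per parsec

parallax_mas = 23.2            # assumed parallax in milliarcseconds (illustrative)
d_pc = 1000.0 / parallax_mas   # ~43.1 pc
d_ly = d_pc * LY_PER_PC        # ~140.6 ly
print(f"{d_pc:.1f} pc = {d_ly:.1f} light-years")
```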
HD 49674, and its planetary system, was chosen as part of the 2019 NameExoWorlds campaign organised by the International Astronomical Union, which assigned each country a star and planet to be named. HD 49674 was assigned to Belgium. The winning proposal named the star Nervia and the planet Eburonia, both after prominent Belgic tribes, the Nervii and Eburones, respectively.
This is an ordinary G-type main-sequence star with a stellar classification of G3V, which indicates it is generating energy through hydrogen fusion at its core. Spinning with a projected rotational velocity of 4.7 km/s, it is younger than the Sun, roughly two billion years of age, and is a metal-rich star. HD 49674 has a similar mass and radius as the Sun. It is radiating 96% of the Sun's luminosity from its photosphere at an effective temperature of .
Planetary system
At the time of its discovery in 2002, the planet HD 49674 b was the least massive exoplanet then known, lying very close to the boundary between sub-Jupiter-mass and Neptune-mass planets at 0.1 MJ. This planet orbits very close to the star, with a semimajor axis of .
See also
Lists of exoplanets
References
External links
HIP 32916 Catalog
Image HD 49674
G-type main-sequence stars
Planetary systems with one confirmed planet
Auriga
Durchmusterung objects
049674
032916 | HD 49674 | Astronomy | 412 |
5,539,816 | https://en.wikipedia.org/wiki/Ether%E2%80%90%C3%A0%E2%80%90go%E2%80%90go%20potassium%20channel | An ether‐à‐go‐go potassium channel is a potassium channel which is inwardly-rectifying and voltage-gated.
They are named after the ether‐à‐go‐go gene, which codes for one such channel in the fruit fly Drosophila melanogaster.
Examples include hERG, KCNH6, and KCNH7.
References
Potassium channels | Ether‐à‐go‐go potassium channel | Chemistry | 78 |
2,815,048 | https://en.wikipedia.org/wiki/Computational%20model | A computational model uses computer programs to simulate and study complex systems through an algorithmic or mechanistic approach. Computational models are widely used in a diverse range of fields, spanning physics, engineering, chemistry and biology as well as economics, psychology, cognitive science and computer science.
The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer and studying the differences in the outcome of the experiments. Operating theories of the model can then be derived or deduced from these computational experiments.
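As a toy illustration of this experiment-by-parameter-adjustment approach, the sketch below simulates a simple nonlinear system (the logistic map) at several parameter values and compares the long-run outcomes; the choice of model and the values are illustrative only.

```python
# Experimenting with a computational model: vary a parameter, observe outcomes.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is a classic nonlinear system.
def simulate(r, x0=0.2, transient=1000, keep=5):
    x = x0
    for _ in range(transient):       # discard transient behaviour
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):            # record the long-run values
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r in (2.8, 3.2, 3.9):            # fixed point, period-2 cycle, chaos
    print(f"r={r}: long-run values {simulate(r)}")
```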
Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural network models.
See also
Computational Engineering
Computational cognition
Reversible computing
Agent-based model
Artificial neural network
Computational linguistics
Data-driven model
Decision field theory
Dynamical systems model of cognition
Membrane computing
Ontology (information science)
Programming language theory
Microscale and macroscale models
References
Models of computation
Mathematical modeling | Computational model | Mathematics | 225 |
160,501 | https://en.wikipedia.org/wiki/Ultra%20high%20frequency | Ultra high frequency (UHF) is the ITU designation for radio frequencies in the range between 300 megahertz (MHz) and 3 gigahertz (GHz), also known as the decimetre band as the wavelengths range from one meter to one tenth of a meter (one decimeter). Radio waves with frequencies above the UHF band fall into the super-high frequency (SHF) or microwave frequency range. Lower frequency signals fall into the VHF (very high frequency) or lower bands. UHF radio waves propagate mainly by line of sight; they are blocked by hills and large buildings although the transmission through building walls is strong enough for indoor reception. They are used for television broadcasting, cell phones, satellite communication including GPS, personal radio services including Wi-Fi and Bluetooth, walkie-talkies, cordless phones, satellite phones, and numerous other applications.
The IEEE defines the UHF radar band as frequencies between 300 MHz and 1 GHz. Two other IEEE radar bands overlap the ITU UHF band: the L band between 1 and 2 GHz and the S band between 2 and 4 GHz.
Propagation characteristics
Radio waves in the UHF band travel almost entirely by line-of-sight propagation (LOS) and ground reflection; unlike in the HF band there is little to no reflection from the ionosphere (skywave propagation), or ground wave. UHF radio waves are blocked by hills and cannot travel beyond the horizon, but can penetrate foliage and buildings for indoor reception. Since the wavelengths of UHF waves are comparable to the size of buildings, trees, vehicles and other common objects, reflection and diffraction from these objects can cause fading due to multipath propagation, especially in built-up urban areas. Atmospheric moisture reduces, or attenuates, the strength of UHF signals over long distances, and the attenuation increases with frequency. UHF TV signals are generally more degraded by moisture than lower bands, such as VHF TV signals.
As the visual horizon sets the maximum range of UHF transmission to between 30 and 40 miles (48 to 64 km) or less, depending on local terrain, the same frequency channels can be reused by other users in neighboring geographic areas (frequency reuse). Radio repeaters are used to retransmit UHF signals when a distance greater than the line of sight is required.
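A rough back-of-the-envelope sketch: with standard atmospheric refraction, the radio horizon in kilometres is often approximated as d ≈ 4.12·√h, with the antenna height h in metres. The heights below are illustrative.

```python
import math

# Approximate radio horizon with standard refraction: d_km ≈ 4.12 * sqrt(h_m).
def radio_horizon_km(height_m: float) -> float:
    return 4.12 * math.sqrt(height_m)

for h in (10, 100, 300):   # mast heights in metres (illustrative)
    d = radio_horizon_km(h)
    print(f"{h:>3} m antenna -> ~{d:.0f} km ({d * 0.621:.0f} mi) horizon")
```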
Occasionally when conditions are right, UHF radio waves can travel long distances by tropospheric ducting as the atmosphere warms and cools throughout the day.
Antennas
The length of an antenna is related to the wavelength of the radio waves used. Due to the short wavelengths, UHF antennas are conveniently stubby and short; at UHF frequencies a quarter-wave monopole, the most common omnidirectional antenna, is between 2.5 and 25 cm long. UHF wavelengths are short enough that efficient transmitting antennas are small enough to mount on handheld and mobile devices, so these frequencies are used for two-way land mobile radio systems, such as walkie-talkies, two-way radios in vehicles, and for portable wireless devices; cordless phones and cell phones. Omnidirectional UHF antennas used on mobile devices are usually short whips, sleeve dipoles, rubber ducky antennas or the planar inverted F antenna (PIFA) used in cellphones. Higher-gain omnidirectional UHF antennas can be made of collinear arrays of dipoles and are used for mobile base stations and cellular base station antennas.
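The quoted 2.5-25 cm monopole lengths follow from the quarter-wavelength relation L = c/(4f). A minimal sketch:

```python
# Quarter-wave monopole length across the UHF band: L = c / (4 * f).
C = 299_792_458  # speed of light, m/s

for f_hz in (300e6, 1e9, 3e9):
    L_cm = C / (4 * f_hz) * 100
    print(f"{f_hz/1e6:>6.0f} MHz -> quarter-wave ~{L_cm:.1f} cm")
```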
The short wavelengths also allow high gain antennas to be conveniently small. High gain antennas for point-to-point communication links and UHF television reception are usually Yagi, log periodic, corner reflectors, or reflective array antennas. At the top end of the band, slot antennas and parabolic dishes become practical. For satellite communication, helical and turnstile antennas are used since satellites typically employ circular polarization which is not sensitive to the relative orientation of the transmitting and receiving antennas. For television broadcasting specialized vertical radiators that are mostly modifications of the slot antenna or reflective array antenna are used: the slotted cylinder, zig-zag, and panel antennas.
Applications
UHF television broadcasting channels are used for digital television, although much of the former bandwidth has been reallocated to land mobile radio systems, trunked radio and mobile telephone use.
Since at UHF frequencies transmitting antennas are small enough to install on portable devices, the UHF spectrum is used worldwide for land mobile radio systems, two-way radios used for voice communication for commercial, industrial, public safety, and military purposes. Examples of personal radio services are GMRS, PMR446, and UHF CB.
The most rapidly expanding use of the band is for Wi-Fi (wireless LAN) networks in homes, offices, and public places. The Wi-Fi IEEE 802.11 low band operates between 2412 and 2484 MHz. A second widespread use is for cellphones, allowing handheld mobile phones to be connected to the public switched telephone network and the Internet. Current 3G and 4G cellular networks use UHF, with frequencies varying among carriers and countries. Satellite phones also use these frequencies, in the L band and S band.
Examples of UHF frequency allocations
Australia
406–406.1 MHz: Mobile satellite service
450.4875–451.5125 MHz:Fixed point-to-point link
457.50625–459.9875 MHz: Land mobile service
476–477 MHz: UHF citizens band (Land mobile service)
503–694 MHz: UHF channels for television broadcasting
Canada
430–450 MHz: Amateur radio (70 cm band)
470–806 MHz: Terrestrial television (with select channels in the 600 & 700 MHz bands left vacant)
1452–1492 MHz: Digital Audio Broadcasting (L band)
Many other frequency assignments for Canada and Mexico are similar to their US counterparts
France
380-400 MHz: Terrestrial Trunked Radio for Police
430-440 MHz: Amateur radio (70 cm band)
470-694 MHz: Terrestrial television
New Zealand
406.1–420 MHz: Land mobile service
430–440 MHz: Amateur radio (70 cm band) and amateur radio satellite
476–477 MHz: PRS Personal Radio Service (Land mobile service)
485–502 MHz: Analog and P25 Emergency services use
510–622 MHz: Terrestrial television
960–1215 MHz: Aeronautical radionavigation
1240–1300 MHz: Amateur radio (23 cm band)
United Kingdom
380–399.9 MHz: Terrestrial Trunked Radio (TETRA) service for emergency use
430–440 MHz: Amateur radio (70 cm band)
446.0–446.2 MHz : European unlicensed PMR service => PMR446
457–464 MHz: Scanning telemetry and telecontrol, assigned mostly to the water, gas, and electricity industries
606–614 MHz: Radio microphones and radio-astronomy
470–862 MHz: Previously used for analogue TV channels 21–69 (until 2012).
Currently channels 21 to 37 and 39 to 48 are used for Freeview digital TV. Channels 55 to 56 were previously used by temporary muxes COM7 and COM8, channel 38 was used for radio astronomy but has been cleared to allow PMSE users access on a licensed, shared basis.
694–790 MHz: i.e. Channels 49 to 60 have been cleared, to allow these channels to be allocated for 5G cellular communication.
791–862 MHz, i.e. channels 61 to 69 inclusive were previously used for licensed and shared wireless microphones (channel 69 only), has since been allocated to 4G cellular communications.
863–865 MHz: Used for licence-exempt wireless systems.
863–870 MHz: Short range devices, LPWAN IoT devices such as NarrowBand-IoT.
870–960 MHz: Cellular communications (GSM900 - Vodafone and O2 only) including GSM-R and future TETRA
1240–1325 MHz: Amateur radio (23 cm band)
1710–1880 MHz: 2G Cellular communications (GSM1800)
1880–1900 MHz: DECT cordless telephone
1900–1980 MHz: 3G cellular communications (mobile phone uplink)
2110–2170 MHz: 3G cellular communications (base station downlink)
2310–2450 MHz: Amateur radio (13 cm band)
United States
UHF channels are used for digital television broadcasting on both over the air channels and cable television channels. Since 1962, UHF channel tuners (at the time, channels 14 to 83) have been required in television receivers by the All-Channel Receiver Act. However, because of their more limited range, and because few sets could receive them until older sets were replaced, UHF channels were less desirable to broadcasters than VHF channels (and licenses sold for lower prices).
A complete list of US Television Frequency allocations can be found at Pan-American television frequencies.
There is a considerable amount of lawful unlicensed activity (cordless phones, wireless networking) clustered around 900 MHz and 2.4 GHz, regulated under Title 47 CFR Part 15. These ISM bands—frequencies with a higher unlicensed power permitted for use originally by Industrial, Scientific, Medical apparatus—are now some of the most crowded in the spectrum because they are open to everyone. The 2.45 GHz frequency is the standard for use by microwave ovens, adjacent to the frequencies allocated for Bluetooth network devices.
The spectrum from 806 MHz to 890 MHz (UHF channels 70 to 83) was taken away from TV broadcast services in 1983, primarily for analog mobile telephony.
In 2009, as part of the transition from analog to digital over-the-air broadcast of television, the spectrum from 698 MHz to 806 MHz (UHF channels 52 to 69) was removed from TV broadcasting, making it available for other uses. Channel 55, for instance, was sold to Qualcomm for their MediaFLO service, which was later sold to AT&T, and discontinued in 2011. Some US broadcasters had been offered incentives to vacate this channel early, permitting its immediate mobile use. The FCC's scheduled auction for this newly available spectrum was completed in March 2008.
225–420 MHz: Government use, including meteorology, military aviation, and federal two-way use
420–450 MHz: Government radiolocation, amateur radio satellite and amateur radio (70 cm band), MedRadio
450–470 MHz: UHF business band, General Mobile Radio Service, and Family Radio Service 2-way "walkie-talkies", public safety
470–512 MHz: Low-band TV channels 14 to 20 (shared with public safety land mobile 2-way radio in 12 major metropolitan areas scheduled to relocate to 700 MHz band by 2023)
512–608 MHz: Medium-band TV channels 21 to 36
608–614 MHz: Channel 37 used for radio astronomy and wireless medical telemetry
614–698 MHz: Mobile broadband shared with TV channels 38 to 51 auctioned in April 2017. TV stations were relocated by 2020.
617–652 MHz: Mobile broadband service downlink
652–663 MHz: Wireless microphones (higher priority) and unlicensed devices (lower priority)
663–698 MHz: Mobile broadband service uplink
698–806 MHz: Was auctioned in March 2008; bidders got full use after the transition to digital TV was completed on June 12, 2009 (formerly high-band UHF TV channels 52 to 69); modified in 2021 for next-generation 5G UHF transmission bandwidth for over-the-air channels 2 through 69 (virtual 1 through 36).
806–816 MHz: Public safety and commercial 2-way (formerly TV channels 70 to 72)
817–824 MHz: ESMR band for wideband mobile services (mobile phone) (formerly public safety and commercial 2-way)
824–849 MHz: Cellular A & B franchises, terminal (mobile phone) (formerly TV channels 73 to 77)
849–851 MHz: Commercial aviation air-ground systems (Gogo)
851–861 MHz: Public safety and commercial 2-way (formerly TV channels 77 to 80)
862–869 MHz: ESMR band for wideband mobile services (base station) (formerly public safety and commercial 2-way)
869–894 MHz: Cellular A & B franchises, base station (formerly TV channels 80 to 83)
894–896 MHz: Commercial aviation air-ground systems (Gogo)
896–901 MHz: Commercial 2-way radio
901–902 MHz: Narrowband PCS: commercial narrowband mobile services
902–928 MHz: ISM band, amateur radio (33 cm band), cordless phones and stereo, radio-frequency identification, datalinks
928–929 MHz: SCADA, alarm monitoring, meter reading systems and other narrowband services for a company's internal use
929–930 MHz: Pagers
930–931 MHz: Narrowband PCS: commercial narrowband mobile services
931–932 MHz: Pagers
932–935 MHz: Fixed microwave services: distribution of video, audio and other data
935–940 MHz: Commercial 2-way radio
940–941 MHz: Narrowband PCS: commercial narrowband mobile services
941–960 MHz: Mixed studio-transmitter fixed links, SCADA, other.
960–1215 MHz: Aeronautical radionavigation
1240–1300 MHz: Amateur radio (23 cm band)
1300–1350 MHz: Long range radar systems
1350–1390 MHz: Military air traffic control and mobile telemetry systems at test ranges
1390–1395 MHz: Proposed wireless medical telemetry service. TerreStar failed to provide service by the required deadline.
1395–1400 MHz: Wireless medical telemetry service
1400–1427 MHz: Earth exploration, radio astronomy, and space research
1427–1432 MHz: Wireless medical telemetry service
1432–1435 MHz: Proposed wireless medical telemetry service. TerreStar failed to provide service by the required deadline.
1435–1525 MHz: Military use mostly for aeronautical mobile telemetry (therefore not available for Digital Audio Broadcasting, unlike Canada/Europe)
1525–1559 MHz: Skyterra downlink (Ligado is seeking FCC permission for terrestrial use)
1526–1536 MHz: proposed Ligado downlink
1536–1559 MHz: proposed guard band
1559–1610 MHz: Radio Navigation Satellite Services (RNSS) Upper L-band
1563–1587 MHz: GPS L1 band
1593–1610 MHz: GLONASS G1 band
1559–1591 MHz: Galileo E1 band (overlapping with GPS L1)
1610–1660.5 MHz: Mobile Satellite Service
1610–1618: Globalstar uplink
1618–1626.5 MHz: Iridium uplink and downlink
1626.5–1660.5 MHz: Skyterra uplink (Ligado is seeking FCC permission for terrestrial use)
1627.5–1637.5 MHz: proposed Ligado uplink 1
1646.5–1656.5 MHz: proposed Ligado uplink 2
1660.5–1668.4 MHz: Radio astronomy observations. Transmitting is not permitted.
1668.4–1670 MHz: Radio astronomy observations. Weather balloons may utilize the spectrum after an advance notice.
1670–1675 MHz: Geostationary Operational Environmental Satellite transmissions to three earth stations in Wallops Island, Virginia; Greenbelt, Maryland and Fairbanks, Alaska. Nationwide broadband service license in this range is held by a subsidiary of Crown Castle International Corp. who is trying to provide service in cooperation with Ligado Networks.
1675–1695 MHz: Meteorological federal users
1695–1780 MHz: AWS mobile phone uplink (UL) operating band
1695–1755 MHz: AWS-3 blocks A1 and B1
1710–1755 MHz: AWS-1 blocks A, B, C, D, E, F
1755–1780 MHz: AWS-3 blocks G, H, I, J (various federal agencies transitioning by 2025)
1780–1850 MHz: exclusive federal use (Air Force satellite communications, Army's cellular-like communication system, other agencies)
1850–1920 MHz: PCS mobile phone—order is A, D, B, E, F, C, G, H blocks. A, B, C = 15 MHz; D, E, F, G, H = 5 MHz
1920–1930 MHz: DECT cordless telephone
1930–2000 MHz: PCS base stations—order is A, D, B, E, F, C, G, H blocks. A, B, C = 15 MHz; D, E, F, G, H = 5 MHz
2000–2020 MHz: lower AWS-4 downlink (mobile broadband)
2020–2110 MHz: Cable Antenna Relay service, Local Television Transmission service, TV Broadcast Auxiliary service, Earth Exploration Satellite service
2110–2200 MHz: AWS mobile broadband downlink
2110–2155 MHz: AWS-1 blocks A, B, C, D, E, F
2155–2180 MHz: AWS-3 blocks G, H, I, J
2180–2200 MHz: upper AWS-4
2200–2290 MHz: NASA satellite tracking, telemetry and control (space-to-Earth, space-to-space)
2290–2300 MHz: NASA Deep Space Network
2300–2305 MHz: Amateur radio (13 cm band, lower segment)
2305–2315 MHz: WCS mobile broadband service uplink blocks A and B
2315–2320 MHz: WCS block C (AT&T is pursuing smart grid deployment)
2320–2345 MHz: Satellite radio (Sirius XM)
2345–2350 MHz: WCS block D (AT&T is pursuing smart grid deployment)
2350–2360 MHz: WCS mobile broadband service downlink blocks A and B
2360–2390 MHz: Aircraft landing and safety systems
2390–2395 MHz: Aircraft landing and safety systems (secondary deployment in a dozen of airports), amateur radio otherwise
2395–2400 MHz: Amateur radio (13 cm band, upper segment)
2400–2483.5 MHz: ISM, IEEE 802.11, 802.11b, 802.11g, 802.11n wireless LAN, IEEE 802.15.4-2006, Bluetooth, radio-controlled aircraft (strictly for spread spectrum use), microwave ovens, Zigbee
2483.5–2495 MHz: Globalstar downlink and Terrestrial Low Power Service suitable for TD-LTE small cells
2495–2690 MHz: Educational Broadcast and Broadband Radio Services
2690–2700 MHz: Receive-only range for radio astronomy and space research
See also
Digital Audio Broadcasting and its regional implementations
Digital terrestrial television
The Thing (listening device)
References
External links
U.S. cable television channel frequencies
Tomislav Stimac, "Definition of frequency bands (VLF, ELF... etc.)". IK1QFK Home Page (vlf.it).
Radio spectrum
Television technology
Wireless | Ultra high frequency | Physics,Technology,Engineering | 3,961 |
21,249,077 | https://en.wikipedia.org/wiki/Tubular%20proteinuria | Tubular proteinuria is proteinuria (excessive protein in the urine) caused by renal tubular dysfunction. Proteins of low molecular weight are normally filtered at the glomerulus of the kidney and are then normally reabsorbed by the tubular cells, so that less than 150 mg per day should appear in the urine. When low-molecular-weight proteins appear in quantities larger than this, the result is tubular proteinuria, which points to a failure of reabsorption by damaged tubular cells. Tubular proteinuria is a laboratory sign, not a disease; as a sign it appears in various syndromes and diseases, such as Fanconi syndrome.
Urine | Tubular proteinuria | Biology | 129 |
3,476,719 | https://en.wikipedia.org/wiki/MIMOSA | MIMOSA (Micromeasurements of Satellite Acceleration), COSPAR 2003-031B, was a Czech scientific microsatellite. The satellite was nearly spherical with 28 sides and carried a microaccelerometer to monitor the atmospheric density profile by sensing the atmospheric drag on the approximated sphere.
MIMOSA was launched on June 30, 2003, alongside other miniature satellites including MOST and several CubeSat-based satellites. It had a fairly eccentric orbit, with an initial perigee of and apogee of . The satellite never became fully functional due to several technical problems on board. It is no longer in orbit; NORAD reported that it burned up in the atmosphere on December 11, 2011.
See also
2003 in spaceflight
References
External links
Informative English page
A free paper model of MIMOSA to download and build
2003 in spaceflight
Spacecraft which reentered in 2011
Spacecraft launched in 2003
Atmospheric sounding satellites
Space program of the Czech Republic
Spacecraft launched by Rokot rockets | MIMOSA | Astronomy | 201 |
57,511,265 | https://en.wikipedia.org/wiki/Kinetic%20isotope%20effects%20of%20RuBisCO | The kinetic isotope effect (KIE) of ribulose-1,5-bisphosphate carboxylase oxygenase (RuBisCO) is the isotopic fractionation associated solely with the step in the Calvin-Benson cycle where a molecule of carbon dioxide (CO2) is attached to the 5-carbon sugar ribulose-1,5-bisphosphate (RuBP) to produce two 3-carbon sugars called 3-phosphoglycerate (3 PGA). This chemical reaction is catalyzed by the enzyme RuBisCO, and this enzyme-catalyzed reaction creates the primary kinetic isotope effect of photosynthesis. It is also largely responsible for the isotopic compositions of photosynthetic organisms and the heterotrophs that eat them. Understanding the intrinsic KIE of RuBisCO is of interest to earth scientists, botanists, and ecologists because this isotopic biosignature can be used to reconstruct the evolution of photosynthesis and the rise of oxygen in the geologic record, reconstruct past evolutionary relationships and environmental conditions, and infer plant relationships and productivity in modern environments.
Reaction details and energetics
The fixation of CO2 by RuBisCO is a multi-step process. First, a CO2 molecule (not the molecule that is eventually fixed) attaches to the uncharged ε-amino group of lysine 201 in the active site to form a carbamate. This carbamate then binds to the magnesium ion (Mg2+) in RuBisCO's active site. A molecule of RuBP then binds to the Mg2+ ion. The bound RuBP then loses a proton to form a reactive enediolate species. The rate-limiting step of the Calvin-Benson cycle is the addition of CO2 to this 2,3-enediol form of RuBP. This is the stage where the intrinsic KIE of RuBisCO occurs, because a new C-C bond is formed. The newly formed 2-carboxy-3-keto-D-arabinitol 1,5-bisphosphate molecule is then hydrated and cleaved to form two molecules of 3-phosphoglycerate (3 PGA). 3 PGA is then converted into hexoses to be used in the photosynthetic organism's central metabolism.
The isotopic substitutions that can occur in this reaction are for carbon, oxygen, and/or hydrogen, though at present a significant isotope effect has been observed only for carbon substitution. Isotopes are atoms that have the same number of protons but varying numbers of neutrons. "Lighter" isotopes (like the stable carbon-12 isotope) have a smaller overall mass, and "heavier" isotopes (like the stable carbon-13 isotope or the radioactive carbon-14 isotope) have a larger overall mass. Stable isotope geochemistry is concerned with how varying chemical and physical processes preferentially enrich or deplete stable isotopes. Enzymes like RuBisCO cause isotopic fractionation because molecules containing lighter isotopes have higher zero-point energies (ZPE), the ZPE being the lowest possible quantum energy state for a given molecular arrangement. For this reaction, 13CO2 has a lower ZPE than 12CO2 and sits lower in the potential energy well of the reactants. When enzymes catalyze chemical reactions, the lighter isotope is preferentially selected because it has a lower activation energy, making it more energetically favorable to overcome the high-potential-energy transition state and proceed through the reaction. Here, 12CO2 has a lower activation energy, so more 12CO2 than 13CO2 goes through the reaction, resulting in an isotopically lighter product (3 PGA).
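For reference, the conventional definitions used in this field can be written compactly (standard notation; VPDB is the usual carbon reference standard):

```latex
% Kinetic isotope effect: ratio of rate constants for light and heavy isotopologues
\mathrm{KIE} = \frac{^{12}k}{^{13}k}

% Delta notation for carbon isotope ratios, relative to the VPDB standard
\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}
{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{VPDB}}} - 1 \right) \times 1000\ \text{\textperthousand}
```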
Ecological trade-offs influence isotope effects
The observed intrinsic KIEs of RuBisCO have been correlated with two aspects of its enzyme kinetics: 1) Its "specificity" for CO2 over O2, and 2) Its rate of carboxylation.
Specificity (SC/O)
The reactive enediolate species is also sensitive to oxygen (O2), which results in the dual carboxylase/oxygenase activity of RuBisCO. The oxygenation reaction is considered wasteful, as it produces products (3-phosphoglycerate and 2-phosphoglycolate) that must be catabolized through photorespiration; this process requires energy and is a missed opportunity for CO2 fixation, which results in a net loss of carbon-fixation efficiency for the organism. The dual carboxylase/oxygenase activity of RuBisCO is exacerbated by the fact that O2 and CO2 are small, relatively indistinguishable molecules that can bind only weakly, if at all, in Michaelis-Menten complexes. There are four forms of RuBisCO (Forms I, II, III, and IV), with Form I being the most abundantly used form. Form I is used extensively by higher plants, eukaryotic algae, cyanobacteria, and Pseudomonadota (formerly proteobacteria). Form II is also used but is much less widespread; it can be found in some species of Pseudomonadota and in dinoflagellates. RuBisCOs from different photosynthetic organisms display varying abilities to distinguish between CO2 and O2. This property can be quantified and is termed "specificity" (Sc/o). A higher value of Sc/o means that a RuBisCO's carboxylase activity is greater than its oxygenase activity.
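Specificity is conventionally defined as the ratio of the catalytic efficiencies of the two competing reactions (this is the standard definition in the RuBisCO literature):

```latex
% Specificity factor: carboxylation efficiency over oxygenation efficiency
S_{C/O} = \frac{V_C / K_C}{V_O / K_O}
```

Here VC and VO are the maximal rates, and KC and KO the Michaelis constants, for carboxylation and oxygenation respectively.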
Rate of carboxylation (VC) and Michaelis-Menten constant (KC)
The rate of carboxylation (VC) is the rate at which RuBisCO fixes CO2 to RuBP under substrate-saturated conditions. A higher value of VC corresponds to a higher rate of carboxylation. The rate of carboxylation can also be characterized through the Michaelis-Menten constant KC, with a higher value of KC corresponding to a higher rate of carboxylation. VC corresponds to Vmax, and KC to KM, in the generalized Michaelis-Menten treatment. Although the rate of carboxylation varies among RuBisCO types, RuBisCO on average fixes only about three molecules of CO2 per second. This is remarkably slow compared to typical enzyme catalytic rates, which are usually thousands of molecules per second.
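A minimal sketch of the Michaelis-Menten rate law, with hypothetical (not measured) parameter pairs standing in for a "fast, low-specificity" and a "slow, high-specificity" RuBisCO:

```python
# Michaelis-Menten carboxylation rate: v = Vc * [CO2] / (Kc + [CO2]).
def carboxylation_rate(co2_uM, Vc, Kc_uM):
    return Vc * co2_uM / (Kc_uM + co2_uM)

# Hypothetical enzymes illustrating the trade-off discussed above.
enzymes = {
    "fast, low-specificity (CCM host)": dict(Vc=12.0, Kc_uM=200.0),
    "slow, high-specificity (no CCM)":  dict(Vc=3.0,  Kc_uM=15.0),
}
for co2 in (10.0, 250.0):  # ambient-like vs carboxysome-like CO2, in uM
    for name, params in enzymes.items():
        v = carboxylation_rate(co2, **params)
        print(f"[CO2]={co2:>5} uM  {name}: v = {v:.2f} /s")
```

At low CO2 the high-affinity enzyme is faster, while at carboxysome-like CO2 the fast enzyme wins, consistent with the phylogenetic pattern described below.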
Phylogenetic patterns
It has been observed among natural RuBisCOs that an increased ability to distinguish between CO2 and O2 (larger values of Sc/o) corresponds with a decreased rate of carboxylation (lower values of VC and KC). The variation and trade-off between Sc/o and KC have been observed across all photosynthetic organisms, from photosynthetic bacteria and algae to higher plants. Organisms using RuBisCOs with high values of VC and KC, and low values of Sc/o, have localized RuBisCO to areas within the cell with artificially high local CO2 concentrations. In cyanobacteria, concentrations of CO2 are increased using a carboxysome, an icosahedral protein compartment about 100 nm in diameter that selectively takes up bicarbonate and converts it to CO2 in the presence of RuBisCO. Organisms without such a carbon-concentrating mechanism (CCM), like certain plants, instead utilize RuBisCOs with high values of Sc/o and low values of VC and KC. It has been theorized that groups with a CCM have been able to maximize KC at the expense of decreasing Sc/o, because artificially enhancing the concentration of CO2 relative to O2 removes the need for high CO2 specificity. The opposite is true for organisms without a CCM, which must optimize Sc/o at the expense of KC because O2 is readily present in the atmosphere.
This trade-off between Sc/o and VC or KC observed in extant organisms suggests that RuBisCO has evolved through geologic time to be optimized for its current, modern environment. RuBisCO evolved over 2.5 billion years ago, when atmospheric CO2 concentrations were 300 to 600 times higher than present-day concentrations and oxygen concentrations were only 5-18% of present-day levels. Therefore, because CO2 was abundant and O2 rare, there was no need for the ancestral RuBisCO enzyme to have high specificity. This is supported by the biochemical characterization of an ancestral RuBisCO enzyme, which has intermediate values of VC and SC/O between the extreme end-members.
It has been theorized that this ecological trade-off is due to the form that 2-carboxy-3-keto-D-arabinitol 1,5-bisphosphate takes in its transient transition state before cleaving into two 3PGA molecules. The more closely the Mg2+-bound CO2 moiety resembles the carboxylate group in 2-carboxy-3-keto-D-arabinitol 1,5-bisphosphate, the greater the structural difference between the transition states of carboxylation and oxygenation. The larger structural difference allows RuBisCO to better distinguish between CO2 and O2, resulting in larger values of Sc/o. However, this increasing structural similarity between the transition state and the product state requires strong binding at the carboxyketone group, and this binding is so strong that the rate of cleavage into the two product 3PGA molecules is slowed. Therefore, an increased specificity for CO2 over O2 necessitates a lower overall rate of carboxylation. This theory implies that there is a physical-chemistry limitation at the heart of RuBisCO's active site, one that may preclude any effort to engineer a simultaneously more selective and faster RuBisCO.
Isotope effects
Sc/o has been positively correlated with the magnitude of carbon isotope fractionation (represented by Δ13C), with larger values of Sc/o corresponding to larger values of Δ13C. It has been theorized that because increasing Sc/o means the transition state is more product-like, the O2C–C2 bond will be shorter, resulting in a higher overall potential and vibrational energy. This creates a higher-energy transition state, which makes it even harder for 13CO2 (which sits lower in the potential energy well than 12CO2) to overcome the required activation energy. The RuBisCOs used by different photosynthetic organisms vary slightly in their enzyme structure, and this structural variation produces varying transition states. This diversity in enzyme structure is reflected in the Δ13C values measured from different photosynthetic organisms. However, overlap exists between the Δ13C values of different groups because the carbon isotope values measured are generally those of the entire organism, not just its RuBisCO enzyme. Many other factors, including growth rate and the isotopic composition of the starting substrate, can affect the carbon isotope values of a whole organism and cause the spread seen in C isotope measurements.
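Measured values are reported in delta notation relative to the VPDB standard, and photosynthetic discrimination is computed as Δ13C = (δsource − δproduct)/(1 + δproduct/1000). A minimal Python sketch of this bookkeeping, with assumed δ values typical of air and C3 plant tissue:

R_VPDB = 0.0112372   # 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample):
    # delta notation in per mil relative to VPDB
    return (r_sample / R_VPDB - 1) * 1000

def discrimination(d_source, d_product):
    # photosynthetic 13C discrimination (capital-delta notation), per mil
    return (d_source - d_product) / (1 + d_product / 1000)

print(f"d13C of VPDB itself = {delta13C(R_VPDB):.1f} per mil")   # 0 by definition
# Assumed values: atmospheric CO2 near -8 per mil, C3 plant tissue near -27 per mil
print(f"Delta13C = {discrimination(-8.0, -27.0):.1f} per mil")   # about 19.5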
See also
Isotope geochemistry
Fractionation of carbon isotopes in oxygenic photosynthesis
Isotopes of carbon
Isotopic signature
References
Chemical kinetics
Photosynthesis
Isotope separation | Kinetic isotope effects of RuBisCO | Chemistry,Biology | 2,341 |
70,578,382 | https://en.wikipedia.org/wiki/Flavipin | Flavipin is a phototoxic, antibiotic and antifungal metabolite with the molecular formula C9H8O5, which is produced by the fungi Aspergillus flavipes, Epicoccum nigrum and Epicoccum andropogonis. Flavipin is also a potent antioxidant.
References
Further reading
Antibiotics
Benzaldehydes
Phenols
Alkyl-substituted benzenes | Flavipin | Chemistry,Biology | 92 |
2,088,194 | https://en.wikipedia.org/wiki/Ammonium%20persulfate | Ammonium persulfate (APS) is the inorganic compound with the formula (NH4)2S2O8. It is a colourless (white) salt that is highly soluble in water, much more so than the related potassium salt. It is a strong oxidizing agent that is used as a catalyst in polymer chemistry, as an etchant, and as a cleaning and bleaching agent.
Preparation and structure
Ammonium persulfate is prepared by electrolysis of a cold concentrated solution of either ammonium sulfate or ammonium bisulfate in sulfuric acid at a high current density. The method was first described by Hugh Marshall.
The ammonium, sodium, and potassium salts adopt very similar structures in the solid state, according to X-ray crystallography. In the ammonium salt, the O-O distance is 1.497 Å. The sulfate groups are tetrahedral, with three short S-O distances near 1.44 Å and one long S-O bond at 1.64 Å.
Uses
As a source of radicals, APS is mainly used as a radical initiator in the polymerization of certain alkenes. Commercially important polymers prepared using persulfates include styrene-butadiene rubber and polytetrafluoroethylene. In solution, the dianion dissociates into radicals:
[O3SO–OSO3]2− → 2 [SO4]•−
Regarding its mechanism of action, the sulfate radical adds to the alkene to give a sulfate ester radical. It is also used along with tetramethylethylenediamine to catalyze the polymerization of acrylamide in making a polyacrylamide gel, hence being important for SDS-PAGE and western blot.
Illustrative of its powerful oxidizing properties, ammonium persulfate is used to etch copper on printed circuit boards as an alternative to ferric chloride solution. This property was described as early as 1908, when John William Turrentine used a dilute ammonium persulfate solution to etch copper. Turrentine weighed copper spirals before placing them into the ammonium persulfate solution for an hour. After an hour, the spirals were weighed again and the amount of copper dissolved by ammonium persulfate was recorded. The experiment was extended to other metals such as nickel, cadmium, and iron, all of which yielded similar results.
The oxidation equation is thus: S2O82− (aq) + 2 e− → 2 SO42− (aq).
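Combined with the oxidation of copper (Cu → Cu2+ + 2 e−), the overall stoichiometry is one persulfate ion per copper atom dissolved, which makes Turrentine-style mass-loss measurements easy to interpret. A minimal Python sketch of the arithmetic; the mass loss is a made-up example, not a figure from the 1908 paper:

M_CU = 63.55     # g/mol, copper
M_APS = 228.20   # g/mol, ammonium persulfate (NH4)2S2O8

mass_loss_g = 0.127           # hypothetical copper dissolved from one spiral
mol_cu = mass_loss_g / M_CU   # Cu + S2O8(2-) -> Cu(2+) + 2 SO4(2-), so 1:1
print(f"Cu dissolved: {mol_cu * 1000:.2f} mmol")
print(f"APS consumed: {mol_cu * M_APS:.3f} g")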
Ammonium persulfate is a standard ingredient in hair bleach.
Persulfates are used as oxidants in organic chemistry. For example, in the Minisci reaction and Elbs persulfate oxidation
Safety
Airborne dust containing ammonium persulfate may be irritating to eye, nose, throat, lung and skin upon contact. Exposure to high levels of dust may cause difficulty in breathing.
It has been noted that persulfate salts are a major cause of asthmatic effects. Furthermore, it has been suggested that exposure to ammonium persulfate can cause asthmatic effects in hairdressers and receptionists working in the hairdressing industry. These asthmatic effects are proposed to be caused by the oxidation of cysteine residues, as well as methionine residues.
References
External links
International Chemical Safety Card 0632
Persulfates
Peroxides
Ammonium compounds
Oxidizing agents | Ammonium persulfate | Chemistry | 729 |
75,055,566 | https://en.wikipedia.org/wiki/HD%20196917 | HD 196917 (HR 7909; 17 G. Microscopii; NSV 25227) is a solitary star located in the southern constellation Microscopium. It is faintly visible to the naked eye as a red-hued point of light with an apparent magnitude of 5.74. Gaia DR3 parallax measurements imply a distance of 426 light-years and it is rapidly approaching the Solar System with a heliocentric radial velocity of . At its current distance, HD 196917's brightness is diminished by 0.13 magnitudes due to interstellar extinction and it has an absolute magnitude of +0.04.
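The quoted absolute magnitude follows from the distance modulus, M = m − 5 log10(d / 10 pc) − A. A quick Python check of the article's numbers:

import math

m = 5.74              # apparent magnitude
d_pc = 426 / 3.26156  # 426 light-years converted to parsecs
A = 0.13              # interstellar extinction, in magnitudes

M = m - 5 * math.log10(d_pc / 10) - A
print(f"absolute magnitude = {M:+.2f}")   # close to the quoted +0.04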
HD 196917 has a stellar classification of either M1 III or M0 III, indicating that it is an evolved M-type giant. It is currently on the asymptotic giant branch, fusing hydrogen and helium in shells around an inert carbon core. It has 1.27 times the mass of the Sun but has expanded to 44.2 times the radius of the Sun. It radiates 620 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 196917 is metal deficient with an iron abundance of [Fe/H] = −0.28, or 52.5% of the Sun's.
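Luminosity, radius, and temperature are tied together by the Stefan–Boltzmann relation, L/L☉ = (R/R☉)² (T/T☉)⁴. The sketch below inverts the relation to estimate the effective temperature implied by the quoted luminosity and radius:

T_SUN = 5772.0   # K, nominal solar effective temperature

L = 620.0        # luminosity in solar units (from the article)
R = 44.2         # radius in solar units (from the article)

T_eff = T_SUN * (L / R**2) ** 0.25
print(f"implied T_eff = {T_eff:.0f} K")   # roughly 4300 K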
The variability of the star was first detected in 1997 by the Hipparcos mission, which found variations between magnitudes 5.82 and 5.86 in the Hipparcos passband. Koen & Eyer (2002) observed visual variations from the star and found that HD 196917 varies by 0.009 magnitudes within 21.01 hours. As of 2004, its variability had not been confirmed.
References
M-type giants
Asymptotic-giant-branch stars
Suspected variables
Microscopium
Microscopii, 17
CD-32 16130
196917
102092
7909
00441396067 | HD 196917 | Astronomy | 398 |
64,547,814 | https://en.wikipedia.org/wiki/Bruno%20Oberle | Bruno Oberle is a Swiss environmental scientist and economist. He is currently the president of the WRF, World Resources Forum.
Biography
Oberle was born in St. Gallen, Switzerland, on 12 October 1955, and grew up in Locarno and Zürich. He studied biology and environmental sciences at ETH Zurich, where he obtained his PhD.
Oberle was a professor at the École Polytechnique Fédérale de Lausanne, where he held the Chair of the Green Economy and Resource Governance program, and is a former director of the Swiss Federal Office for the Environment and State Secretary for the Environment. Presently, he is Director General of the International Union for Conservation of Nature.
References
1955 births
Living people
Academic staff of the École Polytechnique Fédérale de Lausanne
Environmental scientists
People associated with the International Union for Conservation of Nature
People from the canton of St. Gallen
Swiss civil servants
ETH Zurich alumni | Bruno Oberle | Environmental_science | 187 |
404,170 | https://en.wikipedia.org/wiki/Koku | The koku is a Chinese-based Japanese unit of volume. 1 koku is equivalent to 10 to or approximately 180 litres, or about 150 kilograms of rice. It converts, in turn, to 100 shō and 1000 gō. One gō is the traditional volume of a single serving of rice (before cooking), used to this day for the plastic measuring cup that is supplied with commercial Japanese rice cookers.
The koku in Japan was typically used as a dry measure. The amount of rice production measured in koku was the metric by which the magnitude of a feudal domain (han) was evaluated. A feudal lord was only considered daimyō class when his domain amounted to at least 10,000 koku. As a rule of thumb, one koku was considered a sufficient quantity of rice to feed one person for one year.
The Chinese equivalent or cognate unit for capacity is the shi or dan (石), also known as hu (斛), now approximately 103 litres but historically about 59.44 litres.
Chinese equivalent
The Chinese dan is equal to 10 dou (斗) "pecks", or 100 sheng (升) "pints". While the current dan is 103 litres in volume, the dan of the Tang dynasty (618–907) period equalled 59.44 litres.
Modern unit
The exact modern koku is calculated to be 180.39 litres, 100 times the capacity of a modern shō. This modern koku is essentially defined to be the same as the koku from the Edo period (1600–1868), namely 100 times the shō, which equals 64827 cubic bu in the traditional measuring system.
Origin of the modern unit
The kyōmasu, the semi-official one-shō measuring box since the late 16th century under Daimyo Nobunaga, began to be made in a different (larger) size in the early Edo period, sometime during the 1620s. Its dimensions, given in the traditional Japanese length-unit system, were 4 sun 9 bu square times 2 sun 7 bu in depth. Its volume could be calculated by multiplication:
1 koku = 100 shō = 100 × (49 bu × 49 bu × 27 bu) = 100 × 64,827 cubic bu
Although this was referred to as the shin-kyōmasu or the "new" measuring cup in its early days, its use supplanted the old measure in most areas in Japan, until the only place still left using the old cup (the edomasu) was the city of Edo, and the Edo government passed an edict declaring the shin-kyōmasu the official nationwide measure standard in 1669 (Kanbun 9).
Modern measurement enactment
When the 1891 Japanese Weights and Measures Act was promulgated, it defined the unit shō as the capacity of the standard masu of 64827 cubic bu. The same act also defined the length of the shaku as 10/33 metre. The metric equivalent of the modern shō is 2401/1331 litres. The modern koku is therefore 240100/1331 litres, or 180.39 litres.
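The 180.39-litre figure can be reproduced directly from the box dimensions and the metric shaku (1 shaku = 10/33 m, 1 sun = 1/10 shaku, 1 bu = 1/100 shaku). A short Python sketch of the computation:

from fractions import Fraction

shaku_m = Fraction(10, 33)   # 1891 definition: 1 shaku = 10/33 metre
bu_m = shaku_m / 100         # 1 bu = 1/100 shaku

# Box of 4 sun 9 bu square by 2 sun 7 bu deep = 49 x 49 x 27 cubic bu
sho_m3 = (49 * bu_m) * (49 * bu_m) * (27 * bu_m)
sho_l = sho_m3 * 1000        # cubic metres -> litres
koku_l = 100 * sho_l

print(sho_l)           # 2401/1331 litres per sho
print(float(koku_l))   # ~180.39 litres per koku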
The modern shaku defined here is set to equal the so-called setchū-jaku (or "compromise shaku"), measuring 302.97 mm, a middle-ground value between two different standards. A researcher has pointed out that the official measuring cups ought to have used a shaku which was 0.2% longer. However, the actual measuring cups in use did not quite attain the metric standard, and when the Japanese Ministry of Finance collected actual samples of masu from the measuring-cup guilds of both eastern and western Japan, they found that the measurements were close to the average of the two.
Lumber koku
The "lumber " or "maritime " is defined as equal to 10 cubic in the lumber or shipping industry, compared with the standard measures 6.48 cubic . A lumber is conventionally accepted as equivalent to 120 board feet, but in practice may convert to less. In metric measures 1 lumber is about .
Historic use
The exact measure now in use was devised around the 1620s, but not officially adopted for all of Japan until the Kanbun era (1660s).
Feudal Japan
Under the Tokugawa shogunate (1603–1868) of the Edo period of Japanese history, each feudal domain had an assessment of its potential income known as kokudaka (production yield) which in part determined its order of precedence at the Shogunal court. The smallest kokudaka to qualify the fief-holder for the title of daimyō was 10,000 koku, and Kaga han, the largest fief (other than that of the shōgun), was called the "million-koku domain". Its holdings totaled around 1,025,000 koku. Many samurai, including hatamoto (a high-ranking samurai), received stipends in koku, while a few received salaries instead.
The kokudaka was reported in terms of brown rice (genmai) in most places, with the exception of the land ruled by the Satsuma clan, which reported in terms of unhusked or non-winnowed rice (momi). Since this practice had persisted, past Japanese rice production statistics need to be adjusted for comparison with other countries that report production by milled or polished rice.
Even in certain parts of the Tōhoku region or Ezo (Hokkaidō), where rice could not be grown, the economy was still measured in terms of koku, with other crops and produce converted to their equivalent value in terms of rice. The kokudaka was not adjusted from year to year, and thus some fiefs had larger economies than their nominal koku indicated, due to land reclamation and new rice field development, which allowed them to fund development projects.
As measure of cargo ship class
Koku was also used to measure how much a ship could carry when all its loads were rice. Smaller ships carried 50 koku while the biggest ships carried over 1,000 koku. The biggest ships were larger than military vessels owned by the shogunate.
In popular culture
The Hyakumangoku Matsuri (Million-Koku Festival) in Kanazawa, Japan celebrates the arrival of daimyō Maeda Toshiie into the city in 1583, although Maeda's income was not raised to over a million koku until after the Battle of Sekigahara in 1600.
In fiction
The James Clavell novel Shōgun uses the Koku measure extensively as a plot device by many of the main characters as a method of reward, punishment and enticement. While fiction, it shows the importance of the fief, the rice measure and payments.
Explanatory notes
References
Citations
Bibliography
Economy of feudal Japan
Human-based units of measurement
Japanese historical terms
Obsolete units of measurement
Units of volume
Standards of Japan | Koku | Mathematics | 1,323 |
2,565,775 | https://en.wikipedia.org/wiki/Q%20star | A Q-star, also known as a grey hole, is a hypothetical type of compact, heavy neutron star with an exotic state of matter. Such a star can be smaller than the progenitor star's Schwarzschild radius and have a gravitational pull so strong that some, but not all, light can escape. The Q stands for a conserved particle number. A Q-star may be mistaken for a stellar black hole.
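For scale, the Schwarzschild radius mentioned above is r_s = 2GM/c². A quick Python sketch for an object of two solar masses (an illustrative mass, not one taken from the Q-star literature):

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    # r_s = 2 G M / c^2
    return 2 * G * mass_kg / C**2

print(f"r_s for 2 solar masses = {schwarzschild_radius(2 * M_SUN) / 1000:.1f} km")   # ~5.9 km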
Types of Q-stars
Q-ball
B-ball, stable Q-balls with a large baryon number B. They may exist in neutron stars that have absorbed Q-ball(s).
See also
Black hole
Stellar black hole
Compact star
Exotic star
Boson star
Electroweak star
Preon star
Strange star
Quark star
References
Further reading
Degenerate stars
Compact stars
Hypothetical stars | Q star | Physics,Astronomy | 170 |
11,512,405 | https://en.wikipedia.org/wiki/Exobasidium%20vaccinii-uliginosi | Exobasidium vaccinii-uliginosi is a species of fungus in the family Exobasidiaceae. It is a plant pathogen.
References
Fungal plant pathogens and diseases
Ustilaginomycotina
Fungi described in 1894
Fungus species | Exobasidium vaccinii-uliginosi | Biology | 54 |
33,168,651 | https://en.wikipedia.org/wiki/Emiliana%20alexandri | Emiliana is an extinct genus of planthopper in the Tropiduchidae tribe Emilianini and containing the single species Emiliana alexandri. The species is known only from the Middle Eocene Parachute Member, part of the Green River Formation, in the Piceance Creek Basin, Garfield County, northwestern Colorado, USA.
History and classification
Emiliana alexandri is known only from one fossil, the part and counterpart holotype, specimen number "PIN no. 4621/546". The specimen is composed of a single isolated tegmen which is preserved as a compression fossil in sedimentary rock. The fossil was recovered by David Kohls of Colorado Mountain College and A. P. Rasnitsyn of the Russian Academy of Sciences from outcrops of the Green River Formation's Parachute Member exposed in the Anvil Points area of Garfield County, Colorado, USA. The type specimen is currently preserved in the paleoentomology collections housed in the Paleontological Institute, Russian Academy of Sciences, located in Moscow, Russia. Emiliana was first studied by Dmitry Shcherbakov of the Paleontological Institute, Russian Academy of Sciences; his 2006 type description of the tribe, genus, and species was published in the Russian Entomological Journal. The generic name was coined by Shcherbakov in recognition of the world authority on planthoppers, A. F. Emeljanov, with the tribe name being a derivative of the genus name. The etymology of the specific epithet alexandri is in reference to Emeljanov's first name, Alexandr.
When Emiliana alexandri was described it displaced the genus Jantaritambia, which is known from Baltic Amber specimens, as the oldest member of Tropiduchidae to be described from the fossil record, being 10 million years older than Jantaritambia. The genus is noted to be similar to the modern tropiduchid genera Neommatissus, Paricana, Pseudoparicana, and Paricanoides.
Description
The E. alexandri type specimen is a well-preserved, almost complete adult fore-wing, called a tegmen, which is long. The tegmen is preserved with a mostly pale or possibly hyaline coloration, with both the wing base and wing tip darkened. The forking of the Cu vein occurs much closer to the vein base than in other members of the family. Also, the merging of the MP and CuA veins is distinct to the genus. Emiliana possesses a crossvein, the cup-pcu, otherwise seen only in the extinct monotypic tribe Jantaritambiini.
References
Fossil taxa described in 2006
†
Eocene insects of North America
†
Species known from a single specimen | Emiliana alexandri | Biology | 559 |
10,335,099 | https://en.wikipedia.org/wiki/Jackson%20integral | In q-analog theory, the Jackson integral is a series in the theory of special functions that expresses the operation inverse to q-differentiation.
The Jackson integral was introduced by Frank Hilton Jackson. Methods for its numerical evaluation have been described in the literature.
Definition
Let f(x) be a function of a real variable x. For a real variable a, the Jackson integral of f is defined by the following series expansion:
∫_0^a f(x) d_q x = (1 − q) a Σ_{k=0}^∞ q^k f(q^k a).
Consistent with this is the definition for general limits:
∫_a^b f(x) d_q x = ∫_0^b f(x) d_q x − ∫_0^a f(x) d_q x.
More generally, if g(x) is another function and D_q g denotes its q-derivative, we can formally write
∫_0^a f(x) D_q g(x) d_q x = (1 − q) a Σ_{k=0}^∞ q^k f(q^k a) D_q g(q^k a),
or
∫_0^a f(x) d_q g(x) = Σ_{k=0}^∞ f(q^k a) (g(q^k a) − g(q^{k+1} a)),
giving a q-analogue of the Riemann–Stieltjes integral.
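The series converges rapidly for 0 < q < 1, so the definition is easy to evaluate numerically. A minimal Python sketch, checking that the Jackson integral of x² over [0, 1], which equals 1/(1 + q + q²), approaches the classical value 1/3 as q → 1:

def jackson_integral(f, a, q, terms=2000):
    # Truncated Jackson integral: (1 - q) * a * sum over k of q^k * f(q^k * a)
    return (1 - q) * a * sum(q**k * f(q**k * a) for k in range(terms))

f = lambda x: x**2
for q in (0.5, 0.9, 0.99):
    print(f"q = {q}: {jackson_integral(f, 1.0, q):.6f} (exact: {1 / (1 + q + q*q):.6f})")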
Jackson integral as q-antiderivative
Just as the ordinary antiderivative of a continuous function can be represented by its Riemann integral, it is possible to show that the Jackson integral gives a unique q-antiderivative within a certain class of functions.
Theorem
Suppose that 0 < q < 1. If |f(x) x^α| is bounded on the interval (0, A] for some 0 ≤ α < 1, then the Jackson integral converges to a function F(x) on (0, A] which is a q-antiderivative of f(x). Moreover, F(x) is continuous at x = 0 with F(0) = 0, and it is a unique antiderivative of f(x) in this class of functions.
Notes
References
Victor Kac, Pokman Cheung, Quantum Calculus, Universitext, Springer-Verlag, 2002.
Jackson F H (1904), "A generalization of the functions Γ(n) and xn", Proc. R. Soc. 74 64–72.
Jackson F H (1910), "On q-definite integrals", Q. J. Pure Appl. Math. 41 193–203.
Special functions
Q-analogs | Jackson integral | Mathematics | 335 |
7,290,910 | https://en.wikipedia.org/wiki/B%E2%80%93Bbar%20oscillation | Neutral B meson oscillations (or B–Bbar oscillations) are one of the manifestations of neutral particle oscillation, a fundamental prediction of the Standard Model of particle physics. It is the phenomenon of B mesons changing (or oscillating) between their matter and antimatter forms before their decay. The strange B meson (Bs) can exist as either a bound state of a strange antiquark and a bottom quark, or of a strange quark and a bottom antiquark. The oscillations in the neutral B sector are analogous to the phenomena that produce long- and short-lived neutral kaons.
Bs–Bsbar mixing was observed by the CDF experiment at Fermilab in 2006 and by LHCb at CERN in 2011 and 2021.
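Neglecting CP violation and the width difference, the probability that a meson produced as a Bs is observed as its antiparticle after proper time t is P(t) = e^(−t/τ) (1 − cos Δm t)/2. A Python sketch with indicative parameter values (Δms ≈ 17.77 inverse picoseconds as reported by CDF in 2006; the lifetime of about 1.5 ps is an approximation):

import math

DM = 17.77   # Bs oscillation frequency, 1/ps (CDF 2006 measurement)
TAU = 1.5    # approximate Bs lifetime, ps

def p_mixed(t_ps):
    # probability of observing the antiparticle at proper time t
    # (valid when CP violation and the width difference are neglected)
    return math.exp(-t_ps / TAU) * (1 - math.cos(DM * t_ps)) / 2

for t in (0.0, 0.09, 0.18, 0.35):
    print(f"t = {t:4.2f} ps -> P(mixed) = {p_mixed(t):.3f}")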
Excess of matter over antimatter
The Standard Model predicts that regular matter is slightly favored in these oscillations over its antimatter counterpart, making strange B mesons of special interest to particle physicists. The observation of the Bs–Bsbar mixing phenomenon led physicists to propose the construction of the so-named "B factories" in the early 1990s. They realized that a precise B–Bbar oscillation measurement could pin down the unitarity triangle and perhaps explain the excess of matter over antimatter in the universe. To this end, construction began on two "B factories" in the late nineties, one at the Stanford Linear Accelerator Center (SLAC) in California and one at KEK in Japan.
These B factories, BaBar and Belle, were set at the Υ(4S) resonance, which is just above the threshold for decay into two B mesons.
On 14 May 2010, physicists at the Fermi National Accelerator Laboratory reported that the oscillations decayed into matter 1% more often than into antimatter, which may help explain the abundance of matter over antimatter in the observed Universe. However, more recent results at LHCb in 2011, 2012, and 2021 with larger data samples have demonstrated no significant deviation from the Standard Model prediction of very nearly zero asymmetry.
See also
Baryogenesis
CP Violation
Kaon
Neutral particle oscillation
Strange B meson
References
Further reading
— paper describing the discovery of B-meson mixing by the ARGUS collaboration
— announcement of the 5 sigma discovery
External links
BaBar Public Homepage
Belle Public Homepage
B physics | B–Bbar oscillation | Physics | 479 |
6,151,568 | https://en.wikipedia.org/wiki/Mason%27s%20miter | A mason's mitre is a type of mitre joint, traditionally used in stonework or masonry but commonly seen in kitchen countertops. In a mason's mitre, the two elements being joined meet as for a butt joint, but a small section of one member is removed, creating a socket to receive the end of the other. A small mitre is made at the inside edges of the socket and on the end of the intersecting member so that edge treatments are carried through the joint appropriately.
The mason's mitre allows the appearance of a mitre joint to be created with much less waste than occurs with a common mitre joint, in which triangular sections must be removed from the ends of both joint members.
The terms "back mitre" and "mason's mitre" (or "miter") are often used interchangeably, but are different types of joints, and used for different purposes. Both joints are traditionally used in stone or woodwork. Neither joint requires that one part be coped (or fit) over the other. In the back mitre, the joints follow the mitre and stile/rail joining lines. In the mason's mitre, the intersecting mouldings are carved within a single stone block or the woodwork's stile, with the rail or adjacent block having a straight profile.
References
External links
Joinery
Masonry
Kitchen countertops
Woodworking | Mason's miter | Engineering | 287 |
70,207,477 | https://en.wikipedia.org/wiki/Robert%20A.%20Bosch | Robert A. (Bob) Bosch (born August 13, 1963, in Buffalo NY) is an author, recreational mathematician and the James F. Clark Professor of Mathematics at Oberlin College. He is known for domino art and for combining graph theory and mathematical optimization to design connect-the-dots eye candy: labyrinths, knight's tours, string art and TSP Art.
He is the author of Opt Art: From Mathematical Optimization to Visual Design.
Education and career
Bosch received a BA in mathematics at Oberlin College in 1985, an MS in operations research and statistics at Rensselaer Polytechnic Institute in 1987 and a PhD in operations research with the thesis Partial Updating in Interior-Point Methods for Linear Programming under Kurt Martin Anstreicher at Yale University in 1991.
He has been at Oberlin College since 1991 where he teaches mathematics, statistics and computer science.
Combining art and mathematics
Bosch is passionate about using computers and mathematical optimization techniques to design visual art. He refers to this work as "Opt Art." He has written dozens of papers on this topic, many of them with Oberlin College student collaborators. Over the years Bosch has created numerous portraits drawn with a single continuous line. Some of these drawings are solutions of the Traveling salesman problem (or solutions to related problems). Examples include the "figurative tours" he created with computer scientist Tom Wexler and renditions of Leonardo da Vinci's Mona Lisa, a Van Gogh self portrait, and Vermeer's Girl with a Pearl Earring.
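The TSP Art pipeline can be sketched in a few lines of Python: sample stipple points with density proportional to image darkness, then connect them with a single tour. The greedy nearest-neighbour tour below is a crude stand-in for the serious TSP solvers used in practice, and "portrait.png" is a hypothetical input file:

import random
from PIL import Image   # Pillow

img = Image.open("portrait.png").convert("L")   # hypothetical grayscale input
w, h = img.size

# 1) Stipple: keep a random point with probability equal to its darkness
pts = []
while len(pts) < 2000:
    x, y = random.uniform(0, w - 1), random.uniform(0, h - 1)
    if random.random() < 1 - img.getpixel((int(x), int(y))) / 255:
        pts.append((x, y))

# 2) Tour: greedy nearest-neighbour heuristic
tour = [pts.pop()]
while pts:
    last = tour[-1]
    nxt = min(pts, key=lambda p: (p[0] - last[0])**2 + (p[1] - last[1])**2)
    pts.remove(nxt)
    tour.append(nxt)

# 3) Draw the single continuous line as an SVG polyline
path = " ".join(f"{x:.1f},{y:.1f}" for x, y in tour)
with open("tour.svg", "w") as fh:
    fh.write(f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
             f'<polyline points="{path}" fill="none" stroke="black"/></svg>')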
Domino portraits such as his renderings of Martin Luther King and Barack Obama are an expansion of the mathematical genre of opt art in another direction.
Awards
2007 Trevor Evans Award from the Mathematical Association of America (MAA), for the Math Horizons article "Opt Art".
2012 Inaugural Outstanding Paper Award from the Journal of Mathematics and the Arts, for the article "Simple-Close-Curve Sculptures of Knots and Links".
2010 First Prize in the Mathematical Art Exhibition of The American Mathematical Society (AMS), for the sculpture Embrace.
References
External links
Opt Art: From Mathematical Optimization to Visual Design [video]
Domino Artwork: The Mathematical Artwork of Robert Bosch
Robert Bosch Mathematical Art Galleries
Living people
1963 births
20th-century American mathematicians
21st-century American mathematicians
Recreational mathematicians
Mathematics popularizers
Oberlin College alumni
Rensselaer Polytechnic Institute alumni
Yale School of Management alumni
Oberlin College faculty
Mathematicians from New York (state) | Robert A. Bosch | Mathematics | 491 |
1,636,691 | https://en.wikipedia.org/wiki/Fredkin%20gate | The Fredkin gate (also controlled-SWAP gate and conservative logic gate) is a computational circuit suitable for reversible computing, invented by Edward Fredkin. It is universal, which means that any logical or arithmetic operation can be constructed entirely of Fredkin gates. The Fredkin gate is a circuit or device with three inputs and three outputs that transmits the first bit unchanged and swaps the last two bits if, and only if, the first bit is 1.
Background
The Fredkin gate, conceptualized by Edward Fredkin and Tommaso Toffoli at the MIT Laboratory for Computer Science, represents a pivotal advancement in the field of reversible computing and conservative logic. Developed within the framework of conservative logic, this gate is designed to align computing processes with fundamental physical principles such as the reversibility of dynamical laws and the conservation of energy. The technical rationale behind the Fredkin gate is rooted in addressing the inefficiencies of traditional computing, where irreversible operations typically result in significant energy dissipation.
In contrast to conventional logic gates, which often erase information and thus dissipate heat as per Landauer's principle, the Fredkin gate maintains reversibility — a property that ensures no information is lost during the computation process. Each output state of the gate uniquely determines its input state, which not only preserves information but also aligns with energy conservation principles. This characteristic is particularly crucial as the demand for computational power grows, making energy efficiency a key consideration.
The invention of the Fredkin gate was motivated by the quest to minimize the energy footprint of computational operations. It allows for the construction of computing systems that are not only efficient in terms of processing speed and power consumption but also environmentally sustainable. By embodying principles of reversible computing, the Fredkin gate offers a practical solution to reducing the energy costs associated with digital computations, marking a significant shift towards more sustainable computing technologies.
Definition
The basic Fredkin gate is a controlled swap gate (CSWAP gate) that maps three inputs (C, I1, I2) onto three outputs (C, O1, O2). The C input is mapped directly to the C output. If C = 0, no swap is performed; I1 maps to O1, and I2 maps to O2. Otherwise, the two outputs are swapped so that I1 maps to O2, and I2 maps to O1. It is easy to see that this circuit is reversible, i.e., "undoes" itself when run backwards. A generalized n × n Fredkin gate passes its first n − 2 inputs unchanged to the corresponding outputs and swaps its last two outputs if and only if the first n − 2 inputs are all 1.
Controlled-SWAP Logic: The Fredkin gate, a three-bit controlled-SWAP gate, operates by conditionally swapping two target bits based on the state of a control bit. If the control bit is 1, the gate swaps the target bits; if 0, the bits pass through unchanged.
Reversible Computing: The gate is reversible, meaning that no information is lost during computation. This property aligns with principles of conservative logic, preserving data and reducing energy dissipation. This corresponds nicely to the conservation of mass in physics and helps to show that the model is not wasteful.
Truth functions with AND, OR, XOR, and NOT
The Fredkin gate can be defined using truth functions with AND, OR, XOR, and NOT, as follows:
O1 = I1 XOR S,
O2 = I2 XOR S,
Cout = Cin,
where S = (I1 XOR I2) AND C.
Alternatively:
O1 = (NOT C AND I1) OR (C AND I2),
O2 = (C AND I1) OR (NOT C AND I2),
Cout = Cin.
Completeness
One way to see that the Fredkin gate is universal is to observe that it can be used to implement AND, NOT and OR (each identity below is verified in the sketch that follows):
If I2 = 0, then O2 = C AND I1.
If I2 = 1, then O1 = C OR I1.
If I1 = 0 and I2 = 1, then O2 = NOT C.
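A truth-table model makes these identities easy to verify exhaustively; a small Python sketch (the helper name fredkin is ours, not standard):

def fredkin(c, i1, i2):
    # Controlled-SWAP: pass c through; exchange i1 and i2 iff c == 1
    return (c, i2, i1) if c else (c, i1, i2)

for c in (0, 1):
    for a in (0, 1):
        assert fredkin(c, a, 0)[2] == (c & a)   # AND: I2 = 0 gives O2 = C AND I1
        assert fredkin(c, a, 1)[1] == (c | a)   # OR:  I2 = 1 gives O1 = C OR I1
    assert fredkin(c, 0, 1)[2] == (1 - c)       # NOT: I1 = 0, I2 = 1 gives O2 = NOT C
print("AND, OR and NOT all realized by one reversible gate")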
Hardware description
We can encode the truth table in a hardware description language such as Verilog:
module fredkin_gate (
  input u, input x1, input x2,
  output reg v, output reg y1, output reg y2);

  // Combinational controlled-SWAP: u passes through unchanged;
  // x1 and x2 are exchanged when u = 1.
  always @(*) begin
    v = u;
    y1 = (~u & x1) | (u & x2);
    y2 = (u & x1) | (~u & x2);
  end
endmodule
Example
Three-bit full adder (add with carry) using five Fredkin gates. The circuit also produces "garbage" output bits whose values depend on the inputs; reversibility requires that they be carried along.
Inputs on the left, including two constants, go through three gates to quickly determine the parity. The 0 and 1 bits swap places for each input bit that is set, resulting in the parity bit on the 4th row and the inverse of the parity on the 5th row.
Then the carry row and the inverse-parity row swap if the parity bit is set, and swap again if one of the two added input bits is set (it does not matter which is used), with the resulting carry output appearing on the 3rd row.
The adder's inputs are only used as gate controls, so they appear unchanged in the output.
Applications
Quantum photonic chip implementation
Recent research has demonstrated the Fredkin gate on programmable silicon photonic chips. These chips use a network of Mach-Zehnder interferometers to route photons efficiently, creating a versatile and scalable platform that can handle multiple quantum gates. This approach allows for integrating Fredkin gates into large-scale quantum processors, paving the way for future quantum computing advancements.
Efficient controlled-SWAP operation
In a photonic setup, the Fredkin gate serves as an effective controlled-SWAP mechanism, enabling the conditional swap of target qubits. This is particularly valuable in generating high-fidelity Greenberger-Horne-Zeilinger (GHZ) states, which are crucial for quantum communication and other protocols. The gate thus provides a powerful tool for quantum protocols that require efficient conditional operations.
Quantum state estimation
The Fredkin gate's controlled operations allow for estimating the overlap between quantum states without requiring resource-intensive quantum state tomography. This makes it particularly useful for quantum communication, measurement, and cryptography, where efficiency and accuracy are paramount.
Quantum Fredkin gate
On March 25, 2016, researchers from Griffith University and the University of Queensland announced they had built a quantum Fredkin gate that uses the quantum entanglement of particles of light to swap qubits. The availability of quantum Fredkin gates may facilitate the construction of quantum computers.
See also
Quantum computing
Quantum gate
Quantum circuit
Quantum programming
Toffoli gate, which is a controlled-controlled-NOT gate.
References
Further reading
Logic gates
Quantum gates
Reversible computing | Fredkin gate | Physics | 1,317 |
27,740,507 | https://en.wikipedia.org/wiki/C3H6Cl2 | {{DISPLAYTITLE:C3H6Cl2}}
The molecular formula C3H6Cl2 (molar mass: 112.98 g/mol, exact mass: 111.9847 u) may refer to:
1,2-Dichloropropane
1,3-Dichloropropane | C3H6Cl2 | Chemistry | 71 |
68,135,719 | https://en.wikipedia.org/wiki/Natural%20and%20Built%20Environment%20Act%202023 | The Natural and Built Environment Act 2023 (NBA), now repealed, was one of the three laws intended to replace New Zealand's Resource Management Act 1991 (RMA). The NBA aimed to promote the protection and enhancement of the natural and built environment, while providing for housing and preparing for the effects of climate change.
An exposure draft of the bill was released in June 2021 to allow for public submissions. The bill passed its third reading on 15 August 2023, and received royal assent on 23 August 2023. On 23 December 2023, the NBA and the Spatial Planning Act (SPA) were both repealed by the National-led coalition government.
Exposure draft
The Natural and Built Environment Bill exposure draft features many contrasts with its RMA predecessor. These include the ability to set environmental limits, the goal of reducing greenhouse gas emissions, the provisions to increase housing supply, and the ability for planners to assess activities based on outcomes. A notable difference is the bill's stronger attention to Māori involvement in decision making and Māori environmental issues. Greater emphasis is put on upholding the nation's founding document, the Treaty of Waitangi.
Under the bill, over 100 plans and policy statements will be replaced by just 14 plans. These plans will be prepared by new Regional Council Planning Committees and their planning secretariats. Each planning committee will be composed of one person to represent the Minister of Conservation, appointed Māori representatives, and elected people from each district within the region. The committee will have an array of responsibilities, including the ability to vote on plan changes, set environmental limits for the region, and consider recommendations from hearings. The planning secretariat would draft the plans and provide expert advice.
Provisions
In mid November 2022, the Natural and Built Environment Act was introduced into parliament. In its initial version, the bill establishes a National Planning Framework (NPF) setting out rules for land use and regional resource allocation. The NPF also replaces the Government's policy statements on water, air quality and other issues with an umbrella framework. Under NPF's framework, all 15 regions will be required to develop a Natural and Built Environment Plan (NBE) that will replace the 100 district and regional plans, harmonising consenting and planning rules. An independent national Māori entity will also be established to provide input into the NPF and ensure compliance with the Treaty of Waitangi's provisions.
Key provisions have included:
Every person has a responsibility to protect and sustain the health and well-being of the natural environment for the benefit of all New Zealanders.
Every person has a duty to avoid, minimise, remedy, offset, or provide redress for adverse effects including "unreasonable noise."
Prescribes restrictions relating to land, coastal marine area, river and lake beds, water, and discharges.
Establishes a national planning framework (NPF) to provide directions on integrated environmental management, resolve conflicts on environmental matters, and to set environmental limits and strategic directions. This framework will take the form of regulations, which will be considered secondary legislation.
Sets Te Ture Whaimana as the primary direction-setting document for the Waikato and Waipā rivers and activities within their catchments affecting the rivers.
Resource allocation are guided by the principles of sustainability, efficiency, and equity.
Prescribes the criteria for setting environmental limits, human health limits, exemptions, targets, and management units.
Outlines the process for submitting and appealing case to the Environment Court.
Outlines the resource consent process.
History
Background
A 2020 review of the Resource Management Act 1991 (RMA) found various problems with the existing resource management system, and concluded that it could not cope with modern environmental pressures. In January 2021, the government announced that the RMA will be replaced by three acts, with the Natural and Built Environment Bill being the primary of the three.
An exposure draft of the NBA was released in late June 2021.
Introduction
On 14 November 2022, the Sixth Labour Government of New Zealand introduced the Natural and Built Environment Bill into parliament alongside the companion Spatial Planning Act 2023 (SPA) as part of its efforts to replace the Resource Management Act. In response, the opposition National and ACT parties criticised the two replacement bills on the grounds that it created more centralisation, bureaucracy, and did little to reform the problems associated with the RMA process. The Green Party expressed concerns about the perceived lack of environment protection in the proposed legislation.
A third bill, the Climate Adaptation Bill (CAA), was expected to be introduced in 2023 with the goal of passing it into law in 2024. The CAA would have established the systems and mechanisms for protecting communities against the effects of climate change, such as managed retreat in response to rising sea levels. The Climate Adaptation Bill also would have dealt with funding the costs of managing climate change.
First reading
The Natural and Built Environment Bill passed its first reading in the New Zealand House of Representatives on 22 November 2022 by a margin of 74 to 45 votes. The governing Labour and allied Green parties supported the bill while the opposition National, ACT, and Māori parties voted against the bill. The bill's sponsor David Parker and other Labour Members of Parliament including Associate Environment Minister Phil Twyford, Rachel Brooking, and Green MP Eugenie Sage advocated revamping the resource management system due to the unwieldy nature of the Resource Management Act. National MPs Scott Simpson, Chris Bishop, Sam Uffindell, and ACT MP Simon Court argued that the NBA would do little to improve the resource management system and address the centralisation of power and decision-making regarding resource management. Māori Party co-leader Debbie Ngarewa-Packer argued that the bill was insufficient in advancing co-governance and expressed concern that a proposed national Māori entity would undermine the power of Māori iwi (tribes) and hapū (sub-groups). The bill was subsequently referred to the Environment Select Committee.
Select committee
On 27 June 2023, the Environment select committee presented its final report on the Natural Built and Environment Bill. The committee made several recommendations including:
Inserting clauses to emphasise the protection of the health of the natural environment and intergenerational well being.
Inserting a new Clause 3A to outline the key aims of the legislation.
Clarifying clauses around geoheritage sites, greenhouse gas emissions, coastal marine areas, fishing, land supply, customary rights, cultural heritage, and public access.
Defining other natural environment aspects: air, soil, and estuaries.
Allowing the National Planning Framework (NPF) to set management units for freshwater and air and provide direction on them.
Amending Clause 58 to ease restrictions on non-commercial housing on Māori land.
Adding directions on protecting urban trees and the supply of fresh fruits and vegetables to the NPF.
A majority of Environment committee members voted to pass the amendments.
The National, ACT and Green parties released minority submissions on the bill. While supporting a revamp of the Resource Management Act, the National Party argued that the NBA failed to address the problems with the RMA framework, and criticised the NBA as complex, bureaucratic, detrimental to local democracy and property rights. Similarly, the ACT party criticised the legislation as complex, confusing, and claimed it would discourage development. Meanwhile, the Green Party opined that the NBA was insufficient in protecting the environment and reducing environmental degradation.
Second reading
The NBA passed its second reading on 18 July 2023 by a margin of 72 to 47 votes. While it was supported by the Labour, Green parties, and former Green Member of Parliament Elizabeth Kerekere, it was opposed by the National, ACT, Māori parties, and former Labour MP Meka Whaitiri. The House of Representatives also voted to accept the Environment select committee's recommendations. Labour MPs Parker, Brooking, Twyford, Angie Warren-Clark, Neru Leavasa, and Stuart Nash, and Green MP Sage gave speeches defending the bill while National MPs Chris Bishop, Scott Simpson, Barbara Kuriger, Tama Potaka, and ACT MP Simon Court criticised the bill in their speeches.
Third reading
The NBA passed its third reading on 15 August 2023 by margin of 72 to 47 votes. The Labour, Green parties, and Kerekere supported the bill while the National, ACT, Māori parties, and Whaitiri opposed it. Labour MPs Parker, Brooking, Twyford, Warren-Clark, Angela Roberts, Arena Williams, and Lydia Sosene and Green MP Sage defended the bill while National MPs Bishop, Kuriger, and Simpson opposed the bill.
Repeal
On 23 December 2023, the National-led coalition government repealed the Natural and Built Environment Act and the Spatial Planning Act. RMA Reform Minister Chris Bishop announced that New Zealand would revert to the Resource Management Act 1991 while the Government developed replacement legislation.
References
External links
2021 in New Zealand law
2021 in the environment
2022 in New Zealand law
2023 in New Zealand law
2022 in the environment
Environmental law in New Zealand
Environmental mitigation
Natural resource management
Repealed New Zealand legislation
Urban planning in New Zealand
Open environmental policy proposals | Natural and Built Environment Act 2023 | Chemistry,Engineering | 1,826 |
1,569,480 | https://en.wikipedia.org/wiki/TransManche%20Link | TransManche Link (Cross Channel Link) or TML was a British-French construction consortium responsible for building the Channel Tunnel under the English Channel between Cheriton in England, and Coquelles in France.
History
In April 1985 the British and French governments invited proposals for the construction of a link between the two countries to be privately funded. In January 1986 the two governments selected the Channel Tunnel Group/France Manche proposal for the construction of two undersea tunnels. At Canterbury Cathedral on 12 February 1986 the governments signed a treaty approving construction of the Channel Tunnel. In March the concession for the operation of the tunnel was given to Channel Tunnel Group (CTG) and France Manche (FM).
Following the award of this concession CTG was subsumed by the newly formed Eurotunnel plc and FM was similarly replaced with Eurotunnel SA, together these formed the Eurotunnel Group.
In July 1985 the British contractors formed Translink Contractors and the French consortium formed Transmanche Construction. On 18 October 1985 these two groups were merged to create TransManche Link (TML). TML was thus contracted to build the tunnel for its customer, Eurotunnel, who would own and operate it. TML senior management were employees of the partner companies seconded to the new organisation.
In October 1986 Eurotunnel was partially floated and the contractors and banks no longer exercised control over the company. Beginning in 1987 relations between TML and Eurotunnel deteriorated, with significant and increasingly public rows erupting over cost and programme management.
With the completion of the Channel Tunnel TML ceased to exist.
Organisation
The participants were as follows:
Channel Tunnel Group (later Translink Contractors)
Balfour Beatty
Costain
Tarmac Construction
Taylor Woodrow Construction
Wimpey International Construction
NatWest
Midland Bank
France Manche (later Transmanche Construction)
Bouygues
Dumez
Société Auxiliaire d’Entreprise
Société Générale d’Entreprises
Spie Batignolles
Crédit Lyonnais
Banque Nationale de Paris
Banque Indosuez
References
Channel Tunnel
Construction and civil engineering companies of the United Kingdom
Tunnelling organizations
Construction and civil engineering companies established in 1985
British companies established in 1985
Construction and civil engineering companies disestablished in the 20th century | TransManche Link | Engineering | 463 |
61,007,576 | https://en.wikipedia.org/wiki/Transient%20execution%20CPU%20vulnerability | Transient execution CPU vulnerabilities are vulnerabilities in which instructions, most often optimized using speculative execution, are executed temporarily by a microprocessor, without committing their results due to a misprediction or error, resulting in leaking secret data to an unauthorized party. The archetype is Spectre, and transient execution attacks like Spectre belong to the cache-attack category, one of several categories of side-channel attacks. Since January 2018 many different cache-attack vulnerabilities have been identified.
Overview
Modern computers are highly parallel devices, composed of components with very different performance characteristics. If an operation (such as a branch) cannot yet be performed because some earlier slow operation (such as a memory read) has not yet completed, a microprocessor may attempt to predict the result of the earlier operation and execute the later operation speculatively, acting as if the prediction were correct. The prediction may be based on recent behavior of the system. When the earlier, slower operation completes, the microprocessor determines whether the prediction was correct or incorrect. If it was correct, then execution proceeds uninterrupted; if it was incorrect, then the microprocessor rolls back the speculatively executed operations and repeats the original instruction with the real result of the slow operation. Specifically, a transient instruction is an instruction processed in error by the processor (implicating the branch predictor in the case of Spectre) which can affect the micro-architectural state of the processor while leaving the architectural state without any trace of its execution.
In terms of the directly visible behavior of the computer it is as if the speculatively executed code "never happened". However, this speculative execution may affect the state of certain components of the microprocessor, such as the cache, and this effect may be discovered by careful monitoring of the timing of subsequent operations.
If an attacker can arrange that the speculatively executed code (which may be directly written by the attacker, or may be a suitable gadget that they have found in the targeted system) operates on secret data that they are unauthorized to access, and has a different effect on the cache for different values of the secret data, they may be able to discover the value of the secret data.
Timeline
2018
In early January 2018, it was reported that all Intel processors made since 1995 (besides Intel Itanium and pre-2013 Intel Atom) have been subject to two security flaws dubbed Meltdown and Spectre.
The impact on performance resulting from software patches is "workload-dependent". Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published. Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th-generation Core platforms, benchmark performance drops of 2–14% have been measured. Meltdown patches may also produce performance loss. It is believed that "hundreds of millions" of systems could be affected by these flaws.
More security flaws were disclosed on May 3, 2018, on August 14, 2018, on January 18, 2019, and on March 5, 2020.
At the time, Intel was not commenting on this issue.
On March 15, 2018, Intel reported that it will redesign its CPUs (performance losses to be determined) to protect against the Spectre security vulnerability, and expects to release the newly redesigned processors later in 2018.
On May 3, 2018, eight additional Spectre-class flaws were reported. Intel reported that they are preparing new patches to mitigate these flaws.
On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). They reported that previously released microcode updates, along with new, pre-release microcode updates can be used to mitigate these flaws.
2019
On January 18, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", allowing a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines. Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations for Spectre.
2020
On March 5, 2020, computer security experts reported another Intel chip security flaw, besides the Meltdown and Spectre flaws, with the systematic name CVE-2019-0090 (or "Intel CSME Bug"). This newly found flaw is not fixable with a firmware update, and affects nearly "all Intel chips released in the past five years".
2021
In March 2021 AMD security researchers discovered that the Predictive Store Forwarding algorithm in Zen 3 CPUs could be used by malicious applications to access data it shouldn't be accessing. According to Phoronix there's little performance impact in disabling the feature.
In June 2021, two new vulnerabilities, Speculative Code Store Bypass (SCSB, CVE-2021-0086) and Floating Point Value Injection (FPVI, CVE-2021-0089), affecting all modern x86-64 CPUs both from Intel and AMD were discovered. In order to mitigate them software has to be rewritten and recompiled. ARM CPUs are not affected by SCSB but some certain ARM architectures are affected by FPVI.
Also in June 2021, MIT researchers revealed the PACMAN attack on Pointer Authentication Codes (PAC) in ARM v8.3A.
In August 2021 a vulnerability called "Transient Execution of Non-canonical Accesses" affecting certain AMD CPUs was disclosed. It requires the same mitigations as the MDS vulnerability affecting certain Intel CPUs. It was assigned CVE-2020-12965. Since most x86 software is already patched against MDS and this vulnerability has the exact same mitigations, software vendors don't have to address this vulnerability.
In October 2021, for the first time ever, a vulnerability similar to Meltdown was disclosed to be affecting all AMD CPUs; however, the company does not think any new mitigations have to be applied and considers the existing ones already sufficient.
2022
In March 2022, a new variant of the Spectre vulnerability called Branch History Injection was disclosed. It affects certain ARM64 CPUs and the following Intel CPU families: Cascade Lake, Ice Lake, Tiger Lake and Alder Lake. According to Linux kernel developers AMD CPUs are also affected.
In March 2022, a vulnerability affecting a wide range of AMD CPUs was disclosed under CVE-2021-26341.
In June 2022, multiple MMIO Intel CPUs vulnerabilities related to execution in virtual environments were announced. The following CVEs were designated: CVE-2022-21123, CVE-2022-21125, CVE-2022-21166.
In July 2022, the Retbleed vulnerability was disclosed, affecting 6th- to 8th-generation Intel Core CPUs and AMD Zen 1, Zen 1+ and Zen 2 generation CPUs. Newer Intel microarchitectures, as well as AMD CPUs starting with Zen 3, are not affected. The mitigations for the vulnerability decrease the performance of the affected Intel CPUs by up to 39%, while AMD CPUs lose up to 14%.
In August 2022, the SQUIP vulnerability was disclosed affecting Ryzen 2000–5000 series CPUs. According to AMD the existing mitigations are enough to protect from it.
According to a Phoronix review released in October 2022, Zen 4 / Ryzen 7000 CPUs are not slowed down by mitigations; in fact, disabling them leads to a performance loss.
2023
In February 2023 a vulnerability affecting a wide range of AMD CPU architectures called "Cross-Thread Return Address Predictions" was disclosed.
In July 2023 a critical vulnerability in the Zen 2 AMD microarchitecture called Zenbleed was made public. AMD released a microcode update to fix it.
In August 2023 a vulnerability in AMD's Zen 1, Zen 2, Zen 3, and Zen 4 microarchitectures called Inception was revealed and assigned CVE-2023-20569. According to AMD it is not practical but the company will release a microcode update for the affected products.
Also in August 2023, a new vulnerability called Downfall or Gather Data Sampling was disclosed, affecting the Intel Skylake, Cascade Lake, Cooper Lake, Ice Lake, Tiger Lake, Amber Lake, Kaby Lake, Coffee Lake, Whiskey Lake, Comet Lake and Rocket Lake CPU families. Intel will release a microcode update for affected products.
The SLAM vulnerability (Spectre based on Linear Address Masking) reported in 2023 neither has received a corresponding CVE, nor has been confirmed or mitigated against.
2024
In March 2024, a variant of Spectre-V1 attack called GhostRace was published. It was claimed it affected all the major microarchitectures and vendors, including Intel, AMD and ARM. It was assigned CVE-2024-2193. AMD dismissed the vulnerability (calling it "Speculative Race Conditions (SRCs)") claiming that existing mitigations were enough. Linux kernel developers chose not to add mitigations citing performance concerns. The Xen hypervisor project released patches to mitigate the vulnerability but they are not enabled by default.
Also in March 2024, a vulnerability in Intel Atom processors called Register File Data Sampling (RFDS) was revealed. It was assigned CVE-2023-28746. Its mitigations incur a slight performance degradation.
In April 2024, it was revealed that the BHI vulnerability in certain Intel CPU families could still be exploited in Linux entirely in user space, without using any kernel features or root access, despite existing mitigations. Intel recommended "additional software hardening". The attack was assigned CVE-2024-2201.
In June 2024, Samsung Research and Seoul National University researchers revealed the TikTag attack against the Memory Tagging Extension in ARM v8.5A CPUs. The researchers created PoCs for Google Chrome and the Linux kernel. Researchers from VUSec previously revealed ARM's Memory Tagging Extension is vulnerable to speculative probing.
In July 2024, UC San Diego researchers revealed the Indirector attack against Intel Alder Lake and Raptor Lake CPUs leveraging high-precision Branch Target Injection (BTI). Intel downplayed the severity of the vulnerability and claimed the existing mitigations are enough to tackle the issue. No CVE was assigned.
Future
Spectre-class vulnerabilities are expected to remain unfixed, because otherwise CPU designers would have to disable speculative execution, which would entail a massive performance loss. Despite this, AMD managed to design Zen 4 in such a way that its performance is not affected by the mitigations.
Vulnerabilities and mitigations summary
The 8th generation Coffee Lake architecture in this table also applies to a wide range of previously released Intel CPUs, not limited to the architectures based on Intel Core, Pentium 4 and Intel Atom starting with Silvermont. Various CPU microarchitectures not included above are also affected, among them are ARM, IBM Power, MIPS and others.
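On Linux, the mitigation status summarized here can be read directly for a given machine: the kernel exposes one plain-text status file per known vulnerability under /sys/devices/system/cpu/vulnerabilities/ (see the Linux kernel link under External links below). A minimal sketch that prints this report; the directory is standard on modern kernels, but the set of files present varies with kernel version and CPU:

#!/usr/bin/env python3
"""Print the Linux kernel's per-vulnerability mitigation report."""
from pathlib import Path

# One plain-text file per known vulnerability (e.g. "spectre_v2", "mds",
# "retbleed"); each contains "Not affected", "Vulnerable" or "Mitigation: ...".
SYSFS_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def read_vulnerabilities(base: Path = SYSFS_DIR) -> dict[str, str]:
    """Return a mapping of vulnerability name to kernel status string."""
    if not base.is_dir():
        raise RuntimeError(f"{base} not found (not Linux, or kernel too old)")
    return {f.name: f.read_text().strip() for f in sorted(base.iterdir())}

if __name__ == "__main__":
    for name, status in read_vulnerabilities().items():
        print(f"{name:25s} {status}")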
Notes
References
External links
Linux kernel: Hardware vulnerabilities
Vulnerabilities associated with CPU speculative execution
A systematic evaluation of transient execution attacks and defenses
A dynamic tree of transient execution vulnerabilities for Intel, AMD and ARM CPUs
Transient Execution Attacks by Daniel Gruss, June 20, 2019
CPU Bugs
Intel: Refined Speculative Execution Terminology
Computer security exploits
Hardware bugs
Side-channel attacks | Transient execution CPU vulnerability | Technology | 2,394 |
14,880,603 | https://en.wikipedia.org/wiki/HCN1 | Potassium/sodium hyperpolarization-activated cyclic nucleotide-gated channel 1 is a protein that in humans is encoded by the HCN1 gene.
Function
Hyperpolarization-activated cation channels of the HCN gene family, such as HCN1, contribute to spontaneous rhythmic activity in both heart and brain.
Tissue distribution
HCN1 channel expression is found in the sinoatrial node, the neocortex, hippocampus, cerebellar cortex, dorsal root ganglion, trigeminal ganglion and brainstem.
Ligands
Ketamine is an inhibitor of HCN1 in addition to its other targets.
Propofol also inhibits HCN1.
Isoflurane and sevoflurane inhibit HCN1.
Interactions
HCN1 has been shown to interact with HCN2.
Epilepsy
De novo mutations in HCN1 cause epilepsy.
See also
Cyclic nucleotide-gated ion channel
References
Further reading
External links
Ion channels | HCN1 | Chemistry | 208 |
30,483,590 | https://en.wikipedia.org/wiki/Pseudogene%20%28database%29 | Pseudogene is a database of pseudogene annotations compiled from various sources.
See also
Gene prediction
Glossary of genetics
Index of molecular biology articles
References
External links
http://www.pseudogene.org
Biological databases | Pseudogene (database) | Biology | 50
9,953,214 | https://en.wikipedia.org/wiki/Ecton%20%28physics%29 | Ectons are explosive electron emissions observed as individual packets or avalanches of electrons, occurring as microexplosions at the cathode. The electron current in an ecton starts flowing as a result of overheating of the metal cathode because of the high energy density (10⁴ J g⁻¹), and stops when the emission zone cools off.
Ectons occur in plasma-involving phenomena, such as: electrical discharges in vacuum, cathode spots of vacuum arcs, volumetric discharges in gases, pseudosparks, coronas, unipolar arcs, etc.
An ecton consists of individual portions of electrons (10¹¹–10¹² particles). The formation time is of the order of nanoseconds.
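These figures permit a rough order-of-magnitude estimate of the current carried by a single ecton, since the average current is simply the total charge over the emission time. A back-of-the-envelope sketch; the 10 ns duration used below is an illustrative assumption within the "order of nanoseconds" stated above:

# Order-of-magnitude estimate of the average current in one ecton:
# I = Q / t = (number of electrons x elementary charge) / emission time.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def ecton_current(n_electrons: float, duration_s: float) -> float:
    """Average current in amperes for n_electrons emitted over duration_s."""
    return n_electrons * E_CHARGE / duration_s

# Electron counts from the text; 10 ns is an assumed illustrative duration.
for n in (1e11, 1e12):
    print(f"{n:.0e} electrons over 10 ns -> {ecton_current(n, 10e-9):.1f} A")
# Prints roughly 1.6 A and 16.0 A respectively.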
References
Electron
Plasma phenomena | Ecton (physics) | Physics,Chemistry | 160 |
1,562,475 | https://en.wikipedia.org/wiki/Silicon%20Glen | Silicon Glen is the nickname given to the high tech sector of Scotland, the name inspired by Silicon Valley in California. It is applied to the Central Belt triangle between Dundee, Inverclyde and Edinburgh, which includes Fife, Glasgow and Stirling; although electronics facilities outside this area may also be included in the term. The term has been in use since the 1980s. It does not technically represent a glen as it covers a much wider area than just one valley.
History
Origins
Silicon Glen had its origins in the electronics business with Ferranti establishing a plant in Edinburgh in 1943, relocating facilities from Manchester during the Second World War. When Ferranti remained in Edinburgh, other defence electronics companies also established themselves in Scotland, including the Marconi Company and Barr & Stroud. Major US companies followed in the late 1940s, including Honeywell and NCR Corporation, the latter setting up cash register and adding machine manufacturing in Dundee. IBM decided to establish a presence in the region in 1951, opening a manufacturing facility in Greenock in 1953. Indeed, this was typical of much of the early days of Silicon Glen, which were dominated by electronics manufacturing for foreign companies much more than research and development or the establishment of home grown companies.
Electronic dominance
The emphasis on electronics came about due to the decline in traditional Scottish heavy industries such as shipbuilding and mining. The government development agencies saw electronics manufacturing as being a positive replacement for people made redundant through heavy industry closures and the associated training and reskilling was relatively easy to achieve.
Semiconductors
Just as Silicon Valley's bedrock was in semiconductors, Silicon Glen also had a significant presence in semiconductor design and manufacturing, starting in 1960 with Hughes Aircraft (now Raytheon) establishing its first facility outside the US in Glenrothes to manufacture germanium and silicon diodes. In 1965 Elliott Automation established a production facility in Glenrothes followed by a MOS research laboratory in 1967. This was followed in 1969 by the establishment of wafer fabs by General Instrument in Glenrothes, Motorola (now Freescale) in East Kilbride and National Semiconductor in Greenock. Signetics also opened a facility in Linlithgow in 1969.
In 1970, Compugraphic relocated from Aldershot to Glenrothes to provide photomask manufacturing for these companies. Other companies who developed semiconductor wafer fabrication or other manufacturing plants included SGS in Falkirk, NEC, Burr-Brown Corporation, IPS (then Seagate Technology) and Kymata (now Kaiam) in Livingston, CST in Glasgow and Micronas in Glenrothes.
There were some other notable successes such as the large Sun Microsystems plant in Linlithgow and the Digital Equipment Corporation semiconductor manufacturing plant in South Queensferry where the pioneering 64-bit Alpha 21064 and its derivatives were made. Digital also opened an office in Livingston, developing their flagship OpenVMS operating system. Digital's South Queensferry facility, opened in 1990 at an estimated cost of , was eventually sold to Motorola in 1995. At the time, Motorola itself employed 4,000 people at its own semiconductor plant at East Kilbride, as well as operating a cellular telephone plant at Easter Inch.
European single market
The potential and implications of a single European market motivated foreign companies, particularly those from the United States, to establish operations in Silicon Glen. By having a presence in a European Economic Community member country, companies could formally participate in standards committees and thus exert a degree of influence. Emerging European tariff rules concerning the origin of products were also strong motivators for the establishment of local manufacturing operations, with the EEC having updated its rules in 1989 to consider the location of the wafer diffusion phase of semiconductor production as determining the origin of the manufactured product. Local infrastructure support for the semiconductor industry was well regarded in Scotland, with local universities offering "a strong design base".
Rodime of Glenrothes pioneered the 3.5 inch hard disk drive in 1983 and spent subsequent years defending its patents against (and collecting royalties from) Seagate, Quantum, IBM and others.
Computing
The manufacturing sector grew to such an extent that at its peak it produced approximately 30% of Europe's PCs, 80% of its workstations, 65% of its ATMs and a significant percentage of its integrated circuits.
Recent history
Electronic decline
The heavy dependency on electronics manufacturing hit Silicon Glen hard after the collapse of the hi-tech economy in 2000. Viasystems, National Semiconductor (now Texas Instruments), Motorola and Chunghwa Picture Tubes all laid off substantial numbers of employees or closed factories completely. The effects of the Viasystems closure are still felt in the Scottish Borders today. Digital sold their Alpha facility to Motorola who eventually closed it down. Motorola also closed their factory in Bathgate and the substantial NEC plant in Livingston was also closed. In 2009 Sun ceased manufacturing at its Linlithgow plant and, after successive years of downsizing, NCR ended all manufacturing in Dundee.
However, there are many promising signs as well as a recognition that diversification away from electronics and manufacturing produces a more balanced and stronger economy. There is also more of an interest in encouraging home grown talent.
Scotland had 1,000 companies in electronics employing 25,000 people in 2004; employment had been in decline since 2000, when 48,000 people were employed in the industry in Scotland. However, by 2016 Silicon Glen had begun to boom once again, with new digital start-ups – such as Skyscanner – choosing Scotland for headquarters or offices.
Global services
To diversify away from electronics and manufacturing, the development agencies now see global services as being a potential area of growth, but there is also substantial interest in the software development industry, including Rockstar North, developers of the market leading Grand Theft Auto series. There is also a dynamic and fast growing electronics design and development industry, based around links between the very strong universities and indigenous companies and projects like the Alba Campus. The software sector has also notably attracted Amazon.com to set up a software development centre in Edinburgh, the first such centre outside the US. There remains a significant presence of global players like National Semiconductor, IBM, Shin Etsu Handotai Europe Ltd and Freescale. The move from a primarily manufacturing dominated region to a wealth creation one has been successful as demonstrated in a report from UBS Wealth Management in 2006 showing Scotland with more venture backed companies per capita than any other UK region.
In addition to the indigenous companies, Silicon Glen continues to have quite a significant semiconductor design community of inward investment companies including Atmel, Freescale, Texas Instruments, Micrel, Analog Devices, Allegro MicroSystems, Micro Linear, Micronas and ST Microelectronics.
Semefab, the former General Instrument semiconductor foundry, has been funded as the UK's Primary Centre for the development of microelectromechanical systems (MEMS) and nanotechnology.
The Open Source Awards (formerly the Scottish Open Source Awards) have been run from Scotland since 2007. It was initially a subset of the Scottish Software Awards.
Notable companies
Many high technology companies are established in Silicon Glen, including:
Unity (game engine)
Microsoft
Amazon.com
Codeplay
FanDuel
IBM
Oracle Corporation
Rockstar North
Adobe Systems
Canon Medical Systems Corporation
Skyscanner
Motorola
NCR
Proper Games
Raytheon
Texas Instruments
Freescale
3Com
Agilent
Analog Devices
Atmel
Atos
Axeon
ST Microelectronics
Broadcom Corporation
Cadence Design Systems
Cirrus Logic
Dialog Semiconductor
Dynamo Games
IndigoVision
Thales Optronics
Toshiba Medical Visualization Systems
Version 1
Linn
Maxim Integrated Products
Memex Technology Limited
Micrel
Braindead Ape Games
Micronas Intermetall
Leonardo MW Limited
Semefab
Allegro MicroSystems
AND Digital
Waracle
WFS Technologies
ATEEDA
Codestuff
Compugraphics
ClinTec International
Clyde Space
Infinity Works
Youmanage
Digital Goldfish
Kaiam Europe limited
MEP Technologies Ltd
Brand Rex
Elonics
Kumulos
Optos
Micro Linear
BI Technologies
Mage Control Systems Ltd
CRC Group
Shin Etsu Handotai Europe Ltd
See also
List of places with 'Silicon' names
References and notes
External links
Scotland IS, the trade body for the Scottish IT sector
Pico and General Instrument's 1970 development of a single-chip calculator processor, possibly pre-dating Intel and TI.
The death and rebirth of Silicon Glen BBC News
Open Tech Calendar, free and open list of tech events in Scotland
Economy of Scotland
High-technology business districts in the United Kingdom
Silicon Glen
Science and technology in Scotland
Information technology places | Silicon Glen | Technology | 1,723 |
41,838,275 | https://en.wikipedia.org/wiki/Tree%20fork | A tree fork is a bifurcation in the trunk of a tree giving rise to two roughly equal diameter branches. These forks are a common feature of tree crowns. The wood grain orientation at the top of a tree fork is such that the wood's grain pattern most often interlocks to provide sufficient mechanical support. A common "malformation" of a tree fork is where bark has formed within the join, often caused by natural bracing occurring higher up in the crown of the tree, and these bark-included junctions often have a heightened risk of failure, especially when bracing branches are pruned out or are shaded out from the tree's crown.
Definition
In arboriculture, junctions in the crown structure of trees are frequently categorised as either branch-to-stem attachments or co-dominant stems. Co-dominant stems are where the two or more arising branches emerging from the junction are of near equal diameter and this type of junction in a tree is often referred to in layman's terms as 'a tree fork'.
There is actually no hard botanical division between these two forms of branch junction: they are topologically equivalent, and from their external appearance it is only a matter of the diameter ratio between the branches that are conjoined that separates a tree fork from being a branch-to-stem junction.
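One way to make this diameter-ratio criterion concrete is to compare the diameters of the two arising members directly. A minimal sketch; the 0.7 cutoff (and the classification function itself) is an illustrative assumption rather than a fixed botanical rule, precisely because, as noted above, there is no hard division between the two forms:

def classify_junction(diameter_a: float, diameter_b: float,
                      ratio_cutoff: float = 0.7) -> str:
    """Classify a junction by the diameter ratio of its two members.

    The cutoff is an illustrative convention only, not a botanical rule.
    """
    if diameter_a <= 0 or diameter_b <= 0:
        raise ValueError("diameters must be positive")
    ratio = min(diameter_a, diameter_b) / max(diameter_a, diameter_b)
    if ratio >= ratio_cutoff:
        return "co-dominant stems (tree fork)"
    return "branch-to-stem attachment"

print(classify_junction(18, 20))  # near-equal members -> tree fork
print(classify_junction(5, 30))   # small branch on a trunk -> attachment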
However, when a small branch joins a tree trunk, there is a knot embedded in the trunk of the tree, which was the initial base of the smaller branch. This is not the case in tree forks, as each branch is roughly equal in size and no substantial tissue from either branch is embedded in the other, so there is no reinforcing knot to supply the mechanical strength that the junction needs to hold the branches aloft. To alleviate potential strain, it is recommended to identify and prune the codominant stem early in a tree's life, while in mature trees, a risk analysis should be conducted to decide whether to remove one stem, cable them together, or leave them intact.
Anatomy and morphology
Research has shown that a unique wood grain pattern at the apex of forks in hazel trees (Corylus avellana L.) acts to hold together the branches in this species, and this is probably the case in most other woody plants and trees. This is an example of 'trade-off' in xylem, where mechanical strength to the tree's junction is gained at the expense of efficiency in tree sap conductance by the production of this specialised wood, known as 'axillary wood'.
The complex interlocking wood grain patterns developed in axillary wood present a great opportunity for biomimicry (the mimicking of natural biological structures in man-made materials) in fibrous materials, where the production of a Y-shaped or T-shaped component is needed: particularly in such components that may need to act as a conduit for liquids as well as being mechanically strong.
Tree fork morphology has been shown to alter with the angle of inclination of the fork from the vertical axis. As the fork becomes more tilted from the vertical, the branches become more elliptical in cross-section, to adapt to the additional lateral loading upon them, and Buckley, Slater and Ennos (2015) showed that this adaption resulted in stronger tree forks when the fork was more inclined away from the vertical.
Bark inclusions and fork strength
Where a junction forms in a tree and bark is incorporated into the join, this is referred to as an 'included bark junction' or 'bark inclusion'. A common cause of bark being incorporated into the junction is that the junction is braced by the touching of branches or stems set above that junction (in arboriculture, these branch interactions are termed 'natural braces'). Such included bark junctions can be substantially weaker in strength than normal tree forks, and can become a significant hazard in a tree, particularly when the bracing branches are shaded out or pruned out of the tree. Research has shown that in hazel trees, the more the included bark is occluded within new wood growth, the stronger that junction will be, with the weakest forks being those with a large amount of unoccluded bark at their apex. Common tree care practices are to prune out such bark-included forks at an early stage of the tree's development, to brace the two arising branches above such a junction so that they can not split apart (using a flexible brace) or to reduce the length of the smaller arising branch, so that it is subordinated to the larger branch. Care should be taken not to prune out 'natural braces' set above weak tree forks in mature trees unless that is absolutely necessary.
The strength of a normally-formed tree fork can be assessed by its shape and the presence and location of axillary wood: those that are more U-shaped are typically considerably stronger than those that are V-shaped at their apex. This characteristic, and the presence of bark included in a tree fork, are important attributes for tree surveyors and tree contractors to note in order to assess whether the tree fork is a defect in the structure of a tree.
See also
Arboriculture
Biomimicry
Branch attachment
Branch collar
da Vinci branching rule
References
External links
International Society of Arboriculture
Arboricultural Association, U.K.
Hazards from Trees, Forestry Commission, U.K.
Trees
Forest management
Plant morphology
Plant anatomy | Tree fork | Biology | 1,117 |
21,481,649 | https://en.wikipedia.org/wiki/Institute%20of%20Nuclear%20Materials%20Management | The Institute of Nuclear Materials Management (INMM) is an international technical and professional organization that works to promote safe handling of nuclear material and the safe practice of nuclear materials management through publications, as well as organized presentations and meetings.
The INMM's headquarters is located in Deerfield, Illinois in the United States, but its members are located around the world including Europe, Asia, South America and North America. There are more than 1,100 members and 32 chapters.
Les Shephard, vice president of Sandia National Laboratories' Energy, Security, and Defense Technology Center, said in February 2009 of the INMM and the American Nuclear Society,
Structure
INMM is led by an executive committee of nine members, including a president, vice president, secretary, treasurer, four members-at-large, and the immediate past president. In addition, INMM has several standing and two technical committees. Many organizations, such as Los Alamos National Laboratory and Brookhaven National Laboratory, are Sustaining Members of the INMM.
Technical divisions
In 2010, the INMM Executive Committee approved a restructuring of the Institute. This included changes in the technical divisions. Some were merged and a new division was created. The technical divisions are:
Facility Operations
International Safeguards
Materials Control and Accountability
Nonproliferation and Arms Control
Nuclear Security and Physical Protection
Packaging, Transportation and Disposition
The Nuclear Security and Physical Protection division is the focal point for information and activities related to the physical protection of nuclear materials, nuclear facilities, and other high-value assets and facilities.
Until 2010, the INMM had six technical divisions:
International Safeguards, focusing on the development of effective international nuclear material safeguards, and working to advance safeguard procedures and technology
Materials Control and Accountability, promoting and communicating the need for development of technology for the control and accountability of nuclear materials
Nonproliferation and Arms Control, promoting research to further international stability with regard to nonproliferation and international arms control
Packaging and Transportation and Disposition, promoting technology and research aimed at the packaging and transportation of radioactive materials, including all levels of radioactive waste
Nuclear Security and Physical Protection, focusing on research to advance technology for the physical protection of nuclear materials and nuclear facilities
Waste Management, promoting research to help find a solution for the worldwide waste management issues, focusing on each step of waste management including handling, processing, storing, and disposal for all radioactive waste
Best practices
The INMM develops and promotes global "best practices" for nuclear materials management. Best practices are based on past events, lessons learned, and ways to improve effectiveness and efficiency. They are focused on the six technical divisions and should be applicable to all countries with nuclear capabilities, both civilian and military.
In 2008, the INMM joined with Nuclear Threat Initiative, the United States Department of Energy, and the International Atomic Energy Agency to establish a new international organization called the World Institute for Nuclear Security (WINS) aimed at strengthening physical protection and security of the world's nuclear and radioactive materials and facilities. This organization's focus is on collecting information on best management practices from professionals responsible for on-the-ground security and sharing that information with their peer professionals around the world. These security professionals are in the best position to know where the vulnerabilities are, how to improve security, and how to ensure that improvements are implemented quickly and effectively. WINS will place a high priority on protecting sensitive information that may be discussed between members. Initial funding for the WINS included $3 million each from the Peter G. Peterson Foundation and the U.S. Department of Energy plus $100,000 from Norway.
Chapters
The INMM has 32 chapters around the world, including six regional chapters in the United States and international chapters in Japan, South Korea, Morocco, Nigeria, Obninsk Regional in Russia, Russian Federation, South Africa, the United Kingdom, Ukraine, Urals, and Vienna.
The INMM also has 16 student chapters that offer opportunities including participation in a mentor program, meetings and workshops, publication subscriptions, and professional networking. Student chapters currently exist at Federal University of Rio de Janeiro, Georgia Institute of Technology, Idaho State University, Ibn Tofail University, Jordan University of Science and Technology, Mercyhurst College Institute for Intelligence Studies, Middlebury Institute of International Studies at Monterey, North Carolina State University and other Triangle Area universities, Oregon State University, Pandit Deendayal Petroleum University, Pennsylvania State University, Texas A&M University, the University of Michigan, University of Missouri, University of New Mexico, University of Tennessee, University of Utah, University of Washington, and Universitas Gadjah Mada.
Meetings and workshops
INMM holds an annual meeting, an annual Spent Fuel Management Seminar, and a number of other workshops each year. These educational/networking events allow professionals in nuclear materials management to learn new strategies, keep abreast of the science and technology, and to meet with colleagues from around the world. In June 2005, INMM held two workshops in Prague, Czech Republic, sponsored by the Nuclear Threat Initiative (NTI).
Publications
INMM publishes the Journal of Nuclear Materials Management, a quarterly, peer-reviewed technical journal. In addition, the "Communicator" online newsletter is posted three times annually.
American physicist William Higginbotham served as technical editor of the Journal from 1974 until his death in 1994.
See also
American National Standards Institute
U.S. Department of Energy
U.S. Nuclear Regulatory Commission
Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC)
European Nuclear Society
International Atomic Energy Agency
American Nuclear Society
Nuclear Threat Initiative
World Nuclear Association
Nuclear Energy Institute
References
External links
Institute of Nuclear Materials Management
World Institute for Nuclear Security
International nuclear energy organizations
Nuclear materials
Nuclear proliferation
Nuclear technology
Professional associations based in the United States
Companies based in Deerfield, Illinois
Organizations based in Illinois | Institute of Nuclear Materials Management | Physics,Engineering | 1,161 |
69,607,721 | https://en.wikipedia.org/wiki/Julie%20Overbaugh | Julie M. Overbaugh is an American virologist. She is a professor at the Fred Hutchinson Cancer Research Center. Overbaugh is best known for her translational approach to studying HIV transmission and pathogenesis and her studies of how the antibody response evolves to recognize viruses. Her work in maternal and infant HIV transmission helped make clear the risk posed by breastfeeding and highlighted unique characteristics of an infant immune response that could inform vaccine development. Major scientific contributions to the understanding of HIV transmission and pathogenesis also include: identifying a bottleneck that selects one or a few variants during HIV transmission; demonstrating the importance of female hormones in HIV infection risk; showing that HIV reinfection is common; demonstrating a role for antibodies that mediate ADCC in clinical disease; and showing that HIV-infected infants develop unique neutralizing antibody responses to HIV.
Overbaugh is an elected member of the National Academy of Sciences and the American Academy of Arts and Sciences. In addition to being recognized for her scientific contributions, she is also recognized for her mentoring, her advocacy for equity and her commitment to global health.
Early life and education
Overbaugh was born and raised in Pennsylvania and graduated from Delone Catholic High School in 1975. During high school, she captained their basketball and field hockey teams and received an award for Excellence in Athletics.
Following high school, Overbaugh was recruited to play college basketball for the University of Connecticut's Huskies from 1976 to 1978. During her tenure, she averaged 3.5 points in 46 games, and also played varsity tennis and served as team captain during her final year. She graduated with a Bachelor of Science degree in chemistry in 1979 and earned her PhD in chemistry from the University of Colorado Boulder. During her PhD, Overbaugh spent four months in Oklahoma aiding the effort to ratify the Equal Rights Amendment. She returned to school to complete her PhD and continued her training as a postdoctoral fellow in interdisciplinary programs in health and cancer biology at the Harvard T.H. Chan School of Public Health from 1983 to 1987. During her fellowship, she became interested in HIV research because it was a "clear intersection of science and public health and medicine."
Career
Following her fellowship, Overbaugh joined the University of Washington (UW) to expand their HIV research program to include a basic science focus. In 1992, she became a member of the Nairobi HIV/STD Research Project, which included collaborating on a study of HIV mother-infant infection. For this, she joined Joan Kreiss and Ruth W. Nduati in Kenya to understand the risk of HIV transmission through breastfeeding. The research team found that breastfeeding doubled the risk of HIV transmission from mother to child. Her subsequent studies showed that the levels of virus in breastmilk predicted infection risk.
Overbaugh left the UW in 1999 to join the Fred Hutchinson Cancer Research Center, where she continued and expanded her studies of HIV transmission and pathogenesis, including working closely with the Kenya research collaborative team. That year Overbaugh received an Elizabeth Glaser Scientist award to expand her studies of mother-infant transmission of HIV to better understand how features of the virus and immune response impact infant infection risk. Her group went on to define immune factors that impact transmission in the setting of infant exposure, particularly a role for non-neutralizing antibodies that mediate cell killing in infant infection and disease. Her research team also discovered novel aspects of the infected infant's immune response to HIV and isolated and characterized the first HIV neutralizing antibodies from infants. They also discovered a variant of HIV in an infant, BG505, that has been extensively leveraged for studies of HIV structure and for vaccine development.
Overbaugh's research has also focused on understanding how viral evolution impacts disease: she showed that the viruses that evolve over the course of infection are more pathogenic, in part because they have escaped neutralizing antibody control. Her work also highlighted that retroviruses can evolve to change their entry receptors and, by so doing, can infect new cells and cause changes in disease outcomes. She also discovered that HIV adapted in cell culture can use more diverse receptor orthologues, and that this adaptation makes strains used in model systems distinct. In her very early work, Overbaugh also studied adaptive evolution.
Her work with the Kenya research collaborative team also included expanding studies with Kreiss to better define the basis of transmission in high-risk women, such as sex workers, work that continues to this day in Mombasa, Kenya. There, a focus of her research has been on the early dynamics of infection. Her group showed that although chronically infected people harbor many distinct viral variants of HIV, only one or a few of these variants are transmitted, indicating a bottleneck in the viruses that are transmitted. They went on to show that the transmission bottleneck is influenced by host factors such as hormonal contraceptives and sexually transmitted diseases. Her group further showed that transmission can also occur in the face of an existing infection, leading to re-infection, and her team has studied the setting of re-infection to understand immune correlates of protection to inform vaccine efforts.
Overbaugh was also involved with collaborative studies to define antibody escape pathways for HIV antibodies and the molecular determinants that govern HIV entry into host cells. Her lab went on to develop a method to profile escape called Phage-DMS that has been used for studies of HIV and SARS-CoV-2.
Due to her interest in infectious disease of global health importance, Overbaugh also emphasized the development of methods for detecting infections in her work. Because HIV strains in Africa differed from those in the US, she developed methods to detect infection with those strains and helped validate assays to define the levels of infection. More recently, she extended this approach to develop methods for detecting antibodies to SARS-CoV-2.
Major awards
Overbaugh received the Elizabeth Glaser Scientist Award for her work in pediatric HIV research in 1999. In 2011, Overbaugh was named one of FierceBiotech's 2011 Women in Biotech and was elected to the American Society for Microbiology. Later that year, Overbaugh received the Marion Spencer Fay Leadership Award from Drexel University's Institute for Women's Health and Leadership. In 2016, she received the lifetime achievement Nature Award for Mentoring in Science and was the first US-based scientist to be recognized with this award. Two years later, she was recognized for her long service to the global fight against HIV with the Fields Memorial Lecture at the opening session of the Conference on Retroviruses and Opportunistic Infections. As a result of her studies of HIV transmission and pathogenesis in affected cohorts, including African women and children, she was elected a Fellow of the American Academy of Arts and Sciences. She was also elected a Member of the National Academy of Sciences for the same body of work; she is the first Seattle-based virologist to be elected to the NAS.
Mentoring
Overbaugh has been recognized in various ways for her mentoring and her commitment to launching the careers of the next generation of scientists. Her mentoring awards include both local and international recognition. She established a training program for graduate students to support training in the study of viral diseases and how viruses evolve, even before the COVID pandemic highlighted the need for such training. Her trainees have taken a variety of positions in academia, government, industry and beyond. In academia, her former trainees have taken faculty positions at Baylor, Columbia, Emory, Harvard, Stanford, the University of Michigan, the University of Washington, and other universities focused on research and/or teaching. Former trainees have also attained positions in global health, at the Gates Foundation and the African Academy of Sciences, and in government at the CDC, the NIH and the Kenya Medical Research Institute. Other trainees have secured roles in science policy and diversity and inclusion offices, and some have written novels with a science focus.
Contributions to the Practice of Science
Overbaugh has actively contributed to improving the practice of science. She has written about effective mentoring from the perspective of a highly productive scientist who has also garnered multiple mentoring awards. She is also well recognized for having a lab that supports work-life balance; as a result, the journal Nature solicited a commentary from her on this topic. There she highlighted that "there must be room for those who want that balance, otherwise creative people with the potential to make significant contributions to scientific discovery will be excluded". She has written about differences in publication rates in high-profile journals based on gender and argued for better tracking to improve this imbalance. She also lectures on the practice of peer review in ethics forums, having served as chair of NIH grant review panels and as a journal editor.
Advocacy for diversity in science
Overbaugh has been a strong advocate for diversity in science during her career. Overbaugh was the founding faculty lead for Hutch United, a grass-roots effort established in 2013 and led by trainees to help promote the success of underrepresented groups in science and those who otherwise felt on the fringes. She has also served in an advisory role in the Fred Hutch Office of Diversity, Equity and Inclusion, and was selected as one of three Fred Hutch faculty to present at the 2021 DEI Summit. In her role as senior vice president and director of the Office of Education and Training at the Fred Hutch, she helped oversee efforts to create a more diverse scientific workforce, served as an advocate for diversity in science at all levels, and helped provide support for new faculty launching their careers. Overbaugh has published papers in scientific journals pointing out gender bias in the review process.
Overbaugh has the distinction of having hosted the most diverse lab at Fred Hutch. Among the 64 graduate students and postdoctoral fellows she has trained, 20 are members of the BIPOC community and 10 are members of the LGBTQ+ community. She has published 160 peer-reviewed publications with African co-authors, and has mentored numerous African scientists for periods ranging from short-term technical training to masters and PhD level training. Her citation for the Nature Mentorship Award calls out her strengths mentoring African scientists: "She has the patience to listen to and deal with culture shocks and adjustment to new surroundings and a different system of training and education."
Overbaugh's place as a role model and mentor for underrepresented groups in science has been recognized in multiple other ways, including by the University of Washington School of Medicine in 2007. Her leadership was also highlighted in the introductory remarks ahead of her honorary opening lecture at the premier meeting in the HIV field. There it was noted that she has a reputation as a 'proponent for women's rights'. In that lecture, Overbaugh highlighted her collaborative and bilateral research with Kenyan partners and emphasized her view of the importance of supporting the training of aspiring African scientists.
Leadership and institutional recognition
Overbaugh has served in numerous leadership roles in the field nationally and internationally, including chair of NIH grant review panels on both HIV molecular biology and HIV immunology, as Chair of the Burroughs Wellcome Fund Investigators in the Pathogenesis of Infectious Diseases Award committee, and as chair of Conference on Retroviruses and Opportunistic Infections (CROI) as well as other major meetings.
Overbaugh has also held major leadership roles within her institutions. In 2017, Overbaugh was appointed the inaugural Hutch associate director for graduate education. In that capacity, she was subsequently named the inaugural senior vice president for education, and in these leadership roles she established a new Office of Education and Training at the Fred Hutch. Overbaugh also established and led an NIH-funded program to support and train graduate students pursuing research on viral evolution and pathogenesis across Seattle institutions, and she served in a leadership role in the Medical Sciences Training program.
Her education and career development efforts along with her scientific excellence were widely recognized in the Seattle community. Despite never being given the tenure-track position afforded her male colleagues while at University of Washington, various university selection committees recognized her scientific excellence, both while there and when she moved to the Fred Hutch. In 1994, as a junior faculty member at UW, Overbaugh was selected to present the New Investigator lecture to the University of Medical School. In 2011, while at Fred Hutch, she was honored as the distinguished scientist of the year by the University of Washington School of Medicine. As part of this recognition, she presented her work to the School of Medicine in the Science in Medicine seminar series in a talk entitled “Deciphering the biology of HIV transmission: A basic scientist’s journey into interdisciplinary, international HIV research”. The University of Washington also recognized Overbaugh with an outstanding mentoring award in 2007. The Fred Hutch soon followed, bestowing their faculty mentoring award on Overbaugh in 2008.
Resignation from leadership: In early 2022, Overbaugh was placed on administrative leave from the Fred Hutchinson Cancer Research Center in order to conduct an independent external investigation. The investigation was prompted by an anonymous complaint regarding a Cancer Research Center Halloween party in 2009 where she was asked to dress as Michael Jackson as part of a group "Thriller" costume and darkened her face for this role. This was determined to be an isolated incident, and interviews of her peers and coworkers failed to reveal any pattern of inappropriate behavior of any kind in the past or at any time while employed at Fred Hutch in her twenty-three-year tenure. Overbaugh offered a public apology to the entire Fred Hutch community for any offense, as described by the president of the Hutch in the town hall: "As part of [an] education healing process, Julie wanted to offer her direct apology to our community. I commend this." The external independent investigation report noted her decades-long leadership at Fred Hutch in support of equity and inclusion and concluded that "Dr. Overbaugh's individual contributions to DEI efforts at Fred Hutch have been significant, wide-reaching, and long-standing."
Having established the Office of Education and the position of senior vice president for education and serving in leadership roles in education for a decade at the Hutch, Overbaugh decided to step down from her administrative leadership roles to focus on her research on COVID, HIV and other emerging global pathogens. As the Hutch President reported: “Julie has offered to step down from her role as Senior Vice President of Education and Training and I have accepted her resignation”. “She will continue to be a prominent investigator at the Fred Hutch in the Human Biology Division working on viruses that affect so many people around the world”.
Research focus
Overbaugh's laboratory continues to study viral and immune factors that contribute to infection and disease in global populations at high risk of HIV infection. This includes continued studies of mother-infant HIV transmission and infant responses to infection. An aspect of the work of the lab includes defining how antibodies evolve to become more potent.
During the COVID pandemic, her laboratory started to apply their expertise in viral immunology to advance research on SARS-CoV-2 immunity. They developed tools that allows comprehensive profiling of antibody responses and pathways of escape. They are using these profiles to isolate and study antibodies to SARS-CoV-2, with a focus on identifying broad and potent antibodies that can recognize emerging variants of concern. She has also contributed to the discussions of re-infection.
The lab is studying other globally important pathogens such as Zika virus. They also study innate immune factors and their role in inhibiting virus replication.
References
Living people
Scientists from Pennsylvania
American virologists
University of Connecticut alumni
University of Colorado Boulder alumni
University of Washington faculty
HIV vaccine research
HIV/AIDS researchers
UConn Huskies women's basketball players
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Year of birth missing (living people) | Julie Overbaugh | Chemistry | 3,238 |
370,539 | https://en.wikipedia.org/wiki/Leaf%20blower | A leaf blower, commonly known as a blower, is a device that propels air out of a nozzle to move debris such as leaves and grass cuttings. Leaf blowers are powered by electric or gasoline motors. Gasoline models have traditionally been two-stroke engines, but four-stroke engines were recently introduced to partially address air pollution concerns. Leaf blowers are typically self-contained handheld units, or backpack mounted units with a handheld wand. The latter is more ergonomic for prolonged use. Larger units may rest on wheels and even use a motor for propulsion. These are sometimes called "walk-behind leaf blowers" because they must be pushed by hand to be operated. Some units called blower vacs, can also suck in leaves and small twigs via a vacuum, and shred them into a bag.
Leaf blowers are a source of controversy due to their adverse impacts, such as operator injury (including hearing loss), particulate air pollution, noise pollution, and ecological habitat destruction. Over 200 localities have restricted the use of leaf blowers, and many major cities, including Washington, DC, are implementing total bans due to the negative effects on operator health, ecological destruction, pollution, and nuisances including noise. On October 9, 2021, California passed an air pollution control law, AB 1346, phasing out small off-road engines such as those found in leaf blowers, set to take effect January 1, 2024. Leaf blowers can also be used to blow away thin materials such as clothes.
History
Leaf blowers were originally introduced in California. By 1990, annual sales were over 800,000 in the U.S., and the tool had become a ubiquitous gardening implement.
Other functions beyond simple garden maintenance have been demonstrated by Richard Hammond on the Brainiac television series, in which a man-sized hovercraft was constructed from a leaf blower. Being both portable and able to generate high wind speeds and air volumes of 14 m³ per minute, the leaf blower has many potential uses in amateur construction projects.
The leaf blower originated in 1947 as a backpack fogger apparatus, invented by the Japan-based Kyoritsu Noki Company. Kyoritsu followed that design with a backpack blower/misting machine in 1955. In 1968, Kyoritsu applied for a patent on a backpack blower-mister design, and in 1972 established itself in the United States as Kioritz Corporation of America, which is said to have invented the first leaf blower in 1977. The company changed its name to Echo in 1978.
Among rival manufacturers such as Stihl, Weed Eater, and Husqvarna, Echo saw sales of leaf blowers explode in the 1970s. It is estimated that sales of leaf blowers in the U.S. had exceeded 1 million units by 1989.
To meet the 1995 California regulations for noise and air pollution, leaf blower manufacturers modified existing engine designs to comply. However, 1999 regulations were far more stringent, forcing the engineering of a quieter, more compliant two-stroke engine design. While leaf blowers were becoming more tolerable in U.S. suburban neighborhoods, many communities had by now, in fact, banned their use. In the mid-2000s, and to further answer critics, manufacturers once again evolved the leaf blower, using NiCad (nickel-cadmium) powered tool designs to create the first cordless leaf blower. These NiCad battery-powered designs were further improved by the more powerful, longer-running lithium-ion batteries that power most cordless leaf blowers marketed today. Cordless leaf blowers today operate with zero emissions and at an estimated 70% noise reduction (compared to levels produced by their predecessors).
Environmental and occupational impact
Emissions from gasoline-powered grounds-keeping equipment in general are a source of air pollution and more immediately, noise pollution. In the United States, US emission standards prescribe maximum emissions from small engines. The two-stroke engines used in most leaf blowers operate by mixing gasoline with oil, and a third of this mixture is not burned, but is emitted as an aerosol exhaust. These pollutants have been linked to cancer, heart disease, and asthma. A 2011 study found that the amount of NMHC pollutants emitted by a leaf blower operated for 30 minutes is comparable to the amount emitted by a Ford F-150 pickup truck driving from Texas to Alaska.
In addition to the adverse health effects of carbon monoxide, nitrogen oxides, hydrocarbons, and particulates generated in the exhaust gas of the gasoline-powered engines, leaf blowers pose problems related to the dust raised by the powerful flow of air. Dust clouds caused by leaf blowers contain potentially harmful substances such as pesticides, mold, and animal fecal matter that may cause irritation, allergies, and disease.
Noise pollution is also a concern with leaf blowers, as they can emit noise levels above those required to cause hearing loss to both the operator and those nearby.
Leaf blowers also present an occupational hearing hazard to the nearly 1 million people who work in lawn service and grounds-keeping. A recent study assessed the occupational noise exposure among groundskeepers at several North Carolina public universities and found noise levels from leaf blowers averaging 89 decibels (A-weighted) and maximum sound pressure levels reaching 106 dB(A), both far exceeding the National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit of 85 dB(A).
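The NIOSH limit is defined with a 3-dB exchange rate: every 3 dB(A) above the 85 dB(A) criterion halves the recommended daily exposure time from its 8-hour baseline. A short sketch of that standard calculation, applied to the levels reported in the study above:

def niosh_permissible_hours(level_dba: float,
                            criterion_dba: float = 85.0,
                            baseline_hours: float = 8.0,
                            exchange_rate_db: float = 3.0) -> float:
    """Recommended maximum daily exposure under the NIOSH 3-dB exchange rate."""
    return baseline_hours / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)

print(f"{niosh_permissible_hours(89):.2f} h")         # ~3.17 h at 89 dB(A)
print(f"{niosh_permissible_hours(106) * 60:.1f} min") # ~3.8 min at 106 dB(A)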
Leaves are ecologically beneficial, providing habitat for insects and microorganisms and nutrients for the soil. Leaving some leaves rather than removing them all can support biodiversity.
Battery-powered leaf blowers produce zero emissions, are more efficient, and are even rechargeable, making them an increasingly reliable alternative to gas power.
Bans
Soon after the leaf blower was introduced into the U.S., its use was banned in two California cities, Carmel-by-the-Sea in 1975 and Beverly Hills in 1978, as a noise nuisance. There are currently twenty California cities that have banned leaf blowers, sometimes only within residential neighborhoods and usually targeting gasoline-powered equipment. Another 80 cities have ordinances on the books restricting either usage or noise level or both.
Washington, DC, passed a ban on gas-power leaf blowers in 2018. A law banning the sale of gas-powered lawn equipment in California will take effect in 2024.
See also
String trimmer
References
External links
American inventions
Gardening tools
Leaves
Home appliances
20th-century inventions | Leaf blower | Physics,Technology | 1,343 |
36,872,941 | https://en.wikipedia.org/wiki/Mageba%20%28Swiss%20company%29 | Mageba (stylised as mageba) is a civil engineering service provider and manufacturer of bridge bearings, expansion joints, seismic protection and structural monitoring devices for the construction industry. The company is headquartered in Bülach, Switzerland, and operates through offices in Europe, the Americas and Asia-Pacific. In all, mageba has official representations in over 40 countries.
History
mageba was founded in 1963 in Bülach, Switzerland. By 1969 the company was designing and manufacturing a variety of bridge bearings and expansion joints, and had heavy-duty testing facilities in operation. In 2004 the company merged with Proceq. The resulting company continued to design and manufacture bridge bearings and expansion joints.
In April 2011, mageba USA LLC was founded with offices in New York and San Jose.
By then mageba had production facilities in Fussach (Austria), Shanghai (China), and offices in Uslar and Stuttgart (Germany) and Cugy (Switzerland). By 2012, the company had four facilities in India, and was also operating in Russia, South Korea, and Turkey.
mageba has supplied bearings and expansion joints to more than 10,000 bridges around the world, including the Audubon Bridge in Louisiana, USA, the Incheon Bridge in South Korea, the Golden Ears Bridge in British Columbia, Canada (2009), the Bandra–Worli Sea Link in India, the Øresund Bridge, which has linked Denmark and Sweden since 2000, and the Tsing Ma Bridge in Hong Kong.
mageba also installs and services bridge components.
A recent focus of activities of the firm has been the provision of structure surveillance services, including installation and remote monitoring of sensors, inspections and testing.
References & Publications
External links
https://www.mageba-group.com/global/ Mageba International Website
https://www.mageba-group.com/us/ Mageba USA Website
https://www.youtube.com/magebagroup Mageba's YouTube channel
http://en.structurae.de/products/data/index.cfm?id=21 Structurae page
Civil engineering organizations
Manufacturing companies of Switzerland | Mageba (Swiss company) | Engineering | 440 |
69,533,402 | https://en.wikipedia.org/wiki/Anthony%20Kelly%20%28materials%20scientist%29 | Anthony Kelly CBE FRS (25 January 1929 – 3 June 2014) was a British materials scientist.
He joined the Crystallography Research Group in the Cavendish Laboratory in 1950, after completing his undergraduate degree in physics at the University of Reading. In the 1950s, he held positions at the University of Illinois, the University of Birmingham, and Northwestern University, before returning to Cambridge in 1959 as a lecturer in the department of metallurgy.
In 1967, he moved to the National Physical Laboratory, where he worked first in the Division of Inorganic and Metallic Structure, and then in the Materials Group as deputy director. Whilst still involved with NPL, he served an extensive period as Vice Chancellor of the University of Surrey from 1975 to 1994. He returned to Cambridge in 1994 as a distinguished research fellow in the Department of Materials Science.
He was elected Fellow of the Royal Society in 1973, Fellow of the Royal Academy of Engineering in 1979.
References
1929 births
2014 deaths
Academics of the University of Cambridge
Alumni of the University of Reading
Fellows of the Royal Society
Fellows of the Royal Academy of Engineering
Materials scientists and engineers
People associated with the University of Surrey | Anthony Kelly (materials scientist) | Materials_science,Engineering | 225 |
26,701,331 | https://en.wikipedia.org/wiki/NuoG%20RNA%20motif | The nuoG RNA motif is a conserved RNA structure detected by bioinformatics. It is located in the presumed 5' untranslated regions of nuoG genes. This gene and the downstream genes probably comprise an operon that encodes various subunits of the NADH:ubiquinone oxidoreductase enzyme.
nuoG RNAs are found only in some, but not all, enterobacteria. It is uncertain whether certain sequences in the genus Salmonella correspond to nuoG RNAs, since they do not conserve the proposed secondary structure; if they are true homologs, this observation would undermine the proposed conserved structure. However, the sequence similarity between the recognized nuoG RNAs and the Salmonella sequences is loose, and so the sequences might be unrelated. Because of the question of the Salmonella sequences, some ambiguity remains as to whether or not nuoG RNAs do, in fact, function as structured RNAs.
References
External links
Cis-regulatory RNA elements | NuoG RNA motif | Chemistry | 192 |
52,925,755 | https://en.wikipedia.org/wiki/C21H29ClO3 |
The molecular formula C₂₁H₂₉ClO₃ (molar mass: 364.90616 g/mol) may refer to:
Clogestone, or chlormadinol
Clostebol acetate
Hydromadinone
Molecular formulas | C21H29ClO3 | Physics,Chemistry | 69 |
173,272 | https://en.wikipedia.org/wiki/Multi-exposure%20HDR%20capture | In photography and videography, multi-exposure HDR capture is a technique that creates high dynamic range (HDR) images (or extended dynamic range images) by taking and combining multiple exposures of the same subject matter at different exposures. Combining multiple images in this way results in an image with a greater dynamic range than what would be possible by taking one single image. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video. The term "HDR" is used frequently to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on computational imaging techniques to capture and combine multiple exposures.
A single image captured by a camera provides a finite range of luminosity inherent to the medium, whether it is a digital sensor or film. Outside this range, tonal information is lost and no features are visible; tones that exceed the range are "burned out" and appear pure white in the brighter areas, while tones that fall below the range are "crushed" and appear pure black in the darker areas. The ratio between the maximum and the minimum tonal values that can be captured in a single image is known as the dynamic range. In photography, dynamic range is measured in exposure value (EV) differences, also known as stops.
The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space, it makes it look only slightly dimmer. For most illumination levels, the response is approximately logarithmic. Human eyes adapt fairly rapidly to changes in light levels. HDR can thus produce images that look more like what a human sees when looking at the subject.
This technique can be applied to produce images that preserve local contrast for a natural rendering, or exaggerate local contrast for artistic effect. HDR is useful for recording many real-world scenes containing a wider range of brightness than can be captured directly, typically both bright, direct sunlight and deep shadows. Due to the limitations of printing and display contrast, the extended dynamic range of HDR images must be compressed to the range that can be displayed. The method of rendering a high dynamic range image to a standard monitor or printing device is called tone mapping; it reduces the overall contrast of an HDR image to permit display on devices or prints with lower dynamic range.
Benefits
One aim of HDR is to present a similar range of luminance to that experienced through the human visual system. The human eye, through non-linear response, adaptation of the iris, and other methods, adjusts constantly to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.
Most cameras are limited to a much narrower range of exposure values within a single image, due to the dynamic range of the capturing medium. With a limited dynamic range, tonal differences can be captured only within a certain range of brightness. Outside of this range, no details can be distinguished: when the tone being captured exceeds the range in bright areas, these tones appear as pure white, and when the tone being captured does not meet the minimum threshold, these tones appear as pure black. Images captured with non-HDR cameras that have a limited exposure range (low dynamic range, LDR), may lose detail in highlights or shadows.
Modern CMOS image sensors have improved dynamic range and can often capture a wider range of tones in a single exposure, reducing the need to perform multi-exposure HDR. Color film negatives and slides consist of multiple film layers that respond to light differently. Original film (especially negatives, as opposed to transparencies or slides) features a very high dynamic range (on the order of 8 for negatives and 4 to 4.5 for positive transparencies).
Multi-exposure HDR is used in photography and also in extreme dynamic range applications such as welding or automotive work. In security cameras the term "wide dynamic range" is used instead of HDR.
Limitations
A fast-moving subject, or camera movement between the multiple exposures, will generate a "ghost" effect or a staggered-blur strobe effect due to the merged images not being identical. Unless the subject is static and the camera mounted on a tripod, there may be a tradeoff between extended dynamic range and sharpness. Sudden changes in the lighting conditions (e.g., strobed LED light) can also interfere with the desired results, by producing one or more HDR layers that do not have the luminosity expected by an automated HDR system, though one might still be able to produce a reasonable HDR image manually in software by rearranging the image layers to merge in order of their actual luminosity.
Because of the nonlinearity of some sensors, image artifacts can be common.
Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.
Process
High-dynamic-range photographs are generally composites of multiple standard dynamic range images, often captured using exposure bracketing. Afterwards, photo manipulation software merges the input files into a single HDR image, which is then also tone mapped in accordance with the limitations of the planned output or display.
Capturing multiple images (exposure bracketing)
Any camera that allows manual exposure control can perform multi-exposure HDR image capture, although one equipped with automatic exposure bracketing (AEB) facilitates the process. Some cameras have an AEB feature that spans a far greater dynamic range than others, from ±0.6 EV in simpler cameras to ±18 EV in top professional cameras.
The exposure value (EV) refers to the amount of light applied to the light-sensitive detector, whether film or digital sensor such as a CCD. An increase or decrease of one stop is defined as a doubling or halving of the amount of light captured. Revealing detail in the darkest of shadows requires an increased EV, while preserving detail in very bright situations requires very low EVs.
EV is controlled using one of two photographic controls: varying either the size of the aperture or the exposure time. A set of images with multiple EVs intended for HDR processing should be captured only by altering the exposure time; altering the aperture size also would affect the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.
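To make the stop arithmetic concrete, here is a minimal Python sketch computing EV at ISO 100 from the standard definition EV = log2(N²/t), for a fixed aperture and a bracket of shutter speeds; the f-number and shutter speeds are illustrative, not tied to any particular camera:

```python
from math import log2

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV at ISO 100: EV = log2(N^2 / t), where N is the f-number
    and t is the exposure time in seconds."""
    return log2(f_number ** 2 / shutter_s)

# A roughly +/-2 EV bracket at a fixed f/8: only the shutter speed is
# varied, so the depth of field stays constant across the set.
for t in (1 / 500, 1 / 125, 1 / 30):
    print(f"1/{round(1 / t)} s -> EV {exposure_value(8, t):.1f}")
# prints approximately EV 15.0, 13.0 and 10.9
```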
Multi-exposure HDR photography generally is limited to still scenes because any movement between successive images will impede or prevent success in combining them afterward. Also, because the photographer must capture three or more images to obtain the desired luminance range, taking such a full set of images takes extra time. Photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is advised to minimize framing differences between exposures.
Merging the images into an HDR image
Tonal information and details from shadow areas can be recovered from images that are deliberately overexposed (i.e., with positive EV compared to the correct scene exposure), while similar tonal information from highlight areas can be recovered from images that are deliberately underexposed (negative EV). The process of selecting and extracting shadow and highlight information from these over/underexposed images and then combining them with image(s) that are exposed correctly for the overall scene is known as exposure fusion. Exposure fusion can be performed manually, relying on the HDR operator's judgment, experience, and training, but usually, fusion is performed automatically by software.
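As a hedged illustration of automated fusion, the sketch below uses OpenCV's photo module, which provides an implementation of Mertens-style exposure fusion; the file names are placeholders and error handling is omitted:

```python
import cv2
import numpy as np

# Bracketed exposures of the same scene (paths are placeholders).
paths = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(p) for p in paths]

# Align handheld shots using median threshold bitmaps to reduce ghosting.
align = cv2.createAlignMTB()
align.process(images, images)

# Mertens exposure fusion needs neither exposure times nor tone mapping;
# it weights each pixel by contrast, saturation and well-exposedness and
# returns a float image with values in roughly [0, 1].
merge = cv2.createMergeMertens()
fused = merge.process(images)

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```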
Storing
Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed using mathematical functions such as power laws or logarithms, or stored as linear floating-point values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.
Unlike traditional images, HDR images often do not use fixed ranges per color channel, in order to represent many more colors over a much wider dynamic range. For that purpose, they do not use integer values to represent the single color channels (e.g., 0–255 in an 8 bit per pixel interval for red, green and blue) but instead use a floating point representation. Common values are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10 to 12 bits (1,024 to 4,096 values) for luminance and 8 bits (256 values) for chrominance without introducing any visible quantization artifacts.
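One classic compromise between integer and floating-point storage is the shared-exponent RGBE encoding used by the Radiance (.hdr) format. The Python sketch below shows the basic idea, following the widely documented convention value ≈ mantissa/256 × 2^(exponent−128); it is a simplified illustration, not a full file writer:

```python
import math

def rgbe_encode(r: float, g: float, b: float):
    """Pack one linear RGB pixel into four bytes: three 8-bit mantissas
    sharing a single 8-bit exponent (Radiance RGBE convention)."""
    m = max(r, g, b)
    if m < 1e-32:                      # too dark to represent: store black
        return (0, 0, 0, 0)
    frac, exp = math.frexp(m)          # m == frac * 2**exp, frac in [0.5, 1)
    scale = frac * 256.0 / m
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_decode(r: int, g: int, b: int, e: int):
    """Recover approximate linear RGB from an RGBE pixel."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)   # equals 2**(e - 128) / 256
    return (r * f, g * f, b * f)
```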
Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDR files by the same software package.
Tone mapping is often needed because the dynamic range that can be displayed is often lower than the dynamic range of the captured or processed image. HDR displays can receive a higher dynamic range signal than SDR displays, reducing the need for tone mapping.
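As a concrete illustration, the global operator of Reinhard et al. is one of the simplest tone mapping curves: luminance is scaled to the scene's log-average and then compressed with L/(1+L). The NumPy sketch below is a minimal version of that idea (the key value 0.18 is the conventional "middle grey" default), not a full implementation of the published operator:

```python
import numpy as np

def reinhard_global(hdr: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Map a linear float RGB image of shape (H, W, 3) to display
    range [0, 1) using the global Reinhard curve L_d = L / (1 + L)."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(1e-6 + lum)))   # scene log-average luminance
    scaled = (key / log_avg) * lum
    mapped = scaled / (1.0 + scaled)                # compress to [0, 1)
    # Rescale each pixel's RGB by the luminance ratio to keep hue intact.
    return hdr * (mapped / np.maximum(lum, 1e-6))[..., None]
```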
Types of HDR
HDR can be done via several methods:
DOL: Digital overlap
BME: Binned multiplexed exposure
SME: Spatially multiplexed exposure
QBC: Quad Bayer Coding
Examples
This is an example of four standard dynamic range images that are combined to produce three resulting tone mapped images:
This is an example of a scene with a very wide dynamic range:
Devices
Post-capture software
Several software applications are available on the PC, Mac, and Linux platforms for producing HDR files and tone mapped images. Notable titles include:
Adobe Photoshop
Affinity Photo
Aurora HDR
Dynamic Photo HDR
EasyHDR
GIMP
HDR PhotoStudio
Luminance HDR
Nik Collection HDR Efex Pro
Oloneo PhotoEngine
Photomatix Pro
PTGui
SNS-HDR
Photography
Several camera manufacturers offer built-in multi-exposure HDR features. For example, the Pentax K-7 DSLR has an HDR mode that makes 3 or 5 exposures and outputs (only) a tone mapped HDR image in a JPEG file. The Canon PowerShot G12, Canon PowerShot S95, and Canon PowerShot S100 offer similar features in a smaller format. Nikon's approach is called 'Active D-Lighting', which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the emphasis being on creating a realistic effect.
Some smartphones provide HDR modes for their cameras, and most mobile platforms have apps that provide multi-exposure HDR picture taking. Google released an HDR+ mode for the Nexus 5 and Nexus 6 smartphones in 2014, which automatically captures a series of images and combines them into a single still image, as detailed by Marc Levoy. Unlike traditional HDR, Levoy's implementation of HDR+ uses multiple images underexposed by using a short shutter speed, which are then aligned and averaged pixel by pixel, improving dynamic range and reducing noise. By selecting the sharpest image as the baseline for alignment, the effect of camera shake is reduced.
Some of the sensors on modern phones and cameras may combine two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
Videography
Although not as established as for still photography capture, it is also possible to capture and combine multiple images for each frame of a video in order to increase the dynamic range captured by the camera. This can be done via multiple methods:
Creating a time-lapse of individual images created via the multi-exposure HDR technique.
Taking consecutively two differently exposed images by cutting the frame rate in half.
Taking simultaneously two differently exposed images by cutting the resolution in half.
Taking simultaneously two differently exposed images with full resolution and frame rate via a sensor with dual gain architecture. For example: Arri Alexa's sensor, Samsung sensors with Smart-ISO Pro.
Some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30fps video will give out 60fps with the odd frames at a short exposure time and the even frames at a longer exposure time.
In 2020, Qualcomm announced Snapdragon 888, a mobile SoC able to do computational multi-exposure HDR video capture in 4K and also to record it in a format compatible with HDR displays.
In 2021, the Xiaomi Mi 11 Ultra smartphone is able to do computational multi-exposure HDR for video capture.
Surveillance cameras
HDR capture can be implemented on surveillance cameras, even inexpensive models. This is usually termed a wide dynamic range (WDR) function. Examples include CarCam Tiny, Prestige DVR-390, and DVR-478.
History
Mid-19th century
The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.
Mid-20th century
Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.
Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.
With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.
Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.
Late 20th century
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops.
The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the Technion in Israel, led by Oliver Hilsenrath and Yehoshua Y. Zeevi. Technion researchers filed for a patent on this concept in 1991, and several related patents in 1992 and 1993.
In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured successively by a sensor, or simultaneously by two sensors of the camera. This bracketing process was thus applied to a video stream.
In 1991, the first commercial video camera was introduced that performed real-time capturing of multiple images with different exposures, and producing an HDR video image, by Hymatom, licensee of Georges Cornuéjols.
Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal-to-noise ratio.
In 1993, the Technion introduced another commercial camera, for medical use, that produced an HDR video image.
Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.
On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the space shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum.
The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Lab. Mann's method involved a two-step procedure: First, generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods). Second, convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.
21st century
In February 2001, the Dynamic Ranger technique was demonstrated, using multiple photos with different exposure levels to accomplish high dynamic range similar to the naked eye.
In the early 2000s, several scholarly research efforts used consumer-grade sensors and cameras. A few companies such as RED and Arri have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture time-sequential HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the "x" channel. The "x" channel can be merged with the normal channel in post production software. The Arri Alexa camera uses a dual-gain architecture to generate an HDR image from two exposures captured at the same time.
With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010, the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras. Similar methods have been described in the academic literature in 2001 and 2007.
In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.
On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.
See also
Comparison of graphics file formats
HDRi (data format)
High-dynamic-range rendering
High-dynamic-range television
JPEG XT
Logluv TIFF
OpenEXR
RGBE image format
scRGB
Wide dynamic range
References
Benjamin Sarao (1999). Ben Sarao, Trenton, NJ, USA: Space Shuttle Discovery, pages 16–17 (English ed.). Victor Hasselblad AB, Goteborg, Sweden. ISSN 0282-5449
External links
Articles containing video clips
Computer graphics
High dynamic range
High-dynamic-range imaging
Photographic techniques | Multi-exposure HDR capture | Engineering | 4,343 |
967,813 | https://en.wikipedia.org/wiki/NGC%202403 | NGC 2403 (also known as Caldwell 7) is an intermediate spiral galaxy in the constellation Camelopardalis. It is an outlying member of the M81 Group, and is approximately 8 million light-years distant. It bears a similarity to M33, being about 50,000 light years in diameter and containing numerous star-forming H II regions.
The northern spiral arm connects it to the star-forming region NGC 2404. NGC 2403 can be observed using 10×50 binoculars. NGC 2404 is 940 light-years in diameter, making it one of the largest known H II regions. It bears a striking similarity to NGC 604 in M33, both in size and in its location within its galaxy.
Supernovae and supernova impostors
There have been four reported astronomical transients in the galaxy:
SN 1954J was first noticed by Gustav Tammann and Allan Sandage as a "bright blue irregular variable" star, which they named V12. They noted it underwent a major outburst on 2/3 November 1954, which attained a magnitude of 16 at its brightest. In 1972, Fritz Zwicky classified this event as a type V supernova. It was later determined to be a supernova imposter: a highly luminous, very massive eruptive star, surrounded by a dusty nebula, similar to the 1843 Great Eruption of η Carinae in the Milky Way.
SN 2002kg was discovered by LOTOSS (Lick Observatory and Tenagra Observatory Supernova Searches) on 26 October 2002 and initially classified as a type IIn supernova, or possibly the outburst of a luminous blue variable. On 24 August 2021, it was reclassified as a gap transient.
SN 2004dj (type II-P, mag. 11.2) was discovered by Kōichi Itagaki on 31 July 2004. At the time of its discovery, it was the nearest and brightest supernova observed in the 21st century.
AT2016ccd, initially designated as SNhunt225, is a luminous blue variable, first discovered by Catalina Real-time Transient Survey (CRTS) and Stan Howerton in December 2013. Outbursts from this star have been observed as recently as November 2021.
History
The galaxy was discovered by William Herschel in 1788. Edwin Hubble detected Cepheid variables in NGC 2403 using the Hale Telescope, making it the first galaxy beyond the Local Group in which a Cepheid was discovered. By 1963, 59 variables had been found in NGC 2403, of which 17 were eventually confirmed as Cepheids, with periods between 20 and 87 days. As late as 1950, Hubble was using a distance of just under 2 million light-years for the galaxy, but by 1968 the analysis of the Cepheids had increased this by almost a factor of five, to within 0.2 magnitudes of the current value.
Companions
NGC 2403 has two known companions. One is the relatively massive dwarf galaxy DDO 44. It is currently being disrupted by NGC 2403, as evidenced by a tidal stream extending on both sides of DDO 44. DDO 44 is passing NGC 2403 at a much closer distance than is typical for dwarf galaxy interactions. It currently has a V-band absolute magnitude of −12.9, but its progenitor was even more luminous.
The other known companion is officially named MADCASH J074238+652501-dw, although it is nicknamed MADCASH-1. The name refers to the MADCASH (Magellanic Analog Dwarf Companions and Stellar Halos) project. MADCASH-1 is similar to typical dwarf spheroidal galaxies in the Local Group; it is quite faint, with an absolute V-band magnitude of −7.81, and has only an ancient, metal-poor population of red giant stars.
Luminous blue variables in NGC 2403
NGC 2403 has four known luminous blue variables: AT 2016ccd, NGC 2403 V14, NGC 2403 V37, and NGC 2403 V12.
Little is known about AT 2016ccd beyond its classification as a luminous blue variable; with a magnitude of 18 to 19.95, it is quite dim. NGC 2403 V14 is better characterized than AT 2016ccd: it has a radius of 1,260.2 solar radii, a mass of 24 solar masses, a temperature of 7,041 K, and a magnitude of 12.9. NGC 2403 V37 is not well characterized; it is believed to be a luminous blue variable with a magnitude of 12.9. NGC 2403 V12 is a poorly characterized luminous blue variable with a reported magnitude of 6.5.
See also
Triangulum Galaxy, which looks very similar to NGC 2403
References
External links
Spiral Galaxy NGC 2403 at the astro-photography site of Mr. Takayuki Yoshida
NGC 2403 at ESA/Hubble
SEDS – NGC 2403
Intermediate spiral galaxies
M81 Group
Camelopardalis
2403
03918
21396
007b
Astronomical objects discovered in 1788
Discoveries by William Herschel | NGC 2403 | Astronomy | 1,062 |
2,295,167 | https://en.wikipedia.org/wiki/Zydis | Zydis is a technology, developed by R.P. Scherer Corporation, used to manufacture orally disintegrating tablets. Zydis tablets dissolve in the mouth within 3 seconds.
History
Zydis technology was developed by R.P. Scherer Corporation (currently owned by Catalent Pharma Solutions) in 1986. The technology's first commercial application was in August, 1993, when a new dosage form of Pepcidine (famotidine) from Merck & Co. was launched in Sweden.
In November 1993 Imodium Lingual (loperamide) from Janssen Pharmaceutica was released in Germany with Zydis technology.
In December, 1996, the Food and Drug Administration approved Claritin (loratadine) RediTabs from Schering-Plough, the first prescription drug with Zydis technology sold in the U.S.
Technology
A Zydis tablet is produced by lyophilizing (freeze-drying) the drug in a matrix usually consisting of gelatin. The resulting product is very lightweight and fragile, and must be dispensed in a special blister pack.
Amipara et al., in their article "Oral disintirating tablet of antihypertensive drug" explain the technology's limitations:
The Zydis formulations consist of a drug physically trapped in a water-soluble matrix (saccharine mixture and polymer), which is freeze dried to produce a product that dissolves rapidly when placed in mouth. The ideal candidate for Zydis technology should be chemically stable and insoluble and particle size preferably less than 50 micron.
Water soluble drugs might form eutectic mixtures and not freeze adequately, so dose is limited to 60 mg and the maximum drug limit is 400 mg for water insoluble drug as large particle sizes might present sedimentation problems during manufacture.
Advantages and disadvantages
Advantages
Zydis tablets:
are convenient for the patients who have difficulty in swallowing (children, old people, bed-ridden and psychiatric patients);
are fast to absorb;
don't require water to consume;
have good taste (mouth feel);
don't provoke choking or suffocation;
have high microbial resistance ("due to the low moisture content in the final product, the Zydis formulation does not allow microbial growth").
Disadvantages
Disadvantages include:
increased price due to cost-intensive production;
sensitivity to moisture (tablets can degrade at higher humidity);
poor physical resistance (easy to break);
limited ability to incorporate higher concentrations of active drug.
Fast dissolving drugs with Zydis technology
Data from "Fast Disintegrating Drug Delivery Systems: A Review with Special Emphasis on Fast Disintegrating Tablets" (2013).
See also
Orally disintegrating tablet
Catalent Pharma Solutions
External links
References
Drug delivery devices
Dosage forms | Zydis | Chemistry | 603 |
19,530,253 | https://en.wikipedia.org/wiki/HD%20112028 | HD 112028 is an evolved star in the northern constellation of Camelopardalis. It has spectral peculiarities that have been interpreted as a shell, and also relatively weak magnesium and silicon lines. Its spectral class has been variously assigned between B9 and A2, and its luminosity class between a subgiant and bright giant.
At an angular separation of 21.47″ is the slightly fainter spectroscopic binary HD 112014, consisting of a pair of A-type main sequence stars. HD 112028 and HD 112014 together are known as the binary star Struve 1694.
References
External links
HR 4893
CCDM J12492+8325
Image HD 112028
Camelopardalis
112028
062572
A-type giants
4893
Durchmusterung objects | HD 112028 | Astronomy | 172 |
19,063,394 | https://en.wikipedia.org/wiki/NGC%201134 | NGC 1134 is a spiral galaxy in the constellation Aries. Its disk is highly inclined with respect to the line of sight from Earth. There is a weak outer extension of the spiral structure in this galaxy. It has been listed in the Arp Atlas of Peculiar Galaxies (Arp number 200), under the "Galaxies with material ejected from nuclei" section. NGC 1134 is classified as a galaxy with reduced surface brightness, and it possesses a distinct bulge in its centre, as judged by photometric analysis. It has a small and distant companion about 7' to the south.
References
External links
NGC 1134
Image NGC 1134
Aries (constellation)
1134
02365
200
10928
Spiral galaxies | NGC 1134 | Astronomy | 148 |
59,469 | https://en.wikipedia.org/wiki/Linear%20cryptanalysis | In cryptography, linear cryptanalysis is a general form of cryptanalysis based on finding affine approximations to the action of a cipher. Attacks have been developed for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other being differential cryptanalysis.
The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992). Subsequently, Matsui published an attack on the Data Encryption Standard (DES), eventually leading to the first experimental cryptanalysis of the cipher reported in the open community (Matsui, 1993; 1994). The attack on DES is not generally practical, requiring 2^47 known plaintexts.
A variety of refinements to the attack have been suggested, including using multiple linear approximations or incorporating non-linear expressions, leading to a generalized partitioning cryptanalysis. Evidence of security against linear cryptanalysis is usually expected of new cipher designs.
Overview
There are two parts to linear cryptanalysis. The first is to construct linear equations relating plaintext, ciphertext and key bits that have a high bias; that is, whose probabilities of holding (over the space of all possible values of their variables) are as close as possible to 0 or 1. The second is to use these linear equations in conjunction with known plaintext-ciphertext pairs to derive key bits.
Constructing linear equations
For the purposes of linear cryptanalysis, a linear equation expresses the equality of two expressions which consist of binary variables combined with the exclusive-or (XOR) operation. For example, the following equation, from a hypothetical cipher, states that the XOR sum of the first and third plaintext bits (as in a block cipher's block) and the first ciphertext bit is equal to the second bit of the key:

P1 ⊕ P3 ⊕ C1 = K2
In an ideal cipher, any linear equation relating plaintext, ciphertext and key bits would hold with probability 1/2. Since the equations dealt with in linear cryptanalysis will vary in probability, they are more accurately referred to as linear approximations.
The procedure for constructing approximations is different for each cipher. In the most basic type of block cipher, a substitution–permutation network, analysis is concentrated primarily on the S-boxes, the only nonlinear part of the cipher (i.e. the operation of an S-box cannot be encoded in a linear equation). For small enough S-boxes, it is possible to enumerate every possible linear equation relating the S-box's input and output bits, calculate their biases and choose the best ones. Linear approximations for S-boxes then must be combined with the cipher's other actions, such as permutation and key mixing, to arrive at linear approximations for the entire cipher. The piling-up lemma is a useful tool for this combination step. There are also techniques for iteratively improving linear approximations (Matsui 1994).
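For small S-boxes, the enumeration step can be written down directly. The Python sketch below builds the table of biases (a linear approximation table) for a hypothetical 4-bit S-box; the S-box values are illustrative and not taken from any real cipher:

```python
# Enumerate linear approximations of a hypothetical 4-bit S-box.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(x: int) -> int:
    """Parity of the bits of x, i.e. the XOR of the masked bits."""
    return bin(x).count("1") & 1

# For input mask a and output mask b, count inputs x with
# <a, x> == <b, S(x)>; the bias is count/16 - 1/2.
for a in range(1, 16):
    for b in range(1, 16):
        count = sum(parity(a & x) == parity(b & SBOX[x]) for x in range(16))
        bias = count / 16 - 0.5
        if abs(bias) >= 0.25:   # keep only the strongest approximations
            print(f"input mask {a:x}, output mask {b:x}: bias {bias:+.3f}")
```

Per the piling-up lemma, n independent approximations with biases ε1, …, εn chain into an approximation of bias 2^(n−1)·ε1⋯εn, which is why single-round biases this small can still dominate the analysis of a full cipher.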
Deriving key bits
Having obtained a linear approximation of the form:

P[i1] ⊕ P[i2] ⊕ ⋯ ⊕ C[j1] ⊕ C[j2] ⊕ ⋯ = K[k1] ⊕ K[k2] ⊕ ⋯
we can then apply a straightforward algorithm (Matsui's Algorithm 2), using known plaintext-ciphertext pairs, to guess at the values of the key bits involved in the approximation.
For each set of values of the key bits on the right-hand side (referred to as a partial key), count how many times the approximation holds true over all the known plaintext-ciphertext pairs; call this count T. The partial key whose T has the greatest absolute difference from half the number of plaintext-ciphertext pairs is designated as the most likely set of values for those key bits. This is because it is assumed that the correct partial key will cause the approximation to hold with a high bias. The magnitude of the bias is significant here, as opposed to the magnitude of the probability itself.
This procedure can be repeated with other linear approximations, obtaining guesses at values of key bits, until the number of unknown key bits is low enough that they can be attacked with brute force.
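A hedged sketch of the counting procedure described above is shown below. The `approx` and `partial_decrypt` callables are cipher-specific stand-ins (the real versions depend on the cipher's last-round structure), so this illustrates the shape of Matsui's Algorithm 2 rather than a concrete attack:

```python
def algorithm2(pairs, approx, partial_decrypt, key_space):
    """Rank candidate partial keys by |T - N/2|.

    pairs:           list of known (plaintext, ciphertext) tuples
    approx(p, v):    0/1 value of the linear approximation on plaintext p
                     and the partially decrypted value v
    partial_decrypt: undoes the last round of the cipher under a guessed
                     partial key (cipher-specific stand-in)
    key_space:       iterable of candidate partial keys
    """
    n = len(pairs)

    def deviation(k):
        # T = number of pairs for which the approximation holds.
        t = sum(approx(p, partial_decrypt(c, k)) == 0 for p, c in pairs)
        return abs(t - n / 2)

    # The correct partial key should make |T - N/2| largest, because only
    # a correct last-round decryption exposes the biased approximation.
    return max(key_space, key=deviation)
```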
See also
Piling-up lemma
Differential cryptanalysis
References
External links
Linear Cryptanalysis of DES
A Tutorial on Linear and Differential Cryptanalysis
Linear Cryptanalysis Demo
A tutorial on linear (and differential) cryptanalysis of block ciphers
"Improving the Time Complexity of Matsui's Linear Cryptanalysis", improves the complexity thanks to the Fast Fourier Transform
Cryptographic attacks | Linear cryptanalysis | Technology | 900 |
3,473,022 | https://en.wikipedia.org/wiki/Carbonate%20ester | In organic chemistry, a carbonate ester (organic carbonate or organocarbonate) is an ester of carbonic acid. This functional group consists of a carbonyl group flanked by two alkoxy groups. The general structure of these carbonates is R−O−C(=O)−O−R′ and they are related to esters (R−C(=O)−O−R′), ethers (R−O−R′) and also to the inorganic carbonates.
Monomers of polycarbonate (e.g. Makrolon or Lexan) are linked by carbonate groups. These polycarbonates are used in eyeglass lenses, compact discs, and bulletproof glass. Small carbonate esters like dimethyl carbonate, ethylene carbonate, propylene carbonate are used as solvents, dimethyl carbonate is also a mild methylating agent.
Structures
Carbonate esters have planar OC(OC)2 cores, which confers rigidity. The unique O=C bond is short (1.173 Å in the depicted example), while the C−O bonds are more ether-like (bond distances of 1.326 Å in the depicted example).
Carbonate esters can be divided into three structural classes: acyclic, cyclic, and polymeric. The first and general case is the acyclic carbonate group. Organic substituents can be identical or not. Both aliphatic or aromatic substituents are known, they are called dialkyl or diaryl carbonates, respectively. The simplest members of these classes are dimethyl carbonate and diphenyl carbonate.
Alternatively, the carbonate groups can be linked by a 2- or 3-carbon bridge, forming cyclic compounds such as ethylene carbonate and trimethylene carbonate. The bridging compound can also have substituents, e.g. CH3 for propylene carbonate. Instead of terminal alkyl or aryl groups, two carbonate groups can be linked by an aliphatic or aromatic bifunctional group.
A third family of carbonates are the polymers, such as poly(propylene carbonate) and poly(bisphenol A carbonate) (e.g. Makrolon or Lexan).
Preparation
Organic carbonates are not prepared from inorganic carbonate salts.
Two main routes to carbonate esters are practiced: the reaction of an alcohol (or phenol) with phosgene (phosgenation), and the reaction of an alcohol with carbon monoxide and an oxidizer (oxidative carbonylation). Other carbonate esters may subsequently be prepared by transesterification.
In principle carbonate esters can be prepared by direct condensation of methanol and carbon dioxide. The reaction is however thermodynamically unfavorable. A selective membrane can be used to separate the water from the reaction mixture and increase the yield.
Phosgenation
Alcohols react with phosgene to yield carbonate esters according to the following reaction:
2 ROH + COCl2 → ROC(O)OR + 2 HCl
Phenols react similarly. Polycarbonate derived from bisphenol A is produced in this manner. This process is high yielding. However, toxic phosgene is used, and stoichiometric quantities of base (e.g. pyridine) are required to neutralize the hydrogen chloride that is cogenerated. Chloroformate esters are intermediates in this process. Rather than reacting with additional alcohol, they may disproportionate to give the desired carbonate diesters and one equivalent of phosgene:
PhOH + COCl2 → PhOC(O)Cl + HCl
2 PhOC(O)Cl → PhOC(O)OPh + COCl2
Overall reaction is:
2 PhOH + COCl2 → PhOC(O)OPh + 2 HCl
Oxidative carbonylation
Oxidative carbonylation is an alternative to phosgenation. The advantage is the avoidance of phosgene. Using copper catalysts, dimethylcarbonate is prepared in this way:
2 MeOH + CO + 1/2 O2 → MeOC(O)OMe + H2O
Diphenyl carbonate is also prepared similarly, but using palladium catalysts. The Pd-catalyzed process requires a cocatalyst to reconvert the Pd(0) to Pd(II). Manganese(III) acetylacetonate has been used commercially.
Reaction of carbon dioxide with epoxides
The reaction of carbon dioxide with epoxides is a general route to the preparation of cyclic 5-membered carbonates. Annual production of cyclic carbonates was estimated at 100,000 tonnes per year in 2010. Industrially, ethylene and propylene oxides readily react with carbon dioxide to give ethylene and propylene carbonates (with an appropriate catalyst). For example:
C2H4O + CO2 → C2H4O2CO
Carbonate transesterification
Carbonate esters can be converted to other carbonates by transesterification. A more nucleophilic alcohol will displace a less nucleophilic alcohol. In other words, aliphatic alcohols will displace phenols from aryl carbonates. If the departing alcohol is more volatile, the equilibrium may be driven by distilling that off.
Reactions
Carbonate esters undergo many of the reactions of conventional carboxylic acid esters. With Grignard reagents carbonate esters react to give tertiary alcohols. Some cyclic carbonates are susceptible to polymerization.
Uses
Organic carbonates are used as solvents in lithium batteries. Due to their high polarity, they dissolve lithium salts. The problem of high viscosity is circumvented by using mixtures for example of dimethyl carbonate, diethyl carbonate, and dimethoxyethane.
They are also used as solvents in organic synthesis. Classified as polar solvents, they have a wide liquid temperature range. One example is propylene carbonate with melting point −55 °C and boiling point 240 °C. Other advantages are low ecotoxicity and good biodegradability. Many industrial production pathways for carbonates are not green because they rely on phosgene or propylene oxide.
Dimethyl dicarbonate is commonly used as a beverage preservative, processing aid, or sterilant.
References
Functional groups | Carbonate ester | Chemistry | 1,331 |
76,396,169 | https://en.wikipedia.org/wiki/NGC%202012 | NGC 2012 is a large lenticular galaxy in the constellation Mensa. It was discovered by John Herschel in 1836. At a distance of over 236 million light-years from Earth, NGC 2012 is not visible to the naked eye, and a large telescope is needed to observe it. No space probe has ever been sent to study the galaxy.
Discovery
The polymath John Herschel observed the galaxy in 1836, and it was later added to the New General Catalogue (NGC). The galaxy is a relatively long distance from Earth, which made Herschel's discovery uncommon for the time period.
References
Spiral galaxies
Mensa (constellation)
2012
17194
Astronomical objects discovered in 1836
Discoveries by John Herschel | NGC 2012 | Astronomy | 140 |
35,441,511 | https://en.wikipedia.org/wiki/Fontaine%E2%80%93Mazur%20conjecture | In mathematics, the Fontaine–Mazur conjectures are conjectures introduced by Jean-Marc Fontaine and Barry Mazur about when p-adic representations of Galois groups of number fields can be constructed from representations on étale cohomology groups of varieties. Some cases of this conjecture in dimension 2 have been proved by Mark Kisin.
The first conjecture stated by Fontaine and Mazur assumes that ρ is an irreducible p-adic representation of the absolute Galois group of Q that is unramified except at a finite number of primes and which is not the Tate twist of an even representation that factors through a finite quotient group of the Galois group. It claims that, in this case, ρ is associated to a cuspidal newform if and only if ρ is potentially semi-stable at p.
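For orientation, a schematic statement in standard notation is given below; this is a paraphrase of the conjecture as commonly quoted, with details (such as the precise coefficient field and normalizations) suppressed, so it should be read as a sketch rather than the authors' exact formulation:

```latex
% Schematic form of the two-dimensional Fontaine--Mazur conjecture:
% given a continuous, irreducible representation
\[
  \rho \colon \operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})
        \longrightarrow \operatorname{GL}_2(\overline{\mathbf{Q}}_p)
\]
% that is unramified outside a finite set of primes, potentially
% semi-stable at $p$, and not a Tate twist of an even representation
% with finite image, $\rho$ should arise (up to Tate twist) from a
% cuspidal newform.
```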
References
External links
Robert Coleman's lectures on the Fontaine–Mazur conjecture
Galois theory
Number theory
Conjectures | Fontaine–Mazur conjecture | Mathematics | 164 |
7,106,579 | https://en.wikipedia.org/wiki/Business%20support%20system | Business support systems (BSS) are the components that a telecommunications service provider (or telco) uses to run its business operations towards customers.
Together with operations support systems (OSS), they are used to support various end-to-end telecommunication services (e.g., telephone services). BSS and OSS have their own data and service responsibilities. The two systems together are abbreviated in various ways, such as OSS/BSS, BSS/OSS, B/OSS, BSSOSS, OSSBSS or BOSS. Some commentators and analysts take a network-up approach to these systems (hence OSS/BSS) and others take a business-down approach (hence BSS/OSS).
The initialism BSS is also used in a singular form to refer to all the business support systems, viewed as a whole system.
Role
BSS deals with the taking of orders, payment issues, revenues, etc. It supports four processes: product management, order management, revenue management and customer management.
Product management
Product management supports product development, the sales and management of products, offers and bundles to businesses and mass-market customers. Product management regularly includes offering cross-product discounts, appropriate pricing and managing how products relate to one another.
Customer management
Service providers require a single view of the customer and regularly need to support complex hierarchies across customer-facing applications (customer relationship management). Customer management also covers requirements for partner management and 24x7 web-based customer self-service. Customer management can also be thought of as a full-fledged customer relationship management system implemented to help customer care agents handle customers in a better and more informed manner.
Revenue management
Revenue management focuses on billing, charging and settlement. It includes billing for consumer, enterprise and wholesale services, including interconnect and roaming. This includes billing mediation systems, bill generation and bill presentment. Revenue management may also include fraud management and revenue assurance.
Order management
Order management encompasses four areas:
Order decomposition details the rules for decomposing a sales order into multiple work orders or service orders (see the sketch after this list). For example, a triple-play telco sales order with three services - landline, Internet and wireless - can be broken down into three sub-orders, one for each line of business. Each of the sub-orders will be fulfilled separately in its own provisioning systems. However, there may be dependencies in each sub-order; e.g., an Internet sub-order can be fulfilled only when the landline has been successfully installed, provisioned and activated at the customer premises.
Order orchestration is an application used by telcos to manage, process and handle customer orders across multiple order-capture and fulfillment systems. It aggregates data from assorted order-capture and order-fulfillment systems and delivers an all-inclusive platform for customer order management. It has seen wide adoption in recent years because of its accurate order information and low order fulfillment costs, which reduce manual processing and speed up output. Its exception-response-based functioning and proactive monitoring enable it to centralize order data accurately and with ease.
Order fallout, also known as order failure, refers to the condition in which an order fails during processing. Order fallout occurs in multiple scenarios: a downstream system failure (an internal, non-data-related error); incorrect or missing data, which subsequently fails the order; database failure; or network connectivity errors. Order validation can also fail, in which case the system marks a corrupted order received from an external system as failed. Another fallout condition is a run-time failure, in which an order is prevented from being processed due to an unresolved dependency. Order fallout management resolves order failures through detection, notification and recovery, helping orders to be processed sustainably and precisely.
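As an illustration of decomposition with dependencies, here is a minimal Python sketch; the line-of-business names and the dependency rule mirror the triple-play example above and are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class SubOrder:
    line_of_business: str                      # e.g. "landline", "internet"
    depends_on: list = field(default_factory=list)

    def ready(self, completed: set) -> bool:
        """A sub-order may be provisioned once its dependencies are done."""
        return all(d.line_of_business in completed for d in self.depends_on)

# Decompose a triple-play sales order into three sub-orders; the internet
# sub-order may only be provisioned once the landline is active.
landline = SubOrder("landline")
internet = SubOrder("internet", depends_on=[landline])
wireless = SubOrder("wireless")
sales_order = [landline, internet, wireless]

completed = {"landline"}
print([s.line_of_business for s in sales_order if s.ready(completed)])
# ['landline', 'internet', 'wireless'] -- all ready once the landline is done
```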
Order status management
Order management, as the beginning of assurance, is normally associated with OSS, although BSS is often the business driver for fulfillment management and order provisioning.
See also
Business Process Framework (eTOM)
Operations, administration and management (OAM)
References
External links
What is BSS?
Not your parents’ BSS/OSS: A digital stack for operators in the internet economy
Business software
Telecommunications systems | Business support system | Technology | 894 |
39,848,877 | https://en.wikipedia.org/wiki/Quantum%20Hall%20transitions | Quantum Hall transitions are the quantum phase transitions that occur between different robustly quantized electronic phases of the quantum Hall effect. The robust quantization of these electronic phases is due to strong localization of electrons in their disordered, two-dimensional potential. But, at the quantum Hall transition, the electron gas delocalizes as can be observed in the laboratory. This phenomenon is understood in the language of topological field theory. Here, a vacuum angle (or 'theta angle') distinguishes between topologically different sectors in the vacuum. These topological sectors correspond to the robustly quantized phases. The quantum Hall transitions can then be understood by looking at the topological excitations (instantons) that occur between those phases.
Historical perspective
Just after the first measurements of the quantum Hall effect in 1980, physicists wondered how the strongly localized electrons in the disordered potential were able to delocalize at their phase transitions. At that time, the field theory of Anderson localization did not yet include a topological angle and hence it predicted that, "for any given amount of disorder, all states in two dimensions are localized", a result that was irreconcilable with the observations of delocalization. Without knowing the solution to this problem, physicists resorted to a semi-classical picture of localized electrons that, given a certain energy, were able to percolate through the disorder. This percolation mechanism was what was assumed to delocalize the electrons.
As a result of this semi-classical idea, many numerical computations were done based on the percolation picture. On top of the classical percolation phase transition, quantum tunneling was included in computer simulations to calculate the critical exponent of the "semi-classical percolation phase transition". To compare this result with the measured critical exponent, the Fermi-liquid approximation was used, in which the Coulomb interactions between electrons are assumed to be finite. Under this assumption, the ground state of the free electron gas can be adiabatically transformed into the ground state of the interacting system, and this gives rise to an inelastic scattering length so that the canonical correlation length exponent can be compared to the measured critical exponent.
But, at the quantum phase transition, the localization lengths of the electrons become infinite (i.e. they delocalize) and this compromises the Fermi-liquid assumption of an inherently free electron gas (where individual electrons must be well-distinguished). The quantum Hall transition will therefore not be in the Fermi-liquid universality class, but in the 'F-invariant' universality class that has a different value for the critical exponent. The semi-classical percolation picture of the quantum Hall transition is therefore outdated (although still widely used), and the delocalization mechanism must instead be understood as an instanton effect.
Disorder in the sample
The random disorder in the potential landscape of the two-dimensional electron gas plays a key role in the observation of topological sectors and their instantons (phase transitions). Because of the disorder, the electrons are localized and thus they cannot flow across the sample. But if we consider a loop around a localized 2D electron, we can notice that current is still able to flow in the direction around this loop. This current is able to renormalize to larger scales and eventually becomes the Hall current that rotates along the edge of the sample. A topological sector corresponds to an integer number of rotations and it is now visible macroscopically, in the robustly quantized behavior of the measurable Hall current. If the electrons were not sufficiently localized, this measurement would be blurred out by the usual flow of current through the sample.
For the subtle observations on phase transitions it is important that the disorder is of the right kind. The random nature of the potential landscape should be apparent on a scale sufficiently smaller than the sample size in order to clearly distinguish the different phases of the system. These phases are only observable by the principle of emergence, so the difference between self-similar scales has to be multiple orders of magnitude for the critical exponent to be well-defined. On the opposite side, when the disorder correlation length is too small, the states are not sufficiently localized to observe them delocalize.
Renormalization group flow diagram
On the basis of the Renormalization Group Theory of the instanton vacuum one can form a general flow diagram where the topological sectors are represented by attractive fixed points. When scaling the effective system to larger sizes, the system generally flows to a stable phase at one of these points and as we can see in the flow diagram on the right, the longitudinal conductivity will vanish and the Hall conductivity takes on a quantized value. If we started with a Hall conductivity that is halfway between two attractive points, we would end up on the phase transition between topological sectors. As long as the symmetry isn't broken, the longitudinal conductivity doesn't vanish and is even able to increase when scaling to a larger system size. In the flow diagram, we see fixed points that are repulsive in the direction of the Hall current and attractive in the direction of the longitudinal current. It is most interesting to approach these fixed saddle points as close as possible and measure the (universal) behavior of the quantum Hall transitions.
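The flow just described can be illustrated numerically. The Python sketch below integrates two-parameter scaling equations of the qualitative weak-coupling-plus-instanton form associated with Khmelnitskii and Pruisken; the instanton prefactor D, the starting point and the step size are illustrative choices, so this is a cartoon of the flow diagram rather than a quantitative calculation:

```python
import numpy as np

def flow(sxx, sxy, D=10.0):
    """Schematic scaling functions for (sigma_xx, sigma_xy) in units of
    e^2/h: a weak-localization term plus an instanton term periodic in
    sigma_xy (coefficients illustrative)."""
    inst = D * sxx**2 * np.exp(-2 * np.pi * sxx)
    dxx = -1.0 / (2 * np.pi**2 * sxx) - inst * np.cos(2 * np.pi * sxy)
    dxy = -inst * np.sin(2 * np.pi * sxy)
    return dxx, dxy

# Euler-step toward larger system sizes; starting above the half-integer
# saddle, sigma_xy drifts toward the quantized value 1 while sigma_xx
# flows toward zero (the weak-coupling formula breaks down there, so we
# simply clamp it).
sxx, sxy = 0.5, 0.7
for _ in range(400):
    dxx, dxy = flow(sxx, sxy)
    sxx = max(sxx + 0.02 * dxx, 1e-3)
    sxy += 0.02 * dxy
print(f"sigma_xx = {sxx:.3f}, sigma_xy = {sxy:.2f}")
```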
Super-universality
If the system is rescaled, the change in conductivity depends only on the distance between a fixed saddle point and the conductivity. The scaling behavior near the quantum Hall transitions is then universal and different quantum Hall samples will give the same scaling results. But, by studying the quantum Hall transitions theoretically, many different systems that are all in different universality classes have been found to share a super-universal fixed point structure. This means that many different systems that are all in different universality classes still share the same fixed point structure. They all have stable topological sectors and also share other super-universal features. That these features are super-universal is due to the fundamental nature of the vacuum angle that governs the scaling behavior of the systems. The topological vacuum angle can be constructed in any quantum field theory but only under the right circumstances can its features be observed. The vacuum angle also appears in quantum chromodynamics and might have been important in the formation of the early universe.
See also
Quantum Hall effect
Anderson localization
Fermi-liquid theory
Instantons
Universality (dynamical systems)
References
Hall effect
Phase transitions | Quantum Hall transitions | Physics,Chemistry,Materials_science | 1,310 |
24,321,692 | https://en.wikipedia.org/wiki/World%20Design%20Capital | The World Design Capital (WDC) programme, designated every two years by the World Design Organization (WDO), recognizes cities for their effective use of design to drive economic, social, cultural, and environmental development. Through a year-long programme of events, the designated city showcases best practices in sustainable design-led urban policy and innovation that improve quality of life.
World Design Capitals by year
References
Design institutions
Industrial design
Capitals
Design events | World Design Capital | Engineering | 89 |
211,960 | https://en.wikipedia.org/wiki/Reinforcement | In behavioral psychology, reinforcement refers to consequences that increase the likelihood of an organism's future behavior, typically in the presence of a particular antecedent stimulus. For example, a rat can be trained to push a lever to receive food whenever a light is turned on; in this example, the light is the antecedent stimulus, the lever pushing is the operant behavior, and the food is the reinforcer. Likewise, a student that receives attention and praise when answering a teacher's question will be more likely to answer future questions in class; the teacher's question is the antecedent, the student's response is the behavior, and the praise and attention are the reinforcements.
Consequences that lead to appetitive behavior such as subjective "wanting" and "liking" (desire and pleasure) function as rewards or positive reinforcement. There is also negative reinforcement, which involves taking away an undesirable stimulus. An example of negative reinforcement would be taking an aspirin to relieve a headache.
Reinforcement is an important component of operant conditioning and behavior modification. The concept has been applied in a variety of practical areas, including parenting, coaching, therapy, self-help, education, and management.
Terminology
In the behavioral sciences, the terms "positive" and "negative" refer when used in their strict technical sense to the nature of the action performed by the conditioner rather than to the responding operant's evaluation of that action and its consequence(s). "Positive" actions are those that add a factor, be it pleasant or unpleasant, to the environment, whereas "negative" actions are those that remove or withhold from the environment a factor of either type. In turn, the strict sense of "reinforcement" refers only to reward-based conditioning; the introduction of unpleasant factors and the removal or withholding of pleasant factors are instead referred to as "punishment", which when used in its strict sense thus stands in contradistinction to "reinforcement". Thus, "positive reinforcement" refers to the addition of a pleasant factor, "positive punishment" refers to the addition of an unpleasant factor, "negative reinforcement" refers to the removal or withholding of an unpleasant factor, and "negative punishment" refers to the removal or withholding of a pleasant factor.
This usage is at odds with some non-technical usages of the four term combinations, especially in the case of the term "negative reinforcement", which is often used to denote what technical parlance would describe as "positive punishment" in that the non-technical usage interprets "reinforcement" as subsuming both reward and punishment and "negative" as referring to the responding operant's evaluation of the factor being introduced. By contrast, technical parlance would use the term "negative reinforcement" to describe encouragement of a given behavior by creating a scenario in which an unpleasant factor is or will be present but engaging in the behavior results in either escaping from that factor or preventing its occurrence, as in Martin Seligman’s experiment involving dogs learning to avoid electric shocks.
Introduction
B.F. Skinner was a well-known and influential researcher who articulated many of the theoretical constructs of reinforcement and behaviorism. Skinner defined reinforcers according to the change in response strength (response rate) rather than to more subjective criteria, such as what is pleasurable or valuable to someone. Accordingly, activities, foods or items considered pleasant or enjoyable may not necessarily be reinforcing (because they produce no increase in the response preceding them). Stimuli, settings, and activities only fit the definition of reinforcers if the behavior that immediately precedes the potential reinforcer increases in similar situations in the future; for example, a child who receives a cookie when he or she asks for one. If the frequency of "cookie-requesting behavior" increases, the cookie can be seen as reinforcing "cookie-requesting behavior". If, however, "cookie-requesting behavior" does not increase, the cookie cannot be considered reinforcing.
The sole criterion that determines if a stimulus is reinforcing is the change in probability of a behavior after administration of that potential reinforcer. Other theories may focus on additional factors such as whether the person expected a behavior to produce a given outcome, but in the behavioral theory, reinforcement is defined by an increased probability of a response.
The study of reinforcement has produced an enormous body of reproducible experimental results. Reinforcement is the central concept and procedure in special education, applied behavior analysis, and the experimental analysis of behavior and is a core concept in some medical and psychopharmacology models, particularly addiction, dependence, and compulsion.
History
Laboratory research on reinforcement is usually dated from the work of Edward Thorndike, known for his experiments with cats escaping from puzzle boxes. A number of others continued this research, notably B.F. Skinner, who published his seminal work on the topic in The Behavior of Organisms (1938) and elaborated this research in many subsequent publications. Notably, Skinner argued that positive reinforcement is superior to punishment in shaping behavior. Though punishment may seem just the opposite of reinforcement, Skinner claimed that they differ immensely, saying that positive reinforcement results in lasting behavioral modification (long-term) whereas punishment changes behavior only temporarily (short-term) and has many detrimental side-effects.
A great many researchers subsequently expanded our understanding of reinforcement and challenged some of Skinner's conclusions. For example, Azrin and Holz defined punishment as a "consequence of behavior that reduces the future probability of that behavior," and some studies have shown that positive reinforcement and punishment are equally effective in modifying behavior. Research on the effects of positive reinforcement, negative reinforcement and punishment continues today, as those concepts are fundamental to learning theory and apply to many practical applications of that theory.
Operant conditioning
The term operant conditioning was introduced by Skinner to indicate that in his experimental paradigm, the organism is free to operate on the environment. In this paradigm, the experimenter cannot trigger the desirable response; the experimenter waits for the response to occur (to be emitted by the organism) and then a potential reinforcer is delivered. In the classical conditioning paradigm, the experimenter triggers (elicits) the desirable response by presenting a reflex eliciting stimulus, the unconditional stimulus (UCS), which they pair (precede) with a neutral stimulus, the conditional stimulus (CS).
Reinforcement is a basic term in operant conditioning. For the punishment aspect of operant conditioning, see punishment (psychology).
Positive reinforcement
Positive reinforcement occurs when a desirable event or stimulus is presented as a consequence of a behavior and the chance that this behavior will manifest in similar environments increases. For example, if reading a book is fun, then experiencing the fun positively reinforces the behavior of reading fun books. The person who receives the positive reinforcement (i.e., who has fun reading the book) will read more books to have more fun.
The high probability instruction (HPI) treatment is a behaviorist treatment based on the idea of positive reinforcement.
Negative reinforcement
Negative reinforcement increases the rate of a behavior that avoids or escapes an aversive situation or stimulus. That is, something unpleasant is already happening, and the behavior helps the person avoid or escape the unpleasantness. In contrast to positive reinforcement, which involves adding a pleasant stimulus, in negative reinforcement, the focus is on the removal of an unpleasant situation or stimulus. For example, if someone feels unhappy, then they might engage in a behavior (e.g., reading books) to escape from the aversive situation (e.g., their unhappy feelings). The success of that avoidant or escapist behavior in removing the unpleasant situation or stimulus reinforces the behavior.
Doing something unpleasant to people to prevent or remove a behavior from happening again is punishment, not negative reinforcement. The main difference is that reinforcement always increases the likelihood of a behavior (e.g., channel surfing while bored temporarily alleviated boredom; therefore, there will be more channel surfing while bored), whereas punishment decreases it (e.g., hangovers are an unpleasant stimulus, so people learn to avoid the behavior that led to that unpleasant stimulus).
Extinction
Extinction occurs when a given behavior is ignored (i.e. followed up with no consequence). Behaviors disappear over time when they continuously receive no reinforcement. During a deliberate extinction, the targeted behavior spikes first (in an attempt to produce the expected, previously reinforced effects), and then declines over time. Neither reinforcement nor extinction need to be deliberate in order to have an effect on a subject's behavior. For example, if a child reads books because they are fun, then the parents' decision to ignore the book reading will not remove the positive reinforcement (i.e., fun) the child receives from reading books. However, if a child engages in a behavior to get attention from the parents, then the parents' decision to ignore the behavior will cause the behavior to go extinct, and the child will find a different behavior to get their parents' attention.
Reinforcement versus punishment
Reinforcers serve to increase behaviors whereas punishers serve to decrease behaviors; thus, positive reinforcers are stimuli that the subject will work to attain, and negative reinforcers are stimuli that the subject will work to be rid of or to end. The four combinations of adding or subtracting a stimulus (pleasant or aversive) in relation to reinforcement vs. punishment can be summarized as follows:
Positive reinforcement – a pleasant stimulus is added; the behavior increases.
Negative reinforcement – an aversive stimulus is removed or withheld; the behavior increases.
Positive punishment – an aversive stimulus is added; the behavior decreases.
Negative punishment – a pleasant stimulus is removed or withheld; the behavior decreases.
Further ideas and concepts
Distinguishing between positive and negative reinforcement can be difficult and may not always be necessary. Focusing on what is being removed or added and how it affects behavior can be more helpful.
An event that punishes behavior for some may reinforce behavior for others.
Some reinforcement can include both positive and negative features, such as a drug addict taking drugs for the added euphoria (positive reinforcement) and also to eliminate withdrawal symptoms (negative reinforcement).
Reinforcement in the business world is essential in driving productivity. Employees are constantly motivated by the ability to receive a positive stimulus, such as a promotion or a bonus. Employees are also driven by negative reinforcement, such as by eliminating unpleasant tasks.
Though negative reinforcement has a positive effect in the short term for a workplace (i.e. encourages a financially beneficial action), over-reliance on negative reinforcement hinders the ability of workers to act in a creative, engaged way that creates growth in the long term.
Primary and secondary reinforcers
A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing with a different stimulus in order to function as a reinforcer and most likely has obtained this function through evolution and its role in species' survival. Examples of primary reinforcers include food, water, and sex. Some primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another avoids it. Or one person may eat much food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them.
A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus that functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money).
When trying to distinguish primary and secondary reinforcers in human examples, use the "caveman test." If the stimulus is something that a caveman would naturally find desirable (e.g. candy) then it is a primary reinforcer. If, on the other hand, the caveman would not react to it (e.g. a dollar bill), it is a secondary reinforcer. As with primary reinforcers, an organism can experience satisfaction and deprivation with secondary reinforcers.
Other reinforcement terms
A generalized reinforcer is a conditioned reinforcer that has obtained the reinforcing function by pairing with many other reinforcers and functions as a reinforcer under a wide variety of motivating operations. (One example of this is money because it is paired with many other reinforcers).
In reinforcer sampling, a potentially reinforcing but unfamiliar stimulus is presented to an organism without regard to any prior behavior.
Socially-mediated reinforcement involves the delivery of reinforcement that requires the behavior of another organism. For example, another person is providing the reinforcement.
The Premack principle is a special case of reinforcement elaborated by David Premack, which states that a highly preferred activity can be used effectively as a reinforcer for a less-preferred activity.
Reinforcement hierarchy is a list of actions, rank-ordering the most desirable to least desirable consequences that may serve as a reinforcer. A reinforcement hierarchy can be used to determine the relative frequency and desirability of different activities, and is often employed when applying the Premack principle.
Contingent outcomes are more likely to reinforce behavior than non-contingent responses. Contingent outcomes are those directly linked to a causal behavior, such as a light turning on being contingent on flipping a switch. Note that contingent outcomes are not necessary to demonstrate reinforcement, but perceived contingency may increase learning.
Contiguous stimuli are stimuli closely associated by time and space with specific behaviors. They reduce the amount of time needed to learn a behavior while increasing its resistance to extinction. Giving a dog a piece of food immediately after sitting is more contiguous with (and therefore more likely to reinforce) the behavior than a several minute delay in food delivery following the behavior.
Noncontingent reinforcement refers to response-independent delivery of stimuli identified as reinforcers for some behaviors of that organism. However, this typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which decreases the rate of the target behavior. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".
Natural and artificial reinforcement
In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed classifying reinforcement into events that increase the frequency of an operant behavior as a natural consequence of the behavior itself, and events that affect frequency by their requirement of human mediation, such as in a token economy where subjects are rewarded for certain behavior by the therapist.
In 1970, Baer and Wolf developed the concept of "behavioral traps." A behavioral trap requires only a simple response to enter, yet once entered, it reliably produces general behavior change. It is the use of a behavioral trap that increases a person's repertoire, by exposing them to the naturally occurring reinforcement of that behavior. Behavioral traps have four characteristics:
They are "baited" with desirable reinforcers that "lure" the student into the trap.
Only a low-effort response already in the repertoire is necessary to enter the trap.
Interrelated contingencies of reinforcement inside the trap motivate the person to acquire, extend, and maintain targeted skills.
They can remain effective for long periods of time because the person shows few, if any, satiation effects.
Thus, artificial reinforcement can be used to build or develop generalizable skills, eventually transitioning to naturally occurring reinforcement to maintain or increase the behavior. Another example is a social situation that will generally result from a specific behavior once it has met a certain criterion.
Intermittent reinforcement schedules
Behavior is not always reinforced every time it is emitted, and the pattern of reinforcement strongly affects how fast an operant response is learned, what its rate is at any given time, and how long it continues when reinforcement ceases. The simplest rules controlling reinforcement are continuous reinforcement, where every response is reinforced, and extinction, where no response is reinforced. Between these extremes, more complex schedules of reinforcement specify the rules that determine how and when a response will be followed by a reinforcer.
Specific schedules of reinforcement reliably induce specific patterns of response, and these rules apply across many different species. The varying consistency and predictability of reinforcement is an important influence on how the different schedules operate. Many simple and complex schedules were investigated at great length by B.F. Skinner using pigeons.
Simple schedules
Ratio schedule – the reinforcement depends only on the number of responses the organism has performed.
Continuous reinforcement (CRF) – a schedule of reinforcement in which every occurrence of the instrumental response (desired response) is followed by the reinforcer.
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response; a minimal simulation sketch follows the list below.
Fixed ratio (FR) – schedules deliver reinforcement after every nth response. An FR 1 schedule is synonymous with a CRF schedule.
Variable ratio schedule (VR) – reinforced on average every nth response, but not always on the nth response.
Fixed interval (FI) – reinforced after n amount of time.
Variable interval (VI) – reinforced on an average of n amount of time, but not always exactly n amount of time.
Fixed time (FT) – Provides a reinforcing stimulus at a fixed time since the last reinforcement delivery, regardless of whether the subject has responded or not. In other words, it is a non-contingent schedule.
Variable time (VT) – Provides reinforcement at an average variable time since last reinforcement, regardless of whether the subject has responded or not.
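The simple schedule rules above lend themselves to direct simulation. The following Python sketch is purely illustrative (the class names, parameter values, and the one-response-per-second assumption are this example's own, not drawn from the cited literature); it implements fixed-ratio, variable-ratio, and fixed-interval rules and counts the reinforcers a steady responder would earn:

```python
import random

class FixedRatio:
    """FR n: reinforce every nth response."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self, t):        # t is ignored; ratio rules count responses
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """VR n: reinforce on average every nth response, but not always the nth."""
    def __init__(self, n):
        self.n = n
        self._draw()
    def _draw(self):
        self.required = random.randint(1, 2 * self.n - 1)  # mean of n
        self.count = 0
    def respond(self, t):
        self.count += 1
        if self.count >= self.required:
            self._draw()
            return True
        return False

class FixedInterval:
    """FI n: reinforce the first response after n seconds have elapsed."""
    def __init__(self, n):
        self.n, self.last = n, 0.0
    def respond(self, t):
        if t - self.last >= self.n:
            self.last = t
            return True
        return False

# One response per second for 600 simulated seconds.
for schedule in (FixedRatio(10), VariableRatio(10), FixedInterval(10)):
    earned = sum(schedule.respond(t) for t in range(600))
    print(type(schedule).__name__, earned)
```

Under this steady-responding assumption all three schedules pay off at roughly the same rate; the behavioral differences listed under "Effects of different types of simple schedules" below emerge only when the organism's response rate itself varies with the schedule.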
Simple schedules are utilized in many differential reinforcement procedures:
Differential reinforcement of alternative behavior (DRA) - A conditioning procedure in which an undesired response is decreased by placing it on extinction or, less commonly, providing contingent punishment, while simultaneously providing reinforcement contingent on a desirable response. An example would be a teacher attending to a student only when they raise their hand, while ignoring the student when he or she calls out.
Differential reinforcement of other behavior (DRO) – Also known as omission training procedures, an instrumental conditioning procedure in which a positive reinforcer is periodically delivered only if the participant does something other than the target response. An example would be reinforcing any hand action other than nose picking.
Differential reinforcement of incompatible behavior (DRI) – Used to reduce a frequent behavior without punishing it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
Differential reinforcement of low response rate (DRL) – Used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time required between responses.
Differential reinforcement of high rate (DRH) – Used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses are required in the interval in order to receive reinforcement.
Effects of different types of simple schedules
Fixed ratio: activity slows after reinforcer is delivered, then response rates increase until the next reinforcer delivery (post-reinforcement pause).
Variable ratio: rapid, steady rate of responding; most resistant to extinction.
Fixed interval: responding increases towards the end of the interval; poor resistance to extinction.
Variable interval: steady activity results, good resistance to extinction.
Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (for example, the behavior of gamblers at slot machines).
Fixed schedules produce "post-reinforcement pauses" (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.
The PRP of a fixed interval schedule is frequently followed by a "scallop-shaped" accelerating rate of response, while fixed ratio schedules produce a more "angular" response.
Fixed interval scallop: the pattern of responding that develops with a fixed interval reinforcement schedule; performance on a fixed interval reflects the subject's accuracy in telling time.
Organisms whose schedules of reinforcement are "thinned" (that is, requiring more responses or a greater wait before reinforcement) may experience "ratio strain" if thinned too quickly. This produces behavior similar to that seen during extinction.
Ratio strain: the disruption of responding that occurs when a fixed ratio response requirement is increased too rapidly.
Ratio run: the high and steady rate of responding that completes each ratio requirement. Usually a higher ratio requirement causes longer post-reinforcement pauses to occur.
Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules.
Ratio schedules are more resistant than interval schedules and variable schedules more resistant than fixed ones.
Momentary changes in reinforcement value lead to dynamic changes in behavior.
Compound schedules
Compound schedules combine two or more different simple schedules in some way using the same reinforcer for the same behavior. There are many possibilities; among those most often used are:
Alternative schedules – A type of compound schedule where two or more simple schedules are in effect and whichever schedule is completed first results in reinforcement.
Conjunctive schedules – A complex schedule of reinforcement where two or more simple schedules are in effect independently of each other, and requirements on all of the simple schedules must be met for reinforcement.
Multiple schedules – Two or more schedules alternate over time, with a stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
Mixed schedules – Either of two, or more, schedules may occur with no stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
Concurrent schedules – A complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Organisms are free to change back and forth between the response alternatives at any time.
Concurrent-chain schedule of reinforcement – A complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial.
Interlocking schedules – A single schedule with two components where progress in one component affects progress in the other component. In an interlocking FR 60 FI 120-s schedule, for example, each response subtracts time from the interval component such that each response is "equal" to removing two seconds from the FI schedule.
Chained schedules – Reinforcement occurs after two or more successive schedules have been completed, with a stimulus indicating when one schedule has been completed and the next has started.
Tandem schedules – Reinforcement occurs when two or more successive schedule requirements have been completed, with no stimulus indicating when a schedule has been completed and the next has started.
Higher-order schedules – completion of one schedule is reinforced according to a second schedule; e.g. in FR2 (FI10 secs), two successive fixed interval schedules require completion before a response is reinforced.
Superimposed schedules
The psychology term superimposed schedules of reinforcement refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. Reinforcers can be positive, negative, or both. An example is a person who comes home after a long day at work. The behavior of opening the front door is rewarded by a big kiss on the lips by the person's spouse and a rip in the pants from the family dog jumping enthusiastically. Another example of superimposed schedules of reinforcement is a pigeon in an experimental cage pecking at a button. The pecks deliver a hopper of grain every 20th peck, and access to water after every 200 pecks.
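To make the pigeon example concrete, here is a minimal Python sketch (an illustration constructed for this passage, not code from the literature): a single peck counter feeds two superimposed ratio rules at once, so one response stream earns both grain and water.

```python
def superimposed_pecks(total_pecks, grain_every=20, water_every=200):
    """Check every peck against both ratio schedules simultaneously."""
    grain = water = 0
    for peck in range(1, total_pecks + 1):
        if peck % grain_every == 0:
            grain += 1          # grain hopper on every 20th peck
        if peck % water_every == 0:
            water += 1          # water access on every 200th peck
    return grain, water

print(superimposed_pecks(1000))  # -> (50, 5); every 200th peck pays both
```

Because both consequences hang on the same response, this is an "and" structure, in contrast to the "or" structure of concurrent schedules described below.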
Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B.F. Skinner and his colleagues (Skinner and Ferster, 1957). They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food appears. This is a "ratio schedule". Also, a reinforcer could be delivered after an interval of time passed following a target behavior. An example is a rat that is given a food pellet immediately following the first response that occurs after two minutes has elapsed since the last lever press. This is called an "interval schedule".
In addition, ratio schedules can deliver reinforcement following fixed or variable number of behaviors by the individual organism. Likewise, interval schedules can deliver reinforcement following fixed or variable intervals of time following a single response by the organism. Individual behaviors tend to generate response rates that differ based upon how the reinforcement schedule is created. Much subsequent research in many labs examined the effects on behaviors of scheduling reinforcers.
If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement". Brechner (1974, 1977) introduced the concept of superimposed schedules of reinforcement in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other. In other words, a single response or group of responses by an organism led to multiple consequences. Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems.
Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company. That is a reinforcement structure of three superimposed concurrent schedules of reinforcement.
Superimposed schedules of reinforcement can create the three classic conflict situations (approach–approach conflict, approach–avoidance conflict, and avoidance–avoidance conflict) described by Kurt Lewin (1935) and can operationalize other Lewinian situations analyzed by his force field analysis. Other examples of the use of superimposed schedules of reinforcement as an analytical tool are its application to the contingencies of rent control (Brechner, 2003) and problem of toxic waste dumping in the Los Angeles County storm drain system (Brechner, 2010).
Concurrent schedules
In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, in a two-alternative forced choice task, a pigeon in a Skinner box is faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either. The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may be linked so that behavior on one key affects the likelihood of reinforcement on the other.
It is not necessary for responses on the two schedules to be physically distinct. In an alternate way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject can respond on a second key to change between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g., the color of the main key) signals which schedule is in effect.
Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it.
When both the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates in the two schedules and the relative reinforcement rates they deliver; this was first observed by R.J. Herrnstein in 1961. Matching law is a rule for instrumental behavior which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response (rate of behavior = rate of reinforcement). Animals and humans have a tendency to prefer choice in schedules.
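In its simplest form, the matching law stated above is conventionally written as the following equation (standard notation, where $B_1$ and $B_2$ are the response rates on the two alternatives and $R_1$ and $R_2$ are the reinforcement rates obtained from them):

$$\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}$$

For example, if one key yields three times the reinforcement rate of the other, the subject will tend to allocate about three quarters of its responses to that key.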
Shaping
Shaping is the reinforcement of successive approximations to a desired instrumental response. In training a rat to press a lever, for example, simply turning toward the lever is reinforced at first. Then, only turning and stepping toward it is reinforced. Eventually the rat will be reinforced for pressing the lever. The successful attainment of one behavior starts the shaping process for the next. As training progresses, the response becomes progressively more like the desired behavior, with each subsequent behavior becoming a closer approximation of the final behavior.
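The logic of successive approximation can be sketched as a simple loop (a toy model with made-up numbers and an arbitrary criterion rule, offered only to illustrate the procedure described above): only responses that meet the current criterion are reinforced, and each reinforced response moves the criterion closer to the target.

```python
import random

def shape(target=100.0, criterion=10.0, step=5.0, trials=200):
    """Reinforce successive approximations toward a target response level."""
    behavior = 0.0
    for _ in range(trials):
        response = behavior + random.uniform(-5, 15)   # responding varies
        if response >= criterion:                      # approximation met
            behavior = response                        # reinforced level sticks
            criterion = min(target, criterion + step)  # demand a closer match
    return behavior

print(round(shape(), 1))  # typically climbs toward the target over trials
```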
The intervention of shaping is used in many training situations, and also for individuals with autism as well as other developmental disabilities. When shaping is combined with other evidence-based practices such as Functional Communication Training (FCT), it can yield positive outcomes for human behavior. Shaping typically uses continuous reinforcement, but the response can later be shifted to an intermittent reinforcement schedule.
Shaping is also used to treat food refusal, in which an individual has a partial or total aversion to food items. This can range from mild pickiness to aversions severe enough to affect the individual's health. Shaping has achieved high success rates in establishing food acceptance.
Chaining
Chaining involves linking discrete behaviors together in a series, such that the consequence of each behavior is both the reinforcement for the previous behavior, and the antecedent stimulus for the next behavior. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backwards chaining (starting from the last behavior) and total task chaining (teaching each behavior in the chain simultaneously). People's morning routines are a typical chain, with a series of behaviors (e.g. showering, drying off, getting dressed) occurring in sequence as a well learned habit.
Challenging behaviors seen in individuals with autism and other related disabilities have been successfully managed and maintained in studies using a schedule of chained reinforcements. Functional communication training is an intervention that often uses chained schedules of reinforcement to effectively promote the appropriate and desired functional communication response.
Mathematical models
There has been research on building a mathematical model of reinforcement. This model is known as MPR, which is short for mathematical principles of reinforcement. Peter Killeen has made key discoveries in the field with his research on pigeons.
Applications
Reinforcement and punishment are ubiquitous in human social interactions, and a great many applications of operant principles have been suggested and implemented. Following are a few examples.
Addiction and dependence
Positive and negative reinforcement play central roles in the development and maintenance of addiction and drug dependence. An addictive drug is intrinsically rewarding; that is, it functions as a primary positive reinforcer of drug use. The brain's reward system assigns it incentive salience (i.e., it is "wanted" or "desired"), so as an addiction develops, deprivation of the drug leads to craving. In addition, stimuli associated with drug use – e.g., the sight of a syringe, and the location of use – become associated with the intense reinforcement induced by the drug. These previously neutral stimuli acquire several properties: their appearance can induce craving, and they can become conditioned positive reinforcers of continued use. Thus, if an addicted individual encounters one of these drug cues, a craving for the associated drug may reappear. For example, anti-drug agencies previously used posters with images of drug paraphernalia as an attempt to show the dangers of drug use. However, such posters are no longer used because of the effects of incentive salience in causing relapse upon sight of the stimuli illustrated in the posters.
In drug dependent individuals, negative reinforcement occurs when a drug is self-administered in order to alleviate or "escape" the symptoms of physical dependence (e.g., tremors and sweating) and/or psychological dependence (e.g., anhedonia, restlessness, irritability, and anxiety) that arise during the state of drug withdrawal.
Animal training
Animal trainers and pet owners were applying the principles and practices of operant conditioning long before these ideas were named and studied, and animal training still provides one of the clearest and most convincing examples of operant control. Of the concepts and procedures described in this article, a few of the most salient are: availability of immediate reinforcement (e.g. the ever-present bag of dog yummies); contingency, assuring that reinforcement follows the desired behavior and not something else; the use of secondary reinforcement, as in sounding a clicker immediately after a desired response; shaping, as in gradually getting a dog to jump higher and higher; intermittent reinforcement, reducing the frequency of those yummies to induce persistent behavior without satiation; chaining, where a complex behavior is gradually put together.
Child behavior – parent management training
Providing positive reinforcement for appropriate child behaviors is a major focus of parent management training. Typically, parents learn to reward appropriate behavior through social rewards (such as praise, smiles, and hugs) as well as concrete rewards (such as stickers or points towards a larger reward as part of an incentive system created collaboratively with the child). In addition, parents learn to select simple behaviors as an initial focus and reward each of the small steps that their child achieves towards reaching a larger goal (this concept is called "successive approximations") (Forgatch MS, Patterson GR (2010). Parent management training — Oregon model: An intervention for antisocial behavior in children and adolescents. Evidence-based psychotherapies for children and adolescents (2nd ed.), 159–78. New York: Guilford Press.). They may also use indirect rewards, such as progress charts. Providing positive reinforcement in the classroom can be beneficial to student success. When applying positive reinforcement to students, it's crucial to make it individualized to that student's needs. This way, the student understands why they are receiving the praise, they can accept it, and eventually learn to continue the action that was earned by positive reinforcement. For example, using rewards or extra recess time might apply to some students more, whereas others might accept the enforcement by receiving stickers or check marks indicating praise.
Economics
Both psychologists and economists have become interested in applying operant concepts and findings to the behavior of humans in the marketplace. An example is the analysis of consumer demand, as indexed by the amount of a commodity that is purchased. In economics, the degree to which price influences consumption is called "the price elasticity of demand." Certain commodities are more elastic than others; for example, a change in price of certain foods may have a large effect on the amount bought, while gasoline and other essentials may be less affected by price changes. In terms of operant analysis, such effects may be interpreted in terms of motivations of consumers and the relative value of the commodities as reinforcers.
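The price elasticity of demand mentioned here is conventionally defined as the proportional change in quantity purchased divided by the proportional change in price (a standard economics formula rather than anything specific to the operant literature):

$$\varepsilon = \frac{\Delta Q / Q}{\Delta P / P}$$

A commodity is elastic when $|\varepsilon| > 1$, so a small price change produces a large change in the amount bought, and inelastic when $|\varepsilon| < 1$, as with gasoline and other essentials.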
Gambling – variable ratio scheduling
As stated earlier in this article, a variable ratio schedule yields reinforcement after the emission of an unpredictable number of responses. This schedule typically generates rapid, persistent responding. Slot machines pay off on a variable ratio schedule, and they produce just this sort of persistent lever-pulling behavior in gamblers. Because the machines are programmed to pay out less money than they take in, the persistent slot-machine user invariably loses in the long run. Slot machines, and thus variable ratio reinforcement, have often been blamed as a factor underlying gambling addiction.
Praise
The concept of praise as a means of behavioral reinforcement in humans is rooted in B.F. Skinner's model of operant conditioning. Through this lens, praise has been viewed as a means of positive reinforcement, wherein an observed behavior is made more likely to occur by contingently praising said behavior. Hundreds of studies have demonstrated the effectiveness of praise in promoting positive behaviors, notably in the study of teacher and parent use of praise on children in promoting improved behavior and academic performance, but also in the study of work performance. Praise has also been demonstrated to reinforce positive behaviors in non-praised adjacent individuals (such as a classmate of the praise recipient) through vicarious reinforcement. Praise may be more or less effective in changing behavior depending on its form, content and delivery. In order for praise to effect positive behavior change, it must be contingent on the positive behavior (i.e., only administered after the targeted behavior is enacted), must specify the particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.
Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and cognitive behavioral interventions have incorporated the use of praise in their protocols. The strategic use of praise is recognized as an evidence-based practice in both classroom management and parenting training interventions, though praise is often subsumed in intervention research into a larger category of positive reinforcement, which includes strategies such as strategic attention and behavioral rewards.
Traumatic bonding
Traumatic bonding occurs as the result of ongoing cycles of abuse in which the intermittent reinforcement of reward and punishment creates powerful emotional bonds that are resistant to change. (Chrissie Sanderson. Counselling Survivors of Domestic Abuse. Jessica Kingsley Publishers; 15 June 2008. p. 84.)
Another source states:
'The necessary conditions for traumatic bonding are that one person must dominate the other and that the level of abuse chronically spikes and then subsides. The relationship is characterized by periods of permissive, compassionate, and even affectionate behavior from the dominant person, punctuated by intermittent episodes of intense abuse. To maintain the upper hand, the victimizer manipulates the behavior of the victim and limits the victim's options so as to perpetuate the power imbalance. Any threat to the balance of dominance and submission may be met with an escalating cycle of punishment ranging from seething intimidation to intensely violent outbursts. The victimizer also isolates the victim from other sources of support, which reduces the likelihood of detection and intervention, impairs the victim's ability to receive countervailing self-referent feedback, and strengthens the sense of unilateral dependency ... The traumatic effects of these abusive relationships may include the impairment of the victim's capacity for accurate self-appraisal, leading to a sense of personal inadequacy and a subordinate sense of dependence upon the dominating person. Victims also may encounter a variety of unpleasant social and legal consequences of their emotional and behavioral affiliation with someone who perpetrated aggressive acts, even if they themselves were the recipients of the aggression.'
Video games
Most video games are designed around some type of compulsion loop, adding a type of positive reinforcement through a variable rate schedule to keep the player playing the game, though this can also lead to video game addiction.
As part of a trend in the monetization of video games in the 2010s, some games offered "loot boxes" as rewards or purchasable by real-world funds that offered a random selection of in-game items, distributed by rarity. The practice has been tied to the same methods that slot machines and other gambling devices dole out rewards, as it follows a variable rate schedule. While loot boxes are widely perceived as a form of gambling, the practice is legally classified as gambling in only a few countries and is otherwise legal. However, methods to use those items as virtual currency for online gambling or trading for real-world money have created a skin gambling market that is under legal evaluation.
Criticisms
The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement, and defines reinforcement as something that increases response strength (i.e., response strength is increased by things that increase response strength). However, the correct usage of reinforcement is that something is a reinforcer because of its effect on behavior, and not the other way around. It becomes circular if one says that a particular stimulus strengthens behavior because it is a reinforcer, and does not explain why a stimulus is producing that effect on the behavior. Other definitions have been proposed, such as F.D. Sheffield's "consummatory behavior contingent on a response", but these are not broadly used in psychology.
Increasingly, understanding of the role reinforcers play is moving away from a "strengthening" effect to a "signalling" effect. That is, the view that reinforcers increase responding because they signal the behaviors that are likely to result in reinforcement. While in most practical applications, the effect of any given reinforcer will be the same regardless of whether the reinforcer is signalling or strengthening, this approach helps to explain a number of behavioral phenomena including patterns of responding on intermittent reinforcement schedules (fixed interval scallops) and the differential outcomes effect.
See also
References
Further reading
External links
An On-Line Positive Reinforcement Tutorial
Scholarpedia Reinforcement
scienceofbehavior.com
Behavior therapy
Behavioral concepts
Behaviorism
Addiction
| Reinforcement | Biology | 8,460
4,694,904 | https://en.wikipedia.org/wiki/Donald%20Mackay%20%28scientist%29 | Donald Mackay (30 October 1936 – 20 October 2023) was a Scottish-born Canadian scientist and engineer specializing in environmental chemistry.
Life and career
Donald Mackay was born on 30 October 1936. He was a member of the faculty of Chemical Engineering and Applied Chemistry at the University of Toronto and the founding director of the Canadian Environmental Modelling Centre at Trent University. He developed several multimedia fugacity models, and stressed that principles of good practice also need to be adopted for chemical assessments, especially in a regulatory context.
In 2004, Mackay was invested as an Officer of the Order of Canada for having "greatly contributed to the quality and our stewardship of the global environment". In 2004, he was also invested into the Order of Ontario for "his outstanding contributions to environmental science".
Mackay died at the Peterborough Regional Health Centre on 20 October 2023, at the age of 86.
References
1936 births
2023 deaths
Canadian chemists
Environmental scientists
Members of the Order of Ontario
Officers of the Order of Canada
Canadian people of Scottish descent
Academic staff of the University of Toronto
Canadian engineers
Environmental engineers
Scientists from Glasgow | Donald Mackay (scientist) | Chemistry,Engineering,Environmental_science | 221 |
40,643,107 | https://en.wikipedia.org/wiki/Stevensine | Stevensine is a bromopyrrole alkaloid originally isolated from an unidentified Micronesian marine sponge, as well as from the known sponge species Pseudaxinyssa cantharella and Axinella corrugata. Total synthesis of stevensine has been achieved by Ying-zi Xu et al., and its biosynthetic origin has been investigated by Paul Andrade et al. Understanding methods to synthesize stevensine and other similar compounds is an important step to accomplish, as marine sponges contain numerous biologically active metabolites that have been shown to function as anything from antitumor to antibacterial agents when tested for medicinal applications. The abundance of bioactive chemicals in marine sponges has been attributed to their sessile nature and the consequent need to produce chemical defenses to ensure survival. However, since many of these compounds naturally occur in small amounts, harvesting the sponges has in the past led to near-extinction of some species.
The bioactive nature of stevensine has been explored both as to its evolutionary purpose and its potential medicinal uses. At its natural concentrations in vivo, stevensine, like other secondary metabolite bromopyrroles from sponges, has been shown to function as an anti-feeding agent against predatory fish such as the bluehead wrasse (Thalassoma bifasciatum). Stevensine is present in marine sponges in concentrations of approximately 19 mg/mL; it has been shown to deter feeding in a laboratory setting in concentrations as low as 2.25 mg/mL, while deterring feeding in the field requires as much as 12 mg/mL. In vitro tests have shown that this compound functions as an antimicrobial agent, giving promise for its use as a potential drug; however, it does not lower the activity of methicillin-resistant Staphylococcus aureus (MRSA), while related compounds isolated from sponges, such as bromoageliferin, do.
References
External links
Halogen-containing alkaloids
Azepines
Bromoarenes
Imidazoles
Pyrroles | Stevensine | Chemistry | 436 |
2,023,896 | https://en.wikipedia.org/wiki/MANIAC%20II | The MANIAC II (Mathematical Analyzer Numerical Integrator and Automatic Computer Model II) was a first-generation electronic computer, built in 1957 for use at Los Alamos Scientific Laboratory.
MANIAC II was built by the University of California and the Los Alamos Scientific Laboratory, completed in 1957 as a successor to MANIAC I. It used 2,850 vacuum tubes and 1,040 semiconductor diodes in the arithmetic unit. Overall it used 5,190 vacuum tubes, 3,050 semiconductor diodes, and 1,160 transistors.
It had 4,096 words of memory in Magnetic-core memory (with 2.4 microsecond access time), supplemented by 12,288 words of memory using Williams tubes (with 15 microsecond access time). The word size was 48 bits. Its average multiplication time was 180 microseconds and the average division time was 300 microseconds.
By the time of its decommissioning, the computer was all solid-state, using a combination of RTL, DTL and TTL. It had an array multiplier, 15 index registers, 16K of 6-microsecond cycle time core memory, and 64K of 2-microsecond cycle time core memory. A NOP instruction took about 2.5 microseconds. A multiplication took 8 microseconds and a division 25 microseconds. It had a paging unit using 1K word pages with an associative 16-deep lookup memory. A 1-megaword CDC drum was hooked up as a paging device. It also had several ADDS Special-Order Direct-View Storage-Tube terminals. These terminals used an extended character set which covered nearly all the mathematical symbols, and allowed for half-line spacing for math formulas.
For I/O, it had two IBM 360 series nine-track and two seven-track 1/2" tape drives. It had an eight-bit paper-tape reader and punch, and a 500 line-per-minute printer (1500 line-per-minute using the hexadecimal character set). Storage was three IBM 7000 series 1301 disk drives, each having two modules of 21.6 million characters apiece.
One of the data products of MANIAC II was the table of numbers appearing in the book The 3-j and 6-j Symbols by Manuel Rotenberg, et al., published in 1959. Page 37 of that book contains a brief description of the implementation of the program on the computer, and the I/O devices used in the production of the book.
See also
List of vacuum-tube computers
MANIAC I
MANIAC III
Further reading
External links
BRL report on MANIAC II
One-of-a-kind computers
Vacuum tube computers
48-bit computers | MANIAC II | Technology | 582 |
37,875,107 | https://en.wikipedia.org/wiki/Lithium%20cyanide | Lithium cyanide is an inorganic compound with the chemical formula LiCN. It is a toxic, white coloured, hygroscopic, water-soluble salt that finds only niche uses.
Preparation
LiCN is produced from the reaction of lithium hydroxide and hydrogen cyanide. A laboratory-scale preparation uses acetone cyanohydrin as a surrogate for HCN:
(CH3)2C(OH)CN + LiH → (CH3)2CO + LiCN + H2
Uses
The compound decomposes to cyanamide and carbon when heated to a temperature close to but below 600 °C. Acids react to give hydrogen cyanide.
Lithium cyanide can be used as a reagent for organic compound cyanation.
RX + LiCN → RCN + LiX
References
Lithium salts
Cyanides
Lithium compounds | Lithium cyanide | Chemistry | 181 |
22,320,247 | https://en.wikipedia.org/wiki/Morphological%20Catalogue%20of%20Galaxies | The Morphological Catalogue of Galaxies (MCG), or Morfologiceskij Katalog Galaktik, is a Russian catalogue of 30,642 galaxies compiled by Boris Vorontsov-Velyaminov and V. P. Arkhipova. It is based on scrutiny of prints of the Palomar Sky Survey plates and is putatively complete to a photographic magnitude of 15. Including galaxies to magnitude 16 would have resulted in an unmanageably large dataset.
Publication
The catalogue was published in five parts (chapters) between 1962 and 1974, the final chapter including a certain number of galaxies with a photographic magnitude above 15.
Gallery
References
Astronomical catalogues of galaxies
Astronomy in the Soviet Union | Morphological Catalogue of Galaxies | Astronomy | 144 |
32,345,395 | https://en.wikipedia.org/wiki/Association%20of%20Plumbing%20and%20Heating%20Contractors | The Association of Plumbing and Heating Contractors (APHC) is a trade association for the plumbing and heating industry in England and Wales, representing around 1500 businesses employing some 60,000 specialist engineers ranging from those employed by large companies to sole traders working in domestic properties.
The APHC represented these specialists on the Specialist Engineering Contractors Group, a member of the Strategic Forum for Construction.
History
The APHC started in 1925 as the National Federation of Plumbers and Domestic Engineers, focused on industrial and commercial aspects of plumbing that had previously been managed by the Institute of Plumbers (today the CIPHE), which remained focused on education, training and technical matters.
It became the National Federation of Plumbers and Domestic Heating Engineers in 1965 to reflect the increased amount of members' work on heating systems. In 1972 it became an association: the National Association of Plumbing, Heating and Mechanical Services Contractors. It adopted its current name in 1996.
External links
APHC website
References
Construction organizations
Building
Engineering organizations | Association of Plumbing and Heating Contractors | Engineering | 197 |
45,655,682 | https://en.wikipedia.org/wiki/Corona%20Borealis%20Supercluster | The Corona Borealis Supercluster is a supercluster located in the constellation Corona Borealis and the most prominent example of its kind in the Northern Celestial Hemisphere. Dense and compact compared with other superclusters, its mass has been calculated to lie somewhere between 0.6 and 12 × 10^16 solar masses (M⊙). It contains the galaxy clusters Abell 2056, Abell 2061, Abell 2065 (the most massive galaxy cluster within the supercluster), Abell 2067, Abell 2079, Abell 2089, and Abell 2092. Of these, Abell 2056, 2061, 2065, 2067 and 2089 are gravitationally bound and in the process of collapsing to form a massive cluster. This entity has an estimated mass of around 1 × 10^16 M⊙. If there is inter-cluster mass present, then Abell 2092 may also be involved. It has been estimated to be 100 megaparsecs (330 million light-years) wide and 40 megaparsecs (130 million light-years) deep. It has a redshift of 0.07, which is equivalent to a distance of around 265.5 megaparsecs (964 million light-years).
Observational history
Astronomers C. Donald Shane and Carl A. Wirtanen were the first to note a concentration or "cloud" of "extragalactic nebulae" in the region during a large-scale survey of extragalactic structures in the sky. George Abell was the first to note the presence of what he called "second-order clusters", namely clusters of clusters in the first publication of his Abell catalogue in 1958.
Postman and colleagues were the first to study the supercluster in detail in 1988, calculating it to have a mass of 8.2 × 10^15 solar masses and to contain the Abell clusters Abell 2061, Abell 2065, Abell 2067, Abell 2079, Abell 2089, and Abell 2092. Abell 2124 lies 33 megaparsecs (110 million light-years) from the centre of the supercluster and has been considered part of the group by some authors.
Abell 2069 lies close by but is more distant, with a line-of-sight association only.
See also
Abell catalog
Large scale structure of the universe
List of Abell clusters
List of superclusters
References
Galaxy superclusters
Corona Borealis | Corona Borealis Supercluster | Astronomy | 513 |
26,014,321 | https://en.wikipedia.org/wiki/Tolerance%20relation | In universal algebra and lattice theory, a tolerance relation on an algebraic structure is a reflexive symmetric relation that is compatible with all operations of the structure. Thus a tolerance is like a congruence, except that the assumption of transitivity is dropped. On a set, an algebraic structure with an empty family of operations, tolerance relations are simply reflexive symmetric relations. A set that possesses a tolerance relation can be described as a tolerance space. Tolerance relations provide a convenient general tool for studying indiscernibility/indistinguishability phenomena. The importance of these concepts for mathematics was first recognized by Poincaré.
Definitions
A tolerance relation on an algebraic structure $(A,F)$ is usually defined to be a reflexive symmetric relation on $A$ that is compatible with every operation in $F$. A tolerance relation can also be seen as a cover of $A$ that satisfies certain conditions. The two definitions are equivalent, since for a fixed algebraic structure, the tolerance relations in the two definitions are in one-to-one correspondence. The tolerance relations on an algebraic structure $(A,F)$ form an algebraic lattice $\operatorname{Tolr}(A)$ under inclusion. Since every congruence relation is a tolerance relation, the congruence lattice $\operatorname{Cong}(A)$ is a subset of the tolerance lattice $\operatorname{Tolr}(A)$, but $\operatorname{Cong}(A)$ is not necessarily a sublattice of $\operatorname{Tolr}(A)$.
As binary relations
A tolerance relation on an algebraic structure $(A,F)$ is a binary relation $\sim$ on $A$ that satisfies the following conditions.
(Reflexivity) $a \sim a$ for all $a \in A$
(Symmetry) if $a \sim b$ then $b \sim a$ for all $a, b \in A$
(Compatibility) for each $n$-ary operation $f \in F$ and $a_1, \dots, a_n, b_1, \dots, b_n \in A$, if $a_i \sim b_i$ for each $i = 1, \dots, n$ then $f(a_1, \dots, a_n) \sim f(b_1, \dots, b_n)$. That is, the set $\{(a,b) \in A^2 : a \sim b\}$ is a subalgebra of the direct product $A^2$ of two copies of $A$.
A congruence relation is a tolerance relation that is also transitive.
As covers
A tolerance relation on an algebraic structure $(A,F)$ is a cover $\mathcal{C}$ of $A$ that satisfies the following three conditions.
For every $C \in \mathcal{C}$ and $\mathcal{S} \subseteq \mathcal{C}$, if $C \subseteq \bigcup \mathcal{S}$, then $\bigcap \mathcal{S} \subseteq C$.
In particular, no two distinct elements of $\mathcal{C}$ are comparable. (To see this, take $\mathcal{S} = \{D\}$.)
For every $S \subseteq A$, if $S$ is not contained in any set in $\mathcal{C}$, then there is a two-element subset $\{s, t\} \subseteq S$ such that $\{s, t\}$ is not contained in any set in $\mathcal{C}$.
For every $n$-ary operation $f \in F$ and $C_1, \dots, C_n \in \mathcal{C}$, there is a $D \in \mathcal{C}$ such that $f(C_1, \dots, C_n) \subseteq D$. (Such a $D$ need not be unique.)
Every partition of $A$ satisfies the first two conditions, but not conversely. A congruence relation is a tolerance relation that also forms a set partition.
Equivalence of the two definitions
Let $\sim$ be a tolerance binary relation on an algebraic structure $(A,F)$. Let $A/{\sim}$ denote the family of maximal subsets $C \subseteq A$ such that $c \sim d$ for every $c, d \in C$. Using graph theoretical terms, $A/{\sim}$ is the set of all maximal cliques of the graph $(A, \sim)$. If $\sim$ is a congruence relation, $A/{\sim}$ is just the quotient set of equivalence classes. Then $A/{\sim}$ is a cover of $A$ and satisfies all the three conditions in the cover definition. (The last condition is shown using Zorn's lemma.) Conversely, let $\mathcal{C}$ be a cover of $A$ that satisfies the three conditions in the cover definition, and consider the binary relation $\sim_{\mathcal{C}}$ on $A$ for which $a \sim_{\mathcal{C}} b$ if and only if $a, b \in C$ for some $C \in \mathcal{C}$. Then $\sim_{\mathcal{C}}$ is a tolerance on $A$ as a binary relation. The map $\sim \mapsto A/{\sim}$ is a one-to-one correspondence between the tolerances as binary relations and as covers, whose inverse is $\mathcal{C} \mapsto \sim_{\mathcal{C}}$. Therefore, the two definitions are equivalent. A tolerance is transitive as a binary relation if and only if it is a partition as a cover. Thus the two characterizations of congruence relations also agree.
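The correspondence just described is easy to compute on a small finite example. The brute-force Python sketch below (the five-element relation is a made-up example) enumerates the maximal cliques of a reflexive symmetric relation; these cliques are exactly the classes of the cover definition, and because the relation is not transitive, the classes overlap rather than partition the set.

```python
from itertools import combinations

A = [0, 1, 2, 3, 4]
# Non-diagonal pairs of a reflexive symmetric relation (a path: 0-1-2-3-4).
pairs = {(0, 1), (1, 2), (2, 3), (3, 4)}
related = lambda x, y: x == y or (x, y) in pairs or (y, x) in pairs

def is_clique(s):
    """True if every two elements of s are related."""
    return all(related(x, y) for x, y in combinations(s, 2))

cliques = [set(s) for r in range(1, len(A) + 1)
           for s in combinations(A, r) if is_clique(s)]
maximal = [c for c in cliques if not any(c < d for d in cliques)]
print(maximal)  # [{0, 1}, {1, 2}, {2, 3}, {3, 4}] -- overlapping classes
```

Note that the element 1 lies in two distinct classes; with a transitive relation (a congruence) the maximal cliques would instead form a partition.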
Quotient algebras over tolerance relations
Let $(A,F)$ be an algebraic structure and let $\sim$ be a tolerance relation on $A$. Suppose that, for each $n$-ary operation $f \in F$ and classes $C_1, \dots, C_n \in A/{\sim}$, there is a unique $D \in A/{\sim}$ such that
$$f(C_1, \dots, C_n) \subseteq D.$$
Then this provides a natural definition of the quotient algebra
$$(A/{\sim},\ F)$$
of $(A,F)$ over $\sim$. In the case of congruence relations, the uniqueness condition always holds true and the quotient algebra defined here coincides with the usual one.
A main difference from congruence relations is that for a tolerance relation the uniqueness condition may fail, and even if it does not, the quotient algebra may not inherit the identities defining the variety $\mathcal{V}$ that $(A,F)$ belongs to, so that the quotient algebra may fail to be a member of the variety again. Therefore, for a variety $\mathcal{V}$ of algebraic structures, we may consider the following two conditions.
(Tolerance factorability) for any $(A,F) \in \mathcal{V}$ and any tolerance relation $\sim$ on $A$, the uniqueness condition is true, so that the quotient algebra $(A/{\sim},\ F)$ is defined.
(Strong tolerance factorability) for any $(A,F) \in \mathcal{V}$ and any tolerance relation $\sim$ on $A$, the uniqueness condition is true and $(A/{\sim},\ F) \in \mathcal{V}$.
Every strongly tolerance factorable variety is tolerance factorable, but not vice versa.
Examples
Sets
A set is an algebraic structure with no operations at all. In this case, tolerance relations are simply reflexive symmetric relations and it is trivial that the variety of sets is strongly tolerance factorable.
Groups
On a group, every tolerance relation is a congruence relation. In particular, this is true for all algebraic structures that are groups when some of their operations are forgotten, e.g. rings, vector spaces, modules, Boolean algebras, etc. Therefore, the varieties of groups, rings, vector spaces, modules and Boolean algebras are also strongly tolerance factorable trivially.
Lattices
For a tolerance relation $\sim$ on a lattice $L$, every set in $L/{\sim}$ is a convex sublattice of $L$. Thus, for all $A \in L/{\sim}$, we have
$A = {\uparrow} A \cap {\downarrow} A.$
In particular, the following results hold.
$a \sim b$ if and only if $a \vee b \sim a \wedge b$.
If $a \sim b$ and $a \leq c \leq d \leq b$, then $c \sim d$.
The variety of lattices is strongly tolerance factorable. That is, given any lattice $L$ and any tolerance relation $\sim$ on $L$, for each $A, B \in L/{\sim}$ there exist unique $C, D \in L/{\sim}$ such that
$\{a \vee b : a \in A,\ b \in B\} \subseteq C \quad \text{and} \quad \{a \wedge b : a \in A,\ b \in B\} \subseteq D,$
and the quotient algebra
$(L/{\sim}, \vee, \wedge)$
is a lattice again.
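As a concrete check (our own toy example, not from the article), take the three-element chain $L = \{0 < 1 < 2\}$ with the tolerance $a \sim b \iff |a - b| \le 1$, whose blocks are $A = \{0, 1\}$ and $B = \{1, 2\}$:

```latex
\[
  \{a \vee b : a \in A,\ b \in B\} = \{1, 2\} \subseteq B ,
  \qquad
  \{a \wedge b : a \in A,\ b \in B\} = \{0, 1\} \subseteq A ,
\]
% so the blocks join and meet inside unique blocks, and
% L/~ is the two-element chain A < B: a lattice again.
```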
In particular, we can form quotient lattices of distributive lattices and modular lattices over tolerance relations. However, unlike in the case of congruence relations, the quotient lattices need not be distributive or modular again. In other words, the varieties of distributive lattices and modular lattices are tolerance factorable, but not strongly tolerance factorable. Actually, every subvariety of the variety of lattices is tolerance factorable, and the only strongly tolerance factorable subvariety other than the variety of lattices itself is the trivial subvariety (consisting of one-element lattices). This is because every lattice is isomorphic to a sublattice of the quotient lattice over a tolerance relation of a sublattice of a direct product of two-element lattices.
See also
Dependency relation
Quasitransitive relation—a generalization to formalize indifference in social choice theory
Rough set
References
Further reading
Gerasin, S. N., Shlyakhov, V. V., and Yakovlev, S. V. 2008. Set coverings and tolerance relations. Cybernetics and Sys. Anal. 44, 3 (May 2008), 333–340.
Hryniewiecki, K. 1991, Relations of Tolerance, FORMALIZED MATHEMATICS, Vol. 2, No. 1, January–February 1991.
Universal algebra
Lattice theory
Reflexive relations
Symmetric relations
Approximations | Tolerance relation | Physics,Mathematics | 1,417 |
53,865,960 | https://en.wikipedia.org/wiki/STEVE | STEVE is an atmospheric optical phenomenon that appears as a purple and green light ribbon in the night sky, named in late 2016 by aurora watchers from Alberta, Canada. The backronym later adopted for the phenomenon is the Strong Thermal Emission Velocity Enhancement. According to analysis of satellite data from the European Space Agency's Swarm mission, the phenomenon is caused by a roughly 25 km-wide ribbon of hot plasma at an altitude of about 300 km, with a temperature of about 3,000 °C and flowing at a speed of about 6 km/s (compared to about 10 m/s outside the ribbon). The phenomenon is not rare, but had not been investigated and described scientifically prior to that time.
Discovery and naming
The STEVE phenomenon has been observed by auroral photographers for decades. Some evidence suggests that STEVE observations may have been recorded as early as 1705. Notations resembling the phenomenon exist in some observations from 1911 to the 1950s by Carl Størmer.
The first accurate determination of the nature of the phenomenon was not made, however, until after members of a Facebook group, Alberta Aurora Chasers, named it, attributed it to a proton aurora, and began calling it a "proton arc". When physics professor Eric Donovan from the University of Calgary saw their photographs and suspected that their determination was incorrect because proton auroras are not visible, he correlated the time and location of the phenomenon with Swarm satellite data, aided by one of the Alberta Aurora Chasers photographers, Song Despins. She provided GPS coordinates from Vimy, Alberta, that helped Donovan link the satellite data to the sightings and identify the phenomenon.
One of the aurora watchers, photographer Chris Ratzlaff, suggested using the name "Steve" for the phenomenon, in reference to Over the Hedge, an animated comedy movie from 2006. The characters in the movie give the name to a hedge that appears overnight, in order to make it seem more benign. Reports of the heretofore undescribed and unusual "aurora" went viral as an example of citizen science on Aurorasaurus.
During the fall meeting of the American Geophysical Union in December 2016, Robert Lysak suggested using a backronym of "Steve" for the phenomenon that would stand for a "Strong Thermal Emission Velocity Enhancement". That acronym, "STEVE", has been adopted by the team at NASA Goddard Space Flight Center that is studying the phenomenon.
Occurrence and cause
Location and timing
STEVE phenomena may be spotted further from the poles than the aurora, and as of March 2018, have been observed in the United Kingdom, Canada, Alaska, northern U.S. states, Australia, New Zealand and Denmark. The phenomenon appears as a very narrow arc extending for hundreds or thousands of kilometers, aligned east–west. It generally lasts for twenty minutes to an hour. As of March 2018, STEVE phenomena have only been spotted in the presence of an aurora. None were observed from October 2016 to February 2017, or from October 2017 to February 2018, leading NASA to believe that STEVE phenomena may only appear during certain seasons. However, STEVE phenomena have since been reported and photographed in South Australia during a geomagnetic storm event on 11 October 2024.
Research into cause
A study published in March 2018 by Elizabeth A. MacDonald and co-authors in the peer-reviewed journal, Science Advances, suggested that the STEVE phenomenon accompanies a subauroral ion drift (SAID), a fast-moving stream of extremely hot particles. STEVE marks the first observed visual effect accompanying a SAID.
In August 2018, researchers determined that the skyglow of the phenomenon was not associated with particle precipitation (electrons or ions) and, as a result, could be generated in the ionosphere.
One proposed mechanism for the glow is that excited nitrogen breaks apart and interacts with oxygen to form glowing nitric oxide.
Association with picket-fence aurora
Often, although not always, a STEVE phenomenon is observed above a green, "picket-fence" aurora according to a study published in Geophysical Research Letters. Although the picket-fence aurora is created through precipitation of electrons, they appear outside the auroral oval and so their formation is different from traditional aurora. The study also showed these phenomena appear in both hemispheres simultaneously. Sightings of picket-fence aurora have been made without observations of STEVE.
The green emissions in the picket fence aurora seem to be related to eddies in the supersonic flow of charged particles, similar to the eddies seen in a river that move more slowly than the water around them. Hence, the green bars in the picket fence are moving more slowly than the structures in the purple emissions and some scientists have speculated they could be caused by turbulence in the charged particles from space.
Research
2017
"How I met Steve" - Eric Donovan's presentation to the 2017 ESA Earth Explorer Missions Science Meeting, March 20, 2017 (1:08:30 - 1:26:00)
"On the location of Steve, the mysterious subauroral feature"
2018
"New Science in Plain Sight: Citizen scientists lead to the discovery of optical structure in the upper atmosphere"
"On the Origin of STEVE: Particle Precipitation or Ionospheric Skyglow?"
"Historical observations of STEVE"
"What else can citizen science and 'amateur' observations reveal about STEVE?"
"From the spark to the fire, reflections on five years of public participation in aurora research"
"On the origin and geomagnetic conditions of STEVE's formation"
"A Statistical Analysis of STEVE"
2019
"How Did We Miss This? An Upper Atmospheric Discovery Named STEVE"
"First Observations From the TREx Spectrograph: The Optical Spectrum of STEVE and the Picket Fence Phenomena"
"Color Ratios of Subauroral (STEVE) Arcs"
"A new dataset of STEVE phenomenon related observations spanning multiple solar cycles"
"Subauroral Green STEVE Arcs: Evidence for Low-Energy Excitation"
"Magnetospheric Signatures of STEVE: Implications for the Magnetospheric Energy Source and Interhemispheric Conjugacy"
"High-Latitude Ionospheric Electrodynamics Characterizing Energy and Momentum Deposition during STEVE Events Reported by Citizen Scientists"
"Steve: The Optical Signature of Intense Subauroral Ion Drifts"
"Optical Spectra and Emission Altitudes of Double-Layer STEVE: A Case Study"
"The Vertical Distribution of the Optical Emissions of a Steve and Picket Fence Event"
"Identifying STEVE's Magnetospheric Driver Using Conjugate Observations in the Magnetosphere and on the Ground"
"STEVE and the Picket Fence: Evidence of Feedback-Unstable Magnetosphere-Ionosphere Interaction"
"Possible Evidence of STEVE in Dynamics Explorer-2 Data"
2020
"Early Ground-Based Work by Auroral Pioneer Carl Størmer on the High-Altitude Detached Subauroral Arcs Now Known as “STEVE”"
"Early Evidence of Isolated Auroral Structures in the 100 km Height Regime Observed at Subauroral Latitudes by the Aurora Pioneer Carl Størmer"
"Early Ground-Based Work by Auroral Pioneer Carl Størmer on the High-Altitude Detached Subauroral Arcs Now Known as “STEVE”"
"Magnetospheric Conditions for STEVE and SAID: Particle Injection, Substorm Surge, and Field-Aligned Currents"
"Neutral Wind Dynamics Preceding the STEVE Occurrence and Their Possible Preconditioning Role in STEVE Formation"
"A Mechanism for the STEVE Continuum Emission"
"High-latitude Ionospheric Electrodynamics during STEVE Events"
"Dynamics of Auroral Precipitation Boundaries Associated With STEVE and SAID"
"The Apparent Motion of STEVE and the Picket Fence Phenomena"
"Characteristics of fragmented aurora-like emissions (FAEs) observed on Svalbard"
"Fragmented Aurora-like Emissions (FAEs) as a new type of aurora-like phenomenon"
2021
"Multi-Wavelength Imaging Observations of STEVE at Athabasca, Canada"
"Registration of synchronous geomagnetic pulsations and proton aurora during the substorm on March 1, 2017"
"First Simultaneous Observation of STEVE and SAR Arc Combining Data From Citizen Scientists, 630.0 nm All-Sky Images, and Satellites"
"Proton Aurora and Optical Emissions in the Subauroral Region"
"Robust techniques to improve high quality triangulations of contemporaneous citizen science observations of STEVE"
"Comparison of the SAR arc, STEVE and Picket fence dynamics registered at the Maimaga subauroral station on March 1, 2017"
"Improved Analysis of STEVE Photographs"
2022
"Rainbow of the Night: First Direct Observation of a SAR arc evolving into STEVE"
"Auroral structures: Revealing the importance of meso-scale M-I coupling"
2023
"It's Not Easy Being Green: Kinetic Modeling of the Emission Spectrum Observed in STEVE's Picket Fence"
"Unsolved problems in Strong Thermal Emission Velocity Enhancement (STEVE) and the picket fence"
See also
Space weather
Thermosphere
Solar prominence
Upper-atmospheric lightning
Unusual types of aurorae
References
External links
Eric Donovan's presentation at 2017 ESA Earth Explorer Missions Science Meeting (1:08:30 - 1:26:00)
Alberta Aurora Chasers
STEVE over Copper Harbor May 5, 2021
Atmospheric optical phenomena
Earth phenomena
Electrical phenomena
Light sources
Plasma phenomena
Planetary science
Space plasmas
Citizen science | STEVE | Physics,Astronomy | 1,861 |
4,157,168 | https://en.wikipedia.org/wiki/Octahemioctahedron | In geometry, the octahemioctahedron or allelotetratetrahedron is a nonconvex uniform polyhedron, indexed as U3. It has 12 faces (8 triangles and 4 hexagons), 24 edges and 12 vertices. Its vertex figure is a crossed quadrilateral.
It is one of nine hemipolyhedra, with 4 hexagonal faces passing through the model center.
Orientability
It is the only hemipolyhedron that is orientable, and the only uniform polyhedron with an Euler characteristic of zero (a topological torus).
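As a quick check (added arithmetic, using the face, edge, and vertex counts stated in the opening paragraph):

```latex
\[
  \chi = V - E + F = 12 - 24 + 12 = 0 ,
\]
% which is the Euler characteristic of the torus (genus 1).
```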
Related polyhedra
It shares the vertex arrangement and edge arrangement with the cuboctahedron (having the triangular faces in common), and with the cubohemioctahedron (having the hexagonal faces in common).
By Wythoff construction it has tetrahedral symmetry (Td), like the rhombitetratetrahedron construction for the cuboctahedron, with alternate triangles with inverted orientations. Without alternating triangles, it has octahedral symmetry (Oh). In this respect it is akin to the Morin surface, which has fourfold symmetry if orientation is ignored and twofold symmetry otherwise. However the octahemioctahedron has a higher degree of symmetry and is genus 1 rather than 0.
Octahemioctacron
The octahemioctacron is the dual of the octahemioctahedron, and is one of nine dual hemipolyhedra. It appears visually indistinct from the hexahemioctacron.
Since the hemipolyhedra have faces passing through the center, the dual figures have corresponding vertices at infinity; properly, on the real projective plane at infinity. In Magnus Wenninger's Dual Models, they are represented with intersecting prisms, each extending in both directions to the same vertex at infinity, in order to maintain symmetry. In practice the model prisms are cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation figures, called stellation to infinity. However, he also suggested that strictly speaking they are not polyhedra because their construction does not conform to the usual definitions.
The octahemioctacron has four vertices at infinity.
See also
Compound of five octahemioctahedra
Hemi-cube - The four vertices at infinity correspond directionally to the four vertices of this abstract polyhedron.
References
(Page 101, Duals of the (nine) hemipolyhedra)
External links
Uniform polyhedra and duals
Toroidal polyhedra | Octahemioctahedron | Mathematics | 552 |
33,313,236 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2088 | In molecular biology, glycoside hydrolase family 88 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 88 CAZY GH_88 includes enzymes with Δ4,5-unsaturated β-glucuronyl hydrolase activity.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 88 | Biology | 193 |
1,245,377 | https://en.wikipedia.org/wiki/5%CE%B1-Reductase | 5α-Reductases, also known as 3-oxo-5α-steroid 4-dehydrogenases, are enzymes involved in steroid metabolism. They participate in three metabolic pathways: bile acid biosynthesis, androgen and estrogen metabolism. There are three isozymes of 5α-reductase encoded by the genes SRD5A1, SRD5A2, and SRD5A3.
5α-Reductases catalyze the following generalized chemical reaction:
a 3-oxo-5α-steroid + acceptor ⇌ a 3-oxo-Δ4-steroid + reduced acceptor
Where a 3-oxo-5α-steroid and acceptor are substrates, and a corresponding 3-oxo-Δ4-steroid and the reduced acceptor are products. An instance of this generalized reaction that 5α-reductase type 2 catalyzes is:
dihydrotestosterone + NADP+ ⇌ testosterone + NADPH + H+
where dihydrotestosterone is the 3-oxo-5α-steroid, NADP+ is the acceptor and testosterone is the 3-oxo-Δ4-steroid and NADPH the reduced acceptor.
Production and activity
The enzyme is produced in many tissues in both males and females, in the reproductive tract, testes and ovaries, skin, seminal vesicles, prostate, epididymis and many organs, including the nervous system. There are three isoenzymes of 5α-reductase: steroid 5α-reductase 1, 2, and 3 (SRD5A1, SRD5A2 and SRD5A3).
5α-Reductases act on 3-oxo (3-keto), Δ4,5 C19/C21 steroids as their substrates; "3-keto" refers to the double bond from the third carbon to oxygen. Carbons 4 and 5 also have a double bond, represented by 'Δ4,5'. The reaction involves a stereospecific and permanent break of the Δ4,5 double bond with the help of NADPH as a cofactor. A hydride anion (H−) is also placed on the α face at the fifth carbon, and a proton on the β face at carbon 4.
Distribution with age
5α-R1 is expressed in fetal scalp and nongenital skin of the back, anywhere from 5 to 50 times less than in the adult. 5α-R2 is expressed in fetal prostates similar to adults. 5α-R1 is expressed mainly in the epithelium and 5α-R2 the stroma of the fetal prostate. Scientists looked for 5α-R2 expression in fetal liver, adrenal, testis, ovary, brain, scalp, chest, and genital skin, using immunoblotting, and were only able to find it in genital skin.
After birth, the 5α-R1 is expressed in more locations, including the liver, skin, scalp and prostate. 5α-R2 is expressed in prostate, seminal vesicles, epididymis, liver, and to a lesser extent the scalp and skin. Hepatic expression of both 5α-R1 and 2 is immediate, but disappears in the skin and scalp at month 18. Then, at puberty, only 5α-R2 is reexpressed in the skin and scalp.
5α-R1 and 5α-R2 appear to be expressed in the prostate in male fetuses and throughout postnatal life. 5α-R1 and 5α-R2 are also expressed, although to different degrees in liver, genital and nongenital skin, prostate, epididymis, seminal vesicle, testis, ovary, uterus, kidney, exocrine pancreas, and the brain.
In adulthood, 5α-R1–3 are ubiquitously expressed.
Substrates
Specific substrates include testosterone, progesterone, androstenedione, epitestosterone, cortisol, aldosterone, and deoxycorticosterone. Outside of dihydrotestosterone, much of the physiological role of 5α-reduced steroids is unknown. Beyond reducing testosterone to dihydrotestosterone, 5α-reductase enzyme isoforms I and II reduce progesterone to dihydroprogesterone (DHP) and deoxycorticosterone to dihydrodeoxycorticosterone (DHDOC). In vitro and animal models suggest subsequent 3α-reduction of DHT, DHP and DHDOC leads to steroid metabolites with effects on cerebral function achieved by enhancing GABAergic inhibition. These neuroactive steroid derivatives enhance GABA via allosteric modulation at GABA(A) receptors and have anticonvulsant, antidepressant and anxiolytic effects, and also alter sexual and alcohol-related behavior. 5α-Dihydrocortisol is present in the aqueous humor of the eye, is synthesized in the lens, and might help make the aqueous humor itself. Allopregnanolone and THDOC are neurosteroids, with the latter having effects on the susceptibility of animals to seizures. In socially isolated mice, 5α-R1 is specifically down-regulated in glutamatergic pyramidal neurons that converge on the amygdala from cortical and hippocampal regions. This down-regulation may account for the appearance of behavioral disorders such as anxiety, aggression, and cognitive dysfunction. 5α-Dihydroaldosterone is a potent antinatriuretic agent, although different from aldosterone. Its formation in the kidney is enhanced by restriction of dietary salt, suggesting it may help retain sodium as follows:
substrate + NADPH + H+ → 5α-substrate + NADP+
5α-DHP is a major hormone in circulation of normal cycling and pregnant women.
Testosterone
5α-Reductase is most known for converting testosterone, the male sex hormone, into the more potent dihydrotestosterone:
The major structural difference between testosterone and dihydrotestosterone is the Δ4,5 double bond on the A (leftmost) ring, which is reduced in dihydrotestosterone.
List of conversions
The following reactions are known to be catalyzed by 5α-reductase:
Cholestenone → 5α-Cholestanone
Progesterone → 5α-Dihydroprogesterone
3α-Dihydroprogesterone → Allopregnanolone
3β-Dihydroprogesterone → Isopregnanolone
Deoxycorticosterone → 5α-Dihydrodeoxycorticosterone
Corticosterone → 5α-Dihydrocorticosterone
Aldosterone → 5α-Dihydroaldosterone
Androstenedione → 5α-Androstanedione
Testosterone → 5α-Dihydrotestosterone
Nandrolone → 5α-Dihydronandrolone
Structure
5α-Reductase is a membrane-bound enzyme that catalyzes the NADPH-dependent reduction of double bonds in steroid substrates to increase potency. The crystal structure of a homolog of 5α-reductase isoenzymes 1 and 2 has been found in Proteobacteria (proteobacterial 5α-reductase). This exists as a monomer with a seven-alpha-helix transmembrane structure housing a hydrophobic pocket that holds the cofactor NADPH and monoolein, which occupies the steroid substrate binding pocket. In insect cells, monoolein is not found and is replaced by other androgens and inhibitors. The integral seven-transmembrane topology is likely conserved across species, with the N terminus in the endoplasmic reticulum lumen and the C terminus facing the cytosol. High conformational dynamics of the cytosolic region likely regulate NADPH/NADP+ exchange. Sequence conservation across known crystal structures has corroborated high conservation in enzyme structure.
Inhibition
The mechanism of 5α reductase inhibition is complex, but involves the binding of NADPH to the enzyme followed by the substrate. 5α-Reductase inhibitor drugs are used in benign prostatic hyperplasia, prostate cancer, pattern hair loss (androgenetic alopecia), and hormone replacement therapy for transgender women.
Inhibition of the enzyme can be classified into two categories: steroidal, which are irreversible, and nonsteroidal. There are more steroidal inhibitors, with examples including finasteride (MK-906), dutasteride (GG745), 4-MA, turosteride, MK-386, MK-434, and MK-963. Researchers have pursued synthesis of nonsteroidals to inhibit 5α-reductase due to the undesired side effects of steroidals. The most potent and selective inhibitors of 5α-R1 are found in this class, and include benzoquinolones, nonsteroidal aryl acids, butanoic acid derivatives, and more recognizably, polyunsaturated fatty acids (especially linolenic acid), zinc, and green tea. Riboflavin was also identified as a 5α-reductase inhibitor.
Additionally, alfatradiol has been claimed to work through this mechanism (5α-reductase inhibition), as have the ganoderic acids in the lingzhi mushroom and saw palmetto extract.
Inhibition of 5α-reductase results in decreased conversion of testosterone to DHT, leading to increased testosterone and estradiol. Other enzymes compensate to a degree for the absent conversion, specifically with local expression at the skin of reductive 17β-hydroxysteroid dehydrogenase, oxidative 3α-hydroxysteroid dehydrogenase, and 3β-hydroxysteroid dehydrogenase enzymes.
Gynecomastia, erectile dysfunction, impaired cognitive function, fatigue, hypoglycemia, impaired liver function, constipation, and depression are only a few of the possible side effects of 5α-reductase inhibition. Long-term side effects that continued even after discontinuation of the drug have been reported.
Finasteride
Finasteride inhibits two 5α-reductase isoenzymes (II and III), while dutasteride inhibits all three. Finasteride potently inhibits 5α-R2 at a mean inhibitory concentration IC50 of 69 nM, but is less effective with 5α-R1 with an IC50 of 360 nM. Finasteride decreases mean serum level of DHT by 71% after 6 months, and was shown in vitro to inhibit 5α-R3 at a similar potency to 5α-R2 in transfected cell lines.
Dutasteride
Dutasteride inhibits 5α-reductase isoenzymes type 1 and 2 better than finasteride, leading to a more complete reduction in DHT at 24 weeks (94.7% versus 70.8%). It also reduces intraprostatic DHT by 97% in men with prostate cancer at 5 mg/day over three months. A second study with 3.5 mg/day for 4 months decreased intraprostatic DHT even further, by 99%. The suppression of DHT in vivo, together with the report that dutasteride inhibits 5α-R3 in vitro, suggests that dutasteride may be a triple 5α-reductase inhibitor.
Congenital deficiencies
5α-Reductase 1
Male mice with an inactivated 5α-reductase type 1 gene have reduced bone mass and forelimb muscle grip strength, which has been proposed to be due to lack of 5α-reductase type 1 expression in bone and muscle. In 5α-reductase type 2-deficient males, the type 1 isoenzyme is thought to be responsible for their virilization at puberty.
5α-Reductase 2
Impaired 5α-reductase 2 activity can result from mutations in the underlying SRD5A2 gene. The condition, known as 5α-reductase 2 deficiency, has a range of presentations as atypical appearances of the external genitalia in males. This is because 5α-reductase 2 catalyzes the transformation of testosterone to the potent androgen dihydrotestosterone, which is required for the proper masculinization of male genitalia.
5α-Reductase 3
When small interfering RNA is used to knock down the expression of the 5α-R3 isozyme in cell lines, there is decreased cell growth and viability, and a decrease in DHT/T ratios. Knockdown delivered by adenovirus vectors has also been shown to reduce testosterone, androstenedione, and progesterone in androgen-stimulated prostate cell lines.
Congenital deficiency of 5α-R3, at the gene SRD5A3, has been linked to a rare, autosomal recessive condition in which patients are born with severe intellectual dysfunction and cerebellar and ocular defects. The presumed deficiency is in the reduction of the terminal bond of polyprenol to dolichol, an important step in N-glycosylation of proteins, which in turn is important for the proper folding of nascent proteins glycosylated at asparagine residues in the endoplasmic reticulum.
Nervous system
Affective disorders
Isolation rearing has been shown to lower protein expression of 5α-reductase isoenzymes 1 and 2 in cortical and subcortical brain regions of rat models. However, the amount of 5α-reduced metabolite remained unaffected. This means isolation rearing likely leads to changes in the expression and activity of 5α-reductase in the brain, leading to dysregulation of dopamine neurotransmission resulting from early chronic stress. Treatment with finasteride, a 5α-reductase inhibitor, has been shown to mimic the effects of SSRIs in causing sexual dysfunction. Research has shown that 5α-reductase is the rate-limiting enzyme in neurosteroid synthesis, specifically in the conversion of progesterone to allopregnanolone; low levels of allopregnanolone have been tied to depression, anxiety, and schizophrenia. Sleep deprivation can enhance 5α-reductase expression and activity in the prefrontal cortex, leading to mania-related symptoms in rats. It is also contested whether the use of 5α-reductase inhibitors is associated with suicidal ideation and depression in patient populations who use them for benign prostatic hyperplasia. These symptoms have been found during active use of inhibitors and in immediate follow-up. However, it is unknown if these symptoms arise naturally from benign prostatic hyperplasia.
Hypothalamic–pituitary–adrenal axis dysfunction
An alternative mechanism of cortisol regulation operates via 5α-reductase, which catalyzes an A-ring reduction of cortisol, metabolizing the compound. Types 1 and 2 of 5α-reductase are the principal enzymes involved in cortisol clearance through the liver. Excess cortisol has been tied to non-alcoholic fatty liver disease (NAFLD), but in vitro studies have found that an overexpression of 5α-reductase type 2 can suppress lipogenesis. The key role of 5α-reductase in cortisol breakdown and fat buildup has elucidated some of the side effects of 5α-reductase inhibitors. In randomized studies on human volunteers, it was found that 5α-reductase inhibition through the use of dutasteride and finasteride can lead to hepatic lipid accumulation in men. In critical illness, overproduction of cortisol as part of a stress response can lead to decreased clearance of cortisol through the liver via 5α-reductase and the kidneys via 11β-hydroxysteroid dehydrogenase type 2; long-term elevation of cortisol can lead to Cushing's syndrome.
Nomenclature
This enzyme belongs to the family of oxidoreductases, to be specific, those acting on the CH-CH group of donor with other acceptors. The systematic name of this enzyme class is 3-oxo-5α-steroid:acceptor Δ4-oxidoreductase. Other names in common use include:
5α-Reductase
3-Oxosteroid Δ4-dehydrogenase
3-Oxo-5α-steroid Δ4-dehydrogenase
Steroid Δ4-5α-reductase
Δ4-3-Keto steroid 5α-reductase
Δ4-3-Oxo steroid reductase
Δ4-3-Ketosteroid-5α-oxidoreductase
Δ4-3-Oxosteroid-5α-reductase
3-Keto-Δ4-steroid-5α-reductase
Testosterone 5α-reductase
4-Ene-3-ketosteroid-5α-oxidoreductase
Δ4-5α-Dehydrogenase
3-Oxo-5α-steroid:(acceptor) Δ4-oxidoreductase
See also
Steroidogenic enzyme
Acne vulgaris
Cholestenone 5α-reductase
Hirsutism
Lower urinary tract symptoms
Polycystic ovarian syndrome
List of steroid metabolism modulators
References
Further reading
External links
EC 1.3.1
Steroid hormone biosynthesis | 5α-Reductase | Chemistry,Biology | 3,786 |
3,710,036 | https://en.wikipedia.org/wiki/Project%201153%20Orel | Project 1153 Orel (pr. "Or'yol", "Eagle") was the Soviet Union's planned aircraft carrier class, developed in the 1970s to give the Soviet Navy a true blue-water aviation capability. The vessel would have had a displacement of about 72,000 tons, a nuclear-powered propulsion system and steam catapults for aircraft launch, similar to the earlier Kitty Hawk-class supercarriers of the U.S. Navy. Unlike them and the preceding Soviet aircraft cruisers, it was also designed with a large offensive capability, its mounts including 20 vertical launch tubes for anti-ship cruise missiles. The Soviets classified it as a "large cruiser with aircraft armament".
Etymology
The project was codenamed Eagle (Орёл); like the two earlier helicopter and aircraft cruiser projects, and several projects of other classes of ships, it was named after a bird of prey. However, the carriers themselves were named after Soviet cities, while only frigates were named after birds (see Russian ship naming conventions); the actual projected name of the ships is not known.
History
The origins of the Orel program date back to the late 1960s, when the Soviet Defense Minister Andrei Grechko sponsored a program of constructing large aircraft-carrying cruisers in response to American aircraft carriers.
The purpose of this project was to strengthen Soviet naval aviation capabilities to allow operations on the high seas. In fact, the only Soviet carriers at the time, the Moskva class, were essentially helicopter carriers, incapable of carrying fixed-wing aircraft, leaving the Soviet fleet practically without air cover during operations away from the coast and severely limiting its operational capabilities.
Project 1160 (codenamed Orel) was planned as the first Soviet aircraft carrier powered by a nuclear reactor. Development began in the early 1970s at the Nevskoye Design Bureau. The project envisaged the construction of three supercarriers with a displacement of 80,000 tons, each capable of carrying about 60 carrier-based aircraft. In addition, the ship was to be fitted with 16 P-700 Granit anti-ship missile vertical launch tubes beneath the flight deck, both for offensive capability and to bypass the Montreux Convention, which forbade aircraft carriers from crossing the Dardanelles Strait. However, in 1973 work on Project 1160 was cancelled for being too expensive.
Project 1153, based on its predecessor Project 1160 but more oriented toward V/STOL aircraft, was developed instead. Compared to Project 1160, it was planned to have a displacement of 8,000 tons less while retaining the nuclear-powered propulsion system and the vertical launch tubes for anti-ship missiles. The ship was given 4 extra launch tubes, and its aircraft capacity was reduced from 60 to 50. It was also planned that two ships would be constructed instead of three, due to insufficient shipyard availability. But in 1976 the main supporter of the project, Marshal Grechko, died; he was succeeded by Marshal Dmitry Ustinov as the new Minister of Defense. Ustinov found the project too expensive, and the plans were ordered to be redrawn and reduced to 60,000 tons to minimize budget spending. Despite attempts to redraw and redesign the plan to satisfy the demands of the Soviet Army, the project remained too expensive, and it was cancelled entirely in 1978. While the Orel never saw fruition, in the 1980s it influenced the likewise abortive Ulyanovsk program.
See also
Soviet aircraft carrier Ulyanovsk
Russian aircraft carrier Kuznetsov
List of ships of the Soviet Navy
List of ships of Russia by project number
References
External links
"A Brief Look at Russian Aircraft Carrier Development," Robin J. Lee.
Aircraft carriers of the Soviet Navy
Cold War aircraft carriers of the Soviet Union
Proposed aircraft carriers
Abandoned military projects of the Soviet Union | Project 1153 Orel | Engineering | 802 |
62,198,068 | https://en.wikipedia.org/wiki/Comparison%20of%20OS%20emulation%20or%20virtualization%20apps%20on%20Android | There are many Android apps that can run or emulate other operating systems, either by utilizing hardware support for platform virtualization technologies or via terminal emulation. Some of these apps support more than one emulated/virtual file system for different OS profiles, giving the ability to install or run multiple OSs. Some can also expose the emulation via a localhost SSH connection (letting remote SSH terminal apps on the device access the emulated OS/VM), VNC, or XSDL. If more than one app supporting these protocols or technologies is available on the Android device, then, via Android's ability to do background tasking, the main emulator/VM app can be used to launch multiple emulated/VM OSs, which the other apps can connect to; thus multiple emulated/VM OSs can run at the same time. However, a few emulator or VM apps require the Android device to be rooted in order to work, while others do not. Some remote terminal access apps also have the ability to access Android's internally implemented Toybox via device loopback support. Some VM/emulator apps have a fixed set of OSs or applications that can be supported.
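As a rough illustration of the localhost-SSH workflow described above (a hypothetical sketch only: the profile names, ports, and credentials are placeholders, and paramiko is just one SSH client that could play the role of the connecting terminal app):

```python
import paramiko  # third-party SSH client library (pip install paramiko)

# Hypothetical localhost SSH ports exposed by two running emulated OS profiles.
PROFILES = {"debian-profile": 8022, "alpine-profile": 8023}

for name, port in PROFILES.items():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Username/password depend entirely on how the emulator app was configured.
    client.connect("127.0.0.1", port=port, username="user", password="password")
    _, stdout, _ = client.exec_command("uname -a")  # run a command in that OS
    print(name, "->", stdout.read().decode().strip())
    client.close()
```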
Since Android 8 (Oreo) and later versions, some of these apps have reported issues, as Google has heightened the security of file-access permissions in newer versions of Android. Some apps have difficulty accessing, or have lost access to, the SD card. It has also been reported that some of the apps have trouble utilizing packages like udisks2, Open vSwitch, Snort, and Mininet, due to new hardware or Android API restrictions on apps that have been put into place in recent years. Due to this, many of these app developers and their community members state that the emulation/VM app can run itself and an OS without the device being rooted, but not all packages will be able to run unless the device is rooted.
OS emulators or VM Android apps
The following is a list of OS emulators and OS virtualization Android apps.
Terminal emulation apps utilizing internal OS
See also
Comparison of platform virtualization software
List of computer system emulators
OS virtualization and emulation on Android
Mobile virtualization
References
Software comparisons
Android (operating system) | Comparison of OS emulation or virtualization apps on Android | Technology | 491 |
14,542,297 | https://en.wikipedia.org/wiki/Formylmethanofuran%E2%80%94tetrahydromethanopterin%20N-formyltransferase | In enzymology, a formylmethanofuran-tetrahydromethanopterin N-formyltransferase is an enzyme that catalyzes the chemical reaction
formylmethanofuran + 5,6,7,8-tetrahydromethanopterin ⇌ methanofuran + 5-formyl-5,6,7,8-tetrahydromethanopterin
Thus, the two substrates of this enzyme are formylmethanofuran and 5,6,7,8-tetrahydromethanopterin, whereas its two products are methanofuran and 5-formyl-5,6,7,8-tetrahydromethanopterin.
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is formylmethanofuran:5,6,7,8-tetrahydromethanopterin 5-formyltransferase. Other names in common use include formylmethanofuran-tetrahydromethanopterin formyltransferase, formylmethanofuran:tetrahydromethanopterin formyltransferase, formylmethanofuran(CHO-MFR):tetrahydromethanopterin(H4MPT) formyltransferase, FTR, and formylmethanofuran:5,6,7,8-tetrahydromethanopterin N5-formyltransferase. This enzyme participates in folate biosynthesis.
Ftr from the thermophilic methanogen Methanopyrus kandleri (which has an optimum growth temperature of 98 °C) is a hyperthermophilic enzyme that is absolutely dependent on the presence of lyotropic salts for activity and thermostability. The crystal structure of Ftr reveals a homotetramer composed essentially of two dimers. Each subunit is subdivided into two tightly associated lobes, both consisting of a predominantly antiparallel beta sheet flanked by alpha helices forming an alpha/beta sandwich structure. The approximate location of the active site was detected in a region close to the dimer interface. Ftr from the mesophilic methanogen Methanosarcina barkeri and from the sulphate-reducing archaeon Archaeoglobus fulgidus has a similar structure.
In the methylotrophic bacterium Methylobacterium extorquens, Ftr interacts with three other polypeptides to form an Ftr/hydrolase complex which catalyses the hydrolysis of formyl-tetrahydromethanopterin to formate during growth on C1 substrates.
Structural studies
As of late 2007, five structures had been solved for this class of enzymes and deposited in the Protein Data Bank.
References
Further reading
Protein domains
EC 2.3.1
Enzymes of known structure | Formylmethanofuran—tetrahydromethanopterin N-formyltransferase | Biology | 656 |
58,352,456 | https://en.wikipedia.org/wiki/Elizabeth%20New | Elizabeth Joy New (born 1984) is an Australian chemist and Professor of the School of Chemistry, University of Sydney. She won the 2018 Australian Museum 3M Eureka Prize.
Early life and education
New was born in Sydney in 1984. She represented Australia at the International Chemistry Olympiad in 2000 and 2001, winning bronze and gold medals respectively, and graduated from James Ruse Agricultural High School with a UAI of 100. She earned a bachelor's degree in chemistry at the University of Sydney in 2005, where she completed her master's degree in 2006 with Professor Trevor Hambley. During her graduate studies she worked on fluorescent tags to monitor the cellular uptake and metabolism of anti-tumor complexes. New completed her doctoral studies at Durham University working with David Parker, graduating in 2010. Her work looked at the cellular behaviour of lanthanide complexes.
Research and career
She was appointed a Royal Commission for the Exhibition of 1851 Research Fellow at the University of California, Berkeley in 2010. She worked with Christopher Chang on fluorescent sensors for copper. She was an Australian Research Council Discovery Early Career Research Fellow from 2012-2014, and held a Westpac Research Fellowship from 2016-2019. New's group developed reversible fluorescent sensors for cellular redox environments. She provided the first examples of reversible ratiometric cytoplasmic sensing and mitochondrial sensing. Her group has developed cobalt complexes for contrast agents in magnetic resonance imaging. The complexes can be used to monitor oxidative stress. They have also worked on the development of fluorescent sensor arrays for biological and analytical applications.
New was made a lecturer in 2015 and a senior lecturer in 2016. In 2017 she was named a ChemComm Emerging Investigator. She was appointed Associate Professor in 2018 and Professor in 2021.
Awards
2024 Medal of the Order of Australia (OAM)
2023 Society for Biological Inorganic Chemistry Early Career Award
2022 Australian Financial Review Emerging Leader in Higher Education
2020 Chemosensors Young Investigator Award
2019 Malcolm McIntosh Prize for Physical Scientist of the Year
2018 Royal Society of New South Wales Edgeworth David Medal
2018 3M Eureka Prize for Emerging Leader in Science
2018 Fellow of the Royal Society of New South Wales
2017 Royal Australian Chemical Institute Rennie Memorial Medal
2017 Royal Australian Chemical Institute Educator of the Year Award
2016 New South Wales Early Career Researcher of the Year
2015 Office of Learning and Teaching Teaching Excellence Award
2015 Young Tall Poppy Science Award
2015 Selby Research Award
2015 Vice-Chancellor award for Outstanding Teaching
2014 Royal Australian Chemical Institute Nyholm Lectureship, 2014-2015
2014 Asian Biological Inorganic Chemistry Early Career Research Award
2011 Royal Society of Chemistry Dalton Young Researchers Award
2005 University of Sydney The University Medal
References
Inorganic chemists
Australian women chemists
University of Sydney alumni
Alumni of Ustinov College, Durham
Academic staff of the University of Sydney
1984 births
Living people
Recipients of the Medal of the Order of Australia
Fellows of the Royal Society of New South Wales
21st-century Australian chemists
People educated at James Ruse Agricultural High School | Elizabeth New | Chemistry | 591 |