id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
32,045,660 | https://en.wikipedia.org/wiki/Plectronidium%20australiense | Plectronidium australiense is a species of anamorphic fungus. Known only from Australia, where it grows on the dead branches of Banksia canei, it was described as new to science in 1986. Its conidia have a basal appendage and measure 19–26 by 1.5 μm—shorter and narrower than the similar species P. minor, P. sinense, or P. magnoliae.
References
External links
Fungi of Australia
Fungi described in 1986
Pezizomycotina
Fungus species | Plectronidium australiense | Biology | 107 |
11,306,766 | https://en.wikipedia.org/wiki/Phyllosticta%20lentisci | Phyllosticta lentisci is a fungal plant pathogen infecting pistachio.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Fruit tree diseases
lentisci
Fungi described in 1913
Fungus species | Phyllosticta lentisci | Biology | 48 |
49,153,267 | https://en.wikipedia.org/wiki/Phaeocollybia%20amygdalospora | Phaeocollybia amygdalospora is a species of fungus in the family Cortinariaceae. Found in Durango, Mexico, where it grows under pine, it was described as new to science in 1996 by mycologists Victor Bandala and Egon Horak. It has amygdaliform (almond-shaped) spores (for which it is named) that measure 6.5–9 by 4–5 μm.
References
External links
Cortinariaceae
Fungi described in 1996
Fungi of Mexico
Fungi without expected TNC conservation status
Fungus species | Phaeocollybia amygdalospora | Biology | 118 |
59,842,040 | https://en.wikipedia.org/wiki/Graph%20removal%20lemma | In graph theory, the graph removal lemma states that when a graph contains few copies of a given subgraph, then all of the copies can be eliminated by removing a small number of edges. The special case in which the subgraph is a triangle is known as the triangle removal lemma.
The graph removal lemma can be used to prove Roth's theorem on 3-term arithmetic progressions, and a generalization of it, the hypergraph removal lemma, can be used to prove Szemerédi's theorem. It also has applications to property testing.
Formulation
Let H be a graph on h vertices. The graph removal lemma states that for any ε > 0, there exists a constant δ = δ(ε, H) > 0 such that for any n-vertex graph G with fewer than δn^h subgraphs isomorphic to H, it is possible to eliminate all copies of H by removing at most εn² edges from G.
An alternative way to state this is to say that for any n-vertex graph G with o(n^h) subgraphs isomorphic to H, it is possible to eliminate all copies of H by removing o(n²) edges from G. Here, the o indicates the use of little o notation.
In the case when H is a triangle, the resulting lemma is called the triangle removal lemma.
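To make the quantities in the statement concrete, here is a brute-force sketch in Python (illustrative only — the function names and the greedy removal strategy are assumptions, not part of the lemma). It counts the triangles of the complete graph on four vertices and then removes edges greedily until the graph is triangle-free; the greedy strategy only gives an upper bound on the number of removals, and for this graph the optimum is 2 edges (a perfect matching).

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles in a graph given as a set of frozenset edges."""
    vertices = {v for e in edges for v in e}
    return sum(
        1
        for a, b, c in combinations(sorted(vertices), 3)
        if {frozenset((a, b)), frozenset((a, c)), frozenset((b, c))} <= edges
    )

def greedy_triangle_removal(edges):
    """Remove one edge of each remaining triangle until none are left.
    Returns the set of removed edges (an upper bound, not the optimum)."""
    edges = set(edges)
    removed = set()
    while True:
        vertices = {v for e in edges for v in e}
        tri = next(
            (
                (a, b, c)
                for a, b, c in combinations(sorted(vertices), 3)
                if {frozenset((a, b)), frozenset((a, c)), frozenset((b, c))} <= edges
            ),
            None,
        )
        if tri is None:
            return removed
        a, b, _ = tri
        edges.discard(frozenset((a, b)))
        removed.add(frozenset((a, b)))

# The complete graph on 4 vertices has 4 triangles.
k4 = {frozenset(e) for e in combinations(range(4), 2)}
print(count_triangles(k4))                 # 4
print(len(greedy_triangle_removal(k4)))    # 3 with this greedy order; optimum is 2
```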
History
The original motivation for the study of the triangle removal lemma was the Ruzsa–Szemerédi problem. Its initial formulation, due to Imre Z. Ruzsa and Endre Szemerédi in 1978, was slightly weaker than the triangle removal lemma used nowadays, and can be roughly stated as follows: every locally linear graph on n vertices contains o(n²) edges. This statement can be quickly deduced from the modern triangle removal lemma. Ruzsa and Szemerédi also provided an alternative proof of Roth's theorem on arithmetic progressions as a simple corollary.
In 1986, during their work on generalizations of the Ruzsa–Szemerédi problem to arbitrary r-uniform hypergraphs, Erdős, Frankl, and Rödl provided a statement for general graphs very close to the modern graph removal lemma: if the graph H' is a homomorphic image of H, then any H'-free graph G on n vertices can be made H-free by removing o(n²) edges.
The modern formulation of the graph removal lemma was first stated by Füredi in 1994. The proof generalized earlier approaches by Ruzsa and Szemerédi and Erdős, Frankl, and Rödl, also using the Szemerédi regularity lemma.
Graph counting lemma
A key component of the proof of the graph removal lemma is the graph counting lemma about counting subgraphs in systems of regular pairs. The graph counting lemma is also very useful on its own. According to Füredi, it is used "in most applications of regularity lemma".
Heuristic argument
Let H be a graph on h vertices, whose vertex set is V = {1, …, h} and edge set is E. Let X_1, …, X_h be sets of vertices of some graph G such that, for all ij ∈ E, the pair (X_i, X_j) is ε-regular (in the sense of the regularity lemma). Let also d_ij be the density between the sets X_i and X_j. Intuitively, a regular pair (X, Y) with density d should behave like a random Erdős–Rényi-like graph, where every pair of vertices (x, y) ∈ X × Y is selected to be an edge independently with probability d. This suggests that the number of copies of H on vertices x_1, …, x_h such that x_i ∈ X_i should be close to the expected number from the Erdős–Rényi model, namely (∏_{ij ∈ E(H)} d_ij) · (∏_{i ∈ V(H)} |X_i|), where E(H) and V(H) are the edge set and the vertex set of H.
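The heuristic can be checked numerically. The sketch below (illustrative; none of the names come from the source) builds a random tripartite graph in which each pair of parts is a density-p random bipartite graph — a stand-in for an ε-regular pair — and compares its triangle count with the Erdős–Rényi prediction p³n³:

```python
import random
from itertools import product

random.seed(0)

# Tripartite random graph: each of the three pairs of parts is an
# independent density-p random bipartite graph -- a stand-in for an
# eps-regular pair with density p.
n, p = 60, 0.5
X = [range(i * n, (i + 1) * n) for i in range(3)]
edge = {
    (a, b)
    for i in range(3)
    for j in range(i + 1, 3)
    for a, b in product(X[i], X[j])
    if random.random() < p
}

triangles = sum(
    1
    for a, b, c in product(*X)
    if (a, b) in edge and (a, c) in edge and (b, c) in edge
)
expected = p ** 3 * n ** 3  # one factor of p per edge of the triangle
print(triangles, expected)  # the two agree to within a few percent
```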
Precise statement
The straightforward formalization of the above heuristic claim is as follows. Let H be a graph on h vertices, whose vertex set is V = {1, …, h} and whose edge set is E. Let δ > 0 be arbitrary. Then there exists ε > 0 such that for any X_1, …, X_h as above, satisfying d_ij ≥ δ for all ij ∈ E, the number of graph homomorphisms from H to G such that vertex i is mapped to X_i is not smaller than (1 − δ) · ∏_{ij ∈ E} (d_ij − δ) · ∏_{i ∈ V} |X_i|.
Blow-up Lemma
One can even find bounded-degree subgraphs of blow-ups of H in a similar setting. The following claim appears in the literature under the name of the blow-up lemma and was first proven by Komlós, Sárközy, and Szemerédi. The precise statement here is a slightly simplified version due to Komlós, who also referred to it as the key lemma, as it is used in numerous regularity-based proofs.
Let H be an arbitrary graph and let t ≥ 1. Construct H(t) by replacing each vertex of H by an independent set of size t and replacing every edge of H by the complete bipartite graph on the two corresponding independent sets. Let d > ε > 0 be arbitrary reals, let m be a positive integer, and let R be a subgraph of H(t) with r vertices and maximum degree Δ. Define ε₀ = (d − ε)^Δ/(2 + Δ). Finally, let G be a graph and X_1, …, X_h be disjoint sets of vertices of G, each of size at least m, such that whenever ij is an edge of H, the pair (X_i, X_j) is an ε-regular pair with density at least d. Then if ε ≤ ε₀ and t − 1 ≤ ε₀m, the number of injective graph homomorphisms from R to G is at least (ε₀m)^r.
In fact, one can restrict to counting only those homomorphisms such that any vertex of R lying in the independent set of H(t) that replaced the vertex i of H is mapped to a vertex in X_i.
Proof
We will provide a proof of the counting lemma in the case when H is a triangle (the triangle counting lemma). The proofs of the general case and of the blow-up lemma are very similar and do not require different techniques.
Take ε ≤ δ/2. Let X_1' be the set of those vertices in X_1 which have at least (d_12 − ε)|X_2| neighbors in X_2 and at least (d_13 − ε)|X_3| neighbors in X_3. Note that if there were more than ε|X_1| vertices in X_1 with fewer than (d_12 − ε)|X_2| neighbors in X_2, then these vertices together with the whole of X_2 would witness ε-irregularity of the pair (X_1, X_2). Repeating this argument for X_3 shows that we must have |X_1'| ≥ (1 − 2ε)|X_1|. Now take an arbitrary x ∈ X_1' and define N_2(x) and N_3(x) as the neighbors of x in X_2 and X_3, respectively. By definition, |N_2(x)| ≥ (d_12 − ε)|X_2| ≥ ε|X_2| and |N_3(x)| ≥ (d_13 − ε)|X_3| ≥ ε|X_3|, so by the regularity of (X_2, X_3) we obtain the existence of at least (d_23 − ε)|N_2(x)||N_3(x)| ≥ (d_12 − ε)(d_13 − ε)(d_23 − ε)|X_2||X_3| triangles containing x. Since x was chosen arbitrarily from the set X_1' of size at least (1 − 2ε)|X_1|, we obtain a total of at least (1 − 2ε)(d_12 − ε)(d_13 − ε)(d_23 − ε)|X_1||X_2||X_3| triangles, which finishes the proof as ε ≤ δ/2.
Proof
Proof of the triangle removal lemma
To prove the triangle removal lemma, consider an ε-regular partition V_1 ∪ … ∪ V_M of the vertex set of G, where we may assume M ≥ 1/ε. This exists by the Szemerédi regularity lemma. The idea is to remove all edges inside parts and all edges between irregular pairs, low-density pairs, and small parts, and prove that if at least one triangle still remains, then many triangles remain. Specifically, remove all edges inside each part, as well as all edges between parts V_i and V_j if the pair (V_i, V_j) is not ε-regular, if its density d(V_i, V_j) is less than 2ε, or if one of V_i, V_j has at most (ε/4M)·n vertices.
This procedure removes at most O(ε)·n² edges; rescaling ε by a constant factor at the end yields the εn² bound of the lemma. If there exists a triangle with vertices in parts V_i, V_j, V_k after these edges are removed, then these pairs are ε-regular with densities at least 2ε, and each part has more than (ε/4M)·n vertices, so the triangle counting lemma tells us there are at least (1 − 2ε)·ε³·(ε/4M)³·n³ triples in V_i × V_j × V_k which form a triangle. Thus, we may take δ smaller than (1 − 2ε)·ε³·(ε/4M)³.
Proof of the graph removal lemma
The proof of the case of general is analogous to the triangle case, and uses the graph counting lemma instead of the triangle counting lemma.
Induced Graph Removal Lemma
A natural generalization of the graph removal lemma is to consider induced subgraphs. In property testing, it is often useful to consider how far a graph is from being induced H-free. A graph G is considered to contain an induced subgraph H if there is an injective map f : V(H) → V(G) such that f(u)f(v) is an edge of G if and only if uv is an edge of H. Notice that non-edges are considered as well. G is induced H-free if there is no induced subgraph H. We define G as ε-far from being induced H-free if we cannot add or delete εn² edges to make G induced H-free.
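To make the "if and only if" condition concrete, here is a brute-force induced-subgraph check in Python (all names are illustrative, not from the source). A 4-cycle contains the path on 3 vertices as an induced subgraph, while the complete graph K₄ does not, since any 3 of its vertices induce a triangle:

```python
from itertools import permutations

def has_induced_copy(G_edges, G_vertices, H_edges, H_vertices):
    """Return True if G contains an induced copy of H: an injective map f
    with f(u)f(v) an edge of G if and only if uv is an edge of H."""
    G = {frozenset(e) for e in G_edges}
    H = {frozenset(e) for e in H_edges}
    for image in permutations(G_vertices, len(H_vertices)):
        f = dict(zip(H_vertices, image))
        if all(
            (frozenset((f[u], f[v])) in G) == (frozenset((u, v)) in H)
            for u, v in permutations(H_vertices, 2) if u < v
        ):
            return True
    return False

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]                       # 4-cycle
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]    # complete graph
p3 = [(0, 1), (1, 2)]                                       # path on 3 vertices
print(has_induced_copy(c4, range(4), p3, range(3)))  # True
print(has_induced_copy(k4, range(4), p3, range(3)))  # False
```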
Formulation
A version of the graph removal lemma for induced subgraphs was proved by Alon, Fischer, Krivelevich, and Szegedy in 2000. It states that for any graph H with h vertices and any ε > 0, there exists a constant δ > 0 such that, if an n-vertex graph G has fewer than δn^h induced subgraphs isomorphic to H, then it is possible to eliminate all induced copies of H by adding or removing fewer than εn² edges.
The problem can be reformulated as follows: given a red-blue coloring of the complete graph K_n (analogous to the graph G on the same vertices, where non-edges are blue and edges are red) and a constant ε, there exists a constant δ such that if a red-blue coloring of K_n has fewer than δn^h subgraphs isomorphic to H, then it is possible to eliminate all copies of H by changing the colors of fewer than εn² edges. Notice that our previous "cleaning" process, where we remove all edges between irregular pairs, low-density pairs, and small parts, only involves removing edges. Removing edges corresponds only to changing edge colors from red to blue. However, there are situations in the induced case where the optimal edit distance involves changing edge colors from blue to red as well. Thus, the regularity lemma is insufficient to prove the induced graph removal lemma. The proof must instead take advantage of the strong regularity lemma.
Proof
Strong Regularity Lemma
The strong regularity lemma is a strengthened version of Szemerédi's regularity lemma. For any infinite sequence of constants ε_0 ≥ ε_1 ≥ … > 0, there exists an integer M such that for any graph G, we can obtain two (equitable) partitions P and Q of its vertex set such that the following properties are satisfied:
Q refines P; that is, every part of P is the union of some collection of parts in Q.
P is ε_0-regular and Q is ε_{|P|}-regular.
q(Q) < q(P) + ε_0, and |Q| ≤ M.
The function q is the energy function defined in the proof of the Szemerédi regularity lemma. Essentially, we can find a pair of partitions P and Q where Q is extremely regular compared to P, and at the same time P and Q are close to each other in energy. This property is captured in the third condition.
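For reference, the mean-square-density form of the energy commonly used in regularity arguments can be written as follows (this particular normalization is an assumption, not taken from the source); for a partition P = {V_1, …, V_k} of the vertex set of an n-vertex graph:

```latex
q(\mathcal{P}) \;=\; \sum_{1 \le i < j \le k} \frac{|V_i|\,|V_j|}{n^2}\, d(V_i,V_j)^2,
\qquad\text{where } d(V_i,V_j) = \frac{e(V_i,V_j)}{|V_i|\,|V_j|}.
```

Since 0 ≤ q(P) ≤ 1 and refining a partition never decreases q, the third condition says that passing from P to the much finer partition Q gains almost no energy, which is the sense in which P and Q are close to each other.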
Corollary of the Strong Regularity Lemma
The following corollary of the strong regularity lemma is used in the proof of the induced graph removal lemma. For any infinite sequence of constants ε_0 ≥ ε_1 ≥ … > 0, there exists δ > 0 such that there exist a partition V_1 ∪ … ∪ V_k of the vertex set and subsets W_i ⊆ V_i for each i, where the following properties are satisfied:
|W_i| ≥ δn for each i
(W_i, W_j) is ε_k-regular for each pair i ≤ j, including i = j
|d(W_i, W_j) − d(V_i, V_j)| ≤ ε_0 for all but at most ε_0·k² pairs i, j
The main idea of the proof of this corollary is to start with two partitions P and Q that satisfy the strong regularity lemma, where P = {V_1, …, V_k}. Then for each part V_i, we uniformly at random choose some part W_i ⊆ V_i that is a part in Q. The expected number of irregular pairs (W_i, W_j) is less than 1. Thus, there exists some collection of the W_i such that every pair is regular.
The important aspect of this corollary is that every pair (W_i, W_j) is regular, including the pairs with i = j. This allows us to consider both edges and non-edges when we perform our cleaning argument.
Proof Sketch of the Induced Graph Removal Lemma
With these results, we are able to prove the induced graph removal lemma. Take any graph G with n vertices that has fewer than δn^h copies of H, where h is the number of vertices of H. The idea is to start with a collection of vertex sets W_i which satisfy the conditions of the corollary of the strong regularity lemma. We can then perform a "cleaning" process where we remove all edges between pairs of parts with low density, and add all edges between pairs of parts with high density. We choose the density requirements such that we have added or deleted at most εn² edges.
If the new graph has no copies of H, then we are done. Suppose instead that the new graph has a copy of H, with vertex i of H embedded into W_i. Then if ij is an edge of H, the pair (W_i, W_j) does not have low density, since edges between low-density pairs were removed in the cleaning process. Similarly, if ij is not an edge of H, the pair (W_i, W_j) does not have high density, since edges between high-density pairs were added in the cleaning process.
Thus, by a counting argument similar to the proof of the triangle counting lemma — that is, the graph counting lemma — we can show that G has more than δn^h copies of H, a contradiction.
Generalizations
The graph removal lemma was later extended to directed graphs and to hypergraphs.
Quantitative bounds
The usage of the regularity lemma in the proof of the graph removal lemma forces δ to be extremely small, with 1/δ bounded by a tower function whose height is polynomial in 1/ε; here, the tower function T(k) is the tower of twos of height k. A tower-type bound is necessary in all proofs via the regularity lemma, as is implied by results of Gowers on lower bounds in the regularity lemma. However, in 2011, Fox provided a new proof of the graph removal lemma which does not use the regularity lemma, improving the bound so that 1/δ is a tower of twos whose height is only logarithmic in 1/ε (with the constant depending on h, the number of vertices of the removed graph H). His proof nevertheless uses regularity-related ideas such as the energy increment, but with a different notion of energy, related to entropy. This proof can also be rephrased using the Frieze–Kannan weak regularity lemma, as noted by Conlon and Fox. In the special case of bipartite H, it was shown that a polynomial dependence of δ on ε is sufficient.
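To see how quickly the tower of twos grows, a minimal sketch (the function name is illustrative):

```python
def tower(height):
    """Tower of twos: tower(0) = 1, tower(k) = 2 ** tower(k - 1)."""
    value = 1
    for _ in range(height):
        value = 2 ** value
    return value

for k in range(5):
    print(k, tower(k))   # 1, 2, 4, 16, 65536
# tower(5) = 2**65536 already has 19,729 decimal digits.
```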
There is a large gap between the available upper and lower bounds for δ in the general case. The current best result true for all graphs is due to Alon and states that, for each nonbipartite H, there exists a constant c > 0 such that δ must be quasi-polynomially small — at most ε^{c·log(1/ε)} — for the graph removal lemma to hold, while for bipartite H, the optimal δ has polynomial dependence on ε, which matches the lower bound. The construction for the nonbipartite case is a consequence of the Behrend construction of large Salem–Spencer sets. Indeed, as the triangle removal lemma implies Roth's theorem, the existence of large Salem–Spencer sets may be translated into an upper bound for δ in the triangle removal lemma. This method can be leveraged for arbitrary nonbipartite H to give the aforementioned bound.
Applications
Additive combinatorics
Graph theory
Property testing
See also
Counting lemma
Tuza's conjecture
References
Graph theory | Graph removal lemma | Mathematics | 2,712 |
44,424,239 | https://en.wikipedia.org/wiki/All%20in%20the%20Method | All in the Method is a British comedy web series produced, written by and starring Luke Kaile and Rich Keeble. The series is broadcast on the internet and premiered on 17 June 2012. So far, five episodes of season one have been made. The show can be found distributed on the web via YouTube. The series sees Rich and Luke as flat-sharing brothers who have both chosen the poorly paid profession of acting as their chosen career paths. Both of them believe in the ‘method’ form of acting, where they become their character twenty-four-seven. This naturally results in the pair getting themselves into scenarios that their everyday lives would never have come close to brushing with if it wasn’t for their commitment to the acting cause.
Both Keeble and Kaile’s experiences as actors and writers helped prepare them for producing All in the Method in many ways, despite never having produced a web series before. The cast includes many people they’ve worked with on past projects, both in the theatre and in film, and many of the characters are inspired by those past experiences.
All in the Method was screened at the Raindance Film Festival and won 'Best Guest Actor' (Peter Glover) at LA Web Fest 2014.
References
External links
Official website
All in the Method on YouTube
2012 web series debuts
2013 web series endings
British comedy web series | All in the Method | Technology | 273 |
782,680 | https://en.wikipedia.org/wiki/Bird%20bath | A bird bath (or birdbath) is an artificial puddle or small shallow pond, created with a water-filled basin, in which birds may drink, bathe, and cool themselves. A bird bath can be a garden ornament, small reflecting pool, outdoor sculpture, and also can be a part of creating a vital wildlife garden.
Description
A bird bath (or birdbath) is an artificial puddle or small shallow pond, created with a water-filled basin. Birds may use the bath to drink, bathe, and cool themselves. A bird bath is an attraction for many different species of birds to visit gardens, especially during the summer and drought periods. Bird baths that provide a reliable source of water year round add to the popularity and "micro-habitat" support.
Bird baths can be pre-made basins on pedestals and columns, hung from eaves or trees, or carved-out depressions in rocks and boulders. Requirements for a bird bath include the following: a shallow, gradually deepening basin; open surroundings to minimize stalking by cats; clean, regularly renewed and refilled water; and periodic cleaning to avoid contamination and mosquitoes. Two inches of water in the center is sufficient for most backyard birds, because they do not submerge their bodies, only dipping their wings to splash water on their backs. Deeper or wider basins can have "perch islands" in the water, which can also help discourage feline predators. Elevation on a pedestal is a common safety measure, providing a clear area around the bird bath that is free of hiding locations for predators. A bird feeder can complement a bird bath to encourage birds to linger and return.
The early bird baths were simple depressions in the ground. The first purpose-built bird bath was developed by UK garden design company, Abrahm Pulman & Sons in the 1830s.
Design and construction
A bird bath can be a garden ornament, small reflecting pool, outdoor sculpture, and also can be a part of creating a vital wildlife garden. Bird baths can be made with materials, including molded concrete, glazed terra cotta, glass, metals (e.g., copper), plastics, mosaic tiles, marble, or any other material that can be outdoors and hold water. In natural landscape gardens rocks and boulders with natural or stonemason carved basins can fit in unobtrusively. Some bird baths use a recirculating pump as part of a fountain or water feature, and can include filters, a float valve-water connection for automatic refilling, or a drip irrigation emitter aimed into the bowl. Some use a solar powered pump, floating or submerged, to recirculate the water. Birds are attracted to the sight and sound of running water, with integrated or nearby fountains helpful elements to bring birds to the garden.
Ornaments and sculptures
The traditional bird bath is made of molded concrete or glazed terra cotta formed in two pieces: the bowl and the pedestal. The bowl has an indentation or socket in the base which allows it to fit on the pedestal. The pedestal is typically about one meter tall. Both bowl and pedestal can be plain or decorated with bas-relief. Bowls can be of pure curved geometry, or carry motifs of a shell or a pseudo-rocky spring. The pedestal can likewise be a simple silhouette or incorporate decorations. Birds seem unconcerned with the aesthetics: even a shallow plate, pie tin, or puddle below a slowly dripping water outlet will be used.
Baths for large birds
Large birds, such as the Canada goose, also enjoy baths. They may be accommodated well by large agricultural sprinklers in a field of stubble. Providing such a place for migratory birds, especially in urban and suburban areas devoid of wetlands is an excellent way of encouraging them to frequent an area.
Bird habitat
Perch and view needs
Bird baths require a place for birds to perch. The bath should also be shallow enough to avoid the risk of birds drowning. A depth of 2” is right for most species. This requirement may be fulfilled by making the bowl shallow enough to allow birds to perch in the water. For deeper bowls, stones, gravel or rocks can be placed in the center to give birds a place to perch. Objects placed in the bird bath bowl should have a texture that makes it easy for birds' talons to hold. Birds lacking binocular vision have poor depth perception, and can find a bird bath off-putting if they are unable to judge the water's depth. Leaning a stick or flat rock against the bird bath rim as a ramp to allow them gradual access into the water may allay their fear.
Consideration should also be made to the issue of house cats and other predators, by placing the bird bath in a location where birds can see the area around it, and where there are no hiding places for predators. Birds cannot fly well when their feathers are wet; two feet of open space on all sides of the bird bath allows birds to see danger coming with enough time to escape. If the bowl is too deep, some birds will be afraid to enter the bath, staying at the edge and using it for drinking water only, being unable to see beyond the edge if entering the water, or unwilling to enter water that is too deep for their safety.
Plants
Native plants, ornamental plants that supply berries, acorns, nuts, seeds, nectar, and other foods, and also bird nest building materials encourages the health and new generations of birds. These qualities can also increase the visible population to enjoy in a garden. Using companion planting and the birds' insect cuisine habits is a traditional method for pest control in an organic garden, and any landscape.
Taller shrubs and trees nearby allow short and safe "commutes" to the bird bath. The bird bath will attract more birds if placed where a frightened bird can fly up easily to an overhanging limb or resting place if disturbed or attacked.
Maintenance
A bird bath requires regular maintenance and fresh water. Fresh water and cleaning are important because of the possible adverse health effects of birds drinking dirty water, or water which may have become fouled with excrement, mosquito larvae, algae, or fungi.
Maintenance for some bird baths may be as simple as a wash and refill several times a week, but it will depend on the bird bath materials. There are a variety of methods and substances that can be used to clean a bird bath, including small quantities of bleach, oregano or olive oil, or commercially available, non-toxic cleaning products. Concrete bird baths tend to become mossy and, therefore, slippery—requiring an occasional scrubbing out with a stiff brush. Plastic or resin bird baths may need to be drained, wiped down with a towel, and refilled.
Mosquitoes and mosquito larvae are the most serious potential health risk that can be caused by poor bird bath maintenance. To prevent mosquito larvae, change the bird bath water weekly to interrupt their 7–10 day breeding cycle, or use a water aerator to break up the still water surface that mosquitoes require to lay eggs. Commercial products that contain Bacillus thuringiensis israelensis (Bti), which is lethal to mosquitoes but non-toxic to humans and wildlife, can also be used to control mosquitoes.
See also
Bird feeder
Bird watching
Conservation biology
Drought
Gardens
Habitats
Mud-puddling
Riparian zone restoration
Wetlands
References
External links
Bird baths and birdwatching
Birds in popular culture
Birdwatching
Bird feeding
Garden features
Garden ornaments
Habitats
Architectural elements | Bird bath | Technology,Engineering | 1,514 |
5,561 | https://en.wikipedia.org/wiki/Computational%20linguistics | Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.
Origins
The field has overlapped with artificial intelligence since the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since rule-based approaches could make arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax, and semantics could be learned using explicit rules as well. After the failure of rule-based approaches, David Hays coined the term "computational linguistics" in order to distinguish the field from AI; he co-founded both the Association for Computational Linguistics (ACL) and the International Committee on Computational Linguistics (ICCL) in the 1970s and 1980s. What started as an effort to translate between languages evolved into a much wider field of natural language processing.
Annotated corpora
In order to be able to meticulously study the English language, an annotated text corpus was much needed. The Penn Treebank was one of the most used corpora. It consisted of IBM computer manuals, transcribed telephone conversations, and other texts, together containing over 4.5 million words of American English, annotated using both part-of-speech tagging and syntactic bracketing.
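As an illustration of the "syntactic bracketing" format mentioned above, the following sketch parses a Penn-Treebank-style bracketed sentence into a tree and reads off the part-of-speech tags (the parser and the example sentence are minimal illustrations, not Treebank tooling):

```python
def parse_tree(s):
    """Parse a Penn-Treebank-style bracketed string into (label, children)
    tuples, e.g. '(S (NP (DT The) (NN dog)) (VP (VBZ barks)))'."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def node():
        nonlocal pos
        assert tokens[pos] == "("; pos += 1
        label = tokens[pos]; pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:
                children.append(tokens[pos]); pos += 1
        pos += 1  # consume the closing ')'
        return (label, children)

    return node()

def pos_tags(tree):
    """Collect (tag, word) pairs from the preterminal nodes of a tree."""
    label, children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return [(label, children[0])]
    return [p for c in children for p in pos_tags(c)]

t = parse_tree("(S (NP (DT The) (NN dog)) (VP (VBZ barks)))")
print(pos_tags(t))  # [('DT', 'The'), ('NN', 'dog'), ('VBZ', 'barks')]
```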
Japanese sentence corpora were analyzed and a pattern of log-normality was found in relation to sentence length.
Modeling language acquisition
The fact that during language acquisition children are largely exposed only to positive evidence — that is, evidence is provided only for what is a correct form, and none for what is incorrect — was a limitation for the models of the time, because the deep learning models available today did not yet exist in the late 1980s.
It has been shown that languages can be learned with a combination of simple input presented incrementally as the child develops better memory and longer attention span, which explained the long period of language acquisition in human infants and children.
Robots have been used to test linguistic theories. Enabled to learn as children might, models were created based on an affordance model in which mappings between actions, perceptions, and effects were created and linked to spoken words. Crucially, these robots were able to acquire functioning word-to-meaning mappings without needing grammatical structure.
Using the Price equation and Pólya urn dynamics, researchers have created a system which not only predicts future linguistic evolution but also gives insight into the evolutionary history of modern-day languages.
Chomsky's theories
Chomsky's theories have influenced computational linguistics, particularly in understanding how infants learn complex grammatical structures, such as those described in Chomsky normal form. Attempts have been made to determine how an infant learns a "non-normal grammar" as theorized by Chomsky normal form. Research in this area combines structural approaches with computational models to analyze large linguistic corpora like the Penn Treebank, helping to uncover patterns in language acquisition.
See also
Artificial intelligence in fiction
Collostructional analysis
Computational lexicology
Computational Linguistics (journal)
Computational models of language acquisition
Computational semantics
Computational semiotics
Computer-assisted reviewing
Dialog systems
Glottochronology
Grammar induction
Human speechome project
Internet linguistics
Lexicostatistics
Natural language processing
Natural language user interface
Quantitative linguistics
Semantic relatedness
Semantometrics
Systemic functional linguistics
Translation memory
Universal Networking Language
References
Further reading
Steven Bird, Ewan Klein, and Edward Loper (2009). Natural Language Processing with Python. O'Reilly Media. .
Daniel Jurafsky and James H. Martin (2008). Speech and Language Processing, 2nd edition. Pearson Prentice Hall. .
Mohamed Zakaria KURDI (2016). Natural Language Processing and Computational Linguistics: speech, morphology, and syntax, Volume 1. ISTE-Wiley. .
Mohamed Zakaria KURDI (2017). Natural Language Processing and Computational Linguistics: semantics, discourse, and applications, Volume 2. ISTE-Wiley. .
External links
Association for Computational Linguistics (ACL)
ACL Anthology of research papers
ACL Wiki for Computational Linguistics
CICLing annual conferences on Computational Linguistics
Computational Linguistics – Applications workshop
Language Technology World
Resources for Text, Speech and Language Processing
The Research Group in Computational Linguistics
Formal sciences
Cognitive science
Computational fields of study | Computational linguistics | Mathematics,Technology | 908 |
76,359,866 | https://en.wikipedia.org/wiki/Classifying%20space%20for%20SU%28n%29 | In mathematics, the classifying space for the special unitary group is the base space of the universal principal bundle . This means that principal bundles over a CW complex up to isomorphism are in bijection with homotopy classes of its continuous maps into . The isomorphism is given by pullback.
Definition
There is a canonical inclusion of complex oriented Grassmannians given by V ↦ V × {0}. The colimit of this sequence of inclusions is the classifying space BSU(n).
Since complex oriented Grassmannians can be expressed as homogeneous spaces, the group structure carries over to BSU(n).
Simplest classifying spaces
Since SU(1) is the trivial group, BSU(1) is the trivial topological space.
Since SU(2) ≅ Sp(1), one has BSU(2) ≅ BSp(1) ≅ ℍP^∞, the infinite quaternionic projective space.
Classification of principal bundles
Given a topological space X, the set of principal SU(n)-bundles on it up to isomorphism is denoted Prin_{SU(n)}(X). If X is a CW complex, then the map:
[X, BSU(n)] → Prin_{SU(n)}(X), [f] ↦ f*ESU(n)
is bijective.
Cohomology ring
The cohomology ring of BSU(n) with coefficients in the ring ℤ of integers is generated by the Chern classes:
H*(BSU(n); ℤ) = ℤ[c_2, …, c_n],
where c_i has degree 2i. The first Chern class is absent because SU(n)-bundles have trivial determinant line bundle.
Infinite classifying space
The canonical inclusions SU(n) ↪ SU(n+1) induce canonical inclusions BSU(n) ↪ BSU(n+1) on their respective classifying spaces. Their respective colimits are denoted as:
SU := colim_n SU(n), BSU := colim_n BSU(n).
BSU is indeed the classifying space of SU.
See also
Classifying space for O(n)
Classifying space for SO(n)
Classifying space for U(n)
Literature
External links
classifying space on nLab
BSU(n) on nLab
References
Algebraic topology | Classifying space for SU(n) | Mathematics | 269 |
1,248,306 | https://en.wikipedia.org/wiki/Strap | A strap, sometimes also called strop, is an elongated flap or ribbon, usually of leather or other flexible materials.
Thin straps are used as part of clothing or baggage, or bedding such as a sleeping bag. See for example spaghetti strap, shoulder strap. A strap differs from a belt mainly in that a strap is usually integral to the item of clothing; either can be used in combination with buckles.
Straps are also used as fasteners to attach, secure, carry, or bind items, to objects, animals (for example a saddle on a horse) and people (for example a watch on a wrist), or even to tie down people and animals, as on an apparatus for corporal punishment. Occasionally a strap is specified after what it binds or holds, e.g. chin strap. Webbing is a particular type of strap that is a strong fabric woven as a flat strip or tube that is also often used in place of rope. Modern webbing is typically made from exceptionally high-strength material and is used in automobile seat belts, furniture manufacturing, transportation, towing, military uniform, cargo fasteners, and many other fields.
Components
Strap loop
Strap union
Strap fitting
Packaging
The strap is commonly used in the packaging industry to secure or fasten items. It may be made from a wide range of materials, such as plastic, steel, paper, or fabric. Usually, the strap is secured to itself through various means, but it may also be secured to other items, such as pallets.
Gallery
See also
Buckle
Drawstring
Watch strap
Phone strap
Snap fastener
Strapping (punishment)
Hook and loop fastener
References
External links
Parts of clothing
Textile closures
fr:Strap
pl:Rzemień | Strap | Technology | 350 |
35,829,671 | https://en.wikipedia.org/wiki/Xi%20Piscium | Xi Piscium (ξ Piscium) is an orange-hued binary star system in the zodiac constellation of Pisces. In 1690, the astronomer Johannes Hevelius in his Firmamentum Sobiescianum regarded the constellation Pisces as being composed of four subdivisions. Xi Piscium was considered to be part of the Linum Austrinum, the South Cord. The star is visible to the naked eye, having an apparent visual magnitude of 4.60. Based upon an annual parallax shift of 11.67 mas as seen from Earth, it is located about 280 light years from the Sun. It is moving away from the Sun, having a radial velocity of +26 km/s.
This is a single-lined spectroscopic binary system with an orbital period of 4.6 years and an eccentricity of around 0.18. The spectroscopic binary nature of this star was discovered in 1901 by William Wallace Campbell using the Mills spectrograph at the Lick Observatory. The visible component is an evolved K-type giant star with a stellar classification of K0 III. It is a red clump star, which indicates it is generating energy through helium fusion at its core.
In non-Western astronomy
In Chinese astronomy, the "Outer Fence" refers to an asterism consisting of ξ Piscium, δ Piscium, ε Piscium, ζ Piscium, μ Piscium, ν Piscium and α Piscium. Consequently, the Chinese name for ξ Piscium itself is "the Sixth Star of the Outer Fence".
References
K-type giants
Horizontal-branch stars
Spectroscopic binaries
Pisces (constellation)
Piscium, Xi
Durchmusterung objects
Piscium, 111
011559
008833
0549 | Xi Piscium | Astronomy | 375 |
53,050,024 | https://en.wikipedia.org/wiki/Granny%20dumping | Granny dumping (informal) is a form of modern senicide. The term was introduced in the early 1980s by professionals in the medical and social work fields. Granny dumping is defined by the Oxford English Dictionary as "the abandonment of an elderly person in a public place such as a hospital or nursing home, especially by a relative". It may be carried out by family members who are unable or unwilling to continue providing care due to financial problems, burnout, lack of resources (such as home health or assisted living options), or stress. However, instances of institutional granny dumping, by hospitals and care facilities, have also been known to occur. The "dumping" may involve the literal abandonment of an elderly person, who is taken to a location such as a hospital waiting area or emergency room and then left, or the refusal to return to collect an elderly person after the person is discharged from a hospital visit or hotel stay. While leaving an elderly person in a hospital or nursing facility is a common form of the practice, there have been instances of elderly people being "dumped" in other locations, such as the side of a public street.
Historical background, causes, and costs
A practice known as ubasute has existed in Japanese legend for centuries: stories tell of senile elders carried to mountaintops by poor relatives who were unable to look after them. Japan's widespread economic and demographic problems have seen the modern practice on the rise, with relatives dropping off seniors at hospitals or charities. A 1992 report issued by the American College of Emergency Physicians estimated that 70,000 elderly Americans (male and female in equal numbers) had been abandoned that year. In the same study, informal surveys returned by 169 hospital emergency departments reported an average of 8 "granny dumping" abandonments per week. According to the New York Times, 1 in 5 people now care for an elderly parent, and people are spending more time than ever caring for elderly parents rather than their own children. Social workers have said that this may be the result of millions of people being near the breaking point in looking after elderly parents who are in poor health.
In the US, granny dumping is more likely to happen in states such as Florida, Texas, and California, where there are large populations of retirement communities. Congress has attempted to step in by mandating that emergency departments see all patients. In some US states, and some other countries, the practice is illegal or is subject to efforts to declare it illegal.
However, Medicaid covers less and less of patients' medical bills, through declining reimbursement (78% in 1989, and decreasing) and reduced eligibility. In some cases, hospitals may not want to take the risk of having a patient who cannot pay, so they will attempt to transfer the patient's care to another hospital. Under the Consolidated Omnibus Budget Reconciliation Act of 1985, signed into law by Ronald Reagan, a hospital can transfer a patient at the patient's request, or providers must sign a document stating why they believe a patient's care would be better served at another facility. With 40% of revenue coming from Medicaid and Medicare, a hospital must earn 8 cents per dollar to compensate for a loss of 7 cents per dollar on Medicaid/Medicare patients. In 1989, hospitals passed an additional 2 billion dollars in costs for Medicare/Medicaid patients on to private payers.
By caregivers
In cases where granny dumping is practiced by family members or caregivers, the dumping falls into two categories: temporary or permanent. Temporary abandonment of elderly persons is generally due to the difficulty or expense of finding temporary care for a person with complex medical needs. Needing a break, or wishing to go on a holiday, the usual caregivers will take their elderly patient to a hospital emergency room, or possibly a hotel, and then leave, planning to return once the vacation is over.
Incidents of granny dumping often happen before long weekends and may peak before Christmas when families head off on holidays. Caregivers in both Australia and New Zealand report that old people without acute medical problems are dropped off at hospitals. As a result, hospitals and care facilities have to carry an extra burden on their limited resources.
In Poland, the practice of dumping elderly persons before Christmas or Easter is known among emergency and ambulance personnel as Babka Świąteczna, i.e. "Holiday Granny"; the phrase also means "holiday pie".
Caregivers may also intend the abandonment to be permanent. In such cases, the caregivers will refuse to return to collect the elderly person, even when contacted by officials. Caregivers may go to great lengths to abandon the elderly person in a place far from their home location to prevent being tracked down and having the elderly person returned to their care.
Permanent abandonment might be done because the caregiver is mentally, physically, or financially unable to continue to provide care, or conscientiously as a tool and method of forcing institutions and government assistance to step in and provide placement and support which would otherwise be unavailable or denied to the caregiver or elderly person.
Caregivers who abandon their elderly charges may face criminal charges or legal repercussions for doing so, dependent on their local laws.
Institutional
A hospital or care facility's legal obligation in such cases can be complicated. The protocols for handling a permanently abandoned elderly person are unclear and vary between institutions. However, the expense of providing emergency or long-term care to an abandoned elderly person can represent a considerable burden on a facility's budget, capacity, and manpower. This has led to institutional granny dumping, where a hospital or nursing facility likewise abandons the elderly person to avoid the expense of their care.
Hospitals generally seek to place an abandoned elderly person with a long-term care or nursing facility, but such facilities may have no capacity, or may refuse to take the patient, who may have no ability to pay. When this occurs, hospitals are faced with the dilemma of either providing care themselves at great expense, or similarly dumping the patient by taking them off of hospital property and leaving them.
Nursing homes may similarly abandon low-income residents by evicting them and leaving them in hotels, homeless shelters, or on the street. Nursing homes may refuse to readmit residents after a trip home. In a granny dumping practice also called hospital dumping, residents may be sent to a hospital for temporary treatment and not permitted to return.
Another form of institutional granny dumping may occur when a nursing home closes, and staff abandon residents in the facility, or leave them in hotels, homeless shelters, or similar. During the COVID-19 pandemic, institutional granny dumping by nursing homes became a widespread problem in the United States as above average numbers of care facilities closed with no alternatives to provide care for the displaced residents.
References
Gerontology
Health care | Granny dumping | Biology | 1,384 |
7,617,637 | https://en.wikipedia.org/wiki/Artesunate%20suppositories | Artesunate suppositories are used for the treatment of malaria. Artesunate is an antimalarial water-soluble derivative of dihydroartemisinin. Artemisinins are sesquiterpene lactones isolated from Artemisia annua, a Chinese traditional medicine. These suppositories are given rectally due to the risk of death from severe malaria, as described below.
The risk of death from severe malaria is largely dependent on the time lag between the onset of symptoms and treatment. Rapid access and administration of effective treatment is therefore essential. For many patients, readily available oral drugs cannot be taken because of their symptoms (e.g., vomiting, convulsions, coma), and hospitals providing alternative, non-oral treatment are often inaccessible. The drug artesunate, given in rectal suppository form, provides a potential solution to this problem: it can be made available in remote areas and thus can be given at the onset of symptoms.
Artesunate is one of a number of artemisinin derivatives discovered and developed by Chinese scientists and registered in China since the 1980s. Since the 1990s, UNICEF/UNDP/World Bank/WHO Special Programme for Research and Training in Tropical Diseases (TDR) have supported studies to assess the properties of the drug. There were already indications that artesunate, given rectally, was effective in severe malaria. Significant work with artemisinin suppositories in severe malaria was conducted in Viet Nam in the early 1990s, and clinical trials of rectal artesunate followed by mefloquine treatment in moderately severe malaria were conducted in the mid-1990s in Thailand.
A major placebo-controlled clinical trial published in 2009 found that "if patients with severe malaria cannot be treated orally and access to injections will take several hours, a single inexpensive artesunate rectal suppository at the time of referral substantially reduces the risk of death or permanent disability."
References
Antimalarial agents
Terpenes and terpenoids | Artesunate suppositories | Chemistry | 425 |
52,005,845 | https://en.wikipedia.org/wiki/Petrolingual%20ligament | The petrolingual ligament lies at the posteroinferior aspect of the lateral wall of the cavernous sinus and marks the point at which the internal carotid artery enters the cavernous sinus.
Anatomically, the petrolingual ligament demarcates two of the segments of the internal carotid artery:
The petrolingual ligament marks the end of the petrous section of the internal carotid artery.
The cavernous section of the internal carotid artery begins at the superior aspect of the petrolingual ligament.
For surgeons and radiologists, it is important to be oriented to the location of this ligament in cases of possible dissection of the internal carotid artery, as it helps determine whether the dissection has occurred inside or outside the cavernous sinus.
References
Anatomy | Petrolingual ligament | Biology | 167 |
1,298,198 | https://en.wikipedia.org/wiki/SASL%20%28programming%20language%29 | SASL (St Andrews Static Language, alternatively St Andrews Standard Language) is a purely functional programming language developed by David Turner at the University of St Andrews in 1972, based on the applicative subset of ISWIM. In 1976 Turner redesigned and reimplemented it as a non-strict (lazy) language. In this form it was the foundation of Turner's later languages Kent Recursive Calculator (KRC) and Miranda, but SASL is untyped whereas Miranda has polymorphic types.
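SASL code itself is not widely runnable today, but the non-strict evaluation it pioneered can be loosely illustrated with Python generators (an analogy only, not SASL semantics): values are produced only when demanded, so "infinite" definitions are unproblematic.

```python
from itertools import islice

# Lazy stream of natural numbers: nothing is computed until a value is demanded.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

# Forcing only the first five elements of the "infinite" list terminates fine.
print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```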
Burroughs Corporation used SASL to write a compiler and operating system.
Notes
References
External links
The SASL Language Manual
Academic programming languages
Functional languages
History of computing in the United Kingdom
Programming languages created in 1972 | SASL (programming language) | Technology | 149 |
22,812,666 | https://en.wikipedia.org/wiki/RAFOS%20float | RAFOS floats are submersible devices used to map ocean currents well below the surface. They drift with these deep currents and listen for acoustic "pongs" emitted at designated times from multiple moored sound sources. By analyzing the time required for each pong to reach a float, researchers can pinpoint its position by trilateration. The floats are able to detect the pongs at ranges of hundreds of kilometers because they generally target a range of depths known as the SOFAR (Sound Fixing And Ranging) channel, which acts as a waveguide for sound. The name "RAFOS" derives from the earlier SOFAR floats, which emitted sounds that moored receivers picked up, allowing real-time underwater tracking. When the transmit and receive roles were reversed, so was the name: RAFOS is SOFAR spelled backward. Listening for sound requires far less energy than transmitting it, so RAFOS floats are cheaper and longer lasting than their predecessors, but they do not provide information in real-time: instead they store it on board, and upon completing their mission, drop a weight, rise to the surface, and transmit the data to shore by satellite.
Introduction
The importance of measuring ocean currents
The underwater world is still mostly unknown. The main reason for this is the difficulty of gathering information in situ, of experimenting, and even of reaching certain places. The ocean is nonetheless of crucial importance to scientists, as it covers about 71% of the planet.
Knowledge of ocean currents is of crucial importance. In important scientific areas, such as the study of global warming, ocean currents are found to greatly affect the Earth's climate, since they are the main heat transfer mechanism. They drive the heat flux between hot and cold regions, and in a larger sense drive almost every understood circulation. These currents also affect marine debris, and vice versa.
From an economic standpoint, a better understanding can help reduce shipping costs, since favorable currents help vessels save fuel. In the age of sail, this knowledge was even more essential. Even today, round-the-world sailing competitors use surface currents to their benefit. Ocean currents are also very important in the dispersal of many life forms. An example is the life cycle of the European eel.
The SOFAR channel
The SOFAR channel (short for Sound Fixing and Ranging channel), or deep sound channel (DSC), is a horizontal layer of water in the ocean at which depth the speed of sound is minimal, on average around 1200 m deep. It acts as a waveguide for sound, and low frequency sound waves within the channel may travel thousands of miles before dissipating.
The SOFAR channel is centred on the depth where the cumulative effect of temperature and water pressure (and, to a smaller extent, salinity) combine to create the region of minimum sound speed in the water column. Near the surface, the rapidly falling temperature causes a decrease in sound speed, or a negative sound speed gradient. With increasing depth, the increasing pressure causes an increase in sound speed, or a positive sound speed gradient.
The depth where the sound speed is at a minimum is the sound channel axis. This is a characteristic that can be found in optical guides. If a sound wave propagates away from this horizontal channel, the part of the wave furthest from the channel axis travels faster, so the wave turns back toward the channel axis. As a result, the sound waves trace a path that oscillates across the SOFAR channel axis. This principle is similar to long distance transmission of light in an optical fiber. In this channel, a sound has a range of over 2000 km.
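The sound-speed minimum described above is often modelled with Munk's idealized profile. The sketch below uses textbook parameter values (illustrative only, not tied to any particular ocean or deployment) and locates the channel axis by scanning the water column:

```python
import math

# Munk's canonical sound-speed profile; z1 is the channel-axis depth (m),
# b a scale depth, c1 the axis sound speed (m/s), eps a small perturbation.
def munk_sound_speed(z, z1=1300.0, b=1300.0, c1=1500.0, eps=0.00737):
    zhat = 2.0 * (z - z1) / b
    return c1 * (1.0 + eps * (zhat - 1.0 + math.exp(-zhat)))

# The minimum of the profile marks the SOFAR channel axis.
depths = range(0, 5001, 50)
axis = min(depths, key=munk_sound_speed)
print(axis)  # 1300 (m) for these parameters, on the order of the ~1200 m cited
```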
RAFOS float
Global idea
To use a RAFOS float, one submerges it at the specified location, so that it is carried by the current. Then, every so often (usually every 6 or 8 hours), an 80-second sound signal is sent from moored emitters. Because a signal transmitted in the ocean preserves its phase structure (or pattern) for several minutes, the signals are designed so that the frequency increases linearly by 1.523 Hz from start to end, centered around 250 Hz. Receivers then listen for this specific phase structure by comparing the incoming data with a reference 80-second signal. This makes it possible to reject noise introduced during the wave's travel by floating particles or fish.
The detection scheme can be simplified by keeping only the sign (positive or negative) of the signal, so that each time step contributes a single bit of new information. This method works very well, and allows the use of a small microprocessor aboard the float and a simple moored sound source, with the float itself doing the listening and computing. From the arrival times of the signals from two or more sound sources, and the previous location of the float, its current location can easily be determined to considerable (<1 km) accuracy. For instance, the float will listen for three sources and store the time of arrival of the two largest signals heard from each source. The location of the float is then computed onshore.
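The one-bit correlation detector can be sketched as follows. The durations and sweep width are scaled down from the real 80-second, ~1.5 Hz-wide signal so that the example runs quickly; the structure (hard-limit to one bit, then correlate against a reference chirp) is the point.

```python
import numpy as np

fs = 1000.0                       # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)     # 2 s reference chirp (the real one is 80 s)
f0, f1 = 200.0, 300.0             # wide sweep for a sharp demo peak; the real
                                  # RAFOS sweep is only ~1.5 Hz around 250 Hz
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2)
ref = np.sign(np.sin(phase))      # one-bit reference signal

# Simulated reception: the chirp arrives 0.5 s into a noisy record.
rng = np.random.default_rng(0)
rx = rng.normal(0.0, 1.0, int(3.5 * fs))
delay = int(0.5 * fs)
rx[delay:delay + ref.size] += 2.0 * np.sin(phase)

rx_bits = np.sign(rx)             # keep only one bit per sample, as the float does
corr = np.correlate(rx_bits, ref, mode="valid")
print(np.argmax(corr) / fs)       # estimated arrival time, ~0.5 s
```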
Technical characteristics
Mechanical characteristics
The floats consist of a glass pipe, 8 cm in diameter and 1.5 to 2.2 m long, that contains a hydrophone, signal processing circuits, a microprocessor, a clock and a battery. A float weighs about 10 kg. The lower end is sealed with a flat aluminium endplate where all electrical and mechanical penetrators are located. The glass is about 5 mm thick, giving the float a theoretical maximum depth of about 2700 m. The external ballast is suspended by a short piece of wire chosen for its resistance to saltwater corrosion. By dissolving the wire electrolytically, the 1 kg ballast is released and the float returns to the surface.
Electrical characteristics
The electronics can be divided into four categories: a satellite transmitter used after surfacing, the set of sensors, a time reference clock, and a microprocessor. The clock is essential in locating the float, since it is used as reference to calculate the time travel of the sound signals from the moored emitters. It is also useful to have the float work on schedule. The microprocessor controls all subsystems except the clock, and stores the collected data at a regular schedule. The satellite transmitter is used to send data packages to orbiting satellites after the surfacing. It usually takes three days for the satellite to collect all the dataset.
The isobaric model
An isobaric float aims to follow a constant-pressure surface, adjusting the ballast's weight to attain buoyancy at a certain depth. It is the most easily achieved model. For a float to be isobaric, its compressibility must be much lower than that of seawater. In that case, if the float were moved upwards from equilibrium, it would expand less than the surrounding seawater, leading to a restoring force pushing it downwards, back to its equilibrium position. Once correctly balanced, the float remains in a constant pressure field.
The isopycnal model
The aim of an isopycnal float is to follow surfaces of constant density, that is, to attain neutral buoyancy at a given density. To achieve this, pressure-induced restoring forces must be removed, so the float must have the same compressibility as the surrounding seawater. This is often achieved with a compressible element, such as a piston in a cylinder, so that the CPU can change the volume according to changes in pressure. An error of about 10% in the setting can lead to a 50 m depth difference once in the water, which is why floats are ballasted in tanks working at high pressure.
Measures and projects
Computing the float's trajectory
Once the float's mission is over and the data have been collected by the satellites, one major step is to compute the float's route over time. This is done by looking at the travel time of the signals from the moored speakers to the float, computed from the emission time (known accurately) and the reception time (known from the float's clock, corrected for any clock drift). Then, because the speed of sound in the sea is known to 0.3%, the position of the float can be determined to about 1 km by an iterative circular tracking procedure. The Doppler effect can also be taken into account. Since the float's speed is not known, a first closing speed is determined by measuring the shift in arrival time between two transmissions, over which the float is considered not to have moved.
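The iterative circular tracking can be sketched as a small Gauss-Newton fit of the float's position to the measured ranges (travel time times sound speed). All positions and times below are hypothetical, laid out on a local flat grid in kilometres:

```python
import math

C = 1.5  # nominal speed of sound in the channel, km/s

def locate(sources, times, guess, iters=20):
    """Fit (x, y) so that distances to the sources match the ranges C * t."""
    x, y = guess
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (sx, sy), tt in zip(sources, times):
            r = math.hypot(x - sx, y - sy)
            ux, uy = (x - sx) / r, (y - sy) / r   # unit vector source -> float
            res = C * tt - r                      # measured minus modelled range
            a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
            b1 += ux * res; b2 += uy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det          # Gauss-Newton update
        y += (a11 * b2 - a12 * b1) / det
    return x, y

# Synthetic check: a float at (40, 25) heard by three moored sources.
sources = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = (40.0, 25.0)
times = [math.hypot(truth[0] - sx, truth[1] - sy) / C for sx, sy in sources]
print(locate(sources, times, guess=(50.0, 50.0)))  # recovers ~(40.0, 25.0)
```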
The Argo project
The Argo project is an international collaboration between 50 research and operational agencies from 26 countries that measures the temperature, salinity and pressure of the top 2000 m of the ocean with a global array of floats. It uses over 3000 floats, some of which use RAFOS for underwater geolocation; most simply use the Global Positioning System (GPS) to obtain a position when surfacing every 10 days.
This project has greatly contributed to the scientific community and has produced a wealth of data that has since been used for mapping ocean parameters and for global change analysis.
Other results
Many results have been achieved thanks to these floats, from global mapping of ocean characteristics to, for example, the observation that floats systematically shoal (upwell) as they approach anticyclonic meanders and deepen (downwell) as they approach cyclonic meanders. On the left is a typical set of data from a RAFOS float. Today, such floats remain the best way to systematically probe the ocean's interior, since they are automatic and self-sufficient. In recent developments the floats have been able to measure various dissolved gases, and even to carry out small experiments in situ.
See also
Argo (oceanography)
SOFAR channel
Ocean acoustic tomography
References
External links
RAFOS Float – Ocean Instruments
http://www.beyonddiscovery.org/content/view.page.asp?I=224
https://web.archive.org/web/20110205111415/http://www.beyonddiscovery.org/content/view.article.asp?a=219
http://www.dosits.org/people/researchphysics/measurecurrents/
http://www.whoi.edu/instruments/viewInstrument.do?id=1061
http://www.argo.ucsd.edu/index.html
Oceanography | RAFOS float | Physics,Environmental_science | 2,132 |
20,994,191 | https://en.wikipedia.org/wiki/Drug%20Industry%20Documents%20Archive | The Drug Industry Documents Archive (DIDA) is a digital archive of pharmaceutical industry documents created and maintained by the University of California, San Francisco, Library and Center for Knowledge Management. DIDA is a part of the larger UCSF Industry Documents Library which includes the Truth Tobacco Industry Documents. The archive contains documents about pharmaceutical industry clinical trials, publication of study results, pricing, marketing, relations with physicians and drug company involvement in continuing medical education.
Most of the documents on DIDA were made public as a result of lawsuits against pharmaceutical companies Parke-Davis, Warner-Lambert, Pfizer, Merck & Co., Wyeth and Abbott Labs, among others. DIDA was founded in 2005 with the support of a gift by Thomas Greene, the attorney for David Franklin, whistleblower in United States ex rel. Franklin v. Parke-Davis, the case from which the first documents in the archive originated.
Researchers, as well as students, journalists, and the general public, use the archive to investigate the ways pharmaceutical companies market their products. The UCSF Library created this digital archive in an attempt to facilitate further research into the drug industry's practice of establishing close links with the medical community, which has been shown to influence scientific research, drug approval, prescription practices, and ultimately, consumer health.
Collections
DIDA contains:
internal pharmaceutical company documents
correspondence between drug companies and physicians, researchers and educational institutions
regulatory and legal documents
court filings
depositions
expert reports
internal University documents
Documents come from a variety of sources including:
Lawsuits against Merck & Co. regarding the marketing and use of Vioxx
The landmark whistleblower case involving Neurontin (gabapentin) and off-label marketing: United States of America ex rel. David Franklin vs. Parke-Davis, Division of Warner-Lambert/Pfizer
Investigations into conflict of interest and academic institutions by the US Senate Finance Committee, headed by Charles Grassley (R-Iowa)
Investigations into the marketing of Vioxx to physicians by the United States House Committee on Oversight and Government Reform, headed by Henry A. Waxman
Antitrust litigation involving Abbott Labs and their HIV/AIDS drug Norvir
Lawsuits against Wyeth for the unethical promotion of the hormone replacement drug Premarin to women. Wyeth paid medical ghostwriters to author journal articles about the drug.
References
Further reading
Waxman, HA. Memorandum to the Democratic Members of the Government Reform Committee: The marketing of Vioxx to physicians. May 5, 2005. Committee on Government Reform Minority Office, United States House of Representatives.
External links
Pharmaceutical industry
American digital libraries
Business and industry archives
University of California, San Francisco | Drug Industry Documents Archive | Chemistry,Biology | 543 |
992,829 | https://en.wikipedia.org/wiki/Microcell | A microcell is a cell in a mobile phone network served by a low power cellular base station (tower), covering a limited area such as a mall, a hotel, or a transportation hub. A microcell is usually larger than a picocell, though the distinction is not always clear. A microcell uses power control to limit the radius of its coverage area.
Typically the range of a microcell is less than two kilometers, whereas standard base stations may have ranges of up to 35 kilometres (22 mi). A picocell, on the other hand, covers 200 meters or less, and a femtocell is on the order of 10 meters, although AT&T markets its femtocell as a "microcell". "AT&T 3G MicroCell" is used as a trademark, however, and does not necessarily denote "microcell" technology.
A microcellular network is a radio network composed of microcells.
Rationale
Like picocells, microcells are usually used to add network capacity in areas with very dense phone usage, such as train stations. Microcells are often deployed temporarily during sporting events and other occasions in which extra capacity is known to be needed at a specific location in advance.
Cell size flexibility is a feature of 2G (and later) networks and is a significant part of how such networks have been able to improve capacity. Power controls implemented on digital networks make it easier to prevent interference from nearby cells using the same frequencies. By subdividing cells, and creating more cells to help serve high density areas, a cellular network operator can optimize the use of spectrum and ensure capacity can grow. By comparison, older analog systems have fixed limits, beyond which attempts to subdivide cells simply would result in an unacceptable level of interference.
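The effect of power control on cell radius can be illustrated with the free-space path-loss formula (an idealization: real urban propagation decays faster, so real ranges are much smaller than these numbers). All transmit powers and the sensitivity threshold below are hypothetical:

```python
import math

# Free-space path loss: FSPL(dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz).
# Inverting it gives the furthest distance at which the link still closes.
def max_range_km(tx_dbm, sensitivity_dbm, f_mhz):
    allowed_loss = tx_dbm - sensitivity_dbm
    return 10 ** ((allowed_loss - 32.44 - 20 * math.log10(f_mhz)) / 20)

# Hypothetical 43 dBm macro carrier vs. 23 dBm microcell carrier at 900 MHz,
# both against a -100 dBm receiver threshold.
macro = max_range_km(43, -100, 900)
micro = max_range_km(23, -100, 900)
print(round(macro / micro))  # 10: every 20 dB of power control cuts range 10x
```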
Microcell/picocell-only networks
Certain mobile phone systems, notably PHS and DECT, only provide microcellular (and Pico cellular) coverage. Microcellular systems are typically used to provide low cost mobile phone systems in high-density environments such as large cities. PHS is deployed throughout major cities in Japan as an alternative to ordinary cellular service. DECT is used by many businesses to deploy private license-free microcellular networks within large campuses where wireline phone service is less useful. DECT is also used as a private, non-networked, cordless phone system where its low power profile ensures that nearby DECT systems do not interfere with each other.
A forerunner of these types of network was the CT2 cordless phone system, which provided access to a looser network (without handover), again with base stations deployed in areas where large numbers of people might need to make calls. CT2's limitations ensured the concept never took off. CT2's successor, DECT, was provided with an interworking profile, GIP so that GSM networks could make use of it for microcellular access, but in practice the success of GSM within Europe, and the ability of GSM to support microcells without using alternative technologies, meant GIP was rarely used, and DECT's use in general was limited to non-GSM private networks, including use as cordless phone systems.
See also
Femtocell
GSM
Picocell
Small Cells
External links
Ericsson press release describing a GSM/UMTS picocell base station intended for residential use
Nokia 7200 Tutorial including definition of "Micro Cellular Network"
How To Install A Microcell Cell Phone Tower
References
Mobile telecommunications | Microcell | Technology | 737 |
1,117,429 | https://en.wikipedia.org/wiki/Ivermectin | Ivermectin is an antiparasitic drug. After its discovery in 1975, its first uses were in veterinary medicine to prevent and treat heartworm and acariasis. Approved for human use in 1987, it is used to treat infestations including head lice, scabies, river blindness (onchocerciasis), strongyloidiasis, trichuriasis, ascariasis and lymphatic filariasis. It works through many mechanisms to kill the targeted parasites, and can be taken by mouth, or applied to the skin for external infestations. It belongs to the avermectin family of medications.
William Campbell and Satoshi Ōmura were awarded the 2015 Nobel Prize in Physiology or Medicine for its discovery and applications. It is on the World Health Organization's List of Essential Medicines, and is approved by the U.S. Food and Drug Administration as an antiparasitic agent. In 2022, it was the 314th most commonly prescribed medication in the United States, with more than 200,000 prescriptions. It is available as a generic medicine.
Misinformation has been widely spread claiming that ivermectin is beneficial for treating and preventing COVID-19. Such claims are not backed by credible scientific evidence. Multiple major health organizations, including the U.S. Food and Drug Administration, the U.S. Centers for Disease Control and Prevention, the European Medicines Agency, and the World Health Organization have advised that ivermectin is not recommended for the treatment of COVID-19.
Medical uses
Ivermectin is used to treat human diseases caused by roundworms and a wide variety of external parasites.
Worm infections
For river blindness (onchocerciasis) and lymphatic filariasis, ivermectin is typically given as part of mass drug administration campaigns that distribute the drug to all members of a community affected by the disease. Adult worms survive in the skin and eventually recover to produce larval worms again; to keep the worms at bay, ivermectin is given at least once per year for the 10–15-year lifespan of the adult worms.
The World Health Organization (WHO) considers ivermectin the drug of choice for strongyloidiasis. Ivermectin is also the primary treatment for Mansonella ozzardi and cutaneous larva migrans. The U.S. Centers for Disease Control and Prevention (CDC) recommends ivermectin, albendazole, or mebendazole as treatments for ascariasis. Ivermectin is sometimes added to albendazole or mebendazole for whipworm treatment, and is considered a second-line treatment for gnathostomiasis.
Mites and insects
Ivermectin is also used to treat infection with parasitic arthropods. Scabies – infestation with the mite Sarcoptes scabiei – is most commonly treated with topical permethrin or oral ivermectin. A single application of permethrin is more efficacious than a single treatment of ivermectin. For most scabies cases, ivermectin is used in a two-dose regimen: the first dose kills the active mites, but not their eggs. Over the next week, the eggs hatch, and a second dose kills the newly hatched mites. The two-dose regimen of ivermectin has similar efficacy to the single dose permethrin treatment. Ivermectin is, however, more effective than permethrin when used in the mass treatment of endemic scabies.
For severe "crusted scabies", where the parasite burden is orders of magnitude higher than usual, the U.S. Centers for Disease Control and Prevention (CDC) recommends up to seven doses of ivermectin over the course of a month, along with a topical antiparasitic. Both head lice and pubic lice can be treated with oral ivermectin, an ivermectin lotion applied directly to the affected area, or various other insecticides. Ivermectin is also used to treat rosacea and blepharitis, both of which can be caused or exacerbated by Demodex folliculorum mites.
Contraindications
The only absolute contraindication to the use of ivermectin is hypersensitivity to the active ingredient or any component of the formulation. In children under the age of five or those of low body weight, there is limited data regarding the efficacy or safety of ivermectin, though the available data demonstrate few adverse effects. However, the American Academy of Pediatrics cautions against its use in such patients, as the blood-brain barrier is less developed and there may therefore be an increased risk of particular CNS side effects such as encephalopathy, ataxia, coma, or death. The American Academy of Family Physicians also recommends against use in these patients, given a lack of sufficient data to prove drug safety. Ivermectin is secreted in very low concentrations in breast milk. It remains unclear if ivermectin is safe during pregnancy.
Adverse effects
Side effects, although uncommon, include fever, itching, and skin rash when taken by mouth; and red eyes, dry skin, and burning skin when used topically for head lice. It is unclear if the drug is safe for use during pregnancy, but it is probably acceptable for use during breastfeeding.
Ivermectin is considered relatively free of toxicity at standard doses (around 300 μg/kg). Based on the drug safety data sheet for ivermectin, side effects are uncommon. However, serious adverse events following ivermectin treatment are more common in people with very high burdens of larval Loa loa worms in their blood. Those with over 30,000 microfilariae per milliliter of blood risk inflammation and capillary blockage due to the rapid death of the microfilariae following ivermectin treatment.
One concern is neurotoxicity after large overdoses, which in most mammalian species may manifest as central nervous system depression, ataxia, coma, and even death, as might be expected from potentiation of inhibitory chloride channels.
Since drugs that inhibit the enzyme CYP3A4 often also inhibit P-glycoprotein transport, the risk of increased absorption past the blood-brain barrier exists when ivermectin is administered along with other CYP3A4 inhibitors. These drugs include statins, HIV protease inhibitors, many calcium channel blockers, lidocaine, the benzodiazepines, and glucocorticoids such as dexamethasone.
During a typical treatment course, ivermectin can cause minor aminotransferase elevations. In rare cases it can cause mild clinically apparent liver disease.
To provide context for the dosing and toxicity ranges, the median lethal dose (LD50) of ivermectin in mice is 25 mg/kg (oral), and 80 mg/kg in dogs, corresponding to an approximated human-equivalent LD50 range of 2.02–43.24 mg/kg, far above its FDA-approved usage (a single dose of 0.150–0.200 mg/kg for specific parasitic infections). While ivermectin has also been studied for use in COVID-19, and while it has some ability to inhibit SARS-CoV-2 in vitro, achieving 50% inhibition in vitro was estimated to require an oral dose of 7.0 mg/kg (about 35 times the maximum FDA-approved dosage), high enough to constitute ivermectin poisoning. Despite insufficient data to establish any safe and effective dosing regimen for ivermectin in COVID-19, doses far exceeding FDA-approved dosing have been taken, leading the CDC to issue a warning about overdose symptoms including nausea, vomiting, diarrhea, hypotension, decreased level of consciousness, confusion, blurred vision, visual hallucinations, loss of coordination and balance, seizures, coma, and death. The CDC advises against consuming doses intended for livestock or for external use, and warns that growing misuse of ivermectin-containing products is leading to more harmful overdoses.
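The human-equivalent dose figures quoted above follow from standard body-surface-area (allometric) scaling. A minimal sketch is below; the Km conversion factors used (mouse 3, dog 20, human 37) are assumed values from published FDA guidance, and the rounding differs marginally from the 2.02 mg/kg figure in the text:

```python
# Body-surface-area scaling: HED = animal dose * (Km_animal / Km_human).
# Km factors are assumed standard values, for illustration only.
KM = {"mouse": 3, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal dose (mg/kg) to an approximate human-equivalent dose."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

print(round(human_equivalent_dose(25, "mouse"), 2))  # 2.03 -- from the mouse oral LD50
print(round(human_equivalent_dose(80, "dog"), 2))    # 43.24 -- from the dog LD50
```

Both results bracket the 2.02–43.24 mg/kg range stated above.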
Pharmacology
Mechanism of action
Ivermectin and its related drugs act by interfering with the nerve and muscle functions of helminths and insects. The drug binds to glutamate-gated chloride channels common to invertebrate nerve and muscle cells. The binding pushes the channels open, which increases the flow of chloride ions and hyperpolarizes the cell membranes, paralyzing and killing the invertebrate. Ivermectin is safe for mammals (at the normal therapeutic doses used to cure parasite infections) because mammalian glutamate-gated chloride channels occur only in the brain and spinal cord: the avermectins usually do not cross the blood–brain barrier and are unlikely to bind to other mammalian ligand-gated channels.
Pharmacokinetics
Ivermectin can be given by mouth, topically, or via injection. Oral doses are absorbed into systemic circulation; the alcoholic solution form is more orally available than tablet and capsule forms. Ivermectin is widely distributed in the body.
Ivermectin does not readily cross the blood-brain barrier of mammals due to the presence of P-glycoprotein (the MDR1 gene mutation affects the function of this protein). Crossing may still become significant if ivermectin is given at high doses, in which case brain levels peak 2–5 hours after administration. In contrast to mammals, ivermectin can cross the blood-brain barrier in tortoises, often with fatal consequences.
Ivermectin is metabolized into eight different products by human CYP3A4, two of which (M1, M2) remain toxic to mosquitos. M1 and M2 also have longer elimination half-lives of about 55 hours. CYP3A5 produces a ninth metabolite.
Chemistry
Fermentation of Streptomyces avermitilis yields eight closely related avermectin homologues, of which B1a and B1b form the bulk of the products isolated. In a separate chemical step, the mixture is hydrogenated to give ivermectin, which is an approximately 80:20 mixture of the two 22,23-dihydroavermectin compounds.
Ivermectin is a macrocyclic lactone.
History
The avermectin family of compounds was discovered by Satoshi Ōmura of Kitasato University and William Campbell of Merck. In 1970, Ōmura isolated a strain of Streptomyces avermitilis from woodland soil near a golf course along the southeast coast of Honshu, Japan. Ōmura sent the bacteria to William Campbell, who showed that the bacterial culture could cure mice infected with the roundworm Heligmosomoides polygyrus. Campbell isolated the active compounds from the bacterial culture, naming them "avermectins" and the bacterium Streptomyces avermitilis for the compounds' ability to clear mice of worms (in Latin: a 'without', vermis 'worms'). Of the various avermectins, Campbell's group found the compound "avermectin B1" to be the most potent when taken orally. They synthesized modified forms of avermectin B1 to improve its pharmaceutical properties, eventually choosing a mixture of at least 80% 22,23-dihydroavermectin B1a and up to 20% 22,23-dihydroavermectin B1b, a combination they called "ivermectin".
The discovery of ivermectin has been described as a combination of "chance and choice." Merck was looking for a broad-spectrum anthelmintic, which ivermectin is; however, Campbell noted that they "...also found a broad-spectrum agent for the control of ectoparasitic insects and mites."
Merck began marketing ivermectin as a veterinary antiparasitic in 1981. By 1986, ivermectin was registered for use in 46 countries and was widely administered to cattle, sheep, and other animals. By the late 1980s, ivermectin was the bestselling veterinary medicine in the world. Following its blockbuster success as a veterinary antiparasitic, another Merck scientist, Mohamed Aziz, collaborated with the World Health Organization to test the safety and efficacy of ivermectin against onchocerciasis in humans. They found it to be highly safe and effective, prompting Merck to register ivermectin for human use as "Mectizan" in France in 1987. A year later, Merck CEO Roy Vagelos agreed that Merck would donate all ivermectin needed to eradicate river blindness. In 1998, the donation was expanded to include the ivermectin used to treat lymphatic filariasis.
Ivermectin earned the title of "wonder drug" for the treatment of nematodes and arthropod parasites. Ivermectin has been used safely by hundreds of millions of people to treat river blindness and lymphatic filariasis.
Half of the 2015 Nobel Prize in Physiology or Medicine was awarded jointly to Campbell and Ōmura for discovering ivermectin, "the derivatives of which have radically lowered the incidence of river blindness and lymphatic filariasis, as well as showing efficacy against an expanding number of other parasitic diseases".
Society and culture
COVID-19 misinformation
Economics
The initial price proposed by Merck in 1987 was per treatment, which was unaffordable for the patients who most needed ivermectin. The company has donated hundreds of millions of courses of treatment since 1988 in more than 30 countries. Between 1995 and 2010, using donated ivermectin to prevent river blindness, the program is estimated to have prevented seven million years of disability at a cost of .
Ivermectin is considered an inexpensive drug. As of 2019, ivermectin tablets (Stromectol) in the United States were the least expensive treatment option for lice in children at approximately , while Sklice, an ivermectin lotion, cost around for .
, the cost effectiveness of treating scabies and lice with ivermectin has not been studied.
Brand names
It is sold under the brand names Heartgard, Sklice and Stromectol in the United States, Ivomec worldwide by Merial Animal Health, Mectizan in Canada by Merck, Iver-DT in Nepal by Alive Pharmaceutical and Ivexterm in Mexico by Valeant Pharmaceuticals International. In Southeast Asian countries, it is marketed by Delta Pharma Ltd. under the trade name Scabo 6. The formulation for rosacea treatment is sold under the brand name Soolantra. While in development, it was assigned the code MK-933 by Merck.
Research
Parasitic disease
Ivermectin has been researched in laboratory animals, as a potential treatment for trichinosis and trypanosomiasis.
Ivermectin has also been tested on zebrafish infected with Pseudocapillaria tomentosa.
Tropical diseases
Ivermectin is also of interest in the prevention of malaria, as it is toxic both to the malaria parasite itself and to the mosquitoes that carry it. A direct effect on malaria parasites could not be shown in an experimental infection of volunteers with Plasmodium falciparum. Use of ivermectin at the higher doses necessary to control malaria is probably safe, though large clinical trials have not yet been done to definitively establish the efficacy or safety of ivermectin for prophylaxis or treatment of malaria. Mass drug administration of ivermectin to treat and prevent nematode infestation is effective for eliminating malaria-bearing mosquitoes and thereby potentially reducing residual malaria transmission. However, although ivermectin kills malaria-bearing mosquitoes, a 2021 Cochrane review found that the evidence to date shows no significant impact of community administration of ivermectin on the incidence of malaria transmission.
One alternative to ivermectin is moxidectin, which has been approved by the Food and Drug Administration for use in people with river blindness. Moxidectin has a longer half-life than ivermectin and may eventually supplant ivermectin as it is a more potent microfilaricide, but there is a need for additional clinical trials, with long-term follow-up, to assess whether moxidectin is safe and effective for treatment of nematode infection in children and women of childbearing potential.
There is tentative evidence that ivermectin kills bedbugs, as part of integrated pest management for bedbug infestations. However, such use may require a prolonged course of treatment which is of unclear safety.
NAFLD
In 2013, ivermectin was demonstrated as a novel ligand of the farnesoid X receptor, a therapeutic target for nonalcoholic fatty liver disease (NAFLD).
COVID-19
During the COVID-19 pandemic, ivermectin was researched for possible utility in preventing and treating COVID-19, but no good evidence of benefit was found.
Veterinary use
Ivermectin is routinely used to control parasitic worms in the gastrointestinal tract of ruminant animals. These parasites normally enter the animal while it grazes, pass through the bowel, and settle and mature in the intestines, after which they produce eggs that leave the animal in its droppings and can infest new pastures. Ivermectin is effective against only some of these parasites because of rising anthelmintic resistance, which has arisen from persistent use of the same anthelmintic drugs over the past 40 years. Additionally, the use of ivermectin in livestock has a profound impact on dung beetles, such as T. lusitanicus, as it can cause acute toxicity in these insects.
In dogs, ivermectin is routinely used as prophylaxis against heartworm. Dogs with defects in the P-glycoprotein gene (MDR1), often collie-like herding dogs, can be severely poisoned by ivermectin. The mnemonic "white feet, don't treat" refers to Scotch collies that are vulnerable to ivermectin. Some other dog breeds (especially the Rough Collie, the Smooth Collie, the Shetland Sheepdog, and the Australian Shepherd), also have a high incidence of mutation within the MDR1 gene (coding for P-glycoprotein) and are sensitive to the toxic effects of ivermectin. For dogs, the insecticide spinosad may have the effect of increasing the toxicity of ivermectin.
A 0.01% ivermectin topical preparation for treating ear mites in cats is available. Clinical evidence suggests 7-week-old kittens are susceptible to ivermectin toxicity.
Ivermectin is sometimes used as an acaricide in reptiles, both by injection and as a diluted spray. While this works well in some cases, care must be taken, as several species of reptiles are very sensitive to ivermectin. Use in turtles is particularly contraindicated.
A characteristic of the antinematodal action of ivermectin is its potency: for instance, to combat Dirofilaria immitis in dogs, ivermectin is effective at 0.001 milligram per kilogram of body weight when administered orally.
Notes
References
External links
Acaricides
Antiparasitic agents
GABAA receptor positive allosteric modulators
Glycine receptor agonists
Insecticides
Japanese inventions
Macrolides
Drugs developed by Merck & Co.
Nicotinic agonists
Peripherally selective drugs
Veterinary drugs
World Health Organization essential medicines
Chloride channel openers
Wikipedia medicine articles ready to translate | Ivermectin | Biology | 4,298 |
11,847,421 | https://en.wikipedia.org/wiki/Mediaroom | Mediaroom is a collection of software for operators to deliver Internet Protocol television (IPTV) subscription services, including content-protected live TV, digital video recorder, video on demand, multiscreen, and application services. These services can be delivered via a range of devices inside and outside customers' homes, including wired and Wi-Fi set top boxes, PCs, tablets, smartphones and other connected devices – over both the operator's managed IP networks as well as "over the top" (OTT) or unmanaged networks.
According to a marketing firm, Mediaroom was the market leader in IPTV for 2014.
History
Microsoft TV platform
Microsoft announced the UltimateTV service from DirecTV in October 2000, based on technology acquired from WebTV Networks (later renamed MSN TV).
The software was called the Microsoft TV platform (which included the Foundation Edition); it had integrated digital video recorder (DVR) and Internet access capabilities. It was released on October 26, 2000. The software to decode and view digital video programming was derived from WebTV (later called MSN TV). UltimateTV supported picture-in-picture and could record up to 35 hours of video content. The Internet capabilities were provided by the Microsoft TV platform software, which was used for the TV guide. The guide could display programming schedules for 14 days, and recording could be scheduled for any of the shows. It could also be used to access e-mail. However, Microsoft lost distribution when DirecTV accepted an acquisition bid from EchoStar, which had its own DVR. By 2003, UltimateTV had been taken off the market, although DirecTV continued to support it; the EchoStar acquisition ultimately failed.
The UltimateTV developers in Mountain View, California were eliminated by early 2002.
By June 2002, Moshe Lichtman replaced Jon DeVaan as leader of the division as more reductions were announced.
Foundation Edition
The Microsoft TV Foundation Edition platform integrated video-on-demand (VOD), DVR and HDTV programming with live television programming. It includes an electronic programming guide (EPG) that could be used to access any supported service from a centralized directory. The EPG could be used to search and filter the listings as well. The EPG was released around 2002. Comcast announced it would adopt this software in May 2004. Microsoft TV Foundation Edition platform also included an authoring environment that could be used to create content consumable from the set top box.
IPTV Edition
Microsoft TV IPTV Edition is an IPTV platform for accessing both on-demand and live television content over a two-way IP network, coupled with DVR functionality. It is intended for cable networks that have an IPTV infrastructure.
Microsoft Mediaroom
The IPTV platform was renamed Microsoft Mediaroom on June 18, 2007 at the NXTcomm conference. In January 2010, Microsoft Mediaroom 2.0 was announced at the International Consumer Electronics Show. On April 8, 2013, Microsoft and Ericsson announced plans for Ericsson to purchase Mediaroom. The sale was completed on September 5, 2013, and the platform officially became Ericsson Mediaroom.
Mediaroom
On February 6, 2014, Ericsson announced it had entered into an agreement to purchase multiscreen video platform company Azuki Systems. Azuki Systems was renamed Ericsson Mediaroom Reach.
MediaKind
On July 10, 2018, it was announced that the new identity of Ericsson Media Solutions is MediaKind. The CEO is Allen Broome.
Products
Current key products in Mediaroom's portfolio include Mediaroom, Mediaroom Reach, and MediaFirst TV Platform.
As of June 2016, Mediaroom TV was used in 65 commercial deployments in 34 countries, delivering services to over 16 million households via more than 30 million devices.
Mediaroom TV platforms are offered by 90 operators, including AT&T, Deutsche Telekom, CenturyLink, Telus, Hawaiian Telcom, Bell Canada (including Bell MTS), Hargray, Singtel, Telefónica SA, Cross Telephone, and Portugal Telecom.
See also
Windows Media Center
Interactive television
Smart TV
List of smart TV platforms and middleware software
10-foot user interface
Set-top box
Tasman (browser engine)
Xbox Video
References
External links
Mediaroom – official website.
Microsoft TV homepage
2007 software
Streaming television
Microsoft software | Mediaroom | Technology | 879 |
67,614,629 | https://en.wikipedia.org/wiki/Turner%20angle | The Turner angle Tu, introduced by Ruddick (1983) and named after J. Stewart Turner, is a parameter used to describe the local stability of an inviscid water column as it undergoes double-diffusive convection. Temperature and salinity, which together largely determine water density, both vary with the water column's vertical structure. Plotting the two contributions on orthogonal coordinates, the angle from the axis indicates their relative importance to stability. The Turner angle is defined as:

Tu = tan−1(α ∂θ/∂z + β ∂S/∂z, α ∂θ/∂z − β ∂S/∂z)

where tan−1 is the four-quadrant arctangent; α is the coefficient of thermal expansion; β is the equivalent coefficient for the addition of salinity, sometimes referred to as the "coefficient of saline contraction"; θ is potential temperature; S is salinity; and z is the vertical coordinate, positive upward. The relation between Tu and stability is as follows:
If −45° < Tu < 45°, the column is statically stable.
If −90° < Tu < −45°, the column is unstable to diffusive convection.
If 45° < Tu < 90°, the column is unstable to salt fingering.
If −90° > Tu or Tu > 90°, the column is statically unstable to Rayleigh–Taylor instability.
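Since Tu is the four-quadrant arctangent of (α ∂θ/∂z + β ∂S/∂z) against (α ∂θ/∂z − β ∂S/∂z), the regimes above can be reproduced in a few lines. A minimal sketch, assuming the two gradient products are already known (e.g. from a CTD profile, with z positive upward):

```python
import math

def turner_angle(alpha_theta_z, beta_s_z):
    """Turner angle in degrees from the two density-gradient terms.

    alpha_theta_z : alpha * d(theta)/dz (thermal term)
    beta_s_z      : beta * dS/dz (haline term)
    """
    return math.degrees(math.atan2(alpha_theta_z + beta_s_z,
                                   alpha_theta_z - beta_s_z))

def regime(tu):
    """Classify the double-diffusive regime from Tu (degrees)."""
    if -45 < tu < 45:
        return "doubly stable"
    if -90 < tu <= -45:
        return "diffusive convection"
    if 45 <= tu < 90:
        return "salt fingering"
    return "statically unstable"

# Warm, salty water above cold, fresh water, with temperature dominating:
tu = turner_angle(3e-5, 1e-5)
print(round(tu, 1), regime(tu))  # 63.4 salt fingering
```

The boundary angles (±45°, ±90°) are assigned to the convective regimes here by convention.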
Relation to density ratio
The Turner angle is related to the density ratio Rρ = (α ∂θ/∂z) / (β ∂S/∂z) by:

Rρ = −tan(Tu + 45°)
The Turner angle has several advantages over the density ratio:
The infinite scale of Rρ is replaced by a finite one running from −π to +π;
The strong fingering and weak fingering regions occupy about the same space on the Tu scale;
The value of Rρ, which is indeterminate when the salinity gradient vanishes, is well defined in terms of Tu;
The regimes and their corresponding angles are easy to remember, and symmetric in the sense that if Tu corresponds to Rρ, then −Tu corresponds to Rρ⁻¹. This links roughly equal strengths of finger- and diffusive-sense convection.
Nevertheless, Turner angle is not as directly obvious as density ratio when assessing different attributions of thermal and haline stratification. Its strength mainly focuses on classification.
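The mapping between the two quantities can be sketched directly from the relation Rρ = −tan(Tu + 45°); a minimal illustration (note the relation is singular at Tu = 45°, where the salinity gradient vanishes and Rρ is infinite):

```python
import math

def density_ratio(tu_deg):
    """Density ratio R_rho from the Turner angle: R_rho = -tan(Tu + 45 deg)."""
    return -math.tan(math.radians(tu_deg + 45.0))

# Tu = 0: thermal and haline terms are equal and opposing, so R_rho = -1.
print(round(density_ratio(0.0), 6))   # -1.0
# Both Tu = 90 and Tu = -90 map to R_rho = 1, which is why Tu disambiguates
# the fingering and diffusive limits while R_rho alone does not.
print(round(density_ratio(90.0), 6))  # 1.0
```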
Physical description
Turner angle is usually discussed when researching ocean stratification and double diffusion.
The Turner angle assesses vertical stability, i.e., how the density of the water column changes with depth. Density is generally related to the potential temperature and salinity profiles: the cooler and saltier the water, the denser it is. When light water overlies dense water, the column is stably stratified, and the buoyancy force preserves this stratification. The Brunt–Väisälä frequency (N) is a measure of stability: if N² > 0, the fluid is stably stratified.
A stably stratified fluid may be doubly stable. For instance, in the ocean, if the temperature decreases with depth (∂θ/∂z>0) and salinity increases with depth (∂S/∂z<0), then that part of the ocean is stably stratified with respect to both θ and S. In this state, the Turner angle is between -45° and 45°.
However, the fluid column can maintain static stability even if two attributes have opposite effects on the stability; the effect of one just has to have the predominant effect, overwhelming the smaller effect. In this sort of stable stratification, double diffusion occurs. Both attributes diffuse in opposite directions, reducing stability and causing mixing and turbulence. If the slower-diffusing component is the one that is stably-stratified, then the vertical gradient will stay smooth. If the faster-diffusing component is the one providing stability, then the interface will develop long "fingers", as diffusion will create pockets of fluid with intermediate attributes, but not intermediate density.
In the ocean, heat diffuses faster than salt. If colder, fresher water overlies warmer, saltier water, the salinity structure is stable while the temperature structure is unstable (∂θ/∂z<0, ∂S/∂z<0). In these diffusive cases, the Turner angle is -45° to -90°. If warmer, saltier water overlies colder, fresher water (∂θ/∂z>0, ∂S/∂z>0), salt fingering can be expected. This is because patchy mixing will create pockets of cold, salty water and pockets of warm, fresh water, and these pockets will sink and rise. In these fingering cases, the Turner angle is 45° to 90°.
Since Turner angle can indicate the thermal and haline properties of the water column, it is used to discuss thermohaline water structures. For instance, it can be used to define the boundaries of the subarctic front.
Characteristics
The global meridional distributions of the Turner angle at the surface and at 300-m depth in different seasons were investigated by Tippins and Tomczak (2003), indicating the overall stability of the ocean over long time scales. Notably, 300-m depth is deep enough to lie beneath the mixed layer during all seasons over most of the subtropics, yet shallow enough to lie entirely within the permanent thermocline, even in the tropics.
At the surface, as temperature and salinity increase from the Subpolar Front towards the subtropics, the Turner angle is positive; it becomes negative where the meridional salinity gradient reverses on the equatorial side of the subtropical surface salinity maximum. Tu becomes positive again in the Pacific and Atlantic Oceans near the equator. A band of negative Tu in the South Pacific extends westward along 45°S, produced by low salinities due to heavy rainfall off the southern coast of Chile.
At 300-m depth, positive Tu dominates nearly everywhere, with only narrow bands of negative Turner angles. This reflects the shape of the permanent thermocline, which sinks to its greatest depth in the center of the oceanic gyres and rises again towards the equator, and indicates a vertical structure in which both temperature and salinity decrease with depth.
Availability
Implementations of the Turner angle function are available:
For Python, in the GSW Oceanographic Toolbox as the function gsw_Turner_Rsubrho.
For R, see Home/CRAN/gsw/gsw_Turner_Rsubrho: Turner Angle and Density Ratio.
For MATLAB, see GSW-Matlab/gsw_Turner_Rsubrho.m.
References
External links
The Gibbs SeaWater (GSW) Oceanographic Toolbox of TEOS-10
gsw_Turner_Rsubrho
Home/CRAN/gsw/gsw_Turner_Rsubrho: Turner Angle and Density Ratio.
GSW-Matlab/gsw_Turner_Rsubrho.m
Fluid dynamics
Oceanography | Turner angle | Physics,Chemistry,Engineering,Environmental_science | 1,436 |
32,765,170 | https://en.wikipedia.org/wiki/FLYWCH%20zinc%20finger | In molecular biology, the FLYWCH zinc finger is a zinc finger domain. It is found in a number of eukaryotic proteins. FLYWCH is a C2H2-type zinc finger characterised by five conserved hydrophobic residues, containing the conserved sequence motif:
F/Y-X(n)-L-X(n)-F/Y-X(n)-WXCX(6-12)CX(17-22)HXH, where X indicates any amino acid. This domain was first characterised in Drosophila modifier of mdg4 proteins, Mod(mdg4), putative chromatin modulators involved in higher-order chromatin domains. Mod(mdg4) proteins share a common N-terminal BTB/POZ domain but differ in their C-terminal region, most containing C-terminal FLYWCH zinc finger motifs. The FLYWCH domain in Mod(mdg4) proteins has a putative role in protein-protein interactions; for example, Mod(mdg4)-67.2 interacts with the DNA-binding protein Su(Hw) via its FLYWCH domain.
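As an illustration, the consensus motif can be rendered as a loose regular-expression pattern. This is a hypothetical sketch: the upper bounds chosen for the unconstrained X(n) runs are assumptions, since only the C/H spacings are specified by the motif itself.

```python
import re

# F/Y-X(n)-L-X(n)-F/Y-X(n)-WXCX(6-12)CX(17-22)HXH, X = any residue.
# The {0,30} bounds on the X(n) runs are illustrative assumptions.
FLYWCH_RE = re.compile(
    r"[FY].{0,30}?L.{0,30}?[FY].{0,30}?W.C.{6,12}C.{17,22}H.H"
)

# A synthetic sequence laid out to satisfy the motif:
seq = "FAALAAYAAWKC" + "A" * 6 + "C" + "A" * 17 + "HQH"
print(bool(FLYWCH_RE.search(seq)))  # True
```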
FLYWCH domains have been described in other proteins as well, including suppressor of killer of prune, Su(Kpn), which contains 4 terminal FLYWCH zinc finger motifs in a tandem array and a C-terminal glutathione S-transferase (GST) domain.
References
Protein domains | FLYWCH zinc finger | Biology | 312 |
60,048,566 | https://en.wikipedia.org/wiki/PyClone | PyClone is a software package that implements a hierarchical Bayes statistical model to estimate cellular frequency patterns of mutations in a population of cancer cells, using observed alternate allele frequencies, copy number, and loss of heterozygosity (LOH) information. PyClone outputs clusters of variants based on the calculated cellular frequencies of mutations.
Background
According to the clonal evolution model proposed by Peter Nowell, a mutated cancer cell can accumulate more mutations as it progresses, creating subclones. These cells divide and mutate further to give rise to other subpopulations. Consistent with the theory of natural selection, some mutations may be advantageous to the cancer cells, for example by making them resistant to a previous treatment. Heterogeneity within a single cancer tumour can arise from single nucleotide polymorphism/variation (SNP/SNV) events, microsatellite shifts and instability, loss of heterozygosity (LOH), copy number variation, and karyotypic variations including chromosome structural aberrations and aneuploidy. Because current methods of molecular analysis lyse and sequence a mixed population of cancer cells, heterogeneity within the tumour cell population is under-detected. This results in a lack of information on the clonal composition of cancer tumours; more knowledge in this area would aid decisions about therapies.
PyClone is a hierarchical Bayes statistical model that uses measurements of allele frequency and allele-specific copy number to estimate the proportion of tumor cells harboring a mutation. Using deeply sequenced data to find putative clonal clusters, PyClone estimates the cellular prevalence (the portion of cancer cells harbouring a mutation) for each mutation in the input sample. Progress has been made in measuring variant allele frequency with deep sequencing data, but statistical approaches to cluster mutations into biologically relevant groups remain underdeveloped. The commonness of a mutation between cells is difficult to measure because the proportion of cells that harbour a mutation does not relate simply to allelic prevalence; allelic prevalence depends on multiple factors, such as the proportion of 'contaminating' normal cells in the sample, the proportion of tumor cells harboring the mutation, the number of allelic copies of the mutation in each cell, and sources of technical noise. PyClone is among the first methods to incorporate variant allele frequencies (VAFs) with allele-specific copy numbers. It also accounts for allelic imbalances, where alleles of a gene are expressed at different levels in a given cell, which may occur due to segmental CNVs and normal cell contamination.
Workflow
Input
PyClone requires 2 inputs:
A set of deeply sequenced mutations from one or more samples derived from a single patient. Deep sequencing, also referred to as high throughput sequencing, uses methods such as sequencing by synthesis to sequence a genomic region with high coverage in order to detect rare clonal types and contaminating normal cells that comprise as little as 1% of the sample.
A measure of allele specific copy number at each mutation location. This is obtained from microarray-based comparative genomic hybridization or whole genome sequencing methods to detect chromosomal or copy number changes.
Statistical modeling
For each mutation, the PyClone model divides the input sample into three sub-populations. The three sub-populations are the normal (non-malignant) population consisting of normal cells, the reference cancer population consisting of cancer cells wild type for the mutation, and the variant cancer cell population consisting of the cancer cells with at least one variant allele of the mutation.
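The point of this three-population decomposition is that the expected variant allele fraction is a copy-number-weighted mixture over the populations. The sketch below illustrates the idea in simplified form, ignoring sequencing error; the function and parameter names are illustrative, not PyClone's actual API:

```python
def expected_vaf(phi, t, b, c_v, c_r=2, c_n=2):
    """Expected variant allele fraction of one mutation.

    phi : cellular prevalence (fraction of cancer cells carrying the mutation)
    t   : tumour content (fraction of sampled cells that are cancerous)
    b   : mutant-allele copies per cell in the variant population
    c_v : total copy number in the variant population
    c_r : total copy number in the reference (wild-type) cancer population
    c_n : total copy number in normal cells
    """
    # Each population contributes alleles in proportion to
    # (cell fraction x copy number); only the variant population
    # contributes mutant alleles.
    total = (1 - t) * c_n + t * (1 - phi) * c_r + t * phi * c_v
    return (t * phi * b) / total

# Heterozygous diploid mutation in every cancer cell of a 60%-pure sample:
print(round(expected_vaf(phi=1.0, t=0.6, b=1, c_v=2), 3))  # 0.3
```

This also shows why allelic prevalence is not the same as cellular prevalence: normal contamination and copy number both shift the observed fraction.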
PyClone implements four advances in its statistic model that were tested on simulated datasets :
Beta-binomial emission densities
PyClone uses beta-binomial emission densities, which are more effective than the binomial models used by previous tools: they more accurately model input datasets with greater variance in the allelic prevalence measurements. Higher accuracy in modeling this variance translates into higher confidence in the clusterings output by PyClone.
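In general form, the beta-binomial likelihood of observing k variant reads out of n can be evaluated stably with log-gamma functions. A minimal sketch, not PyClone's implementation; the mean/precision parametrization shown is one common convention:

```python
from math import lgamma, exp

def log_betabinom(k, n, a, b):
    """Log pmf of Beta-Binomial(n, a, b): C(n,k) * B(k+a, n-k+b) / B(a, b)."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return (log_choose
            + lgamma(k + a) + lgamma(n - k + b) + lgamma(a + b)
            - lgamma(n + a + b) - lgamma(a) - lgamma(b))

# Mean/precision parametrization: a = m*s, b = (1-m)*s. Small s means heavy
# overdispersion relative to Binomial(n, m); large s approaches the binomial.
m, s = 0.5, 50.0
p = exp(log_betabinom(60, 100, m * s, (1 - m) * s))
print(p)
```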
Priors
PyClone acknowledges that some geometrical structures and properties of the clonal population to be reconstructed, such as copy number, are known. When not enough information is available or taken into account, the reconstruction is usually of low confidence and many solutions are possible. PyClone uses flexible prior probability estimates ("priors") of possible mutational genotypes to link allelic prevalence measurements to zygosity and copy number variants, and is one of the first methods to incorporate variant allele frequencies (VAFs) with allele-specific copy numbers.
Bayesian nonparametric clustering
Instead of fixing the number of clusters prior to clustering, Bayesian nonparametric clustering is used to discover groupings of mutations and the number of groups simultaneously. This allows for cellular prevalence estimates to reflect uncertainty in this parameter.
Section sequencing
Multiple samples from the same patient can be analyzed at the same time to leverage the scenario in which clonal populations are shared across samples. When multiple samples are sequenced, subclonal populations that have similar allelic prevalences in some samples but not in others can be differentiated from each other.
Output
PyClone outputs posterior densities of cellular prevalences for the mutations in the sample and a matrix containing the probability any two mutations occur in the same cluster. Estimates of clonal populations from differing cellular prevalences of mutations are then generated from the posterior densities.
Applications
PyClone is used to analyze deeply sequenced (over 100× coverage) mutations to identify and quantify clonal populations in tumors. Some applications include:
Xenografting is used as a reasonable model to study human breast cancer but the consequences of engraftment and genomic propagation of xenografts have not been examined at a single-cell resolution. PyClone can be used to follow the clonal dynamics of initial grafts and serial propagation of primary and metastatic human breast cancers in immunodeficient mice. PyClone can predict how clonal dynamics differ after initial engraftment, over serial passage generations.
Circulating tumour DNA (plasma DNA) analysis can be used to track tumour burden and analyse cancer genomes non-invasively, but the extent to which it represents metastatic heterogeneity is unknown. PyClone can be used to compare the clonal population structures present in tumour and plasma samples from amplicon sequencing data. Stem and metastatic-clade mutation clusters can be inferred using PyClone and then compared to results from clonal ordering.
Serial time point sequencing: PyClone can be used to study the evolution of mutational clusters as cancer progresses. With samples taken from different time points, PyClone can identify the expansion and decline of initial clones and discover newly acquired subclones that arise during treatment. Understanding clonal dynamics improves understanding of how related cancers such as MDS, MPN and sAML compare in risk, and gives insight into the clinical significance of somatic mutations.
Section sequencing: PyClone is most effective for section sequencing tumor DNA. Section sequencing is when samples are taken from different portions of a single tumour to infer clonal structure from differential cellular prevalence. An advantage of section sequencing is more statistical power and information on the spatial position and interactions of the clones, uncovering information on how tumors evolve in space.
Assumptions
A key assumption of the PyClone model is that all cells within a clonal population have the same genotype. This assumption is likely false since copy number alterations and loss of heterozygosity events are common in cancer cells. The amount of error introduced by this assumption depends on the variability of genotype of cells in the location of interest. For example, in solid tumors the cells of a sample are spatially close together resulting in a small error rate, but for liquid tumors the assumption may introduce more error as cancer cells are mobile.
Another assumption made is that the sample follows a perfect and persistent phylogeny. This means that no site mutates more than once in a clonal population and each site has at most one mutant genotype. Mutations that revert to normal genotype, deletions of segments of DNA harbouring mutations and recurrent mutations are not accounted for in PyClone as it would lead to unidentifiable explanations for some observed data.
Limitations
In order to obtain input data for PyClone, cell lysis is a required step to prepare bulk sample sequencing. This results in the loss of information on the complete set of mutations defining a clonal population. PyClone can distinguish and identify the frequency of different clonal populations but can not identify exact mutations defining these populations.
Instead of clustering cells by mutational composition, PyClone clusters mutations that have similar cellular frequencies. When sub-clones have similar cellular frequencies, PyClone will mistakenly cluster these subclones together. The chance of making this error decreases when using targeted deep sequencing with high coverage and joint analysis of multiple samples.
A confounding factor for the PyClone model is imprecise input information on the genotype of the sample and the depth of sequencing: uncertainty from these inputs propagates into the posterior densities, forcing greater reliance on the model's assumptions when interpreting and clustering the sample.
Similar tools
SciClone – SciClone is a Bayesian clustering method on single nucleotide variants (SNVs).
Clomial – Clomial is a Bayesian clustering method with a decomposition process. Both Clomial and SciClone restrict analysis to SNVs located in copy-number-neutral regions. The tumor is physically divided into subsections and deep sequenced to measure normal and variant alleles. Their inference models use an expectation-maximization algorithm.
GLClone – GLClone uses a hierarchical probabilistic model and Bayesian posteriors to calculate copy number alterations in sub-clones.
Cloe - Cloe uses a phylogenetic latent feature model for analyzing sequencing data to distinguish the genotypes and the frequency of clones in a tumor.
PhyC - PhyC uses an unsupervised learning approach to identify subgroups of patients by clustering their respective cancer evolutionary trees. Its authors identified patterns of different evolutionary modes in a simulation analysis, and also detected phenotype-related and cancer-type-related subgroups, characterizing the tree structures within subgroups using actual datasets.
PhyloWGS - PhyloWGS reconstructs tumor phylogenies and characterizes the subclonal populations present in a tumor sample using both SSMs and CNVs.
References
Computational biology | PyClone | Biology | 2,260 |
631,494 | https://en.wikipedia.org/wiki/Moment%20magnitude%20scale | The moment magnitude scale (MMS; denoted explicitly with or Mwg, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale () defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often use the term "Richter scale" when referring to the moment magnitude scale.
Moment magnitude is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate; that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the United States Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (ML) and surface-wave magnitude (Ms) scales. Subtypes of the moment magnitude scale reflect different ways of estimating the seismic moment.
History
Richter scale: the original measure of earthquake magnitude
At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnitude" that was internally consistent and corresponded roughly with estimates of an earthquake's energy. He established a reference point and the ten-fold (exponential) scaling of each degree of magnitude, and in 1935 published what he called the "magnitude scale", now called the local magnitude scale, labeled . (This scale is also known as the Richter scale, but news media sometimes use that term indiscriminately to refer to other similar scales.)
The local magnitude scale was developed on the basis of shallow, moderate-sized earthquakes recorded at moderate distances, conditions where the surface waves are predominant. At greater depths, distances, or magnitudes the surface waves are greatly reduced, and the local magnitude scale underestimates the magnitude, a problem called saturation. Additional scales were developed – a surface-wave magnitude scale (Ms) by Beno Gutenberg in 1945, a body-wave magnitude scale (mb) by Gutenberg and Richter in 1956, and a number of variants – to overcome the deficiencies of the ML scale, but all are subject to saturation. A particular problem was that the Ms scale (which in the 1970s was the preferred magnitude scale) saturates and therefore underestimates the energy release of "great" earthquakes such as the 1960 Chilean and 1964 Alaskan earthquakes. These had Ms magnitudes of 8.5 and 8.4 respectively but were notably more powerful than other M 8 earthquakes; their moment magnitudes were closer to 9.6 and 9.3, respectively.
Single couple or double couple
The study of earthquakes is challenging as the source events cannot be observed directly, and it took many years to develop the mathematics for understanding what the seismic waves from an earthquake can tell about the source event. An early step was to determine how different systems of forces might generate seismic waves equivalent to those observed from earthquakes.
The simplest force system is a single force acting on an object. If it has sufficient strength to overcome any resistance it will cause the object to move ("translate"). A pair of forces, acting on the same "line of action" but in opposite directions, will cancel; if they cancel (balance) exactly there will be no net translation, though the object will experience stress, either tension or compression. If the pair of forces are offset, acting along parallel but separate lines of action, the object experiences a rotational force, or torque. In mechanics (the branch of physics concerned with the interactions of forces) this model is called a couple, also simple couple or single couple. If a second couple of equal and opposite magnitude is applied their torques cancel; this is called a double couple. A double couple can be viewed as "equivalent to a pressure and tension acting simultaneously at right angles".
In 1923 Hiroshi Nakano showed that certain aspects of seismic waves could be explained in terms of a double couple model. This led to a three-decade-long controversy over the best way to model the seismic source: as a single couple, or a double couple. While Japanese seismologists favored the double couple, most seismologists favored the single couple. Although the single couple model had some shortcomings, it seemed more intuitive, and there was a belief – mistaken, as it turned out – that the elastic rebound theory for explaining why earthquakes happen required a single couple model. In principle these models could be distinguished by differences in the radiation patterns of their S waves, but the quality of the observational data was inadequate for that.
The debate was eventually resolved in favor of the double couple: theoretical work showed that the seismic radiation from a rupture modeled as a dislocation matches the pattern produced by a double couple, but not by a single couple. This was confirmed as better and more plentiful data coming from the World-Wide Standard Seismograph Network (WWSSN) permitted closer analysis of seismic waves. Notably, in 1966 Keiiti Aki showed that the seismic moment of the 1964 Niigata earthquake as calculated from the seismic waves on the basis of a double couple was in reasonable agreement with the seismic moment calculated from the observed physical dislocation.
Dislocation theory
A double couple model suffices to explain an earthquake's far-field pattern of seismic radiation, but tells us very little about the nature of an earthquake's source mechanism or its physical features. While slippage along a fault was theorized as the cause of earthquakes (other theories included movement of magma, or sudden changes of volume due to phase changes), observing this at depth was not possible, and understanding what could be learned about the source mechanism from the seismic waves requires an understanding of the source mechanism.
Modeling the physical process by which an earthquake generates seismic waves required much theoretical development of dislocation theory, first formulated by the Italian Vito Volterra in 1907, with further developments by E. H. Love in 1927. More generally applied to problems of stress in materials, an extension by F. Nabarro in 1951 was recognized by the Russian geophysicist A. V. Vvedenskaya as applicable to earthquake faulting. In a series of papers starting in 1956 she and other colleagues used dislocation theory to determine part of an earthquake's focal mechanism, and to show that a dislocation – a rupture accompanied by slipping – was indeed equivalent to a double couple.
In a pair of papers in 1958, J. A. Steketee worked out how to relate dislocation theory to geophysical features. Numerous other researchers worked out other details, culminating in a general solution in 1964 by Burridge and Knopoff, which established the relationship between double couples and the theory of elastic rebound, and provided the basis for relating an earthquake's physical features to seismic moment.
Seismic moment
Seismic moment – symbol M0 – is a measure of the fault slip and area involved in the earthquake. Its value is the torque of each of the two force couples that form the earthquake's equivalent double-couple. (More precisely, it is the scalar magnitude of the second-order moment tensor that describes the force components of the double-couple.) Seismic moment is measured in units of newton meters (N·m) or joules, or (in the older CGS system) dyne-centimeters (dyn-cm).
The first calculation of an earthquake's seismic moment from its seismic waves was by Keiiti Aki for the 1964 Niigata earthquake. He did this two ways. First, he used data from distant stations of the WWSSN to analyze long-period (200 second) seismic waves (wavelength of about 1,000 kilometers) to determine the magnitude of the earthquake's equivalent double couple. Second, he drew upon the work of Burridge and Knopoff on dislocation to determine the amount of slip, the energy released, and the stress drop (essentially how much of the potential energy was released). In particular, he derived an equation that relates an earthquake's seismic moment to its physical parameters:
M0 = μAD, with μ being the rigidity (or resistance to moving) of a fault with a surface area A over an average dislocation (distance) D. (The product AD is known as the "geometric moment" or "potency".) By this equation the moment determined from the double couple of the seismic waves can be related to the moment calculated from knowledge of the surface area of fault slippage and the amount of slip. In the case of the Niigata earthquake the dislocation estimated from the seismic moment reasonably approximated the observed dislocation.
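A sketch of this relation in code, using illustrative fault parameters (the rigidity, rupture dimensions, and slip below are assumed example values, not drawn from the text):

```python
def seismic_moment(rigidity_pa, area_m2, avg_slip_m):
    """M0 = mu * A * D, in newton-meters."""
    return rigidity_pa * area_m2 * avg_slip_m

# Illustrative values (assumed): crustal rigidity ~3e10 Pa,
# a 40 km x 15 km rupture surface with 2 m of average slip.
m0 = seismic_moment(3.0e10, 40e3 * 15e3, 2.0)  # 3.6e19 N*m
```

Under the standard Hanks–Kanamori formula, a moment of this size corresponds to roughly Mw 7.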
Seismic moment is a measure of the work (more precisely, the torque) that results in inelastic (permanent) displacement or distortion of the Earth's crust. It is related to the total energy released by an earthquake. However, the power or potential destructiveness of an earthquake depends (among other factors) on how much of the total energy is converted into seismic waves. This is typically 10% or less of the total energy, the rest being expended in fracturing rock or overcoming friction (generating heat).
Nonetheless, seismic moment is regarded as the fundamental measure of earthquake size, representing more directly than other parameters the physical size of an earthquake. As early as 1975 it was considered "one of the most reliably determined instrumental earthquake source parameters".
Introduction of an energy-motivated magnitude Mw
Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as
log10 Es = 1.5 Ms + 11.8
(with Es in ergs). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of Ms. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were assigned surface-wave magnitudes well below their true size. Caltech seismologist Hiroo Kanamori recognized this deficiency and took the simple but important step of defining a magnitude based on estimates of radiated energy, Mw, where the "w" stood for work (energy):
Mw = (log10 Es − 11.8)/1.5 (Es in ergs).
Kanamori recognized that measurement of radiated energy is technically difficult since it involves the integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M0. Using an approximate relation between radiated energy and seismic moment (which assumes stress drop is complete and ignores fracture energy),
Es ≈ M0 / (2 × 10^4)
(where Es is in joules and M0 is in N·m), Kanamori approximated Mw by
Mw = (log10 M0 − 9.1)/1.5.
Moment magnitude scale
The formula above made it much easier to estimate the energy-based magnitude Mw, but it changed the fundamental nature of the scale into a moment magnitude scale. USGS seismologist Thomas C. Hanks noted that Kanamori's Mw scale was very similar to a previously reported relationship between ML and M0. In 1979, Hanks and Kanamori combined their work to define a new magnitude scale based on estimates of seismic moment,
Mw = (2/3) log10 M0 − 6.07,
where M0 is defined in newton meters (N·m).
Current use
Moment magnitude is now the most common measure of earthquake size for medium to large earthquake magnitudes, but in practice the seismic moment M0, the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes.
Popular press reports most often deal with significant earthquakes. For these events, the preferred magnitude is the moment magnitude Mw, not Richter's local magnitude ML.
Definition
The symbol for the moment magnitude scale is Mw, with the subscript "w" meaning mechanical work accomplished. The moment magnitude is a dimensionless value defined by Hiroo Kanamori as
Mw = (2/3) log10 M0 − 10.7,
where M0 is the seismic moment in dyne⋅cm (10^−7 N⋅m). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, such as the local magnitude ML and the surface wave magnitude Ms. Thus, a magnitude zero microearthquake has a seismic moment of approximately 1.1×10^9 N⋅m, while the Great Chilean earthquake of 1960, with an estimated moment magnitude of 9.4–9.6, had a seismic moment between 1.4×10^23 and 2.8×10^23 N⋅m.
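A direct sketch of this definition, written in SI units (1 N⋅m = 10^7 dyn⋅cm, which shifts the constant from −10.7 to −6.07, i.e. Mw = (2/3)(log10 M0 − 9.1) with M0 in N⋅m):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori moment magnitude; SI-unit form equivalent to
    Mw = (2/3) log10 M0 - 10.7 with M0 in dyne-cm."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)
```

For example, `moment_magnitude(10**9.1)` returns 0, matching the magnitude-zero microearthquake mentioned above.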
Seismic moment magnitude (Mwg or Das Magnitude Scale) and moment magnitude (Mw) scales
To understand the magnitude scales based on M0, detailed background on the Mwg and Mw scales is given below.
Mw scale
Hiroo Kanamori defined a magnitude scale (Log W0 = 1.5 Mw + 11.8, where W0 is the minimum strain energy) for great earthquakes using the Gutenberg–Richter Eq. (A):
Log Es = 1.5 Ms + 11.8 (A)
Kanamori used W0 in place of Es (dyn⋅cm), considered a constant ratio (W0/M0 = 5 × 10^−5) in Eq. (A), and estimated Ms, denoting the result Mw (dyn⋅cm). The energy Eq. (A) is derived by substituting m = 2.5 + 0.63 M into the energy equation Log E = 5.8 + 2.4 m (Richter 1958), where m is the Gutenberg unified magnitude and M is a least-squares approximation to the magnitude determined from surface-wave magnitudes. After substituting the ratio of seismic energy (E) to seismic moment (M0), i.e., E/M0 = 5 × 10^−5, into the Gutenberg–Richter energy magnitude Eq. (A), Hanks and Kanamori provided Eq. (B):
Log M0 = 1.5 Ms + 16.1 (B)
Note that Eq. (B) had already been derived by Hiroo Kanamori, who termed it Mw. Eq. (B) was based on large earthquakes; hence, in order to validate it for intermediate and smaller earthquakes, Hanks and Kanamori (1979) compared it with Eq. (1) of Percaru and Berckhemer (1978) for the magnitude range 5.0 ≤ Ms ≤ 7.5. Note that Eq. (1) of Percaru and Berckhemer (1978) is not reliable for this range, owing to the inconsistency of the defined magnitude ranges (moderate to large earthquakes defined as Ms ≤ 7.0 and Ms = 7–7.5) and scarce data in the lower magnitude range (≤ 7.0), which poorly represents global seismicity (e.g., see Figs. 1A, B, 4 and Table 2 of Percaru and Berckhemer 1978); furthermore, it is only valid for Ms ≤ 7.0.
Relations between seismic moment, potential energy released and radiated energy
Seismic moment is not a direct measure of energy changes during an earthquake. The relations between seismic moment and the energies involved in an earthquake depend on parameters that have large uncertainties and that may vary between earthquakes. Potential energy is stored in the crust in the form of elastic energy due to built-up stress and gravitational energy. During an earthquake, a portion of this stored energy is transformed into
energy dissipated in frictional weakening and inelastic deformation in rocks by processes such as the creation of cracks
heat
radiated seismic energy
The potential energy drop caused by an earthquake is related approximately to its seismic moment by
ΔW ≈ (σ̄/μ) M0,
where σ̄ is the average of the absolute shear stresses on the fault before and after the earthquake, and μ is the average of the shear moduli of the rocks that constitute the fault. Currently, there is no technology to measure absolute stresses at all depths of interest, nor a method to estimate them accurately, and σ̄ is thus poorly known. It could vary greatly from one earthquake to another. Two earthquakes with identical M0 but different σ̄ would have released different ΔW.
The radiated energy caused by an earthquake is approximately related to seismic moment by
Es ≈ (ηR Δσs / 2μ) M0,
where ηR is the radiated efficiency and Δσs is the static stress drop, i.e., the difference between shear stresses on the fault before and after the earthquake. These two quantities are far from being constants. For instance, ηR depends on rupture speed; it is close to 1 for regular earthquakes but much smaller for slower earthquakes such as tsunami earthquakes and slow earthquakes. Two earthquakes with identical M0 but different ηR or Δσs would have radiated different Es.
Because Es and M0 are fundamentally independent properties of an earthquake source, and since Es can now be computed more directly and robustly than in the 1970s, introducing a separate magnitude associated to radiated energy was warranted. Choy and Boatwright defined in 1995 the energy magnitude
Me = (2/3) log10 Es − 2.9,
where Es is in J (N·m).
Comparative energy released by two earthquakes
Assuming the ratio σ̄/μ of average absolute shear stress to rigidity is the same for all earthquakes, one can consider M0 as a measure of the potential energy change ΔW caused by earthquakes. Similarly, if one assumes the factor ηR Δσs / 2μ (radiated efficiency times static stress drop over twice the rigidity) is the same for all earthquakes, one can consider M0 as a measure of the energy Es radiated by earthquakes.
Under these assumptions, the following formula, obtained by solving for M0 the equation defining Mw, allows one to assess the ratio of energy release (potential or radiated) between two earthquakes of different moment magnitudes, m1 and m2:
E1/E2 = 10^(1.5(m1 − m2)).
As with the Richter scale, an increase of one step on the logarithmic scale of moment magnitude corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1,000 times increase in energy. Thus, an earthquake of Mw 7.0 contains 1,000 times as much energy as one of Mw 5.0 and about 32 times that of Mw 6.0.
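The scaling stated above can be checked with a one-line sketch (the function name is illustrative):

```python
def energy_ratio(m1, m2):
    """Ratio of energy release between earthquakes of moment magnitudes m1 and m2."""
    return 10 ** (1.5 * (m1 - m2))
```

Here `energy_ratio(7.0, 5.0)` gives exactly 1,000, and `energy_ratio(6.0, 5.0)` gives about 32.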
Comparison with TNT equivalents
To give a sense of the energy a given magnitude represents, the seismic energy released during an earthquake is sometimes compared to the effect of the conventional chemical explosive TNT.
By the Gutenberg–Richter relation above, the seismic energy follows directly from the magnitude, and it can be expressed in tons of TNT or converted into Hiroshima-bomb equivalents.
For comparison of seismic energy (in joules) with the corresponding explosion energy, a value of 4.2 × 10^9 joules per ton of TNT applies. The table illustrates the relationship between seismic energy and moment magnitude.
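A small sketch of the conversion, assuming the 11.8 constant of Eq. (A) refers to ergs, so that the relation becomes log10 Es = 1.5 M + 4.8 with Es in joules, combined with the TNT factor stated above (this interpretation of the constant is an assumption of the sketch):

```python
def tnt_tons(magnitude):
    """Approximate TNT equivalent (in tons) of the radiated seismic energy.

    Assumes log10 Es = 1.5*M + 4.8 with Es in joules (the Gutenberg-Richter
    relation with its constant shifted from ergs to SI) and the stated
    conversion of 4.2e9 joules per ton of TNT.
    """
    es_joules = 10 ** (1.5 * magnitude + 4.8)
    return es_joules / 4.2e9
```

For magnitude 6.0 this gives roughly 15,000 tons of TNT, on the order of the Hiroshima bomb.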
The end of the scale is at the value 10.6, corresponding to the assumption that at this value the Earth's crust would have to break apart completely.
Subtypes of Mw
Various ways of determining moment magnitude have been developed, and several subtypes of the scale can be used to indicate the basis used.
– Based on moment tensor inversion of long-period (~10 – 100 s) body-waves.
– From a moment tensor inversion of complete waveforms at regional distances (~1,000 miles). Sometimes called RMT.
– Derived from a centroid moment tensor inversion of intermediate- and long-period body- and surface-waves.
– Derived from a centroid moment tensor inversion of the W-phase.
() – Developed by Seiji Tsuboi for quick estimation of the tsunami potential of large near-coastal earthquakes from measurements of the P waves, and later extended to teleseismic earthquakes in general.
– A duration-amplitude procedure which takes into account the duration of the rupture, providing a fuller picture of the energy released by longer lasting ("slow") ruptures than seen with .
–Rapidly estimates earthquake magnitude by combining maximum displacements of teleseismic P wave and source durations.
See also
Earthquake engineering
Lists of earthquakes
Seismic magnitude scales
Notes
Sources
External links
USGS: Measuring earthquakes
Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center
Seismic magnitude scales
Geophysics
Logarithmic scales of measurement | Moment magnitude scale | Physics,Mathematics | 4,266 |
8,591,662 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Draco | This is the list of notable stars in the constellation Draco.
See also
List of stars by constellation
References
Bibliography
List
Draco | List of stars in Draco | Astronomy | 27 |
65,409,877 | https://en.wikipedia.org/wiki/Karin%20Aurivillius | Karin Aurivillius (1920–1982) was a Swedish chemist and crystallographer at the University of Lund, Sweden. She determined the crystal structures of many mercury compounds.
During the 1960s, she helped develop crystallography in Sweden while working closely with her prominent husband and fellow chemist, Bengt Aurivillius (1918–1994), who was a professor of inorganic chemistry at Lund University.
To reveal the structural chemistry of inorganic mercury (II) oxide or sulphide compounds, she studied crystal structures using X-rays and neutron diffraction methods. Some of her research was conducted at the Institute of Atomic Energy Research at the Atomic Energy Research Establishment (AERE) located in Didcot, Oxfordshire, United Kingdom.
Honors
The extremely rare mineral aurivilliusite was named in honor of Karin Aurivillius, for "her significant contributions to the crystal chemistry of mercury-bearing inorganic compounds." The mineral is dark grey-black with a dark red-brown streak and has been found at a small prospect pit near the abandoned Clear Creek mercury mine, New Idria district, San Benito County, California.
Selected works
Aurivillius, Karin. "The crystal structure of mercury (II) oxide studied by X-ray and neutron diffraction methods." Acta Chemica Scandinavica 10 (1956): 852–866.
Aurivillius, Karin. The structural chemistry of inorganic mercury (II) compounds: some aspects of the determination of the positions of "light" atoms in the presence of "heavy" atoms in crystal structures. Diss. 1965.
Aurivillius, Karin, and Inga-Britt Carlsson. "The structure of hexagonal mercury (II) oxide." Acta Chemica Scandinavica 12 (1958): 1297.
Aurivillius, Karin, and Bo Arne Nilsson. "The crystal structure of mercury (II) phosphate, Hg3 (PO4) 2." Z. Kristallogr 141.1-2 (1975): 1-10.
Aurivillius, Karin, and Claes Stålhandske. "A reinvestigation of the crystal structures of HgSO4 and CdSO4." Zeitschrift für Kristallographie-Crystalline Materials 153.1-2 (1980): 121–129.
Aurivillius, Karin, and Lena Folkmarson. "The crystal structure of terlinguaite Hg4O2Cl2." Acta Chemica Scandinavica 22 (1968): 2529–2540.
Aurivillius, Karin, and Birgitta Malmros. "Studies on sulphates, selenates and chromates of mercury (II)." Acta Chem. Scand 15.9 (1961): 1932–1938.
Aurivillius, Karin, and G-I. Bertinsson. "Structures of complexes between metal halides and phosphinothioethers or related ligands. X. [1, 9-Bis (diphenylphosphino)-3, 7-dithianonane] monoiodonickel tetraphenylborate." Acta Crystallographica Section B: Structural Crystallography and Crystal Chemistry 36.4 (1980): 790–794.
References
1920 births
1982 deaths
20th-century Swedish chemists
Swedish women chemists
20th-century Swedish women scientists
Crystallographers | Karin Aurivillius | Chemistry,Materials_science | 773 |
39,156,141 | https://en.wikipedia.org/wiki/Symmetric%20cone | In mathematics, symmetric cones, sometimes called domains of positivity, are open convex self-dual cones in Euclidean space which have a transitive group of symmetries, i.e. invertible operators that take the cone onto itself. By the Koecher–Vinberg theorem these correspond to the cone of squares in finite-dimensional real Euclidean Jordan algebras, originally studied and classified by . The tube domain associated with a symmetric cone is a noncompact Hermitian symmetric space of tube type. All the algebraic and geometric structures associated with the symmetric space can be expressed naturally in terms of the Jordan algebra. The other irreducible Hermitian symmetric spaces of noncompact type correspond to Siegel domains of the second kind. These can be described in terms of more complicated structures called Jordan triple systems, which generalize Jordan algebras without identity.
Definitions
A convex cone C in a finite-dimensional real inner product space V is a convex set invariant under multiplication by positive scalars. It spans the subspace C – C and the largest subspace it contains is C ∩ (−C). It spans the whole space if and only if it contains a basis. Since the convex hull of the basis is a polytope with non-empty interior, this happens if and only if C has non-empty interior. The interior in this case is also a convex cone. Moreover, an open convex cone coincides with the interior of its closure, since any interior point in the closure must lie in the interior of some polytope in the original cone. A convex cone is said to be proper if its closure, also a cone, contains no subspaces.
Let C be an open convex cone. Its dual is defined as
C* = {X ∈ V : (X, Y) > 0 for all Y ≠ 0 in the closure of C}.
It is also an open convex cone and C** = C. An open convex cone C is said to be self-dual if C* = C. It is necessarily proper, since
it does not contain 0, so cannot contain both X and −X.
The automorphism group of an open convex cone is defined by
Aut C = {g ∈ GL(V) : g(C) = C}.
Clearly g lies in Aut C if and only if g takes the closure of C onto itself. So Aut C is a closed subgroup of GL(V) and hence a Lie group. Moreover, Aut C* = (Aut C)*, where g* is the adjoint of g. C is said to be homogeneous if Aut C acts transitively on C.
The open convex cone C is called a symmetric cone if it is self-dual and homogeneous.
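As a concrete illustration (a numerical sketch, not part of the formal development; numpy is assumed available), the open cone of positive-definite matrices inside the space of real symmetric matrices is the standard example of a symmetric cone: the trace inner product of two elements of the cone is positive (self-duality), and the congruences X ↦ gXgᵀ for invertible g supply a transitive group of symmetries (homogeneity).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # a random symmetric positive-definite matrix (a point of the cone)
    m = rng.normal(size=(n, n))
    return m @ m.T + n * np.eye(n)

a, b = random_spd(4), random_spd(4)

# self-duality: the trace inner product of two cone elements is positive
assert np.trace(a @ b) > 0

# homogeneity: X -> g X g^T maps the cone onto itself for invertible g
g = rng.normal(size=(4, 4))
x = g @ a @ g.T
assert np.all(np.linalg.eigvalsh(x) > 0)
```

Here `random_spd` and the chosen seed are only for illustration; any invertible g gives an automorphism of the cone.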
Group theoretic properties
If C is a symmetric cone, then Aut C is closed under taking adjoints.
The identity component Aut0 C acts transitively on C.
The stabilizers of points are maximal compact subgroups, all conjugate, and exhaust the maximal compact subgroups of Aut C.
In Aut0 C the stabilizers of points are maximal compact subgroups, all conjugate, and exhaust the maximal compact subgroups of Aut0 C.
The maximal compact subgroups of Aut0 C are connected.
The component group of Aut C is isomorphic to the component group of a maximal compact subgroup and therefore finite.
Aut C ∩ O(V) and Aut0 C ∩ O(V) are maximal compact subgroups in Aut C and Aut0 C.
C is naturally a Riemannian symmetric space isomorphic to G / K where G = Aut0 C. The Cartan involution is defined by σ(g)=(g*)−1, so that K = G ∩ O(V).
Spectral decomposition in a Euclidean Jordan algebra
In their classic paper, Jordan, von Neumann and Wigner studied and completely classified a class of finite-dimensional Jordan algebras that are now called either Euclidean Jordan algebras or formally real Jordan algebras.
Definition
Let E be a finite-dimensional real vector space with a symmetric bilinear product operation

(a, b) ↦ ab,

an identity element 1 such that a1 = a for a in E, and a real inner product (a,b) for which the multiplication operators L(a), defined by L(a)b = ab on E, are self-adjoint and satisfy the Jordan relation

L(a)L(a²) = L(a²)L(a).
As will turn out below, the condition on adjoints can be replaced by the equivalent condition that
the trace form Tr L(ab) defines an inner product. The trace form has the advantage of being manifestly invariant under automorphisms of the Jordan algebra, so the automorphism group is a closed subgroup of O(E) and thus a compact Lie group. In practical examples, however, it is often easier to produce an inner product for which the L(a) are self-adjoint than to verify positive-definiteness of the trace form directly. (The equivalent original condition of Jordan, von Neumann and Wigner was that if a sum of squares of elements vanishes then each of those elements has to vanish.)
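These axioms can be checked numerically in the algebra of real symmetric matrices with product a∘b = ½(ab + ba) (one of the algebras classified below). The following sketch, assuming numpy, verifies the Jordan relation applied to an element and the self-adjointness of L(a) for the trace inner product.

```python
import numpy as np

rng = np.random.default_rng(1)

def jordan(x, y):
    # Jordan product on symmetric matrices: x ∘ y = (xy + yx)/2
    return (x @ y + y @ x) / 2

def sym(n):
    # a random real symmetric n x n matrix
    m = rng.normal(size=(n, n))
    return (m + m.T) / 2

a, b, c = sym(5), sym(5), sym(5)
a2 = jordan(a, a)

# Jordan relation L(a)L(a^2) = L(a^2)L(a), applied to the element b
assert np.allclose(jordan(a, jordan(a2, b)), jordan(a2, jordan(a, b)))

# self-adjointness of L(a) for the trace inner product (x, y) = Tr xy
assert np.isclose(np.trace(jordan(a, b) @ c), np.trace(b @ jordan(a, c)))
```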
Power associativity
From the Jordan condition it follows that the Jordan algebra is power associative, i.e. the Jordan subalgebra generated by any single element a in E is actually an associative commutative algebra. Thus, defining a^n inductively by a^n = a(a^(n−1)), the following associativity relation holds:

a^m a^n = a^(m+n),

so the subalgebra can be identified with R[a], polynomials in a. In fact polarizing the Jordan relation (replacing a by a + tb and taking the coefficient of t) yields

L(a²b) + 2L(a)L(b)L(a) = 2L(ab)L(a) + L(a²)L(b).

This identity implies that L(a^m) is a polynomial in L(a) and L(a²) for all m; in particular all the operators L(a^m) commute with one another. In fact, assuming the result for exponents lower than m, setting b = a^(m−1) in the polarized Jordan identity gives

L(a^(m+1)) = 2L(a^m)L(a) + L(a²)L(a^(m−1)) − 2L(a)L(a^(m−1))L(a),

a recurrence relation showing inductively that L(a^(m+1)) is a polynomial in L(a) and L(a²). (Here a²a^(m−1) = a^(m+1), since a²a^(m−1) = L(a^(m−1))L(a)a = L(a)L(a^(m−1))a = L(a)a^m, using the commutation of the operators L(a^k) already established for lower powers.)

Consequently, if power-associativity holds when the first exponent is ≤ m, then it also holds for m + 1, since

a^(m+1)a^n = L(a^n)L(a)a^m = L(a)L(a^n)a^m = L(a)a^(m+n) = a^(m+n+1).
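In the symmetric-matrix algebra, Jordan powers coincide with ordinary matrix powers, so power associativity can be checked directly. A small numerical sketch (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

def jordan(x, y):
    # Jordan product x ∘ y = (xy + yx)/2
    return (x @ y + y @ x) / 2

def jpow(x, n):
    # a^n = a ∘ a^(n-1), computed inductively in the Jordan algebra
    p = np.eye(x.shape[0])
    for _ in range(n):
        p = jordan(x, p)
    return p

m = rng.normal(size=(4, 4))
a = (m + m.T) / 2

# power associativity: a^3 ∘ a^2 = a^5
assert np.allclose(jordan(jpow(a, 3), jpow(a, 2)), jpow(a, 5))
```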
Idempotents and rank
An element e in E is called an idempotent if e² = e. Two idempotents e and f are said to be orthogonal if ef = 0. This is equivalent to orthogonality with respect to the inner product: if ef = 0 then (e,f) = (e·e,f) = (e,ef) = 0; conversely if (e,f) = 0 then (L(f)e,e) = 0, and since L(f) is a non-negative operator (its eigenvalues lie in {0, 1/2, 1}, as shown below), it follows that L(f)e = fe = 0. In this case g = e + f is also an idempotent. An idempotent g is called primitive or minimal if it cannot be written as a sum of non-zero orthogonal idempotents. If e1, ..., em are pairwise orthogonal idempotents then their sum is also an idempotent and the algebra they generate consists of all linear combinations of the ei. It is an associative algebra. If e is an idempotent, then 1 − e is an orthogonal idempotent. An orthogonal set of idempotents with sum 1 is said to be a complete set or a partition of 1. If each idempotent in the set is minimal it is called a Jordan frame. Since the number of elements in any orthogonal set of idempotents is bounded by dim E, Jordan frames exist. The maximal number of elements in a Jordan frame is called the rank r of E.
Spectral decomposition
The spectral theorem states that any element a can be uniquely written as
where the idempotents ei's are a partition of 1 and the λi, the eigenvalues of a, are real and distinct. In fact let E0 = R[a] and let T be the restriction of L(a) to E0. T is self-adjoint and has 1 as a cyclic vector. So the commutant of T consists of polynomials in T (or a). By the spectral theorem for self-adjoint operators,
where the Pi are orthogonal projections on E0 with sum I and the λi's are the distinct real eigenvalues of T. Since the Pi's commute with T and are self-adjoint, they are given by multiplication elements ei of R[a] and thus form a partition of 1. Uniqueness follows because if fi is a partition of 1 and a = Σ μi fi, then with p(t)=Π (t - μj) and pi = p/(t − μi), fi = pi(a)/pi(μi). So the fi's are polynomials in a and uniqueness follows from uniqueness of the spectral decomposition of T.
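In the algebra of real symmetric matrices (see the classification below) the Jordan spectral decomposition is the usual eigendecomposition: for generic a the eigenvalues are distinct and the rank-one spectral projections form a complete system of orthogonal idempotents. A numerical sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.normal(size=(4, 4))
a = (m + m.T) / 2

# eigendecomposition of a symmetric matrix
lam, v = np.linalg.eigh(a)
# rank-one spectral projections e_i = v_i v_i^T
idem = [np.outer(v[:, i], v[:, i]) for i in range(4)]

# a complete system of orthogonal idempotents summing to 1
assert np.allclose(sum(idem), np.eye(4))
for e in idem:
    assert np.allclose(e @ e, e)

# a = sum of lambda_i e_i
assert np.allclose(sum(l * e for l, e in zip(lam, idem)), a)
```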
The spectral theorem implies that the rank is independent of the Jordan frame: a Jordan frame with k minimal idempotents can be used to construct an element a with k distinct eigenvalues. As above the minimal polynomial p of a has degree k and R[a] has dimension k. Its dimension is also the largest k such that Fk(a) ≠ 0, where Fk(a) is the Gram determinant

Fk(a) = det [(a^i, a^j)] (0 ≤ i, j ≤ k − 1).
So the rank r is the largest integer k for which Fk is not identically zero on E. In this case, as a non-vanishing polynomial, Fr is non-zero on an open dense subset of E, the regular elements. Any other a is a limit of regular elements a(n). Since the operator norm of L(x) gives an equivalent norm on E, a standard compactness argument shows that, passing to a subsequence if necessary, the spectral idempotents of the a(n) and their corresponding eigenvalues are convergent. The limit of Jordan frames is a Jordan frame, since a limit of non-zero idempotents yields a non-zero idempotent by continuity of the operator norm. It follows that every Jordan frame is made up of r minimal idempotents.
If e and f are orthogonal idempotents, the spectral theorem shows that e and f are polynomials in a = e − f, so that L(e) and L(f) commute. This can be seen directly from the polarized Jordan identity which implies L(e)L(f) = 2 L(e)L(f)L(e). Commutativity follows by taking adjoints.
Spectral decomposition for an idempotent
If e is a non-zero idempotent then the eigenvalues of L(e) can only be 0, 1/2 and 1, since taking a = b = e in the polarized Jordan identity yields

2L(e)³ − 3L(e)² + L(e) = 0.
In particular the operator norm of L(e) is 1 and its trace is strictly positive.
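The eigenvalue constraint can be seen concretely in the 2 × 2 symmetric matrices: computing the matrix of L(e) on an orthonormal basis for the trace inner product, for an idempotent e, gives exactly the eigenvalues 0, 1/2 and 1. A sketch, assuming numpy:

```python
import numpy as np

def jordan(x, y):
    # Jordan product x ∘ y = (xy + yx)/2
    return (x @ y + y @ x) / 2

# orthonormal basis of 2x2 symmetric matrices for (x, y) = Tr xy
basis = [np.array([[1., 0.], [0., 0.]]),
         np.array([[0., 0.], [0., 1.]]),
         np.array([[0., 1.], [1., 0.]]) / np.sqrt(2)]

e = np.array([[1., 0.], [0., 0.]])  # a minimal idempotent

# matrix of L(e) in this basis
L = np.array([[np.trace(jordan(e, u) @ w) for u in basis] for w in basis])
eig = sorted(np.linalg.eigvalsh(L))
assert np.allclose(eig, [0.0, 0.5, 1.0])
```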
There is a corresponding orthogonal eigenspace decomposition of E

E = E1(e) ⊕ E1/2(e) ⊕ E0(e),

where, for a in E, Eλ(a) denotes the λ-eigenspace of L(a). In this decomposition E1(e) and E0(e) are Jordan algebras with identity elements e and 1 − e. Their sum E1(e) ⊕ E0(e) is a direct sum of Jordan algebras in that any product between them is zero. It is the centralizer subalgebra of e and consists of all a such that L(a) commutes with L(e). The subspace E1/2(e) is a module for the centralizer of e, the centralizer module, and the product of any two elements in it lies in the centralizer subalgebra. On the other hand, if

U = 8L(e)² − 8L(e) + I,

then U is self-adjoint, equal to 1 on the centralizer algebra and −1 on the centralizer module. So U² = I, and the properties above show that

σ(a) = Ua

defines an involutive Jordan algebra automorphism σ of E.
In fact the Jordan algebra and module properties follow by replacing a and b in the polarized Jordan identity by e and a. If ea = 0, this gives L(e)L(a) = 2L(e)L(a)L(e). Taking adjoints it follows that L(a) commutes with L(e). Similarly if (1 − e)a = 0, L(a) commutes with I − L(e) and hence with L(e). This implies the Jordan algebra and module properties. To check that a product of elements in the module lies in the algebra, it is enough to check this for squares: but if L(e)a = a/2, then ea = a/2, so L(a)² + L(a²)L(e) = 2L(a)L(e)L(a) + L(a²e). Taking adjoints it follows that L(a²) commutes with L(e), which implies the property for squares.
Trace form
The trace form is defined by

τ(a,b) = Tr L(ab).

It is an inner product since, for non-zero a = Σ λi ei,

τ(a,a) = Tr L(a²) = Σ λi² Tr L(ei) > 0,

the traces Tr L(ei) being strictly positive. The polarized Jordan identity can be polarized again by replacing a by a + tc and taking the coefficient of t. A further antisymmetrization in a and c yields

[[L(a),L(c)],L(b)] = L([L(a),L(c)]b) = L(a(cb)) − L(c(ab)).

Since the trace of a commutator vanishes, applying the trace to both sides gives

Tr L(a(cb)) = Tr L(c(ab)),

that is τ(L(b)c, a) = τ(c, L(b)a), so that L(b) is self-adjoint for the trace form.
Simple Euclidean Jordan algebras
The classification of simple Euclidean Jordan algebras was accomplished by Jordan, von Neumann and Wigner, with details of the one exceptional algebra provided in the article immediately following theirs by Albert. Using the Peirce decomposition, they reduced the problem to an algebraic problem involving multiplicative quadratic forms already solved by Hurwitz. The presentation here, using composition algebras or Euclidean Hurwitz algebras, is a shorter version of the original derivation.
Central decomposition
If E is a Euclidean Jordan algebra, an ideal F in E is a linear subspace closed under multiplication by elements of E, i.e. F is invariant under the operators L(a) for a in E. If P is the orthogonal projection onto F, it commutes with the operators L(a). In particular F⊥ = (I − P)E is also an ideal and E = F ⊕ F⊥. Furthermore, if e = P(1), then P = L(e). In fact for a in E

ea = L(a)P(1) = PL(a)(1) = Pa,

so that ea = a for a in F and 0 for a in F⊥. In particular e and 1 − e are orthogonal idempotents with L(e) = P and L(1 − e) = I − P. e and 1 − e are the identities in the Euclidean Jordan algebras F and F⊥. The idempotent e is central in E, where the center of E is defined to be the set of all z such that L(z) commutes with L(a) for all a. It forms a commutative associative subalgebra.
Continuing in this way E can be written as a direct sum of minimal ideals
If Pi is the projection onto Ei and ei = Pi(1) then Pi = L(ei). The ei's are orthogonal with sum 1 and are the identities in Ei. Minimality forces Ei to be simple, i.e. to have no non-trivial ideals. For since L(ei) commutes with all L(a)'s, any ideal F ⊂ Ei
would be invariant under E since F = eiF. Such a decomposition into a direct sum of simple Euclidean algebras is unique. If E = ⊕ Fj is another decomposition, then Fj=⊕ eiFj. By minimality only one of the terms here is non-zero so equals Fj. By minimality the corresponding Ei equals Fj, proving uniqueness.
In this way the classification of Euclidean Jordan algebras is reduced to that of simple ones. For a simple algebra E all inner products for which the operators L(a) are self-adjoint are proportional. Indeed, any other product has the form (Ta, b) for some positive self-adjoint operator commuting with the L(a)'s. Any non-zero eigenspace of T is an ideal in E and therefore by simplicity T must act on the whole of E as a positive scalar.
List of all simple Euclidean Jordan algebras
Let Hn(R) be the space of real symmetric n by n matrices with inner product (a,b) = Tr ab and Jordan product a ∘ b = ½(ab + ba). Then Hn(R) is a simple Euclidean Jordan algebra of rank n for n ≥ 3.

Let Hn(C) be the space of complex self-adjoint n by n matrices with inner product (a,b) = Re Tr ab* and Jordan product a ∘ b = ½(ab + ba). Then Hn(C) is a simple Euclidean Jordan algebra of rank n for n ≥ 3.

Let Hn(H) be the space of self-adjoint n by n matrices with entries in the quaternions, inner product (a,b) = Re Tr ab* and Jordan product a ∘ b = ½(ab + ba). Then Hn(H) is a simple Euclidean Jordan algebra of rank n for n ≥ 3.
Let V be a finite dimensional real inner product space and set E = V ⊕ R with inner product (u⊕λ,v⊕μ) =(u,v) + λμ and product (u⊕λ)∘(v⊕μ)=( μu + λv) ⊕ [(u,v) + λμ]. This is a Euclidean Jordan algebra of rank 2, called a spin factor.
The above examples in fact give all the simple Euclidean Jordan algebras, except for one exceptional case H3(O), the self-adjoint matrices over the octonions or Cayley numbers, another rank 3 simple Euclidean Jordan algebra of dimension 27 (see below).
The Jordan algebras H2(R), H2(C), H2(H) and H2(O) are isomorphic to spin factors V ⊕ R where V has dimension 2, 3, 5 and 9, respectively: that is, one more than the dimension of the relevant division algebra.
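The spin factor product is easy to implement directly. The following sketch (representing u ⊕ λ as an array with λ in the last slot; numpy assumed) checks numerically that the spin factor product satisfies the Jordan identity:

```python
import numpy as np

rng = np.random.default_rng(4)

def spin(x, y):
    # (u ⊕ λ) ∘ (v ⊕ μ) = (μu + λv) ⊕ ((u,v) + λμ)
    u, lam = x[:-1], x[-1]
    v, mu = y[:-1], y[-1]
    return np.append(mu * u + lam * v, u @ v + lam * mu)

a, b = rng.normal(size=4), rng.normal(size=4)
a2 = spin(a, a)

# Jordan identity a^2 ∘ (a ∘ b) = a ∘ (a^2 ∘ b)
assert np.allclose(spin(a2, spin(a, b)), spin(a, spin(a2, b)))
```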
Peirce decomposition
Let E be a simple Euclidean Jordan algebra with inner product given by the trace form τ(a)= Tr L(a). The proof that E has the above form rests on constructing an analogue of matrix units for a Jordan frame in E. The following properties of idempotents hold in E.
An idempotent e is minimal in E if and only if E1(e) has dimension one (so equals Re). Moreover E1/2(e) ≠ (0). In fact the spectral idempotents of any element of E1(e) lie in E1(e), so if non-zero must equal e. If the 1/2 eigenspace vanished then E1(e) = Re would be an ideal.
If e and f are non-orthogonal minimal idempotents, then there is a period 2 automorphism σ of E such that σe=f, so that e and f have the same trace.
If e and f are orthogonal minimal idempotents then E1/2(e) ∩ E1/2(f) ≠ (0). Moreover, there is a period 2 automorphism σ of E such that σe=f, so that e and f have the same trace, and for any a in this intersection, a2 = τ(e) |a|2 (e + f).
All minimal idempotents in E are in the same orbit of the automorphism group so have the same trace τ0.
If e, f, g are three minimal orthogonal idempotents, then for a in E1/2(e) ∩ E1/2(f) and b in E1/2(f) ∩ E1/2(g), L(a)2 b = τ0 |a|2 b and |ab|2 = τ0 |a|2|b|2. Moreover, E1/2(e) ∩ E1/2(f) ∩ E1/2(g) = (0).
If e1, ..., er and f1, ..., fr are Jordan frames in E, then there is an automorphism α such that αei = fi.
If (ei) is a Jordan frame and Eii = E1(ei) and Eij = E1/2(ei) ∩ E1/2(ej), then E is the orthogonal direct sum of the Eii's and Eij's. Since E is simple, the Eii's are one-dimensional and the subspaces Eij are all non-zero for i ≠ j.

If a = Σ αi ei for some Jordan frame (ei), then L(a) acts as αi on Eii and as (αi + αj)/2 on Eij.
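The last property is transparent in the symmetric matrices, where the diagonal matrix units form a Jordan frame and Eij is spanned by the symmetrized elementary matrices; L(a) then multiplies each by (αi + αj)/2. A numerical sketch, assuming numpy:

```python
import numpy as np

alpha = np.array([1.0, 2.0, 5.0])
a = np.diag(alpha)

def jordan(x, y):
    # Jordan product x ∘ y = (xy + yx)/2
    return (x @ y + y @ x) / 2

for i in range(3):
    for j in range(3):
        u = np.zeros((3, 3))
        u[i, j] = u[j, i] = 1.0
        # L(a) acts on E_ij as the scalar (alpha_i + alpha_j)/2
        assert np.allclose(jordan(a, u), (alpha[i] + alpha[j]) / 2 * u)
```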
Reduction to Euclidean Hurwitz algebras
Let E be a simple Euclidean Jordan algebra. From the properties of the Peirce decomposition it follows that:
If E has rank 2, then it has the form V ⊕ R for some inner product space V with Jordan product as described above.
If E has rank r > 2, then there is a non-associative unital algebra A, associative if r > 3, equipped with an inner product satisfying (ab,ab)= (a,a)(b,b) and such that E = Hr(A). (Conjugation in A is defined by a* = −a + 2(a,1)1.)
Such an algebra A is called a Euclidean Hurwitz algebra. In A, if λ(a)b = ab and ρ(a)b = ba, then:

the involution is an antiautomorphism, i.e. (ab)* = b*a*;

λ(a*) = λ(a)* and ρ(a*) = ρ(a)*, so that the involution on the algebra corresponds to taking adjoints;

λ(a²) = λ(a)² and ρ(a²) = ρ(a)², so that A is an alternative algebra.
By Hurwitz's theorem A must be isomorphic to R, C, H or O. The first three are associative division algebras. The octonions do not form an associative algebra, so Hr(O) can only give a Jordan algebra for r = 3. Because A is associative when A = R, C or H, it is immediate that Hr(A) is a Jordan algebra for r ≥ 3. A separate argument is required to show that H3(O) with Jordan product a∘b = ½(ab + ba) satisfies the Jordan identity [L(a),L(a²)] = 0. There is a later more direct proof using Freudenthal's diagonalization theorem: Freudenthal proved that given any matrix in the algebra Hr(A) there is an algebra automorphism carrying the matrix onto a diagonal matrix with real entries; it is then straightforward to check that [L(a),L(b)] = 0 for real diagonal matrices.
Exceptional and special Euclidean Jordan algebras
The exceptional Euclidean Jordan algebra E = H3(O) is called the Albert algebra. The Shirshov–Cohn theorem implies that it cannot be generated by two elements (and the identity). This can be seen directly. For by Freudenthal's diagonalization theorem one element X can be taken to be a diagonal matrix with real entries and the other Y to be orthogonal to the Jordan subalgebra generated by X. If all the diagonal entries of X are distinct, the Jordan subalgebra generated by X and Y is generated by the diagonal matrices and three elements obtained from the off-diagonal entries of Y, each having a single octonion entry and its conjugate in a symmetric pair of off-diagonal positions.

It is straightforward to verify that the real linear span of the diagonal matrices, these matrices and the similar matrices with real entries form a unital Jordan subalgebra. If the diagonal entries of X are not distinct, X can be taken to be the primitive idempotent e1 with diagonal entries 1, 0 and 0. The analysis then shows that the unital Jordan subalgebra generated by X and Y is proper. Indeed, if 1 − e1 is the sum of two primitive idempotents in the subalgebra, then, after applying an automorphism of E if necessary, the subalgebra will be generated by the diagonal matrices and a matrix orthogonal to the diagonal matrices. By the previous argument it will be proper. If 1 − e1 is a primitive idempotent, the subalgebra must be proper, by the properties of the rank in E.
A Euclidean algebra is said to be special if its central decomposition contains no copies of the Albert algebra. Since the Albert algebra cannot be generated by two elements, it follows that a Euclidean Jordan algebra generated by two elements is special. This is the Shirshov–Cohn theorem for Euclidean Jordan algebras.
The classification shows that each non-exceptional simple Euclidean Jordan algebra is a subalgebra of some Hn(R). The same is therefore true of any special algebra.
On the other hand, as Albert showed, the Albert algebra H3(O) cannot be realized as a subalgebra of Hn(R) for any n.
Indeed, let π be a real-linear map of E = H3(O) into the self-adjoint operators on V = Rn with π(ab) = ½(π(a)π(b) + π(b)π(a)) and π(1) = I. If e1, e2, e3 are the diagonal minimal idempotents then Pi = π(ei) are mutually orthogonal projections on V onto orthogonal subspaces Vi. If i ≠ j, the elements eij of E with 1 in the (i,j) and (j,i) entries and 0 elsewhere satisfy eij² = ei + ej. Moreover, eij ejk = ½ eik if i, j and k are distinct. The operators Tij = π(eij) are zero on Vk (k ≠ i, j) and restrict to involutions on Vi ⊕ Vj interchanging Vi and Vj. Letting Pij = Pi Tij Pj and setting Pii = Pi, the (Pij) form a system of matrix units on V, i.e. Pij* = Pji, Σ Pii = I and PijPkm = δjk Pim. Let Ei and Eij be the subspaces of the Peirce decomposition of E. For x in O, set πij(x) = Pij π(x eij), regarded as an operator on Vi. This does not depend on j, and for x, y in O

πij(xy) = πij(x) πij(y).

Since every x in O has a right inverse y with xy = 1, the map πij is injective. On the other hand, it is an algebra homomorphism from the nonassociative algebra O into the associative algebra End Vi, a contradiction.
Positive cone in a Euclidean Jordan algebra
Definition
When (ei) is a partition of 1 in a Euclidean Jordan algebra E, the self-adjoint operators L(ei) commute and there is a decomposition into simultaneous eigenspaces. If a = Σ λi ei, the eigenvalues of L(a) have the form Σ εi λi, where each εi is 0, 1/2 or 1. The ei themselves are eigenvectors with eigenvalues λi. In particular an element a has non-negative spectrum if and only if L(a) has non-negative spectrum. Moreover, a has positive spectrum if and only if L(a) has positive spectrum. For if a has positive spectrum, a − ε1 has non-negative spectrum for some ε > 0.
The positive cone C in E is defined to be the set of elements a such that a has positive spectrum. This condition is equivalent to the operator L(a) being a positive self-adjoint operator on E.
C is a convex cone in E because positivity of a self-adjoint operator T— the property that its eigenvalues be strictly positive—is equivalent to (Tv,v) > 0 for all v ≠ 0.
C is open because the positive operators are open in the self-adjoint operators and L is a continuous map: in fact, if the lowest eigenvalue of T is ε > 0, then T + S is positive whenever ||S|| < ε.
The closure of C consists of all a such that L(a) is non-negative or equivalently a has non-negative spectrum. From the elementary properties of convex cones, C is the interior of its closure and is a proper cone. The elements in the closure of C are precisely the squares of elements in E.
C is self-dual. In fact, since the elements of the closure of C are just the squares x² in E, the dual cone is the set of all a such that (a,x²) > 0 for all x ≠ 0. On the other hand, (a,x²) = (L(a)x,x), so this is equivalent to the positivity of L(a).
Quadratic representation
To show that the positive cone C is homogeneous, i.e. has a transitive group of automorphisms, a generalization of the quadratic action of self-adjoint matrices on themselves given by X ↦ YXY has to be defined. If Y is invertible and self-adjoint, this map is invertible and carries positive operators onto positive operators.
For a in E, define an endomorphism of E, called the quadratic representation, by

Q(a) = 2L(a)² − L(a²).

Note that for self-adjoint matrices L(X)Y = ½(XY + YX), so that Q(X)Y = XYX.
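The matrix identity Q(X)Y = XYX can be confirmed numerically from the definition Q(a) = 2L(a)² − L(a²). A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(5)

def jordan(x, y):
    # Jordan product x ∘ y = (xy + yx)/2
    return (x @ y + y @ x) / 2

def Q(a, b):
    # quadratic representation Q(a)b = 2 a∘(a∘b) − (a∘a)∘b
    return 2 * jordan(a, jordan(a, b)) - jordan(jordan(a, a), b)

m1, m2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
x, y = (m1 + m1.T) / 2, (m2 + m2.T) / 2

assert np.allclose(Q(x, y), x @ y @ x)
```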
An element a in E is called invertible if it is invertible in R[a]. If b denotes the inverse, then the spectral decomposition of a shows that L(a) and L(b) commute.
In fact a is invertible if and only if Q(a) is invertible. In that case

Q(a)⁻¹ = Q(a⁻¹).

Indeed, if Q(a) is invertible it carries R[a] onto itself. On the other hand, Q(a)1 = a², so on the associative algebra R[a] the operator Q(a) acts as multiplication by a²; hence there is c in R[a] with a²c = 1, so that a is invertible with inverse ac. Taking b = a⁻¹ in the polarized Jordan identity yields

Q(a)L(a⁻¹) = L(a),

using that L(a) and L(a⁻¹) commute. Replacing a by its inverse, the relation Q(a)⁻¹ = Q(a⁻¹) follows if L(a) and L(a⁻¹) are invertible. If not, it holds for a + ε1 with ε arbitrarily small and hence also in the limit.
These identities are easy to prove in a finite-dimensional (Euclidean) Jordan algebra (see below) or in a special Jordan algebra, i.e. a Jordan algebra defined by a unital associative algebra. They are valid in any Jordan algebra. This was conjectured by Jacobson and proved by Macdonald: Macdonald showed that if a polynomial identity in three variables, linear in the third, is valid in any special Jordan algebra, then it holds in all Jordan algebras.
In fact for c in E and F(a) a function on E with values in End E, let DcF(a) be the derivative at t = 0 of F(a + tc). Then

Dc Q(a) = 2L(a)L(c) + 2L(c)L(a) − 2L(ac).

Since a⁻¹ = Q(a)⁻¹a, differentiating gives

Dc a⁻¹ = −Q(a)⁻¹(DcQ(a))a⁻¹ + Q(a)⁻¹c = −Q(a)⁻¹[c + 2L(a)L(a⁻¹)c − 2L(a⁻¹)L(a)c].

The expression in square brackets simplifies to c because L(a) commutes with L(a⁻¹). Thus

Dc a⁻¹ = −Q(a)⁻¹c.

Applying Dc to L(a⁻¹)Q(a) = L(a) and acting on b = c⁻¹ yields

(Q(a)⁻¹b⁻¹)(Q(a)b) = 1.

On the other hand, L(Q(a)b) is invertible on an open dense set, where Q(a)b must also be invertible with

(Q(a)b)⁻¹ = Q(a)⁻¹b⁻¹.

Taking the derivative Dc in the variable b in the expression above gives

−Q(Q(a)b)⁻¹Q(a)c = −Q(a)⁻¹Q(b)⁻¹c.

This yields the fundamental identity for a dense set of invertible elements, so it follows in general by continuity. The fundamental identity implies that c = Q(a)b is invertible if a and b are invertible and gives a formula for the inverse of Q(c). Applying it to c gives the inverse identity in full generality.
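Both the fundamental identity and the inverse identity can be spot-checked numerically in the symmetric matrices, where Q(a)b = aba and the Jordan inverse is the matrix inverse. A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(6)

def jordan(x, y):
    return (x @ y + y @ x) / 2

def Q(a, b):
    # quadratic representation Q(a)b = 2L(a)^2 b − L(a^2) b
    return 2 * jordan(a, jordan(a, b)) - jordan(jordan(a, a), b)

def spd(n):
    m = rng.normal(size=(n, n))
    return m @ m.T + n * np.eye(n)

a, b = spd(4), spd(4)
m = rng.normal(size=(4, 4))
c = (m + m.T) / 2

# fundamental identity Q(Q(a)b) = Q(a)Q(b)Q(a), tested on c
assert np.allclose(Q(Q(a, b), c), Q(a, Q(b, Q(a, c))))

# inverse identity (Q(a)b)^(-1) = Q(a^(-1)) b^(-1)
inv = np.linalg.inv
assert np.allclose(inv(Q(a, b)), Q(inv(a), inv(b)))
```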
Finally it can be verified immediately from the definitions that, if u = 1 − 2e for some idempotent e, then Q(u) is the period 2 automorphism constructed above for the centralizer algebra and module of e.
Homogeneity of positive cone
The proof of this relies on elementary continuity properties of eigenvalues of self-adjoint operators.
Let T(t) (α ≤ t ≤ β) be a continuous family of self-adjoint operators on E with T(α) positive and T(β) having a negative eigenvalue. Set S(t)= –T(t) + M with M > 0 chosen so large that S(t) is positive for all t. The operator norm ||S(t)|| is continuous. It is less than M for t = α and greater than M for t = β. So for some α < s < β, ||S(s)|| = M and there is a vector v ≠ 0 such that S(s)v = Mv. In particular T(s)v = 0, so that T(s) is not invertible.
Suppose that x = Q(a)b does not lie in C. Let b(t) = (1 − t) + tb with 0 ≤ t ≤ 1. By convexity b(t) lies in C. Let x(t) = Q(a)b(t) and X(t) = L(x(t)). If X(t) is invertible for all t with 0 ≤ t ≤ 1, the eigenvalue argument gives a contradiction since it is positive at t = 0 and has negative eigenvalues at t = 1. So X(s) has a zero eigenvalue for some s with 0 < s ≤ 1: X(s)w = 0 with w ≠ 0. By the properties of the quadratic representation, x(t) is invertible for all t. Let Y(t) = L(x(t)2). This is a positive operator since x(t)2 lies in C. Let T(t) = Q(x(t)), an invertible self-adjoint operator by the invertibility of x(t). On the other hand, T(t) = 2X(t)2 - Y(t). So (T(s)w,w) < 0 since Y(s) is positive and X(s)w = 0. In particular T(s) has some negative eigenvalues. On the other hand, the operator T(0) = Q(a2) = Q(a)2 is positive. By the eigenvalue argument, T(t) has eigenvalue 0 for some t with 0 < t < s, a contradiction.
It follows that the linear operators Q(a) with a invertible, and their inverses, take the cone C onto itself. Indeed, the inverse of Q(a) is just Q(a⁻¹). Moreover, for a in C the spectral decomposition provides a square root a^(1/2) in C with Q(a^(1/2))1 = a, so the operators Q(a^(1/2)) with a in C form a transitive group of symmetries of C.
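Concretely, in the cone of positive-definite matrices, the symmetry carrying the identity matrix to a is Q(a^(1/2)). A numerical sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(7)

def jordan(x, y):
    return (x @ y + y @ x) / 2

def Q(a, b):
    return 2 * jordan(a, jordan(a, b)) - jordan(jordan(a, a), b)

def sqrtm_spd(a):
    # square root of a positive-definite matrix via its spectral decomposition
    lam, v = np.linalg.eigh(a)
    return (v * np.sqrt(lam)) @ v.T

m = rng.normal(size=(4, 4))
a = m @ m.T + 4 * np.eye(4)

# Q(a^(1/2)) carries the base point 1 of the cone to a
assert np.allclose(Q(sqrtm_spd(a), np.eye(4)), a)
```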
Euclidean Jordan algebra of a symmetric cone
Construction
Let C be a symmetric cone in the Euclidean space E. As above, Aut C denotes the closed subgroup of GL(E) taking C (or equivalently its closure) onto itself. Let G = Aut0 C be its identity component and K = G ∩ O(E). K is a maximal compact subgroup of G and the stabilizer of a point e in C. It is connected. The group G is invariant under taking adjoints. Let σ(g) = (g*)⁻¹, a period 2 automorphism. Thus K is the fixed point subgroup of σ. Let 𝔤 be the Lie algebra of G. Then σ induces an involution of 𝔤 and hence a ±1 eigenspace decomposition

𝔤 = 𝔨 ⊕ 𝔭,

where 𝔨, the +1 eigenspace, is the Lie algebra of K and 𝔭 is the −1 eigenspace. Thus 𝔭⋅e is a subspace of dimension at most dim 𝔭. Since C = G/K is an open subspace of E, it follows that dim E = dim 𝔭 and hence 𝔭⋅e = E. For a in E let L(a) be the unique element of 𝔭 such that L(a)e = a. Define

a ∘ b = L(a)b.

Then E with its Euclidean structure and this bilinear product is a Euclidean Jordan algebra with identity 1 = e. The convex cone C coincides with the positive cone of E.
Since the elements of 𝔭 are self-adjoint, L(a)* = L(a). The product is commutative since [𝔭, 𝔭] ⊆ 𝔨 annihilates e, so that ab = L(a)L(b)e = L(b)L(a)e = ba. It remains to check the Jordan identity [L(a),L(a²)] = 0.
The associator is given by [a,b,c] = [L(a),L(c)]b. Since [L(a),L(c)] lies in 𝔨, it annihilates e; moreover, for k in 𝔨 the bracket [k, L(b)] lies in 𝔭 and sends e to kb, so that

[[L(a),L(c)],L(b)] = L([a,b,c]).

In particular every [L(a),L(c)] acts as a derivation of the product. Applying the derivation D = [L(b),L(a)] to a² yields D(a²) = 2a(Da), that is

b(a·a²) − a(b·a²) = 2a(b·a²) − 2a(a(ab)).

Regarding both sides as operators acting on b, this reads

L(a·a²) = 3L(a)L(a²) − 2L(a)³.

On the other hand, all the operators L(x) are self-adjoint, so the left-hand side and L(a)³ are self-adjoint, and taking adjoints of the right-hand side gives

3L(a²)L(a) − 2L(a)³.

Combining these expressions gives

L(a)L(a²) = L(a²)L(a),

which implies the Jordan identity.
Finally the positive cone of E coincides with C. This depends on the fact that in any Euclidean Jordan algebra E

Q(e^a) = e^(2L(a)).

In fact Q(e^a) is a positive operator and Q(e^(ta)) is a one-parameter group of positive operators: this follows by continuity for rational t, where it is a consequence of the behaviour of powers. So it has the form exp tX for some self-adjoint operator X. Taking the derivative at 0 gives X = 2L(a).

Hence the positive cone is given by all elements

e^X · 1

with X in 𝔭. Thus the positive cone of E lies inside C. Since both are self-dual, they must coincide.
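The identity Q(e^a) = e^(2L(a)) can be verified numerically in the 3 × 3 symmetric matrices by representing L(a) as a matrix on an orthonormal basis. A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 3
m = rng.normal(size=(n, n))
a = (m + m.T) / 2

def jordan(x, y):
    return (x @ y + y @ x) / 2

def Q(x, y):
    return 2 * jordan(x, jordan(x, y)) - jordan(jordan(x, x), y)

def expm_sym(s):
    # exponential of a symmetric matrix via its eigendecomposition
    lam, v = np.linalg.eigh(s)
    return (v * np.exp(lam)) @ v.T

# orthonormal basis of Sym(3) for the trace inner product
basis = []
for i in range(n):
    for j in range(i, n):
        b = np.zeros((n, n))
        if i == j:
            b[i, i] = 1.0
        else:
            b[i, j] = b[j, i] = 1 / np.sqrt(2)
        basis.append(b)

# matrix of L(a) in this basis, and the operator exponential e^{2L(a)}
Lmat = np.array([[np.trace(jordan(a, u) @ w) for u in basis] for w in basis])
E2L = expm_sym(2 * Lmat)

# compare Q(e^a) and e^{2L(a)} on a test element t
ea = expm_sym(a)
z = rng.normal(size=(n, n))
t = (z + z.T) / 2
coords = np.array([np.trace(t @ w) for w in basis])
lhs = Q(ea, t)
rhs = sum(c * w for c, w in zip(E2L @ coords, basis))
assert np.allclose(lhs, rhs)
```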
Automorphism groups and trace form
Let C be the positive cone in a simple Euclidean Jordan algebra E. Aut C is the closed subgroup of GL(E) taking C (or its closure) onto itself. Let G = Aut0 C be the identity component of Aut C and let K be the closed subgroup of G fixing 1. From the group theoretic properties of cones, K is a connected compact subgroup of G and equals the identity component of the compact Lie group Aut E. Let 𝔤 and 𝔨 be the Lie algebras of G and K. G is closed under taking adjoints and K is the fixed point subgroup of the period 2 automorphism σ(g) = (g*)⁻¹. Thus K = G ∩ SO(E). Let 𝔭 be the −1 eigenspace of σ.

𝔨 consists of derivations of E that are skew-adjoint for the inner product defined by the trace form.
[[L(a),L(c)],L(b)] = L([a,b,c]).
If a and b are in E, then D = [L(a),L(b)] is a derivation of E, so lies in 𝔨. These derivations span 𝔨.
If a is in C, then Q(a) lies in G.
C is the connected component of the open set of invertible elements of E containing 1. It consists of exponentials of elements of E and the exponential map gives a diffeomorphism of E onto C.
The map a ↦ L(a) gives an isomorphism of E onto 𝔭 and e^(L(a)) = Q(e^(a/2)). The set of such exponentials coincides with P, the set of positive self-adjoint elements in G.
For g in G and a in E, Q(g(a)) = g Q(a) g*.
Cartan decomposition
G = P ⋅ K = K ⋅ P and the decomposition g = pk corresponds to the polar decomposition in GL(E).
If (ei) is a Jordan frame in E, then the subspace 𝔞 of 𝔭 spanned by the L(ei) is maximal Abelian in 𝔭. A = exp 𝔞 is the Abelian subgroup of operators Q(a) where a = Σ λi ei with λi > 0. A is closed in P and hence in G. If b = Σ μi ei with μi > 0, then Q(ab) = Q(a)Q(b).

𝔭 and P are the unions of the K-translates of 𝔞 and A.
Iwasawa decomposition for cone
If E has Peirce decomposition relative to the Jordan frame (ei)

E = ⊕(i ≤ j) Eij,

then 𝔞 is diagonalized by this decomposition with L(a) acting as (αi + αj)/2 on Eij, where a = Σ αi ei.

Define the closed subgroup S of G by

S = {g : g Eij ⊆ ⊕(p,q) ≥ (i,j) Epq},

where the ordering on pairs (p,q) is lexicographic. S contains the group A, since A acts as scalars on each Eij. If N is the closed subgroup of S of elements n with nx = x modulo ⊕(p,q) > (i,j) Epq for x in Eij, then S = AN = NA, a semidirect product with A normalizing N. Moreover, G has the following Iwasawa decomposition:

G = KAN.
For i < j there is a corresponding subspace 𝔫ij of 𝔤, and the Lie algebra of N is

𝔫 = ⊕(i < j) 𝔫ij.

Taking ordered orthonormal bases of the Eij gives a basis of E, using the lexicographic order on pairs (i,j). The group N is lower unitriangular and its Lie algebra 𝔫 is lower triangular. In particular the exponential map is a polynomial mapping of 𝔫 onto N, with polynomial inverse given by the logarithm.
Complexification of a Euclidean Jordan algebra
Definition of complexification
Let E be a Euclidean Jordan algebra. The complexification EC = E ⊕ iE has a natural conjugation operation (a + ib)* = a − ib and a natural complex inner product and norm. The Jordan product on E extends bilinearly to EC, so that (a + ib)(c + id) = (ac − bd) + i(ad + bc). If multiplication is defined by L(a)b = ab then the Jordan axiom

[L(a), L(a²)] = 0

still holds by analytic continuation. Indeed, the identity above holds when a is replaced by a + tb for t real; and since the left side is then a polynomial with values in End EC vanishing for real t, it vanishes also for t complex. Analytic continuation also shows that all the formulas involving power-associativity for a single element a in E, including the recursion formulas for L(a^m), also hold in EC. Since for b in E, L(b) is still self-adjoint on EC, the adjoint relation L(a*) = L(a)* holds for a in EC. Similarly the symmetric bilinear form β(a,b) = (a,b*) satisfies β(ab,c) = β(b,ac). If the inner product comes from the trace form, then β(a,b) = Tr L(ab).
For a in EC, the quadratic representation is defined as before by Q(a) = 2L(a)² − L(a²). By analytic continuation the fundamental identity still holds:

Q(Q(a)b) = Q(a)Q(b)Q(a).
An element a in E is called invertible if it is invertible in C[a]. Power associativity shows that L(a) and L(a−1) commute. Moreover, a−1 is invertible with inverse a.
As in E, a is invertible if and only if Q(a) is invertible. In that case

Q(a)⁻¹ = Q(a⁻¹).

Indeed, as for E, if Q(a) is invertible it carries C[a] onto itself, while Q(a)1 = a², so that Q(a) acts on C[a] as multiplication by a²; hence there is c in C[a] with a²c = 1, so a is invertible. Conversely if a is invertible, taking b = a⁻² in the fundamental identity shows that Q(a) is invertible. Replacing a by a⁻¹ and b by a then shows that its inverse is Q(a⁻¹). Finally if a and b are invertible then so is c = Q(a)b and it satisfies the inverse identity:

(Q(a)b)⁻¹ = Q(a⁻¹)b⁻¹.

Invertibility of c follows from the fundamental formula which gives Q(c) = Q(a)Q(b)Q(a). Hence

c⁻¹ = Q(c)⁻¹c = Q(a)⁻¹Q(b)⁻¹Q(a)⁻¹Q(a)b = Q(a)⁻¹Q(b)⁻¹b = Q(a⁻¹)b⁻¹.

The formula

Q(a)L(a⁻¹) = L(a⁻¹)Q(a) = L(a)

also follows by analytic continuation.
Complexification of automorphism group
Aut EC is the complexification of the compact Lie group Aut E in GL(EC). This follows because the Lie algebras of Aut EC and Aut E consist of derivations of the complex and real Jordan algebras EC and E. Under the isomorphism identifying End EC with the complexification of End E, the complex derivations are identified with the complexification of the real derivations.
Structure groups
The Jordan operators L(a) are symmetric with respect to the trace form, so that L(a)^t = L(a) for a in EC. The automorphism groups of E and EC consist of invertible real and complex linear operators g such that L(ga) = gL(a)g⁻¹ and g1 = 1. Aut EC is the complexification of Aut E. Since an automorphism g preserves the trace form, g⁻¹ = g^t.

The structure groups of E and EC consist of invertible real and complex linear operators g such that

Q(ga) = gQ(a)g^t.
They form groups Γ(E) and Γ(EC) with Γ(E) ⊂ Γ(EC).
The structure group is closed under taking transposes g ↦ gt and adjoints g ↦ g*.
The structure group contains the automorphism group. The automorphism group can be identified with the stabilizer of 1 in the structure group.
If a is invertible, Q(a) lies in the structure group.
If g is in the structure group and a is invertible, ga is also invertible with (ga)−1 = (gt)−1a−1.
If E is simple, Γ(E) = Aut C × {±1}, Γ(E) ∩ O(E) = Aut E × {±1} and the identity component of Γ(E) acts transitively on C.
Γ(EC) is the complexification of Γ(E), which has Lie algebra .
The structure group Γ(EC) acts transitively on the set of invertible elements in EC.
Every g in Γ(EC) has the form g = h Q(a) with h an automorphism and a invertible.
The unitary structure group Γu(EC) is the subgroup of Γ(EC) consisting of unitary operators, so that Γu(EC) = Γ(EC) ∩ U(EC).
The stabilizer of 1 in Γu(EC) is Aut E.
Every g in Γu(EC) has the form g = h Q(u) with h in Aut E and u invertible in EC with u* = u−1.
Γ(EC) is the complexification of Γu(EC), which has Lie algebra .
The set S of invertible elements u such that u* = u−1 can be characterized equivalently either as those u for which L(u) is a normal operator with uu* = 1 or as those u of the form exp ia for some a in E. In particular S is connected.
The identity component of Γu(EC) acts transitively on S.
g in GL(EC) is in the unitary structure group if and only if gS = S.
Given a Jordan frame (ei) and v in EC, there is an operator u in the identity component of Γu(EC) such that uv = Σ αi ei with αi ≥ 0. If v is invertible, then αi > 0.
Given a frame in a Euclidean Jordan algebra E, the restricted Weyl group can be identified with the group of operators on arising from elements in the identity component of Γu(EC) that leave invariant.
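Several of these statements can be observed concretely in the symmetric-matrix model, where Q(c) acts as x ↦ cxc. The sketch below (setup and names are illustrative, not from the source; it assumes the structure-group condition in the usual form Q(ga) = gQ(a)gt) checks that conjugation by an orthogonal matrix u is a Jordan automorphism and that Q(c) satisfies the fundamental identity placing it in the structure group:

```python
import numpy as np

rng = np.random.default_rng(1)
sym = lambda m: (m + m.T) / 2
x, y = sym(rng.normal(size=(3, 3))), sym(rng.normal(size=(3, 3)))
c = sym(rng.normal(size=(3, 3))) + 4 * np.eye(3)   # symmetric and invertible
u, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # orthogonal matrix

jordan = lambda a, b: (a @ b + b @ a) / 2          # Jordan product

# g(x) = u x u^T is a Jordan algebra automorphism: g(x o y) = g(x) o g(y).
g = lambda m: u @ m @ u.T
assert np.allclose(g(jordan(x, y)), jordan(g(x), g(y)))

# Q(c)x = c x c lies in the structure group: Q(Q(c)x) = Q(c) Q(x) Q(c),
# i.e. (cxc) y (cxc) = c x (c y c) x c for all y.
assert np.allclose((c @ x @ c) @ y @ (c @ x @ c),
                   c @ x @ (c @ y @ c) @ x @ c)
```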
Spectral norm
Let E be a Euclidean Jordan algebra with the inner product given by the trace form. Let (ei) be a fixed Jordan frame in E. For given a in EC choose u in Γu(EC) such that
ua = Σ αi ei with αi ≥ 0. Then the spectral norm ||a|| = max αi is independent of all choices. It is a norm on EC with
In addition ||a||2 is given by the operator norm of Q(a) on the inner product space EC. The fundamental identity for the quadratic representation implies that ||Q(a)b|| ≤ ||a||2||b||. The spectral norm of an element a is defined in terms of C[a] so depends only on a and not the particular Euclidean Jordan algebra in which it is calculated.
The compact set S is the set of extreme points of the closed unit ball ||x|| ≤ 1. Each u in S has norm one. Moreover, if u = eia and v = eib, then ||uv|| ≤ 1. Indeed, by the Cohn–Shirshov theorem the unital Jordan subalgebra of E generated by a and b is special. The inequality is easy to establish in non-exceptional simple Euclidean Jordan algebras, since each such Jordan algebra and its complexification can be realized as a subalgebra of some Hn(R) and its complexification Hn(C) ⊂ Mn(C). The spectral norm in Hn(C) is the usual operator norm. In that case, for unitary matrices U and V in Mn(C), clearly ||(UV + VU)/2|| ≤ 1. The inequality therefore follows in any special Euclidean Jordan algebra and hence in general.
On the other hand, by the Krein–Milman theorem, the closed unit ball is the (closed) convex span of S. It follows that ||L(u)|| = 1, in the operator norm corresponding to either the inner product norm or spectral norm. Hence ||L(a)|| ≤ ||a|| for all a, so that the spectral norm satisfies
It follows that EC is a Jordan C* algebra.
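In the matrix model the spectral norm is the ordinary operator norm (largest singular value), so the inequalities ||L(a)b|| ≤ ||a|| ||b|| and ||Q(a)b|| ≤ ||a||2||b|| can be checked directly. An illustrative sketch for the complexification of H3(R), realized as complex symmetric matrices (setup is illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)

# Complexification of H_3(R): complex symmetric matrices.
def rand_sym():
    m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    return (m + m.T) / 2

a, b = rand_sym(), rand_sym()
norm = lambda m: np.linalg.norm(m, 2)   # operator norm = spectral norm here

jordan_ab = (a @ b + b @ a) / 2         # Jordan product a o b
assert norm(jordan_ab) <= norm(a) * norm(b) + 1e-12        # ||L(a)b|| <= ||a|| ||b||
assert norm(a @ b @ a) <= norm(a) ** 2 * norm(b) + 1e-12   # ||Q(a)b|| <= ||a||^2 ||b||
```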
Complex simple Jordan algebras
The complexification of a simple Euclidean Jordan algebra is a simple complex Jordan algebra which is also separable, i.e. its trace form is non-degenerate. Conversely, using the existence of a real form of the Lie algebra of the structure group, it can be shown that every complex separable simple Jordan algebra is the complexification of a simple Euclidean Jordan algebra.
To verify that the complexification of a simple Euclidean Jordan algebra E has no ideals, note that if F is an ideal in EC then so too is F⊥, the orthogonal complement for the trace norm. As in the real case, J = F⊥ ∩ F must equal (0). For the associativity property of the trace form shows that F⊥ is an ideal and that ab = 0 if a and b lie in J. Hence J is an ideal. But if z is in J, L(z) takes EC into J and J into (0). Hence Tr L(z) = 0. Since J is an ideal and the trace form non-degenerate, this forces z = 0. It follows that EC = F ⊕ F⊥. If P is the corresponding projection onto F, it commutes with the operators L(a), and F⊥ = (I − P)EC is also an ideal. Furthermore, if e = P(1), then P = L(e). In fact for a in EC
so that ea = a for a in F and 0 for a in F⊥. In particular e and 1 − e are orthogonal central idempotents with L(e) = P and L(1 − e) = I − P.
So simplicity follows from the fact that the center of EC is the complexification of the center of E.
Symmetry groups of bounded domain and tube domain
According to the "elementary approach" to bounded symmetric spaces of Koecher, Hermitian symmetric spaces of noncompact type can be realized in the complexification of a Euclidean Jordan algebra E as either the open unit ball for the spectral norm, a bounded domain, or as the open tube domain T = E + iC, where C is the positive open cone in E. In the simplest case where E = R, the complexification of E is just C, the bounded domain corresponds to the open unit disk and the tube domain to the upper half plane. Both these spaces have transitive groups of biholomorphisms given by Möbius transformations, corresponding to matrices in SU(1,1) or SL(2,R). They both lie in the Riemann sphere C ∪ {∞}, the standard one-point compactification of C. Moreover, the symmetry groups are all particular cases of Möbius transformations corresponding to matrices in SL(2,C). This complex Lie group and its maximal compact subgroup SU(2) act transitively on the Riemann sphere. The groups are also algebraic. They have distinguished generating subgroups and have an explicit description in terms of generators and relations. Moreover, the Cayley transform gives an explicit Möbius transformation from the open disk onto the upper half plane. All these features generalize to arbitrary Euclidean Jordan algebras. The compactification and complex Lie group are described in the next section and correspond to the dual Hermitian symmetric space of compact type. In this section only the symmetries of and between the bounded domain and tube domain are described.
Jordan frames provide one of the main Jordan algebraic techniques to describe the symmetry groups. Each Jordan frame gives rise to a product of copies of R and C. The symmetry groups of the corresponding open domains and the compactification—polydisks and polyspheres—can be deduced from the case of the unit disk, the upper halfplane and Riemann sphere. All these symmetries extend to the larger Jordan algebra and its compactification. The analysis can also be reduced to this case because all points in the complex algebra (or its compactification) lie in an image of the polydisk (or polysphere) under the unitary structure group.
Definitions
Let E be a Euclidean Jordan algebra with complexification EC = E ⊕ iE.
The unit ball or disk D in EC is just the convex bounded open set of elements a
such that ||a|| < 1, i.e. the unit ball for the spectral norm.
The tube domain T in EC is the unbounded convex open set T = E + iC, where C is the open positive cone in E.
Möbius transformations
The group SL(2,C) acts by Möbius transformations on the Riemann sphere C ∪ {∞}, the one-point compactification of C. If g in SL(2,C) is given by the matrix
then
Similarly the group SL(2,R) acts by Möbius transformations on the circle R ∪ {∞}, the one-point compactification of R.
Let k = R or C. Then SL(2,k) is generated by the three subgroups of lower and upper unitriangular matrices, L and U, and the diagonal matrices D. It is also generated by the lower (or upper) unitriangular matrices, the diagonal matrices and the matrix
The matrix J corresponds to the Möbius transformation j(z) = −z−1 and can be written
The Möbius transformations fixing ∞ are just the upper triangular matrices B = UD = DU. If g does not fix ∞, it sends ∞ to a finite point a. But then g can be composed with an upper unitriangular matrix to send a to 0 and then with J to send 0 to infinity. This argument gives one of the simplest examples of the Bruhat decomposition:
the double coset decomposition of . In fact the union is disjoint and can be written more precisely as
where the product occurring in the second term is direct.
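Both the Möbius action and the Bruhat factorization can be made concrete numerically. The sketch below (entries are illustrative; it takes J to be the matrix of j(z) = −z−1 and assumes the lower-left entry c ≠ 0) checks that composition of Möbius transformations matches matrix multiplication and factors g as an upper-triangular matrix times J times an upper-triangular matrix:

```python
import numpy as np

def mobius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

# A matrix in SL(2,C) with c != 0 (entries illustrative).
a, b, c = 2.0 + 1j, 1.0, 3.0 - 2j
d = (1 + b * c) / a                        # enforce det = ad - bc = 1
g = np.array([[a, b], [c, d]])

# Möbius composition corresponds to matrix multiplication.
h = np.array([[1, 2], [0, 1]], dtype=complex)
z = 0.3 + 0.7j
assert np.isclose(mobius(g, mobius(h, z)), mobius(g @ h, z))

# Bruhat-type factorization g = B1 * J * B2, with J the matrix of j(z) = -1/z.
J = np.array([[0, -1], [1, 0]], dtype=complex)
B1 = np.array([[1, a / c], [0, 1]])        # upper unitriangular
B2 = np.array([[c, d], [0, 1 / c]])        # upper triangular
assert np.allclose(g, B1 @ J @ B2)
```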
Now let
Then
It follows that the group is generated by the operators and J subject to the following relations:
is an additive homomorphism
is a multiplicative homomorphism
The last relation follows from the definition of . The generators and relations above in fact give a presentation of . Indeed, consider the free group Φ generated by J and with J of order 4 and its square central. This consists of all products
for . There is a natural homomorphism of Φ onto . Its kernel contains the normal subgroup Δ generated by the relations above. So there is a natural homomorphism of Φ/Δ onto . To show that it is injective it suffices to show that the Bruhat decomposition also holds in . It is enough to prove the first version, since the more precise version follows from the commutation relations between J and
. The set is invariant under inversion, contains operators and J, so it is enough to show it is invariant under multiplication. By construction it is invariant under multiplication by B. It is invariant under multiplication by J because of the defining equation for .
In particular the center of consists of the scalar matrices and it is the only non-trivial normal subgroup of , so that the quotient is simple. In fact if is a normal subgroup, then the Bruhat decomposition implies that is a maximal subgroup, so that either is contained in or
. In the first case fixes one point and hence every point of , so lies in the center. In the second case, the commutator subgroup of is the whole group, since it is the group generated by lower and upper unitriangular matrices and the fourth relation shows that all such matrices are commutators
since . Writing with in and in , it follows that . Since and generate the whole group, . But then . The right hand side here is Abelian while the left hand side is its own commutator subgroup. Hence this must be the trivial group and .
Given an element a in the complex Jordan algebra , the unital Jordan subalgebra is associative and commutative. Multiplication by a defines an operator on which has a spectrum, namely its set of complex eigenvalues. If is a complex polynomial, then is defined in . It is invertible in if and only if it is invertible in
, which happens precisely when does not vanish on the spectrum of . This permits rational functions of to be defined whenever the function is defined on the spectrum of . If and are rational functions with and defined on , then
is defined on and . This applies in particular to complex Möbius transformations which can be defined by
. They leave invariant and, when defined, the group composition law holds. (In the next section complex Möbius transformations will be defined on the compactification of .)
Given a primitive idempotent in with Peirce decomposition
the action of by Möbius transformations on can be extended to an action on A so that the action leaves invariant the components and in particular acts trivially on . If is the projection onto , the action is given by the formula
For a Jordan frame of primitive idempotents , the actions of associated with different commute, thus giving an action of . The diagonal copy of gives again the action by Möbius transformations on .
Cayley transform
The Möbius transformation defined by
is called the Cayley transform. Its inverse is given by
The inverse Cayley transform carries the real line onto the circle with the point 1 omitted. It carries the upper halfplane onto the unit disk and the lower halfplane onto the complement of the closed unit disk. In operator theory the mapping takes self-adjoint operators T onto unitary operators U not containing 1 in their spectrum. For matrices this follows because unitary and self-adjoint matrices can be diagonalized and their eigenvalues lie on the unit circle or real line. In this finite-dimensional setting the Cayley transform and its inverse establish a bijection between the matrices of operator norm less than one and operators with imaginary part a positive operator. This is the special case for of the Jordan algebraic result, explained below, which asserts that the Cayley transform and its inverse establish a bijection between the bounded domain and the tube domain .
In the case of matrices, the bijection follows from resolvent formulas. In fact if the imaginary part of is positive, then is invertible since
In particular, setting ,
Equivalently
is a positive operator, so that ||P(T)|| < 1. Conversely if ||U|| < 1 then is invertible and
Since the Cayley transform and its inverse commute with the transpose, they also establish a bijection for symmetric matrices. This corresponds to the Jordan algebra of symmetric complex matrices, the complexification of .
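These resolvent formulas can be illustrated numerically for complex symmetric matrices T = x + iy: when the imaginary part y is positive definite, the Cayley transform U = (T − i)(T + i)−1 has operator norm less than one and stays complex symmetric, and the inverse transform recovers T. A sketch (setup illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
sym = lambda m: (m + m.T) / 2
x = sym(rng.normal(size=(n, n)))
w = rng.normal(size=(n, n))
y = w @ w.T + np.eye(n)                  # positive definite imaginary part
T = x + 1j * y                           # a point of the tube domain E + iC
I = np.eye(n)

# Cayley transform into the bounded domain.
U = (T - 1j * I) @ np.linalg.inv(T + 1j * I)
assert np.linalg.norm(U, 2) < 1          # spectral (operator) norm < 1
assert np.allclose(U, U.T)               # still complex symmetric

# Inverse Cayley transform recovers T.
T_back = 1j * (I + U) @ np.linalg.inv(I - U)
assert np.allclose(T_back, T)
```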
In the above resolvent identities take the following form:
and equivalently
where the Bergman operator is defined by with . The inverses here are well defined. In fact in one direction is invertible for ||u|| < 1: this follows either using the fact that the norm satisfies ||ab|| ≤ ||a|| ||b||; or using the resolvent identity and the invertibility of (see below). In the other direction if the imaginary part of is in then the imaginary part of is positive definite so that is invertible. This argument can be applied to , so it is also invertible.
To establish the correspondence, it is enough to check it when is simple. In that case it follows from the connectivity of and and because:
The first criterion follows from the fact that the eigenvalues of are exactly if the eigenvalues of are . So the are either all positive or all negative. The second criterion follows from the fact that if
with and u in , then has eigenvalues . So the are either all less than one or all greater than one.
The resolvent identity is a consequence of the following identity for and invertible
In fact in this case the relations for a quadratic Jordan algebra imply
so that
The equality of the last two terms implies the identity, replacing by .
Now set and . The resolvent identity is a special case of the following more general identity:
In fact
so the identity is equivalent to
Using the identity above together with , the left hand side equals . The right hand side equals . These are equal because of the formula .
Automorphism group of bounded domain
If lies in the bounded domain , then is invertible. Since is invariant under multiplication by scalars of modulus ≤ 1, it follows that
is invertible for |λ| ≥ 1. Hence for ||a|| ≤ 1, is invertible for |λ| > 1. It follows that the Möbius transformation is defined for ||a|| ≤ 1 and in . Where defined it is injective. It is holomorphic on . By the maximum modulus principle, to show that maps onto it suffices to show it maps onto itself. For in that case and its inverse preserve so must be surjective. If with in , then lies in . This is a commutative associative algebra and the spectral norm is the supremum norm. Since with |ςi| = 1, it follows that where |g(ςi)| = 1. So lies in .
This is a direct consequence of the definition of the spectral norm.
This is already known for the Möbius transformations, i.e. the diagonal in . It follows for diagonal matrices in a fixed component in because they correspond to transformations in the unitary structure group. Conjugating by a Möbius transformation is equivalent to conjugation by a matrix in that component. Since the only non-trivial normal subgroup of is its center, every matrix in a fixed component carries onto itself.
Given an element in , a transformation in the identity component of the unitary structure group carries it to an element in with supremum norm less than 1. A transformation in then carries it onto zero. Thus there is a transitive group of biholomorphic transformations of . The symmetry is a biholomorphic Möbius transformation fixing only 0.
If is a biholomorphic self-mapping of with and derivative at 0, then must be the identity. If not, has Taylor series expansion with homogeneous of degree and . But then . Let be a functional in of norm one. Then for fixed in , the holomorphic functions of a complex variable given by must have modulus less than 1 for |w| < 1. By Cauchy's inequality, the coefficients of must be uniformly bounded independent of , which is not possible if .
If is a biholomorphic mapping of onto itself just fixing 0 then
if , the mapping fixes 0 and has derivative there. It is therefore the identity map. So for any α. This implies g is a linear mapping. Since it maps onto itself it maps the closure onto itself. In particular it must map the Shilov boundary onto itself. This forces to be in the unitary structure group.
The orbit of 0 under AD is the set of all points with . The orbit of these points under the unitary structure group is the whole of . The Cartan decomposition follows because is the stabilizer of 0 in .
In fact the only point fixed by (the identity component of) KD in D is 0. Uniqueness implies that the center of GD must fix 0. It follows that the center of GD lies in KD. The center of KD is isomorphic to the circle group: a rotation through θ corresponds to multiplication by eiθ on D so lies in }. Since this group has trivial center, the center of GD is trivial.
In fact any larger compact subgroup would intersect AD non-trivially and it has no non-trivial compact subgroups.
Note that GD is a Lie group (see below), so that the above three statements hold with GD and KD replaced by their identity components, i.e. the subgroups generated by their one-parameter subgroups. Uniqueness of the maximal compact subgroup up to conjugacy follows from a general argument or can be deduced for classical domains directly using Sylvester's law of inertia following . For the example of Hermitian matrices over C, this reduces to proving that is up to conjugacy the unique maximal compact subgroup in . In fact if , then is the subgroup of preserving W. The restriction of the Hermitian form is given by the inner product on minus the inner product on .
On the other hand, if is a compact subgroup of , there is a -invariant inner product on obtained by averaging any inner product with respect to Haar measure on . The Hermitian form corresponds to an orthogonal decomposition into two subspaces of dimension both invariant under with the form positive definite on one and negative definite on the other. By Sylvester's law of inertia, given two subspaces of dimension on which the Hermitian form is positive definite, one is carried onto the other by an element of . Hence there is an element of such that the positive definite subspace is given by . So leaves invariant and .
A similar argument, with quaternions replacing the complex numbers, shows uniqueness for the symplectic group, which corresponds to Hermitian matrices over R. This can also be seen more directly by using complex structures. A complex structure is an invertible operator J with J2 = −I preserving the symplectic form B and such that −B(Jx,y) is a real inner product. The symplectic group acts transitively on complex structures by conjugation. Moreover, the subgroup commuting with J is naturally identified with the unitary group for the corresponding complex inner product space. Uniqueness follows by showing that any compact subgroup K commutes with some complex structure J. In fact, averaging over Haar measure, there is a K-invariant inner product on the underlying space. The symplectic form yields an invertible skew-adjoint operator T commuting with K. The operator S = −T2 is positive, so has a unique positive square root, which commutes with K. So J = S−1/2T, the phase of T, has square −I and commutes with K.
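The final construction above, the phase J = S−1/2T of an invertible skew-adjoint operator T with S = −T2, is easy to realize numerically. The sketch below (setup illustrative) checks that J is a complex structure that commutes with T and that −TJ is positive definite, so the associated form is an inner product:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6                                      # even dimension
m = rng.normal(size=(n, n))
T = (m - m.T) / 2                          # skew-adjoint, generically invertible

S = -T @ T                                 # positive definite, commutes with T
lam, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(lam ** -0.5) @ V.T

J = S_inv_half @ T                         # the "phase" of T
assert np.allclose(J @ J, -np.eye(n))      # complex structure: J^2 = -I
assert np.allclose(J.T, -J)                # J is skew-adjoint
assert np.allclose(J @ T, T @ J)           # J commutes with T
assert np.all(np.linalg.eigvalsh(-(T @ J)) > 0)   # -TJ = S^{1/2} is positive
```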
Automorphism group of tube domain
There is a Cartan decomposition for GT corresponding to the action on the tube T = E + iC:
KT is the stabilizer of i in iC ⊂ T, so a maximal compact subgroup of GT. Under the Cayley transform, KT corresponds to KD, the stabilizer of 0 in the bounded symmetric domain, where it acts linearly. Since GT is semisimple, every maximal compact subgroup is conjugate to KT.
The center of GT or GD is trivial. In fact the only point fixed by KD in D is 0. Uniqueness implies that the center of GD must fix 0. It follows that the center of GD lies in KD and hence that the center of GT lies in KT. The center of KD is isomorphic to the circle group: a rotation through θ corresponds to multiplication by eiθ on D. In Cayley transform it corresponds to the Möbius transformation z ↦ (cz + s)(−sz + c)−1 where c = cos θ/2 and s = sin θ/2. (In particular, when θ = π, this gives the symmetry j(z) = −z−1.) In fact all Möbius transformations z ↦ (αz + β)(−γz + δ)−1 with αδ − βγ = 1 lie in GT. Since PSL(2,R) has trivial center, the center of GT is trivial.
AT is given by the linear operators Q(a) with a = Σ αi ei with αi > 0.
In fact the Cartan decomposition for GT follows from the decomposition for GD. Given in , there is an element in , the identity component of , such that with . Since ||z|| < 1, it follows that . Taking the Cayley transform of z, it follows that every in can be written , with the Cayley transform and in . Since
with
, the point is of the form with in . Hence .
3-graded Lie algebras
Iwasawa decomposition
There is an Iwasawa decomposition for GT corresponding to the action on the tube T = E + iC:
KT is the stabilizer of i in iC ⊂ T.
AT is given by the linear operators Q(a) where a = Σ αi ei with αi > 0.
NT is a lower unitriangular group on EC. It is the semidirect product of the unipotent triangular group N appearing in the Iwasawa decomposition of G (the symmetry group of C) and N0 = E, group of translations x ↦ x + b.
The group S = AN acts on E linearly and conjugation on N0 reproduces this action. Since the group S acts simply transitively on C, it follows that ANT = S⋅N0 acts simply transitively on T = E + iC. Let HT be the group of biholomorphisms of the tube T. The Cayley transform shows that HT is isomorphic to the group HD of biholomorphisms of the bounded domain D. Since ANT acts simply transitively on the tube T while KT fixes i, they have trivial intersection.
Given g in HT, take s in ANT such that g−1(i) = s−1(i). Then gs−1 fixes i and therefore lies in KT. Hence HT = KT⋅AT⋅NT. So the product is a group.
Lie group structure
By a result of Henri Cartan, HD is a Lie group. Cartan's original proof is presented in . It can also be deduced from the fact that D is complete for the Bergman metric, for which the isometries form a Lie group; by Montel's theorem, the group of biholomorphisms is a closed subgroup.
That HT is a Lie group can be seen directly in this case. In fact there is a finite-dimensional 3-graded Lie algebra of vector fields with an involution σ. The Killing form is negative definite on the +1 eigenspace of σ and positive definite on the −1 eigenspace. As a group HT normalizes since the two subgroups KT and ANT do. The +1 eigenspace corresponds to the Lie algebra of KT. Similarly the Lie algebras of the linear group AN and the affine group N0 lie in . Since the group GT has trivial center, the map into GL() is injective. Since KT is compact, its image in GL() is compact. Since the Lie algebra is compatible with that of ANT, the image of ANT is closed. Hence the image of the product is closed, since the image of KT is compact. Since it is a closed subgroup, it follows that HT is a Lie group.
Generalizations
Euclidean Jordan algebras can be used to construct Hermitian symmetric spaces of tube type. The remaining Hermitian symmetric spaces are Siegel domains of the second kind. They can be constructed using Euclidean Jordan triple systems, a generalization of Euclidean Jordan algebras. In fact for a Euclidean Jordan algebra E let
Then L(a,b) gives a bilinear map into End E such that
and
Any such bilinear system is called a Euclidean Jordan triple system. By definition the operators L(a,b) form a Lie subalgebra of End E.
The Kantor–Koecher–Tits construction gives a one-one correspondence between Jordan triple systems and 3-graded Lie algebras
satisfying
and equipped with an involutive automorphism σ reversing the grading. In this case
defines a Jordan triple system on . In the case of Euclidean Jordan algebras or triple systems the Kantor–Koecher–Tits construction can be identified with the Lie algebra of the Lie group of all holomorphic automorphisms of the corresponding bounded symmetric domain.
The Lie algebra is constructed by taking to be the Lie subalgebra of End E generated by the L(a,b) and to be copies of E. The Lie bracket is given by
and the involution by
The Killing form is given by
where β(T1,T2) is the symmetric bilinear form defined by
These formulas, originally derived for Jordan algebras, work equally well for Jordan triple systems.
The account in develops the theory of bounded symmetric domains starting from the standpoint of 3-graded Lie algebras. For a given finite-dimensional vector space E, Koecher considers finite-dimensional Lie algebras of vector fields on E with polynomial coefficients of degree ≤ 2. consists of the constant vector fields ∂i and must contain the Euler operator H = Σ xi⋅∂i as a central element. Requiring the existence of an involution σ leads directly to a Jordan triple structure on V as above. As for all Jordan triple structures, fixing c in E,
the operators Lc(a) = L(a,c) give E a Jordan algebra structure, determined by c. The operators L(a,b) themselves come from a Jordan algebra structure as above if and only if there are additional operators E± in so that H, E± give a copy of . The corresponding Weyl group element implements the involution σ. This case corresponds to that of Euclidean Jordan algebras.
The remaining cases are constructed uniformly by Koecher using involutions of simple Euclidean Jordan algebras. Let E be a simple Euclidean Jordan algebra and τ a Jordan algebra automorphism of E of period 2. Thus E = E+1 ⊕ E−1 has an eigenspace decomposition for τ with E+1 a Jordan subalgebra and E−1 a module. Moreover, a product of two elements in E−1 lies in E+1. For a, b, c in E−1, set
and (a,b)= Tr L(ab). Then F = E−1 is a simple Euclidean Jordan triple system, obtained by restricting the triple system on E to F. Koecher exhibits explicit involutions of simple Euclidean Jordan algebras directly (see below). These Jordan triple systems correspond to irreducible Hermitian symmetric spaces given by Siegel domains of the second kind. In Cartan's listing, their compact duals are SU(p + q)/S(U(p) × U(q)) with p ≠ q (AIII), SO(2n)/U(n) with n odd (DIII) and E6/SO(10) × U(1) (EIII).
Examples
F is the space of p by q matrices over R with p ≠ q. In this case L(a,b)c = abtc + cbta with inner product (a,b) = Tr abt. This is Koecher's construction for the involution on E = Hp + q(R) given by conjugating by the diagonal matrix with p diagonal entries equal to 1 and q to −1.
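The triple product L(a,b)c = abtc + cbta can be tested against the standard five-variable Jordan triple identity {a,b,{c,d,e}} = {{a,b,c},d,e} − {c,{b,a,d},e} + {c,d,{a,b,e}} (quoted here as the usual defining identity of Jordan triple systems, since the source's own displayed identities were lost in extraction). An illustrative check on random p × q matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 2, 3
a, b, c, d, e = (rng.normal(size=(p, q)) for _ in range(5))

# Triple product on p x q real matrices: {a, b, c} = a b^T c + c b^T a.
def tri(a, b, c):
    return a @ b.T @ c + c @ b.T @ a

# Five-variable Jordan triple identity.
lhs = tri(a, b, tri(c, d, e))
rhs = tri(tri(a, b, c), d, e) - tri(c, tri(b, a, d), e) + tri(c, d, tri(a, b, e))
assert np.allclose(lhs, rhs)
```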
F is the space of real skew-symmetric m by m matrices. In this case L(a,b)c = abc + cba with inner product (a,b) = −Tr ab. After removing a factor of √(-1), this is Koecher's construction applied to complex conjugation on E = Hn(C).
F is the direct sum of two copies of the Cayley numbers, regarded as 1 by 2 matrices. This triple system is obtained by Koecher's construction for the canonical involution defined by any minimal idempotent in E = H3(O).
The classification of Euclidean Jordan triple systems has been achieved by generalizing the methods of Jordan, von Neumann and Wigner, but the proofs are more involved. Prior differential geometric methods of , invoking a 3-graded Lie algebra, and of , lead to a more rapid classification.
Notes
References
(reprint of 1951 article)
, originally lecture notes from a course given in the University of Göttingen in 1962
Convex geometry
Non-associative algebras
Lie algebras
Lie groups
Several complex variables | Symmetric cone | Mathematics | 16,326 |
11,380,117 | https://en.wikipedia.org/wiki/Skew-Hamiltonian%20matrix |
Skew-Hamiltonian Matrices in Linear Algebra
In linear algebra, a skew-Hamiltonian matrix is a specific type of matrix that corresponds to a skew-symmetric bilinear form on a symplectic vector space. Let V be a vector space equipped with a symplectic form, denoted by Ω. A symplectic vector space must necessarily be of even dimension.
A linear map A : V → V is defined as a skew-Hamiltonian operator with respect to the symplectic form Ω if the bilinear form (x, y) ↦ Ω(Ax, y) is skew-symmetric.
Given a basis in which Ω is represented by the matrix J below, a linear operator is skew-Hamiltonian with respect to Ω if and only if its corresponding matrix A satisfies the condition ATJ = JA, where J is the skew-symmetric matrix defined as:
J =
[  0   In ]
[ −In  0  ]
with In representing the n × n identity matrix.
Matrices that meet this criterion are classified as skew-Hamiltonian matrices. Notably, the square of any Hamiltonian matrix is skew-Hamiltonian. Conversely, any skew-Hamiltonian matrix can be expressed as the square of a Hamiltonian matrix.
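A quick numerical illustration (block sizes and names are illustrative, and the skew-Hamiltonian condition is taken in the equivalent form that JW is skew-symmetric): build J and a Hamiltonian matrix H, i.e. one with JH symmetric, and check that W = H2 is skew-Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
sym = lambda m: m + m.T
A = rng.normal(size=(n, n))
B = sym(rng.normal(size=(n, n)))    # symmetric block
C = sym(rng.normal(size=(n, n)))    # symmetric block

I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])
H = np.block([[A, B], [C, -A.T]])   # Hamiltonian: J H is symmetric

assert np.allclose((J @ H).T, J @ H)

# The square of a Hamiltonian matrix is skew-Hamiltonian: J W is skew-symmetric.
W = H @ H
assert np.allclose((J @ W).T, -(J @ W))
```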
Notes
Matrices
Linear algebra | Skew-Hamiltonian matrix | Mathematics | 233 |
40,506,832 | https://en.wikipedia.org/wiki/Edison%20and%20Swan%20Electric%20Light%20Company | The Edison and Swan Electric Light Company Limited was a manufacturer of incandescent lamp bulbs and other electrical goods. It was formed in 1883 with the name Edison & Swan United Electric Light Company with the merger of the Swan United Electric Company and the Edison Electric Light Company.
Thomas Edison established the Edison Electric Light Company in 1878. Joseph Swan established the Swan United Electric Light Company in 1881. Swan sued Edison in the UK, claiming patent infringement; this was upheld by the British courts. In 1882, Edison sued Swan, claiming infringement of his 1879 U.S. patent; however, the Edison Company believed their case would be jeopardized if Swan could demonstrate prior research and publication. Subsequently, in order to avoid uncertain and expensive litigation, the two companies negotiated a merger. The glass bulbs sold in Britain were of Swan's design, while the filaments were of Edison's. From 1887 or earlier, Sir Ambrose Fleming was an adviser to the company and conducted research at Ponders End.
The company had offices at 155 Charing Cross Road, London, and factories in Brimsdown, Ponders End and Sunderland. In 1928, the company was acquired by Associated Electrical Industries. In 1956, a new cathode ray tube plant was opened in Sunderland. The company was renamed Siemens Ediswan following the takeover of Siemens Brothers by AEI in 1957. In 1964, AEI merged its lamp and radio valve manufacturing interests with those of Thorn Electrical Industries to form British Lighting Industries Ltd.
Ediswan Valves
Edison Swan (or later Siemens Edison Swan) produced a wide range of vacuum tubes and cathode ray tubes under the names "Ediswan" or "Mazda" and the 1964 Mazda Valve Data Book claimed: "Professor Sir. Ambrose Fleming... was Technical Consultant to the Edison Swan Company at the time. It was this close co-operation between University and Factory which resulted in the first radio valve in the world."
Ediswan still survives as a manufacturer of valves (located in Bromsgrove, England).
See also
La Compagnie des Lampes (1921), EdiSwan's French counterpart, which also made light bulbs and electronic tubes under the Mazda brand
References
Further reading
Bowers, Brian. "The Rise of the Electricity Supply Industry." History Today (March 1972), Vol. 22 Issue 3, pp 176–183 online
Bowers, Brian. "Edison and Early Electrical Engineering in Britain." History of Technology Volume 13 (2016): 168+
David, Paul A., and Julie Ann Bunn. "The economics of gateway technologies and network evolution: Lessons from electricity supply history." Information economics and policy 3.2 (1988): 165–202.
Hughes, Thomas Parke. "British Electrical Industry Lag: 1882-1888" Technology and Culture 3#1 (1962), pp. 27–44 online
External links
http://www.gracesguide.co.uk/Edison_Swan_Electric_Co
http://www.vintage-technology.info/pages/ephemera/vemazda.htm
Electrical engineering companies of the United Kingdom
Lighting brands
Vacuum tubes
General Electric Company
Associated Electrical Industries
Manufacturing companies established in 1883
Manufacturing companies disestablished in 1964
1883 establishments in England
1964 disestablishments in England
| Edison and Swan Electric Light Company | Physics | 676 |
639,203 | https://en.wikipedia.org/wiki/Zompist.com |
Zompist.com is a website created by Mark Rosenfelder a.k.a. Zompist, a conlanger. It features essays on comics, politics, language, and science, as well as a detailed description of Rosenfelder's constructed world, Almea. The website is also the home of The Language Construction Kit, Rosenfelder's article introducing new conlangers to the hobby.
Many features of the site have been noted by the press, including its culture tests, humorous excerpts from phrase books, its collection of numbers in over 5000 languages, and The Language Construction Kit.
The Language Construction Kit
The Language Construction Kit was originally a collection of HTML documents written by Rosenfelder and hosted at Zompist.com intended to be a guide for making constructed languages. The LCK proceeds from the simplest aspects of language upward, starting with phonology and writing systems, moving on to words, going through the complexities of grammar, and ending with an overview of registers and dialects. This sensible progression, as well as the warnings against common oversights, frequent use of examples from natural languages, and healthy dose of humor, has earned the LCK its popular and respected status among the Internet conlanging community. It has been translated into Spanish, Portuguese, Italian, and German by fans, and came out in book form in March 2010. Rosenfelder has published several follow-up works: Advanced Language Construction and The Conlanger's Lexipedia, which get into more detail on certain aspects of conlanging, and The Planet Construction Kit, which is geared towards creating whole fantasy worlds. In 2015, Rosenfelder published the China Construction Kit.
The Zompist Bulletin Board
The website has a corresponding bulletin board, formerly hosted with SpinnWebe but now with its own domain at www.verduria.org. The Zompist Bulletin Board (often abbreviated ZBB) is an online forum created for the purpose of discussing conlangs, conworlds, and Mark Rosenfelder's own constructed world, Almea. Members of the board share and showcase their own conlangs and conworlds, as well as discuss aspects of the world's languages.
Almea
Almea is a fictional world constructed by Mark Rosenfelder, to which much of Zompist.com is dedicated. It is populated by several races, known as the Thinking Kinds: humans; the ktuvoks (swamp-dwelling mammals with reptilian characteristics, considered demons by most Almean humans); the iliî (singular iliu), an ancient, wise aquatic race playing a role similar to that of elves in Tolkien's mythos; the flaids, said to be 'friendly but insane'; the elcari, hard-working mountain dwellers comparable to Tolkien's dwarves; and the icëlani, more primitive relatives of humans. Almea's main continent, Ereláe, has several nations, including Verduria, the most detailed nation and the closest Almean counterpart to real-life countries; Dhekhnam, a ktuvok empire (meaning that humans there serve the ktuvoks as slaves); Xurno, a nation ruled by artists; and Skouras, a detailed maritime nation. Ereláe also has a detailed historical atlas, inspired by the New Penguin Historical Atlases. In addition to the various atlases and languages, there is a wiki called the Almeopedia, which serves as an encyclopedic reference. The part of the website devoted to Almea, Virtual Verduria, also includes a range of stories and guides to various subjects, including drawing and maps.
Languages of Almea described on the website include:
Verdurian
Ismaîn
Barakhinei
Caďinor
Sarroc
Cuêzi
Axunašin
Xurnese
Proto-Eastern
Kebreni
Munkhâshi
Wede:i
Old Skourene
Elkarîl
Flaidish
Uyseʔ
Lé
Dhekhnami
Obenzayet
Bhöɣetan
Šɯk
Most words in these languages have etymologies: they are derived from proto-languages (such as Proto-Eastern above) by means of regular sound changes and are given historical backgrounds, which also accounts for the presence of loanwords.
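The derivation method described here (a proto-form passed through an ordered list of sound-change rules) can be sketched in a few lines of code. The rules and the proto-word below are invented for illustration; they are not Rosenfelder's actual Almean sound changes, which are documented on the site itself.

```python
import re

# Illustrative, invented sound-change rules (not Almean data): each pair is
# (regex pattern, replacement), applied to the whole word in order.
SOUND_CHANGES = [
    (r"p(?=[aeiou])", "b"),    # voicing: p becomes b before a vowel
    (r"ti", "chi"),            # palatalization: ti becomes chi
    (r"([aeiou])\1", r"\1"),   # shortening: a doubled vowel becomes single
]

def derive(proto_word: str) -> str:
    """Derive a daughter-language form by applying each rule in order."""
    word = proto_word
    for pattern, replacement in SOUND_CHANGES:
        word = re.sub(pattern, replacement, word)
    return word

print(derive("paatina"))  # prints: bachina
```

Because the rules are ordered, the same proto-form can yield different daughter forms simply by giving each language its own rule list, which is how shared etymologies across a language family can be modeled.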
See also
Langmaker
References
External links
Zompist Bulletin Board
Constructed languages resources
Internet forums
Linguistics websites
Linguistics databases
Numerals
| Zompist.com | Mathematics | 919 |
30,471,780 | https://en.wikipedia.org/wiki/C13H20 |
{{DISPLAYTITLE:C13H20}}
The molecular formula C13H20 (molar mass: 176.303 g/mol) may refer to:
Tetracyclopropylmethane, a polycyclic hydrocarbon
Various alkylbenzenes, derivatives of benzene
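The molar mass quoted above can be verified by summing standard atomic weights. This is a minimal sketch using the commonly tabulated rounded values for carbon and hydrogen:

```python
# Standard atomic weights (commonly tabulated rounded values).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008}

def molar_mass(formula: dict) -> float:
    """Sum atomic weight times atom count for each element in the formula."""
    return sum(ATOMIC_WEIGHTS[element] * count
               for element, count in formula.items())

# C13H20: 13 * 12.011 + 20 * 1.008 = 176.303 g/mol
mass = molar_mass({"C": 13, "H": 20})
print(round(mass, 3))  # prints: 176.303
```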
Molecular formulas
| C13H20 | Physics,Chemistry | 71 |
22,547,607 | https://en.wikipedia.org/wiki/Tron%3A%20Legacy |
Tron: Legacy (stylized as TRON: Legacy) is a 2010 American science fiction action film directed by Joseph Kosinski from a screenplay by Adam Horowitz and Edward Kitsis, based on a story by Horowitz, Kitsis, Brian Klugman, and Lee Sternthal. The second installment in the Tron series, it serves as a sequel to Tron (1982), whose director Steven Lisberger returned to co-produce. The cast includes Jeff Bridges and Bruce Boxleitner reprising their roles as Kevin Flynn and Alan Bradley, respectively, as well as Garrett Hedlund, Olivia Wilde, James Frain, Beau Garrett, and Michael Sheen. The story follows Flynn's adult son Sam, who responds to a message from his long-lost father and is transported into a virtual reality called "the Grid", where Sam, his father, and the algorithm Quorra must stop the malevolent program Clu from invading the real world.
Interest in creating a sequel to Tron arose after the film garnered a cult following. After much speculation, Walt Disney Pictures began a concerted effort in 2005 to devise a sequel, with the hiring of Klugman and Sternthal as writers. Kosinski was recruited as director two years later. As he was not optimistic about Disney's The Matrix-esque approach to the film, Kosinski filmed a concept trailer, which he used to conceptualize the universe of Tron: Legacy and convince the studio to greenlight the film. Principal photography took place in Vancouver over 67 days, in and around the city's central business district. Most sequences were shot in 3D and ten companies were involved with the extensive visual effects work. Chroma keying and other techniques were used to allow more freedom in creating effects. Daft Punk composed the musical score, incorporating orchestral sounds with their trademark electronic music.
Tron: Legacy premiered in Tokyo on November 30, 2010, and was released in the United States on December 17, by Walt Disney Studios Motion Pictures. Disney vigorously promoted the film across multiple media platforms, including merchandising, consumer products, theme parks, and advertising. Upon its release, the film received mixed reviews from critics, who criticized the story and character development, but praised the performances of Bridges and Sheen, the visual effects, production design, and soundtrack. It was a commercial success, grossing $409 million during its worldwide theatrical run against a $170 million production budget. The film was nominated for an Academy Award for Best Sound Editing at the 83rd Academy Awards, but lost to Inception. Like its predecessor, Tron: Legacy has been described as a cult film since its release. A standalone sequel, Tron: Ares, is scheduled to be released on October 10, 2025.
Plot
In 1989, Kevin Flynn, who was promoted to CEO of ENCOM International seven years earlier, disappears. Twenty years later, his son Sam, now ENCOM's primary shareholder, pranks the corporation by releasing the company's signature operating system online for free. ENCOM executive Alan Bradley, Kevin's old friend, approves of this, believing it aligns with Flynn's ideals of free software. Nonetheless, Sam is arrested for trespassing.
Alan posts bail for Sam and tells him of a pager message originating from Flynn's shuttered video arcade, after being disconnected for 20 years. There, Sam discovers a hidden basement with a large computer and laser, which suddenly digitizes and downloads him into the Grid, a virtual reality created by Kevin. He is captured and sent to "the Games", where he must fight a masked computer program named Rinzler. When Sam is injured and bleeds, Rinzler realizes Sam is human, or a "User". He takes Sam to Clu, the Grid's corrupt ruling program, who resembles a young Kevin.
Clu nearly kills Sam in a Light Cycle match, but Sam is rescued by Quorra, an "apprentice" of Flynn, who shows him Kevin's hideout outside Clu's territory. Kevin explains that he had been working to create a "perfect" computer system and had appointed Clu and security program Tron as its co-creators. The trio discovered a species of naturally occurring "isomorphic algorithms" (ISOs), with the potential to resolve various natural mysteries. Clu, considering them an aberration, betrayed Kevin, killed Tron, and destroyed the ISOs. The "Portal" permitting travel between the two worlds closed, leaving Kevin trapped in the system. Clu sent the message to Alan hoping to lure him into the Grid (though Sam serves his purpose just as well) and reopen the Portal for a limited time. Since Flynn's "identity disc" is the master key to the Grid and the only way to traverse the Portal, Clu expects Sam to bring Kevin to the Portal so he can take Flynn's disc, go through the Portal himself, and impose his idea of perfection on the human world.
Against his father's wishes, Sam returns to Clu's territory to find Zuse, a program who can provide safe passage to the Portal. At the End of Line Club, the owner reveals himself to be Zuse, then betrays Sam to Clu's guards. In the resulting fight, Kevin rescues his son, but Quorra is injured and Zuse gains possession of Flynn's disc. Zuse attempts to bargain with Clu over the disc, but Clu instead destroys the club along with Zuse. Kevin and Sam stow away aboard a "Solar Sailer" transport program, where Flynn restores Quorra and reveals her to be the last surviving ISO.
The transport is intercepted by Clu's warship. As a diversion, Quorra allows herself to be captured by Rinzler, whom Kevin recognizes as Tron, not killed by Clu but rather reprogrammed. Sam reclaims Flynn's disc and rescues Quorra, while Kevin takes control of a Light Fighter. Clu, Rinzler, and several guards pursue the trio in Light Jets. Rinzler remembers his past as Tron and deliberately collides with Clu's Light Jet, then falls into the Sea of Simulation below. Clu confronts the others at the Portal, but Kevin reintegrates with his digital duplicate, destroying Clu along with himself. Quorra – having switched discs with Kevin – gives Flynn's disc to Sam, and they escape together to the real world as the ensuing explosion from Kevin's sacrifice levels the Sea of Simulation.
In Flynn's arcade, Sam backs up and deactivates the system. He then tells a waiting Alan that he plans to retake control of ENCOM, naming Alan chairman of the board. Sam departs on his motorcycle with Quorra as the sun rises.
Cast
Garrett Hedlund as Samuel "Sam" Flynn, a primary shareholder of ENCOM who, while investigating his father's disappearance, is transported onto the Grid himself. Hedlund emerged from a "Darwinian casting process" that tested hundreds of actors, chosen for having the "unique combination of intelligence, wit, humor, look and physicality" that the producers were looking for in Flynn's son. The actor trained hard to do his own stunts, which included jumping over cars and copious wire and harness work.
Owen Best as Young Sam Flynn.
Jeff Bridges as Kevin Flynn, the former CEO of ENCOM International and creator of the popular arcade game Tron based on his own experiences in ENCOM's virtual reality, who disappeared in 1989 while developing "a digital frontier that will reshape the human condition."
Bridges also portrays CLU (Codified Likeness Utility) via digital makeup and voiceover, while John Reardon portrays CLU physically. CLU is a more advanced incarnation of Flynn's original computer-hacking program, designed as an "exact duplicate of himself" within the Grid.
Olivia Wilde as Quorra, an "isomorphic algorithm," adept warrior, and confidante of Kevin Flynn in the Grid. Flynn refers to her as his "apprentice" and has imparted volumes of information to her regarding the world outside of the Grid, which she longs to experience. She is shown to have a love of human literature, particularly the writings of Jules Verne, and plays Go with Flynn. She comments that her 'aggressive strategy' is usually foiled by Flynn's patience. Wilde describes Quorra as akin to Joan of Arc. Her hairstyle was influenced by singer Karen O. Wilde added that although "[Quorra] could have just been another slinky, vampy temptress," it was important for the character to appeal to both men and women, and she avoids the typical female lead mold through a naiveté and childlike innocence appropriate to such an "evolving and learning organism." Quorra's action scenes led Wilde to work out and train in martial arts.
Bruce Boxleitner as Alan Bradley, a board member executive for ENCOM, and close friend of Kevin Flynn who, after receiving a cryptic page from the office at the shuttered Flynn's Arcade, encourages Sam to investigate its origin.
Boxleitner also portrays Tron / Rinzler, a security program originally developed by Bradley to monitor ENCOM's Master Control Program and later reassigned by Flynn to defend the Grid. He was overpowered and re-purposed by Clu as a masked command program wielding an identity disk that splits into two. Anis Cheurfa, a stunt actor, portrayed Rinzler, while Boxleitner provided the dialogue and physically appeared as Tron in flashback sequences via the same treatment as Bridges' younger self for CLU. Rinzler is named after author and Lucasfilm Executive Editor J.W. Rinzler.
Michael Sheen as Zuse / Castor, a flamboyant probability program who runs the End of Line Club at the top of the tallest tower in the system. Sheen describes his performance as containing elements of performers such as David Bowie, Joel Grey from Cabaret, and a bit of Frank-N-Furter from The Rocky Horror Show.
James Frain as Jarvis, an administration program who serves as CLU's right-hand man and chief intelligence officer. Frain had to shave his head, bleach his eyebrows white, and wear make-up. The refraction on Jarvis's helmet led Frain to walk in a "slightly squinty, blind stagger" which the actor felt was helpful to get him into character. Frain described Jarvis as "a fun, comic character that's a little off-beat," considering him "more human, in terms of being fallible and absurd" compared to the zanier Castor.
Beau Garrett appears as Gem, one of four programs known as Sirens. The Sirens operate the Grid's game armory, equipping combatants with the armor needed to compete in the games, while also reporting to Castor. Serinda Swan, Yaya DaCosta, and Elizabeth Mathis depict the other three Sirens. Jeffrey Nordling stars as Richard Mackey, the chairman of ENCOM's executive board, and Cillian Murphy makes an uncredited appearance as Edward Dillinger, Jr., the head of ENCOM's software design team and the son of former ENCOM Senior Executive Ed Dillinger portrayed by David Warner in the original film. Daft Punk, who composed the score for the film, cameo as disc jockey programs at Castor's End of Line Club, and Tron creator Steven Lisberger makes an appearance as Shaddix, a bartender in the End of Line Club.
Production
Background
Steven Lisberger relocated from Philadelphia, Pennsylvania, to Boston, Massachusetts, in the 1970s to pursue a career in computer animation. Since the computer animation field was mainly concentrated in Los Angeles, Lisberger had very little competition operating on the East Coast: "Nobody back then did Hollywood stuff, so there was no competition and no one telling us that we couldn't do it." He later produced and directed the American science fiction film Tron (1982) for Walt Disney Productions, one of the first feature films to make extensive use of computer animation. Although the film garnered some critical praise, it generated only modest sales at the box office: the cumulative North American gross was just $33 million. Producer Sean Bailey, who saw the film with his father and Lisberger, was captivated by the finished product. Although Tron performed below the studio's expectations, it later developed a cult following, which by 1999 fueled speculation that Pixar was interested in creating a sequel. Rumors of a Tron sequel were further ignited after the 2003 release of the first-person shooter video game Tron 2.0. Lisberger hinted that a third installment could be in the works, depending on the commercial success of the game.
Writing
Shortly after hiring Kosinski, Bailey approached the screenwriting duo Adam Horowitz and Edward Kitsis, who accepted, describing themselves as "obsessed" with Tron. Horowitz later said the challenge was to "homage the first movie, continue the story, expand it and take it to another place and open up space for new fans," and Kitsis claimed that the film would start a whole new mythology "of which we're only scratching the surface." Horowitz and Kitsis first created a story outline, then developed and fine-tuned the plot with Bailey and Kosinski over two days in La Quinta. The writers also consulted Lisberger to get the Tron creator's input on the story. Lisberger gave his blessing, particularly as he has a son the same age as Sam, which Kitsis said "was like we had tapped into something he was feeling without even realizing it." The Pixar team contributed rewrites for additional shooting after being shown a rough cut in March 2010, which helped in particular with the development of Sam's story line.
The writing staff cited The Wizard of Oz as a source of thematic influence for Tron: Legacy, with Kitsis stating that "They both have very similar DNA, which is Tron really lives on, in a lot of ways, trying to get home. You're put on this world and you want to go home and what is home? That's in a lot of way inspired us." Kitsis also added that they had to include an "emotional spine to take us into the story or else it just becomes a bunch of moves or gags and stuff," eventually deciding to give Flynn a mysterious destiny and a legendary aura – "Kevin Flynn to us was Steve Jobs and Bill Gates all wrapped up into one and John Lennon." The writers created the character of Clu as an evil embodiment of "how you look back on your younger self, (...) that guy [that] thought he knew everything, but he really knew nothing." Bridges liked the idea of the dual perspectives and worked with the writers on the characterization of Flynn as a sanguine Zen master, suggesting they draw inspiration from various Buddhist texts. Some of the concepts emerged from a meeting the producers had with scientists from the California Institute of Technology and the Jet Propulsion Laboratory, where they discussed ideas such as isomorphic algorithms and the digitizing of organic matter.
Horowitz revealed the film would contain many light cycle battles, and asserted that the script for those scenes was "incredibly detailed" and involved an intricate collaborative process. For the disc game, Horowitz and Kitsis wrote a rough draft of the scene and sent the script to Kosinski, who summarized his perspective on the sequence's visuals for them. "He described them as these underlying platforms," said Horowitz, "that would then coalesce and then the way you would go from round to round in the game is you defeat someone, they kinda come together as you see in the movie." After giving his input, Kosinski sent various sketches of the scene to the writers and would often revise the script. Kitsis considered illustrating the characters' stories the most difficult task in writing Tron: Legacy. The writers remained part of the creative process throughout production, which was helpful especially given the difficulty of describing in a tangible way a digital world that "in its very nature defies basic screenwriting conventions."
Conception
Plans for creating Tron: Legacy began to materialize in 2005, when Walt Disney Studios hired screenwriters Brian Klugman and Lee Sternthal as writers for the film. The two had recently finished writing the script for Warrior. According to Variety columnist Michael Fleming, Klugman and Sternthal felt "that the world has caught up to Lisberger's original concept." Klugman said of the precedent film: "It was remembered not only for story, but a visual style that nobody had ever used before. We are contemporizing it, taking ideas that were ahead of the curve and applying them to the present, and we feel the film has a chance to resonate to a younger audience."
In 2007, Disney began to negotiate with Joseph Kosinski to direct Tron: Legacy. Kosinski admitted that at the time he was not keen on the idea, but it grew on him as time progressed. Kosinski was involved in a meeting with Bailey, president of Walt Disney Pictures. "Disney owns the property, Tron," Bailey stated. "Do you know it? Are you interested? What would your take be? In a post-Matrix world, how do you go back to the world of Tron?" Kosinski wanted to embrace the general ambiance of the original film and did not wish to use the Internet as a model or follow a formula emulative of The Matrix film series. As the two could not settle on a perspective from which to conceive the film, Kosinski asked Bailey to lend him money to create a conceptual prototype of the Tron: Legacy universe, which was eventually presented at the 2009 San Diego Comic-Con. "So, we went into Disney," he recalled, "and I told them, 'We can talk about this all day, but in order to really get on the same page, I need to show you what this world looks and feels like. Give me some money and let me do a small test that will give you a hint for a couple minutes of it, and see what you think.'"
A graduate of Columbia University's architecture school, Kosinski found his knowledge of architecture pivotal in conceptualizing the Tron: Legacy universe. His approach to developing a prototype differed from that of other film directors because, as he put it, he came "from a design point of view"; "Some of my favorite directors come from outside of the film business, so that made my approach different from other directors, but a design background makes sense for a movie like this because the whole world has to be made from scratch." Lisberger would later state that he left the sequel to a different production team because "after thirty years I don't want to compete with myself," and to showcase how the next generation dealt with the themes contained in Tron – "If I brought my network in, it would be a little bit like one of those Clint Eastwood movies where all the old guys go to space." Lisberger added that "I dig this role of being the Obi-Wan or the Yoda on this film more than being the guy in the trenches," stating that unlike Kosinski his age was a hindering factor – "I cannot work sixteen hours a day staring at twenty-five monitors for most of that time."
Themes
Tron: Legacy is imbued with several references to religious themes, particularly those relating to Christianity and Buddhism. Olivia Wilde's character, Quorra, was inspired by the historical Catholic figure Joan of Arc. Wilde began drawing on the figure six months before production of the film commenced, and, alongside Kosinski, worked with the writers on shaping the character so that she would carry the characteristics of Joan of Arc. Wilde assessed those characteristics: "She's this unlikely warrior, very strong but compassionate, and completely led by selflessness. Also, she thinks she's in touch with some higher power and has one foot in another world. All of these were elements of Quorra." Since Joan of Arc epitomizes the concept of androgyny, the producers conceived Quorra from an androgynous perspective, notably giving her a short haircut.
Bridges opined that Tron: Legacy was evocative of a modern myth, adding that ideas alluding to technological advancement were prevalent throughout the film. To Cyriaque Lamar of io9, the film's approach to technology was reminiscent of a kōan. "One of the things that brought me to this film," affirmed Bridges, "was the idea of helping to create a modern-day myth to help us navigate through these technological waters [...]. I dig immediate gratification as much as anybody, but it happens so fast that if you make a decision like that, you can go far down the wrong path. Think about those plastic single-use water bottles. Where did that come from? Who decided that? You can have a couple of swigs of water [...] and those bottles don't disintegrate entirely. Microscopic animals eat the plastic, and the fish eat those, and we're all connected. It's a finite situation here."
According to screenwriter Adam Horowitz, Kosinski stated that the film's universal theme was "finding a human connection in a digital world." They followed this by "approach[ing] the world from the perspective of character, using Kevin Flynn as an organizing principle, and focus on the emotional relationship from father and son and their reconciliation, which brings profound turns in their respective individual lives."
Development
At the 2008 San Diego Comic-Con, a preliminary teaser trailer (labeled as TR2N and directed by Joseph Kosinski) was shown as a surprise to convention guests. It depicted a yellow Program engaged in a light cycle battle with a blue Program, and it prominently featured Jeff Bridges reprising his role as an aged Kevin Flynn (from the first film). At the end of the trailer, the yellow Program showed his face, which appeared identical to Flynn's earlier program Clu (resembling the younger Flynn in Tron).
While the trailer did not confirm that a Tron sequel was in production, it showed that Disney was serious about a sequel. In an interview with Sci-Fi Wire, Bridges revealed that the test footage was unlikely to appear in the finished film. On July 23, 2009, Disney revealed the film's title at their panel at Comic-Con. Bridges explained that the title is in reference to the story's theme: "It's basically a story about a son's search for his father." They also showed a trailer similar to the one shown at Comic-Con 2008, with updated visuals. At the time, the film had just wrapped production and a year of post-production lay ahead. Because none of the footage from inside the computer world was finished, they premiered concept images from the production. Art included the Recognizer, which had been updated from the original film. Concept photos were also shown of Disc Wars, which had also been revised from the original film into a 16-game tournament. The arena is set up so that the game court organically changes, and all 16 games go on at the same time. The boards also combine in real time until the last two Disc warriors are connected.
Light cycles make a return, with new designs by Daniel Simon. According to the press conference at Comic-Con 2008, a new vehicle appears called the "Light Runner," a two-seat version of the light cycle, alongside Kevin Flynn's own cycle, a "Second Generation Light Cycle" designed by Flynn in 1989 that is "still the fastest thing on The Grid." It incorporates some of the look of both films.
A life-size model of the light cycle was put on display at a booth at Fan Expo 2009 in Toronto, Ontario from August 28–30, 2009, along with a special presentation of material from the production. The conceptual art shown at Comic-Con was shown in the session, along with some test film of the martial artists who play a more athletic style of Disc Wars. A segment from the film showed Flynn's son entering the now-decrepit arcade, playing a Tron stand-up arcade video game, noticing a passage in the wall behind the Tron game and entering it, the passage closing behind him. Flynn's son makes the visit to the arcade after Alan Bradley receives a page from the disconnected phone number of the arcade. The footage was used later as part of the trailer released on March 5, 2010.
The character of Yori and her user, Dr. Lora Baines, do not appear in the sequel, even though the film refers to Alan Bradley being married to Lora. Fans have lobbied for actress Cindy Morgan to be in the film with active campaigns online, such as "Yori Lives" on Facebook, which is independent of Morgan herself. "All I know is what I'm seeing online," Morgan said. "I am so thrilled and touched and excited about the fan reaction and about people talking about the first one and how it relates to the second one. I can't tell you how warm a feeling I get from that. It just means so much." No one from Tron: Legacy had contacted Morgan, and she did not directly speak with anyone from the sequel's cast and crew. As Dr. Lora Baines, Cindy Morgan had appeared with Bruce Boxleitner (as Alan Bradley) at the Encom Press Conference in San Francisco, April 2, 2010.
Filming
Principal photography began in Vancouver, British Columbia, in April 2009 and lasted approximately 67 days. Many filming locations were established in Downtown Vancouver and its surroundings. Stage shooting took place at the Canadian Motion Picture Park studio in Burnaby, an adjacent city that forms part of Metro Vancouver. Kosinski devised and constructed twelve to fifteen of the film's sets, including Kevin Flynn's safe house, a creation he illustrated on a napkin for a visual effects test. "I wanted to build as much as possible. It was important to me that this world feel real, and anytime I could build something I did. So I hired guys that I went to architecture school with to work on the sets for this film, and hopefully people who watch the film feel like there's a certain physicality to this world that hopefully they appreciate, knowing that real architects actually put this whole thing together." The film was shot in dual-camera 3D using Pace Fusion rigs, as on James Cameron's Avatar, but unlike the Sony F950 cameras on that film, Tron used F35s. "The benefit of [the F35s]," according to Kosinski, "is that it has a full 35mm sensor which gives you that beautiful cinematic shallow depth of field." The film's opening portions were shot in 2D, while forty minutes of the film were vertically enhanced for IMAX. Digital Domain was contracted to work on the visual effects, while companies such as Prime Focus Group, DD Vancouver, and Mr. X were brought on to collaborate with the producers during post-production. Post-production wrapped on November 25, 2009.
The sequences on the Grid were shot entirely in 3D, using cameras specifically designed for it, and employed a 3D technique combined with other special effects techniques. The real-world sequences were filmed in 2D and later given three-dimensional enhancement. Bailey stated that shooting Tron: Legacy in 3D was a challenge because the cameras were bigger and heavier and variations needed to be taken into account. Despite these concerns, he opined that it was a "great reason to go to the movies because it's an experience you just can't recreate on an iPhone or a laptop." In some sequences the image shows a fine mesh pattern and some blurring; this is not interference or a production fault, but indicates that the sequence is a flashback, simulating an older form of video technology. Stunt work on the film was designed and coordinated by 87Eleven, who also designed and trained fight sequences for 300 and Watchmen. Olivia Wilde described it as an honor to train with them.
Design
In defining his method for creating Tron: Legacy, Kosinski declared that his main objective was to "make it feel real," adding that he wanted the audience to feel like filming actually occurred in the fictional universe. For this, many physical sets were built, as Kosinski "wanted the materials to be real materials: glass, concrete, steel, so it had this kind of visceral quality." Kosinski collaborated with people who specialized in fields outside of the film industry, such as architecture and automotive design. The look of the Grid aimed for a more advanced version of the cyberspace visited by Flynn in Tron, which Lisberger described as "a virtual Galapagos, which has evolved on its own." As Bailey put it, the Grid would not have any influence from the Internet, as it had been cut off from the real world in the 1980s and "grew on its own server into something powerful and unique." Kosinski added that as the simulation became more realistic, it would try to become closer to the real world with environmental effects such as rain and wind, and production designer Darren Gilford stated that there would be a juxtaposition between the variety of texture and color of the real-world introduction in contrast with the "clean surfaces and lines" of the Grid. As the design team considered the lights a major part of the Tron look, particularly for being set in a dark world—described by effects art director Ben Procter as "dark silhouetted objects dipped in an atmosphere with clouds in-between, in a kind of Japanese landscape painting" where "the self-lighting of the objects is the main light source"—lighting was spread through every prop on the set, including the floor in Flynn's hideout. Lisberger also stated that while the original Tron "reflected the way cyberspace was," the sequel was "going to be like a modern day, like contemporary plus, in terms of how much resolution, the texturing, the feel, the style," adding that "it doesn't have that Pong Land vibe to it anymore."
The skintight suits worn by the actors were reminiscent of the outfits worn by the actors in the original film. Kosinski believed that the costumes could be made practical due to the computerized nature of the film, as physically illuminating each costume would have been costly to the budget. Christine Bieselin Clark worked with Michael Wilkinson in designing the lighted costumes, which used electroluminescent lamps derived from a flexible polymer film and featured hexagonal patterns. The lights passed through the suit via Light Tape, a substance composed of Honeywell lamination and Sylvania phosphors. To produce a color, a transparent 3M vinyl film was applied onto the phosphor prior to lamination. While most of the suits were made out of foam latex, others were made from spandex sprayed with balloon rubber, ultimately giving the illusion of a lean shape. The suits had to compress the actors' bodies to compensate for the bulk of the electronics. In addition, Clark and Wilkinson designed over 140 background costumes. The two sought influence from various fashion and shoe designers in building the costumes. On the back of the suit was an illuminated disc, which consisted of 134 LED lights; it was attached to the suit via a magnet and was radio-controlled. All the costumes had to be sewn in such a way that the stitches did not appear, as the design team figured that in a virtual environment the clothes would simply materialize, with no need for buttons, zippers or enclosures. According to Neville Page, the lead designer for the helmets, "The art departments communicated very well with each other to realise Joe's [...] vision. We would look over each other's shoulders to find inspiration from one another. The development of the costumes came from trying to develop the form language which came from within the film."
The majority of the suits were designed using ZBrush. A scan of an actor's body was taken and then used to work out the fabric, the placement of the foam, and other concerns. With computer numerical control (CNC) cutting of dense foam, a small-scale output would be created to perfect fine details before initiating construction of the suit. The illustrations were then overlaid on the downloaded body scan to provide a manufacturing output. Describing the CNC process, Chris Lavery of Clothes on Film noted that it had a tendency to elicit bubbles and striations. Clark stated: "The [...] suit is all made of a hexagon mesh which we also printed and made the fabric from 3D files. This would go onto the hard form; it would go inside the mould which was silicon matrix. We would put those together and then inject foam into the negative space. The wiring harness is embedded into the mould and you get a torso. We then paint it and that's your finished suit."
Sound and visual effects
Crowd effects for the gaming arena were recorded at the 2010 San Diego Comic-Con. During one of the Tron: Legacy panels, the crowd was given instruction via a large video screen while techs from Skywalker Sound recorded the performance. The audience performed chants and stomping effects similar to what is heard in modern sports arenas.
It took two years and 10 companies to create the 1,565 visual effects shots of Tron: Legacy. The majority of the effects were done by Digital Domain, who created 882 shots under supervisor Eric Barba. The production team blended several special effect techniques, such as chroma keying, to allow more freedom in creating effects. Similar to Tron, this approach was seen as pushing the boundaries of modern technology. "I was going more on instinct rather than experience," Kosinski remarked. Although he had previously used the technology in producing advertisements, this was the first time Kosinski had used it on such a large scale. Darren Gilford was approached as the production designer, while David Levy was hired as a concept artist. Levy translated Kosinski's ideas into drawings and other visual designs. "Joe's vision evolved the visuals of the first film," he stated. "He wanted the Grid to feel like reality, but with a twist." An estimated twenty to twenty-five artists from the art department developed concepts of the Tron: Legacy universe, which varied from real-world locations to fully digital sets. Gilford suggested that there were between sixty and seventy settings in the film, split up into fifteen fully constructed sets with different levels of computer-created landscapes.
Rather than utilizing makeup tactics, such as the ones used in A Beautiful Mind, to give Jeff Bridges a younger appearance, the character of Clu was completely computer-generated. To show that this version of Clu was created some time after the events of the original film, the visual effects artists based his appearance on how Bridges looked in Against All Odds, released two years after Tron. The effects team hired makeup artist Rick Baker to construct a molded likeness of a younger Bridges's head to serve as the basis for their CG work, but soon scrapped the mold because they wished for it to look more youthful. There was no time to make another mold, so the team reconstructed it digitally. On set, Bridges would perform first, followed by body double John Reardon, who would mimic his actions. Reardon's head was replaced in post-production with the digital version of the young Bridges. Barba, who had been involved in a similar process for The Curious Case of Benjamin Button, stated that they used four microcameras with infrared sensors to capture all 134 dots on Bridges's face that would be the basis of the facial movements, a process similar to the one used in Avatar. It took over two years to create not only the likeness of Clu, but also the character's movements (such as muscle movement). Bridges called the experience surreal and said it was "Just like the first Tron, but for real!"
Musical score and soundtrack album
The French electronic duo Daft Punk composed the film score of Tron: Legacy, which features over 30 tracks. The score was arranged and orchestrated by Joseph Trapanese. Jason Bentley served as the film's music supervisor. Director Joseph Kosinski referred to the score as a mixture of orchestral and electronic elements. An electronic music fan, Kosinski stated that to replicate the innovative electronic Tron score by Wendy Carlos "rather than going with a traditional film composer, I wanted to try something fresh and different," adding that "there was a lot of interest from different electronic bands that I follow to work on the film" but he eventually picked Daft Punk. Kosinski added that he knew the band was "more than just dance music guys" for side projects such as their film Electroma. The duo were first contacted by producers in 2007, when Tron: Legacy was still in the early stages of production. Since they were touring at the time, producers were unsuccessful in contacting the group. They were again approached by Kosinski, eventually agreeing to take part in the film a year later. Kosinski added that Daft Punk were huge Tron fans, and that his meeting with them "was almost like they were interviewing me to make sure that I was going to hold up to the Tron legacy."
The duo started composing the soundtrack before production began. The score is a notable departure from the duo's previous works, as Daft Punk placed higher emphasis on orchestral elements rather than relying solely on synthesizers. "Synths are a very low level of artificial intelligence," explained member Guy-Manuel de Homem-Christo, "whereas you have a Stradivarius that will live for a thousand years. We knew from the start that there was no way that we were going to do this film score with two synthesizers and a drum machine."
"Derezzed" was taken from the album and released as its sole single. The album was released by Walt Disney Records on December 3, 2010, and sold 71,000 copies in its first week in the United States. Peaking at number six on the Billboard 200, it eventually acquired a platinum certification by the Recording Industry Association of America, denoting shipments of 1,000,000 copies. A remix album for the soundtrack, titled Tron: Legacy Reconfigured, became available on April 5, 2011 to coincide with the film's home media release.
Marketing
Marketing and promotions
On July 21, 2009, several film-related websites posted that they had received via mail a pair of "Flynn's Arcade" tokens along with a flash drive. Its content was an animated GIF that showed lines of CSS code. Four of these were put together and part of the code was cracked, revealing the URL to Flynnlives.com, a fictitious site maintained by activists who believe Kevin Flynn is alive, even though he has been missing since 1989. Clicking on a tiny spider in the lower section of the main page led to a countdown clock that hit zero on July 23, 2009, at 9:30 pm PDT. Within the site's terms-of-use section, an address was found in San Diego, California, near the city's convention center, where Comic-Con 2009 took place and some footage and information on the sequel was released. Flynn's Arcade was re-opened at that location, with several Space Paranoids arcade machines and a variety of '80s video games. A full-size light cycle from the new film was on display.
A ninth viral site, homeoftron.com, was found. It portrays some of the history of Flynn's Arcade as well as a fan memoir section. On December 19, 2009, a new poster was revealed, along with the second still from the film. Banners promoting the film paved the way to the 2010 Comic-Con convention center, making this a record third appearance for the film at the annual event.
Disney also partnered with both Coke Zero and Norelco on Tron: Legacy. Disney's subsidiary Marvel Comics had special covers of their superheroes in Tron garb, and Nokia had trailers for the film preloaded on Nokia N8 phones while doing a promotion to attend the film's London premiere. While Sam picks up a can of Coors in the film, it was not product placement, with the beer appearing because Kosinski "just liked the color and thought it would look good on screen."
Attractions
At the Walt Disney World Resort in Florida, one monorail train was decorated with special artwork depicting light cycles with trailing beams of light, along with the film's logo. This Tron-themed monorail, formerly the "Coral" monorail, was renamed the "Tronorail" and unveiled in March 2010. At the Disneyland Resort in California, a nighttime dance party named "ElecTRONica" premiered on October 8, 2010, in Hollywood Land at Disney California Adventure Park; it was set to close in May 2011, but was extended until April 2012 due to positive guest response. Winners of America's Best Dance Crew, Poreotics, performed at ElecTRONica. As part of ElecTRONica, a sneak peek with scenes from the film was shown in 3D with additional in-theater effects in the Muppet*Vision 3D theater.
On October 29, 2010, the nighttime show World of Color at Disney California Adventure Park began soft-openings after its second show of a Tron: Legacy-themed encore using a Daft Punk music piece titled "The Game Has Changed" from the film soundtrack, using new effects and projections on Paradise Pier attractions. The encore officially premiered on November 1, 2010. On December 12, 2010, the show Extreme Makeover: Home Edition, as part of a house rebuild, constructed a Tron: Legacy-themed bedroom for one of the occupants' young boys. The black-painted room featured life-sized Tron city graphics; glowing blue line graphics on the walls, floor, and furniture; a desk with glowing red-lit Recognizers for the legs and a Tron suit-inspired desk chair; a light cycle-shaped chair with blue lighting accents; a projection mural system that projected Tron imagery on a glass wall partition; a laptop computer; a flat-panel television; several Tron: Legacy action figures; a daybed in black and shimmering dark blue; and blue overhead lit panels.
Disney was involved with the Ice Hotel in Jukkasjärvi, Sweden through association with designers Ian Douglas-Jones at I-N-D-J and Ben Rousseau to create "The Legacy of the River," a high-tech suite inspired by Tron: Legacy. The suite uses electroluminescent wire to capture the art style of the film. It consists of over 60 square meters of 100 mm-thick ice, equating to approximately six tons. 160 linear meters of electroluminescent wire were routed out, sandwiched and then glued with powdered snow and water to create complex geometric forms. The Ice Hotel was expected to receive 60,000 visitors during the season, which lasted from December 2010 through April 2011. On November 19, 2010, the Tron: Legacy Pop Up Shop opened at Royal-T Cafe and Art Space in Culver City, California. The shop featured many of the collaborative products created as tie-ins with the film from brands such as Oakley, Hurley and Adidas. The space was decorated in theme and the adjacent cafe had a tie-in menu with Tron-inspired dishes. The shop remained open until December 23, 2010.
Following the release of the film, the TRON Lightcycle Power Run attraction, based on the film, opened at Shanghai Disneyland and Magic Kingdom in 2016 and 2023, respectively.
Merchandising
Electronics and toy lines inspired by the film were released during late 2010. A line of Tron-inspired jewelry, shoes and apparel was also released, and Disney created a pop-up store to sell them in Culver City. Custom Tron branded gaming controllers have been released for Xbox 360, PlayStation 3 and Wii.
A tie-in video game, entitled Tron: Evolution, was released on November 25, 2010. The story takes place between the original film and Tron: Legacy. Teaser trailers were released in November 2009, while a longer trailer was shown during the Spike Video Game Awards on December 12, 2009. Two games were also released for iOS devices (iPhone, iPod, and iPad) as tie-ins to the film. Disney commissioned N-Space to develop a series of multiplayer games based on Tron: Legacy for the Wii console. IGN reviewed the PlayStation 3 version of the game but gave it only a "passable" 6 out of 10. A tie-in 128-page graphic novel, Tron: Betrayal, was released by Disney Press on November 16, 2010. It includes an 11-page retelling of the original Tron story, in addition to a story taking place between the original film and Tron: Legacy. IGN reviewed the comic and gave it a "passable" score of 6.5 out of 10.
Release
Premiere and theaters
On October 28, 2010, a 23-minute preview of the film, presented by ASUS, was screened in many IMAX theaters around the world. The tickets for this event sold out within an hour on October 8. Stand-by tickets for the event were also sold shortly before the presentation started. Original merchandise from the film was also available for sale. Announced through the official Tron Facebook page, the red carpet premiere of the film was broadcast live on the Internet. Tron: Legacy was released in theaters on December 17, 2010, in the United States and United Kingdom. The film was originally set to be released in the UK on December 26, 2010, but was brought forward due to high demand. The film was presented in IMAX 3D and Disney Digital 3D. It was also released with D-BOX motion code in select theaters, and in 50 Iosono-enhanced cinemas, creating "3D sound."
On December 10, 2010, in Toronto, Ontario, Canada, a special premiere was hosted by George Stroumboulopoulos organised through Twitter, open to the first 100 people who showed up at the CN Tower. After the film ended the tower was lit up blue to mirror The Grid. On December 13, 2010, in select cities all over the United States, a free screening of the entire film in 3D was available to individuals on a first-come, first-served basis. Free "Flynn Lives" pins were handed out to the attendees. The announcement of the free screenings was made on the official Flynn Lives Facebook page. On January 21, 2011, the German designer Michael Michalsky hosted the German premiere of the film at his cultural event StyleNite during Berlin Fashion Week.
Home media release
Tron: Legacy was released by Walt Disney Studios Home Entertainment on Blu-ray Disc, DVD, and digital download in North America on April 5, 2011. Tron: Legacy was available stand-alone as a single-disc DVD, a two-disc DVD and Blu-ray combo pack, and a four-disc box set adding a Blu-ray 3D and a digital copy. A five-disc box set featuring both Tron films was also released, entitled The Ultimate Tron Experience, having a collectible packaging resembling an identity disk. The digital download of Tron: Legacy was available in both high definition or standard definition, including versions with or without the digital extras.
A short film sequel to the film, Tron: The Next Day, as well as a preview of the 19-episode animated series Tron: Uprising, is included in all versions of the home media release. Tron: Legacy was the second Walt Disney Studios Home Entertainment release that included Disney Second Screen, a feature accessible via a computer or iPad app download that provides additional content as the user views the film. Forty minutes of the film were shot in 2.39:1 and then vertically enhanced for IMAX. These scenes are presented in 1.78:1 in a similar way to the Blu-ray release of The Dark Knight.
Reception
Box office
Leading up to the release, various commercial analysts predicted that Tron: Legacy would gross $40–$50 million during its opening weekend, a figure that Los Angeles Times commentator Ben Fritz wrote would be "solid but not spectacular." Although the studio hoped to attract a broad audience, the film primarily appealed to men: "Women appear to be more hesitant about the science-fiction sequel," wrote Fritz. Jay Fernandez of The Hollywood Reporter felt that the disproportionate audience would be problematic for the film's long-term box office prospects. Writing for Box Office Mojo, Brandon Gray attributed pre-release hype to "unwarranted blockbuster expectations from fanboys," given that the original Tron was considered a box office disappointment when it was released, and the film's cult fandom "amounted to a niche."
In North America, the film earned $43.6 million during its opening weekend. On its opening day, it grossed $17.6 million, including $3.6 million from midnight showings at 2,000 theaters, 29% of which were IMAX screenings, and went on to claim the top spot for the weekend, ahead of Yogi Bear and How Do You Know. Tron: Legacy grossed roughly $68 million during its first week, and surpassed $100 million on its 12th day in release.
Outside North America, Tron: Legacy grossed $23 million on its opening weekend, averaging $6,000 per theater. According to Disney, 65% of foreign grosses originated from five key markets: Japan, Australia, Brazil, the United Kingdom, and Spain. The film performed best in Japan, where it took $4.7 million from 350 theaters, followed by Australia ($3.4 million), the United Kingdom ($3.2 million), Brazil ($1.9 million), and Spain ($1.9 million). By the following week, Tron: Legacy had obtained $65.5 million from foreign markets, bringing total grosses to $153.8 million. At the end of its theatrical run, Tron: Legacy had grossed $409.9 million: $172.1 million in North America, and $237.8 million in other countries.
Critical reception
Review aggregator website Rotten Tomatoes reported that 51% of critics gave the film a positive review, based on 248 reviews, with an average score of 5.86/10. The site's consensus states: "Tron: Legacy boasts dazzling visuals, but its human characters and story get lost amidst its state-of-the-art production design." At Metacritic, which assigns a normalized rating out of 100 based on reviews from mainstream critics, Tron: Legacy received an average score of 49, based on 40 reviews, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale.
The visual effects were cited as the central highlight of the film. In his three-star review, Roger Ebert of the Chicago Sun-Times felt that the environment was aesthetically pleasing, and added that its score displayed an "electronic force" that complemented the visuals. Rolling Stone columnist Peter Travers echoed these sentiments, concluding that the effects were of "award caliber." J. Hoberman of The Village Voice noted that while it was extensively enhanced, Tron: Legacy retained the streamlined visuals that were seen in its predecessor, while Variety's Peter Debruge affirmed that the visuals and the accompanying "cutting-edge" score made for a "stunning virtual ride." To Nick de Semlyen of Empire, "This is a movie of astonishing high-end gloss, fused to a pounding Daft Punk soundtrack, populated with sleek sirens and chiselled hunks, boasting electroluminescent landscapes to make Blu-ray players weep." Some critics were not as impressed with the film's special effects. Manohla Dargis of The New York Times wrote that despite its occasional notability, the film's "vibrating kaleidoscopic colors that gave the first movie its visual punch have been replaced by a monotonous palette of glassy black and blue and sunbursts of orange and yellow." Though declaring that Tron: Legacy was "eye-popping," the San Francisco Chronicle's Amy Biancolli conceded that the special effects were "spectacular", albeit cheesy. A columnist for The Wall Street Journal, Joe Morgenstern denounced the producers' emphasis on technological advancements, which he felt could have been put toward other ends, such as drama.
The performances of various cast members were frequently mentioned in the critiques. Michael Sheen's portrayal of Castor was particularly acclaimed by commentators, who, because of his flamboyance, drew parallels to pop-rock icon David Bowie, as well as fictional characters such as A Clockwork Orange lead character Alex. Dargis, Debruge, Puig, and Carrie Rickey of The Philadelphia Inquirer were among the journalists to praise his acting: Dargis ascribed Sheen's ability to stand out to a comparatively "uninteresting" cast. To Philadelphia Daily News film critic Gary Thompson, the film became humorous in the scenes involving Castor. Star Tribune critic Colin Covert believed that Sheen's campy antics were the "too brief" highlights of Tron: Legacy. With other cast members, particularly Garrett Hedlund, Olivia Wilde, and Jeff Bridges, commentary reflected diverse attitudes. The film received "a little boost from" Wilde, according to Rickey. The Boston Globe's Wesley Morris called Hedlund a "dud stud"; "None of what he sees impresses," he elaborated. "The feeling is mutual. At an alleged cost of $200 million, that's some yawn. If he can't be thrilled, why should we?" To Salon commentator Andrew O'Hehir, even Bridges, an individual he regarded as "one of America's most beloved and distinctive" actors, came across as "weird and complicated" rather than as the "sentimental and alluring" presence of the original Tron.
Critics were divided on the character development and the storylines in Tron: Legacy. Writing for The New Yorker, Bruce Jones commented that the audience did not connect with the characters, as they were lacking emotion and substance. "Disney may be looking for a merchandising bonanza with this long-gestating sequel to the groundbreaking 1982 film," remarked Jones, "but someone in the corporate offices forgot to add any human interest to its action-heavy script." Likewise, USA Today journalist Claudia Puig found Tron: Legacy to resonate with "nonsensical" and "unimaginative, even obfuscating" dialogue, and felt that "most of the story just doesn't scan." As Dana Stevens from Slate summed up, "Tron: Legacy is the kind of sensory-onslaught blockbuster that tends to put me to sleep, the way babies will nap to block out overwhelming stimuli. I confess I may have snoozed through one or two climactic battles only to be startled awake by an incoming neon Frisbee." Although he proclaimed the plots of Tron: Legacy and its predecessor to be spotty, Ian Buckwater of NPR was lenient on the latter film due to its youth-friendly nature. In contrast to the negative responses, Michelle Alexander of Eclipse adored the plot of Tron: Legacy, a reaction that was paralleled by Rossiter Drake of 7x7, who wrote that it was "buoyed" by its "sometimes convoluted, yet hard to resist" story. Metro's Larushka Ivan-Zadeh complained about the underdeveloped plot, saying "In 2010, issues surrounding the immersive nature of gaming and all-consuming power of modern technology are more pertinent than ever, so it's frustrating the script does nothing with them." However, she conceded that "it's the best 3D flick since Avatar and a super-groovy soundtrack by Daft Punk nonetheless makes for an awesome watch."
Accolades
Tron: Legacy received an award for Best Original Score from the Austin Film Critics Association. The film was also nominated for "Excellence in Production Design for a Fantasy Film" by the Art Directors Guild, and for Sound Editing by the Academy of Motion Picture Arts and Sciences. The film made the final shortlist for the Academy Award for Best Visual Effects, although it did not receive a nomination.
In other media
Manga
A manga version of Tron: Legacy was released by Earth Star Entertainment in Japan on June 30, 2011.
Video games and pinball
Tron: Legacy was adapted as a location named "The Grid" in the 2012 Nintendo 3DS game Kingdom Hearts 3D: Dream Drop Distance and the later HD remastered version in Kingdom Hearts HD 2.8 Final Chapter Prologue. In 2011, Stern Pinball released Tron: Legacy the pinball machine.
Television
Tron: Uprising, an animated television series, premiered on June 7, 2012, on the Disney XD network across the United States. Tron: Legacy writers Adam Horowitz and Eddie Kitsis revealed that the series tells the story of what happened in the Grid in between the films. Bruce Boxleitner and Olivia Wilde reprise their roles as Tron and Quorra from Tron: Legacy, while Elijah Wood, Lance Henriksen, Mandy Moore, Emmanuelle Chriqui, Paul Reubens, and Nate Corddry voice new characters.
Sequel
Steven Lisberger stated on October 28, 2010, before the film's release, that a sequel was in planning and that Adam Horowitz and Edward Kitsis, screenwriters for Tron: Legacy, were in the early stages of producing a script for the new film. In March 2015, it was revealed that Disney had green-lit the third film with Hedlund reprising his role as Sam and Kosinski returning to direct the sequel. Wilde was revealed in April to be returning as Quorra. Filming was expected to start in Vancouver in October 2015. However, in May 2015, The Hollywood Reporter reported that Walt Disney Studios had chosen not to continue with a third installment, which was confirmed by Wilde the following month. Hedlund later stated that the box office failure of Tomorrowland, shortly before the third Tron film would have begun shooting, led Disney to cancel the project.
However, during a 2017 Q&A session, Joseph Kosinski revealed that Tron 3 had not been scrapped, saying instead that it was in "cryogenic freeze." A few days later, it was reported that Jared Leto was attached to portray a new character named Ares in the sequel, though Disney had not officially confirmed that the project was in development.
In June 2020, Walt Disney Studios President of Music & Soundtracks Mitchell Leib confirmed in an interview that a third Tron film was being actively worked on at Disney. He said that Disney had a script written and was looking for a director, and was hopeful that Kosinski would return; he added that it was a high priority for them that Daft Punk return to do the score, though the duo's breakup in 2021 left their return uncertain. In August 2020, Deadline reported that Garth Davis had officially been tapped to direct the film from a screenplay by Jesse Wigutow.
In March 2022, while promoting Morbius, Leto confirmed that the film was still happening. By January 2023, Davis had exited as director, with Joachim Rønning entering negotiations to take the directing job. Leto was still attached, and production was planned to begin in Vancouver on July 3; delayed by the Hollywood labor strikes, the film is scheduled to be released on October 10, 2025. In August 2024, Nine Inch Nails was announced to be providing the score for the film, replacing Daft Punk.
Notes
References
External links
2010 films
2010 3D films
2010 science fiction action films
2010s science fiction adventure films
American sequel films
American 3D films
American chase films
American science fiction action films
American science fiction adventure films
Cyberpunk films
Films about computing
Films about computer and internet entrepreneurs
Films about telepresence
Films about video games
Films about virtual reality
Films directed by Joseph Kosinski
Films set in 1989
Films set in 2009
Films shot in Vancouver
Genocide in fiction
IMAX films
Films using motion capture
Religion in science fiction
Tron films
Walt Disney Pictures films
Films set in computers
Films about computer hacking
2010 directorial debut films
Films about coups d'état
Films about father–son relationships
2010s English-language films
2010s American films
Films scored by musical groups
English-language science fiction adventure films
English-language science fiction action films
Saturn Award–winning films
Iris Browser is a discontinued web browser for Windows Mobile smartphones and personal digital assistants (PDAs) developed by the Torch Mobile company. The first version was released in 2008. It was one of the first mobile browsers to score a perfect 100 on the Acid3 test.
RIM acquired Torch Mobile in 2009 and discontinued Iris.
Features
Iris is based on the WebKit rendering engine, with the SquirrelFish Extreme JavaScript engine, the Netscape plug-in API, and JavaScript/ECMAScript 1.5. It has HTML and CSS support and supports SVG, XPath, and XSLT. It supports a customizable interface, touch-screen control, pop-up blocking, and the XHTML 1.x mobile profile. It has advanced security features, advanced mobile key navigation, an HTTP cache optimized for low disk usage, history auto-complete, and SSL and authenticated-proxy support. It also features bookmarks, which can be customized by the carrier, tabs, and customizable about pages.
Performance
According to independent testing, Iris 1.1.5 loaded pages more slowly than its closest competitor, Opera Mobile. The user interface was substantially improved through version 1.1.9, released on July 6, 2009.
According to testing done by Torch Mobile, Iris 1.1.2 outperformed Access NetFront 3.5 and Opera Mobile 9.5 in the SunSpider JavaScript benchmark.
References
BlackBerry Limited
Discontinued mobile web browsers
Pocket PC software
Software based on WebKit | Iris Browser | Technology | 315 |
6,992,066 | https://en.wikipedia.org/wiki/IOK-1 | IOK-1 is a distant galaxy in the constellation Coma Berenices. When discovered in 2006, it was the oldest and most distant galaxy ever found, at redshift 6.96.
It was discovered in April 2006 by Masanori Iye at National Astronomical Observatory of Japan using the Subaru Telescope in Hawaii and is seen as it was 12.88 billion years ago. Its emission of Lyman alpha radiation has a redshift of 6.96, corresponding to just 750 million years after the Big Bang. While some scientists have claimed other objects (such as Abell 1835 IR1916) to be even older, the IOK-1's age and composition have been more reliably established.
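As an illustration (not taken from the article itself), the quoted redshift directly fixes where the Lyman-alpha line is observed, via the standard relation λ_obs = λ_rest × (1 + z):

```python
# Observed wavelength of a redshifted spectral line: lambda_obs = lambda_rest * (1 + z).
LYMAN_ALPHA_REST_ANGSTROM = 1215.67  # rest-frame Lyman-alpha wavelength

def observed_wavelength(rest_wavelength: float, z: float) -> float:
    """Return the observed wavelength of a line emitted at redshift z."""
    return rest_wavelength * (1.0 + z)

# At IOK-1's redshift of 6.96, Lyman-alpha is shifted from the far ultraviolet
# to about 9677 angstroms, at the red edge of what optical CCDs can detect,
# which is why narrow-band surveys like the Subaru search target this window.
print(observed_wavelength(LYMAN_ALPHA_REST_ANGSTROM, 6.96))  # ~9676.7 angstroms
```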
"IOK" stands for the observers' names Iye, Ota, and Kashikawa.
See also
Abell 2218
Abell 370
A1689-zD1
UDFy-38135539
List of the most distant astronomical objects
References
Galaxies
Coma Berenices | IOK-1 | Astronomy | 205 |
52,740,713 | https://en.wikipedia.org/wiki/%28S%2CS%29-Tetrahydrochrysene | (S,S)-Tetrahydrochrysene ((S,S)-THC) is a steroid-like nonsteroidal estrogen and agonist of both the estrogen receptors, ERα and ERβ. It is an enantiomer of (R,R)-tetrahydrochrysene ((R,R)-THC), which, in contrast, is an ERβ silent antagonist and ERα agonist with 10-fold selectivity (i.e., affinity) for the ERβ over the ERα and with 20-fold greater affinity for the ERβ relative to that of (S,S)-THC.
See also
2,8-DHHHC
Chrysene
References
Synthetic estrogens
Diols | (S,S)-Tetrahydrochrysene | Chemistry | 166 |
39,243,497 | https://en.wikipedia.org/wiki/Bedwin%20Hacker | Bedwin Hacker is a Tunisian film about a computer hacker and TV pirate who broadcasts messages promoting freedom and equality for North Africans, and the attempt by the French Direction de la surveillance du territoire to find her and stop her. Released in 2003, it predated the 2010 Arab Spring by several years. The film breaks several stereotypes of typical Tunisian cinema, by focusing on mobility issues in 21st century Tunisia.
Plot summary
The film opens with a pirate transmission of a cartoon camel superimposed over a speech by president Truman about nuclear power. The piracy originates in a remote location in North Africa based on the handiwork of a 'lone wolf' hacker, Kalt, working with her young acquaintance who calls her 'auntie'.
Kalt then rescues her illegal immigrant friend Frida from the clutches of French immigration in Paris by hacking the immigration computers. Kalt meets a journalist named Chams. Escaping the situation there, which includes police raids on immigrant meetings, they flee back to Tunisia.
In Tunisia Kalt resumes her pirate transmissions. Throughout the film we see various European TV broadcasts interrupted by her transmissions of the camel and messages of freedom and equality for North Africans:
"In the third millennium there are other epochs, other places, other lives. We are not a mirage."
Somewhere inside the French DST, Julia, her boss, and agent Zbor try to track down the transmissions and stop them.
Julia happens to be the girlfriend of Chams, and she attempts to use him to infiltrate Kalt's supposed terrorist circle and bring her down. Chams is conflicted but complies with her commands. He puts on a ruse of interviewing the old man who owns the house where Kalt and his family live.
The old man is a poet and we hear snippets of his philosophical and artistic poetry espousing the value of freedom, which Chams ignores as he tries to find out the secrets of Kalt's life. He attempts to help Julia put a trojan horse onto Kalt's machine but Kalt has put in safeguards that stop him and reveal his treachery to her.
During flashbacks we learn that Julia believes the mysterious hacker to be the Pirate Mirage, who may also be her old acquaintance from École Polytechnique, which is, indeed, Kalt. A flashback shows them working on computers in their younger days as well as being lovers.
Kalt's friends accompany Frida to a music concert she is putting on. They are stopped by the Tunisian police, who are working with the French authorities, but they are waved through.
At one point the Camel tells the viewers to call a telephone number. Somehow this shuts down power to a section of Paris called La Défense. After this, the government pressure on Julia's department becomes intense as millions of dollars are spent trying to put out disinformation about the Bedwin Hacker as well as track her down.
As the DST closes in on Kalt, Chams becomes more and more conflicted, telling Julia that there are no terrorists in Kalt's circle, while simultaneously arguing to Kalt that she is endangering herself and that she should stop hijacking the TV signal.
Julia, driven by her boss and by the thrill, finally tracks down Kalt's location and confronts her, just as Kalt is planning to transmit her last signal and then destroy all evidence of her work.
Music
The music in the film tends to be modernistic Arabic with influences from hip hop and dance beats. There are also scenes of traditional music and protest music/folk music being played in a group or family setting.
See also
Arab Spring
Hackers (film)
Cyberpunk
References
External links
Tunisian comedy films
Films set in the 2000s
Films about computing
Techno-thriller films
Tunisian drama films | Bedwin Hacker | Technology | 771 |
72,498,504 | https://en.wikipedia.org/wiki/HD%2026755 | HD 26755, also known as HR 1313, is a spectroscopic binary located in the northern circumpolar constellation Camelopardalis, the giraffe. It has an apparent magnitude of 5.72, making it faintly visible to the naked eye under ideal conditions. Gaia DR3 parallax measurements place the system at a distance of 271 light years and is currently drifitng closer with a heliocentric radial velocity of . At its current distance, HD 26755's brightness is diminished by 0.19 magnitudes due to interstellar dust.
The visible component is an evolved red giant with a stellar classification of K1 III. It is estimated to be 2.13 billion years old, enough time for the star to exhaust its core hydrogen and evolve to become a red giant. It has cooled and expanded to 9.4 times the Sun's radius. It has 1.68 times the mass of the Sun and radiates 42.5 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of , giving it an orange hue when viewed in the night sky. HD 26755 is a metal-enriched star with an iron abundance 48% greater than the Sun's. It spins slowly with a projected rotational velocity of , which is poorly constrained.
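As a quick illustrative sketch (using only the figures quoted above, not part of the original article), the apparent magnitude, distance, and extinction combine into an extinction-corrected absolute magnitude via the distance modulus M = m − 5·log10(d / 10 pc) − A:

```python
import math

LY_PER_PARSEC = 3.26156  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float,
                       extinction_mag: float = 0.0) -> float:
    """Extinction-corrected absolute magnitude from the distance modulus."""
    distance_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0) - extinction_mag

# Figures quoted in the article: m = 5.72, d = 271 ly, A = 0.19 mag.
print(round(absolute_magnitude(5.72, 271.0, 0.19), 2))  # ~0.93
```

The result, roughly +0.9, is typical of a K-type giant and broadly consistent with the quoted luminosity of 42.5 times that of the Sun.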
References
K-type giants
Spectroscopic binaries
Camelopardalis
BD+57 00787
026755
019983
1313 | HD 26755 | Astronomy | 300 |
1,393,702 | https://en.wikipedia.org/wiki/Seed%20orchard | A seed orchard is an intensively-managed plantation of specifically arranged trees for the mass production of genetically improved seeds to create plants, or seeds for the establishment of new forests.
General
Seed orchards are a common method of mass-multiplication for transferring genetically improved material from breeding populations to production populations (forests), and in this sense are often referred to as "multiplication" populations. A seed orchard is often composed of grafts (vegetative copies) of selected genotypes, but seedling seed orchards also occur, mainly to combine the orchard function with progeny testing.
Seed orchards are the strong link between breeding programs and plantation establishment. They are designed and managed to produce seeds of superior genetic quality compared to those obtained from seed production areas, seed stands, or unimproved stands.
Material and connection with breeding population
In first generation seed orchards, the parents usually are phenotypically selected trees. In advanced generation seed orchards, the seed orchards are harvesting the benefits generated by tree breeding and the parents may be selected among the tested clones or families. It is efficient to synchronise the productive live cycle of the seed orchards with the cycle time of the breeding population. In the seed orchard, the trees can be arranged in a design to keep the related individuals or cloned copies apart from each other.
Seed orchards are the delivery vehicle for genetic improvement programs where the trade-off between genetic gain and diversity is the most important concern. The genetic gain of seed orchard crops depends primarily on the genetic superiority of the orchard parents, the gametic contribution to the resultant seed crops, and pollen contamination from outside seed orchards.
Genetic diversity of seed orchard crops
Seed production and gene diversity are important aspects when using improved materials like seed orchard crops. Seed orchard crops generally derive from a limited number of trees, but if the species is a common wind-pollinated one, much pollen will come from outside the seed orchard and widen the genetic diversity. The genetic gain of first-generation seed orchards is not great, and the seed orchard progenies overlap with unimproved material. Gene diversity of the seed crops is greatly influenced by the relatedness (kinship) among orchard parents, the parental fertility variation, and the pollen contamination.
Management and practical examples
Seed orchards are usually managed to obtain sustainable and large crops of seeds of good quality. To achieve this, the following methods are commonly applied: orchards are established on flat surface sites with southern exposure (better conditions for orchard maintenance and for seed production), no stands of the same species in close proximity (avoid strong pollen contamination), sufficient area to produce and be mainly pollinated with their own pollen cloud, cleaning the corridors between the rows, fertilising, and supplemental pollination. The genetic quality of seed orchards can be improved by genetic thinning and selective harvesting. In plantation forestry with southern yellow pines in the United States, almost all plants originate from seed orchards and most plantations are planted in family blocks, thus the harvest from each clone is kept separate during seed processing, plant production and plantation.
Recent seed orchard research
The optimal balance between the effective number of clones (diversity, status number, gene diversity) and genetic gain is achieved by making clonal contributions (number of ramets) proportional (linearly dependent) to the genetic value ("linear deployment"). This depends on several assumptions, one of them being that the contribution to the seed orchard crop is proportional to the number of ramets. In fact, the more ramets a clone has, the larger the share of its pollen that is lost to ineffective self-pollination; but even considering this, linear deployment is a very good approximation. It was thought that increasing the gain is always accompanied by a loss in the effective number of clones, but it has been shown that both can be obtained at the same time by genetic thinning using the linear deployment algorithm, if applied to some rather unbalanced seed orchards. Relatedness among clones is more critical for diversity than inbreeding.
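A minimal sketch of the idea (an illustration only, not the published algorithm; the function names and the toy breeding values are invented for the example): ramets are allocated in proportion to each clone's genetic value, and the resulting diversity is summarized by the status number, the effective number of clones.

```python
def linear_deployment(breeding_values, total_ramets):
    """Allocate ramet counts proportional to each clone's genetic value.

    Simplified sketch: all breeding values are assumed positive, and
    fractional allocations are rounded to whole ramets.
    """
    total_value = sum(breeding_values)
    return [round(v / total_value * total_ramets) for v in breeding_values]

def status_number(ramets):
    """Effective number of clones N_s = 1 / sum(p_i^2), p_i = ramet share."""
    n = sum(ramets)
    return 1.0 / sum((r / n) ** 2 for r in ramets)

# Five hypothetical clones: better clones get proportionally more ramets.
ramets = linear_deployment([10, 8, 6, 4, 2], total_ramets=30)
print(ramets)                           # [10, 8, 6, 4, 2]
print(round(status_number(ramets), 2))  # ~4.09, below the census number of 5
```

The unbalanced allocation raises the mean genetic value of the crop while the status number (about 4.1 here, versus 5 for equal contributions) quantifies the accompanying reduction in effective clone number.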
The clonal variation in expected seed set has been compiled for 12 adult clonal seed orchards of Scots pine. The variation in seed-set ability among clones is not as drastic as has been shown in other investigations, which are probably less relevant for the actual seed production of Scots pine.
Cone set for Scots pine in a clonal archive was not well correlated with that of the same clones in seed orchards. It therefore does not seem meaningful to try to increase seed set by choosing clones with a good seed set.
As the supporting tree breeding advances, new seed orchards will be genetically better than old ones. This is a relevant factor for the economic lifetime of a seed orchard. Considerations for Swedish Scots pine suggested an economic lifetime of 30 years, which is less than the current lifetime.
Seed orchards for important wind-pollinated species start to produce seeds before the seed orchard trees start to produce much pollen, so all or most of the pollen parents are outside the seed orchard. Calculations indicate that such early seed orchard seeds can still be expected to be superior to seeds from stands or from older, more mature seed orchards. Characteristics of the early seeds, such as the absence of selfing or related matings and high diversity, are positive factors.
Swedish conifer orchards with tested clones could have 20–25 clones, with more ramets from the better clones and fewer from the worse, so the effective ramet number is 15–18. A higher clone number results in an unneeded loss of genetic gain; lower clone numbers can still be better than existing alternatives. For southern pines in the United States, half as many clones may be optimal.
As forest tree breeding proceeds to advanced generations, the candidates for seed orchards will be related, and the question of to what degree related clones can be tolerated in seed orchards becomes urgent. Gene diversity seems to be a more important consideration than inbreeding. If the candidate population has at least eight times as much diversity (status number) as required for the seed orchard, relatedness is not limiting and clones can be deployed as usual, only restricting half and full sibs; but if the candidate population has a lower diversity, more sophisticated algorithms are needed.
See also
Double-pair mating
Grafting
Plant nursery
References
Further reading
Kang, K. S. (2001). Genetic gain and gene diversity of seed orchard crops. (Abstract). Acta Universitatis Agriculturae Sueciae, Silvestria 187.
Lindgren, D. (Ed.) Proceedings of a Seed Orchard Conference. Umeå, Sweden, 26–28 September 2007. 256 pages.
Prescher, F. (2007). Seed Orchards – Genetic considerations on function, management and seed procurement. Doctoral dissertation, Swedish University of Agricultural Sciences.
Plant genetics
Seeds | Seed orchard | Biology | 1,337 |
77,309,938 | https://en.wikipedia.org/wiki/Unimog%20411 | The Unimog 411 is a vehicle in the Unimog series from Mercedes-Benz. Daimler-Benz AG built 39,581 units at the Mercedes-Benz plant in Gaggenau between August 1956 and October 1974. The 411 is the last series of the "original Unimogs". The design of the 411 is based on the Unimog 401. It is also a commercial vehicle built on a ladder frame with four equally sized wheels and designed as an implement carrier, agricultural tractor and universally applicable work machine. Like the 401, it had a passenger car engine, initially with 30 hp (22 kW).
There were a total of twelve different models of the 411, offered in numerous variants with three wheelbases (1720 mm, 2120 mm and 2570 mm). It could be supplied in the conventional convertible version, as a drive head, or with a closed cab which, as with the predecessor, was manufactured by Westfalia. The closed cab was available in two versions: the Type B resembled the cab of the Unimog 401, while the Type DvF resembled the cabs of the Mercedes-Benz trucks of the 1950s and 1960s, with headlights in the radiator grille and chrome strips.
During its long production phase, the Unimog 411 was technically revised several times. Due to the large number of changes that the 411 series underwent, four types of the 411 series are distinguished for better differentiation: the Ur-411, 411a, 411b and 411c. Although the 411 was technically based on the 401, design features from other Unimog model series were also adopted for the 411, including the axle design of the 406 series, which was used in modified form on the 411 from 1963. As the last classic Unimog, the 411 had no direct successor, but from 1966 the Unimog 421 was in the Unimog range, which was technically based on the 411 and was positioned in the same product segment.
Vehicle history
Development
The Unimog 411 was not a completely new development, rather Daimler-Benz derived the 411 series from the predecessor 401 and 402 series. In the 1950s, the Unimog design department under the leadership of Heinrich Rößler took a wait-and-see approach to new developments, even though consideration was given to offering the Unimog 411 with a 40 hp (29.5 kW) diesel engine and an 80 hp (59 kW) petrol engine. However, these ideas were only implemented in later model series. The developers' hopes were pinned in particular on the 411 with an all-steel cab. The most important focus of the development department was primarily on demonstrating, testing and improving the Unimog as such. The main changes to the 411 compared to its predecessor were an increase in engine output by 20%, reinforced shock absorbers, reinforced crossmembers for the engine, from 1959 plain bearings instead of roller bearings for the manual gearbox and enlarged tires with the dimension 7.5-18″ (optional equipment: 10-18″), which made a new wheel arch necessary; on the 411, the front wheel arches are slightly longer at the top than on the 401, so that the tires do not drag when turning the steering wheel. In addition, the front end of the 411 was redesigned, with wider beading on the hood. The radiator grille was also made smaller; it was now a square grille painted in the vehicle color instead of the struts of its predecessor.
Series 401 convertibles were already equipped with the cab of the later 411 series from June 1955, so that there are some hybrid vehicles. The 411 was then presented at the DLG exhibition in Hanover in September 1956. As many changes were made to the Unimog 411 during the entire period of series production, the Daimler-Benz works literature divides the 411 series into four types to make it easier to distinguish between significant technical changes: the original type "411" (1956-1961), "411a" (1961-1963), "411b" (1963-1965) and "411c" (1965-1974).
With the Unimog 411, Daimler-Benz set itself the target of selling 4,000 vehicles a year. In order to meet the requirements of the Unimog 411, customer wishes were incorporated and taken into account in the further development of the series. Nevertheless, the 411 was more of a small vehicle with an output of just 34 hp (25 kW) powerful diesel engine, which was considered too underpowered for some applications. Analysts at Daimler-Benz warned that the annual production rate of the Unimog 411 would fall below 3000 vehicles after 1960. This point was reached in 1964. Daimler-Benz therefore introduced a larger Unimog in 1963, the 406. The 411 was thus transformed from the former core product of the Unimog range into a light series. However, the further development of the Unimog 411 did not end there; from 1963, the axles of the Unimog 406 were also fitted to the 411 in a modified form. These axles are more stable, cheaper and easier to maintain. From 1967, the 411 received the same bumper as the Unimog 421.
After the introduction of the type 411c in 1965, the 411 series was no longer developed further on a large scale; the models with an extra-long wheelbase, added to the Unimog model range for the export market from 1969, were the last major innovation. In March 1966, the Unimog 421, a technically similar vehicle with a much more modern appearance, was presented in the same segment. The 421, which had the technology of the Unimog 411 and a 2-liter pre-chamber engine of the type OM 621 with 40 hp (29.5 kW), was actually designed as an inexpensive addition to the 406 series. But from 1970 onwards, the Unimog 421 was already much more popular than the similar but older and weaker 411 and was preferred by customers. The Unimog 411 continued to be built unchanged; production was only discontinued in October 1974, after 39,581 vehicles. Presumably some vehicles were produced again in 1975 for a military customer.
Distribution
On the West German market, the basic version of the Unimog 411 cost DM 12,500 as a convertible when it was launched in 1956. It initially had the OM 636.914 engine, which produced 30 DIN hp (22 kW) at 2550 rpm. As the Unimog 411 was too expensive for some customers, an "economy model" was offered from 1957 to 1959, the U 25. The U 25 was given the independent model number 411.116. It lacked the windscreen, side windows, windscreen wipers, soft top and other small parts; the seats and engine came from the Unimog 2010, and the transmission ratio of the portal axle was also changed. It was a failure; only 54 units were sold. At the end of the 1950s, the 411 model series was also exported to the USA, where Curtiss-Wright sold the 411.112 and 411.117 models; the Mercedes-Benz brand name was retained. In 1965, the basic version cost 15,300 DM. Daimler-Benz AG achieved the largest turnover with Unimog sales in West Germany. In 1962, worldwide sales of the U 411, excluding the spare parts business, amounted to 54,870,000 DM.
Prototype for the French army
At the request of the French army, Daimler-Benz built a prototype based on the 411 series with a gasoline engine in 1957. The vehicle was given the chassis number 411.114 75 00939 and was assigned to type 411.114, which was reassigned to the extra-long wheelbase models in 1969. The prototype 411.114 had the long wheelbase of 2120 mm, the transmission and clutch of the Unimog S, and tires of dimension 7.5-18″. The desired and installed four-cylinder engine was the M 121 with a displacement of 1897 cm³, an output of 65 hp (48 kW) at 4500 rpm, and a maximum torque of 128 N·m at 2200 rpm, as also used in the Mercedes 180. The top speed is 90 km/h. The reinforced windshield with the windshield wipers at the bottom is a distinctive feature. The French army tested the vehicle over a period of almost 9000 operating hours and decided not to procure it due to its high center of gravity. On the basis of this prototype, Daimler-Benz developed further military vehicles with a payload of one ton.
Westfalia cab
Like the Unimog 401 and 402 before it, a closed cab was also offered for the Unimog 411, which was manufactured by Westfalia in Wiedenbrück. Daimler-Benz equipped the Unimogs with this cab ex works. When production of the 411 series began in August 1956, the type B cab, which had already been built for the Unimog 401, was modified for the new Unimog 411 chassis and continued to be built almost unchanged on the outside. It has the model number 411.520. This cab is nicknamed the frog's eye and was only built 1107 times; the models 411.111 (1720 mm wheelbase) and 411.113 (2120 mm wheelbase) were equipped with it until they were discontinued in October 1961. Westfalia had already produced a new cab for the Unimog 411 in 1957. It has the model number 411.521 and is designated as cab type DvF. It was only built for the 411.117 and 411.120 models with 2120 mm wheelbase. DvF stands for Type D, widened cab. As the name suggests, its dimensions were significantly larger than those of the Type B: it has a 30% larger volume and is wider than the Unimog's loading platform. The windshield is undivided and the ergonomics were significantly improved. The shape follows the truck design of the Mercedes-Benz brand in the 1950s and 1960s, with an elliptical radiator grille with headlights framed on the outer edge and lavish chrome trim. Unlike on the convertible models, the front bumper is more rounded and more strongly curved at the ends. On request, Daimler-Benz equipped the DvF cab with a heater. A disadvantage of the DvF cab was the high heat load caused by the engine's waste heat. The reason for this is the engine cover, which protrudes far into the passenger compartment and does not sufficiently insulate the cab from the engine. Production of the Unimog 411 was discontinued in 1974, but Westfalia continued to build the DvF cab until 1978.
In the mid-1960s, Westfalia also tested a GRP hardtop for the convertible versions of the Unimog 411. It offered better protection from the weather and better visibility to the sides than the fabric top. Although brochures were printed and the hardtop was included in the official Unimog catalog, it was hardly ever sold. It is not known how many hardtops were produced.
Annual series change
Prototype 411
1957
The 411 was extensively modified in 1957. The indicators were removed and replaced by conventional car indicators. Other external innovations included the new Mercedes badge on the hood and the modified rear lights. The engine output was increased to 32 hp (23.5 kW) from March, and a synchronized transmission could be supplied on request; in July, new springs with a wire diameter of 19.5 mm instead of 18 mm were fitted to the rear axles, and from September a reinforced steering system with a three-spoke steering wheel from Fulmina was installed. In the convertible models, the side windows made of Cellon were replaced by polyvinyl chloride windows as early as May 1957. Mercedes-Benz also introduced the economy model U 25 in May. The new Westfalia type DvF cab was presented at the IAA in September; a trailer brake system was available from October.
1958
From March or April 1958 the Unimog 411 was equipped as standard with a 60-liter fuel tank instead of just 40 liters. Other changes were rather minor, including modifications to the brake system, the installation of a combined pre-glow and start switch, a reinforced power take-off and the installation of hinged windows on the Westfalia cab type DvF.
1959
From January, the synchronized gearbox, which had previously only been offered as an optional extra, became standard equipment. The economy model U 25 was discontinued without replacement in 1959.
1960
In January 1960, the chassis numbering system was changed so that the first two digits no longer formed a number from 55 to 95. Instead, the chassis numbers began with "01" from 1960. The hood design was changed. Snap locks were installed, making the outside toggles superfluous. In addition, the mirrors were mounted further down and no longer on the A-pillar. The rear suspension of the cab had already been modified in March 1960 for the introduction of the three-point suspension cab in October 1961.
411a
1961
In October 1961, the Unimog 411 underwent a comprehensive model update, which upgraded the series, particularly in technical terms: the original type 411 was replaced by the type 411a. Series production of the 411a began on October 9, 1961; it differs from the original 411 in its ladder frame with higher longitudinal members (120 mm instead of 100 mm). In addition, a newly introduced hydraulic system with front and rear power lift was offered, and the cab was fitted with a three-point suspension, which significantly increased comfort for the occupants. The type 411a can be recognized by the headlights, which are no longer attached to the frame but to the radiator grille, causing them to protrude slightly forwards, as does the front bumper, which is curved at the ends. The flatbed has four instead of three side boards on each side and is 30 mm away from the cab. The production of vehicles with the Westfalia Type B cab was finally discontinued in October 1961.
1962
The indentations on the hood for the toggles, which were no longer required, were removed and all vehicles were fitted with a new blinker system from Bosch. The rear window of the convertible top was enlarged, and the DvF cabs were fitted with two-piece headlight rings.
411b
1963–1964
In March, production of the 411a was discontinued in favor of the new 411b. The most important change to the 411b was the introduction of the axle design of the Unimog 406, which replaced the old axle manufactured by Erhard & Söhne. The windshield height was increased from 410 mm to 450 mm, and the convertible models were given a triangular window behind the A-pillar. At the rear, the fenders were completely black. Other technical changes included a modified exhaust system, a hydraulic power steering system offered as an optional extra, and a new, now two-stage master brake cylinder.
411c
1965
The 411b was built until February 1965; from February 1965 the type 411c was produced in series, the main difference from the 411b being the 2 hp (1.5 kW) increase in engine output. Daimler-Benz continued to install the OM 636.914 engine; however, the rated speed was increased from 2550 rpm to 2750 rpm. In addition, the cylinder head, injection pump and throttle body were modified. This resulted in an improvement in performance to 34 hp (25 kW). In order to maintain the same driving speeds at rated engine speed, the transmission ratio of the axles was changed from 25:7 to 35:9. The rear hood mount, the speedometer in the cab, the V-belt pulley for the compressor and the rear lights were also modified. With the introduction of the type 411c in 1965, there were three model series (411.118, 411.119 and 411.120) in nine model variants.
1966
From April 1966, the standard color of the Unimog was changed from Unimog green (DB 6286) to truck green (DB 6277). The dropside hinges of the Unimog 421 were installed and the rear spring brackets were cast. The models with the Westfalia DvF cab were given a handle on the A-pillar to make it easier to get in.
1967
The most important change from 1967 was the introduction of the Unimog 421 bumper, which can be recognized by the longitudinal beading. Furthermore, swivel bearings on the front axle and a door handle guard were installed on the convertible models.
1968
The frame received a new mounting plate bracket and welded front and rear beams. The thermostat was modified and the DvF cabs were fitted with new exterior mirrors.
1969
The last major innovation came in 1969, when the extra-long wheelbase of 2570 mm was introduced for export with the 411.114 model. The model 411.114 was primarily supplied to the Portuguese military, which used the vehicle in the Portuguese Colonial War in Angola. The Fulmina steering was replaced by a ZF Gemmer steering of type 7340. In addition, the fuel lines were made of plastic.
1970
In 1970, the hole arrangement in the dashboard was changed to accommodate a fuel gauge and a glow monitor as standard.
1971–1974
In 1971, the round indicators were replaced by square indicators, a windshield washer system was introduced and the windshield frame was painted black. All vehicles received a new two-spoke steering wheel in 1972 and the convertible models were fitted with more modern exterior mirrors. Nothing more was changed in 1973 and 1974.
Models
The Unimog 411 was offered in many model variants. The model designations represent the vehicle type and equipment features of the Unimog, but only provide a limited indication of the model type. In the Unimog 411, the model designation is made up of the vehicle type, the engine power in DIN hp, and one or more letters that indicate equipment features. A U 34 L designates a standard-equipped Unimog with 34 hp (25 kW) engine power and a long wheelbase. The following letters existed; where they were not used over the entire production period, this is indicated:
U: Unimog in basic version
A: Without trailer brake system
B: With trailer brake system (up to approx. 1961)
C: With pneumatic power lift (up to approx. 1961)
D: With trailer brake system (from approx. 1961)
F: Westfalia cab type DvF
H: With hydraulic power lift (from approx. 1961)
L: Long wheelbase of 2120 mm
S: Tractor unit
The following engine outputs were offered:
25 PS (18.5 kW)
30 PS (22 kW)
32 PS (23.5 kW)
34 PS (25 kW)
36 PS (26.5 kW)
Prototype
Type overview
A total of 39,581 Unimog 411s and 350 parts kits in twelve different models were built. 11,604 units had the type DvF cab, 1107 had the type B cab and 26,870 Unimog 411s were convertibles. Around 57.2 % of all Unimog 411s built had the long wheelbase of 2120 mm and 2.9 % had the extra-long wheelbase of 2570 mm. The following models of the Unimog 411 were built:
Base prices
The 411 series was built in various versions. The following table shows the basic prices (list prices) for the West German market:
Technical description
Driver's cab
The Unimog 411 was available with a fabric top ("convertible") and a closed cab; the closed cabs were supplied by Westfalia. All cabs, including the convertible version, had a rigid four-point suspension in the original 411 model, and a three-point suspension from the 411a model onwards (October 1961). Both the convertible and the closed cab have two seats. In the original type, the driver's cab and flatbed form a single structural unit; from 411a onwards, the two parts are separate.
Motor
The Unimog 411 is powered by an OM 636.914 inline four-cylinder pre-chamber naturally aspirated diesel engine. This engine has a displacement of 1767 cm³, a side camshaft and overhead valves. The water-cooled engine is installed centrally at the front and tilted slightly to the rear. It is started with an electric starter. The power output was initially 30 hp (22 kW) at 2550 rpm, but was gradually increased over the production period to 32 hp (23.5 kW) and ultimately 34 hp (25 kW). The economy model U 25 received the engine with 25 hp (18.5 kW) at 2350 rpm; however, it was only sold in small numbers. The engine was also offered with 36 hp (26.5 kW) for some export models.
Frame
The ladder frame of the Unimog 411 is a flat frame made of folded (later rolled) U-profiles with a web height of 100 mm (original type 411) or 120 mm (411a,b,c). The U-profiles are connected with five riveted cross beams. Two cross members are positioned close together at the front and rear, one cross member is directly behind the cab. The rear cross member is additionally connected to the U-profiles with two cross members, which are attached in the middle and run diagonally to the next cross member, thus forming triangles. The fact that the cab and platform body are connected to the frame at four points on the original model means that the parts cannot twist against each other, which encourages fractures, cracks and permanent deformations. From the 411a onwards, the frames could twist better, as the cab now had two points for the suspension at the rear, but only one at the front. Various accessories such as mounting brackets, additional crossbars and plates were offered for the frame to enable additional equipment to be attached to the frame.
Chassis and drivetrain
Thanks to the portal axles with wheel reduction gearing, the Unimog has a relatively high ground clearance despite its small wheels. The axles are guided on pushrods and Panhard rods. The thrust tubes are mounted on the transmission in ball joints and are rigidly connected to the differential gears of the axles. The drive shafts, which transmit the torque from the transmission to the axles, run in the thrust tubes. The axles of the Unimog are suspended with two coil springs each (front 17 mm or 18 mm, rear initially 18 mm, then 19.5 mm) with additional internal springs and hydraulic telescopic shock absorbers. The wheel suspension allows particularly long suspension travel and therefore a large axle articulation, making the Unimog very off-road capable. The U 411 was supplied with 7.5-18″ tires as standard. Tires with dimensions of 10-18″, later 10.5-18″, were available as special equipment.
The original type and the 411a have the portal axle called the sheet metal axle, which was manufactured by Erhard & Söhne. The sheet metal axle consists of two U-shaped sheet metal shells, each approx. 1.2 m long, with an offset for the differential in the middle; the two sheet metal shells were welded together on top of each other to form a banjo axle. The differential gear and the drive shafts are located inside. On the outside, a separate housing for the wheel reduction gears is bolted to each side of the sheet metal axles. A central fastening screw is fitted in the wheel hub, which is clearly visible from the outside.
From 1963, with the type 411b, Daimler-Benz also installed the axle of the Unimog 406 in a modified form in the 411. The new axles are constructed from a differential housing and two cast axle halves approx. 0.6 m long, with a half differential bell formed at the inner ends. The two axle halves are connected vertically to the differential housing with internal hexagon bolts (funnel axle). The wheel reduction gears are bolted to the outer ends. The external distinguishing feature of the new axle is the hub, from which the wheel lock screw no longer protrudes (see picture on the right). This new axle was cheaper to manufacture, easier to maintain and more resilient than the sheet metal axle. The axle ratio of the Unimog axles is 25 : 7 (Ur-411, 411a, 411b) or 35 : 9 (411c).
Gearbox
Daimler-Benz installed the UG1/11 gearbox in the Unimog 411, also known as the F gearbox, which is designed for an input torque of 107.9 N·m (11 kp·m). It has claw gears, ball-bearing shafts, six forward and two reverse gears. An additional creeper gearbox with two gears was available on request. The forward gears are engaged with the large upper lever, the reverse gears with the small lever in the middle and the creeper gears with the larger lower lever (see picture on the right). From March 1957, the gearbox could be converted to synchromesh by retrofitting balls, detent stones, leaf springs and synchronizer rings; from 1959 it was synchronized as standard and equipped with plain bearings. The same transmission had already been installed in the synchronized version in the Unimog 404 from 1955. A transfer case is directly flange-mounted to drive the front axle. The speed range extends from 1 to 55 km/h.
Pneumatics
The pneumatic system is the heart of the power lift system on the original 411, as the front and rear power lifts are moved pneumatically, as on the Unimog 401. The pneumatic system essentially consists of six main components: a compressor driven by the engine, a control valve, a compressed air tank installed diagonally across the top in front of the rear axle, the control unit in the cab, the rear power lift system with two pneumatic cylinders and the front power lift system with one pneumatic cylinder. The pneumatic system was essentially taken over from the Unimog 401, but reinforced for greater lifting power. The large compressed air tank in particular required a lot of space. On request, a pneumatic lifting cylinder was also available for tipping the platform, which was operated at a pressure of approx. 8 bar.
Hydraulic system
A hydraulic system was offered from type 411a onwards, but was not fitted as standard. It consists of six main components: a gear oil pump, an oil tank, two hydraulic cylinders and two control units with operating levers. The hydraulic pump has a maximum working pressure of 150 bar. The oil tank at the front of the Unimog has a capacity of 8.5 liters. The control units are located behind the engine; they each have a control lever. The control levers are mounted on a bar under the steering wheel. The driver can operate the hydraulic cylinder of the rear linkage with the first lever. The second lever is used to control the attachments.
Paintwork
Most of the vehicles are adapted to the taste of the 1950s and, like their predecessors, are painted in Unimog green. Unimog green was the standard color from the start of production until 1966, with around 54% of all vehicles having this color. Truck grey was also available ex works, the only color that was retained throughout the entire production period. However, only around 3% of all Unimog 411s ever built were painted in this color. From 1966 onwards, truck green was used as the standard color; this color had already been available for the Unimog 406 since 1963. Only 20% of all vehicles ever built had this color; 23% were painted in special colors, which were offered over the entire production period. Due to the large number of special colors, they are not listed separately here. The most important customers who ordered a special color were the Deutsche Bundesbahn and the Deutsche Bundespost in addition to the military.
Standard colors
The frame, tank, axles and springs were not painted in the color of the car, but in deep black (RAL 9005), the wheels in carmine red (RAL 3002). From 1958 to 1960, Daimler-Benz used chassis red (DB 3575) for these parts (with the exception of the wheels) instead. In the 1970s, Mercedes-Benz also changed the color of the wheels to jet black.
Accessories
Accessories were available separately at extra cost. Busatis developed the BM 62 KW mower specially for the Unimog 411 in collaboration with Daimler-Benz. As with other Unimog models, there was a front cable winch that was driven via the PTO shaft. Two different types of cable winch, type A and type C, were available, with a pulling force of 3000 kp or 3500 kp (approx. 29 to 34 kN) depending on the model. While the type A is the "simple" version, the type C has an additional reduction gear and a band brake, so that the type C cable winch is also suitable for lowering loads. Both cable winches have a cable length of 50 m and a cable diameter of 11 mm or 12 mm. The rope speed is infinitely variable between 48 and 60 m/min. Electron built a compressed air generator for the Unimog 411, which can be used to drive external compressed air devices such as pneumatic hammers or drills. The compressed air generator is driven by the front PTO shaft and delivers air at up to 2200 dm³/min; the operating pressure is 6 bar. In cooperation with Daimler-Benz, Donges Stahlbau developed the Unikran type SU, a crane trailer for the Unimog 411, between 1955 and 1957. The Unikran type SU has a lifting capacity of 2942 daN (3 Mp) and a hook height of approx. 7 m to 8 m. It can also be operated without a Unimog. The Swiss manufacturer Haller produced an engine back-pressure brake (exhaust brake) for the Unimog 411, which was retrofitted to a significant number of vehicles.
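For reference, the kilopond values quoted for the winches convert to SI units with 1 kp = 9.80665 N. The short sketch below is purely illustrative and not part of the original text:

```python
# Conversion of the quoted winch pulling forces from kilopond (kp) to
# kilonewton (kN); 1 kp = 9.80665 N. The force values are those given above.
KP_TO_KN = 9.80665 / 1000

for force_kp in (3000, 3500):
    print(f"{force_kp} kp = {force_kp * KP_TO_KN:.1f} kN")
# 3000 kp = 29.4 kN
# 3500 kp = 34.3 kN
```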
Technical data 1957
Subsequent valuation
With the Unimog 411a, Daimler-Benz successfully completed the expansion of the Unimog concept from tractor to system tractor for the first time. While the original Unimog was designed purely as an agricultural vehicle, it was recognized that the Unimog 411 was also in demand in other areas. In 1975, Gerold Lingnau wrote in a special edition of the Frankfurter Allgemeine Zeitung: "Admittedly, hardly 175,000 Unimogs would have been built to date if it had only remained an agricultural vehicle. Its career in other areas began early on. [...] The fact that the Unimog is so versatile is due not least to an enterprising equipment industry. It recognized its opportunity early on and - in close cooperation with Daimler-Benz - developed hundreds of attachments for this first 'implement carrier' in vehicle history." Carl-Heinz Vogler attributes the Unimog's development into a popular vehicle with local authorities, the construction industry and the transport sector to continuous further developments such as the reinforced frame of the 411a and the larger all-steel cab of the DvF model.
The flat ladder frame construction of the Unimog 411 is extremely robust, and its torsional and bending rigidity were unrivaled at the time, which made the Unimog 411 a particularly reliable vehicle. However, the U-411 frame could no longer keep up with the offset frame of the Unimog 404 and 406, which offered better torsional properties.
Literature
Carl-Heinz Vogler: Unimog 411: Typengeschichte und Technik. GeraMond-Verlag, München 2014, ISBN 978-3-86245-605-5.
Gerold Lingnau: Unimog. Des Menschen bester Freund. Die dreißig Jahre alte Idee vom „Universal-Motor-Gerät“ ist heute noch taufrisch / Bisher 175 000 Einheiten gebaut. In: Frankfurter Allgemeine Zeitung, 5 March 1975, p. 29.
Remarks
Reference
Tractors
Mercedes-Benz trucks | Unimog 411 | Engineering | 6,687 |
39,228,340 | https://en.wikipedia.org/wiki/Information%20security%20indicators | In information technology, benchmarking of computer security requires measurements for comparing both different IT systems and single IT systems in dedicated situations. The technical approach is a pre-defined catalog of security events (security incident and vulnerability) together with corresponding formulas for the calculation of security indicators that are accepted and comprehensive.
Information security indicators have been standardized by the ETSI Industrial Specification Group (ISG) ISI. These indicators provide the basis to switch from a qualitative to a quantitative culture in IT security. Scope of measurements: external and internal threats (attempt and success), users' deviant behaviours, nonconformities and/or vulnerabilities (software, configuration, behavioural, general security framework). In 2019 the ISG ISI was terminated, and the related standards are now maintained via the ETSI TC CYBER.
The list of Information Security Indicators belongs to the ISI framework that consists of the following eight closely linked Work Items:
ISI Indicators (ISI-001-1 and Guide ISI-001-2): A powerful way to assess security controls level of enforcement and effectiveness (+ benchmarking)
ISI Event Model (ISI-002): A comprehensive security event classification model (taxonomy + representation)
ISI Maturity (ISI-003): Necessary to assess the maturity level regarding overall SIEM capabilities (technology/people/process) and to weigh event detection results. Methodology complemented by ISI-005 (which is a more detailed and case-by-case approach)
ISI Guidelines for event detection implementation (ISI-004): Demonstrate through examples how to produce indicators and how to detect the related events with various means and methods (with classification of use cases/symptoms)
ISI Event Stimulation (ISI-005): Propose a way to produce security events and to test the effectiveness of existing detection means (for major types of events)
An ISI-compliant Measurement and Event Management Architecture for Cyber Security and Safety (ISI-006): This work item focuses on designing a cybersecurity language to model threat intelligence information and enable detection tools interoperability.
ISI Guidelines for building and operating a secured SOC (ISI-007): A set of requirements to build and operate a secured SOC (Security Operations Center) addressing technical, human and process aspects.
ISI Description of a whole organization-wide SIEM approach (ISI-008): A whole SIEM (CERT/SOC based) approach positioning all ISI aspects and specifications.
Preliminary work on information security indicators has been done by the French Club R2GS. The first public set of the ISI standards (security indicators list and event model) was released in April 2013.
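As a purely hypothetical illustration of the "catalog of events plus formulas" idea described in the lead (the event names and the normalisation below are ours, not taken from any ETSI ISI specification), an indicator might normalise monthly event counts per 1000 monitored assets so that values are comparable across organisations:

```python
# Hypothetical sketch only: the event names and per-1000-assets formula are
# illustrative and do not come from the ETSI ISI catalog.
monthly_events = {"intrusion_attempts": 42, "malware_detections": 7}
assets_monitored = 1200

indicators = {name: count / assets_monitored * 1000   # events per 1000 assets
              for name, count in monthly_events.items()}
print(indicators["intrusion_attempts"])  # 35.0
```

A shared normalisation of this kind is what makes indicator values comparable between IT systems of different sizes, which is the benchmarking goal stated above.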
References
External links
ETSI ISG ISI members
ETSI TC CYBER (responsible for ISI maintenance)
ETSI ISI flyer
ISI Quick Reference Card
ISI events Quick Reference Card
Club R2GS portal
Data security
Security | Information security indicators | Engineering | 605 |
20,501,088 | https://en.wikipedia.org/wiki/Ohmer%20fare%20register | The Ohmer fare register was, in various models, a mechanical device for registering and recording the fares of passengers on streetcars, buses and taxis in the early 20th century. It was invented and improved by members and employees of the Ohmer family of Dayton, Ohio, especially John F. Ohmer who founded the Ohmer Fare Register Company in 1898, and his brother Wilfred I. Ohmer of the Recording and Computing Machines Company of Dayton, Ohio. This latter company employed up to 9,000 people at one time and was a major manufacturer of precision equipment during World War I. It was subsequently renamed the Ohmer Corporation and in 1949, acquired by Rockwell Manufacturing Company.
Fare registers on city buses were replaced by fare boxes by the middle of the 20th century, and today by ticket or card machines. Ohmer fare registers can be found in use and on display at trolley museums throughout the U.S.
A station on the Sacramento Northern line through Concord, California, was called "Ohmer", named for the Ohmer company and its fare register. The site is now occupied by the North Concord/Martinez Station of the Bay Area Rapid Transit system.
See also
Taximeter
References
US Patent No. 764494, issued July 5, 1904
US Patent No. 1615541, issued January 25, 1927
External links
NY Times Obituary Nov.5, 1938, John F. Ohmer
Fare collection systems
Measuring instruments
Tram technology | Ohmer fare register | Technology,Engineering | 289 |
1,643,733 | https://en.wikipedia.org/wiki/%C3%98ystein%20Ore | Øystein Ore (7 October 1899 – 13 August 1968) was a Norwegian mathematician known for his work in ring theory, Galois connections, graph theory, and the history of mathematics.
Life
Ore graduated from the University of Oslo in 1922 with a Cand.Real. degree in mathematics. In 1924, the University of Oslo awarded him the Ph.D. for a thesis titled Zur Theorie der algebraischen Körper, supervised by Thoralf Skolem. Ore also studied at Göttingen University, where he learned Emmy Noether's new approach to abstract algebra. He was also a fellow at the Mittag-Leffler Institute in Sweden, and spent some time at the University of Paris. In 1925, he was appointed research assistant at the University of Oslo.
Yale University’s James Pierpont went to Europe in 1926 to recruit research mathematicians. In 1927, Yale hired Ore as an assistant professor of mathematics, promoted him to associate professor in 1928, then to full professor in 1929. In 1931, he became a Sterling Professor (Yale's highest academic rank), a position he held until he retired in 1968.
Ore gave an American Mathematical Society Colloquium lecture in 1941 and was a plenary speaker at the International Congress of Mathematicians in 1936 in Oslo. He was also elected to the American Academy of Arts and Sciences and the Oslo Academy of Science. He was a founder of the Econometric Society.
Ore visited Norway nearly every summer. During World War II, he was active in the "American Relief for Norway" and "Free Norway" movements. In gratitude for the services rendered to his native country during the war, he was decorated in 1947 with the Order of St. Olav.
In 1930, Ore married Gudrun Lundevall. They had two children. Ore had a passion for painting and sculpture, collected ancient maps, and spoke several languages.
Work
Ore is known for his work in ring theory, Galois connections, and most of all, graph theory.
His early work was on algebraic number fields, how to decompose the ideal generated by a prime number into prime ideals. He then worked on noncommutative rings, proving his celebrated theorem on embedding a domain into a division ring. He then examined polynomial rings over skew fields, and attempted to extend his work on factorisation to non-commutative rings. The Ore condition, which (if true) allows a ring of fractions to be defined, and the Ore extension, a non-commutative analogue of rings of polynomials, are part of this work. In more elementary number theory, Ore's harmonic numbers are the numbers whose divisors have an integer harmonic mean.
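The harmonic divisor numbers just mentioned can be found directly from the definition: the harmonic mean of the divisors of n equals d(n)·n/σ(n), where d(n) is the number of divisors and σ(n) their sum. A minimal sketch (the helper name is ours):

```python
# Sketch: find Ore's harmonic divisor numbers -- integers n whose divisors
# have an integer harmonic mean d(n) * n / sigma(n). Helper name is ours.

def is_harmonic_divisor_number(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    count, total = len(divisors), sum(divisors)
    # harmonic mean = count / sum(1/d over divisors) = count * n / sigma(n)
    return (count * n) % total == 0

first_five = [n for n in range(1, 300) if is_harmonic_divisor_number(n)][:5]
print(first_five)  # [1, 6, 28, 140, 270]
```

For example, 6 has divisors 1, 2, 3, 6 with harmonic mean 4/(1 + 1/2 + 1/3 + 1/6) = 2, an integer.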
As a teacher, Ore is notable for supervising two doctoral students who would make contributions to science and mathematics: Grace Hopper, who eventually became a United States rear admiral and computer scientist and who was a pioneer in developing the first computers, and Marshall Hall, Jr., an American mathematician who did important research in group theory and combinatorics.
In 1930, the Collected Works of Richard Dedekind were published in three volumes, jointly edited by Ore and Emmy Noether. He then turned his attention to lattice theory becoming, together with Garrett Birkhoff, one of the two founders of American expertise in the subject. Ore's early work on lattice theory led him to the study of equivalence and closure relations, Galois connections, and finally to graph theory, which occupied him to the end of his life. He wrote two books on the subject, one on the theory of graphs and another on their applications. Within graph theory, Ore's theorem is one of several results proving that sufficiently dense graphs contain Hamiltonian cycles.
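Ore's theorem states that a graph on n ≥ 3 vertices is Hamiltonian if every pair of non-adjacent vertices has degree sum at least n. The condition can be checked in a few lines (the function name is ours; note the condition is sufficient only, so a False result does not prove a graph non-Hamiltonian):

```python
# Sketch: test Ore's sufficient condition for Hamiltonicity on a graph
# given as an adjacency-set dict. The function name is ours.

def satisfies_ore_condition(adj):
    n = len(adj)
    if n < 3:
        return False
    for u in adj:
        for v in adj:
            if u != v and v not in adj[u]:
                if len(adj[u]) + len(adj[v]) < n:
                    return False
    return True

# The 4-cycle a-b-c-d-a: every non-adjacent pair has degree sum 4 = n.
cycle4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(satisfies_ore_condition(cycle4))  # True

# A path a-b-c-d fails: endpoints a and d have degree sum 2 < 4.
path4 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(satisfies_ore_condition(path4))   # False
```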
Ore had a lively interest in the history of mathematics, and was an unusually able author of books for laypeople, such as his biographies of Cardano and Niels Henrik Abel.
Books by Ore
Les Corps Algébriques et la Théorie des Idéaux (1934)
L'Algèbre Abstraite (1936)
Number Theory and its History (1948)
Cardano, the Gambling Scholar (Princeton University Press, 1953)
Niels Henrik Abel, Mathematician Extraordinary (U. of Minnesota Press, 1957)
Theory of Graphs (1962)
Graphs and Their Uses (1963)
The Four-Color Problem (1967)
Invitation to Number Theory (1969)
Articles by Ore
See also
Deficiency (graph theory)
Geodetic graph
Magma (algebra)
Ore algebra
Ore condition
Ore's conjecture
Ore extension
Ore number
Ore's theorem
Schwartz–Zippel lemma
Universal algebra
References
External links
. The source for much of this entry.
20th-century Norwegian mathematicians
Combinatorialists
Lattice theorists
Yale University faculty
Historians of mathematics
1899 births
1968 deaths
Yale Sterling Professors | Øystein Ore | Mathematics | 970 |
26,174,604 | https://en.wikipedia.org/wiki/Pocklington%27s%20algorithm | Pocklington's algorithm is a technique for solving a congruence of the form
x^2 ≡ a (mod p),
where x and a are integers and a is a quadratic residue.
The algorithm is one of the first efficient methods to solve such a congruence. It was described by H.C. Pocklington in 1917.
The algorithm
(Note: all congruences are taken modulo p, unless indicated otherwise.)
Inputs:
p, an odd prime
a, an integer which is a quadratic residue (mod p).
Outputs:
x, an integer satisfying x^2 ≡ a (mod p). Note that if x is a solution, −x is a solution as well, and since p is odd, x ≢ −x (mod p). So there is always a second solution when one is found.
Solution method
Pocklington separates 3 different cases for p:
The first case: if p ≡ 3 (mod 4), with p = 4m + 3, the solution is x ≡ ±a^(m+1) (mod p).
The second case: if p ≡ 5 (mod 8), with p = 8m + 5, and
a^(2m+1) ≡ 1 (mod p), the solution is x ≡ ±a^(m+1) (mod p).
a^(2m+1) ≡ −1 (mod p), then 2 is a (quadratic) non-residue, so 2^(4m+2) ≡ −1 (mod p). This means that (2^(2m+1)·a^(m+1))^2 ≡ 2^(4m+2)·a^(2m+1)·a ≡ (−1)·(−1)·a ≡ a (mod p), so x ≡ 2^(2m+1)·a^(m+1) is a solution of x^2 ≡ a (mod p). Hence x ≡ ±2^(2m+1)·a^(m+1) (mod p).
The third case: if p ≡ 1 (mod 8), find by trial and error numbers t and u such that D = t^2 − a·u^2 is a quadratic non-residue (mod p). Furthermore, let t_n and u_n be defined by
t_n + u_n·√a = (t + u·√a)^n.
The following equalities now hold:
t_(2n) = t_n^2 + a·u_n^2, u_(2n) = 2·t_n·u_n and t_n^2 − a·u_n^2 = D^n.
Supposing that p is of the form 4k + 1 (which is true if p is of the form 8k + 1), −1 is a quadratic residue (mod p), and since D is a non-residue, D^((p−1)/2) ≡ −1 (mod p). Moreover, since a is a residue, √a exists (mod p), and because t^2 − a·u^2 = D ≢ 0 (mod p), both t ± u·√a are nonzero (mod p); Fermat's little theorem applied to them gives t_(p−1) ≡ 1 and u_(p−1) ≡ 0 (mod p). Now the equations
t_n^2 − a·u_n^2 ≡ D^n (mod p) and u_(2n) = 2·t_n·u_n
give a solution.
Let m = (p−1)/2. Then u_(2m) ≡ 0 (mod p), so either t_m or u_m is divisible by p. If it is u_m, halve m and proceed similarly with u_(2m) = 2·t_m·u_m. Not every u_m is divisible by p, for u_1 = u is not. The case u_m ≡ 0 with m odd is impossible, because t_m^2 − a·u_m^2 = D^m holds and this would mean that the square t_m^2 is congruent to the quadratic non-residue D^m, which is a contradiction. So this loop stops when t_m ≡ 0 (mod p) for a particular m; this gives a·u_m^2 ≡ −D^m (mod p). In particular, for m = (p−1)/2 this reads a·u_m^2 ≡ −D^((p−1)/2) ≡ 1 (mod p), and the solution of x^2 ≡ a (mod p) is then got by solving the linear congruence u_m·x ≡ 1 (mod p).
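The first two cases can be expressed as a short program. The sketch below (the function name and structure are ours, not Pocklington's) uses Python's three-argument pow for modular exponentiation and omits the third case:

```python
# A sketch of the first two cases of Pocklington's algorithm (function name
# and structure are ours). pow(b, e, p) does modular exponentiation; the
# third case (p = 1 mod 8) is omitted here.

def pocklington_sqrt(a, p):
    """Return one x with x*x = a (mod p); -x mod p is the other solution."""
    a %= p
    if pow(a, (p - 1) // 2, p) != 1:
        raise ValueError("a is not a quadratic residue mod p")
    if p % 4 == 3:                       # first case: p = 4m + 3
        m = (p - 3) // 4
        return pow(a, m + 1, p)
    if p % 8 == 5:                       # second case: p = 8m + 5
        m = (p - 5) // 8
        if pow(a, 2 * m + 1, p) == 1:
            return pow(a, m + 1, p)
        # here a^(2m+1) = -1 and 2 is a non-residue: x = 2^(2m+1) * a^(m+1)
        return pow(2, 2 * m + 1, p) * pow(a, m + 1, p) % p
    raise NotImplementedError("p = 1 (mod 8) requires the third case")

x = pocklington_sqrt(2, 23)    # first case, modulus 23
assert x * x % 23 == 2
y = pocklington_sqrt(10, 13)   # second case, modulus 13
assert y * y % 13 == 10
```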
Examples
The following are 4 examples, corresponding to the 3 different cases in which Pocklington divided the forms of p. All congruences are taken modulo the modulus given in each example.
Example 0
Consider the congruence x^2 ≡ 43 (mod 47). This is the first case: 47 = 4·11 + 3, so m = 11 and the algorithm would give x ≡ ±43^12 ≡ ±2 (mod 47). But 2^2 = 4, not 43, so the algorithm should not be applied at all. The reason it is not applicable is that a = 43 is a quadratic non-residue for p = 47.
Example 1
Solve a congruence x^2 ≡ a (mod 23) for a quadratic residue a.
The modulus is 23. This is p = 4·5 + 3, so m = 5, and the solution is x ≡ ±a^6 (mod 23), which can be checked by squaring.
Example 2
Solve a congruence x^2 ≡ a (mod 13) for a quadratic residue a.
The modulus is 13. This is p = 8·1 + 5, so m = 1. First check whether a^(2m+1) = a^3 is congruent to +1 or to −1 (mod 13); the solution is then x ≡ ±a^2 (mod 13) in the first sub-case and x ≡ ±2^3·a^2 (mod 13) in the second, which can be checked by squaring.
Example 3
Solve a congruence x^2 ≡ a (mod p) with p ≡ 1 (mod 8), i.e. the third case. First find t and u such that D = t^2 − a·u^2 is a quadratic non-residue (mod p). Then compute the sequences t_n and u_n by repeated use of the doubling formulas t_(2n) = t_n^2 + a·u_n^2 and u_(2n) = 2·t_n·u_n.
Since D^((p−1)/2) ≡ −1 (mod p), the identity t_n^2 − a·u_n^2 = D^n leads, once an index m with t_m ≡ 0 (mod p) is reached, to a·u_m^2 ≡ −D^m (mod p), and solving the resulting linear congruence in x gives a solution of x^2 ≡ a (mod p).
References
Leonard Eugene Dickson, "History Of The Theory Of Numbers" vol 1 p 222, Chelsea Publishing 1952
Modular arithmetic
Number theoretic algorithms | Pocklington's algorithm | Mathematics | 640 |
2,200,436 | https://en.wikipedia.org/wiki/Front%20velocity | In physics, front velocity is the speed at which the first rise of a pulse above zero moves forward.
In mathematics, it is used to describe the velocity of a propagating front in the solution of a hyperbolic partial differential equation.
Various velocities
Associated with propagation of a disturbance are several different velocities. For definiteness, consider an amplitude modulated electromagnetic carrier wave. The phase velocity is the speed of the underlying carrier wave. The group velocity is the speed of the modulation or envelope. Initially it was thought that the group velocity coincided with the speed at which information traveled. However, it turns out that this speed can exceed the speed of light in some circumstances, causing confusion by an apparent conflict with the theory of relativity. That observation led to consideration of what constitutes a signal.
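The distinction between phase and group velocity can be made concrete with an assumed waveguide-type dispersion relation ω(k) = c·√(k² + k₀²) (an illustration chosen here, not taken from the text), for which the phase velocity exceeds c while the group velocity stays below it, with v_p·v_g = c²:

```python
import math

# Illustration with an assumed waveguide-like dispersion relation
# w(k) = c * sqrt(k**2 + k0**2); the numerical values are arbitrary.
c, k0, k = 1.0, 2.0, 3.0

w = c * math.sqrt(k ** 2 + k0 ** 2)
phase_velocity = w / k            # v_p = w/k, exceeds c
group_velocity = c ** 2 * k / w   # v_g = dw/dk = c**2 * k / w, below c

assert phase_velocity > c and group_velocity < c
assert math.isclose(phase_velocity * group_velocity, c ** 2)
```

Even though v_p here is superluminal, no information travels faster than c: the front of any signal in such a medium still propagates at the front velocity c.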
By definition, a signal involves new information or an element of 'surprise' that cannot be predicted from the wave motion at an earlier time. One possible form for a signal (at the point of emission) is:
f(t) = sin(ωt)·u(t), where u(t) is the Heaviside step function. Using such a form for a signal, it can be shown, subject to the (expected) condition that the refractive index of any medium tends to one as the frequency tends to infinity, that the wave discontinuity, called the front, propagates at a speed less than or equal to the speed of light c in any medium. In fact, the earliest appearance of the front of an electromagnetic disturbance (the precursor) travels at the front velocity, which is c, no matter what the medium. However, the process always starts from zero amplitude and builds up.
References
Wave mechanics | Front velocity | Physics | 338 |
66,497,650 | https://en.wikipedia.org/wiki/Bruceanol%20D | Bruceanol D is a cytotoxic quassinoid isolated from Brucea antidysenterica with potential antitumor and antileukemic properties.
See also
Bruceanol
References
Quassinoids
Heterocyclic compounds with 5 rings
Methyl esters | Bruceanol D | Chemistry | 60 |
75,369,294 | https://en.wikipedia.org/wiki/Fragile%20masculinity | Fragile masculinity is the anxiety among males who feel they do not meet cultural standards of masculinity. Evidence suggests that this concept is necessary to understand their attitudes and behaviors. Research has shown that this anxiety can manifest in various ways, including aggressive behavior, resistance to changing gender norms, and difficulty in expressing vulnerability.
Concept
Manhood is thought to be a precarious social status. Unlike womanhood, it is thought to be "elusive and tenuous," needing to be proven repeatedly. It is neither inevitable nor permanent; it must be earned "against powerful odds". As a result, men who have their masculinity challenged may respond in ways that are unpleasant, or even harmful.
Factors
Race and ethnicity
Race is a factor in American standards of masculinity. Hegemonic masculinity is denied to men of color, as well as working class white men. This has profound implications for the life trajectories and attitudes of African-American men.
Asian American men are frequently unable to be perceived as masculine in American society, and there is growing anger from young Asian-American men that they cannot be made to fit the standard of American masculinity. It is a common complaint among young Asian-American men that they struggle to compete with White American men for Asian women. This anger has led to the formation of online communities for Asian men who are concerned about their reputation, and two such communities on Reddit have been implicated in the online harassment of Asian women who are in interracial relationships with White American men. On the other hand, some Asian-American men have rejected the hegemonic notion of masculinity and embraced their own alternative form of masculinity, which values education and law-abidingness over American notions of masculinity.
Age
As young men try to find their place in society, age becomes an important variable in understanding male fragility. Men in the 18–25 age range display riskier and more aggressive behavior. In some places, younger men face constant threats to their manhood and have to prove it daily. The more their manhood was threatened, the more aggressive the response.
Parenthood
Research has found that fathers are less likely to view masculinity as fragile compared to non-fathers. This suggests that the experience of being a father might reinforce a man's masculine identity. However, low self-perceived masculinity after parenthood was a predictor of sexual depression among fathers.
Behavior
When men feel their masculinity has been threatened, they often attempt to regain their sense of authority. The threats may include having a female supervisor or being given a job traditionally viewed as feminine. They may react by engaging in harmful behavior, such as undermining and mistreating colleagues, lying for personal gain, withholding help and stealing company property.
Online harassment is a common response from men who are intimidated by displays of strength by women.
A 2012 study, using a racially diverse sample of jail inmates, found that those who scored high on measurements of "fragile masculinity" tended to feel uncomfortable around women.
Health
A 2014 study found that men who endorsed traditional values of masculinity had worse health outcomes. Men with traditionally masculine beliefs are more likely to exhibit behaviors such as aggression (when externally challenged) and self-harm under stress (when internally challenged).
Men with strongly held masculine beliefs are half as likely to seek preventative healthcare; they are more likely to smoke, drink heavily and avoid vegetables; men are less likely to seek psychological help. A review of recent research found a link between the endorsement of precarious masculinity and poorer health outcomes in men. Although the link was "modest" it nevertheless accounted for some of men's poorer health outcomes, relative to women.
Sexual relationships
Women who believed their partner had fragile masculinity (such as in relationships where women earn twice as much money as their partners) were more likely to fake orgasms and less likely to provide honest sexual communication. However, these authors cautioned against the assumption that either partner is to blame in such cases, pointing out that American standards of masculinity are nearly impossible to meet.
Political beliefs
A link has been shown between male fragility and aggressive political stances, such as climate change denial. This suggests that "fragile masculinity is crucial to fully understanding men's political attitudes and behaviors." The 2024 Trump campaign emphasized restoration of the traditional male role, likely motivating a rightward shift in young men.
Proposed solutions
Based on their research, Maryam Kouchaki and colleagues have suggested that acknowledgement of fragile masculinity is a crucial first step toward improvement. They point out that many men are not even aware that they feel threatened, and that they are not even aware of toxic behaviors that may result from a threat. Increased self-awareness may allow men to break this pattern. Embracing healthy forms of masculinity was also suggested. Finally, these authors suggested that dismantling toxic workplace structures which encourage harmful masculine attitudes is a vital step in reducing fragile masculinity. According to Stanaland and colleagues, less rigid expectations of what masculinity should be could allow for a more resilient form of masculinity.
Popular culture
The 2016 film Moonlight has been called a "masterclass in masculine fragility." Chiron, according to writer Eli Badillo, embraced his fragility as a path to self-discovery.
See also
Toxic masculinity
Hypermasculinity
Mythopoetic men's movement
Gender role
Social role
References
Social psychology
Aggression
Workplace harassment and bullying
Masculism
Psychology
Industrial and organizational psychology
Sociology
Control (social and political)
Social psychology concepts
Psychological adjustment
Behavioral concepts
Gender and society
Masculinity
Orgasm
Role theory
Role status
Gender roles
Gender-related stereotypes | Fragile masculinity | Biology | 1,204 |
75,404,013 | https://en.wikipedia.org/wiki/CEERS-2112 | CEERS-2112 is the most distant barred spiral galaxy observed as of 2023. The light observed from the galaxy was emitted when the universe was only 2.1 billion years old. Its mass was determined to be similar to that of the Milky Way at a comparable stage of its evolution.
Observations
The galaxy is located in the Extended Groth Strip cosmological field and it was identified as a barred spiral galaxy thanks to the observations of the NIRCam instrument onboard the James Webb Space Telescope. These observations were made in June 2022 as part of the Cosmic Evolution Early Release Science (CEERS) survey and are publicly available for the general community.
Morphology
CEERS-2112 is a barred spiral galaxy, resembling the structure of the Milky Way. It presents a concentration of stars moving on very elliptical orbits in its central region, which appears as an elongated structure (a stellar bar) from which two faint spiral arms develop. In the local Universe about 70% of galaxies show this appearance, but it is quite rare in the early Universe, where the fraction diminishes to about 5% at redshift z > 2.
Stellar mass
The galaxy has a stellar mass of 3.9 billion times that of the Sun, comparable with that of the Milky Way 11.7 billion years ago.
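The quoted ages follow from the galaxy's redshift. As a rough sketch (not from the source), the age of the universe at a given redshift can be computed in a flat ΛCDM cosmology; the redshift z ≈ 3 and the cosmological parameters below are assumed, illustrative values:

```python
import math

# Illustrative flat Lambda-CDM parameters (assumptions, not from the article)
H0 = 67.7          # Hubble constant, km/s/Mpc
OMEGA_M = 0.31     # matter density parameter
OMEGA_L = 0.69     # dark-energy density parameter
KM_S_MPC_TO_PER_GYR = 1.0 / 977.8  # 1 km/s/Mpc ~ 1/977.8 Gyr^-1

def age_at_redshift(z, n=100_000):
    """Age of the universe at redshift z, in Gyr.

    Integrates t = \u222b_0^a da' / (a' H(a')) with a = 1/(1+z) and
    H(a) = H0 * sqrt(OMEGA_M * a**-3 + OMEGA_L), via the trapezoidal rule.
    """
    a_max = 1.0 / (1.0 + z)
    h0 = H0 * KM_S_MPC_TO_PER_GYR
    total = 0.0
    prev = 0.0  # integrand -> 0 as a -> 0 in the matter-dominated limit
    for i in range(1, n + 1):
        a = a_max * i / n
        f = 1.0 / (h0 * math.sqrt(OMEGA_M / a + OMEGA_L * a * a))
        total += 0.5 * (prev + f) * (a_max / n)
        prev = f
    return total

print(f"Age at z = 3: {age_at_redshift(3.0):.2f} Gyr")  # about 2.1 Gyr
```

With these parameters the integral gives roughly 2.1 Gyr at z = 3 and roughly 13.8 Gyr today, consistent with the ages quoted above.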
References
External links
CEERS public webpage: https://ceers.github.io
The Mikulski Archive for Space Telescopes (CEERS datasets): https://archive.stsci.edu/hlsp/ceers
Dwarf spiral galaxies
Barred spiral galaxies
Ursa Major
Boötes
Astronomical objects discovered in 2023 | CEERS-2112 | Astronomy | 328 |
1,607,968 | https://en.wikipedia.org/wiki/Castle%20thunder%20%28sound%20effect%29 | Castle thunder is a sound effect that consists of the sound of a loud thunderclap during a rainstorm. It was originally recorded for the 1931 film Frankenstein, and has since been used in dozens of films, television programs, and commercials.
History
After its use in Frankenstein, the castle thunder effect was used in dozens of films from the 1930s through the 1980s, including Citizen Kane (1941), Bambi (1942), You Only Live Twice (1967), Young Frankenstein (1974), Star Wars (1977), Ghostbusters (1984), Back to the Future (1985), and Big Trouble in Little China (1986). Use of the effect in subsequent years has declined because the quality of the original analog recording does not sufficiently hold up in modern sound mixes.
The effect appears in Disney productions (largely from the 1940s to 1980s), and Hanna-Barbera cartoons, including the original Scooby-Doo animated series. It can also be heard at the Haunted Mansion attraction at Disney theme parks.
The sound can be found on a few sound effects libraries distributed by Sound Ideas (such as the Soundelux Master Collection, the Network Sound Effects Library, the 20th Century Fox Sound Effects Library and the Hanna-Barbera SoundFX Library).
See also
Wilhelm scream
Howie scream
Tarzan's jungle call
Goofy holler
References
External links
Common variants of the sound effect
Video compilation of castle thunder in modern animation
How the crash and roll of castle thunder matches the science of thunderstorms
In-jokes
Sound effects
1931 works
Lightning | Castle thunder (sound effect) | Physics | 310 |
60,849,373 | https://en.wikipedia.org/wiki/Major%20irrigation%20project | Major irrigation project is a classification of irrigation projects used in India. A project with a cultivable command area of more than 10,000 hectares is classified as a major irrigation project. Before the Fifth Five-Year Plan, irrigation schemes were classified on the basis of investments needed to implement the scheme. Since the Fifth Five-Year Plan, India has adopted the command area-based system of classification.
References
Irrigation projects
Irrigation in India | Major irrigation project | Engineering | 88 |
258,833 | https://en.wikipedia.org/wiki/Thermal%20analysis | Thermal analysis is a branch of materials science where the properties of materials are studied as they change with temperature. Several methods are commonly used – these are distinguished from one another by the property which is measured:
Dielectric thermal analysis: dielectric permittivity and loss factor
Differential thermal analysis: temperature difference versus temperature or time
Differential scanning calorimetry: heat flow changes versus temperature or time
Dilatometry: volume changes with temperature change
Dynamic mechanical analysis: measures storage modulus (stiffness) and loss modulus (damping) versus temperature, time and frequency
Evolved gas analysis: analysis of gases evolved during heating of a material, usually decomposition products
Isothermal titration calorimetry
Isothermal microcalorimetry
Laser flash analysis: thermal diffusivity and thermal conductivity
Thermogravimetric analysis: mass change versus temperature or time
Thermomechanical analysis: dimensional changes versus temperature or time
Thermo-optical analysis: optical properties
Derivatography: a complex method combining thermogravimetric and differential thermal analysis
Simultaneous thermal analysis generally refers to the simultaneous application of thermogravimetry and differential scanning calorimetry to one and the same sample in a single instrument. The test conditions are perfectly identical for the thermogravimetric analysis and differential scanning calorimetry signals (same atmosphere, gas flow rate, vapor pressure of the sample, heating rate, thermal contact to the sample crucible and sensor, radiation effect, etc.). The information gathered can even be enhanced by coupling the simultaneous thermal analysis instrument to an Evolved Gas Analyzer like Fourier transform infrared spectroscopy or mass spectrometry.
Other, less common, methods measure the sound or light emission from a sample, or the electrical discharge from a dielectric material, or the mechanical relaxation in a stressed specimen. The essence of all these techniques is that the sample's response is recorded as a function of temperature (and time).
It is usual to control the temperature in a predetermined way – either by a continuous increase or decrease in temperature at a constant rate (linear heating/cooling) or by carrying out a series of determinations at different temperatures (stepwise isothermal measurements). More advanced temperature profiles have been developed which use an oscillating (usually sine or square wave) heating rate (Modulated Temperature Thermal Analysis) or modify the heating rate in response to changes in the system's properties (Sample Controlled Thermal Analysis).
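As a minimal sketch of the temperature programs described above (a constant-rate ramp and a modulated ramp with a sinusoidal perturbation; all rates, amplitudes and periods are arbitrary illustrative values):

```python
import math

def linear_program(t_min, t0=25.0, rate=10.0):
    """Linear heating: temperature (deg C) after t_min minutes at rate deg C/min."""
    return t0 + rate * t_min

def modulated_program(t_min, t0=25.0, rate=2.0, amplitude=1.0, period=1.0):
    """Modulated-temperature program: a linear ramp plus a sine oscillation.

    The instantaneous heating rate oscillates about the underlying rate,
    which is what lets modulated techniques separate different
    contributions to the measured response.
    """
    return t0 + rate * t_min + amplitude * math.sin(2 * math.pi * t_min / period)

print(linear_program(10))       # 125.0 deg C after 10 min at 10 deg C/min
print(modulated_program(0.25))  # 26.5: ramp plus full sine amplitude at a quarter period
```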
In addition to controlling the temperature of the sample, it is also important to control its environment (e.g. atmosphere). Measurements may be carried out in air or under an inert gas (e.g. nitrogen or helium). Reducing or reactive atmospheres have also been used and measurements are even carried out with the sample surrounded by water or other liquids. Inverse gas chromatography is a technique which studies the interaction of gases and vapours with a surface - measurements are often made at different temperatures so that these experiments can be considered to come under the auspices of Thermal Analysis.
Atomic force microscopy uses a fine stylus to map the topography and mechanical properties of surfaces to high spatial resolution. By controlling the temperature of the heated tip and/or the sample a form of spatially resolved thermal analysis can be carried out.
Thermal analysis is also often used as a term for the study of heat transfer through structures. Many of the basic engineering data for modelling such systems comes from measurements of heat capacity and thermal conductivity.
Polymers
Polymers represent another large area in which thermal analysis finds strong applications. Thermoplastic polymers are commonly found in everyday packaging and household items; for these materials, analysis of the raw materials, of the effects of the many additives used (including stabilisers and colours), and fine-tuning of the moulding or extrusion processing can be achieved using differential scanning calorimetry. An example is oxidation induction time by differential scanning calorimetry, which can determine the amount of oxidation stabiliser present in a thermoplastic (usually a polyolefin) polymer material. Compositional analysis is often made using thermogravimetric analysis, which can separate fillers, polymer resin and other additives. Thermogravimetric analysis can also give an indication of thermal stability and the effects of additives such as flame retardants. (See J. H. Flynn and L. A. Wall, "General Treatment of the Thermogravimetry of Polymers", J. Res. Nat. Bur. Standards, 1966, 70A(5), 487.)
Thermal analysis of composite materials, such as carbon fibre or glass-epoxy composites, is often carried out using dynamic mechanical analysis, which can measure the stiffness of materials by determining the modulus and damping (energy-absorbing) properties of the material. Aerospace companies often employ these analysers in routine quality control to ensure that products being manufactured meet the required strength specifications. Formula 1 racing car manufacturers have similar requirements. Differential scanning calorimetry is used to determine the curing properties of the resins used in composite materials, and can also confirm whether a resin can be cured and how much heat is evolved during that process. Application of predictive kinetics analysis can help to fine-tune manufacturing processes. Another example is that thermogravimetric analysis can be used to measure the fibre content of composites by heating a sample to remove the resin and then determining the mass remaining.
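The fibre-content measurement mentioned above is a simple mass balance on the TGA residue; a minimal sketch, with made-up sample masses:

```python
def fiber_mass_fraction(initial_mg, residue_mg):
    """Fibre mass fraction from a TGA burn-off: the resin is removed by
    heating, and the (thermally stable) fibre remains as the residue."""
    if not 0 <= residue_mg <= initial_mg:
        raise ValueError("residue must be between 0 and the initial mass")
    return residue_mg / initial_mg

# Hypothetical carbon-fibre/epoxy sample: 20.0 mg before, 12.4 mg after burn-off
print(f"{fiber_mass_fraction(20.0, 12.4):.0%} fibre by mass")  # 62% fibre by mass
```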
Metals
Production of many metals (cast iron, grey iron, ductile iron, compacted graphite iron, 3000 series aluminium alloys, copper alloys, silver, and complex steels) is aided by a production technique also referred to as thermal analysis. A sample of liquid metal is removed from the furnace or ladle and poured into a sample cup with a thermocouple embedded in it. The temperature is then monitored, and the phase diagram arrests (liquidus, eutectic, and solidus) are noted. From this information the chemical composition based on the phase diagram can be calculated, or the crystalline structure of the cast sample can be estimated, especially for silicon morphology in hypo-eutectic Al-Si cast alloys. Strictly speaking, these measurements are cooling curves and a form of sample-controlled thermal analysis, whereby the cooling rate of the sample depends on the cup material (usually bonded sand) and the sample volume, which is normally constant due to the use of standard-sized sample cups. To detect phase evolution and the corresponding characteristic temperatures, the cooling curve and its first derivative should be considered simultaneously. Examination of the cooling and derivative curves is done using appropriate data-analysis software, and consists of plotting, smoothing and curve fitting as well as identifying the reaction points and characteristic parameters. This procedure is known as computer-aided cooling curve thermal analysis.
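The computer-aided cooling curve procedure can be sketched as follows — smooth the measured curve, take its first derivative, and flag thermal arrests where the cooling rate stalls. The synthetic cooling curve, arrest temperature and threshold below are invented for illustration:

```python
# Sketch of computer-aided cooling-curve analysis: smooth T(t), take the
# first derivative, and flag thermal arrests where |dT/dt| drops toward
# zero (latent-heat release during solidification). Data are synthetic.

def moving_average(values, window=5):
    half = window // 2
    return [sum(values[max(0, i - half):i + half + 1]) /
            len(values[max(0, i - half):i + half + 1])
            for i in range(len(values))]

def first_derivative(temps, dt):
    # Central differences on the interior points
    return [(temps[i + 1] - temps[i - 1]) / (2 * dt)
            for i in range(1, len(temps) - 1)]

def arrest_temperatures(temps, dt, threshold=0.2):
    """Temperatures where the cooling rate stalls (|dT/dt| < threshold deg C/s)."""
    smoothed = moving_average(temps)
    deriv = first_derivative(smoothed, dt)
    return [smoothed[i + 1] for i, d in enumerate(deriv) if abs(d) < threshold]

# Synthetic curve: cool at 2 deg C/s from 1250 deg C, with an arrest
# plateau (slow cooling) just below 1152 deg C
dt = 1.0
temps = []
T = 1250.0
for _ in range(120):
    temps.append(T)
    T -= 0.05 if 1148.0 < T <= 1152.0 else 2.0

arrests = arrest_temperatures(temps, dt)
print(f"thermal arrest near {sum(arrests) / len(arrests):.0f} deg C")
```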
Advanced techniques use differential curves to locate endothermic inflection points, such as gas holes and shrinkage, or exothermic phases, such as carbides, beta crystals, intercrystalline copper, magnesium silicide, iron phosphides and other phases, as they solidify. Detection limits appear to be around 0.01% to 0.03% by volume.
In addition, integration of the area between the zero curve and the first derivative is a measure of the specific heat of that part of the solidification which can lead to rough estimates of the percent volume of a phase. (Something has to be either known or assumed about the specific heat of the phase versus the overall specific heat.) In spite of this limitation, this method is better than estimates from two dimensional micro analysis, and a lot faster than chemical dissolution.
Foods
Most foods are subjected to variations in their temperature during production, transport, storage, preparation and consumption, e.g., pasteurization, sterilization, evaporation, cooking, freezing, chilling, etc. Temperature changes cause alterations in the physical and chemical properties of food components which influence the overall properties of the final product, e.g., taste, appearance, texture and stability. Chemical reactions such as hydrolysis, oxidation or reduction may be promoted, or physical changes, such as evaporation, melting, crystallization, aggregation or gelation may occur. A better understanding of the influence of temperature on the properties of foods enables food manufacturers to optimize processing conditions and improve product quality. It is therefore important for food scientists to have analytical techniques to monitor the changes that occur in foods when their temperature varies. These techniques are often grouped under the general heading of thermal analysis. In principle, most analytical techniques can be used, or easily adapted, to monitor the temperature-dependent properties of foods, e.g., spectroscopic (nuclear magnetic resonance, UV-visible, infrared spectroscopy, fluorescence), scattering (light, X-rays, neutrons), physical (mass, density, rheology, heat capacity) etc. Nevertheless, at present the term thermal analysis is usually reserved for a narrow range of techniques that measure changes in the physical properties of foods with temperature (TG/DTG, differential thermal analysis, differential scanning calorimetry and transition temperature).
Printed circuit boards
Power dissipation is an important issue in present-day PCB design. Power dissipation results in a temperature difference and poses a thermal problem for a chip. In addition to the issue of reliability, excess heat will also negatively affect electrical performance and safety. The working temperature of an IC should therefore be kept below the maximum allowable limit of the worst case. In general, the junction and ambient temperatures assumed for this worst case are 125 °C and 55 °C, respectively.
The ever-shrinking chip size causes the heat to concentrate within a small area and leads to high power density. Furthermore, denser transistors gathering in a monolithic chip and higher operating frequency cause a worsening of the power dissipation. Removing the heat effectively becomes the critical issue to be resolved.
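A first-order worst-case check of the kind described above uses the package's junction-to-ambient thermal resistance: Tj = Ta + P·θJA. A minimal sketch (the θJA and power figures are hypothetical):

```python
def junction_temperature(ambient_c, power_w, theta_ja_c_per_w):
    """Steady-state junction temperature: Tj = Ta + P * theta_JA."""
    return ambient_c + power_w * theta_ja_c_per_w

# Hypothetical package: theta_JA = 35 deg C/W, dissipating 1.5 W at a
# 55 deg C worst-case ambient
tj = junction_temperature(55.0, 1.5, 35.0)
print(f"Tj = {tj:.1f} deg C, within 125 deg C limit: {tj <= 125.0}")
# prints: Tj = 107.5 deg C, within 125 deg C limit: True
```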
References
External links
Thermal Analysis, Cambridge University
International Confederation for Thermal Analysis and Calorimetry
Biological processes
Calorimetry
Chemical processes
Heat transfer
Materials science | Thermal analysis | Physics,Chemistry,Materials_science,Engineering,Biology | 2,050 |
46,580,185 | https://en.wikipedia.org/wiki/Penicillium%20malacaense | Penicillium malacaense is an anamorph species of the genus of Penicillium.
References
Further reading
malacaense
Fungi described in 1980
Fungus species | Penicillium malacaense | Biology | 37 |
155,443 | https://en.wikipedia.org/wiki/Corrosion | Corrosion is a natural process that converts a refined metal into a more chemically stable oxide. It is the gradual deterioration of materials (usually a metal) by chemical or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and preventing corrosion.
In the most common use of the word, this means electrochemical oxidation of metal in reaction with an oxidant such as oxygen, hydrogen, or hydroxide. Rusting, the formation of red-orange iron oxides, is a well-known example of electrochemical corrosion. This type of corrosion typically produces oxides or salts of the original metal and results in a distinctive coloration. Corrosion can also occur in materials other than metals, such as ceramics or polymers, although in this context, the term "degradation" is more common. Corrosion degrades the useful properties of materials and structures including mechanical strength, appearance, and permeability to liquids and gases. Corrosive is distinguished from caustic: a corrosive substance chemically attacks a wide range of materials, while a caustic substance is typically a strong base that chemically burns organic tissue.
Many structural alloys corrode merely from exposure to moisture in air, but the process can be strongly affected by exposure to certain substances. Corrosion can be concentrated locally to form a pit or crack, or it can extend across a wide area, more or less uniformly corroding the surface. Because corrosion is a diffusion-controlled process, it occurs on exposed surfaces. As a result, methods to reduce the activity of the exposed surface, such as passivation and chromate conversion, can increase a material's corrosion resistance. However, some corrosion mechanisms are less visible and less predictable.
The chemistry of corrosion is complex; it can be considered an electrochemical phenomenon. During corrosion at a particular spot on the surface of an object made of iron, oxidation takes place and that spot behaves as an anode. The electrons released at this anodic spot move through the metal to another spot on the object, where they reduce oxygen in the presence of H+ ions. These hydrogen ions are believed to be supplied by carbonic acid (H2CO3), formed when carbon dioxide from the air dissolves into water under moist atmospheric conditions; hydrogen ions in water may also be supplied by the dissolution of other acidic oxides from the atmosphere. This second spot behaves as a cathode.
Galvanic corrosion
Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to electrolyte with different concentrations. In a galvanic couple, the more active metal (the anode) corrodes at an accelerated rate and the more noble metal (the cathode) corrodes at a slower rate. When immersed separately, each metal corrodes at its own rate. What type of metal(s) to use is readily determined by following the galvanic series. For example, zinc is often used as a sacrificial anode for steel structures. Galvanic corrosion is of major interest to the marine industry and also anywhere water (containing salts) contacts pipes or metal structures.
Factors such as relative size of anode, types of metal, and operating conditions (temperature, humidity, salinity, etc.) affect galvanic corrosion. The surface area ratio of the anode and cathode directly affects the corrosion rates of the materials. Galvanic corrosion is often prevented by the use of sacrificial anodes.
Galvanic series
In any given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more noble or more active than others, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electrons, so that the "tug-of-war" at each surface is analogous to competition for free electrons between the two materials. Using the electrolyte as a host for the flow of ions in the same direction, the noble metal will take electrons from the active one. The resulting mass flow or electric current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a galvanic series and is useful in predicting and understanding corrosion.
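The ranking idea can be sketched with standard electrode potentials, which give a rough laboratory proxy for activity; a true galvanic series is measured empirically in the environment of interest (e.g. seawater). The values below are textbook standard potentials versus the standard hydrogen electrode:

```python
# Standard electrode potentials (V vs. SHE), for illustration only; a
# galvanic series proper is measured in the actual electrolyte.
STANDARD_POTENTIALS = {
    "magnesium": -2.37,
    "zinc": -0.76,
    "iron": -0.44,
    "copper": +0.34,
    "silver": +0.80,
    "gold": +1.50,
}

def anode_of(metal_a, metal_b):
    """In a galvanic couple, the more active (more negative) metal is the
    anode and corrodes preferentially."""
    return min(metal_a, metal_b, key=STANDARD_POTENTIALS.get)

# Zinc sacrificially protects steel: zinc is the anode of the couple
print(anode_of("zinc", "iron"))  # zinc
print(sorted(STANDARD_POTENTIALS, key=STANDARD_POTENTIALS.get))  # active -> noble
```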
Corrosion removal
Often, it is possible to chemically remove the products of corrosion. For example, phosphoric acid in the form of naval jelly is often applied to ferrous tools or surfaces to remove rust. Corrosion removal should not be confused with electropolishing, which removes some layers of the underlying metal to make a smooth surface. For example, phosphoric acid may also be used to electropolish copper but it does this by removing copper, not the products of copper corrosion.
Resistance to corrosion
Some metals are more intrinsically resistant to corrosion than others (for some examples, see galvanic series). There are various ways of protecting metals from corrosion (oxidation) including painting, hot-dip galvanization, cathodic protection, and combinations of these.
Intrinsic chemistry
The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any corrosion products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form on Earth and have long been valued. More common "base" metals can only be protected by more temporary means.
Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium, and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation, but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions.
Passivation
Passivation refers to the spontaneous formation of an ultrathin film of corrosion products, known as a passive film, on the metal's surface that acts as a barrier to further oxidation. The chemical composition and microstructure of a passive film are different from those of the underlying metal. Typical passive film thickness on aluminium, stainless steels, and alloys is within 10 nanometers. The passive film is different from oxide layers that are formed upon heating and are in the micrometer thickness range – the passive film recovers if removed or damaged, whereas the oxide layer does not. Passivation in natural environments such as air, water and soil at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon.
Passivation is primarily determined by metallurgical and environmental factors. The effect of pH is summarized using Pourbaix diagrams, but many other factors are influential. Some conditions that inhibit passivation include high pH for aluminium and zinc, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, unusual conditions may result in passivation of materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms.
It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal that leads to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism.
Corrosion in passivated materials
Passivation is extremely useful in mitigating corrosion damage; however, even a high-quality alloy will corrode if its ability to form a passivating film is hindered. Proper selection of the right grade of material for the specific environment is important for the long-lasting performance of this group of materials. If breakdown occurs in the passive film due to chemical or mechanical factors, the resulting major modes of corrosion may include pitting corrosion, crevice corrosion, and stress corrosion cracking.
Pitting corrosion
Certain conditions, such as low concentrations of oxygen or high concentrations of species such as chloride, which compete as anions, can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film in a few critical points. Corrosion at these points will be greatly amplified, and can cause corrosion pits of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen and locally the pH decreases to very low values and the corrosion rate increases due to an autocatalytic process. In extreme cases, the sharp tips of extremely long and narrow corrosion pits can cause stress concentration to the point that otherwise tough alloys can shatter; a thin film pierced by an invisibly small hole can hide a thumb-sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment.
Pitting results when a small hole, or cavity, forms in the metal, usually as a result of de-passivation of a small area. This area becomes anodic, while part of the remaining metal becomes cathodic, producing a localized galvanic reaction. The deterioration of this small area penetrates the metal and can lead to failure. This form of corrosion is often difficult to detect because it is usually relatively small and may be covered and hidden by corrosion-produced compounds.
Weld decay and knifeline attack
Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a major alloying component (chromium, at least 11.5%). Because of the elevated temperatures of welding and heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to "weld decay" (corrosion of the grain boundaries in the heat affected zones) in highly corrosive environments. This process can seriously reduce the mechanical strength of welded joints over time.
A stainless steel is said to be "sensitized" if chromium carbides are formed in the microstructure. A typical microstructure of a normalized type 304 stainless steel shows no signs of sensitization, while a heavily sensitized steel shows the presence of grain boundary precipitates. The dark lines in the sensitized microstructure are networks of chromium carbides formed along the grain boundaries.
Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable.
Crevice corrosion
Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles.
Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion.
Hydrogen grooving
In the chemical industry, hydrogen grooving is the corrosion of piping along grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles. For example, when sulfuric acid (H2SO4) flows through steel pipes, the iron in the steel reacts with the acid to form a passivating coating of iron sulfate (FeSO4) and hydrogen gas (H2): Fe + H2SO4 → FeSO4 + H2. The iron sulfate coating protects the steel from further reaction; however, if hydrogen bubbles contact this coating, it is removed. Thus, a groove can be formed by a travelling bubble, exposing more steel to the acid and causing a vicious cycle. The grooving is exacerbated by the tendency of subsequent bubbles to follow the same path.
High-temperature corrosion
High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulfur ("sulfidation"), or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation, and even in car engines must resist sustained periods at high temperature, during which they may be exposed to an atmosphere containing the potentially highly-corrosive products of combustion.
Some products of high-temperature corrosion can potentially be turned to the advantage of the engineer. The formation of oxides on stainless steels, for example, can provide a protective layer preventing further atmospheric attack, allowing for a material to be used for sustained periods at both room and high temperatures in hostile conditions. Such high-temperature corrosion products, in the form of compacted oxide layer glazes, prevent or reduce wear during high-temperature sliding contact of metallic (or metallic and ceramic) surfaces. Thermal oxidation is also commonly used to produce controlled oxide nanostructures, including nanowires and thin films.
Microbial corrosion
Microbial corrosion, commonly known as microbiologically influenced corrosion (MIC), is corrosion caused or promoted by microorganisms, usually chemoautotrophs. It can apply to both metallic and non-metallic materials, in the presence or absence of oxygen. Sulfate-reducing bacteria are active in the absence of oxygen (anaerobic); they produce hydrogen sulfide, causing sulfide stress cracking. In the presence of oxygen (aerobic), some bacteria may directly oxidize iron to iron oxides and hydroxides; other bacteria oxidize sulfur and produce sulfuric acid, causing biogenic sulfide corrosion. Concentration cells can form in the deposits of corrosion products, leading to localized corrosion.
Accelerated low-water corrosion (ALWC) is a particularly aggressive form of MIC that affects steel piles in seawater near the low water tide mark. It is characterized by an orange sludge, which smells of hydrogen sulfide when treated with acid. Corrosion rates can be very high and design corrosion allowances can soon be exceeded leading to premature failure of the steel pile. Piles that have been coated and have cathodic protection installed at the time of construction are not susceptible to ALWC. For unprotected piles, sacrificial anodes can be installed locally to the affected areas to inhibit the corrosion or a complete retrofitted sacrificial anode system can be installed. Affected areas can also be treated using cathodic protection, using either sacrificial anodes or applying current to an inert anode to produce a calcareous deposit, which will help shield the metal from further attack.
Metal dusting
Metal dusting is a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities, such as synthesis gas and other high-CO environments. The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapor phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes, no M3C species is observed indicating a direct transfer of metal atoms into the graphite layer.
Protection from corrosion
Various treatments are used to slow corrosion damage to metallic objects which are exposed to the weather, salt water, acids, or other hostile environments. Some unprotected metallic alloys are extremely vulnerable to corrosion, such as those used in neodymium magnets, which can spall or crumble into powder even in dry, temperature-stable indoor environments unless properly treated.
Surface treatments
When surface treatments are used to reduce corrosion, great care must be taken to ensure complete coverage, without gaps, cracks, or pinhole defects. Small defects can act as an "Achilles' heel", allowing corrosion to penetrate the interior and causing extensive damage even while the outer protective layer remains apparently intact for a period of time.
Applied coatings
Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Aside from cosmetic and manufacturing issues, there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, but if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with an active metal such as zinc or cadmium. If the zinc coating is not thick enough, the surface soon becomes unsightly, with rusting obvious. The design life is directly related to the metal coating thickness.
Painting either by roller or brush is more desirable for tight spaces; spray would be better for larger coating areas such as steel decks and waterfront applications. Flexible polyurethane coatings, like Durabak-M26 for example, can provide an anti-corrosive seal with a highly durable slip resistant membrane. Painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary.
Reactive coatings
If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces, to suppress electrochemical reactions. Such methods make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are known for their mineral deposits), chromates, phosphates, polyaniline, other conducting polymers, and a wide range of specially designed chemicals that resemble surfactants (i.e., long-chain organic molecules with ionic end groups).
Anodization
Aluminium alloys often undergo a surface treatment by anodizing. Electrochemical conditions in the bath are carefully adjusted so that uniform pores, several nanometers wide, appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to seal, forming a harder-than-usual surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area.
Anodizing is very resilient to weathering and corrosion, so it is commonly used for building facades and other areas where the surface will come into regular contact with the elements. While being resilient, it must be cleaned frequently. If left without cleaning, panel edge staining will naturally occur. Anodizing takes its name from the fact that the part being treated forms the anode electrode of the electrolytic cell.
Biofilm coatings
A new form of protection has been developed by applying certain species of bacterial films to the surface of metals in highly corrosive environments. This process increases the corrosion resistance substantially. Alternatively, antimicrobial-producing biofilms can be used to inhibit mild steel corrosion from sulfate-reducing bacteria.
Controlled permeability formwork
Controlled permeability formwork (CPF) is a method of preventing the corrosion of reinforcement by naturally enhancing the durability of the cover during concrete placement. CPF has been used in environments to combat the effects of carbonation, chlorides, frost, and abrasion.
Cathodic protection
Cathodic protection (CP) is a technique to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. Cathodic protection systems are most commonly used to protect steel pipelines and tanks; steel pier piles, ships, and offshore oil platforms.
Sacrificial anode protection
For effective CP, the potential of the steel surface is polarized (pushed) more negative until the metal surface has a uniform potential. With a uniform potential, the driving force for the corrosion reaction is halted. For galvanic CP systems, the anode material corrodes under the influence of the steel, and eventually it must be replaced. The polarization is caused by the current flow from the anode to the cathode, driven by the difference in electrode potential between the anode and the cathode. The most common sacrificial anode materials are aluminum, zinc, magnesium and related alloys. Aluminum has the highest capacity, and magnesium has the highest driving voltage and is thus used where resistance is higher. Zinc is general purpose and the basis for galvanizing.
A number of problems are associated with sacrificial anodes. Among these, from an environmental perspective, is the release of zinc, magnesium, aluminum and heavy metals such as cadmium into the environment including seawater. From a working perspective, sacrificial anodes systems are considered to be less precise than modern cathodic protection systems such as Impressed Current Cathodic Protection (ICCP) systems. Their ability to provide requisite protection has to be checked regularly by means of underwater inspection by divers. Furthermore, as they have a finite lifespan, sacrificial anodes need to be replaced regularly over time.
Impressed current cathodic protection
For larger structures, galvanic anodes cannot economically deliver enough current to provide complete protection. Impressed current cathodic protection (ICCP) systems use anodes connected to a DC power source (such as a cathodic protection rectifier). Anodes for ICCP systems are tubular and solid rod shapes of various specialized materials. These include high silicon cast iron, graphite, mixed metal oxide or platinum coated titanium or niobium coated rod and wires.
Anodic protection
Anodic protection impresses anodic current on the structure to be protected (opposite to cathodic protection). It is appropriate for metals that exhibit passivity (e.g., stainless steel) and a suitably small passive current over a wide range of potentials. It is used in aggressive environments, such as solutions of sulfuric acid. Anodic protection is an electrochemical method of corrosion protection that works by keeping the metal in a passive state.
Rate of corrosion
The formation of an oxide layer is described by the Deal–Grove model, which is used to predict and control oxide layer formation in diverse situations. A simple test for measuring corrosion is the weight loss method. The method involves exposing a clean weighed piece of the metal or alloy to the corrosive environment for a specified time, followed by cleaning to remove corrosion products and weighing the piece to determine the loss of weight. The rate of corrosion (CR) is calculated as

CR = k·W / (ρ·A·t),

where k is a constant, W is the weight loss of the metal in time t, A is the surface area of the metal exposed, and ρ is the density of the metal (in g/cm3).
Other common expressions for the corrosion rate are penetration depth and change of mechanical properties.
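The weight-loss calculation can be sketched in a few lines of code. This is a minimal example assuming the common unit convention in which the weight loss W is in mg, the area A in cm2, the exposure time t in hours, and the density ρ in g/cm3; with those units the constant k = 87.6 yields a rate in mm/year (k ≈ 3.45×10^3 would instead give mils/year).

```python
def corrosion_rate_mm_per_year(weight_loss_mg, area_cm2, hours, density_g_cm3):
    """Weight-loss corrosion rate: CR = k*W/(rho*A*t).

    k = 87.6 converts (mg, cm^2, h, g/cm^3) to mm/year;
    k ~ 3.45e3 with the same inputs would give mils/year.
    """
    K = 87.6
    return K * weight_loss_mg / (density_g_cm3 * area_cm2 * hours)

# Example: a steel coupon (rho = 7.87 g/cm^3) of 10 cm^2 exposed area
# loses 25 mg over a 30-day (720 h) exposure.
rate = corrosion_rate_mm_per_year(25.0, 10.0, 720.0, 7.87)
print(f"{rate:.4f} mm/year")  # ~0.0387 mm/year
```

The coupon geometry and exposure figures are hypothetical, chosen only to give a realistic order of magnitude for mild steel in a moderately corrosive environment.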
Economic impact
In 2002, the US Federal Highway Administration released a study titled "Corrosion Costs and Preventive Strategies in the United States" on the direct costs associated with metallic corrosion in the US industry. In 1998, the total annual direct cost of corrosion in the US was roughly $276 billion (or 3.2% of the US gross domestic product at the time). Broken down into the five industry categories examined, the direct losses were $22.6 billion in infrastructure, $17.6 billion in production and manufacturing, $29.7 billion in transportation, $20.1 billion in government, and $47.9 billion in utilities; these measured sectors total $137.9 billion, which the study extrapolated to the national figure.
Rust is one of the most common causes of bridge accidents. As rust displaces a much higher volume than the originating mass of iron, its build-up can also cause failure by forcing apart adjacent components. It was the cause of the collapse of the Mianus River Bridge in 1983, when support bearings rusted internally and pushed one corner of the road slab off its support. Three drivers on the roadway at the time died as the slab fell into the river below. The following NTSB investigation showed that a drain in the road had been blocked for road re-surfacing, and had not been unblocked; as a result, runoff water penetrated the support hangers. Rust was also an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed within a minute, killing 46 drivers and passengers who were on the bridge at the time.
Similarly, corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges. Measuring instruments based on the half-cell potential can detect the potential corrosion spots before total failure of the concrete structure is reached.
Until 20–30 years ago, galvanized steel pipe was used extensively in the potable water systems for single and multi-family residences as well as commercial and public construction. Today, these systems have long since consumed the protective zinc and are corroding internally, resulting in poor water quality and pipe failures. The economic impact on homeowners, condo dwellers, and the public infrastructure is estimated at $22 billion as the insurance industry braces for a wave of claims due to pipe failures.
Corrosion in nonmetals
Most ceramic materials are almost entirely immune to corrosion. The strong chemical bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or a chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda–lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Because glass is brittle, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature.
Corrosion of polymers
Polymer degradation involves several complex and often poorly understood physicochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word. Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making them generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against.
A more common and related problem is "swelling", where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes.
The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Ozone cracking is a well-known problem affecting natural rubber, for example. Plastic additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (e.g., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives so that they break down more easily as ultrafine particles of litter.
Corrosion of glass
Glass is characterized by a high degree of corrosion resistance. Because of its high water resistance, it is often used as primary packaging material in the pharmaceutical industry since most medicines are preserved in a watery solution. Besides its water resistance, glass is also robust when exposed to certain chemically-aggressive liquids or gases.
Glass disease is the corrosion of silicate glasses in aqueous solutions. It is governed by two mechanisms: diffusion-controlled leaching (ion exchange) and hydrolytic dissolution of the glass network. Both mechanisms strongly depend on the pH of the contacting solution: the rate of ion exchange decreases with pH as 10^(−0.5·pH), whereas the rate of hydrolytic dissolution increases with pH as 10^(0.5·pH).
Mathematically, corrosion rates of glasses are characterized by normalized corrosion rates of elements NRi (g/(cm2·d)), which are determined as the ratio of the total amount of species released into the water, Mi (g), to the water-contacting surface area S (cm2), the time of contact t (days), and the weight fraction content of the element in the glass, fi:

NRi = Mi / (S · fi · t).
The overall corrosion rate is a sum of contributions from both mechanisms (leaching + dissolution): NRi = NRxi + NRh. Diffusion-controlled leaching (ion exchange) is characteristic of the initial phase of corrosion and involves replacement of alkali ions in the glass by a hydronium (H3O+) ion from the solution. It causes an ion-selective depletion of near-surface layers of glasses and gives an inverse-square-root dependence of corrosion rate with exposure time. The diffusion-controlled normalized leaching rate of cations from glasses, NRxi (g/(cm2·d)), is given by:

NRxi = 2·ρ·sqrt(Di / (π·t)),

where t is time, Di is the i-th cation effective diffusion coefficient (cm2/d), which depends on the pH of the contacting water as Di ∝ 10^(−pH), and ρ is the density of the glass (g/cm3).
Glass network dissolution is characteristic of the later phases of corrosion and causes a congruent release of ions into the water solution at a time-independent rate in dilute solutions, NRh (g/(cm2·d)):

NRh = ρ·rh,

where rh is the stationary hydrolysis (dissolution) rate of the glass (cm/d). In closed systems, the consumption of protons from the aqueous phase increases the pH and causes a fast transition to hydrolysis. However, a further saturation of the solution with silica impedes hydrolysis and causes the glass to return to the ion-exchange, i.e. diffusion-controlled, regime of corrosion.
In typical natural conditions, normalized corrosion rates of silicate glasses are very low, of the order of 10^−7 to 10^−5 g/(cm2·d). The very high durability of silicate glasses in water makes them suitable for hazardous and nuclear waste immobilisation.
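The interplay of the two mechanisms can be illustrated numerically. In this sketch the density ρ, effective diffusion coefficient D, and stationary dissolution rate rh are hypothetical values, chosen only so that the resulting rates fall inside the 10^−7 to 10^−5 g/(cm2·d) range quoted above; the rate laws are the leaching and dissolution expressions described in this section. Equating the t^(−1/2) leaching term with the constant dissolution term gives the crossover time at which hydrolysis starts to dominate.

```python
import math

def leach_rate(rho, D, t):
    """Diffusion-controlled ion-exchange rate: NRx = 2*rho*sqrt(D/(pi*t))."""
    return 2.0 * rho * math.sqrt(D / (math.pi * t))

def dissolution_rate(rho, rh):
    """Time-independent network hydrolysis rate: NRh = rho*rh."""
    return rho * rh

# Hypothetical parameters for a silicate glass in near-neutral water
rho = 2.5    # g/cm^3, glass density
D = 1e-13    # cm^2/d, assumed effective diffusion coefficient
rh = 4e-8    # cm/d, assumed stationary dissolution rate

# Crossover: 2*rho*sqrt(D/(pi*t)) == rho*rh  =>  t* = 4*D/(pi*rh^2)
t_star = 4.0 * D / (math.pi * rh ** 2)
print(f"dissolution overtakes leaching after ~{t_star:.0f} days")

for t in (1.0, t_star, 365.0):
    total = leach_rate(rho, D, t) + dissolution_rate(rho, rh)
    print(f"t = {t:7.1f} d: NR = {total:.2e} g/(cm^2*d)")
```

With these assumed inputs the leaching term starts near 9×10^−7 g/(cm2·d) and decays as t^(−1/2), while dissolution contributes a constant 1×10^−7 g/(cm2·d), so the regime change occurs after roughly 80 days of contact.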
Glass corrosion tests
There exist numerous standardized procedures for measuring the corrosion (also called chemical durability) of glasses in neutral, basic, and acidic environments, under simulated environmental conditions, in simulated body fluid, at high temperature and pressure, and under other conditions.
The standard procedure ISO 719 describes a test of the extraction of water-soluble basic compounds under neutral conditions: 2 g of glass, particle size 300–500 μm, is kept for 60 min in 50 mL de-ionized water of grade 2 at 98 °C; 25 mL of the obtained solution is titrated against 0.01 mol/L HCl solution. The volume of HCl required for neutralization is classified according to the table below.
The standardized test ISO 719 is not suitable for glasses with poor or non-extractable alkaline components that are nevertheless attacked by water, e.g., quartz glass, B2O3 glass or P2O5 glass.
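The ISO 719 titre is commonly converted to an equivalent mass of Na2O per gram of glass, which is how the hydrolytic classes are usually tabulated. The sketch below is a back-of-the-envelope conversion based only on the stoichiometry Na2O + 2 HCl → 2 NaCl + H2O and the sample quantities given above (2 g of glass, 50 mL extract, 25 mL titrated aliquot); the exact class boundary values are defined in the standard and are not reproduced here.

```python
M_NA2O = 61.98           # g/mol, molar mass of Na2O
HCL_CONC = 0.01          # mol/L, titrant concentration per ISO 719
ALIQUOT_FACTOR = 50 / 25 # titrated aliquot is half of the 50 mL extract
SAMPLE_MASS_G = 2.0      # mass of glass grains used in the test

def na2o_equivalent_ug_per_g(titre_ml):
    """Convert an ISO 719 titre (mL of 0.01 M HCl) to ug Na2O per g glass.

    Na2O + 2 HCl -> 2 NaCl + H2O, so moles of Na2O = moles of HCl / 2.
    """
    mol_hcl = HCL_CONC * titre_ml / 1000.0
    mol_na2o = mol_hcl / 2.0
    ug_na2o = mol_na2o * M_NA2O * 1e6 * ALIQUOT_FACTOR
    return ug_na2o / SAMPLE_MASS_G

print(f"{na2o_equivalent_ug_per_g(0.10):.1f} ug Na2O per g glass")  # 31.0
```

A titre of 0.10 mL thus corresponds to about 31 μg Na2O per gram of glass, the order of magnitude expected for the most resistant hydrolytic class.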
Usual glasses are differentiated into the following classes:
Hydrolytic class 1 (Type I): This class, also called neutral glass, includes borosilicate glasses (e.g., Duran, Pyrex, Fiolax). Glass of this class contains essential quantities of boron oxides, aluminium oxides and alkaline earth oxides. Through its composition, neutral glass has a high resistance against temperature shocks and the highest hydrolytic resistance. Because of its low alkali content, it shows high chemical resistance against acidic and neutral solutions; its resistance against alkaline solutions is lower.
Hydrolytic class 2 (Type II): This class usually contains sodium silicate glasses with a high hydrolytic resistance through surface finishing. Sodium silicate glass is a silicate glass, which contains alkali- and alkaline earth oxide and primarily sodium oxide and calcium oxide.
Hydrolytic class 3 (Type III): Glass of the 3rd hydrolytic class usually contains sodium silicate glasses and has a mean hydrolytic resistance, two times poorer than that of type 1 glasses. The acid class DIN 12116 and the alkali class DIN 52322 (ISO 695) are to be distinguished from the hydrolytic class DIN 12111 (ISO 719).
See also
References
Further reading
Glass chemistry
Metallurgy | Corrosion | Chemistry,Materials_science,Engineering | 7,008 |
24,509,610 | https://en.wikipedia.org/wiki/PNU-120%2C596 | PNU-120596 is a drug that acts as a potent and selective positive allosteric modulator for the α7 subtype of neural nicotinic acetylcholine receptors. It is used in scientific research into cholinergic regulation of dopamine and glutamate release in the brain.
References
Nicotinic agonists
Stimulants
Phenol ethers
Chloroarenes
Ureas
Isoxazoles | PNU-120,596 | Chemistry | 95 |
47,153,689 | https://en.wikipedia.org/wiki/Mining%20and%20Chemical%20Combine | The Mining and Chemical Combine is a nuclear facility in Russia. It was established in 1950 to produce plutonium for weapons. It is in the closed city Zheleznogorsk, Krasnoyarsk Krai. The company is currently part of the Rosatom group.
The site had three underground nuclear reactors using cooling water from the Yenisei river: AD (1958), ADE-1 (1961) and ADE-2 (1965). ADE-2 was shut down in 2010 in accord with the 1997 Plutonium Management and Disposition Agreement (Plutonium Production Reactor Agreement) with the United States. It also provided heat and electricity for the area, which was its main function after 1993.
The complex has an interim storage facility. There is also a 60 t/year commercial mixed oxide (MOX) fuel fabrication facility (MFFF). It employs 7000 people.
The MOX production line completed a 10 kg batch in September 2014.
The city has a Mining and Chemical Combine museum.
References
External links
Rosatom
Nuclear weapons programme of Russia
Nuclear technology companies of Russia
Nuclear technology in the Soviet Union
Nuclear weapons program of the Soviet Union
Federal State Unitary Enterprises of Russia
Companies based in Krasnoyarsk Krai
Mining companies of the Soviet Union
1950 establishments in the Soviet Union | Mining and Chemical Combine | Physics | 264 |
13,103,839 | https://en.wikipedia.org/wiki/Magnetic%20water%20treatment | Magnetic water treatment (also known as anti-scale magnetic treatment or AMT) is a disproven method of reducing the effects of hard water by passing it through a magnetic field as a non-chemical alternative to water softening. A 1996 study by Lawrence Livermore National Laboratory found no significant effect of magnetic water treatment on the formation of scale. As magnets affect water to a small degree, and water containing ions is more conductive than purer water, magnetic water treatment is an example of a valid scientific hypothesis that failed experimental testing and is thus disproven. Any products claiming to utilize magnetic water treatment are absolutely fraudulent.
Vendors of magnetic water treatment devices frequently use photos and testimonials to support their claims, but omit quantitative detail and well-controlled studies. Advertisements and promotions generally omit system variables, such as corrosion or system mass balance analyticals, as well as measurements of post-treatment water such as concentration of hardness ions or the distribution, structure, and morphology of suspended particles.
See also
Fouling
Laundry ball
Magnet therapy
Pulsed-power water treatment
References
Water treatment
Fouling
Pseudoscience
Magnetic devices | Magnetic water treatment | Chemistry,Materials_science,Engineering,Environmental_science | 230 |
42,729,826 | https://en.wikipedia.org/wiki/47%20Capricorni | 47 Capricorni is a variable star located around 1,170 light years from the Sun in the southern constellation Capricornus, near the northern border with Aquarius. It has the variable star designation of AG Capricorni and a Bayer designation of c2 Capricorni; 47 Capricorni is the Flamsteed designation. This object is visible to the naked eye as a dim, red-hued point of light with an apparent visual magnitude that varies between 5.90 and 6.14. The star is receding from the Earth with a heliocentric radial velocity of +20 km/s.
In 1963, Alan William James Cousins announced that 47 Capricorni is a variable star. It was given its variable star designation in 1973.
This is an aging red giant star with a stellar classification of M2III. It is a semiregular variable star of subtype SRb with a period of 30.592 days and a maximum brightness of 5.9 magnitude. With the supply of hydrogen at its core exhausted, the star has expanded to around 102 times the Sun's radius. It is radiating 1,940 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 3,784 K.
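The quoted luminosity can be cross-checked against the radius and effective temperature via the Stefan–Boltzmann law, L/Lsun = (R/Rsun)^2 · (T/Tsun)^4. A quick consistency check, taking the IAU nominal solar effective temperature of 5,772 K:

```python
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def luminosity_solar(radius_solar, t_eff_k):
    """L/Lsun from the Stefan-Boltzmann law: (R/Rsun)^2 * (T/Tsun)^4."""
    return radius_solar ** 2 * (t_eff_k / T_SUN) ** 4

# 47 Capricorni: R ~ 102 Rsun, T_eff ~ 3,784 K
print(f"L ~ {luminosity_solar(102.0, 3784.0):,.0f} Lsun")  # ~1,922 Lsun
```

The result of roughly 1,920 Lsun agrees with the quoted 1,940 Lsun to within the precision of the input values.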
References
M-type giants
Semiregular variable stars
Capricornus
Capricorni, c2
BD-09 5833
Capricorni, 47
207005
107487
8318
Capricorni, AG | 47 Capricorni | Astronomy | 311 |
14,148,161 | https://en.wikipedia.org/wiki/D-octopine%20dehydrogenase | Octopine dehydrogenase (N2-(D-1-carboxyethyl)-L-arginine:NAD+ oxidoreductase, OcDH, ODH) is a dehydrogenase enzyme in the opine dehydrogenase family that helps maintain redox balance under anaerobic conditions. It is found largely in aquatic invertebrates, especially mollusks, sipunculids, and coelenterates, and plays a role analogous to lactate dehydrogenase (found largely in vertebrates)
. In the presence of NADH, OcDH catalyzes the reductive condensation of an α-keto acid with an amino acid to form N-carboxyalkyl-amino acids (opines). The purpose of this reaction is to reoxidize glycolytically formed NADH to NAD+, replenishing this important reductant used in glycolysis and allowing for the continued production of ATP in the absence of oxygen.
L-arginine + pyruvate + NADH + H+ ⇌ D-octopine + NAD+ + H2O
Structure
OcDH is a monomer with a molecular weight of 38 kDa, made up of two functionally distinct domains. The first, Domain I, is composed of 199 amino acids and contains a Rossmann fold. Domain II is composed of 204 amino acids and is connected to the Rossmann fold of Domain I via its N-terminus.
Mechanism
Isothermal titration calorimetry (ITC), nuclear magnetic resonance (NMR), crystallography, and clonal studies of OcDH and its substrates have led to the identification of the enzyme reaction mechanism. First, the Rossmann fold in Domain I of OcDH binds NADH. Binding of NADH to the Rossmann fold triggers a small conformational change, typical of NADH binding to most dehydrogenases, resulting in an interaction between the pyrophosphate moiety of NADH and residue Arg324 on Domain II. This interaction with Arg324 generates and stabilizes the L-arginine binding site and triggers partial domain closure (a reduction in the distance between the two domains). The binding of the guanidinium headgroup of L-arginine to the active site of the OcDH:NADH complex (located between the domains) induces a rotational movement of Domain II towards Domain I (via a helix-kink-helix structure in Domain II). This conformational change forms the pyruvate binding site. Binding of pyruvate to the OcDH:NADH:L-arginine complex places the alpha-keto group of pyruvate in proximity to the alpha-amino group of L-arginine. The juxtaposition of these groups on the substrates results in the formation of a Schiff base, which is subsequently reduced to D-octopine. The priming of the pyruvate site for hydride transfer via a Schiff base through the sequential binding of NADH and L-arginine to OcDH prevents the reduction of pyruvate to lactate.
Substrate specificity
Octopine dehydrogenase has at least two structural characteristics that contribute to substrate specificity. Upon binding to NADH, amino acid residues lining either side of the active site within the space between the domains of OcDH act as a “molecular ruler”, physically limiting the size of the substrates that can fit into the active site. There is also a negatively charged pocket in the cleft between the two domains that acts an “electrostatic sink” that captures the positively charged side-chain of L-arginine.
Evolution
Examination of OcDH reaction rates from different organisms in the presence of different substrates has demonstrated a trend of increasing substrate specificity in animals of increasing complexity. Evolutionary modification in substrate specificity is seen most drastically in the amino acid substrate. OcDH from some sea anemones has been shown to be able to use non-guanidino amino acids, whereas OcDH from more complex invertebrates, such as the cuttlefish, can only use L-arginine (a guanidino amino acid).
References
EC 1.5.1
Oxidoreductases | D-octopine dehydrogenase | Chemistry | 904 |
1,408,522 | https://en.wikipedia.org/wiki/Bdellovibrionaceae | The Bdellovibrionaceae are a family of Pseudomonadota. They include genera, such as Bdellovibrio and Vampirovibrio, which are unusual parasites that enter other bacteria.
See also
List of bacterial orders
List of bacteria genera
References
External links
Bdellovibrionaceae - J.P. Euzéby: List of Prokaryotic names with Standing in Nomenclature
Oligoflexia | Bdellovibrionaceae | Biology | 88 |
48,186,455 | https://en.wikipedia.org/wiki/Membrane%20models | Before the emergence of electron microscopy in the 1950s, scientists did not know the structure of a cell membrane or what its components were; biologists and other researchers used indirect evidence to identify membranes before they could actually be visualized. Specifically, it was through the models of Overton, Langmuir, Gorter and Grendel, and Davson and Danielli, that it was deduced that membranes have lipids, proteins, and a bilayer. The advent of the electron microscope, the findings of J. David Robertson, the proposal of Singer and Nicolson, and additional work of Unwin and Henderson all contributed to the development of the modern membrane model. However, understanding of past membrane models elucidates present-day perception of membrane characteristics. Following intense experimental research, the membrane models of the preceding century gave way to the fluid mosaic model that is generally accepted as a partial description.
Gorter and Grendel's membrane theory (1925)
Evert Gorter and François Grendel (Dutch physiologists) approached the discovery of our present model of the plasma membrane structure as a lipid bi-layer. They hypothesized that if the plasma membrane is a bi-layer, then the measured surface area of a mono-layer of its lipids would be double the surface area of the plasma membrane. To examine their hypothesis, they performed an experiment in which they extracted lipids from a known number of red blood cells (erythrocytes) of different mammalian sources, such as humans, goats, and sheep, and then spread the lipids as a mono-layer in a Langmuir-Blodgett trough. They measured the total surface area of the plasma membrane of the red blood cells, and using Langmuir's method, they measured the area of the monolayer of lipids. In comparing the two, they calculated a ratio of approximately 2:1 of lipid mono-layer area to plasma membrane area. This supported their hypothesis and led to the conclusion that cell membranes are composed of two opposing molecular layers. The two scientists proposed a structure for this bi-layer, with the polar hydrophilic heads facing outwards towards the aqueous environment and the hydrophobic tails facing inwards away from the aqueous surroundings on both sides of the membrane. Although they arrived at the right conclusions, some of the experimental data were incorrect, such as the miscalculation of the area and pressure of the lipid monolayer and the incompleteness of the lipid extraction. They also failed to describe membrane function and made false assumptions, such as that plasma membranes consist mostly of lipids. On the whole, however, this envisioning of the lipid bi-layer structure became the basic underlying assumption for each successive refinement in the modern understanding of membrane function.
The Davson and Danielli model with backup from Robertson (1940–1960)
Following the proposal of Gorter and Grendel, doubts inevitably arose over the veracity of having just a simple lipid bi-layer as a membrane. For instance, their model could not provide answers to questions on surface tension, permeability, and the electric resistance of membranes. Therefore, physiologist Hugh Davson and biologist James Danielli suggested that membranes indeed do have proteins. According to them, the existence of these "membrane proteins" explained that which couldn't be answered by the Gorter-Grendel model.
In 1935, Davson and Danielli proposed that biological membranes are made up of lipid bi-layers that are coated on both sides with thin sheets of protein and they simplified their model into the "pauci-molecular" theory. This theory declared that all biological membranes have a "lipoid" center surrounded by mono-layers of lipid that are covered by protein mono-layers. In short, their model was illustrated as a "sandwich" of protein-lipid-protein. The Davson-Danielli model threw new light on the understanding of cell membranes, by stressing the important role played by proteins in biological membranes.
By the 1950s, cell biologists verified the existence of plasma membranes through the use of electron microscopy (which offered higher resolution). J. David Robertson used this method to propose the unit membrane model. Basically, he suggested that all cellular membranes share a similar underlying structure, the unit membrane. Using heavy metal staining, Robertson's proposal also seemed to agree immediately with the Davson-Danielli model. According to the trilaminar pattern of the cellular membrane viewed by Robertson, he suggested that the membranes consist of a lipid bi-layer covered on both surfaces with thin sheets of proteins (mucoproteins). This suggestion was a great boost to the proposal of Davson and Danielli. However, even with Robertson's substantiation, the Davson-Danielli model had serious complications, a major one being that the proteins studied were mainly globular and therefore couldn't fit into the model's claim of thin protein sheets. These difficulties with the model stimulated new research in membrane organization and paved the way for the fluid mosaic model, which was proposed in 1972.
Singer and Nicolson's fluid mosaic model (1972)
In 1972, S. Jonathan Singer and Garth Nicolson developed new ideas for membrane structure. Their proposal was the fluid mosaic model, which is one of the dominant models now. It has two key features—a mosaic of proteins embedded in the membrane, and the membrane being a fluid bi-layer of lipids. The lipid bi-layer suggestion agrees with previous models but views proteins as globular entities embedded in the layer instead of thin sheets on the surface.
According to the model, membrane proteins are in three classes based on how they are linked to the lipid bi-layer:
Integral proteins: Immersed in the bi-layer and held in place by the affinity of hydrophobic parts of the protein for the hydrophobic tails of phospholipids on the interior of the layer.
Peripheral proteins: More hydrophilic, and thus are non-covalently linked to the polar heads of phospholipids and other hydrophilic parts of other membrane proteins on the surface of the membrane.
Lipid anchored proteins: Essentially hydrophilic, so, are also located on the surface of the membrane, and are covalently attached to lipid molecules embedded in the layer.
As for the fluid nature of the membrane, the lipid components are capable of moving parallel to the membrane surface and are in constant motion. Many proteins are also capable of that motion within the membrane. However, some are restricted in their mobility due to them being anchored to structural elements such as the cytoskeleton on either side of the membrane.
In general, this model explains most of the criticisms of the Davson–Danielli model. It eliminated the need to accommodate membrane proteins in thin surface layers, proposed that the variability in the protein/lipid ratios of different membranes simply means that different membranes vary in the amount of protein they contain, and showed how the exposure of lipid-head groups at the membrane surface is compatible with their sensitivity to phospholipase digestion. Also, the fluidity of the lipid bi-layers and the intermingling of their components within the membrane make it easy to visualize the mobility of both lipids and proteins.
Henderson and Unwin's membrane theory
Henderson and Unwin have studied the purple membrane by electron microscopy, using a method for determining the projected structures of unstained crystalline specimens. By applying the method to tilted specimens, and using the principles put forward by DeRosier and Klug for the combination of such two-dimensional views, they obtained a 3-dimensional map of the membrane at 7 Å resolution. The map reveals the location of the protein and lipid components, the arrangement of the polypeptide chains within each protein molecule, and the relationship of the protein molecules in the lattice.
High-resolution micrographs of crystalline arrays of membrane proteins, taken at a low dose of electrons to minimize radiation damage, have been exploited to determine the three-dimensional structure by a Fourier transform.
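The principle behind combining such two-dimensional views into a three-dimensional map is the projection-slice theorem. A minimal two-dimensional NumPy sketch (illustrative only, not Henderson and Unwin's actual processing pipeline) shows that the Fourier transform of a projection equals a central slice of the object's Fourier transform; collecting such slices from many tilt angles fills Fourier space, and an inverse transform then yields the reconstruction:

```python
import numpy as np

# Projection-slice theorem, the principle behind combining tilted 2-D views into
# a 3-D map: the 1-D Fourier transform of a projection of an object equals the
# central slice (here, a row) of the object's 2-D Fourier transform.
rng = np.random.default_rng(0)
obj = rng.random((64, 64))                 # stand-in for a 2-D "density map"

projection = obj.sum(axis=0)               # untilted view: project along columns
ft_of_projection = np.fft.fft(projection)  # 1-D FT of the projection
central_slice = np.fft.fft2(obj)[0, :]     # k_y = 0 row of the 2-D FT

assert np.allclose(ft_of_projection, central_slice)
```

Each tilted view contributes one such slice of Fourier space; once enough of it is covered, an inverse transform recovers the density, which is how a low-resolution map can be assembled from many low-dose projections.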
Recent studies on negatively stained rat hepatocyte gap junctions subjected to 3-dimensional Fourier reconstructions (of low-dose electron micrographs) indicate that the six protein sub-units are arranged in a cylinder slightly tilted tangentially, enclosing a channel 2 nm wide at the extracellular region. The dimensions of the channel within the membrane were narrower but could not be resolved (Unwin and Zampighi, 1980). A small radial movement of the sub-units at the cytoplasmic ends could reduce the sub-unit inclination tangential to the six-fold axis and close the channel.
Further details of the molecular organization should emerge as more methods of preparation become available, so that high-resolution 3-dimensional images comparable to those of the purple membrane are obtained. By using ingenious procedures for the analysis of periodic arrays of biological macromolecules, in which data from low-dose electron images and diffraction patterns were combined, Henderson and Unwin (1975) reconstructed a three-dimensional image of purple membranes at 0.7 nm resolution. Glucose embedding was employed to alleviate dehydration damage, and low doses (< 0.5 e/Å²) were used to reduce the irradiation damage. The electron micrographs of unstained membranes were recorded such that the only source of contrast was a weak phase contrast induced by defocusing.
In their experiment, Unwin and Henderson found that protein extends to both sides of the lipid bi-layer and is composed of seven α-helices packed about 1–1.2 nm apart, 3.5–4.0 nm in length, running perpendicular to the plane of membrane. The molecules are organized around a 3-fold axis with a 2 nm-wide space at the center that is filled with lipids. This elegant work represents the most significant step forward thus far, as it has for the first time provided us with the structure of an integral membrane protein in situ.
The availability of the amino acid sequence, together with information about the electron scattering density from the work of Henderson and Unwin, has stimulated model-building efforts (Engleman et al., 1980) to fit the bacteriorhodopsin sequence information into a series of α-helical segments.
Kervin and Overduin's proteolipid code (2024)
Building on the fluid mosaic model, a framework called the proteolipid code was proposed in order to explain membrane organization. The proteolipid code relies on the concept of a zone, which is a functional region of membrane that is assembled and stabilized with both protein and lipid dependency. Integral and lipid-anchored proteins are proposed to form three types of zones: proteins with an associated lipid fingerprint, protein islands, and lipid-only voids. Although the latter do not contain proteins as part of their internal particle set or primary structure, they do contain proteins in their quaternary association with the former two zone types which influence the composition of the void. The idea that lipids can cluster independently of proteins through lipid-lipid interactions and then recruit integral proteins is forbidden in the framework, although lipid clustering is allowed and is designated as zone secondary structure.
See also
Cell biology
Cell theory
History of cell membrane theory
Membrane protein
References
Membrane biology | Membrane models | Chemistry | 2,362 |
62,411,787 | https://en.wikipedia.org/wiki/Herman%20T.%20Briscoe | Herman Thompson Briscoe (November 6, 1893 – October 8, 1960) was an American chemist and professor of chemistry. The Herman T. Briscoe Professorship in Chemistry at Indiana University was established in 1961, and the Herman T. Briscoe Quadrangle Dormitory was dedicated in 1966.
Early life and education
Herman T. Briscoe was born on November 6, 1893, in Shoals, Indiana. Briscoe received his teaching certificate in 1912 from Indiana University in Bloomington, Indiana, and taught at his home high school in Shoals for three academic years before becoming principal of Shoals High School and later superintendent of the Shoals school district. He returned to Indiana University, earning his A.B. degree in chemistry with high distinction in 1917. Briscoe then enlisted in the U.S. Army as a private in May 1918, transferring to the Hercules Powder Company as a research chemist until his discharge in 1919. Between 1919 and 1922, Briscoe held successive teaching positions at Stark's Military Academy, as an Austin Teaching Fellow at Harvard University, and at Colby College. Returning to Indiana University for a third time, Herman T. Briscoe received his A.M. and Ph.D. degrees in chemistry in 1924 under the guidance of Professor Frank C. Mathers.
Briscoe married Orah Elberta Briscoe (née Cole) in 1928. Orah, born in Liberty Center, Indiana, in 1907, received her B.A. in Latin in 1929 and her M.A. in English in 1934. In 1929, their first child, Catherine, was born. They would have a total of four children.
Career
After receiving his Ph.D., Herman T. Briscoe was appointed assistant professor of chemistry at Indiana University, working his way to professor of chemistry in 1928. Throughout his career, Briscoe authored or coauthored 23 publications on conductivity, physical properties, and the reactions of organic and inorganic molecules, supervised the graduate studies of 25 students, and published several general chemistry textbooks.
In 1938, President of Indiana University Herman B. Wells appointed Briscoe as the secretary of the newly established self-survey committee, which sought the feedback of faculty and proposed administrative changes accordingly. In the same year, Briscoe was appointed Chairman of the Department of Chemistry of Indiana University following the recommendation of retiring Chairman Robert E. Lyons. Herman Briscoe would continue on to become Indiana University's first Dean of Faculties in 1939 and Vice President of Indiana University in 1940. Briscoe gave up his appointment as Chairman of the Department of Chemistry in order to focus on his administrative roles as Vice President and Dean of Faculties, in which he served until his retirement in 1959.
Organizational involvement
Fellow of the American Association for the Advancement of Science (1934)
Fellow of the Indiana Academy of Science (1935)
American Chemical Society
Phi Beta Kappa
Sigma Xi
Tau Kappa Alpha
Phi Lambda Upsilon
Alpha Chi Sigma
Lambda Chi Alpha
Books
Qualitative Chemical Analysis: Principles and methods, 1931
General Chemistry for Colleges, 1935
The Structure and Properties of Matter, 1935
An Introduction to College Chemistry, 1937
References
Indiana University Bloomington faculty
Analytical chemists
American inorganic chemists
1893 births
1960 deaths
Scientists from Indianapolis
People from Martin County, Indiana
Indiana University Bloomington alumni | Herman T. Briscoe | Chemistry | 662 |
77,229,532 | https://en.wikipedia.org/wiki/Doug%20Butterworth | Douglas Stuart Butterworth is a retired South African fisheries scientist and applied mathematician. He is professor emeritus of applied mathematics at the University of Cape Town, where he is the director of the Marine Resource Assessment and Management (MARAM) research group.
Early life and education
Butterworth attended the Western Province Preparatory School in Cape Town, and he matriculated from nearby Bishops Diocesan College in 1963. Trained as a physicist, he holds an MSc from the University of Cape Town and a PhD in fundamental particle physics from University College London.
After his doctoral degree, he spent four months as an adjunct lecturer at the University of Natal. He returned to Cape Town in 1977 to work in applied mathematics for the Sea Fisheries Branch, unable to find a job in physics. He became involved in fisheries research in 1979, when he advised a co-worker – marine biologist Peter Best – about techniques for survey-based marine mammal abundance estimation. With Best's support, he became increasingly involved in the research that Best was conducting for the International Whaling Commission.
University of Cape Town
After two years at the Sea Fisheries Branch, Butterworth joined the Department of Mathematics and Applied Mathematics at the University of Cape Town. His focus was applied mathematics and he primarily taught biomathematics and environmental modelling. His most important research concerned fisheries assessment, fisheries management, and related modelling.
In particular, Butterworth is known for developing the so-called management procedure approach to fisheries regulation in the late 1980s and early 1990s. The procedure originated in an informal competition undertaken by Butterworth and Andre Punt, a PhD student, against foreign research groups; in the course of the competition, Butterworth and Punt devised the procedure by using feedback control to refine computer simulations for whaling quotas. The management procedure approach is extremely compatible with the precautionary principle advocated by the Earth Summit. The approach was subsequently applied to calculate annual catch targets for South African hake, sardine, anchovy, and rock lobster, and it spread beyond South Africa: Butterworth has advised at least 12 other countries, as well as fishing industry associations and international bodies (among them the scientific committees of the United Nations Food and Agriculture Organisation and the Convention on International Trade in Endangered Species). In total, he has written over 1,500 technical reports, in addition to some 250 academic publications.
After he retired from teaching, Butterworth remained the director of the university's Marine Resource Assessment and Management (MARAM) research group.
Honours and awards
Butterworth is a fellow of the Royal Society of South Africa. In October 2008, President Kgalema Motlanthe admitted him to the Order of Mapungubwe, granting him the award in silver for "his excellent contribution to the betterment of the environment and sustainability of fisheries".
In September 2019, the Emperor of Japan admitted Butterworth to the Order of the Rising Sun, Third Class. Having served on the Japanese delegation to the scientific committee of the Commission for the Conservation of Southern Bluefin Tuna, he received the award for his contribution to the sustainable management of Japan's marine resources, particularly southern bluefin tuna.
References
External links
Professor Emeritus Doug Butterworth at University of Cape Town
Professor Emeritus Doug Butterworth at MARAM
2019 interview with Boating South Africa
2019 interview with Fishing Industry News
20th-century South African scientists
21st-century South African scientists
Academic staff of the University of Cape Town
Alumni of Diocesan College, Cape Town
Alumni of University College London
Applied mathematicians
Fisheries scientists
University of Cape Town alumni
Year of birth missing (living people)
Place of birth missing (living people)
Living people | Doug Butterworth | Mathematics | 719 |
19,975,545 | https://en.wikipedia.org/wiki/Grex%20%28horticulture%29 | The term grex (plural greges or grexes; abbreviation gx), derived from the Latin noun , , meaning 'flock', has been expanded in botanical nomenclature to describe hybrids of orchids, based solely on their parentage. Grex names are one of the three categories of plant names governed by the International Code of Nomenclature for Cultivated Plants; within a grex the cultivar group category can be used to refer to plants by their shared characteristics (rather than by their parentage), and individual orchid plants can be selected (and propagated) and named as cultivars.
Botanical nomenclature of hybrids
The horticultural nomenclature of grexes exists within the framework of the botanical nomenclature of hybrid plants. Interspecific hybrids occur in nature, and are treated under the International Code of Nomenclature for algae, fungi, and plants as nothospecies ('notho' indicating hybrid). They can optionally be given Linnean binomials with a multiplication sign "×" before the species epithet, for example Crataegus × media. An offspring of a nothospecies, either with a member of the same nothospecies or any of the parental species as the other parent, has the same nothospecific name. The nothospecific binomial is an alias for a list of the ancestral species, whether the ancestry is precisely known or not.
For example:
a hybrid between Cattleya warscewiczii Rchb.f. 1854 and Cattleya aurea Linden 1883 can be called Cattleya × hardyana Sander 1883 or simply Cattleya hardyana. An offspring of a Cattleya × hardyana pollinated by another Cattleya × hardyana would also be called Cattleya × hardyana. Cattleya × hardyana would also be the name of an offspring of a Cattleya × hardyana pollinated by either a Cattleya warscewiczii or a Cattleya aurea, or an offspring of either a Cattleya warscewiczii or a Cattleya aurea pollinated by a Cattleya × hardyana.
× Brassocattleya is a nothogenus including all hybrids between Brassavola and Cattleya. It includes the species Brassocattleya × arauji, also known simply as Brassocattleya arauji, which includes all hybrids between Brassavola tuberculata and Cattleya forbesii.
An earlier term was nothomorph for subordinate taxa to nothospecies. Since the 1982 meeting of the International Botanical Congress, such subordinate taxa are considered varieties (nothovars).
Horticultural treatment
Because many interspecific (and even intergeneric) barriers to hybridization in the Orchidaceae are maintained in nature only by pollinator behavior, it is easy to produce complex interspecific and even intergeneric hybrid orchid seeds: all it takes is a human motivated to use a toothpick, and proper care of the mother plant as it develops a seed pod. Germinating the seeds and growing them to maturity is more difficult, however.
When a hybrid cross is made, all of the seedlings grown from the resulting seed pod are considered to be in the same grex. Any additional plants produced from the hybridization of the same two parents (members of the same species or greges as the original parents) also belong to the grex. Reciprocal crosses are included within the same grex. If two members of the same grex produce offspring, the offspring receive the same grex name as the parents.
If a parent of a grex becomes a synonym, any grex names that were established by specifying the synonym are not necessarily discarded; the grex name that was published first is used (the principle of priority).
All of the members of a specific grex may be loosely thought of as "sister plants", and just like the brothers and sisters of any family, may share many traits or look quite different from one another. This is due to the randomization of genes passed on to progeny during sexual reproduction. The hybridizer who created a new grex normally chooses to register the grex with a registration authority, thus creating a new grex name, but there is no requirement to do this. Individual plants may be given cultivar names to distinguish them from siblings in their grex. Cultivar names are usually given to superior plants with the expectation of propagating that plant; all genetically identical copies of a plant, regardless of method of propagation (divisions or clones) share a cultivar name.
Naming
The rules for the naming of greges are defined by the International Code of Nomenclature for Cultivated Plants (ICNCP).
The grex name differs from a species name in that the gregaric part of the name is capitalized, is not italicized, and may consist of more than one word (limited to 30 characters in total, excluding spaces).
Furthermore, names of greges are to be in a living language rather than Latin.
For example: an artificially produced hybrid between Cattleya warscewiczii and C. dowiana (or C. aurea, which the RHS, the international orchid hybrid registration authority, considers to be a synonym of C. dowiana) is called C. Hardyana (1896) gx. An artificially produced seedling that results from pollinating a C. Hardyana (1896) gx with another C. Hardyana (1896) gx is also a C. Hardyana (1896) gx. However, the hybrid produced between Cattleya Hardyana (1896) gx and C. dowiana is not C. Hardyana (1896) gx, but C. Prince John gx. In summary:
C. warscewiczii × C. dowiana → C. Hardyana (1896) gx
C. Hardyana (1896) gx × C. warscewiczii → C. Eleanor (1918) gx
C. dowiana × C. Hardyana (1896) gx → C. Prince John gx
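The parentage rules above can be modeled compactly: a grex is identified by the unordered pair of its parents, so reciprocal crosses coincide and crosses between siblings keep their grex name, while a back-cross to a parent species founds a new grex. The following is a hypothetical Python sketch (the register dictionary simply encodes the three example crosses listed above; it is not an RHS service or real registry):

```python
# Hypothetical sketch: grex identity is determined solely by the unordered pair
# of parent names. The register below encodes only the example crosses above.
REGISTER = {
    frozenset({"C. warscewiczii", "C. dowiana"}): "C. Hardyana (1896) gx",
    frozenset({"C. Hardyana (1896) gx", "C. warscewiczii"}): "C. Eleanor (1918) gx",
    frozenset({"C. Hardyana (1896) gx", "C. dowiana"}): "C. Prince John gx",
}

def grex_of(seed_parent: str, pollen_parent: str) -> str:
    """Name of the grex produced by a cross between two registered parents."""
    if seed_parent == pollen_parent and seed_parent.endswith("gx"):
        return seed_parent  # siblings within a grex: offspring keep the grex name
    # frozenset makes the lookup order-independent: reciprocal crosses coincide
    return REGISTER[frozenset({seed_parent, pollen_parent})]

assert grex_of("C. warscewiczii", "C. dowiana") == "C. Hardyana (1896) gx"
assert grex_of("C. dowiana", "C. warscewiczii") == "C. Hardyana (1896) gx"    # reciprocal
assert grex_of("C. Hardyana (1896) gx", "C. Hardyana (1896) gx") == "C. Hardyana (1896) gx"
assert grex_of("C. dowiana", "C. Hardyana (1896) gx") == "C. Prince John gx"  # back-cross
```

By contrast, a nothospecies name would also cover the back-cross in the last line, which is exactly the distinction drawn between the two concepts.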
Registration
When the name of a grex is first established, a description is required that specifies two particular parents, where each parent is specified either as a species (or nothospecies) or as a grex. The grex name then applies to all hybrids between those two parents. There is a permitted exception if the full name of one of the parents is known but the other is known only to genus level or nothogenus level.
New grex names are now established by the Royal Horticultural Society, which receives applications from orchid hybridizers.
Relationship with nothospecies
The concept of grex and nothospecies are similar, but not equivalent. While greges are only used within the orchid family, nothospecies are used for any plant (including orchids).
Furthermore, a grex and a nothospecies can have the same parentage yet are not equivalent, because the nothospecies includes back-crosses and the grex does not. They can even have the same epithet, distinguished by typography (see botanical name for explanation of epithets), although since January 2010 it is not permitted to publish such grex names if the nothospecies name already exists.
Hybrids between a grex and a species/nothospecies are named as greges, but this is not permitted if the nothospecies parent has the same parentage as the grex parent. That situation is a back-cross, and the nothospecies name is applied to the progeny.
References
Bibliography
External links
"Quarterly Supplement To The International Register And Checklist Of Orchid Hybrids (Sander’s List) January – March 2014 Registrations" Distributed with The Orchid Review 122(1306) (June 2014), The Royal Horticultural Society.
Guide lines and rules for composing grex, group and cultivar names. Summary of ICNCP rules and guidelines for Grex and Cultivar epithets.
Botanical nomenclature
Orchid hybrids | Grex (horticulture) | Biology | 1,648 |
42,699,853 | https://en.wikipedia.org/wiki/Rod%20and%20frame%20test | The rod and frame test is a psychophysical method of testing perception. It relies on the use of a rod and frame apparatus which uses a rotating rod set inside an individually rotatable drum, allowing an experimenter to vary the participant's frame of reference and thus test for their perception of vertical.
Rod and frame illusion
The rod and frame illusion occurs because of the effect of the orientation of the frame on the rod. In the simplest example of the rod and frame illusion, the illusion will cause the participant to perceive the rod to be oriented congruent with the orientation of the frame. When the participant is viewing the rod and frame that are both positioned at 0 degrees (or vertical), they perceive the rod as vertical with perfect accuracy. However, when the frame is tilted away from vertical, the participant's perception of vertical is affected. The participant tends to perceive the rod to be tilted in the same direction as the frame is oriented (e.g., if the frame is tilted in the counterclockwise direction, the rod will also be perceived as being tilted counterclockwise). As the tilt of the frame increases, the participants' perceived vertical increasingly deviates from true vertical.
Rod and frame test
To perform the rod and frame task, an apparatus consisting of a rod in a square frame is used. An example commercial apparatus can be seen in picture 1. When the participant is being tested using the apparatus, their head is fastened firmly in the chin rest to prevent the participant from collecting visual cues from outside of the apparatus. The rod and frame are shown in the center of the far end of the apparatus, which provides a frame of reference to the participant. Both the participant and the experimenter are able to adjust the orientation of the rod, while only the experimenter can adjust the frame orientation by using the appropriate knobs on the apparatus, as seen in picture 2. The experimenter is able to see the exact degree measurement of the rod and frame from vertical, while the participant sees the physical rod and frame inside the apparatus.
The methods of constant stimuli, limits, and adjustment can be used to test the participants, but method of limits is most commonly used in research conducted using the rod and frame task. When using the method of limits, the experimenter sets the orientation of the rod and frame separately and then the participant is asked to adjust the rod orientation until they perceive it to be vertical. Deviation from true vertical can then be determined. Based on which way the frame is tilted, the rod can be viewed as either being tilted in the same direction as the frame (direct effect), or in the opposite direction of the frame (indirect effect).
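As an illustration of how such settings can be scored (the numbers and the sign convention below are invented for the example, not taken from any study), the participant's rod settings under the method of limits can be summarized as a mean signed deviation from true vertical, with the direct/indirect classification following from whether that bias shares the frame's sign:

```python
from statistics import mean

def frame_effect(frame_tilt_deg: float, rod_settings_deg: list) -> tuple:
    """Mean signed error of 'subjective vertical' settings (0 = true vertical),
    classified as a direct effect (bias toward the frame tilt) or an indirect
    effect (bias away from it). Positive degrees = clockwise, by convention."""
    bias = mean(rod_settings_deg)
    if bias == 0 or frame_tilt_deg == 0:
        effect = "none"
    elif (bias > 0) == (frame_tilt_deg > 0):
        effect = "direct"    # settings deviate in the same direction as the frame
    else:
        effect = "indirect"  # settings deviate opposite to the frame
    return bias, effect

# Frame tilted 15 deg clockwise; three rod settings the participant judged vertical:
bias, effect = frame_effect(15.0, [2.5, 3.0, 1.5])
assert effect == "direct" and abs(bias - 7 / 3) < 1e-9
```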
Evidence
The frame of reference with respect to studies of the visual system refers to perceived reference axes. In the rod and frame illusion, there are a number of things that can influence one's frame of reference. Past research has found that one reason people experience the rod and frame illusion is visual-vestibular interaction. For instance, when a participant views the rod and frame task while physically tilted, the participant acts as though they are tilted opposite to the orientation of the frame. This suggests that the illusion, in part, is due to the person compensating for their perceived vertical in the direction opposite to the frame. Other evidence consistent with this is that, when participants view the rod and frame task while lying on their sides, they rely on vision when vestibular and proprioceptive cues are incongruent with visual cues. These findings suggest that the rod and frame illusion is processed in a type of hierarchy, where visual input is at the top, then vestibular cues, and finally proprioceptive cues. In 2010, Lipshits found that, along with this hierarchy of processing, proprioceptive information, as opposed to gravity, is used by the body to determine which way is vertical. Lipshits says that, when we are not able to use vision to determine which way is vertical, we use other cues based on the axis of our head and body.
See also
Visual perception
Field dependence
References
Frames of reference
Psychophysics
Perception
Visual perception | Rod and frame test | Physics,Mathematics | 854 |
10,063,238 | https://en.wikipedia.org/wiki/Palsa | Palsas are peat mounds with a permanently frozen peat and mineral soil core. They are a typical phenomenon in the polar and subpolar zone of discontinuous permafrost. One of their characteristics is having steep slopes that rise above the mire surface. This leads to the accumulation of large amounts of snow around them. The summits of the palsas are free of snow even in winter, because the wind carries the snow and deposits on the slopes and elsewhere on the flat mire surface. Palsas can be up to in diameter and can reach a height of .
Permafrost is found on palsa mires only in the palsas themselves, and its formation is based on the physical properties of peat. Dry peat is a good insulator, but wet peat conducts heat better, and frozen peat conducts it better still. This means that in winter cold can penetrate deep into the peat layers and heat can easily escape from the deeper wet layers, whereas in summer the dry peat on the palsa surface insulates the frozen core and prevents it from thawing. This is why palsas can survive in a climate where the mean annual temperature is only just below the freezing point.
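The asymmetry can be put in rough numbers with a one-dimensional series-resistance estimate. The conductivity figures below are assumed, order-of-magnitude values chosen for illustration only (dry peat ≈ 0.06 W/(m·K), frozen saturated peat ≈ 1.9 W/(m·K)), not measurements from any particular mire:

```python
# Back-of-the-envelope sketch of the peat insulation asymmetry.
# Conductivities are assumed, order-of-magnitude values in W/(m*K).
K_DRY_PEAT = 0.06      # dry summer cap: a good insulator
K_FROZEN_PEAT = 1.9    # same layer when frozen in winter: a good conductor

def heat_flux(delta_t_kelvin, thickness_m, conductivity):
    """Steady-state 1-D conduction: q = dT / R, with R = d / k (m^2*K/W)."""
    return delta_t_kelvin / (thickness_m / conductivity)

# A 10 K surface-to-core temperature difference across a 0.4 m peat cap:
q_summer = heat_flux(10, 0.4, K_DRY_PEAT)     # thaw-driving heat, 1.5 W/m^2
q_winter = heat_flux(10, 0.4, K_FROZEN_PEAT)  # winter cooling, 47.5 W/m^2

assert q_winter > 30 * q_summer  # winter cooling vastly outpaces summer warming
```

With those assumed values, winter cooling is roughly thirty times more effective than summer warming per degree of temperature difference, which is why the frozen core can persist even where the mean annual temperature is only slightly below freezing.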
A lithalsa is a palsa without peat cover. Lithalsas exist in a smaller range than palsas, commonly occurring in oceanic climate regimes. However, both palsas and lithalsas are relatively small compared to pingos, typically less than .
Palsa development
Palsas may be initiated in areas of a moor or bog where the winter freezing front penetrates relatively faster than surrounding areas, perhaps due to an unusually thin cover of snow. The lack of thermal insulation provided by thick snow permits much deeper freezing in winter. This ice may then last through the summer with a persistent 'bump' of up to several cm due to frost heave. The elevated surface of a palsa will tend also to have thinner snow cover, allowing greater winter cooling, while in summer the surface material (especially if organic) will dry out and provide thermal insulation. Thus the interior temperature is consistently lower than that of adjacent ground. This contributes to the formation of an ice lens which grows by drawing up surrounding water. The expansion of the ice upon freezing exerts pressure on the surrounding soil, further forcing water out of its pore spaces which then accumulates on and increases the volume of the growing ice lens. A positive feedback loop develops. Changes in surface moisture and vegetation will then be such as to preserve the newly formed permafrost.
The overlying soil layer is gradually lifted up by frost heaving. In cross-section, the ice cores of a palsa show layering, which is caused by the successive winter freezing intervals. The pressing out of water from the pores is not crucial, however, since the boggy soil is water-saturated and thus always provides enough water for ice core growth.
Many scientists agree that the development of a palsa is cyclic, with growth continuing until the palsa reaches a convex form. When this occurs, increasing pressure in the uppermost layer of peat causes cracks in the peat layer, which result in the sliding of the peat layer toward the sides of the palsa. As this layer of peat generates an insulating effect, its regression exposes the permafrost in the palsa and initiates melting. In this case, the melting of the palsa is a normal part of the cyclic development, and it is possible for new embryonic palsa forms to develop in the same area. However, studies of palsa forms have primarily observed dome palsas in the northern regions. These study areas lie within the core area for palsa occurrences, and the cyclic development is therefore applicable only to dome palsas within the core area.
Palsa plateaus often lack the convex form that causes cracks in the peat layers and the decay of dome palsas. In palsa plateaus, however, frost expansion, which causes swelling, will over time create an uneven surface and increase the possibility of water accumulating on the surface, causing local regression and melting. This process, which causes melting much as the cracking of the peat layer does in dome palsas, is a normal part of the life span of palsa plateaus but is not part of a cyclic development.
Palsas appear to go through a developmental cycle that eventually leads to thawing and collapse. Open cracks that commonly accompany palsa growth and the water that tends to accumulate around palsas, probably as a result of their weight depressing the adjacent bog surface, are important factors in this process. The fact that palsas in various stages of growth and decay occur together shows that their collapse is not necessarily indicative of climatic change. All that is usually left after a palsa collapses is a depression surrounded by a rim.
Morphology
One specific type of mire at which palsa structures appear is called a palsa mire. The habitat is sometimes instead described as a palsa bog; both terms refer to a peaty wetland where palsa mounds occur. In palsa mires, palsas in different stages of development can appear because of the cyclic development of the structure. Collapsed forms of palsas are therefore common in these areas, visible as rounded ponds, open peat surfaces, or low circular rim ridges.
The individual palsa is described as a mound or a larger elevation in peatland with a core of permanently frozen peat and/or mineral soil beneath an uppermost active layer of peat. The landform occurs in areas with discontinuous permafrost. The core of a palsa stays frozen permanently, including in summertime, because the peat layer creates an insulating effect. Most palsas have an oval or elongated form, but other shapes of palsas have been described. In some places (Laivadalen and Keinovuopio in northern Sweden), palsa complexes consisting of several dome-shaped palsas have been found. At other places (Seitajaure in northern Sweden), another palsa structure is described: several palsa plateaus with flatter surfaces and steep edges.
Palsa forms include mounds, plateaus and ridges of different sizes. Palsas in Iceland have been described as hump-shaped, dike-shaped, plateau-shaped, ring-shaped, and shield-shaped. Those in Norway have been referred to as palsa plateaus, esker palsas, string palsas, conical or dome-shaped palsas, and palsa complexes.
Widths are commonly , and lengths . However, lengths of up to have been reported for esker-like palsa ridges running parallel to the gradient of a bog. Heights range from less than up to , but can reach about at a maximum above the surrounding area. Large forms tend to be considerably less conical than small ones. In places, palsas combine to form complexes several hundred meters in extent. The permafrost core contains ice lenses no thicker than , though locally lenses up to almost thick have been described.
During the cyclic development, the palsa goes through several stages with differing morphology. In the initial, aggrading stage of development, palsas have smooth surfaces with no cracks in the peat layer and no visible signs of erosion. They are often small and dome-shaped and are frequently referred to as embryo palsas. In this stage, ice layers form, and these are commonly found in the frozen peat core. It has been suggested that the ice layers are created by ice segregation, but buoyancy is most likely the reason for their formation: buoyant rise of the core occurs, and the core freezes when the permafrost reaches the area, creating the ice layers. In the stable, mature stage, the surface has risen further to a level at which the winter snow cover is thinned by the wind, which in turn allows deeper freezing. By this stage the frozen core has extended beyond the peat layer into the underlying silty sediments; the core thaws partially each summer, but never completely. The thawing can create water-filled ponds adjacent to the palsa, and in some cases small cracks in the peat layer along these ponds can be present in the stable stage, but there are no visible signs of block erosion. During the degrading stage, however, palsas develop large cracks, up to several meters long, which divide the peat layer into blocks, and so-called block erosion occurs. Several individual ponds are often found adjacent to palsas in the degrading stage, due to thawing of the frozen core. Wind erosion often thins the peat layer, sometimes by several decimeters. When palsa plateaus are in the degrading stage, several ponds can be seen on the flat plateau surface, often with neighbouring block erosion.
When block erosion occurs the mineral soil is often exposed along the cracks, especially when the peat layer is thin.
Geographic distribution
Palsas are typical forms of the discontinuous permafrost zone regions and are therefore found in Subarctic regions of northern Canada and Alaska, Siberia, northern Fennoscandia and Iceland. They are almost exclusively associated with the presence of peat and commonly occur in areas where the winters are long and the snow cover tends to be thin. In some places palsas extend into underlying permafrost; in others they rest on an unfrozen substratum.
In the southern hemisphere, palsa remains from the last glacial maximum have been identified on the Argentine side of Isla Grande de Tierra del Fuego, just north of Cami Lake. Remnants of Ice-Age palsas are also found in the raised bogs (Hochmoore) of Central Europe, such as Hohen Venn in the German-Belgian border area.
Effects of climate change
Effect on palsa forms due to change in climatic conditions
Erosion of palsa forms and recession of the permafrost in the palsa core do not directly indicate a change in climatic conditions; because palsas develop cyclically, thawing of the core is a normal part of their development. Changes in climatic conditions do, however, affect palsa forms. Palsas on the outskirts of the occurrence area are more dependent on climatic conditions for their existence than those near its core. A study of palsa forms was carried out in 1998 at Dovrefjell, in southern Norway. At the time of observation, the mean annual temperature lay just under in the area. Such areas are highly sensitive to changes in temperature; even a small temperature rise can have a great effect on the continued existence of palsas in a region. Measurements from meteorological stations in the area show that the mean annual temperature rose 0.8 °C between the periods 1901–1930 and 1961–1990. Since the start of the warming trend in the 1930s, entire palsa bogs and large palsa plateaus have completely melted in the Dovrefjell area. The sensitivity of palsa bogs to temperature changes makes them a good climate indicator. The Dovrefjell study concluded that if palsas are used as climate indicators, it is essential to separate large changes in the distribution of permafrost from smaller ones; smaller changes are caused by short climatic variations lasting only a few years. Small dome palsas, also called embryo palsas, can develop as a result of such minor variations, for example a few consecutive cold winters. Because these small palsas disappear after just a few years, they fail to establish themselves as permanent formations. This phenomenon has been observed at Dovrefjell in recent decades and is caused by a larger change in climatic conditions: the temperature has risen to a level at which palsas cannot fully initiate their cyclic development.
This is a consequence of climate change and the warming trend that has been observed in the Dovrefjell area. In this area, the climate has not been cold enough for new palsa forms to establish during the whole of the 20th century.
However, some uncertainty remains about how local conditions affect the formation of palsa forms, and especially the hydrology of palsa mires. Additionally, more active-layer monitoring, correlated with local weather conditions, is needed to better determine the effect of climate change on palsa mires.
Palsa and GHG-fluxes
Because the top mounds of palsas are drier and more nutrient-poor than their wet surroundings, they create a mosaic of microhabitats within the mire. The occurrence of a palsa is determined by several climatological factors, such as air temperature, precipitation, and snow thickness. An increase in temperature and precipitation may therefore induce thawing of frozen peat and subsidence of the peat surface, resulting in a thicker active layer and wetter conditions. The vegetation changes in adaptation to the wetter conditions: the expanding wetness is projected to benefit sphagnum mosses and graminoids at the expense of the drier palsa vegetation. The associated changes in greenhouse-gas fluxes are increased CO2 uptake and increased methane emission, mainly due to the expansion of tall graminoids.
The continued occurrence of palsa mires in Fennoscandia
The lasting occurrence of palsa mires is endangered by several factors, foremost among them climate change; palsa mires located on the margins of their climatic distribution are the most vulnerable. Climate change causes an increase in the average annual temperature, which must lie under for palsas to persist. Palsas also generally require relatively low precipitation (generally < 500 mm annually), and increases in precipitation due to climate change may result in palsa degradation and thaw. Increased snowfall means that palsas are better insulated and therefore do not get as cold in winter, while increased rainfall in the summer months can raise ground thermal conductivity and transfer more heat to the palsa core. The effects are already visible: many studies report degradation of palsa mires during the last decades, with climate change the primary cause of the loss of habitat area. Climate envelope models have been used to predict the future distribution of palsas under different climate change scenarios; one such study found that Fennoscandia is likely to become climatically unsuitable for palsas by 2040, and that strong mitigation (SSP1-2.6) is required to retain a significant suitable area for palsas in Western Siberia.
Another factor is atmospheric fallout of particles, which can influence the hydrochemistry and the degradation rate of organic matter. Construction activity, primarily that which affects hydrology and hydrochemistry, can also damage palsa mire habitat, although its impact is minimal given that the occurrence area is large compared with the affected area. Palsa mires are a priority habitat type under the EU Species and Habitats Directive, so their conservation within Sweden and Finland is of great interest. Conservation of this habitat can be achieved with measures that sustain a favourable conservation status and avoid degradation of the palsa mires. In 2013, however, Sweden reported the conservation status of its palsa mires to be poor; in many areas the palsas have collapsed and there is a high risk of extinction.
Effects on ecosystems and species
A typical palsa mire has a high level of biodiversity, ranging from several types of bird species to tiny organisms such as bacteria. This is largely due to its pronounced minerotrophic–ombrotrophic and water-table gradients, which enable the presence of several microhabitats of differing wetness. Palsa mires are listed as a priority habitat type by the European Union, and climate change may pose a great risk to their ecosystems. Although much research has been carried out on the degradation of palsa mires, there is still an enormous information gap concerning the implications that disruptions in these ecosystems may have for biodiversity. In fact, little is known about many of the organisms inhabiting palsas. To understand and predict the possible implications of palsa loss, it is vital to gain more knowledge about the distribution of these organisms and about long-term patterns of species richness; without this key knowledge, the biological importance of palsa mires is hard to assess.
The abundance of breeding bird species in Northern Europe peaks in palsa mire zones; this is particularly true of North European waders. In northernmost Finland, palsa mires host the highest bird species density of all the biotopes compared, most likely because the heterogeneity of habitats and the availability of shallow waters (a basic source of food) create such diversity. Given the likely loss of palsa mires this century, effects on wildlife and biodiversity are undeniable. Shallow waters might disappear or decrease dramatically, creating a more homogeneous environment, which will likely have a negative impact on certain breeding bird species as well as other organisms inhabiting palsa mires permanently or seasonally.
Available research on the ecological effects of palsa regression is scarce. As many breeding species are not exclusive to palsa mires, it is not yet certain whether declining palsa mires could lead to extinctions. It is not a stretch, though, to suggest that the homogenization of palsa mires will have biological consequences. A few studies have examined the ecological factors responsible for species abundance, among which water table depth is a suggested factor. To conduct a comprehensive study of biodiversity effects in this area, much more research is needed to map the many species living in palsa areas.
Differences and commonalities between pingos and palsas
Both palsas and pingos are perennial frost mounds; however, pingos are typically larger than palsas and can reach heights greater than 50 m, while the highest palsas rarely exceed 7–10 m. More importantly, palsas do not have an intrusive ice core, that is, ice formed from injected local groundwater, whereas the defining characteristic of pingos is the presence of intrusive ice throughout most of the core. Palsas form through ice-lens accumulation by cryosuction; pingos form through hydraulic pressure (open-system pingos) or hydrostatic pressure (closed-system pingos).
Moreover, contrary to pingos, which are usually isolated, palsas usually arise in groups, such as in a so-called palsa bog. Unlike pingos, palsas do not require surrounding permafrost to grow, since palsas are themselves permafrost. Pingos also grow below the active layer (the depth to which the annual freeze-thaw cycle extends), whereas palsas grow within the active layer.
Both palsas and pingos result from freezing of water to an ice core. Palsas, however, do not necessarily require positive hydrostatic pressure (to inject water), since the boggy soil is water-saturated and therefore has sufficient supply for the growing ice core.
Palsas can grow laterally to a wide extent forming a "palsa plateau", also known as a "permafrost plateau". Pingos do not grow laterally to the same extent because the growth of pingos is chiefly upward; thus they are always hills. Similarly, palsas can laterally decrease in size while maintaining their height; the decay of pingos follows a different pattern.
Terminology and synonyms
Palsa (plural: palsas) is a term from the Finnish language meaning "a hummock rising out of a bog with a core of ice", which in turn is a borrowing from Northern Sami, balsa. Because palsas particularly develop in moorlands, they are also named palsamoors. Bugor and bulginniakhs are general terms in the Russian language (the latter of Yakutian origin) for both palsas and pingos.
References
Further reading
External links
Pictures of palsas and further information:
Palsa, a Fennoscandian term for a round or elongated hillock or mound, maximum height of about 10 m, composed of a peat layer overlying mineral soil.
William W. Shilts Geologic Image Gallery (Illinois State Geological Survey)
Field trip guide on periglacial (cryogenic) geomorphology (html)
Field trip guide on periglacial (cryogenic) geomorphology (pdf)
Interpretation guide of natural geographic features: Palsa bog (index)
Interpretation guide of natural geographic features: Palsa bog (aerial photographs)
€U(RO)CK article from a 2005 issue
Geomorphology
Geologic domes
Glaciology
Patterned grounds
Periglacial landforms | Palsa | Biology | 4,308 |
2,421,479 | https://en.wikipedia.org/wiki/Implicit%20cost | In economics, an implicit cost, also called an imputed cost, implied cost, or notional cost, is the opportunity cost equal to what a firm must give up in order to use a factor of production that it already owns and thus does not pay rent on. It is the opposite of an explicit cost, which is borne directly. In other words, an implicit cost is any cost that results from using an asset instead of renting it out, selling it, or using it differently. The term also applies to foregone income from choosing not to work.
Implicit costs also represent the divergence between economic profit (total revenues minus total costs, where total costs are the sum of implicit and explicit costs) and accounting profit (total revenues minus only explicit costs). Since economic profit includes these extra opportunity costs, it will always be less than or equal to accounting profit.
Lipsey (1975) uses the example of a firm sitting on an expensive plot worth $10,000 a month in rent which it bought for a mere $50 a hundred years before. If the firm cannot obtain a profit after deducting $10,000 a month for this implicit cost, it ought to move premises (or close down completely) and take the rent instead. In calculating this figure, the firm ought to ignore the figure of $50, and remember instead to look at the land's current value.
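The gap between the two profit measures can be sketched in a few lines of code. The $10,000 implicit rent comes from Lipsey's example above; the revenue and explicit-cost figures are hypothetical, chosen only to illustrate the comparison:

```python
def accounting_profit(revenue, explicit_costs):
    # Accounting profit ignores the opportunity cost of assets the firm owns.
    return revenue - explicit_costs

def economic_profit(revenue, explicit_costs, implicit_costs):
    # Economic profit also subtracts implicit (opportunity) costs,
    # so it is always <= accounting profit.
    return revenue - explicit_costs - implicit_costs

# Hypothetical monthly figures for Lipsey's firm; the implicit cost is the
# $10,000 rent the plot would fetch if the firm moved out.
revenue, explicit, implicit = 18_000, 9_000, 10_000
print(accounting_profit(revenue, explicit))          # 9000: looks profitable
print(economic_profit(revenue, explicit, implicit))  # -1000: better to move
```

Here the firm shows a positive accounting profit but a negative economic profit, which, in Lipsey's terms, is the signal that it should move premises and take the rent instead.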
See also
Explicit cost
Cost
Economic profit
Imputation (economics)
Cost of goods sold
References
Costs
Economics and time | Implicit cost | Physics | 305 |
32,148,934 | https://en.wikipedia.org/wiki/ASF1%20like%20histone%20chaperone | In molecular biology, the ASF1 like histone chaperone family of proteins includes the yeast and human ASF1 proteins. These proteins belong to the chaperone group, and in particular to the histone chaperone subgroup. ASF1 participates in both the replication-dependent and replication-independent pathways. The three-dimensional structure has been determined as a compact immunoglobulin-like beta sandwich fold topped by three helical linkers.
References
Protein families | ASF1 like histone chaperone | Biology | 104 |
3,110,099 | https://en.wikipedia.org/wiki/Gamma%20Librae | Gamma Librae (γ Librae, abbreviated Gamma Lib, γ Lib) is a suspected binary star system in the constellation of Libra. It is visible to the naked eye, having an apparent visual magnitude of +3.91. Based upon an annual parallax shift of 19.99 mas as seen from Earth, it lies 163 light years from the Sun.
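The 163-light-year figure follows directly from the quoted parallax via the standard relation d[pc] = 1/p[arcsec]; a minimal sketch (3.2616 light-years per parsec is the usual conversion constant):

```python
LY_PER_PARSEC = 3.2616  # light-years in one parsec

def parallax_to_light_years(parallax_mas):
    # Annual parallax in milliarcseconds -> arcseconds -> parsecs -> light-years.
    parsecs = 1.0 / (parallax_mas / 1000.0)
    return parsecs * LY_PER_PARSEC

print(round(parallax_to_light_years(19.99)))  # 163
```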
The primary component (designated Gamma Librae A) has been formally named Zubenelhakrabi, the traditional name of the system.
Nomenclature
γ Librae (Latinised to Gamma Librae) is the system's Bayer designation. The designations of the two components as Gamma Librae A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Gamma Librae bore the traditional name Zuben (el) Hakrabi (also rendered as Zuben-el-Akrab and corrupted as Zuben Hakraki). The name is a modification of the Arabic زبانى العقرب Zubān al-ʿAqrab "the claws of the scorpion", a name that dates to before Libra was a distinct constellation from Scorpius. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Zubenelhakrabi for the component Gamma Librae A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Root, refers to an asterism consisting of Gamma Librae, Alpha2 Librae, Iota Librae and Beta Librae. Consequently, the Chinese name for Gamma Librae itself is (), "the Third Star of Root".
Properties
Because the star lies near the ecliptic it is subject to occultations by the Moon, allowing the angular size to be measured. As of 1940, the pair had an angular separation of 0.10 arc seconds along a position angle of 191°.
The yellow-hued primary, component Aa, is an evolved G-type giant star with a stellar classification of G8.5 III and an estimated age of 4.3 billion years. It has 1.15 times the mass of the Sun and has expanded to 11.14 times the Sun's radius. The star is radiating around 72 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,786 K. There is a magnitude 11.2 visual companion, component B, at an angular separation of 42.5 arc seconds along a position angle of 157°, as of 2013.
At its distance, the visual magnitude is diminished by an extinction of 0.11 due to interstellar dust. The system is moving closer to the Sun with a radial velocity of −26.71 km/s.
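Combining the parallax distance (about 50 pc, from the 19.99 mas parallax above) with this extinction gives an illustrative absolute magnitude via the extinction-corrected distance modulus M = m - 5*log10(d/10 pc) - A. The resulting value is not stated in the source; this is only a sketch of the standard relation:

```python
import math

def absolute_magnitude(apparent_mag, distance_pc, extinction=0.0):
    # Extinction-corrected distance modulus: M = m - 5*log10(d / 10 pc) - A_V
    return apparent_mag - 5 * math.log10(distance_pc / 10.0) - extinction

# Gamma Librae: m_V = +3.91, d ~ 50 pc, A_V = 0.11
M_v = absolute_magnitude(3.91, 50.0, extinction=0.11)
print(round(M_v, 2))  # about +0.3
```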
Planetary system
On 11 April 2018, the discovery of two gas giant planets orbiting Gamma Librae was announced.
References
Zubenelhakrabi
Librae, 38
Librae, Gamma
076333
G-type giants
138905
5787
CD-27 10464
Libra (constellation)
Planetary systems with two confirmed planets | Gamma Librae | Astronomy | 709 |
46,809,768 | https://en.wikipedia.org/wiki/Pneumonia%20jacket | A pneumonia jacket was a medical device used to warm the chest of a person with pneumonia. In the pre-antibiotic era, supportive care measures such as fluid support and warming were the only treatments available. Pneumonia jackets were variously constructed of oiled silk, muslin, and sometimes even included a system of rubber tubing that circulated hot water around the chest as a means of keeping the patient warm.
The term was apparently coined by one Charles Wilson Ingraham of Binghamton, New York. He wrote, "In an article published in the New York Medical Journal, May 18, 1895, I called particular attention to a means of applying heat by the use of what I termed the "pneumonia jacket" which consists of an arrangement for the circulation of hot water through coils of rubber tubing, so arranged as to cover the whole chest...it hastens the various stages of the pneumonia process [and] sustains lobular vitality and consequently the lobe will not be so prone to chronic disease or to recurrent attacks of pneumonia."
References
Pneumonia
Medical equipment | Pneumonia jacket | Biology | 217 |
29,230,973 | https://en.wikipedia.org/wiki/List%20of%20Russian%20astronomers%20and%20astrophysicists | This list of Russian astronomers and astrophysicists includes the famous astronomers, astrophysicists and cosmologists from the Russian Empire, the Soviet Union and the Russian Federation.
Alphabetical list
A
Tateos Agekian, one of the pioneers of Russian and world stellar dynamics, discoverer of two evolutionary sequences of stellar systems: nearly spherical and strongly flattened
Vladimir Albitsky, discovered a significant number of asteroids
Viktor Ambartsumian, one of the founders of theoretical astrophysics, discoverer of stellar associations, founder of Byurakan Observatory in Armenia
Andrejs Auzāns, director of the Tashkent observatory, 1911-1916
B
Nikolai P. Barabashov, co-author of the ground breaking publication of the first pictures of the far side of the Moon in 1961, called Atlas of the Other Side of the Moon; a crater and a planet were named after him
Vladimir Belinski, an author of the BKL singularity model of the Universe evolution
Igor Belkovich, made contributions to astronomy; the crater Bel'kovich on the Moon is named after him
Aristarkh Belopolsky, invented a spectrograph based on the Doppler effect, among the first photographers of stellar spectra
Sergey Belyavsky, discovered the bright naked-eye comet C/1911 S3 (Beljawsky); discovered and co-discovered a number of asteroids
Gennady S. Bisnovatyi-Kogan, first determined the maximum mass of a hot neutron star
Sergey Blazhko, discovered a secondary variation of the amplitude and period of some RR Lyrae stars and related pulsating variables, now known as the Blazhko effect
Semion Braude, co-developed large-scale radio interferometers for precise examination of extraterrestrial radio sources
Fyodor Bredikhin, developed the theory of comet tails, meteors and meteor showers, a director of the Pulkovo Observatory
Matvei Petrovich Bronstein, theoretical physicist; pioneer of quantum gravity; author of works in astrophysics, semiconductors, quantum electrodynamics and cosmology
Jacob Bruce, statesman, naturalist and astronomer, founder of the first observatory in Russia (in the Sukharev Tower)
C
Lyudmila Chernykh, astronomer, discovered 268 asteroids
Nikolai Chernykh, astronomer, discovered 537 asteroids and two comets
Aleksandr Chudakov, co-discoverer of the Earth's radiation belt
D
Denis Denisenko, astronomer, author of more than 25 scientific articles and a presenter at five international conferences
A. G. Doroshkevich, along with Igor Novikov, discovered cosmic microwave background radiation as a detectable phenomenon
Alexander Dubyago, expert in theoretical astrophysics; the lunar crater Dubyago is named after him and his father, Dmitry Ivanovich Dubyago
Dmitry Dubyago, expert in theoretical astrophysics, astrometry, and gravimetry; a crater on the Moon is named after him and his son
E
Vasily Engelhardt, researched comets, asteroids, nebulae, and star clusters, in an observatory he built himself
F
Vasily Fesenkov, founded the Alma-Ata (now Tien Shan) astrophysical observatory, and was the first to make a study of Zodiacal light using photometry, and suggested a theory of its dynamics
Kirill Florensky, head of Comparative Planetology at the Vernadsky Institute of the U.S.S.R. Academy of Sciences; the crater Florensky on the Moon is named after him
Alexander Friedmann, mathematician and cosmologist, discovered the expanding-universe solution to the general relativity field equations; authored the FLRW metric of the Universe
Alexei Fridman, predicted existence of smaller satellites around Uranus
G
George Gamow, theoretical physicist and cosmologist, discovered alpha decay via quantum tunneling and Gamow factor in stellar nucleosynthesis, introduced the Big Bang nucleosynthesis theory, predicted cosmic microwave background
Vitaly Ginzburg, co-developed the theory of superconductivity, the theory of electromagnetic wave propagation in plasmas, and a theory of the origin of cosmic radiation
Sergey Glazenap, astronomer; a crater on the Moon and the minor planet 857 Glasenappia are named after him
Alexander A. Gurshtein, developed a concept of history of constellations and the zodiac
Matvey Gusev, the first to prove the non-sphericity of the Moon, pioneer of photography in astronomy
I
Naum Idelson, astronomer
J
Benjamin Jekhowsky, discovered a number of asteroids; made more than 190 scientific publications; the asteroid 1606 Jekhovsky is named after him
K
Lyudmila Karachkina, discovered a number of asteroids, including the Amor asteroid 5324 Lyapunov, 10031 Vladarnolda and the Trojan asteroid 3063 Makhaon
Nikolai Kardashev, astrophysicist, inventor of Kardashev scale for ranking the space civilizations
Isaak Khalatnikov, an author of the BKL singularity model of the Universe evolution
Viktor Knorre, astronomer, discovered four asteroids
Marian Kowalski, first to measure the rotation of the Milky Way
Nikolai Aleksandrovich Kozyrev, astronomer, observed the transient lunar phenomenon
Georgij A. Krasinsky, astronomer, researched planetary motions and ephemeris
Feodosy Krasovsky, astronomer and geodesist; measured the Krasovsky ellipsoid, a coordinate system used in the USSR and the post-Soviet states
Yevgeny Krinov, astronomer, renowned meteorite researcher; the mineral Krinovite, discovered in 1966, was named after him
L
Anders Johan Lexell, astronomer and mathematician; researcher of celestial mechanics and comet astronomy; proved that Uranus is a planet rather than a comet
Andrei Linde, created the Universe chaotic inflation theory
Evgeny Lifshitz, an author of the BKL singularity model of the Universe evolution
Mikhail Lomonosov polymath, inventor of the off-axis reflecting telescope, discoverer of the atmosphere of Venus
Mikhail Lyapunov, astronomer
Kronid Lyubarsky, worked on the Soviet program of interplanetary exploration of Mars
M
Benjamin Markarian, discovered Markarian's Chain
Dmitri Dmitrievich Maksutov, inventor of the Maksutov telescope
Aleksandr Aleksandrovich Mikhailov, credited with leading the post-war revival of the Pulkovo Observatory
Nikolay Moiseyev, expert in celestial mechanics, worked on mathematical methods of celestial calculations and theory of comet formation
N
Grigory Neujmin, discovered 74 asteroids, and most notably 951 Gaspra and 762 Pulcova
Igor Dmitriyevich Novikov, formulated the Novikov self-consistency principle, an important contribution to the theory of time travel
Boris Numerov, created various astronomic and mineralogical instruments, as well as various algorithms and methods that bear his name
P
Pavel Petrovich Parenago, known for contributions to the field of galactic astronomy
Yevgeny Perepyolkin, observed the proper motion of stars with respect to extragalactic nebula
Solomon Pikelner, made a significant contribution to the theory of the interstellar medium, solar plasma physics, stellar atmospheres, and magnetohydrodynamics
Elena V. Pitjeva, expert in the field of Solar System dynamics and celestial mechanics
S
Viktor Safronov, astronomer and cosmologist, author of the planetesimal hypothesis of planet formation
Kaspar Gottfried Schweizer, discovered five comets, and found one NGC object
Andrei Severny, known for his work on solar flares and astronomical observations from artificial satellites
Nikolai Shakura, developed theory of accretion and astrophysics of x-ray binaries, co-developed the standard theory of disk accretion
Grigory Shayn, astronomer and astrophysicist, the first director of the Crimean Astrophysical Observatory, co-developed a method for measurement of stellar rotation
Vladislav Shevchenko, astronomer, specialized in lunar exploration
Iosif Shklovsky, astronomer and astrophysicist, author of several discoveries in the fields of radio astronomy and cosmic rays, extraterrestrial life researcher
Tamara Mikhaylovna Smirnova, co-discovered the periodic comet 74P/Smirnova-Chernykh, along with Nikolai Stepanovich Chernykh; discovered various asteroids; the asteroid 5540 Smirnova was named in her honor
Friedrich Wilhelm Struve, astronomer and geodesist, founder and the first director of the Pulkovo Observatory, prominent researcher and discoverer of new double stars, initiated the construction of 2,820 km long Struve Geodetic Arc, progenitor of the Struve family of astronomers
Otto Lyudvigovich Struve, astronomer and astrophysicist, co-developed a method for measurement of stellar rotation, directed several observatories in the U.S.
Nadezhda Sytinskaya, planetary scientist known for co-developing the meteor slag theory of lunar surface regolith
Otto Wilhelm von Struve, astronomer, director of the Pulkovo Observatory, discovered over 500 double stars
Rashid Sunyaev, astrophysicist, co-predicted the Sunyaev–Zel'dovich effect of CMB distortion
T
Gavriil Tikhov, invented the feathering spectrograph; one of the first to use color filters to increase the contrast of surface details on planets
V
George Volkoff, predicted the existence of neutron stars
Boris Vorontsov-Velyaminov, discovered the absorption of light by interstellar dust, author of the Morphological Catalogue of Galaxies
Alexander Vyssotsky, created first list of nearby stars identified not by their motions in the sky, but by their intrinsic, spectroscopic, characteristics
Y
Avenir Aleksandrovich Yakovkin, astronomer
Ivan Yarkovsky, discovered the YORP and Yarkovsky effects of meteoroids or asteroids
Ivan Naumovich Yazev, astronomer and professor, worked at the Pulkovo Observatory and the Mykolaiv Observatory and later headed the observatory at Irkutsk State University from 1948 until 1955.
Z
Aleksandr Zaitsev, coined the term Messaging to Extra-Terrestrial Intelligence, conducted the first intercontinental radar astronomy experiment, transmitted the Cosmic Calls and Teen Age Message
Yakov Zel'dovich, physicist, astrophysicist and cosmologist, the first to suggest that accretion discs around massive black holes are responsible for the quasar radiation, co-predicted the Sunyaev–Zel'dovich effect of CMB distortion
Abram Leonidovich Zelmanov, astronomer
Sergei Alexandrovich Zhevakin, identified ionized helium as the valve for the heat engine that drives the pulsation of Cepheid variable stars
Lyudmila Zhuravlyova, discovered a number of asteroids; ranked 43rd by Harvard University's list of those who discovered minor planets; credited with having discovered 200 such bodies
Felix Ziegel, author of more than 40 popular books on astronomy and space exploration, generally regarded as a founder of Russian ufology
See also
List of astronomers
List of astrophysicists
List of Russian scientists
List of Russian inventors
Science and technology in Russia
Pulkovo Observatory
References
Astrophysicists
Astronomers
Russian
Astrophysics
Russian astronomers | List of Russian astronomers and astrophysicists | Physics,Astronomy | 2,329 |
5,676,385 | https://en.wikipedia.org/wiki/Catskill%20Aqueduct | The Catskill Aqueduct, part of the New York City water supply system, brings water from the Catskill Mountains to Yonkers where it connects to other parts of the system.
History
Construction began in 1907. The aqueduct proper was completed in 1916 and the entire Catskill Aqueduct system including three dams and 67 shafts was completed in 1924. The total cost of the aqueduct system was $177 million ($ in ).
Specifications
The aqueduct consists of stretches of cut-and-cover aqueduct, grade tunnel, and pressure tunnel, together with nine miles (10 km) of steel siphon. The 67 shafts sunk for various purposes on the aqueduct and City Tunnel vary widely in depth. Water flows by gravity through the aqueduct.
The Catskill Aqueduct's operational capacity differs between the section north of the Kensico Reservoir in Valhalla, New York, and the section south of Kensico Reservoir to the Hillview Reservoir in Yonkers, New York. The aqueduct normally operates well below capacity. About 40% of New York City's water supply flows through the Catskill Aqueduct.
Geography
The Catskill Aqueduct begins at the Ashokan Reservoir in Olivebridge, New York, located in Ulster County. From the Ashokan Reservoir, the aqueduct traverses in a southeasterly direction through Ulster, Orange, and Putnam counties. It tunnels first beneath the Rondout Valley and Rondout Creek in the town of Marbletown, then beneath the Wallkill River in the town of Gardiner in Ulster County before flowing toward Orange County, New York. It crosses below the Hudson River bed at Storm King Mountain in Orange County before reaching Putnam County on the east side of the river at Breakneck Mountain. The aqueduct transports water from Ashokan as well as the Schoharie Reservoir, which feeds into Ashokan.
The aqueduct then enters Westchester County, New York, and flows to the Kensico Reservoir, which also receives water from the city's Delaware Aqueduct. It continues from the Kensico reservoir and terminates at the Hillview Reservoir in Yonkers. The Hillview Reservoir then feeds City Tunnels 1 and 2, which bring water to New York City. If necessary, water can be made to bypass both reservoirs.
References
See also
Delaware Aqueduct
New York City Water Supply System
Frank E. Winsor, the engineer in charge of construction of the Aqueduct
Water infrastructure of New York City
Landmarks in New York (state)
Aqueducts in New York (state)
Interbasin transfer
Oil Shockwave
The Oil Shockwave event was a policy wargaming scenario created by the joint effort of several energy policy think tanks, the National Commission on Energy Policy and Securing America's Future Energy. It outlined a series of hypothetical international events taking place in December 2005, all related to world supply and demand of petroleum. Participants in the scenario role-played Presidential Cabinet officials, who were asked to discuss and respond to the events. The hypothetical events included civil unrest in OPEC country Nigeria, and coordinated terrorist attacks on ports in Saudi Arabia and Alaska. In the original simulation, the participants had all previously held jobs closely related to their roles in the exercise.
Jason Grumet, from the National Commission on Energy Policy, said that the message of the simulation was that "very modest disruptions in oil supply, whether they're here at home or abroad, can have truly devastating impacts on our nation's economy and our overall security."
Details of the scenario
The original event was performed on June 23, 2005, and was a simulation of December 2005, six months in the future. The first scenario involved civil unrest in Nigeria, a member of the Organization of Petroleum Exporting Countries, resulting in oil companies and the US government evacuating their personnel from the country. In the simulation, this led to a decrease in oil supply and price spikes, causing a variety of negative effects on the United States economy.
More events followed as the scenario progressed, including a very cold winter in the Northern hemisphere, terrorist attacks on Saudi Arabian and Alaskan oil ports, and Al-Qaeda cells hijacking oil tankers and crashing them into the docking facilities at the ports (which might effectively shut down such a port for weeks, if not months).
The scenarios were set up with pre-produced scripted news clips. Participants were also given briefing memos with background information related to their specific cabinet positions. The participants discussed and prepared policy recommendations for an unseen Chief Executive after each part of the scenario.
Original participants
The original event was a one-time exercise and used participants that held positions that were identical or closely related to their positions in the simulation. Participants included former administrator of the Environmental Protection Agency Carol Browner, former Director of Central Intelligence Robert Gates, former Marine Corps Commandant and member of the Joint Chiefs of Staff General P.X. Kelley USMC (Ret.), and former National Economic Advisor to the President, Gene Sperling.
References
External links
Oil Shockwave College Curriculum
Wargames
Energy policy
Non-Archimedean geometry
In mathematics, non-Archimedean geometry is any of a number of forms of geometry in which the axiom of Archimedes is negated. An example of such a geometry is the Dehn plane. Non-Archimedean geometries may, as the example indicates, have properties significantly different from Euclidean geometry.
There are two senses in which the term may be used, referring to geometries over fields which violate one of the two senses of the Archimedean property (i.e. with respect to order or magnitude).
Geometry over a non-Archimedean ordered field
The first sense of the term is the geometry over a non-Archimedean ordered field, or a subset thereof. The aforementioned Dehn plane takes the self-product of the finite portion of a certain non-Archimedean ordered field based on the field of rational functions. In this geometry, there are significant differences from Euclidean geometry; in particular, there are infinitely many parallels to a straight line through a point—so the parallel postulate fails—but the sum of the angles of a triangle is still a straight angle.
Intuitively, in such a space, the points on a line cannot be described by the real numbers or a subset thereof, and there exist segments of "infinite" or "infinitesimal" length.
Geometry over a non-Archimedean valued field
The second sense of the term is the metric geometry over a non-Archimedean valued field, or ultrametric space. In such a space, even more contradictions to Euclidean geometry result. For example, all triangles are isosceles, and overlapping balls nest. An example of such a space is the p-adic numbers.
Intuitively, in such a space, distances fail to "add up" or "accumulate".
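The failure of distances to "add up" can be made concrete with the p-adic absolute value |x|_p = p^(-v_p(x)). The following sketch (the helper names vp and abs_p are ad hoc, not from any library) checks the strong triangle inequality and the "all triangles are isosceles" property for p = 3:

```python
from fractions import Fraction

def vp(x: int, p: int) -> int:
    """p-adic valuation of a nonzero integer: the exponent of p dividing x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def abs_p(x: int, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p**(-v_p(x)); |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p ** vp(x, p))

p = 3
for x in range(1, 50):
    for y in range(1, 50):
        # Strong (ultrametric) triangle inequality: |x+y|_p <= max(|x|_p, |y|_p)
        assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
        # "All triangles are isosceles": if |x|_p != |y|_p,
        # then |x+y|_p equals the larger of the two.
        if abs_p(x, p) != abs_p(y, p):
            assert abs_p(x + y, p) == max(abs_p(x, p), abs_p(y, p))
print("ultrametric checks passed")
```

Note how, unlike the usual absolute value, adding two numbers can never produce something p-adically "larger" than both summands.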
References
Fields of geometry
NGC 278
NGC 278 is an isolated spiral galaxy in the northern circumpolar constellation of Cassiopeia, near the southern constellation boundary with Andromeda. The galaxy was discovered on December 11, 1786 by German-born astronomer William Herschel. J. L. E. Dreyer described it as, "considerably bright, pretty large, round, 2 stars of 10th magnitude near".
The morphological classification of this galaxy is SAB(rs)b, which indicates a weak bar structure around the nucleus (SAB), an incomplete ring around the bar (rs), and moderately tightly wound spiral arms (b). It is a relatively small, compact spiral with multiple flocculent arms and a bright, dusty nucleus that does not appear to be active. However, the neutral hydrogen in the galaxy is spread over a diameter five times larger than its visible size.
Although it appears nearly face-on, the galactic plane is inclined by an angle of 28° to the line of sight from the Earth, with the major axis being oriented along a position angle of 116°. The outer part of the disk appears to be warped, so that the major axis is not quite perpendicular to the minor axis, and the morphology is somewhat disrupted. The inner disk contains multiple intense star-forming regions. This star formation is taking place in an inner ring that may have been triggered by a merger with a smaller companion. It has an H II nucleus.
References
External links
Intermediate spiral galaxies
Cassiopeia (constellation)
0278
00528
03051
17861211
Cryptococcus consortionis
Cryptococcus consortionis is a fungus species. It produces colonies that are cream colored with a glistening, mucoid appearance. When grown in liquid media, this species requires constant agitation. Its growth range is from 4 °C to 23 °C, with growth at 23 °C occurring very slowly. On the microscopic level, C. consortionis appears ovoid, with a thin capsule. Sexual reproduction does not occur in this species, but it reproduces asexually through budding at the birth scar site. Very occasionally, the cells have been observed to produce three-celled pseudomycelia. C. consortionis does not ferment. This species produces amylose; it is the only basidioblastomycete that does so while being unable to assimilate cellobiose, D-galactose, mannitol, myo-inositol and nitrate. C. consortionis is DBB positive. This species requires thiamine for proper growth, and its growth is slowed by small amounts of cycloheximide. C. consortionis does not produce urease, and does not produce melanin on DOPA.
References
Tremellomycetes
Fungus species
Parity (physics)
In physics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection):
P: (x, y, z) ↦ (−x, −y, −z).
It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image.
All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu, the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force.
By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation.
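The determinant test described here can be checked directly; a minimal pure-Python sketch (the det2/det3 helpers are ad hoc, written out to keep the example self-contained):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    """Determinant of a 3x3 matrix, cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Point reflection in 3D flips all coordinates: determinant -1,
# so it is not a rotation but a genuine parity transformation.
P3 = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
assert det3(P3) == -1

# Flipping both coordinates in 2D gives determinant +1:
# the same as a 180-degree rotation, not a parity transformation.
P2 = [[-1, 0], [0, -1]]
assert det2(P2) == 1

# A single-axis reflection in 2D does have determinant -1.
Mx = [[-1, 0], [0, 1]]
assert det2(Mx) == -1
print("determinant checks passed")
```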
In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
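The even/odd classification can be illustrated with the standard decomposition of an arbitrary function into parity eigenfunctions, f_e(x) = (f(x) + f(-x))/2 and f_o(x) = (f(x) - f(-x))/2; a sketch, with exp chosen arbitrarily as the test function:

```python
import math

def even_part(f):
    """Even (parity eigenvalue +1) part of a function of one variable."""
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    """Odd (parity eigenvalue -1) part of a function of one variable."""
    return lambda x: (f(x) - f(-x)) / 2

f = math.exp                        # e**x is neither even nor odd
fe, fo = even_part(f), odd_part(f)  # these are cosh and sinh

for x in [0.0, 0.5, 1.3, -2.0]:
    assert math.isclose(fe(x), fe(-x))        # parity eigenvalue +1
    assert math.isclose(fo(x), -fo(-x))       # parity eigenvalue -1
    assert math.isclose(fe(x) + fo(x), f(x))  # decomposition is exact
    assert math.isclose(fe(x), math.cosh(x))
    assert math.isclose(fo(x), math.sinh(x))
print("even/odd decomposition verified")
```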
Simple symmetry relations
Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group.
Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word projective refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true, therefore the projective representation condition on quantum states is weaker than the representation condition on classical states.
The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not representations are called spinors and so quantum states may transform not only as tensors but also as spinors.
If one adds to this a classification by parity, these can be extended, for example, into notions of
scalars (P: φ ↦ φ) and pseudoscalars (P: φ ↦ −φ), which are rotationally invariant.
vectors (P: v ↦ −v) and axial vectors, also called pseudovectors (P: a ↦ a), which both transform as vectors under rotation.
One can define reflections such as
Vx: (x, y, z) ↦ (−x, y, z),
which also have negative determinant and form a valid parity transformation. Then, combining them with rotations (or successively performing x-, y-, and z-reflections) one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used.
Parity forms the abelian group ℤ₂ due to the relation P̂² = 1̂. All abelian groups have only one-dimensional irreducible representations. For ℤ₂, there are two irreducible representations: one is even under parity (P̂φ = φ), the other is odd (P̂φ = −φ). These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations, and so in principle a parity transformation may rotate a state by any phase.
Representations of O(3)
An alternative way to write the above classification of scalars, pseudoscalars, vectors and pseudovectors is in terms of the representation space that each object transforms in. This can be given in terms of the group homomorphism ρ which defines the representation. For a matrix R ∈ O(3):
scalars: ρ(R) = 1, the trivial representation
pseudoscalars: ρ(R) = det R
vectors: ρ(R) = R, the fundamental representation
pseudovectors: ρ(R) = det(R) R
When the representation is restricted to SO(3), scalars and pseudoscalars transform identically, as do vectors and pseudovectors.
Classical mechanics
Newton's equation of motion (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity.
However, angular momentum is an axial vector: L = r × p, and under parity L ↦ (−r) × (−p) = L.
In classical electrodynamics, the charge density is a scalar, the electric field, , and current are vectors, but the magnetic field, is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector.
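The claim that angular momentum is parity-even while position and momentum are parity-odd can be verified numerically; a small sketch with an arbitrary choice of r and p:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def parity(v):
    """Spatial inversion: flip the sign of every component."""
    return tuple(-c for c in v)

r = (1.0, -2.0, 3.0)   # position: a true (polar) vector
p = (0.5, 4.0, -1.0)   # momentum: a true (polar) vector
L = cross(r, p)        # angular momentum L = r x p

# Under parity both r and p flip sign ...
r_p, p_p = parity(r), parity(p)
# ... but their cross product does not: L is an axial vector.
assert cross(r_p, p_p) == L
print("L = r x p is parity-even:", L)
```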
Effect of spatial inversion on some variables of classical physics
The two major divisions of classical physical variables have either even or odd parity. The way in which particular variables and vectors sort into either category depends on whether the number of dimensions of space is odd or even. The categories of odd or even given below for the parity transformation are a different, but intimately related, issue.
The answers given below are correct for 3 spatial dimensions. In a 2 dimensional space, for example, when constrained to remain on the surface of a planet, some of the variables switch sides.
Odd
Classical variables whose signs flip under spatial inversion are predominantly vectors. They include: a particle's position, velocity, acceleration and linear momentum, the force acting on it, the electric field, and the electric dipole moment.
Even
Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include: the time when an event occurs, a particle's mass and energy, power, electric charge and charge density, and the axial vectors angular momentum and magnetic field.
Quantum mechanics
Possible eigenvalues
In quantum mechanics, spacetime transformations act on quantum states. The parity transformation, P̂, is a unitary operator, in general acting on a state ψ as follows: P̂ ψ(r) = e^{iφ/2} ψ(−r).
One must then have P̂² ψ(r) = e^{iφ} ψ(r), since an overall phase is unobservable. The operator P̂², which reverses the parity of a state twice, leaves the spacetime invariant, and so is an internal symmetry which rotates its eigenstates by phases e^{iφ}. If e^{iφ} is an element of a continuous U(1) symmetry group of phase rotations, then e^{−iφ/2} is part of this U(1) and so is also a symmetry. In particular, we can define P̂′ = P̂ e^{−iφ/2}, which is also a symmetry, and so we can choose to call P̂′ our parity operator, instead of P̂. Note that P̂′² = 1 and so P̂′ has eigenvalues ±1. Wave functions with eigenvalue +1 under a parity transformation are even functions, while eigenvalue −1 corresponds to odd functions. However, when no such symmetry group exists, it may be that all parity transformations have some eigenvalues which are phases other than ±1.
For electronic wavefunctions, even states are usually indicated by a subscript g for gerade (German: even) and odd states by a subscript u for ungerade (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H2+) is labelled 1σg and the next-closest (higher) energy level is labelled 1σu.
The wave functions of a particle moving into an external potential, which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric to the origin), either remain invariable or change signs: these two possible states are called the even state or odd state of the wave functions.
The law of conservation of parity of particles states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariable in the process of ensemble evolution. However this is not true for the beta decay of nuclei, because the weak nuclear interaction violates parity.
The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum, and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum.
Consequences of parity symmetry
When parity generates the Abelian group , one can always take linear combinations of quantum states such that they are either even or odd under parity (see the figure). Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number.
In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if P̂ commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any scalar potential, i.e., V = V(r), hence the potential is spherically symmetric. The following facts can be easily proven:
If |φ⟩ and |ψ⟩ have the same parity, then ⟨φ| X̂ |ψ⟩ = 0, where X̂ is the position operator.
For a state |L, Lz⟩ of orbital angular momentum L with z-axis projection Lz, P̂ |L, Lz⟩ = (−1)^L |L, Lz⟩.
If [Ĥ, P̂] = 0, then atomic dipole transitions only occur between states of opposite parity.
If [Ĥ, P̂] = 0, then a non-degenerate eigenstate of Ĥ is also an eigenstate of the parity operator; i.e., a non-degenerate eigenfunction of Ĥ is either invariant to P̂ or is changed in sign by P̂.
Some of the non-degenerate eigenfunctions of Ĥ are unaffected (invariant) by parity and the others are merely reversed in sign when the Hamiltonian operator and the parity operator commute:
P̂ ψ = c ψ,
where c is a constant, the eigenvalue of P̂, and P̂² ψ = c P̂ ψ.
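The statement about non-degenerate eigenstates can be illustrated with a hypothetical two-site toy model: a Hamiltonian H = [[a, b], [b, a]] commutes with the site-swap operator playing the role of parity, and its non-degenerate eigenvectors are automatically parity eigenstates (a sketch; the 2x2 helpers and the parameter values are ad hoc):

```python
from math import sqrt, isclose

def matmul(m, n):
    """2x2 matrix product."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(m, v):
    """2x2 matrix times 2-vector."""
    return [sum(m[i][k] * v[k] for k in range(2)) for i in range(2)]

def close(u, v):
    return all(isclose(x, y, abs_tol=1e-12) for x, y in zip(u, v))

a, b = 2.0, 0.7          # arbitrary toy parameters, b != 0
H = [[a, b], [b, a]]     # toy Hamiltonian, symmetric under site swap
P = [[0, 1], [1, 0]]     # site-swap operator, playing the role of parity

assert matmul(H, P) == matmul(P, H)   # [H, P] = 0

v_plus = [1 / sqrt(2), 1 / sqrt(2)]    # symmetric combination
v_minus = [1 / sqrt(2), -1 / sqrt(2)]  # antisymmetric combination

# Non-degenerate eigenvectors of H, with eigenvalues a + b and a - b ...
assert close(matvec(H, v_plus), [(a + b) * c for c in v_plus])
assert close(matvec(H, v_minus), [(a - b) * c for c in v_minus])

# ... are automatically parity eigenstates, with eigenvalues +1 and -1.
assert close(matvec(P, v_plus), v_plus)
assert close(matvec(P, v_minus), [-c for c in v_minus])
print("non-degenerate eigenstates of H are parity eigenstates")
```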
Many-particle systems: atoms, molecules, nuclei
The overall parity of a many-particle system is the product of the parities of the one-particle states. It is −1 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules.
Atoms
Atomic orbitals have parity (−1)ℓ, where the exponent ℓ is the azimuthal quantum number. The parity is odd for orbitals p, f, ... with ℓ = 1, 3, ..., and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s22s22p3, and is identified by the term symbol 4So, where the superscript o denotes odd parity. However, the third excited term at about 83,300 cm−1 above the ground state, with electron configuration 1s22s22p23s, has even parity since there are only two 2p electrons, and its term symbol is 4P (without an o superscript).
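The rule that a configuration's parity is (−1) raised to the summed azimuthal quantum numbers can be turned into a short sketch (the configuration_parity helper and its input format are invented for illustration, and it assumes single-digit principal quantum numbers):

```python
# Orbital letter -> azimuthal quantum number l
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}

def configuration_parity(config: str) -> int:
    """Parity of an electron configuration such as '1s2 2s2 2p3':
    (-1) raised to the sum over subshells of (electron count) * l."""
    total_l = 0
    for term in config.split():
        letter = term[1]            # e.g. 'p' in '2p3'
        count = int(term[2:] or 1)  # number of electrons in the subshell
        total_l += L_OF[letter] * count
    return (-1) ** total_l

# Ground state of nitrogen, 1s2 2s2 2p3: three p electrons -> odd parity.
assert configuration_parity("1s2 2s2 2p3") == -1
# Excited configuration 1s2 2s2 2p2 3s1: two p electrons -> even parity.
assert configuration_parity("1s2 2s2 2p2 3s1") == 1
print("nitrogen ground state parity:", configuration_parity("1s2 2s2 2p3"))
```

The same (−1)ℓ bookkeeping applies to nucleons in the shell model, as described in the Nuclei section.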
Molecules
The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass.
Centrosymmetric molecules at equilibrium have a centre of symmetry at their midpoint (the nuclear center of mass). This includes all homonuclear diatomic molecules as well as certain symmetric molecules such as ethylene, benzene, xenon tetrafluoride and sulphur hexafluoride. For centrosymmetric molecules, the point group contains the operation i which is not to be confused with the parity operation. The operation i involves the inversion of the electronic and vibrational displacement coordinates at the nuclear centre of mass. For centrosymmetric molecules the operation i commutes with the rovibronic (rotation-vibration-electronic) Hamiltonian and can be used to label such states. Electronic and vibrational states of centrosymmetric molecules are either unchanged by the operation i, or they are changed in sign by i. The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The complete electromagnetic Hamiltonian of a centrosymmetric molecule
does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions.
Nuclei
In atomic nuclei, the state of each nucleon (proton or neutron) has even or odd parity, and nucleon configurations can be predicted using the nuclear shell model. As for electrons in atoms, the nucleon state has odd overall parity if and only if the number of nucleons in odd-parity states is odd. The parity is usually written as a + (even) or − (odd) following the nuclear spin value. For example, the isotopes of oxygen include 17O(5/2+), meaning that the spin is 5/2 and the parity is even. The shell model explains this because the first 16 nucleons are paired so that each pair has spin zero and even parity, and the last nucleon is in the 1d5/2 shell, which has even parity since ℓ = 2 for a d orbital.
Quantum field theory
If one can show that the vacuum state is invariant under parity, P̂ |0⟩ = |0⟩, the Hamiltonian is parity invariant, and the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction.
To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations. The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator:
P̂ a(p, ±) P̂⁻¹ = −a(−p, ±),
where p denotes the momentum of a photon and ± refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity. Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity.
A straightforward extension of these arguments to scalar field theories shows that scalars have even parity. That is, P̂ φ(r, t) P̂⁻¹ = φ(−r, t), since the corresponding annihilation operator transforms as P̂ a(p) P̂⁻¹ = a(−p).
This is true even for a complex scalar field. (Details of spinors are dealt with in the article on the Dirac equation, where it is shown that fermions and antifermions have opposite intrinsic parity.)
With fermions, there is a slight complication because there is more than one spin group.
Parity in the Standard Model
Fixing the global symmetries
Applying the parity operator twice leaves the coordinates unchanged, meaning that P̂² must act as one of the internal symmetries of the theory, at most changing the phase of a state. For example, the Standard Model has three global U(1) symmetries with charges equal to the baryon number B, the lepton number L, and the electric charge Q. Therefore, the parity operator satisfies P̂² = e^{i(aB + bL + cQ)} for some choice of a, b, and c. This operator is also not unique in that a new parity operator can always be constructed by multiplying it by an internal symmetry, such as P̂′ = P̂ e^{iaB} for some a.
To see if the parity operator can always be defined to satisfy P̂² = 1, consider the general case when P̂² = Q̂ for some internal symmetry Q̂ present in the theory. The desired parity operator would be P̂′ = P̂ Q̂^{−1/2}. If Q̂ is part of a continuous symmetry group then Q̂^{−1/2} exists, but if it is part of a discrete symmetry then this element need not exist and such a redefinition may not be possible.
The Standard Model exhibits a (−1)^F symmetry, where F is the fermion number operator counting how many fermions are in a state. Since all particles in the Standard Model satisfy F = B + L (mod 2), the discrete symmetry (−1)^F is also part of the continuous symmetry group generated by B + L. If the parity operator satisfied P̂² = (−1)^F, then it can be redefined to give a new parity operator satisfying P̂² = 1. But if the Standard Model is extended by incorporating Majorana neutrinos, which have F = 1 and B + L = 0, then the discrete symmetry (−1)^F is no longer part of the continuous symmetry group and the desired redefinition of the parity operator cannot be performed. Instead it satisfies P̂⁴ = 1, so the Majorana neutrinos would have intrinsic parities of ±i.
Parity of the pion
In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity.
They studied the decay of an "atom" made from a deuteron (d) and a negatively charged pion (π−) in a state with zero orbital angular momentum into two neutrons (n).
Neutrons are fermions and so obey Fermi–Dirac statistics, which implies that the final state is antisymmetric. Using the fact that the deuteron has spin one and the pion spin zero, together with the antisymmetry of the final state, they concluded that the two neutrons must have orbital angular momentum L = 1. The total parity is the product of the intrinsic parities of the particles and the extrinsic parity of the spherical harmonic function, (−1)^L. Since the orbital momentum changes from zero to one in this process, if the process is to conserve the total parity then the products of the intrinsic parities of the initial and final particles must have opposite sign. A deuteron nucleus is made from a proton and a neutron, and so using the aforementioned convention that protons and neutrons have intrinsic parities equal to +1, they argued that the parity of the pion is equal to minus the product of the parities of the two neutrons divided by that of the proton and neutron in the deuteron, explicitly Pπ = −(Pn Pn)/(Pp Pn) = −1, from which they concluded that the pion is a pseudoscalar particle.
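The parity bookkeeping in this argument can be written out explicitly; a sketch in the convention P_p = P_n = +1:

```latex
% Capture \pi^- + d \to n + n from an s-wave (L = 0) atomic state.
% Initial parity (deuteron = p + n bound state, internal L = 0):
P_i = P_\pi \, P_p \, P_n \, (-1)^{L} \Big|_{L=0} = P_\pi \, P_p \, P_n
% Final parity (Fermi statistics with J = 1 forces L' = 1 for the two neutrons):
P_f = P_n \, P_n \, (-1)^{L'} \Big|_{L'=1} = -P_n^2
% Parity conservation P_i = P_f, with P_p = P_n = +1:
P_\pi = \frac{-P_n^2}{P_p \, P_n} = -1
```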
Parity violation
Although parity is conserved in electromagnetism and gravity, it is violated in weak interactions, and perhaps, to some degree, in strong interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way.
An obscure 1928 experiment, undertaken by R. T. Cox, G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays, but, since the appropriate concepts had not yet been developed, those results had no impact. In 1929, Hermann Weyl explored, without any evidence, the existence of a two-component massless particle of spin one-half. This idea was rejected by Pauli, because it implied parity violation.
By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards.
Wu, Ambler, Hayward, Hoppes, and Hudson (1957) found a clear violation of parity conservation in the beta decay of cobalt-60. As the experiment was winding down, with double-checking in progress, Wu informed Lee and Yang of their positive results and, saying the results needed further examination, asked them not to publicize the results first. However, Lee revealed the results to his Columbia colleagues on 4 January 1957 at a "Friday lunch" gathering of the Physics Department of Columbia. Three of them, R. L. Garwin, L. M. Lederman, and R. M. Weinrich, modified an existing cyclotron experiment, and immediately verified the parity violation. They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal.
The discovery of parity violation explained the outstanding puzzle in the physics of kaons.
In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider had created a short-lived parity symmetry-breaking bubble in quark-gluon plasmas. An experiment conducted by several physicists in the STAR collaboration suggested that parity may also be violated in the strong interaction. It is predicted that this local parity violation manifests itself through the chiral magnetic effect.
Intrinsic parity of hadrons
To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction, such as rho meson decay to pions.
See also
C-symmetry
CP violation
Electroweak theory
Mirror matter
Molecular symmetry
T-symmetry
References
Footnotes
Citations
Sources
Physical quantities
Quantum mechanics
Quantum field theory
Nuclear physics
Conservation laws
Quantum numbers
Asymmetry
Palladium-hydrogen electrode
The palladium-hydrogen electrode (abbreviation: Pd/H2) is one of the common reference electrodes used in electrochemical studies. Most of its characteristics are similar to those of the standard hydrogen electrode (with platinum), but palladium has one significant feature: the capability to absorb (dissolve into itself) molecular hydrogen.
Electrode operation
Two phases can coexist in palladium when hydrogen is absorbed:
alpha-phase at hydrogen concentration less than 0.025 atoms per atom of palladium
beta-phase at hydrogen concentration corresponding to the non-stoichiometric formula PdH0.6
The electrochemical behaviour of a palladium electrode in equilibrium with H3O+ ions in solution parallels the behaviour of palladium with molecular hydrogen.
Thus the equilibrium is controlled in one case by the partial pressure or fugacity of molecular hydrogen, and in the other case by the activity of H+ ions in solution.
When palladium is electrochemically charged with hydrogen, the coexistence of the two phases is manifested by a constant potential of approximately +50 mV relative to the reversible hydrogen electrode. This potential is independent of the amount of absorbed hydrogen over a wide range, a property that has been utilized in the construction of palladium/hydrogen reference electrodes. The main advantage of such an electrode is that it does not require the continuous bubbling of molecular hydrogen through the solution that is indispensable for the standard hydrogen electrode.
See also
Dynamic hydrogen electrode
Reversible hydrogen electrode
References
External links
Electrochimica Acta
Electrodes
Palladium
Hydrogen technologies | Palladium-hydrogen electrode | Chemistry | 316 |
42,813,835 | https://en.wikipedia.org/wiki/Data%20publishing | Data publishing (also data publication) is the act of releasing research data in published form for use by others. It is a practice consisting in preparing certain data or data set(s) for public use thus to make them available to everyone to use as they wish.
This practice is an integral part of the open science movement.
There is broad, multidisciplinary consensus on the benefits resulting from this practice.
The main goal is to elevate data to first-class research outputs. There are a number of initiatives underway, as well as points of consensus and issues still in contention.
There are several distinct ways to make research data available, including:
publishing data as supplemental material associated with a research article, typically with the data files hosted by the publisher of the article
hosting data on a publicly available website, with files available for download
hosting data in a repository that has been developed to support data publication, e.g. figshare, Dryad, Dataverse, Zenodo. A large number of general and specialty (such as by research topic) data repositories exist. For example, the UK Data Service enables users to deposit data collections and re-share these for research purposes.
publishing a data paper about the dataset, which may be published as a preprint, in a regular journal, or in a data journal that is dedicated to supporting data papers. The data may be hosted by the journal or hosted separately in a data repository.
Publishing data allows researchers to both make their data available to others to use, and enables datasets to be cited similarly to other research publication types (such as articles or books), thereby enabling producers of datasets to gain academic credit for their work.
The motivations for publishing data range from a desire to make research more accessible, to enabling the citability of datasets, to research-funder or publisher mandates that require open data publishing. The UK Data Service is one key organisation working with others to raise the importance of citing data correctly and to help researchers do so.
Solutions to preserve privacy within data publishing have been proposed, including privacy-protection algorithms, data "masking" methods, and regional privacy-level calculation algorithms.
Methods for publishing data
Data files as supplementary material
A large number of journals and publishers support supplementary material being attached to research articles, including datasets. Though historically such material might have been distributed only by request or on microform to libraries, journals today typically host such material online. Supplementary material is available to subscribers to the journal or, if the article or journal is open access, to everyone.
Data repositories
There are a large number of data repositories, on both general and specialized topics. Many repositories are disciplinary repositories, focused on a particular research discipline such as the UK Data Service which is a trusted digital repository of social, economic and humanities data. Repositories may be free for researchers to upload their data or may charge a one-time or ongoing fee for hosting the data. These repositories offer a publicly accessible web interface for searching and browsing hosted datasets, and may include additional features such as a digital object identifier, for permanent citation of the data, and linking to associated published papers and code.
Data papers
Data papers or data articles are “scholarly publication of a searchable metadata document describing a particular on-line accessible dataset, or a group of datasets, published in accordance to the standard academic practices”. Their final aim is to provide “information on the what, where, why, how and who of the data”. The intent of a data paper is to offer descriptive information on the related dataset(s) focusing on data collection, distinguishing features, access and potential reuse rather than on data processing and analysis. Because data papers are considered academic publications no different than other types of papers, they allow scientists sharing data to receive credit in currency recognizable within the academic system, thus "making data sharing count". This provides not only an additional incentive to share data, but also through the peer review process, increases the quality of metadata and thus reusability of the shared data.
Thus data papers represent the scholarly communication approach to data sharing. Despite their potential, data papers are not the ultimate and complete solution for all data sharing and reuse issues and, in some cases, they are considered to induce false expectations in the research community.
Data journals
Data papers are supported by a rich array of data journals, some of which are "pure", i.e. they are dedicated to publish data papers only, while others – the majority – are "mixed", i.e. they publish a number of articles types including data papers.
A comprehensive survey on data journals is available. A non-exhaustive list of data journals has been compiled by staff at the University of Edinburgh.
Examples of "pure" data journals are: Earth System Science Data, Journal of Open Archaeology Data, Open Health Data, Polar Data Journal, and Scientific Data.
Examples of "mixed" journals publishing data papers are: Biodiversity Data Journal, F1000Research, GigaScience, GigaByte, PLOS ONE, and SpringerPlus.
Data citation
Data citation is the provision of accurate, consistent and standardised referencing for datasets just as bibliographic citations are provided for other published sources like research articles or monographs. Typically the well established Digital Object Identifier (DOI) approach is used with DOIs taking users to a website that contains the metadata on the dataset and the dataset itself.
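As an illustration, a dataset citation of the kind described above can be assembled mechanically from metadata fields. This is a minimal sketch: the function name, field names, and formatting are illustrative, following the common creator/year/title/publisher/identifier pattern rather than any single repository's standard.

```python
def format_data_citation(creators, year, title, publisher, doi):
    """Build a human-readable dataset citation string.

    Follows the common Creator (Year). Title. Publisher. Identifier
    pattern, rendering the DOI as a resolvable URL."""
    authors = "; ".join(creators)
    return f"{authors} ({year}). {title}. {publisher}. https://doi.org/{doi}"

# Hypothetical example metadata:
citation = format_data_citation(
    creators=["Doe, J.", "Roe, R."],
    year=2019,
    title="Example survey dataset",
    publisher="Example Data Repository",
    doi="10.1234/example.5678",
)
# citation -> "Doe, J.; Roe, R. (2019). Example survey dataset. Example Data Repository. https://doi.org/10.1234/example.5678"
```

Rendering the DOI as a URL (rather than a bare identifier) follows the practice of taking readers to a landing page with the dataset's metadata, as described above.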
History of development
A 2011 paper reported an inability to determine how often data citation happened in social sciences.
Papers from 2012–13 reported that data citation was becoming more common, but that the practice was not yet standardized.
In 2014 FORCE 11 published the Joint Declaration of Data Citation Principles covering the purpose, function and attributes of data citation.
In October 2018 CrossRef expressed its support for cataloging datasets and recommending their citation.
A popular data-oriented journal reported in April 2019 that it would now use data citations.
A June 2019 paper suggested that increased data citation will make the practice more valuable for everyone by encouraging data sharing and also by increasing the prestige of people who share.
Data citation is an emerging topic in computer science and it has been defined as a computational problem. Indeed, citing data poses significant challenges to computer scientists and the main problems to address are related to:
the use of heterogeneous data models and formats – e.g., relational databases, Comma-Separated Values (CSV), Extensible Markup Language (XML), Resource Description Framework (RDF);
the transience of data;
the necessity to cite data at different levels of coarseness – i.e., deep citations;
the necessity to automatically generate citations to data with variable granularity.
See also
Data archiving
Disciplinary repository
Open science data
Registry of Research Data Repositories
References
Academic publishing
Open access (publishing)
Data
Open science
Scholarly communication | Data publishing | Technology | 1,433 |
6,774,401 | https://en.wikipedia.org/wiki/Transformational%20theory | Transformational theory is a branch of music theory developed by David Lewin in the 1980s, and formally introduced in his 1987 work, Generalized Musical Intervals and Transformations. The theory—which models musical transformations as elements of a mathematical group—can be used to analyze both tonal and atonal music.
The goal of transformational theory is to change the focus from musical objects—such as the "C major chord" or "G major chord"—to relations between musical objects (related by transformation). Thus, instead of saying that a C major chord is followed by G major, a transformational theorist might say that the first chord has been "transformed" into the second by the "Dominant operation." (Symbolically, one might write "Dominant(C major) = G major.") While traditional musical set theory focuses on the makeup of musical objects, transformational theory focuses on the intervals or types of musical motion that can occur. According to Lewin's description of this change in emphasis, "[The transformational] attitude does not ask for some observed measure of extension between reified 'points'; rather it asks: 'If I am at s and wish to get to t, what characteristic gesture should I perform in order to arrive there?'" (from Generalized Musical Intervals and Transformations (GMIT), p. 159)
Formalism
The formal setting for Lewin's theory is a set S (or "space") of musical objects, and a set T of transformations on that space. Transformations are modeled as functions acting on the entire space, meaning that every transformation must be applicable to every object.
Lewin points out that this requirement significantly constrains the spaces and transformations that can be considered. For example, if the space S is the space of diatonic triads (represented by the Roman numerals I, ii, iii, IV, V, vi, and vii°), the "Dominant transformation" must be defined so as to apply to each of these triads. This means, for example, that some diatonic triad must be selected as the "dominant" of the diminished triad on vii. Ordinary musical discourse, however, typically holds that the "dominant" relationship is only between the I and V chords. (Certainly, no diatonic triad is ordinarily considered the dominant of the diminished triad.) In other words, "dominant," as used informally, is not a function that applies to all chords, but rather describes a particular relationship between two of them.
There are, however, any number of situations in which "transformations" can extend to an entire space. Here, transformational theory provides a degree of abstraction that could be a significant music-theoretical asset. One transformational network can describe the relationships among musical events in more than one musical excerpt, thus offering an elegant way of relating them. For example, figure 7.9 in Lewin's GMIT can describe the first phrases of both the first and third movements of Beethoven's Symphony No. 1 in C Major, Op. 21. In this case, the transformation graph's objects are the same in both excerpts from the Beethoven Symphony, but this graph could apply to many more musical examples when the object labels are removed. Further, such a transformational network that gives only the intervals between pitch classes in an excerpt may also describe the differences in the relative durations of another excerpt in a piece, thus succinctly relating two different domains of music analysis. Lewin's observation that only the transformations, and not the objects on which they act, are necessary to specify a transformational network is the main benefit of transformational analysis over traditional object-oriented analysis.
Transformations as functions
The "transformations" of transformational theory are typically modeled as functions that act over some musical space S, meaning that they are entirely defined by their inputs and outputs: for instance, the "ascending major third" might be modeled as a function that takes a particular pitch class as input and outputs the pitch class a major third above it.
However, several theorists have pointed out that ordinary musical discourse often includes more information than functions. For example, a single pair of pitch classes (such as C and E) can stand in multiple relationships: E is both a major third above C and a minor sixth below it. (This is analogous to the fact that, on an ordinary clockface, the number 4 is both four steps clockwise from 12 and 8 steps counterclockwise from it.) For this reason, theorists such as Dmitri Tymoczko have proposed replacing Lewinnian "pitch class intervals" with "paths in pitch class space". More generally, this suggests that there are situations where it might not be useful to model musical motion ("transformations" in the intuitive sense) using functions ("transformations" in the strict sense of Lewinnian theory).
Another issue concerns the role of "distance" in transformational theory. In the opening pages of GMIT, Lewin suggests that a subspecies of "transformations" (namely, musical intervals) can be used to model "directed measurements, distances, or motions". However, the mathematical formalism he uses—which models "transformations" by group elements—does not obviously represent distances, since group elements are not typically considered to have size. (Groups are typically individuated only up to isomorphism, and isomorphism does not necessarily preserve the "sizes" assigned to group elements.) Theorists such as Ed Gollin, Dmitri Tymoczko, and Rachel Hall, have all written about this subject, with Gollin attempting to incorporate "distances" into a broadly Lewinnian framework.
Tymoczko's "Generalizing Musical Intervals" contains one of the few extended critiques of transformational theory, arguing (1) that intervals are sometimes "local" objects that, like vectors, cannot be transported around a musical space; (2) that musical spaces often have boundaries, or multiple paths between the same points, both prohibited by Lewin's formalism; and (3) that transformational theory implicitly relies on notions of distance extraneous to the formalism as such.
Reception
Although transformation theory is more than thirty years old, it did not become a widespread theoretical or analytical pursuit until the late 1990s. Following Lewin's revival (in GMIT) of Hugo Riemann's three contextual inversion operations on triads (parallel, relative, and Leittonwechsel) as formal transformations, the branch of transformation theory called Neo-Riemannian theory was popularized by Brian Hyer (1995), Michael Kevin Mooney (1996), Richard Cohn (1997), and an entire issue of the Journal of Music Theory (42/2, 1998). Transformation theory has received further treatment by Fred Lerdahl (2001), Julian Hook (2002), David Kopp (2002), and many others.
The status of transformational theory is currently a topic of debate in music-theoretical circles. Some authors, such as Ed Gollin, Dmitri Tymoczko and Julian Hook, have argued that Lewin's transformational formalism is too restrictive, and have called for extending the system in various ways. Others, such as Richard Cohn and Steven Rings, while acknowledging the validity of some of these criticisms, continue to use broadly Lewinnian techniques.
See also
Pitch space
Interval vector
References
Further reading
Cohn, Richard. "Neo-Riemannian Operations, Parsimonious Trichords, and their Tonnetz Representations", Journal of Music Theory, 41/1 (1997), 1–66
Hook, Julian. Uniform Triadic Transformations (Ph.D. dissertation, Indiana University, 2002)
Hyer, Brian. "Reimag(in)ing Riemann", Journal of Music Theory, 39/1 (1995), 101–138
Kopp, David. Chromatic Transformations in Nineteenth-century Music (Cambridge University Press, 2002)
Lerdahl, Fred. Tonal Pitch Space (Oxford University Press: New York, 2001)
Lewin, David. "Transformational Techniques in Atonal and Other Music Theories", Perspectives of New Music, xxi (1982–83), 312–371
Lewin, David. Generalized Musical Intervals and Transformations (Yale University Press: New Haven, Connecticut, 1987)
Lewin, David. Musical Form and Transformation: Four Analytic Essays (Yale University Press: New Haven, Connecticut, 1993)
Mooney, Michael Kevin. The 'Table of Relations' and Music Psychology in Hugo Riemann's Chromatic Theory (Ph.D. dissertation, Columbia University, 1996)
Rings, Steven. "Tonality and Transformation" (Oxford University Press: New York, 2011)
Rehding, Alexander and Gollin, Edward. The Oxford Handbook of Neo-Riemannian Music Theories (Oxford University Press: New York 2011)
Tsao, Ming (2010). Abstract Musical Intervals: Group Theory for Composition and Analysis. Berkeley, CA: Musurgia Universalis Press. ISBN 978-1430308355.
External links
Musical systems
Mathematics of music
Post-tonal music theory | Transformational theory | Mathematics | 1,864 |
25,681,155 | https://en.wikipedia.org/wiki/Palomar%20Transient%20Factory | The Palomar Transient Factory (PTF, obs. code: I41), was an astronomical survey using a wide-field survey camera designed to search for optical transient and variable sources such as variable stars, supernovae, asteroids and comets. The project completed commissioning in summer 2009, and continued until December 2012. It has since been succeeded by the Intermediate Palomar Transient Factory (iPTF), which itself transitioned to the Zwicky Transient Facility in 2017/18. All three surveys are registered at the MPC under the same observatory code for their astrometric observations.
Description
The fully automated system included an automated realtime data reduction pipeline, a dedicated photometric follow-up telescope, and a full archive of all detected astronomical sources. The survey was performed with a 12K × 8K, 7.8 square degree CCD array camera re-engineered for the 1.2-meter Samuel Oschin Telescope at Palomar Observatory. The survey camera achieved first light on 13 December 2008.
PTF was a collaboration of Caltech, LBNL, Infrared Processing and Analysis Center, Berkeley, LCOGT, Oxford, Columbia and the Weizmann Institute. The project was led by Shrinivas Kulkarni at Caltech. As of 2018, he leads the Zwicky Transient Facility.
Image Subtraction for near-realtime transient detection was performed at LBNL; efforts to continue to observe interesting targets were coordinated at Caltech, and the data was processed and archived for later retrieval at the Infrared Processing and Analysis Center (IPAC). Photometric and spectroscopic follow-up of detected objects was undertaken by the automated Palomar 1.5-meter telescope and other facilities provided by consortium members.
Time-variability studies were undertaken using the photometric/astrometric pipeline implemented at the Infrared Processing and Analysis Center (IPAC). Studies included compact binaries (AM CVn stars), RR Lyrae, cataclysmic variables, and active galactic nuclei (AGN), and lightcurves of small Solar System bodies.
Scientific goals
PTF covered a wide range of science topics, including supernovae, novae, cataclysmic variables, luminous red novae, tidal disruption flares, compact binaries (AM CVn stars), active galactic nuclei, transiting extrasolar planets, RR Lyrae variable stars, microlensing events, and small Solar System bodies. PTF filled gaps in the knowledge of the optical-transient phase space, extended the understanding of known source classes, and provided the first detections of, or constraints on, predicted but not yet discovered event populations.
Projects
The efforts being undertaken during the five-year project include:
a 5-day cadence supernova search
an exotic transient search with cadences between 90 seconds and 1 day.
a half-sky survey in the H-alpha band
a search for transiting planets in the Orion star formation region.
coordinated observations with the GALEX spacecraft, including a survey of the Kepler region
coordinated observations with the EVLA, including a survey of SDSS Stripe 82
Transient detection
Data taken with the camera were transferred to two automated reduction pipelines. A near-realtime image subtraction pipeline was run at LBNL and had the goal of identifying optical transients within minutes of images being taken. The output of this pipeline was sent to UC Berkeley where a source classifier determined a set of probabilistic statements about the scientific classification of the transients based on all available time-series and context data.
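The core idea of such an image-subtraction pipeline can be sketched as follows. This is a toy illustration with synthetic data, not the actual PTF code; the function and variable names are invented, and a real pipeline also aligns the images and matches their point-spread functions before subtracting.

```python
import numpy as np

def find_transients(reference, new, threshold=5.0):
    """Subtract a reference image from a new image of the same field and
    flag pixels whose residual exceeds `threshold` standard deviations.
    Returns the (row, col) coordinates of candidate transient pixels."""
    diff = new - reference
    sigma = np.std(diff)
    return np.argwhere(np.abs(diff) > threshold * sigma)

rng = np.random.default_rng(0)
reference = rng.normal(100.0, 1.0, size=(64, 64))      # reference sky image
new = reference + rng.normal(0.0, 1.0, size=(64, 64))  # fresh exposure, noise only
new[30, 40] += 50.0                                    # inject a bright transient
candidates = find_transients(reference, new)
```

On real survey images the flagged candidates would then go to a classifier, mirroring the LBNL-to-Berkeley handoff described above, since subtraction residuals alone produce many false positives.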
On few-day timescales the images were also ingested into a database at IPAC. Each incoming frame was calibrated and searched for objects (constant and variable), before the detections were merged into a database. Lightcurves of approximately 500 million objects had been accumulated. This database was planned to be made public after an 18-month proprietary period, subject to available resources.
The Palomar Observatory 60-inch photometric follow-up telescope automatically generated colors and lightcurves for interesting transients detected using the Samuel Oschin Telescope. The PTF collaboration also used a further 15 telescopes for photometric and spectroscopic follow-up.
Near-Earth object observation
PTF used software written to assist a human in weeding out false positives when searching for small near-Earth objects.
Bibliography
2009
N. Law et al., PASP, 121, 1395:"The Palomar Transient Factory: System Overview, Performance, and First Results" — This paper summarizes the PTF project, including several months of on-sky performance tests of the new survey camera, the observing plans, and the data reduction strategy. It also includes details for the first 51 PTF optical transient detections, found in commissioning data.
A. Rau et al., PASP, 121, 1334: "Exploring the Optical Transient Sky with the Palomar Transient Factory" — In this article, the scientific motivation for PTF is presented and a description of the goals and expectations is provided.
2008
G. Rahmer et al., SPIE, 7014, 163: "The 12K×8K CCD mosaic camera for the Palomar Transient Factory" — This paper discusses the modifications to the CFHT 12K CCD camera, improved readout, new filter exchange mechanism, and the field flattener needed to correct for focal plane curvature.
See also
Zooniverse — Galaxy Zoo Supernovae
List of near-Earth object observation projects
References
External links
Intermediate Palomar Transient Factory
2008 in California
2008 in science
Astronomical surveys
California Institute of Technology | Palomar Transient Factory | Astronomy | 1,143 |
40,774,061 | https://en.wikipedia.org/wiki/NGC%204527 | NGC 4527 is a spiral galaxy in the constellation Virgo. It was discovered by German-British astronomer William Herschel on 23 February 1784.
NGC 4527 is a member of the M61 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster.
Characteristics
NGC 4527 is an intermediate spiral galaxy similar to the Andromeda Galaxy. Its distance is not well determined, but it is usually considered an outlying member of the Virgo Cluster of galaxies, placed within the subcluster known as the S Cloud.
Unlike the Andromeda Galaxy, NGC 4527 is a starburst galaxy, with 2.5 billion solar masses of molecular hydrogen concentrated within its innermost regions. However, the starburst is still weak and appears to be in its earliest phases.
Supernovae
Three supernovae have been observed in NGC 4527:
Harlow Shapley discovered SN 1915A (type unknown, mag. 15.5) on 20 March 1915.
Several astronomers reported the discovery of SN 1991T (type Ia-pec, mag. 13) on 13 April 1991.
SN 2004gn (type Ic, mag. 16.6) was discovered on 1 December 2004 by the Lick Observatory Supernova Search (LOSS).
See also
List of NGC objects (4001–5000)
References
External links
Intermediate spiral galaxies
Virgo Cluster
4527
Virgo (constellation)
041789
07721
12315+0255
+01-32-101
17840223
Discoveries by William Herschel | NGC 4527 | Astronomy | 338 |
14,344,447 | https://en.wikipedia.org/wiki/Tybamate | Tybamate (INN; Solacen, Tybatran, Effisax) is an anxiolytic of the carbamate family. It is a prodrug for meprobamate in the same way as the better known drug carisoprodol. It has liver enzyme inducing effects similar to those of phenobarbital but much weaker.
As the trade name Tybatran (Robins), it was formerly available in capsules of 125, 250, and 350 mg, taken 3 or 4 times a day for a total daily dosage of 750 mg to 2 g. The plasma half-life of the drug is three hours. At high doses in combination with phenothiazines, it could produce convulsions.
Synthesis
Catalytic hydrogenation of 2-methyl-2-pentenal (1) gives the aldehyde 2-methylpentanal (2). Treatment with formaldehyde effects a crossed Cannizzaro reaction, yielding 2,2-bis(hydroxymethyl)pentane (3). Cyclisation of this diol with diethyl carbonate gives (4), which reacts with ammonia to provide the carbamate (5). Lastly, treatment with butyl isocyanate (6) produces tybamate.
References
Anxiolytics
Carbamates
Prodrugs
GABAA receptor positive allosteric modulators | Tybamate | Chemistry | 295 |
37,256,856 | https://en.wikipedia.org/wiki/Ross%20Honsberger | Ross Honsberger (1929–2016) was a Canadian mathematician and author on recreational mathematics.
Life
Honsberger studied mathematics at the University of Toronto, earning a bachelor's degree, and then worked for ten years as a teacher in Toronto before continuing his studies at the University of Waterloo (master's degree). From 1964 he was on Waterloo's faculty of mathematics, where he later became a professor emeritus. He worked in combinatorics and optimization, with a particular interest in mathematics education. He developed education courses, for example on combinatorial geometry, frequently lectured to students and mathematics teachers, and was editor of the Ontario Secondary School Mathematics Bulletin. He wrote numerous books on elementary mathematics (geometry, number theory, combinatorics, probability theory) and recreational mathematics (often for the Mathematical Association of America, MAA), which he said were modelled on the book Von Zahlen und Figuren by Hans Rademacher and Otto Toeplitz. His expositions frequently treated problems from the International Mathematical Olympiads and other competitions.
Edsger W. Dijkstra called his Mathematical Gems "delightful".
Books
Ingenuity in Mathematics, New Mathematical Library, Random House / Singer 1970
Mathematical Gems, MAA 1973, 2003 (Dolciani Mathematical Expositions Vol. 1); German edition: Mathematical gems of elementary combinatorics, number theory and geometry, Wiley, 1990. Includes the chapter "The Story of Louis Posa".
Mathematical Gems 2, MAA 1975 (Dolciani Mathematical Expositions Vol. 2)
Mathematical Gems 3, MAA 1985, 1991 (Dolciani Mathematical Expositions Vol. 9)
Mathematical Morsels, MAA 1978 (Dolciani Mathematical Expositions Vol. 3)
More Mathematical Morsels, MAA 1991 (Dolciani Mathematical Expositions Vol. 10)
Mathematical Plums, MAA 1979 (Dolciani Mathematical Expositions Vol. 4)
Mathematical Chestnuts from Around the World, MAA 2001 (Dolciani Mathematical Expositions Vol. 24)
Mathematical Diamonds, MAA 2003
In Pólya's Footsteps, MAA 1997 (Dolciani Mathematical Expositions Vol. 19)
Episodes in Nineteenth and Twentieth Century Euclidean Geometry, MAA 1995
From Erdős to Kiev – Problems of Olympiad Caliber, MAA 1997
Mathematical Delights, MAA 2004 (Dolciani Mathematical Expositions Vol. 28)
References
External links
Ross Honsberger at a website of the University of Waterloo
1929 births
Canadian mathematicians
Recreational mathematicians
Mathematics popularizers
Scientists from Toronto
University of Toronto alumni
2016 deaths
Academic staff of the University of Waterloo | Ross Honsberger | Mathematics | 528 |
22,060,557 | https://en.wikipedia.org/wiki/Gilbert%20Mair%20%28trader%29 | Gilbert Mair (23 May 1799 – 16 July 1857) was a sailor and a merchant trader who visited New Zealand for the first time when he was twenty, and lived there from 1824 till his death. He married Elizabeth Gilbert Puckey – who had the first piano brought to New Zealand in 1827. They had twelve children. Among them were "famous New Zealanders" like Captain Gilbert Mair and Major William Gilbert Mair. Mair is a direct-line ancestor of Māori politician and activist Ken Mair.
In 1835 Gilbert Mair senior signed the Declaration of Independence of New Zealand as a witness (together with James Clendon) when a number of northern Māori rangatira (chiefs) established themselves as representing a confederation under the title of the "United Tribes of New Zealand". Gilbert Mair senior was "present at the signing of the Treaty of Waitangi in 1840, and he and his family were acquainted with many of the noted men who visited the Bay of Islands".
Biography
Mair, born in Peterhead, Scotland, in 1799, had sailed on the whaling vessel New Zealander in 1820. On this occasion he visited New Zealand for the first time. When it returned to England on 2 March 1820, the missionary Thomas Kendall was among the passengers, together with Hongi Hika and Waikato, the two rangatira of Ngāpuhi iwi (tribe) who were the first Māori to come to England.
In 1823 he made his second trip to New Zealand. This time he bought two preserved heads. In 1824 he made his third visit. He would never sail back to England again.
Sailing master of the Herald
The Herald was a 55-ton mission schooner, built on the beach at Paihia in the Bay of Islands. Missionary Henry Williams laid the keel for the vessel in 1824. He needed a ship to provision the mission stations and to visit the more remote areas of New Zealand to bring the Gospel. When Gilbert Mair visited New Zealand for the third time, Williams asked him to assist in building the ship.
When the Herald was finished in 1826, Mair became the sailing master.
He made many voyages. He went to Australia three times, visited the Bay of Plenty four times, and sailed up and down the east coast of the North Island from the East Cape to the North Cape, and on the west coast south to Kawhia.
In May 1828 the Herald foundered, while trying to enter Hokianga Harbour. After the Herald was wrecked, Gilbert Mair purchased land from the natives, built his home at Wahapu and carried on the business of merchant and trader.
Marriage
On his first visit to New Zealand, Gilbert Mair had been in contact with the Puckey family: William Puckey and his wife Margery, their son William Gilbert Puckey (1805–1878) and daughter Elizabeth Gilbert (1809–1870). When he had first met Elizabeth she was only 11 or 12, but when he returned in 1824 "she had grown into a 15-year-old woman".
They married on 12 September 1827 in Sydney, during one of the Herald's voyages there.
They would raise twelve children:
Caroline Elizabeth, the first born in 1828; she died in 1917
Robert (1830–1920). His name "is held in high regard at Whangārei, his life-long home town, to whose people he gave a beautiful park"
William Gilbert Mair (1832–1912), soldier, later a major in the army
Marianne (1834–1893)
Henry Abbott (1836–1881)
Charlotte (1838–1891)
Jessie Eliza (1840–1899)
Gilbert (1843–1923): Gilbert Mair junior, or "Te Kooti's Nemesis"
Matilda Helen (1845–1927)
Emily Francis (1848–1902)
Sophia Marella (1850–1884)
Lavinia Laura, the last born in 1852; she died in 1936
Death
Mair died at "Deveron", Whangārei, in 1857. He was buried on his own property. Many years later his sons removed his remains to the graveyard around the church, where now only members of the Mair family are laid to rest.
Witness of the Musket Wars
During his trips around NZ Gilbert Mair witnessed "the savagery" of the Musket Wars, the wars between Māori iwi (tribes) in the years between 1818 and 1830. He saw for instance the results of a clash at Ohiwa Harbour in 1828, with fifty dead bodies on the shore. And in that same year, he saw the remains of a fight at Te Papa pa at Tauranga Harbour, with "hundreds of bodies of men, women and children, dead animals and human bones, the remnants of a cannibal feast".
He later told his son Gilbert of a visit he had made to the Te Totara Pa site in 1826. Five years earlier, in 1821, a Ngāpuhi taua (war party) led by Hongi Hika had slaughtered the Ngāti Maru living there. When Gilbert Mair senior walked there in 1826, he still found it "... strewn with human bones – a veritable Golgotha".
Shortly after Elizabeth and Gilbert married, in 1828, the famous Ngāpuhi rangatira Hongi Hika died. He had provided protection to the missionary community, and the time following his death was one of considerable anxiety for the settlers.
Trader
In February 1830 Gilbert Mair purchased land at Te Wahapu Point, some four km south of Kororāreka (now Russell). This was the first of a long chain of trading ventures. He paid for the land with goods, including six muskets, many casks of gunpowder and hundreds of musket balls and flints. Here he built up a flourishing trading station, and he built his home on an elevated site above it.
One of the first to exploit the kauri gum industry, he exported gum to the United States and timber and flax to Sydney.
In that same year of settling at Te Wahapu, the so-called Girls' War broke out in Kororāreka, during which the chief Hengi was killed. Eventually the Reverend Henry Williams persuaded the warriors to stop the fighting. The Reverend Samuel Marsden had arrived on a visit, and over the following weeks he and Henry Williams attempted to negotiate a settlement in which Kororāreka would be ceded by Pōmare II as compensation for Hengi's death, which was accepted by those engaged in the fighting. In 1837 Pōmare II fought a three-month war with Tītore in the Bay of Islands. An underlying cause of the fighting was a dispute over the boundary line of the Kororāreka block that had been surrendered as a consequence of the death of Hengi some seven years previously in the Girls' War.
In 1840 the signing of the Treaty of Waitangi finally brought a period of peace to the country.
In 1842 Mair sold his business and property at Wahapu. In the early 1840s he had purchased land at Whangārei; the family moved there in 1842 and lived in a house he called "Deveron". From this base, Mair continued "active trading in a number of fields – kauri timber, kauri gum, whaling, as well as general trading and his own farming venture". When the Flagstaff War began in the Bay of Islands in 1845, the situation became so difficult that Gilbert Mair asked the governor to send a vessel to take all settlers to Auckland. Mair "only had three peaceful years in his new home in Whangārei, when he and his family were driven out by hostile natives, going to Auckland for some months, then back to the Bay in 1846, finally returning to Whangārei in 1847".
Other occupations
Gilbert Mair was appointed a Justice of the Peace by Governor William Hobson.
Mair was "involved in representations to the British government to have New Zealand declared a British colony, and in the formation of the Kororareka Association, a controversial attempt at settler self-rule".
Gilbert Mair "met and entertained many notable people who visited the Bay. Among them was Bishop Broughton of Sydney, who consecrated the Church at Russell in 1842; Bishop Selwyn; Charles Darwin, the celebrated naturalist; Allan Cunningham, a well-known botanist; Admiral Sir James McClintoch (…) and many others".
Samuel Marsden introduced the first horses to New Zealand, from Sydney; Gilbert Mair "bought the next lot from a shipment to Kororareka from Valparaíso. He sold one horse which was sent to the East Coast, and the others he took to Whangārei".
Gilbert Mair Junior
Raised amongst Māori, his son Gilbert was a fluent speaker of the Māori language. During the attack on Auckland by the Ngāti Maniapoto and the Ngāti Hauā in 1863, Gilbert junior joined the Forest Rangers under William Jackson as an ensign, or trainee officer. He took part in the Invasion of Waikato against the rebel Māori Kīngitanga forces and became famous for entering into discussions with the rebels during the Battle of Orakau under a flag of truce. The government forces were aware that a number of women and children were in the stronghold, and Mair pleaded with the rebels to let them out, but they refused and shot Mair in the shoulder. Mair later became an officer and led the hunt for Te Kooti between 1868 and 1872, which ended in the defeat of Te Kooti's guerrillas. Mair was able to convince Ngāi Tūhoe Ringatū followers who had been part of Te Kooti's band to lead the government forces to Te Kooti's secret camp in the Ureweras. He later became a government officer trusted with establishing friendly relationships with Rewi Maniapoto in the 1880s.
Footnotes
Literature
Cowan, James (1933) – 'The Mair Brothers, soldiers and pioneers'; in: New Zealand Railways Magazine, Vol. 8, Issue 8 (1 December 1933) – available online at the New Zealand Electronic Text Centre (NZETC)
Crosby, Ron (2004) – Gilbert Mair, Te Kooti's Nemesis. Reed Publ. Auckland.
Jackson, Lavinia Laura (Mair) (1935) – Annals of a New Zealand Family; The Household of Gilbert Mair, Early Pioneer. Publ. A.H. & A.W. Reed, Dunedin / Wellington
Smith, S. Percy (1910) – Maori Wars of the Nineteenth Century. Christchurch. Available online at NZETC
1799 births
1857 deaths
New Zealand sailors
New Zealand traders
New Zealand people of Scottish descent
People from Peterhead
Settlers of New Zealand
Kauri gum
International Atomic Energy Agency

The International Atomic Energy Agency (IAEA) is an intergovernmental organization that seeks to promote the peaceful use of nuclear energy and to inhibit its use for any military purpose, including nuclear weapons. It was established in 1957 as an autonomous organization within the United Nations system; though governed by its own founding treaty, the organization reports to both the General Assembly and the Security Council of the United Nations, and is headquartered at the UN Office at Vienna, Austria.
The IAEA was created in response to growing international concern toward nuclear weapons, especially amid rising tensions between the foremost nuclear powers, the United States and the Soviet Union. U.S. president Dwight D. Eisenhower's "Atoms for Peace" speech, which called for the creation of an international organization to monitor the global proliferation of nuclear resources and technology, is credited with catalyzing the formation of the IAEA, whose treaty came into force on 29 July 1957 upon U.S. ratification.
The IAEA serves as an intergovernmental forum for scientific and technical cooperation on the peaceful use of nuclear technology and nuclear power worldwide. It maintains several programs that encourage the development of peaceful applications of nuclear energy, science, and technology; provide international safeguards against misuse of nuclear technology and nuclear materials; and promote and implement nuclear safety (including radiation protection) and nuclear security standards. The organization also conducts research in nuclear science and provides technical support and training in nuclear technology to countries worldwide, particularly in the developing world.
Following the ratification of the Treaty on the Non-Proliferation of Nuclear Weapons in 1968, all non-nuclear powers are required to negotiate a safeguards agreement with the IAEA, which is given the authority to monitor nuclear programs and to inspect nuclear facilities. In 2005, the IAEA and its administrative head, Director General Mohamed ElBaradei, were awarded the Nobel Peace Prize "for their efforts to prevent nuclear energy from being used for military purposes and to ensure that nuclear energy for peaceful purposes is used in the safest possible way".
Missions
The IAEA is generally described as having three main missions:
Peaceful uses: Promoting the peaceful uses of nuclear energy by its member states,
Safeguards: Implementing safeguards to verify that nuclear energy is not used for military purposes, and
Nuclear safety: Promoting high standards for nuclear safety.
Peaceful uses
According to Article II of the IAEA Statute, the objectives of the IAEA are "to accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world" and to "ensure ... that assistance provided by it or at its request or under its supervision or control is not used in such a way as to further any military purpose." Its primary functions in this area, according to Article III, are to encourage research and development, to secure or provide materials, services, equipment, and facilities for Member States, and to foster the exchange of scientific and technical information and training.
Three of the IAEA's six departments are principally charged with promoting the peaceful uses of nuclear energy. The Department of Nuclear Energy focuses on providing advice and services to Member States on nuclear power and the nuclear fuel cycle. The Department of Nuclear Sciences and Applications focuses on the use of non-power nuclear and isotope techniques to help IAEA Member States in the areas of water, energy, health, biodiversity, and agriculture. The Department of Technical Cooperation provides direct assistance to IAEA Member States, through national, regional, and inter-regional projects through training, expert missions, scientific exchanges, and provision of equipment.
Safeguards
Article II of the IAEA Statute defines the Agency's twin objectives as promoting peaceful uses of atomic energy and "ensur[ing], so far as it is able, that assistance provided by it or at its request or under its supervision or control is not used in such a way as to further any military purpose." To do this, the IAEA is authorized in Article III.A.5 of the Statute "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities, and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy."
The Department of Safeguards is responsible for carrying out this mission, through technical measures designed to verify the correctness and completeness of states' nuclear declarations.
Nuclear safety
The IAEA classifies safety as one of its top three priorities. In 2011 it spent 8.9 percent of its 352 million-euro ($469 million) regular budget on making plants secure from accidents; the rest of its resources went to the other two priorities, technical co-operation and preventing the proliferation of nuclear weapons.
The IAEA says that, beginning in 1986, in response to the nuclear reactor explosion and disaster near Chernobyl, Ukraine, it redoubled its efforts in the field of nuclear safety, and that the same happened after the 2011 Fukushima disaster in Japan.
In June 2011, the IAEA chief said he had "broad support for his plan to strengthen international safety checks on nuclear power plants to help avoid any repeat of Japan's Fukushima crisis". Peer-reviewed safety checks on reactors worldwide, organized by the IAEA, have been proposed.
History
In 1946 the United Nations Atomic Energy Commission was founded, but it stopped working in 1949 and was disbanded in 1952. In 1953, U.S. President Dwight D. Eisenhower proposed the creation of an international body to both regulate and promote the peaceful use of atomic power (nuclear power), in his Atoms for Peace address to the UN General Assembly. In September 1954, the United States proposed to the General Assembly the creation of an international agency to take control of fissile material, which could be used either for nuclear power or for nuclear weapons. This agency would establish a kind of "nuclear bank".
The United States also called for an international scientific conference on all of the peaceful aspects of nuclear power. By November 1954, it had become clear that the Soviet Union would reject any international custody of fissile material if the United States did not agree to disarmament first, but that a clearinghouse for nuclear transactions might be possible. From 8 to 20 August 1955, the United Nations held the International Conference on the Peaceful Uses of Atomic Energy in Geneva, Switzerland. In October 1956, a Conference on the IAEA Statute was held at the Headquarters of the United Nations to approve the founding document for the IAEA, which was negotiated in 1955–1957 by a group of twelve countries. The Statute of the IAEA was approved on 23 October 1956 and came into force on 29 July 1957.
Former US Congressman W. Sterling Cole served as the IAEA's first Director-General from 1957 to 1961. Cole served only one term, after which the IAEA was headed by two Swedes for nearly four decades: the scientist Sigvard Eklund held the job from 1961 to 1981, followed by former Swedish Foreign Minister Hans Blix, who served from 1981 to 1997. Blix was succeeded as Director General by Mohamed ElBaradei of Egypt, who served until November 2009.
Beginning in 1986, in response to the nuclear reactor explosion and disaster near Chernobyl, Ukraine, the IAEA increased its efforts in the field of nuclear safety. The same happened after the 2011 Fukushima disaster in Fukushima, Japan.
Both the IAEA and its then Director General, ElBaradei, were awarded the Nobel Peace Prize in 2005. In his acceptance speech in Oslo, ElBaradei stated that only one percent of the money spent on developing new weapons would be enough to feed the entire world, and that, if we hope to escape self-destruction, then nuclear weapons should have no place in our collective conscience, and no role in our security.
On 2 July 2009, Yukiya Amano of Japan was elected as the Director General for the IAEA, defeating Abdul Samad Minty of South Africa and Luis E. Echávarri of Spain. On 3 July 2009, the Board of Governors voted to appoint Yukiya Amano "by acclamation", and the IAEA General Conference approved the appointment in September 2009. He took office on 1 December 2009. After Amano's death, his Chief of Coordination Cornel Feruta of Romania was named Acting Director General.
On 2 August 2019, Rafael Grossi was presented as the Argentine candidate to become the Director General of IAEA. On 28 October 2019, the IAEA Board of Governors held its first vote to elect the new Director General, but none of the candidates secured the two-thirds majority (23 votes) in the 35-member IAEA Board of Governors that was needed to be elected. The next day, 29 October, the second voting round was held, and Grossi won 24 votes. He assumed office on 3 December 2019. Following a special meeting of the IAEA General Conference to approve his appointment, on 3 December Grossi became the first Latin American to head the Agency.
During the Russian invasion of Ukraine, Grossi visited Ukraine multiple times as part of the ongoing efforts to help prevent a nuclear accident during the war. He warned against any complacency towards the dangers that the Zaporizhzhia Nuclear Power Plant, Europe's largest nuclear power plant, was facing. The plant has come under fire multiple times during the war.
Structure and function
General
The IAEA's mission is guided by the interests and needs of Member States, strategic plans, and the vision embodied in the IAEA Statute (see below). Three main pillars – or areas of work – underpin the IAEA's mission: Safety and Security; Science and Technology; and Safeguards and Verification.
The IAEA as an autonomous organization is not under the direct control of the UN, but the IAEA does report to both the UN General Assembly and Security Council. Unlike most other specialized international agencies, the IAEA does much of its work with the Security Council, and not with the United Nations Economic and Social Council. The structure and functions of the IAEA are defined by its founding document, the IAEA Statute (see below). The IAEA has three main bodies: the Board of Governors, the General Conference, and the Secretariat.
The IAEA exists to pursue the "safe, secure and peaceful uses of nuclear sciences and technology" (Pillars 2005). The IAEA executes this mission with three main functions: the inspection of existing nuclear facilities to ensure their peaceful use, providing information and developing standards to ensure the safety and security of nuclear facilities, and as a hub for the various fields of science involved in the peaceful applications of nuclear technology.
The IAEA recognizes knowledge as the nuclear energy industry's most valuable asset and resource, without which the industry cannot operate safely and economically. Following IAEA General Conference resolutions since 2002, a formal Nuclear Knowledge Management programme was established to address Member States' priorities in the 21st century.
In 2004, the IAEA developed a Programme of Action for Cancer Therapy (PACT). PACT responds to the needs of developing countries to establish, to improve, or to expand radiotherapy treatment programs. The IAEA is raising money to help efforts by its Member States to save lives and reduce the suffering of cancer victims.
The IAEA has established programs to help developing countries systematically build the capability to manage a nuclear power program, including the Integrated Nuclear Infrastructure Group, which has carried out Integrated Nuclear Infrastructure Review missions in Indonesia, Jordan, Thailand and Vietnam. The IAEA reports that roughly 60 countries are considering how to include nuclear power in their energy plans.
To enhance the sharing of information and experience among IAEA Member States concerning the seismic safety of nuclear facilities, in 2008 the IAEA established the International Seismic Safety Centre. This centre is establishing safety standards and providing for their application in relation to site selection, site evaluation and seismic design.
The IAEA has been headquartered in Vienna, Austria, since its founding. It has two "Regional Safeguards Offices", located in Toronto, Canada, and in Tokyo, Japan, and two liaison offices, located in New York City, United States, and in Geneva, Switzerland. In addition, the IAEA has laboratories and research centers in Seibersdorf, Austria, in Monaco and in Trieste, Italy.
Board of Governors
The Board of Governors is one of two policy-making bodies of the IAEA. The Board consists of 22 member states elected by the General Conference, and at least 10 member states nominated by the outgoing Board. The outgoing Board designates the ten members who are the most advanced in atomic energy technology, plus the most advanced members from any of the following areas that are not represented by the first ten: North America, Latin America, Western Europe, Eastern Europe, Africa, the Middle East and South Asia, South East Asia, the Pacific, and the Far East. These members are designated for one-year terms. The General Conference elects 22 members from the remaining nations to two-year terms; eleven are elected each year. The 22 elected members must also represent a stipulated geographic diversity.
The Board, in its five meetings each year, is responsible for making most of the policies of the IAEA. The Board makes recommendations to the General Conference on IAEA activities and budget, is responsible for publishing IAEA standards and appoints the Director-General subject to General Conference approval. Board members each receive one vote. Budget matters require a two-thirds majority. All other matters require only a simple majority. The simple majority also has the power to stipulate issues that will thereafter require a two-thirds majority. Two-thirds of all Board members must be present to call a vote. The Board elects its own chairman.
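The quorum and majority rules just described can be sketched as a small function. The function name is invented for illustration, the treatment of abstentions and of the two-thirds threshold as a share of votes cast are assumptions, and the default Board size of 35 is taken from the membership figure mentioned elsewhere in this article:

```python
def motion_passes(votes_for: int, votes_against: int, matter: str,
                  members_present: int, total_members: int = 35) -> bool:
    """Illustrative check of a Board of Governors vote.

    Assumptions: abstentions are ignored, thresholds are computed over
    votes cast, and only 'budget' matters need a two-thirds majority.
    """
    # Two-thirds of all Board members must be present to call a vote.
    if 3 * members_present < 2 * total_members:
        return False
    cast = votes_for + votes_against
    if cast == 0:
        return False
    if matter == "budget":
        # Budget matters require a two-thirds majority.
        return 3 * votes_for >= 2 * cast
    # All other matters require only a simple majority.
    return 2 * votes_for > cast
```

For instance, with 24 of 35 members present (just meeting the quorum), a 16-to-8 split passes a budget motion, since 16 votes is exactly two-thirds of the 24 cast, while 15-to-9 fails.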
General Conference
The General Conference is made up of all 180 member states. It meets once a year, typically in September, to approve the actions and budgets passed on from the Board of Governors. The General Conference also approves the nominee for Director General and requests reports from the Board on issues in question (Statute). Each member receives one vote. Issues of budget, Statute amendment and suspension of a member's privileges require a two-thirds majority and all other issues require a simple majority. Similar to the Board, the General Conference can, by simple majority, designate issues to require a two-thirds majority. The General Conference elects a President at each annual meeting to facilitate an effective meeting. The President only serves for the duration of the session (Statute).
The main function of the General Conference is to serve as a forum for debate on current issues and policies. Any of the other IAEA organs, the Director-General, the Board and member states can table issues to be discussed by the General Conference (IAEA Primer). This function of the General Conference is almost identical to the General Assembly of the United Nations.
Secretariat
The Secretariat is the professional and general service staff of the IAEA. The Secretariat is headed by the Director General. The Director General is responsible for enforcement of the actions passed by the Board of Governors and the General Conference. The Director General is selected by the Board and approved by the General Conference for renewable four-year terms. The Director General oversees six departments that do the actual work in carrying out the policies of the IAEA: Nuclear Energy, Nuclear Safety and Security, Nuclear Sciences and Applications, Safeguards, Technical Cooperation, and Management.
The IAEA budget has two parts. The regular budget funds most activities of the IAEA and is assessed to each member nation (€344 million in 2014). The Technical Cooperation Fund is funded by voluntary contributions, with a general target in the US$90 million range.
Criticism
In 2011, Russian nuclear accident specialist Yuliy Andreev was critical of the response to Fukushima and said that the IAEA had not learned from the 1986 Chernobyl disaster. He accused the IAEA and corporations of "wilfully ignoring lessons from the world's worst nuclear accident 25 years ago to protect the industry's expansion". The IAEA's role "as an advocate for nuclear power has made it a target for protests".
The journal Nature has reported that the IAEA response to the 2011 Fukushima Daiichi nuclear disaster in Japan was "sluggish and sometimes confusing", drawing calls for the agency to "take a more proactive role in nuclear safety". But nuclear experts say that the agency's complicated mandate and the constraints imposed by its member states mean that reforms will not happen quickly or easily, although its INES "emergency scale is very likely to be revisited" given the confusing way in which it was used in Japan.
Some scientists say that the Fukushima nuclear accidents have revealed that the nuclear industry lacks sufficient oversight, leading to renewed calls to redefine the mandate of the IAEA so that it can better police nuclear power plants worldwide. There are several problems with the IAEA says Najmedin Meshkati of University of Southern California:
It recommends safety standards, but member states are not required to comply; it promotes nuclear energy, but it also monitors nuclear use; it is the sole global organisation overseeing the nuclear energy industry, yet it is also weighed down by checking compliance with the Nuclear Non-Proliferation Treaty (NPT).
In 2011, the journal Nature reported that the International Atomic Energy Agency should be strengthened to make independent assessments of nuclear safety and that "the public would be better served by an IAEA more able to deliver frank and independent assessments of nuclear crises as they unfold".
Membership
The process of joining the IAEA is fairly simple. Normally, a State would notify the Director General of its desire to join, and the Director would submit the application to the Board for consideration. If the Board recommends approval, and the General Conference approves the application for membership, the State must then submit its instrument of acceptance of the IAEA Statute to the United States, which functions as the depositary Government for the IAEA Statute. The State is considered a member when its acceptance letter is deposited. The United States then informs the IAEA, which notifies other IAEA Member States. Signature and ratification of the Nuclear Non-Proliferation Treaty (NPT) are not preconditions for membership in the IAEA.
The IAEA has 180 member states. Most UN members and the Holy See are Member States of the IAEA.
Four states have withdrawn from the IAEA. North Korea was a Member State from 1974 to 1994, but withdrew after the Board of Governors found it in non-compliance with its safeguards agreement and suspended most technical co-operation. Nicaragua became a member in 1957, withdrew in 1970, and rejoined in 1977; Honduras joined in 1957, withdrew in 1967, and rejoined in 2003; and Cambodia joined in 1958, withdrew in 2003, and rejoined in 2009.
Regional Cooperative Agreements
There are four regional cooperative agreements within the IAEA, whose members share information and organize conferences within their regions:
AFRA
The African Regional Cooperative Agreement for Research, Development and Training Related to Nuclear Science and Technology (AFRA)
ARASIA
Cooperative Agreement for Arab States in Asia for Research, Development and Training related to Nuclear Science and Technology (ARASIA)
RCA
Regional Cooperative Agreement for Research, Development and Training Related to Nuclear Science and Technology for Asia and the Pacific (RCA)
ARCAL
Cooperation Agreement for the Promotion of Nuclear Science and Technology in Latin America and the Caribbean (ARCAL)
List of directors general
Publications
Typically issued in July each year, the IAEA Annual Report summarizes and highlights developments over the past year in major areas of the Agency's work. It includes a summary of major issues, activities, and achievements, and status tables and graphs related to safeguards, safety, and science and technology. Alongside the Annual Report, the IAEA also issues Topical Reviews which detail specific sectors of its work, comprising the Nuclear Safety Review, Nuclear Security Review, Safeguards Implementation Report, Nuclear Technology Review, and Technical Cooperation Report.
IAEA Annual Report 2022
In the 2022 Annual Report, the IAEA demonstrated its commitment to its objectives despite global challenges. The report showcases the IAEA's initiatives aimed at fostering the safe, secure, and peaceful applications of nuclear technology. The agency's "Rays of Hope" initiative marked an effort to reduce disparities in cancer treatment by increasing the availability of radiation medicine, with a particular emphasis on African nations, in partnership with relevant professional societies and the World Health Organization (WHO). In response to the emergent threat posed by zoonotic diseases, the IAEA instituted the Zoonotic Disease Integrated Action (ZODIAC) initiative, which encourages international cooperation with member states, the WHO, and the Food and Agriculture Organization (FAO), to enhance preparedness and response. The "NUTeC Plastics" initiative reflects the agency's engagement with environmental concerns, utilizing nuclear technology to address the growing problem of plastic pollution. The IAEA also made strides in the field of nuclear energy with the introduction of the Nuclear Harmonization and Standardization Initiative (NHSI), aiming to harmonize regulatory standards to facilitate the deployment of small modular reactors, a critical component in the global pursuit of net-zero emissions.
See also
European Organization for Nuclear Research
Global Initiative to Combat Nuclear Terrorism
IAEA Areas
Institute of Nuclear Materials Management
International Energy Agency
International Renewable Energy Agency
International Radiation Protection Association
International reactions to the Fukushima Daiichi nuclear disaster
Lists of nuclear disasters and radioactive incidents
List of states with nuclear weapons
Nuclear ambiguity
Nuclear Energy Agency
OPANAL
Proliferation Security Initiative
United Nations Atomic Energy Commission (UNAEC)
World Association of Nuclear Operators
World Nuclear Association
References
Notes
Works cited
Board of Governors rules
IAEA Primer
Pillars of nuclear cooperation 2005
Radiation Protection of Patients
Further reading
Adamson, Matthew. "Showcasing the international atom: the IAEA Bulletin as a visual science diplomacy instrument, 1958–1962." British Journal for the History of Science (2023): 1–19.
Fischer, David. History of the international atomic energy agency. The first forty years (1. International Atomic Energy Agency, 1997) online.
Holloway, David. "The Soviet Union and the creation of the International Atomic Energy Agency." Cold War History 16.2 (2016): 177–193.
Roehrlich, Elisabeth. "The Cold War, the developing world, and the creation of the International Atomic Energy Agency (IAEA), 1953–1957." Cold War History 16.2 (2016): 195–212.
Roehrlich, Elisabeth. Inspectors for peace: A history of the International Atomic Energy Agency (JHU Press, 2022); full text online in Project MUSE; see also online scholarly review of this book
Scheinman, Lawrence. The international atomic energy agency and world nuclear order (Routledge, 2016) online.
Stoessinger, John G. "The International Atomic Energy Agency: The First Phase." International Organization 13.3 (1959): 394–411.
External links
International Atomic Energy Agency Official Website
NUCLEUS – The IAEA Nuclear Knowledge and Information Portal
Agreement on the Privileges and Immunities of the International Atomic Energy Agency, 1 July 1959
IAEA Department of Technical Cooperation website
Programme of Action for Cancer Therapy (PACT) – Comprehensive Cancer Control Information and Fighting Cancer in Developing Countries
International Nuclear Library Network (INLN)
The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources.
International Atomic Energy Agency
International nuclear energy organizations
Organizations awarded Nobel Peace Prizes
Nuclear proliferation
Atoms for Peace
International organisations based in Austria
Organizations established in 1957
Research institutes established in 1957
Scientific organizations established in 1957
1957 establishments in Austria
1957 in international relations
Weather insurance

Weather insurance insures against weather variations. There are two insurable types of weather insurance: conditional weather insurance and weather cancellation insurance.
The integration of advanced technologies such as AI in the insurance industry is influencing the field of weather insurance. According to a recent industry report, these technologies facilitate the analysis of large datasets, leading to improved predictions of weather patterns. This advancement is important for weather insurance, as it aids in refining risk assessments and pricing models.
Weather cancellation insurance
Weather cancellation insurance reduces an organization's risk in planning an outdoor event. When a company or organization is holding a concert, running a special event, having a sale, or executing any form of outdoor activity, and the weather prevents that activity from taking place, the organization risks losing whatever money has been invested in the planning, organization, marketing and operation of the event. Weather cancellation insurance ensures that if inclement weather does occur, the organization will not lose its investment; instead, an insurance company will cover those costs based on the size and type of the weather cancellation insurance policy purchased.
Conditional weather insurance
Conditional weather insurance gives companies the ability to make promotional sales offers based on the weather; this form of insurance is used by businesses and organizations to increase publicity and drive traffic and sales. With conditional weather insurance an organization can run a promotion advertising up to a 100% rebate on all items purchased during a designated promotional period if a particular type and/or volume of weather occurs on a specific day. For example, a retailer could give a year's worth of payments to the first 100 people who bought a car in November if it snows 6 inches on New Year's Day. This form of insurance is typically used by retailers to drive sales prior to national holidays such as New Year's Day or the Fourth of July.
How weather insurance is rated
Insurance companies rate weather insurance based on the weather peril date, the location of the event (city and state) and history of the weather peril that is being underwritten (temperature, rain, snow, etc.) as well as the size of the policy that is being insured.
For example, a state fair may wish to purchase weather cancellation insurance to cover the costs associated with running an outdoor concert in the event it rains heavily during their outdoor concert hours. The fair would contact their insurance agency no less than two weeks prior to the event date. The insurer would look up the weather history for their particular location. If the client's venue has a history of heavy rains during those dates over the past x years, the premium would be higher than if it were held in an area where rain rarely occurs. The total amount the client wishes to insure is also taken into consideration.
If a car dealership wanted to insure a conditional weather promotion in which new car buyers would receive a rebate if at least 2 inches of rain fell at their location on the day after Easter, the client would need to contact the insurance company about two weeks prior to the peril date. The insurer, in turn, would price the insurance policy based on weather history for that city and state on the date that is being "insured", the type of peril being covered, and the volume of sales expected during the promotion.
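As a rough illustration of the rating logic described above, the sketch below prices a weather policy from the historical frequency of the insured peril. It is purely hypothetical: the function name, the loading factor, and the example numbers are invented for the illustration and do not come from any insurer's actual rating model.

```python
# Hypothetical premium sketch: price a weather policy from the
# historical frequency of the insured peril at the event location.

def estimate_premium(years_peril_occurred, years_observed,
                     insured_amount, loading_factor=1.35):
    """Estimate a premium as expected payout times a loading factor.

    years_peril_occurred -- observed years in which the peril happened
    years_observed       -- total years of weather history examined
    insured_amount       -- maximum payout if the peril occurs
    loading_factor       -- insurer's margin for expenses and profit
    """
    if years_observed <= 0:
        raise ValueError("need at least one year of weather history")
    probability = years_peril_occurred / years_observed
    expected_payout = probability * insured_amount
    return expected_payout * loading_factor

# E.g. at least 2 inches of rain fell on the promotion date in 3 of the
# last 30 years, and the dealership wants to insure $500,000 of rebates:
premium = estimate_premium(3, 30, 500_000)
print(round(premium, 2))  # 0.1 * 500000 * 1.35 = 67500.0
```

The same structure explains why a rainy location commands a higher premium than a dry one: the historical probability term scales the expected payout directly.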
See also
Crop insurance
Flood insurance
Index-based insurance
References
External links
Types of weather insurance products
CNN: "Six inches of snow = Free diamond rings"
ABC 7 Chicago: "Snow means free jewelry at one Ind. store"
Types of insurance
Inclement weather management | Weather insurance | Physics | 705 |
3,235,998 | https://en.wikipedia.org/wiki/Pseudoconvexity | In mathematics, more precisely in the theory of functions of several complex variables, a pseudoconvex set is a special type of open set in the n-dimensional complex space Cn. Pseudoconvex sets are important, as they allow for classification of domains of holomorphy.
Let $G \subset \mathbb{C}^n$ be a domain, that is, an open connected subset. One says that $G$ is pseudoconvex (or Hartogs pseudoconvex) if there exists a continuous plurisubharmonic function $\varphi$ on $G$ such that the set

$$\{ z \in G : \varphi(z) < x \}$$

is a relatively compact subset of $G$ for all real numbers $x$. In other words, a domain is pseudoconvex if it has a continuous plurisubharmonic exhaustion function. Every (geometrically) convex set is pseudoconvex. However, there are pseudoconvex domains which are not geometrically convex.
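As a concrete check of the definition (a standard textbook example, not taken from this article), the unit ball $B = \{ z \in \mathbb{C}^n : \lVert z \rVert < 1 \}$ admits an explicit plurisubharmonic exhaustion function:

```latex
% The function
%   \varphi(z) = -\log(1 - \|z\|^2)
% is plurisubharmonic on B, and its sublevel sets are smaller balls,
% hence relatively compact in B, so B is pseudoconvex:
\[
  \varphi(z) = -\log\left(1 - \lVert z \rVert^{2}\right),
  \qquad
  \{\, z \in B : \varphi(z) < x \,\}
  = \{\, z : \lVert z \rVert^{2} < 1 - e^{-x} \,\}.
\]
```

Each sublevel set is a ball of radius strictly less than 1, and $\varphi(z) \to \infty$ as $z$ approaches the boundary, which is exactly the exhaustion property required by the definition.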
When $G$ has a $C^2$ (twice continuously differentiable) boundary, this notion is the same as Levi pseudoconvexity, which is easier to work with. More specifically, with a $C^2$ boundary, it can be shown that $G$ has a defining function, i.e., there exists $\rho \in C^2(\mathbb{C}^n, \mathbb{R})$ such that $G = \{ \rho < 0 \}$ and $\nabla\rho \neq 0$ on $\partial G$. Now, $G$ is pseudoconvex iff for every $p \in \partial G$ and $w$ in the complex tangent space at $p$, that is,

$$\sum_{i=1}^{n} \frac{\partial \rho}{\partial z_i}(p)\, w_i = 0,$$

we have

$$\sum_{i,j=1}^{n} \frac{\partial^2 \rho}{\partial z_i\, \partial \bar{z}_j}(p)\, w_i \bar{w}_j \geq 0.$$
The definition above is analogous to definitions of convexity in real analysis.
If $G$ does not have a $C^2$ boundary, the following approximation result can be useful.
Proposition 1: If $G$ is pseudoconvex, then there exist bounded, strongly Levi pseudoconvex domains $G_k$ with $C^\infty$ (smooth) boundary which are relatively compact in $G$, such that $G = \bigcup_{k=1}^{\infty} G_k$.
This is because once we have a $\varphi$ as in the definition, we can actually find a $C^\infty$ exhaustion function.
The case n = 1
In one complex dimension, every open domain is pseudoconvex. The concept of pseudoconvexity is thus more useful in dimensions higher than 1.
See also
Analytic polyhedron
Eugenio Elia Levi
Holomorphically convex hull
Stein manifold
References
Lars Hörmander, An Introduction to Complex Analysis in Several Variables, North-Holland, 1990.
Steven G. Krantz. Function Theory of Several Complex Variables, AMS Chelsea Publishing, Providence, Rhode Island, 1992.
External links
Several complex variables | Pseudoconvexity | Mathematics | 455 |
73,861,379 | https://en.wikipedia.org/wiki/Free-flowering | In gardening, the term free-flowering is used to describe flowering plants that have a long bloom time and may often lack a defined blooming season, whereby producing flowers profusely over an extended period of time, at times all-year round. The terms long-flowering and long-blooming are also used for perennial plants that bloom for much of the year.
Examples
Examples of free-flowering or long flowering plants include salvias, thunbergias, loniceras, roses, lavenders, periwinkles, gaillardias, oleanders, begonias, bougainvilleas, morning glories, geraniums/pelargoniums, hibiscuses, and lantanas.
List
This list includes plant species that are free-flowering, particularly in warmer climates:
Adenium obesum
Ajuga reptans
Allamanda cathartica
Canna indica
Cestrum parqui
Crossandra infundibuliformis
Clitoria ternatea
Coleus neochilus
Dimorphotheca ecklonis
Euphorbia milii
Euryops pectinatus
Hibbertia scandens
Impatiens hawkeri
Ipomoea cairica
Ipomoea indica
Ixora coccinea
Jatropha curcas
Mandevilla sanderi
Maurandya scandens
Murraya paniculata
Mussaenda erythrophylla
Pandorea jasminoides
Plumbago auriculata
Plumbago indica
Pseudogynoxys chenopodioides
Salvia splendens
Streptocarpus sect. Saintpaulia
Thunbergia alata
Thunbergia erecta
Tibouchina urvilleana
Westringia fruticosa
References
Angiosperms
Flowers
Periodic phenomena
Gardening
Plants that can bloom all year round | Free-flowering | Biology | 381 |
74,590,040 | https://en.wikipedia.org/wiki/Leucocoprinus%20minutulus | Leucocoprinus minutulus is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1941 by the German mycologist Rolf Singer who classified it as Leucocoprinus minutulus.
Description
Leucocoprinus minutulus is a small dapperling mushroom with thin white flesh.
Cap: 1.2cm wide with a brown surface and darker umbo. The margins are scaly and lacerated.
Stem: 3cm long and 2-3mm wide with a slightly bulbous base. The surface is smooth and shiny and the interior is filled with white. The exterior colour is only described when dry, when it is brownish. The movable, double stem ring is very wide at 7mm, horizontally flat and is white-brown with light fringes at the edges.
Gills: Free and remote from the stem, crowded and moderately wide at approximately 2mm. They are white but dry to a brownish colour.
Spores: 7.5-11 x 5.8-6 μm. Hyaline with an apical germ pore and double membrane.
Basidia: 28-33 x 8-9 μm. Four spored.
Cheilocystidia: Numerous. 42-65 x 10-16.5 μm with an appendix that is 9-12 μm long. Club-shaped with an appendage (flask-shaped); described by Singer as "clavato-appendiculatis (ampulliformibus)".
Habitat and distribution
L. minutulus is scarcely recorded and little known. The specimens studied by Singer were found in mixed forest containing Abies nordmanniana and Fagus orientalis, in a river valley in the Krasnodar region of Russia.
References
Leucocoprinus
Taxa named by Rolf Singer
Fungi described in 1941
Fungus species | Leucocoprinus minutulus | Biology | 367 |
27,010 | https://en.wikipedia.org/wiki/Software%20engineering | Software engineering is a field within computer science focused on designing, developing, testing, and maintaining of software applications. It involves applying engineering principles and computer programming expertise to develop software systems that meet user needs.
The terms programmer and coder overlap with software engineer, but they imply only the construction aspect of a typical software engineer's workload.
A software engineer applies a software development process, which involves defining, implementing, testing, managing, and maintaining software systems, as well as creating and modifying the development process itself.
History
Beginning in the 1960s, software engineering was recognized as a separate field of engineering.
The development of software engineering was seen as a struggle. Problems included software that was over budget, exceeded deadlines, required extensive debugging and maintenance, failed to meet the needs of consumers, or was never completed.
In 1968, NATO held the first software engineering conference where issues related to software were addressed. Guidelines and best practices for the development of software were established.
The origins of the term software engineering have been attributed to various sources. The term appeared in a list of services offered by companies in the June 1965 issue of "Computers and Automation" and was used more formally in the August 1966 issue of Communications of the ACM (Volume 9, number 8) in "President's Letter to the ACM Membership" by Anthony A. Oettinger. It is also associated with the title of a NATO conference in 1968 by Professor Friedrich L. Bauer. Margaret Hamilton described the discipline of "software engineering" during the Apollo missions to give what they were doing legitimacy. At the time there was perceived to be a "software crisis". The 40th International Conference on Software Engineering (ICSE 2018) celebrates 50 years of "Software Engineering" with the Plenary Sessions' keynotes of Frederick Brooks and Margaret Hamilton.
In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States.
Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process. The Process Maturity Levels introduced became the Capability Maturity Model Integration for Development (CMMI-DEV), which defined how the US Government evaluates the abilities of a software development team.
Modern, generally accepted best-practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK). Software engineering is considered one of the major computing disciplines.
Terminology
Definition
Notable definitions of software engineering include:
"The systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software."—The Bureau of Labor Statistics—IEEE Systems and software engineering – Vocabulary
"The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software."—IEEE Standard Glossary of Software Engineering Terminology
"An engineering discipline that is concerned with all aspects of software production."—Ian Sommerville
"The establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines."—Fritz Bauer
"A branch of computer science that deals with the design, implementation, and maintenance of complex computer programs."—Merriam-Webster
"'Software engineering' encompasses not just the act of writing code, but all of the tools and processes an organization uses to build and maintain that code over time. [...] Software engineering can be thought of as 'programming integrated over time.'"—Software Engineering at Google
The term has also been used less formally:
as the informal contemporary term for the broad range of activities that were formerly called computer programming and systems analysis
as the broad term for all aspects of the practice of computer programming, as opposed to the theory of computer programming, which is formally studied as a sub-discipline of computer science
as the term embodying the advocacy of a specific approach to computer programming, one that urges that it be treated as an engineering discipline rather than an art or a craft, and advocates the codification of recommended practices
Etymology
Margaret Hamilton promoted the term "software engineering" during her work on the Apollo program. The term "engineering" was used to acknowledge that the work should be taken just as seriously as other contributions toward the advancement of technology. Hamilton details her use of the term:When I first came up with the term, no one had heard of it before, at least in our world. It was an ongoing joke for a long time. They liked to kid me about my radical ideas. It was a memorable day when one of the most respected hardware gurus explained to everyone in a meeting that he agreed with me that the process of building software should also be considered an engineering discipline, just like with hardware. Not because of his acceptance of the new "term" per se, but because we had earned his and the acceptance of the others in the room as being in an engineering field in its own right.
Suitability
Individual commentators have disagreed sharply on how to define software engineering or its legitimacy as an engineering discipline. David Parnas has said that software engineering is, in fact, a form of engineering. Steve McConnell has said that it is not, but that it should be. Donald Knuth has said that programming is an art and a science. Edsger W. Dijkstra claimed that the terms software engineering and software engineer have been misused in the United States.
Workload
Requirements analysis
Requirements engineering is about elicitation, analysis, specification, and validation of requirements for software. Software requirements can be functional, non-functional or domain.
Functional requirements describe expected behaviors (i.e. outputs). Non-functional requirements specify issues like portability, security, maintainability, reliability, scalability, performance, reusability, and flexibility. They are classified into the following types: interface constraints, performance constraints (such as response time, security, storage space, etc.), operating constraints, life cycle constraints (maintainability, portability, etc.), and economic constraints. Knowledge of how the system or software works is needed when it comes to specifying non-functional requirements. Domain requirements have to do with the characteristic of a certain category or domain of projects.
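As a toy illustration of the distinction drawn above (the names, metric, and threshold are invented for the example), a non-functional performance constraint such as a response-time bound can be expressed as a checkable artifact:

```python
# Hypothetical sketch: representing a non-functional requirement as
# data that can be checked against a measured value.
from dataclasses import dataclass

@dataclass
class NonFunctionalRequirement:
    name: str
    metric: str          # e.g. "response_time_ms"
    limit: float         # upper bound the system must satisfy

def satisfies(req: NonFunctionalRequirement, measured_value: float) -> bool:
    """A non-functional requirement holds when the measured metric
    stays within its limit."""
    return measured_value <= req.limit

latency_req = NonFunctionalRequirement(
    name="checkout latency", metric="response_time_ms", limit=200.0)

print(satisfies(latency_req, 150.0))  # True  -> requirement met
print(satisfies(latency_req, 250.0))  # False -> requirement violated
```

A functional requirement, by contrast, would be checked by comparing the system's actual outputs against its expected behaviors rather than against a performance bound.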
Design
Software design is the process of making high-level plans for the software. Design is sometimes divided into levels:
Interface design plans the interaction between a system and its environment as well as the inner workings of the system.
Architectural design plans the major components of a system, including their responsibilities, properties, and interfaces between them.
Detailed design plans internal elements, including their properties, relationships, algorithms and data structures.
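A minimal sketch of how two of these levels show up in code (purely illustrative; the component names are invented): architectural design declares a component's responsibilities and interface, while detailed design commits to concrete data structures and algorithms behind it.

```python
# Illustrative only: an interface (architectural design) versus one
# concrete implementation (detailed design).
from abc import ABC, abstractmethod

class MessageStore(ABC):
    """Architectural design: the component's responsibility and
    interface, with no commitment to internals."""
    @abstractmethod
    def save(self, message: str) -> None: ...

    @abstractmethod
    def count(self) -> int: ...

class InMemoryMessageStore(MessageStore):
    """Detailed design: a concrete choice of data structure (a list)
    and algorithms behind the same interface."""
    def __init__(self) -> None:
        self._messages: list[str] = []

    def save(self, message: str) -> None:
        self._messages.append(message)

    def count(self) -> int:
        return len(self._messages)

store = InMemoryMessageStore()
store.save("hello")
print(store.count())  # 1
```

A different detailed design (say, a database-backed store) could replace the implementation without changing the architectural interface that other components depend on.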
Construction
Software construction typically involves programming (a.k.a. coding), unit testing, integration testing, and debugging so as to implement the design. “Software testing is related to, but different from, ... debugging”.
Testing during this phase is generally performed by the programmer and with the purpose to verify that the code behaves as designed and to know when the code is ready for the next level of testing.
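For example (a hypothetical function and tests, not drawn from any particular project), programmer-level testing during construction verifies that a unit of code behaves as designed before it moves on to the next level of testing:

```python
# Hypothetical unit under construction, with programmer-level tests.
def parse_version(text: str) -> tuple:
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError(f"expected three components, got {text!r}")
    return tuple(int(p) for p in parts)

def test_parse_version():
    # Verify the code behaves as designed for normal inputs...
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version(" 10.0.1 ") == (10, 0, 1)
    # ...and that malformed input is rejected as specified.
    try:
        parse_version("1.2")
    except ValueError:
        pass
    else:
        raise AssertionError("malformed input should raise ValueError")

test_parse_version()
print("all unit tests passed")
```

Such tests are written and run by the programmer during construction; system-level testing by separate test engineers, described in the next section, comes later.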
Testing
Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the software under test.
When described separately from construction, testing typically is performed by test engineers or quality assurance instead of the programmers who wrote it. It is performed at the system level and is considered an aspect of software quality.
Program analysis
Program analysis is the process of analyzing computer programs with respect to an aspect such as performance, robustness, and security.
Maintenance
Software maintenance refers to supporting the software after release. It may include but is not limited to: error correction, optimization, deletion of unused and discarded features, and enhancement of existing features.
Usually, maintenance takes up 40% to 80% of project cost.
Education
Knowledge of computer programming is a prerequisite for becoming a software engineer. In 2004, the IEEE Computer Society produced the SWEBOK, which has been published as ISO/IEC Technical Report 19759:2005, describing the body of knowledge that they recommend to be mastered by a graduate software engineer with four years of experience.
Many software engineers enter the profession by obtaining a university degree or training at a vocational school. One standard international curriculum for undergraduate software engineering degrees was defined by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, and updated in 2014. A number of universities have Software Engineering degree programs; there were 244 campus Bachelor of Software Engineering programs, 70 online programs, 230 master's-level programs, 41 doctorate-level programs, and 69 certificate-level programs in the United States.
In addition to university education, many companies sponsor internships for students wishing to pursue careers in information technology. These internships can introduce the student to real-world tasks that typical software engineers encounter every day. Similar experience can be gained through military service in software engineering.
Software engineering degree programs
Half of all practitioners today have degrees in computer science, information systems, or information technology. A small but growing number of practitioners have software engineering degrees. In 1987, the Department of Computing at Imperial College London introduced the first three-year software engineering bachelor's degree in the world; in the following year, the University of Sheffield established a similar program. In 1996, the Rochester Institute of Technology established the first software engineering bachelor's degree program in the United States; however, it did not obtain ABET accreditation until 2003, the same year as Rice University, Clarkson University, Milwaukee School of Engineering, and Mississippi State University. In 1997, PSG College of Technology in Coimbatore, India was the first to start a five-year integrated Master of Science degree in Software Engineering.
Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees, SE2004, was defined by a steering committee between 2001 and 2004 with funding from the Association for Computing Machinery and the IEEE Computer Society. About 50 universities in the U.S. offer software engineering degrees, which teach both computer science and engineering principles and practices. The first software engineering master's degree was established at Seattle University in 1979. Since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized several software engineering programs.
In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in Software Engineering in the world. Additionally, many online advanced degrees in Software Engineering have appeared such as the Master of Science in Software Engineering (MSE) degree offered through the Computer Science and Engineering Department at California State University, Fullerton. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers. ETS (École de technologie supérieure) University and UQAM (Université du Québec à Montréal) were mandated by IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has become an ISO standard describing the body of knowledge covered by a software engineer.
Profession
Legal requirements for the licensing or certification of professional software engineers vary around the world. In the UK, there is no licensing or legal requirement to assume or use the job title Software Engineer. In some areas of Canada, such as Alberta, British Columbia, Ontario, and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation. In Europe, Software Engineers can obtain the European Engineer (EUR ING) professional title. Software Engineers can also become professionally qualified as a Chartered Engineer through the British Computer Society.
In the United States, the NCEES began offering a Professional Engineer exam for Software Engineering in 2013, thereby allowing Software Engineers to be licensed and recognized. NCEES ended the exam after April 2019 due to lack of participation. Mandatory licensing is currently still largely debated, and perceived as controversial.
The IEEE Computer Society and the ACM, the two main US-based professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge – 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The most current version is SWEBOK v4. The IEEE also promulgates a "Software Engineering Code of Ethics".
Employment
There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016.
Many software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process. Other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Many companies hire interns, often university or college students during a summer break, or externships. Specializations include analysts, architects, developers, testers, technical support, middleware analysts, project managers, software product managers, educators, and researchers.
Most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008. Potential injuries in these occupations are possible because, like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, thrombosis, obesity, and hand and wrist problems such as carpal tunnel syndrome.
United States
The U.S. Bureau of Labor Statistics (BLS) counted 1,365,500 software developers holding jobs in the U.S. in 2018. Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees. The BLS estimates that computer software engineering employment will increase by 17% from 2023 to 2033, down from its estimate of 25% for 2022 to 2032, and further down from its estimate of 30% for 2010 to 2020. Due to this trend, job growth may not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States would instead be outsourced to computer software engineers in countries such as India. In addition, the BLS Occupational Outlook for computer programmers predicts a decline of 7 percent from 2016 to 2026, a further decline of 9 percent from 2019 to 2029, a decline of 10 percent from 2021 to 2031, and then a decline of 11 percent from 2022 to 2032. Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. Furthermore, the proportion of women in many software fields has been declining over the years as compared to other engineering fields. There is also the concern that recent advances in artificial intelligence might impact the demand for future generations of software engineers. However, this trend may change or slow in the future as many current software engineers in the U.S. market leave the profession or age out of the market in the next few decades.
Certification
The Software Engineering Institute offers certifications on specific topics like security, process improvement and software architecture. IBM, Microsoft and other companies also sponsor their own certification examinations. Many IT certification programs are oriented toward specific technologies, and managed by the vendors of these technologies. These certification programs are tailored to the institutions that would employ people who use these technologies.
Broader certification of general software engineering skills is available through various professional societies. The IEEE had certified over 575 software professionals as a Certified Software Development Professional (CSDP). In 2008 they added an entry-level certification known as the Certified Software Development Associate (CSDA). The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM and the IEEE Computer Society together examined the possibility of licensing of software engineers as Professional Engineers in the 1990s, but eventually decided that such licensing was inappropriate for the professional industrial practice of software engineering. John C. Knight and Nancy G. Leveson presented a more balanced analysis of the licensing issue in 2002.
In the U.K. the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified members (MBCS). Software engineers may be eligible for membership of the British Computer Society or Institution of Engineering and Technology and so qualify to be considered for Chartered Engineer status through either of those institutions. In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). In Ontario, Canada, software engineers who graduate from a Canadian Engineering Accreditation Board (CEAB) accredited program, successfully complete PEO's (Professional Engineers Ontario) Professional Practice Examination (PPE), and have at least 48 months of acceptable engineering experience are eligible to be licensed through Professional Engineers Ontario and can become Professional Engineers (P.Eng). The PEO does not, however, recognize any online or distance education, and does not consider computer science programs to be equivalent to software engineering programs despite the tremendous overlap between the two. This has sparked controversy and a certification war. It has also kept the number of P.Eng holders for the profession exceptionally low. The vast majority of working professionals in the field hold a degree in CS, not SE. Given the difficult certification path for holders of non-SE degrees, most never bother to pursue the license.
Impact of globalization
The initial impact of outsourcing, and the relatively lower cost of international human resources in developing third-world countries, led to a massive migration of software development activities from corporations in North America and Europe to India and later to China, Russia, and other developing countries. This approach had some flaws, mainly the distance and time-zone differences that prevented human interaction between clients and developers, and the massive job transfer. This had a negative impact on many aspects of the software engineering profession. For example, some students in the developed world avoid education related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers. Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. Nevertheless, the ability to smartly leverage offshore and near-shore resources via the follow-the-sun workflow has improved the overall operational capability of many organizations. When North Americans leave work, Asians are just arriving to work. When Asians are leaving work, Europeans arrive to work. This provides a continuous ability to have human oversight on business-critical processes 24 hours per day, without paying overtime compensation or disrupting employees' sleep patterns.
While global outsourcing has several advantages, global – and generally distributed – development can run into serious difficulties resulting from the distance between developers. This is due to the key elements of this type of distance that have been identified as geographical, temporal, cultural and communication (that includes the use of different languages and dialects of English in different locations). Research has been carried out in the area of global software development over the last 15 years and an extensive body of relevant work published that highlights the benefits and problems associated with the complex activity. As with other aspects of software engineering research is ongoing in this and related areas.
Prizes
There are various prizes in the field of software engineering:
ACM-AAAI Allen Newell Award- USA. Awarded to career contributions that have breadth within computer science, or that bridge computer science and other disciplines.
BCS Lovelace Medal. Awarded to individuals who have made outstanding contributions to the understanding or advancement of computing.
ACM SIGSOFT Outstanding Research Award, selected for individual(s) who have made “significant and lasting research contributions to the theory or practice of software engineering.”
More ACM SIGSOFT Awards.
The Codie award, a yearly award issued by the Software and Information Industry Association for excellence in software development within the software industry.
Harlan Mills Award for "contributions to the theory and practice of the information sciences, focused on software engineering".
ICSE Most Influential Paper Award.
Jolt Award, also for the software industry.
Stevens Award given in memory of Wayne Stevens.
Criticism
Some call for licensing, certification and codified bodies of knowledge as mechanisms for spreading the engineering knowledge and maturing the field.
Some claim that the concept of software engineering is so new that it is rarely understood, and it is widely misinterpreted, including in software engineering textbooks, papers, and among the communities of programmers and crafters.
Some claim that a core issue with software engineering is that its approaches are not empirical enough because a real-world validation of approaches is usually absent, or very limited and hence software engineering is often misinterpreted as feasible only in a "theoretical environment."
Edsger Dijkstra, a founder of many of the concepts in software development today, rejected the idea of "software engineering" up until his death in 2002, arguing that those terms were poor analogies for what he called the "radical novelty" of computer science:
See also
Study and practice
Computer science
Data engineering
Software craftsmanship
Software development
Release engineering
Roles
Programmer
Systems analyst
Systems architect
Professional aspects
Bachelor of Science in Information Technology
Bachelor of Software Engineering
List of software engineering conferences
List of computer science journals (including software engineering journals)
Software Engineering Institute
References
Citations
Sources
Further reading
External links
Pierre Bourque; Richard E. Fairley, eds. (2004). Guide to the Software Engineering Body of Knowledge Version 3.0 (SWEBOK), https://www.computer.org/web/swebok/v3. IEEE Computer Society.
The Open Systems Engineering and Software Development Life Cycle Framework OpenSDLC.org the integrated Creative Commons SDLC
Software Engineering Institute Carnegie Mellon
Engineering disciplines | Software engineering | Technology,Engineering | 4,514 |
9,513,764 | https://en.wikipedia.org/wiki/Generic%20Authentication%20Architecture | Generic Authentication Architecture (GAA) is a standard made by 3GPP defined in TR 33.919. Taken from the document:
"This Technical Report aims to give an overview of the different mechanisms that mobile applications can rely upon for authentication between server and client (i.e. the UE). Additionally it provides guidelines related to the use of GAA and to the choice of authentication mechanism in a given situation and for a given application".
Related standards are Generic Bootstrapping Architecture (GBA) and Support for Subscriber Certificates (SSC).
External links
3GPP
Mobile telecommunications standards
3GPP standards | Generic Authentication Architecture | Technology | 128 |
33,688,786 | https://en.wikipedia.org/wiki/Ador%20Welding | Ador Welding Limited (formerly known as Advani–Oerlikon Limited) is an industrial manufacturing company headquartered in Mumbai, India. The flagship company of the Ador Group, Ador Welding produces a variety of welding products, industry applications, and technology services, including welding consumables (electrodes, wires, and fluxes) as well as welding and cutting equipment. It has over 30% market share in the organized welding market and is considered one of the major players in the Indian welding industry. Ador PEB is the company's project engineering division. PEB is based in Pune and has provided services to the Indian Government's Bharat Nirman Program in the field of combustion and thermal engineering technologies.
History
Ador Welding Limited was formerly known as Advani–Oerlikon Ltd. and traces its history back to 1908 and five men, all originally from Karachi: Kanwalsing Malkani, Vasanmal Malakani, Jotsing Advani, Bhagwansing Advani, and Gopaldas Mirchandani. In 1951, to meet India's growing demand for welding electrodes, JB Advani & Co formed a joint venture with Oerlikon-Buhrle of Switzerland to manufacture them. The resulting company, Advani–Oerlikon Ltd., helped build India's welding industry, which at the time was in its infancy.
In the early 1970s, the company began expanding into other sectors in India, such as electronic equipment, metal reclamation solutions for industrial components, energy solutions and cosmetics. During this decade, a separate entity, Ador Powertron, was established for manufacturing electronic power equipment. The 1970s also saw the formation of Ador Fontech, a company providing metal reclamation and surfacing solutions for industrial components. Within ten years, Ador Fontech had become the number two company after Larsen and Toubro in the metal reclamation sector.
In 1986 JB Advani & Co launched an initial public offering and ceased to be the holding company of Advani-Oerlikon Ltd. In 2003 the joint Advani–Oerlikon association came to an end and the company changed to its current name Ador Welding Limited.
Ador Welding PEB
Milestones
1989: Project engineering business established
1994: Enlisted as LSTK contractor for flare systems
1998: Awarded ISO 9001 certification for the entire plant
1999: Ventured out independently as Ador Technologies Ltd.
2001: Merged with Ador Welding Ltd as PEB – a division of AWL
2002: Registered in Oman for flare systems
2005: Awarded OHSAS 18001 certification for the entire plant
2007: Established fabrication unit for process equipment (pressure vessels, heat exchangers, reactors)
2008: Awarded ASME approval for "U", "R" and "NB" stamps for pressure vessels and heat exchangers
2008: Registered in the UAE for flares
2011: Re-certified for ASME approval in "U" and "R"
Facilities
A distributor and sales network of over 125 outlets throughout India
Four manufacturing units located in Silvassa, Coimbatore, Raipur, and Pune (India). (The units are ISO 9001:2000 certified for quality management systems and ISO 14001:2004 standard for environment management systems and are audited by NCC Abu Dhabi and PDO Oman.)
Eleven area offices in India and one overseas office in Dubai (United Arab Emirates)
Distributor bases in the Persian Gulf/Middle East, Southeast Asia and Africa
Two Research and Development Centers in India at Pune and Silvassa
The Ador Welding Academy (AWA) for training welders in welding techniques and procedures. The AWA (Formerly AIWT) has trained over 60,000 welders in India
Leadership
The following are the key executives of the company as of 2011:
N Malkani Nagpal, Director
R A Mirchandani, Director
A T Malkani, Director
D A Lalvani, Director
Clients
Ador Welding's services to clients have included:
Providing welding service applications and consumables to the Navaratna Companies.
Developing special electrodes for welding of the penstocks at the Bhakra Dam, as well as those at Hirakud, Nagarjuna Sagar, and Koyna.
Assisting in the erection of plants and the introduction of rutile-based electrodes and orbital welders for Bharat Heavy Electricals Limited, National Thermal Power Corporation, and Neyveli Lignite Corporation.
Providing solutions for welding high-alloy austenitic materials, high-temperature materials and capillary tubes for Bhabha Atomic Research Centre, and nuclear power plants at Kalpakkam, Narora, and Kaiga.
Developing a high productivity welding process for Suzlon, NEPC Group, and Wescare (India) Limited.
Providing technical knowhow and consumables for the erection of plants at Bhilai, Salem, and Durgapur.
Developing special welding applications and consumables for ONGC, GAIL, and Bharat Petroleum.
Helping to set up the structure and mining process operations for Coal India, Nalco, Bharat Gold Mines, and Indian Rare Earths Limited.
Providing technology and consumables for maintenance of worn out rail track points and crosses for Indian Railways.
References
Welding
Manufacturing companies based in Mumbai
Manufacturing companies established in 1951
Indian companies established in 1951
1951 establishments in Bombay State
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange | Ador Welding | Engineering | 1,115 |
385,846 | https://en.wikipedia.org/wiki/Kaon | In particle physics, a kaon, also called a K meson and denoted K, is any of a group of four mesons distinguished by a quantum number called strangeness. In the quark model they are understood to be bound states of a strange quark (or antiquark) and an up or down antiquark (or quark).
Kaons have proved to be a copious source of information on the nature of fundamental interactions since their discovery by George Rochester and Clifford Butler at the Department of Physics and Astronomy, University of Manchester in cosmic rays in 1947. They were essential in establishing the foundations of the Standard Model of particle physics, such as the quark model of hadrons and the theory of quark mixing (the latter was acknowledged by a Nobel Prize in Physics in 2008). Kaons have played a distinguished role in our understanding of fundamental conservation laws: CP violation, a phenomenon generating the observed matter–antimatter asymmetry of the universe, was discovered in the kaon system in 1964 (which was acknowledged by a Nobel Prize in 1980). Moreover, direct CP violation was discovered in the kaon decays in the early 2000s by the NA48 experiment at CERN and the KTeV experiment at Fermilab.
Basic properties
The four kaons are:
K⁻, negatively charged (containing a strange quark and an up antiquark), has mass 493.677 MeV/c² and mean lifetime 1.238×10⁻⁸ s.
K⁺ (antiparticle of above), positively charged (containing an up quark and a strange antiquark), must (by CPT invariance) have mass and lifetime equal to those of the K⁻. Experimentally, the measured mass difference is consistent with zero, as is the difference in lifetimes.
K⁰, neutrally charged (containing a down quark and a strange antiquark), has mass 497.611 MeV/c². Its mean squared charge radius is small and negative.
K̄⁰, neutrally charged (antiparticle of above) (containing a strange quark and a down antiquark), has the same mass.
As the quark model shows, the kaons form two doublets of isospin; that is, they belong to the fundamental representation of SU(2) called the 2. One doublet of strangeness +1 contains the K⁺ and the K⁰. The antiparticles form the other doublet (of strangeness −1).
[*] See Notes on neutral kaons in the article List of mesons, and neutral kaon mixing, below.
[§] Strong eigenstate. No definite lifetime (see neutral kaon mixing).
[†] Weak eigenstate. Its makeup omits the small CP-violating term (see neutral kaon mixing).
[‡] The masses of the K_L and K_S are given as that of the K⁰. However, a relatively minute difference between the masses of the K_L and K_S, on the order of 10⁻¹² MeV/c², is known to exist.
Although the K⁰ and its antiparticle K̄⁰ are usually produced via the strong force, they decay weakly. Thus, once created the two are better thought of as superpositions of two weak eigenstates which have vastly different lifetimes:
The long-lived neutral kaon is called the K_L ("K-long"); it decays primarily into three pions and has a mean lifetime of 5.12×10⁻⁸ s.
The short-lived neutral kaon is called the K_S ("K-short"); it decays primarily into two pions and has a mean lifetime of 8.95×10⁻¹¹ s.
(See discussion of neutral kaon mixing below.)
An experimental observation made in 1964 that K-longs rarely decay into two pions was the discovery of CP violation (see below).
Main decay modes for the K⁺ (with approximate branching fractions):
K⁺ → μ⁺ + ν_μ (63.6%)
K⁺ → π⁺ + π⁰ (20.7%)
K⁺ → π⁺ + π⁺ + π⁻ (5.6%)
K⁺ → π⁰ + e⁺ + ν_e (5.1%)
Decay modes for the K⁻ are charge conjugates of the ones above.
Parity violation
Two different decays were found for charged strange mesons into pions:
θ⁺ → π⁺ + π⁰
τ⁺ → π⁺ + π⁺ + π⁻
The intrinsic parity of the pion is P = −1 (since the pion is a bound state of a quark and an antiquark, which have opposite parities, with zero angular momentum), and parity is a multiplicative quantum number. Therefore, assuming the parent particle has zero spin, the two-pion and the three-pion final states have different parities (P = +1 and P = −1, respectively). It was thought that the initial states should also have different parities, and hence be two distinct particles. However, with increasingly precise measurements, no difference was found between the masses and lifetimes of each, respectively, indicating that they are the same particle. This was known as the τ–θ puzzle. It was resolved only by the discovery of parity violation in weak interactions (most importantly, by the Wu experiment). Since the mesons decay through weak interactions, parity is not conserved, and the two decays are actually decays of the same particle, now called the K⁺.
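The parity bookkeeping above can be sketched mechanically; this minimal example assumes only the multiplicative rule and the pion's intrinsic parity of −1 stated in the text:

```python
# Parity is multiplicative: an n-pion state with zero orbital angular
# momentum has parity (-1)**n, since each pion has intrinsic parity -1.
PION_PARITY = -1

def final_state_parity(n_pions):
    """Parity of an n-pion state with zero orbital angular momentum."""
    parity = 1
    for _ in range(n_pions):
        parity *= PION_PARITY
    return parity

two_pion = final_state_parity(2)    # the "theta" decay mode
three_pion = final_state_parity(3)  # the "tau" decay mode
```

The mismatch between the two results (+1 versus −1) is exactly the τ–θ puzzle: the same particle appears to reach final states of opposite parity.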
History
The discovery of hadrons with the internal quantum number "strangeness" marks the beginning of a most exciting epoch in particle physics that even now, fifty years later, has not yet found its conclusion ... by and large experiments have driven the development, and that major discoveries came unexpectedly or even against expectations expressed by theorists. — Bigi & Sanda (2016)
While looking for the hypothetical nuclear meson, Louis Leprince-Ringuet found evidence for the existence of a positively charged heavier particle in 1944.
In 1947, G.D. Rochester and C.C. Butler of the University of Manchester published two cloud chamber photographs of cosmic ray-induced events, one showing what appeared to be a neutral particle decaying into two charged pions, and one which appeared to be a charged particle decaying into a charged pion and something neutral. The estimated mass of the new particles was very rough, about half a proton's mass. More examples of these "V-particles" were slow in coming.
In 1949, Rosemary Brown (later Rosemary Fowler), a research student of Cecil Powell of the University of Bristol, spotted her 'k' track, made by a particle of very similar mass that decayed to three pions.
I knew at once that it was new and would be very important. We were seeing things that hadn't been seen before - that's what research in particle physics was. It was very exciting. — Fowler (2024)
This led to the so-called 'tau–theta' problem: what seemed to be the same particle (now called the K⁺) decayed in two different modes, theta to two pions (parity +1) and tau to three pions (parity −1). The solution to this puzzle turned out to be that weak interactions do not conserve parity.
The first breakthrough was obtained at Caltech, where a cloud chamber was taken up Mount Wilson for greater cosmic ray exposure. In 1950, 30 charged and 4 neutral "V-particles" were reported. Inspired by this, numerous mountaintop observations were made over the next several years, and by 1953 the following terminology was being used: "L meson" meant either a muon or a charged pion; "K meson" meant a particle intermediate in mass between the pion and nucleon.
Leprince-Ringuet coined the still-used term "hyperon" to mean any particle heavier than a nucleon. The Leprince-Ringuet particle turned out to be the K meson.
The decays were extremely slow; typical lifetimes are of the order of 10⁻¹⁰ s. However, production in pion–proton reactions proceeds much faster, with a time scale of about 10⁻²³ s. The problem of this mismatch was solved by Abraham Pais who postulated the new quantum number called "strangeness" which is conserved in strong interactions but violated by the weak interactions. Strange particles appear copiously due to "associated production" of a strange and an antistrange particle together. It was soon shown that this could not be a multiplicative quantum number, because that would allow reactions which were never seen in the new synchrotrons which were commissioned in Brookhaven National Laboratory in 1953 and in the Lawrence Berkeley Laboratory in 1955.
CP violation in neutral meson oscillations
Initially it was thought that although parity was violated, CP (charge parity) symmetry was conserved. In order to understand the discovery of CP violation, it is necessary to understand the mixing of neutral kaons; this phenomenon does not require CP violation, but it is the context in which CP violation was first observed.
Neutral kaon mixing
Since neutral kaons carry strangeness, they cannot be their own antiparticles. There must then be two different neutral kaons, differing by two units of strangeness. The question was then how to establish the presence of these two mesons. The solution used a phenomenon called neutral particle oscillations, by which these two kinds of mesons can turn from one into another through the weak interactions, which cause them to decay into pions (see the adjacent figure).
These oscillations were first investigated by Murray Gell-Mann and Abraham Pais together. They considered the CP-invariant time evolution of states with opposite strangeness. In matrix notation one can write
i ∂ψ/∂t = H ψ, with H = ( M M̄ ; M̄ M ) and ψ = ( a ; b ),
where ψ is a quantum state of the system specified by the amplitudes of being in each of the two basis states (which are a and b at time t = 0). The diagonal elements (M) of the Hamiltonian are due to strong interaction physics which conserves strangeness. The two diagonal elements must be equal, since the particle and antiparticle have equal masses in the absence of the weak interactions. The off-diagonal elements (M̄), which mix opposite strangeness particles, are due to weak interactions; CP symmetry requires them to be real.
The consequence of the matrix H being real is that the probabilities of the two states will forever oscillate back and forth. However, if any part of the matrix were imaginary, as is forbidden by CP symmetry, then part of the combination will diminish over time. The diminishing part can be either one component (a) or the other (b), or a mixture of the two.
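The claim that a real symmetric mixing matrix yields the sum and difference combinations as eigenstates can be checked numerically. In this sketch, M and the off-diagonal element W are arbitrary placeholder numbers, not physical values:

```python
import math

# Effective 2x2 Hamiltonian: equal diagonal elements M (strong physics,
# strangeness-conserving) and a real off-diagonal element W (CP-conserving
# weak mixing). The numbers are illustrative only.
M, W = 1.0, 0.01
H = [[M, W],
     [W, M]]

inv_sqrt2 = 1.0 / math.sqrt(2.0)
K1 = [inv_sqrt2, -inv_sqrt2]  # difference of the strangeness states, CP = +1
K2 = [inv_sqrt2,  inv_sqrt2]  # sum of the strangeness states,        CP = -1

def apply(matrix, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [matrix[0][0] * v[0] + matrix[0][1] * v[1],
            matrix[1][0] * v[0] + matrix[1][1] * v[1]]

# Verify H K1 = (M - W) K1 and H K2 = (M + W) K2,
# i.e. the sum/difference combinations are the eigenstates.
HK1 = apply(H, K1)
HK2 = apply(H, K2)
```

Because the matrix is symmetric with equal diagonal entries, the eigenvectors are fixed entirely by that structure; the eigenvalues M ± W split only through the weak off-diagonal term.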
Mixing
The eigenstates are obtained by diagonalizing this matrix. This gives new eigenvectors, which we can call K1, which is the difference of the two states of opposite strangeness, and K2, which is the sum. The two are eigenstates of CP with opposite eigenvalues; K1 has CP = +1, and K2 has CP = −1. Since the two-pion final state also has CP = +1, only the K1 can decay this way. The K2 must decay into three pions.
Since the mass of K2 is just a little larger than the sum of the masses of three pions, this decay proceeds very slowly, about 600 times slower than the decay of K1 into two pions. These two different modes of decay were observed by Leon Lederman and his coworkers in 1956, establishing the existence of the two weak eigenstates (states with definite lifetimes under decays via the weak force) of the neutral kaons.
These two weak eigenstates are called the K_L (K-long, τ) and the K_S (K-short, θ). CP symmetry, which was assumed at the time, implies that K_S = K1 and K_L = K2.
Oscillation
An initially pure beam of K⁰ will turn into its antiparticle, K̄⁰, while propagating, which will turn back into the original particle, K⁰, and so on. This is called particle oscillation. On observing the weak decay into leptons, it was found that a K⁰ always decayed into a positron, whereas the antiparticle K̄⁰ decayed into the electron. The earlier analysis yielded a relation between the rate of electron and positron production from sources of pure K⁰ and of its antiparticle K̄⁰. Analysis of the time dependence of this semileptonic decay showed the phenomenon of oscillation, and allowed the extraction of the mass splitting between the K_S and the K_L. Since this is due to weak interactions it is very small, about 10⁻¹⁵ times the mass of each state: Δm ≈ 3.5×10⁻¹² MeV/c².
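The oscillation described above follows the standard two-level formula combining decay envelopes with a cosine beat at the mass splitting. The lifetimes and mass splitting below are approximate PDG-style values supplied for illustration, not figures taken from this article:

```python
import math

# Textbook probability for an initially pure K0 beam to be observed as
# the antiparticle at proper time t:
#   P(t) = 1/4 [exp(-t/tau_S) + exp(-t/tau_L)
#               - 2 exp(-(1/tau_S + 1/tau_L) t / 2) cos(dm * t)]
TAU_S = 8.95e-11            # s, K-short mean lifetime (approximate)
TAU_L = 5.12e-8             # s, K-long mean lifetime (approximate)
HBAR = 6.582e-22            # MeV s
DM = 3.5e-12 / HBAR         # rad/s, from delta-m ~ 3.5e-12 MeV/c^2

def prob_antiparticle(t):
    gs, gl = 1.0 / TAU_S, 1.0 / TAU_L
    return 0.25 * (math.exp(-gs * t) + math.exp(-gl * t)
                   - 2.0 * math.exp(-0.5 * (gs + gl) * t) * math.cos(DM * t))

p0 = prob_antiparticle(0.0)          # pure K0 at t = 0
p_later = prob_antiparticle(2.0 * TAU_S)  # mixing has set in
lifetime_ratio = TAU_L / TAU_S       # the "about 600 times slower" claim
```

At t = 0 the antiparticle probability vanishes exactly, and the lifetime ratio comes out near the factor of roughly 600 quoted later in the article.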
Regeneration
A beam of neutral kaons decays in flight so that the short-lived K_S disappears, leaving a beam of pure long-lived K_L. If this beam is shot into matter, then the K⁰ and its antiparticle K̄⁰ interact differently with the nuclei. The K⁰ undergoes quasi-elastic scattering with nucleons, whereas its antiparticle can create hyperons. Quantum coherence between the two particles is lost due to the different interactions that the two components separately engage in. The emerging beam then contains different linear superpositions of the K⁰ and K̄⁰. Such a superposition is a mixture of K_S and K_L; the K_S is regenerated by passing a neutral kaon beam through matter. Regeneration was observed by Oreste Piccioni and his collaborators at Lawrence Berkeley National Laboratory. Soon thereafter, Robert Adair and his coworkers reported excess regeneration, thus opening a new chapter in this history.
CP violation
While trying to verify Adair's results, J. Christenson, James Cronin, Val Fitch and Rene Turlay of Princeton University found decays of the K_L into two pions (CP = +1)
in an experiment performed in 1964 at the Alternating Gradient Synchrotron at the Brookhaven laboratory. As explained in an earlier section, this required the assumed initial and final states to have different values of CP, and hence immediately suggested CP violation. Alternative explanations such as nonlinear quantum mechanics and a new unobserved particle (hyperphoton) were soon ruled out, leaving CP violation as the only possibility. Cronin and Fitch received the Nobel Prize in Physics for this discovery in 1980.
It turns out that although the K_L and K_S are weak eigenstates (because they have definite lifetimes for decay by way of the weak force), they are not quite CP eigenstates. Instead, for small ε (and up to normalization),
K_L = K2 + εK1
and similarly K_S = K1 + εK2. Thus occasionally the K_L decays as a K1 with CP = +1, and likewise the K_S can decay with CP = −1. This is known as indirect CP violation, CP violation due to mixing of K⁰ and its antiparticle. There is also a direct CP violation effect, in which the CP violation occurs during the decay itself. Both are present, because both mixing and decay arise from the same interaction with the W boson and thus have CP violation predicted by the CKM matrix. Direct CP violation was discovered in kaon decays in the early 2000s by the NA48 and KTeV experiments at CERN and Fermilab.
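To get a feel for the size of the indirect effect, one can plug in the approximate measured magnitude |ε| ≈ 2.2×10⁻³ (an assumed value supplied for illustration, not stated in this article):

```python
# In the admixture K_L = K2 + eps*K1 (up to normalization), the
# probability that a K_L acts as the "wrong" CP = +1 component K1 is
# |eps|^2 / (1 + |eps|^2).
EPS = 2.2e-3  # approximate |epsilon|, illustrative assumption

wrong_cp_fraction = EPS**2 / (1.0 + EPS**2)
```

The result is of order a few parts per million, which is why the two-pion decay of the K_L is so rare and why the 1964 observation was so striking.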
See also
Hadrons, mesons, hyperons and flavour
Strange quark and the quark model
Parity (physics), charge conjugation, time reversal symmetry, CPT invariance and CP violation
Neutrino oscillation
Neutral particle oscillation
Footnotes
References
Bibliography
The quark model, by J.J.J. Kokkedee
External links
The neutral K-meson – The Feynman Lectures on Physics
Mesons
Strange quark
Asymmetry | Kaon | Physics | 3,150 |
66,177,541 | https://en.wikipedia.org/wiki/EF-24 | EF-24 is a compound that is a synthetic analogue of curcumin, a bioactive phytochemical from turmeric. Curcumin has antioxidant, antibiotic, anti-inflammatory and anti-cancer properties in vitro but has low potency and very poor bioavailability when taken orally, resulting in limited efficacy. EF-24 was developed to try to improve upon these properties, and has been found to be around 10 times more potent than curcumin and with much higher systemic bioavailability. It has never been developed for medical use, though research continues to investigate whether it may be useful as an adjuvant treatment for some cancers alongside conventional chemotherapy drugs.
References
Anti-inflammatory agents | EF-24 | Chemistry | 156 |
66,664,713 | https://en.wikipedia.org/wiki/Europium%28II%29%20iodide | Europium(II) iodide is the iodide salt of the divalent europium cation.
Preparation
Europium(II) iodide can be prepared in a handful of ways, including:
Reduction of europium(III) iodide with hydrogen gas at 350 °C:
2 EuI3 + H2 → 2 EuI2 + 2 HI
Thermal decomposition of europium(III) iodide at 200 °C:
2 EuI3 → 2 EuI2 + I2
Reaction of europium with mercury(II) iodide:
Eu + HgI2 → EuI2 + Hg
Reaction of europium with ammonium iodide:
Eu + 2 NH4I → EuI2 + 2 NH3 + H2
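The preparations above can be sanity-checked for atom balance. The balanced equations used in this sketch are the standard forms implied by the descriptions (an assumption, since the article lists the reactions only in words):

```python
from collections import Counter

# Count atoms on each side of a reaction, given (coefficient, formula) pairs,
# where a formula is a dict of element -> atom count.
def side(*species):
    total = Counter()
    for coeff, atoms in species:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

EuI3 = {"Eu": 1, "I": 3}
EuI2 = {"Eu": 1, "I": 2}
H2 = {"H": 2}
HI = {"H": 1, "I": 1}
I2 = {"I": 2}

# 2 EuI3 + H2 -> 2 EuI2 + 2 HI   (reduction with hydrogen)
lhs_h = side((2, EuI3), (1, H2))
rhs_h = side((2, EuI2), (2, HI))

# 2 EuI3 -> 2 EuI2 + I2          (thermal decomposition)
lhs_t = side((2, EuI3))
rhs_t = side((2, EuI2), (1, I2))
```

Both routes conserve every element, consistent with one iodide ligand (or one iodine equivalent) being released per europium reduced from +3 to +2.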
Structure
Europium(II) iodide has several polymorphs. It adopts a monoclinic crystal structure in space group P 21/c (no. 14).
It also adopts an orthorhombic polymorph in space group Pbca (no. 61). This form is isostructural with strontium iodide.
A third polymorph of europium(II) iodide is formed if it is prepared from europium and ammonium iodide at low temperatures (200 K) in liquid ammonia. This low-temperature phase is orthorhombic and in space group Pnma (no. 62). This is the same structure as modification IV of strontium iodide.
References
Europium(II) compounds
Iodides
Lanthanide halides | Europium(II) iodide | Chemistry | 286 |
20,374 | https://en.wikipedia.org/wiki/Metabolism | Metabolism (, from metabolē, "change") is the set of life-sustaining chemical reactions in organisms. The three main functions of metabolism are: the conversion of the energy in food to energy available to run cellular processes; the conversion of food to building blocks of proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transportation of substances into and between different cells, in which case the above described set of reactions within the cells is called intermediary (or intermediate) metabolism.
Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy.
The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly—and they also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
The metabolic system of a particular organism determines which substances it will find nutritious and which poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals. The basal metabolic rate of an organism is the measure of the amount of energy consumed by all of these chemical reactions.
A striking feature of metabolism is the similarity of the basic metabolic pathways among vastly different species. For example, the set of carboxylic acids that are best known as the intermediates in the citric acid cycle are present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants. These similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and their retention is likely due to their efficacy. In various diseases, such as type II diabetes, metabolic syndrome, and cancer, normal metabolism is disrupted. The metabolism of cancer cells is also different from the metabolism of normal cells, and these differences can be used to find targets for therapeutic intervention in cancer.
Key biochemicals
Most of the structures that make up animals, plants and microbes are made from four basic classes of molecules: amino acids, carbohydrates, nucleic acid and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down and using them to obtain energy, by their digestion. These biochemicals can be joined to make polymers such as DNA and proteins, essential macromolecules of life.
Amino acids and proteins
Proteins are made of amino acids arranged in a linear chain joined by peptide bonds. Many proteins are enzymes that catalyze the chemical reactions in metabolism. Other proteins have structural or mechanical functions, such as those that form the cytoskeleton, a system of scaffolding that maintains the cell shape. Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle. Amino acids also contribute to cellular energy metabolism by providing a carbon source for entry into the citric acid cycle (tricarboxylic acid cycle), especially when a primary source of energy, such as glucose, is scarce, or when cells undergo metabolic stress.
Lipids
Lipids are the most diverse group of biochemicals. Their main structural uses are as part of internal and external biological membranes, such as the cell membrane. Their chemical energy can also be used. Lipids contain a long, non-polar hydrocarbon chain with a small polar region containing oxygen. Lipids are usually defined as hydrophobic or amphipathic biological molecules but will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as sterol are another major class of lipids.
Carbohydrates
Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways.
Nucleotides
The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group which is attached to a nitrogenous base. Nucleic acids are critical for the storage and use of genetic information, and its interpretation through the processes of transcription and protein biosynthesis. This information is protected by DNA repair mechanisms and propagated through DNA replication. Many viruses have an RNA genome, such as HIV, which uses reverse transcription to create a DNA template from its viral RNA genome. RNA in ribozymes such as spliceosomes and ribosomes is similar to enzymes as it can catalyze chemical reactions. Individual nucleosides are made by attaching a nucleobase to a ribose sugar. These bases are heterocyclic rings containing nitrogen, classified as purines or pyrimidines. Nucleotides also act as coenzymes in metabolic-group-transfer reactions.
Coenzymes
Metabolism involves a vast array of chemical reactions, but most fall under a few basic types of reactions that involve the transfer of functional groups of atoms and their bonds within molecules. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are called coenzymes. Each class of group-transfer reactions is carried out by a particular coenzyme, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. These coenzymes are therefore continuously made, consumed and then recycled.
One central coenzyme is adenosine triphosphate (ATP), the energy currency of cells. This nucleotide is used to transfer chemical energy between different chemical reactions. There is only a small amount of ATP in cells, but as it is continuously regenerated, the human body can use about its own weight in ATP per day. ATP acts as a bridge between catabolism and anabolism. Catabolism breaks down molecules, and anabolism puts them together. Catabolic reactions generate ATP, and anabolic reactions consume it. It also serves as a carrier of phosphate groups in phosphorylation reactions.
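The scale of the ATP turnover claim above can be illustrated with back-of-envelope arithmetic. The body mass, ATP molar mass, and standing ATP pool used here are illustrative assumptions, not figures from this article:

```python
# Rough check of "the human body can use about its own weight in ATP per day".
BODY_MASS_G = 70_000.0      # assumed 70 kg adult, in grams
ATP_MOLAR_MASS = 507.18     # g/mol, approximate molar mass of ATP
ATP_POOL_G = 50.0           # assumed standing cellular ATP pool, in grams

# Moles of ATP cycled per day if turnover equals body weight:
moles_turned_over = BODY_MASS_G / ATP_MOLAR_MASS

# How many times each ATP molecule must be regenerated per day:
recycle_factor = BODY_MASS_G / ATP_POOL_G
```

Under these assumptions the body cycles on the order of a hundred moles of ATP daily, with each molecule regenerated over a thousand times — which is why the text stresses that ATP is continuously regenerated rather than stockpiled.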
A vitamin is an organic compound needed in small quantities that cannot be made in cells. In human nutrition, most vitamins function as coenzymes after modification; for example, all water-soluble vitamins are phosphorylated or are coupled to nucleotides when they are used in cells. Nicotinamide adenine dinucleotide (NAD+), a derivative of vitamin B3 (niacin), is an important coenzyme that acts as a hydrogen acceptor. Hundreds of separate types of dehydrogenases remove electrons from their substrates and reduce NAD+ into NADH. This reduced form of the coenzyme is then a substrate for any of the reductases in the cell that need to transfer hydrogen atoms to their substrates. Nicotinamide adenine dinucleotide exists in two related forms in the cell, NADH and NADPH. The NAD+/NADH form is more important in catabolic reactions, while NADP+/NADPH is used in anabolic reactions.
Mineral and cofactors
Inorganic elements play critical roles in metabolism; some are abundant (e.g. sodium and potassium) while others function at minute concentrations. About 99% of a human's body weight is made up of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. Organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen; most of the oxygen and hydrogen is present as water.
The abundant inorganic elements act as electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate and the organic ion bicarbonate. The maintenance of precise ion gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerve and muscle function, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cell's fluid, the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.
Transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant of those. Metal cofactors are bound tightly to specific sites in proteins; although enzyme cofactors can be modified during catalysis, they always return to their original state by the end of the reaction catalyzed. Metal micronutrients are taken up into organisms by specific transporters and bind to storage proteins such as ferritin or metallothionein when not in use.
Catabolism
Catabolism is the set of metabolic processes that break down large molecules. These include breaking down and oxidizing food molecules. The purpose of the catabolic reactions is to provide the energy and components needed by anabolic reactions which build molecules. The exact nature of these catabolic reactions differs from organism to organism, and organisms can be classified based on their sources of energy, hydrogen, and carbon (their primary nutritional groups), as shown in the table below. Organic molecules are used as a source of hydrogen atoms or electrons by organotrophs, while lithotrophs use inorganic substrates. Whereas phototrophs convert sunlight to chemical energy, chemotrophs depend on redox reactions that involve the transfer of electrons from reduced donor molecules such as organic molecules, hydrogen, hydrogen sulfide or ferrous ions to oxygen, nitrate or sulfate. In animals, these reactions involve complex organic molecules that are broken down to simpler molecules, such as carbon dioxide and water. Photosynthetic organisms, such as plants and cyanobacteria, use similar electron-transfer reactions to store energy absorbed from sunlight.
The most common set of catabolic reactions in animals can be separated into three main stages. In the first stage, large organic molecules, such as proteins, polysaccharides or lipids, are digested into their smaller components outside cells. Next, these smaller molecules are taken up by cells and converted to smaller molecules, usually acetyl coenzyme A (acetyl-CoA), which releases some energy. Finally, the acetyl group on acetyl-CoA is oxidized to water and carbon dioxide in the citric acid cycle and electron transport chain, releasing more energy while reducing the coenzyme nicotinamide adenine dinucleotide (NAD+) into NADH.
Digestion
Macromolecules cannot be directly processed by cells; they must be broken into smaller units before they can be used in cell metabolism. Different classes of enzymes are used to digest these polymers. These digestive enzymes include proteases that digest proteins into amino acids, as well as glycoside hydrolases that digest polysaccharides into simple sugars known as monosaccharides.
Microbes simply secrete digestive enzymes into their surroundings, while animals only secrete these enzymes from specialized cells in their guts, including the stomach and pancreas, and in salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins.
Energy from organic compounds
Carbohydrate catabolism is the breakdown of carbohydrates into smaller units. Carbohydrates are usually taken into cells after they have been digested into monosaccharides such as glucose and fructose. Once inside, the major route of breakdown is glycolysis, in which glucose is converted into pyruvate. This process generates the energy-conveying molecule NADH from NAD+, and generates ATP from ADP for use in powering many processes within the cell. Pyruvate is an intermediate in several metabolic pathways, but the majority is converted to acetyl-CoA and fed into the citric acid cycle, which enables more ATP production by means of oxidative phosphorylation. This oxidation consumes molecular oxygen and releases water and the waste product carbon dioxide. When oxygen is lacking, or when pyruvate is temporarily produced faster than it can be consumed by the citric acid cycle (as in intense muscular exertion), pyruvate is converted to lactate by the enzyme lactate dehydrogenase, a process that also oxidizes NADH back to NAD+ for re-use in further glycolysis, allowing energy production to continue. The lactate is later converted back to pyruvate for ATP production where energy is needed, or back to glucose in the Cori cycle. An alternative route for glucose breakdown is the pentose phosphate pathway, which produces less energy but supports anabolism (biomolecule synthesis). This pathway reduces the coenzyme NADP+ to NADPH and produces pentose compounds such as ribose 5-phosphate for synthesis of many biomolecules such as nucleotides and aromatic amino acids.
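The pathway described above can be summarized by the standard net equation for glycolysis (textbook stoichiometry, not stated in this article):

```latex
\mathrm{Glucose} + 2\,\mathrm{NAD^+} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i}
\longrightarrow 2\,\mathrm{Pyruvate} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}
```

The yield of 2 ATP per glucose is modest compared with the roughly 30 ATP obtained when pyruvate is fully oxidized through the citric acid cycle and oxidative phosphorylation, which is why fermentation alone supports far less growth than respiration.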
Fats are catabolized by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which then is fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. Mycobacterium tuberculosis can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol-use pathway(s) have been validated as important during various stages of the infection lifecycle of M. tuberculosis.
Amino acids are either used to synthesize proteins and other biomolecules, or oxidized to urea and carbon dioxide to produce energy. The oxidation pathway starts with the removal of the amino group by a transaminase. The amino group is fed into the urea cycle, leaving a deaminated carbon skeleton in the form of a keto acid. Several of these keto acids are intermediates in the citric acid cycle, for example α-ketoglutarate formed by deamination of glutamate. The glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Energy transformations
Oxidative phosphorylation
In oxidative phosphorylation, the electrons removed from organic molecules in areas such as the citric acid cycle are transferred to oxygen and the energy released is used to make ATP. This is done in eukaryotes by a series of proteins in the membranes of mitochondria called the electron transport chain. In prokaryotes, these proteins are found in the cell's inner membrane. These proteins use the energy from reduced molecules like NADH to pump protons across a membrane.
Pumping protons out of the mitochondria creates a proton concentration difference across the membrane and generates an electrochemical gradient. This force drives protons back into the mitochondrion through the base of an enzyme called ATP synthase. The flow of protons makes the stalk subunit rotate, causing the active site of the synthase domain to change shape and phosphorylate adenosine diphosphate—turning it into ATP.
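The electrochemical gradient described above is usually quantified as the proton motive force (sign conventions vary between sources):

```latex
\Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
```

where Δψ is the electrical potential across the membrane and ΔpH is the pH difference; at 37 °C the factor 2.303RT/F is about 61.5 mV. In respiring mitochondria the membrane-potential term usually dominates.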
Energy from inorganic compounds
Chemolithotrophy is a type of metabolism found in prokaryotes where energy is obtained from the oxidation of inorganic compounds. These organisms can use hydrogen, reduced sulfur compounds (such as sulfide, hydrogen sulfide and thiosulfate), ferrous iron (Fe(II)) or ammonia as sources of reducing power and they gain energy from the oxidation of these compounds. These microbial processes are important in global biogeochemical cycles such as acetogenesis, nitrification and denitrification and are critical for soil fertility.
Energy from light
The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy capture and carbon fixation systems can, however, operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy, while switching between carbon fixation and the fermentation of organic compounds.
In many organisms, the capture of solar energy is similar in principle to oxidative phosphorylation, as it involves the storage of energy as a proton concentration gradient. This proton motive force then drives ATP synthesis. The electrons needed to drive this electron transport chain come from light-gathering proteins called photosynthetic reaction centres. Reaction centers are classified into two types depending on the nature of photosynthetic pigment present, with most photosynthetic bacteria only having one type, while plants and cyanobacteria have two.
In plants, algae, and cyanobacteria, photosystem II uses light energy to remove electrons from water, releasing oxygen as a waste product. The electrons then flow to the cytochrome b6f complex, which uses their energy to pump protons across the thylakoid membrane in the chloroplast. These protons move back through the membrane as they drive the ATP synthase, as before. The electrons then flow through photosystem I and can then be used to reduce the coenzyme NADP+.
Anabolism
Anabolism is the set of constructive metabolic processes where the energy released by catabolism is used to synthesize complex molecules. In general, the complex molecules that make up cellular structures are constructed step-by-step from smaller and simpler precursors. Anabolism involves three basic stages. First, the production of precursors such as amino acids, monosaccharides, isoprenoids and nucleotides, secondly, their activation into reactive forms using energy from ATP, and thirdly, the assembly of these precursors into complex molecules such as proteins, polysaccharides, lipids and nucleic acids.
Anabolism in organisms can be different according to the source of constructed molecules in their cells. Autotrophs such as plants can construct the complex organic molecules in their cells such as polysaccharides and proteins from simple molecules like carbon dioxide and water. Heterotrophs, on the other hand, require a source of more complex substances, such as monosaccharides and amino acids, to produce these complex molecules. Organisms can be further classified by the ultimate source of their energy: photoautotrophs and photoheterotrophs obtain energy from light, whereas chemoautotrophs and chemoheterotrophs obtain energy from oxidation reactions.
Carbon fixation
Photosynthesis is the synthesis of carbohydrates from sunlight and carbon dioxide (CO2). In plants, cyanobacteria and algae, oxygenic photosynthesis splits water, with oxygen produced as a waste product. This process uses the ATP and NADPH produced by the photosynthetic reaction centres, as described above, to convert CO2 into glycerate 3-phosphate, which can then be converted into glucose. This carbon-fixation reaction is carried out by the enzyme RuBisCO as part of the Calvin–Benson cycle. Three types of photosynthesis occur in plants, C3 carbon fixation, C4 carbon fixation and CAM photosynthesis. These differ by the route that carbon dioxide takes to the Calvin cycle, with C3 plants fixing CO2 directly, while C4 and CAM photosynthesis incorporate the CO2 into other compounds first, as adaptations to deal with intense sunlight and dry conditions.
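The overall process can be summarized by the familiar net equation of oxygenic photosynthesis:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ \text{light}\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

in which the oxygen released derives from the water split by photosystem II, not from the carbon dioxide.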
In photosynthetic prokaryotes the mechanisms of carbon fixation are more diverse. Here, carbon dioxide can be fixed by the Calvin–Benson cycle, a reversed citric acid cycle, or the carboxylation of acetyl-CoA. Prokaryotic chemoautotrophs also fix CO2 through the Calvin–Benson cycle, but use energy from inorganic compounds to drive the reaction.
Carbohydrates and glycans
In carbohydrate anabolism, simple organic acids can be converted into monosaccharides such as glucose and then used to assemble polysaccharides such as starch. The generation of glucose from compounds like pyruvate, lactate, glycerol, glycerate 3-phosphate and amino acids is called gluconeogenesis. Gluconeogenesis converts pyruvate to glucose-6-phosphate through a series of intermediates, many of which are shared with glycolysis. However, this pathway is not simply glycolysis run in reverse, as several steps are catalyzed by non-glycolytic enzymes. This is important as it allows the formation and breakdown of glucose to be regulated separately, and prevents both pathways from running simultaneously in a futile cycle.
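The energetic asymmetry between the two pathways is visible in the standard net equation for gluconeogenesis from pyruvate (textbook stoichiometry; water is omitted for simplicity):

```latex
2\,\mathrm{Pyruvate} + 4\,\mathrm{ATP} + 2\,\mathrm{GTP} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+}
\longrightarrow \mathrm{Glucose} + 4\,\mathrm{ADP} + 2\,\mathrm{GDP} + 6\,\mathrm{P_i} + 2\,\mathrm{NAD^+}
```

Making one glucose costs six nucleoside-triphosphate equivalents, whereas glycolysis yields only two, which is why the two pathways must be reciprocally regulated to avoid a futile cycle.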
Although fat is a common way of storing energy, in vertebrates such as humans the fatty acids in these stores cannot be converted to glucose through gluconeogenesis as these organisms cannot convert acetyl-CoA into pyruvate; plants have, but animals do not have, the necessary enzymatic machinery. As a result, after long-term starvation, vertebrates need to produce ketone bodies from fatty acids to replace glucose in tissues such as the brain that cannot metabolize fatty acids. In other organisms such as plants and bacteria, this metabolic problem is solved using the glyoxylate cycle, which bypasses the decarboxylation step in the citric acid cycle and allows the transformation of acetyl-CoA to oxaloacetate, where it can be used for the production of glucose. Besides fat, glucose is stored in most tissues as glycogen through glycogenesis; this store serves as an energy resource available within the tissue and is usually used to maintain the level of glucose in the blood.
Polysaccharides and glycans are made by the sequential addition of monosaccharides by glycosyltransferase from a reactive sugar-phosphate donor such as uridine diphosphate glucose (UDP-Glc) to an acceptor hydroxyl group on the growing polysaccharide. As any of the hydroxyl groups on the ring of the substrate can be acceptors, the polysaccharides produced can have straight or branched structures. The polysaccharides produced can have structural or metabolic functions themselves, or be transferred to lipids and proteins by the enzymes oligosaccharyltransferases.
Fatty acids, isoprenoids and sterol
Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group. The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional type I protein, while in plant plastids and bacteria separate type II enzymes perform each step in the pathway.
Terpenes and isoprenoids are a large class of lipids that include the carotenoids and form the largest class of plant natural products. These compounds are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is sterol biosynthesis. Here, the isoprene units are joined to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other sterols such as cholesterol and ergosterol.
Proteins
Organisms vary in their ability to synthesize the 20 common amino acids. Most bacteria and plants can synthesize all twenty, but mammals can only synthesize eleven nonessential amino acids, so nine essential amino acids must be obtained from food. Some simple parasites, such as the bacteria Mycoplasma pneumoniae, lack all amino acid synthesis and take their amino acids directly from their hosts. All amino acids are synthesized from intermediates in glycolysis, the citric acid cycle, or the pentose phosphate pathway. Nitrogen is provided by glutamate and glutamine. Nonessential amino acid synthesis depends on the formation of the appropriate alpha-keto acid, which is then transaminated to form an amino acid.
Amino acids are made into proteins by being joined in a chain of peptide bonds. Each different protein has a unique sequence of amino acid residues: this is its primary structure. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked in varying sequences to form a huge variety of proteins. Proteins are made from amino acids that have been activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA precursor is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which joins the amino acid onto the elongating protein chain, using the sequence information in a messenger RNA.
Nucleotide synthesis and salvage
Nucleotides are made from amino acids, carbon dioxide and formic acid in pathways that require large amounts of metabolic energy. Consequently, most organisms have efficient systems to salvage preformed nucleotides. Purines are synthesized as nucleosides (bases attached to ribose). Both adenine and guanine are made from the precursor nucleoside inosine monophosphate, which is synthesized using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as formate transferred from the coenzyme tetrahydrofolate. Pyrimidines, on the other hand, are synthesized from the base orotate, which is formed from glutamine and aspartate.
Xenobiotics and redox metabolism
All organisms are constantly exposed to compounds that they cannot use as foods and that would be harmful if they accumulated in cells, as they have no metabolic function. These potentially damaging compounds are called xenobiotics. Xenobiotics such as synthetic drugs, natural poisons and antibiotics are detoxified by a set of xenobiotic-metabolizing enzymes. In humans, these include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. This system of enzymes acts in three stages to firstly oxidize the xenobiotic (phase I) and then conjugate water-soluble groups onto the molecule (phase II). The modified water-soluble xenobiotic can then be pumped out of cells and in multicellular organisms may be further metabolized before being excreted (phase III). In ecology, these reactions are particularly important in microbial biodegradation of pollutants and the bioremediation of contaminated land and oil spills. Many of these microbial reactions are shared with multicellular organisms, but due to the incredible diversity of types of microbes these organisms are able to deal with a far wider range of xenobiotics than multicellular organisms, and can degrade even persistent organic pollutants such as organochloride compounds.
A related problem for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione and enzymes such as catalases and peroxidases.
Thermodynamics of living organisms
Living organisms must obey the laws of thermodynamics, which describe the transfer of heat and work. The second law of thermodynamics states that in any isolated system, the amount of entropy (disorder) cannot decrease. Although living organisms' amazing complexity appears to contradict this law, life is possible as all organisms are open systems that exchange matter and energy with their surroundings. Living systems are not in equilibrium, but instead are dissipative systems that maintain their state of high complexity by causing a larger increase in the entropy of their environments. The metabolism of a cell achieves this by coupling the spontaneous processes of catabolism to the non-spontaneous processes of anabolism. In thermodynamic terms, metabolism maintains order by creating disorder.
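In standard notation, the criterion for a process to be spontaneous at constant temperature and pressure is a negative change in Gibbs free energy:

```latex
\Delta G = \Delta H - T\,\Delta S < 0
```

Coupling means that an anabolic step with ΔG > 0 can proceed when paired with a catabolic step (such as ATP hydrolysis) so that the sum of the two ΔG values is negative.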
Regulation and control
As the environments of most organisms are constantly changing, the reactions of metabolism must be finely regulated to maintain a constant set of conditions within cells, a condition called homeostasis. Metabolic regulation also allows organisms to respond to signals and interact actively with their environments. Two closely linked concepts are important for understanding how metabolic pathways are controlled. Firstly, the regulation of an enzyme in a pathway is how its activity is increased and decreased in response to signals. Secondly, the control exerted by this enzyme is the effect that these changes in its activity have on the overall rate of the pathway (the flux through the pathway). For example, an enzyme may show large changes in activity (i.e. it is highly regulated) but if these changes have little effect on the flux of a metabolic pathway, then this enzyme is not involved in the control of the pathway.
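The distinction between regulation and control is made quantitative in metabolic control analysis, a standard framework not detailed in the text above, where the control an enzyme exerts over a pathway is expressed as its flux control coefficient:

```latex
C^{J}_{E} = \frac{\partial J / J}{\partial E / E} = \frac{\partial \ln J}{\partial \ln E}
```

where J is the pathway flux and E the enzyme's activity. The summation theorem states that the flux control coefficients of all enzymes in a pathway sum to 1, so control is usually shared, and a strongly regulated enzyme can still have a coefficient near zero.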
There are multiple levels of metabolic regulation. In intrinsic regulation, the metabolic pathway self-regulates to respond to changes in the levels of substrates or products; for example, a decrease in the amount of product can increase the flux through the pathway to compensate. This type of regulation often involves allosteric regulation of the activities of multiple enzymes in the pathway. Extrinsic control involves a cell in a multicellular organism changing its metabolism in response to signals from other cells. These signals are usually in the form of water-soluble messengers such as hormones and growth factors and are detected by specific receptors on the cell surface. These signals are then transmitted inside the cell by second messenger systems that often involve the phosphorylation of proteins.
A very well understood example of extrinsic control is the regulation of glucose metabolism by the hormone insulin. Insulin is produced in response to rises in blood glucose levels. Binding of the hormone to insulin receptors on cells then activates a cascade of protein kinases that cause the cells to take up glucose and convert it into storage molecules such as fatty acids and glycogen. The metabolism of glycogen is controlled by activity of phosphorylase, the enzyme that breaks down glycogen, and glycogen synthase, the enzyme that makes it. These enzymes are regulated in a reciprocal fashion, with phosphorylation inhibiting glycogen synthase, but activating phosphorylase. Insulin causes glycogen synthesis by activating protein phosphatases and producing a decrease in the phosphorylation of these enzymes.
Evolution
The central pathways of metabolism described above, such as glycolysis and the citric acid cycle, are present in all three domains of living things and were present in the last universal common ancestor. This universal ancestral cell was prokaryotic and probably a methanogen that had extensive amino acid, nucleotide, carbohydrate and lipid metabolism. The retention of these ancient pathways during later evolution may be the result of these reactions having been an optimal solution to their particular metabolic problems, with pathways such as glycolysis and the citric acid cycle producing their end products highly efficiently and in a minimal number of steps. The first pathways of enzyme-based metabolism may have been parts of purine nucleotide metabolism, while previous metabolic pathways were a part of the ancient RNA world.
Many models have been proposed to describe the mechanisms by which novel metabolic pathways evolve. These include the sequential addition of novel enzymes to a short ancestral pathway, the duplication and then divergence of entire pathways, as well as the recruitment of pre-existing enzymes and their assembly into a novel reaction pathway. The relative importance of these mechanisms is unclear, but genomic studies have shown that enzymes in a pathway are likely to have a shared ancestry, suggesting that many pathways have evolved in a step-by-step fashion with novel functions created from pre-existing steps in the pathway. An alternative model comes from studies that trace the evolution of proteins' structures in metabolic networks; this suggests that enzymes are pervasively recruited, borrowed to perform similar functions in different metabolic pathways (as is evident in the MANET database). These recruitment processes result in an evolutionary enzymatic mosaic. A third possibility is that some parts of metabolism might exist as "modules" that can be reused in different pathways and perform similar functions on different molecules.
As well as the evolution of new metabolic pathways, evolution can also cause the loss of metabolic functions. For example, in some parasites metabolic processes that are not essential for survival are lost and preformed amino acids, nucleotides and carbohydrates may instead be scavenged from the host. Similar reduced metabolic capabilities are seen in endosymbiotic organisms.
Investigation and manipulation
Classically, metabolism is studied by a reductionist approach that focuses on a single metabolic pathway. Particularly valuable is the use of radioactive tracers at the whole-organism, tissue and cellular levels, which define the paths from precursors to final products by identifying radioactively labelled intermediates and products. The enzymes that catalyze these chemical reactions can then be purified and their kinetics and responses to inhibitors investigated. A parallel approach is to identify the small molecules in a cell or tissue; the complete set of these molecules is called the metabolome. Overall, these studies give a good view of the structure and function of simple metabolic pathways, but are inadequate when applied to more complex systems such as the metabolism of a complete cell.
An idea of the complexity of the metabolic networks in cells that contain thousands of different enzymes is given by the figure to the right, which shows the interactions between just 43 proteins and 40 metabolites: the sequences of genomes provide lists containing anything up to 26,500 genes. However, it is now possible to use this genomic data to reconstruct complete networks of biochemical reactions and produce more holistic mathematical models that may explain and predict their behavior. These models are especially powerful when used to integrate the pathway and metabolite data obtained through classical methods with data on gene expression from proteomic and DNA microarray studies. Using these techniques, a model of human metabolism has now been produced, which will guide future drug discovery and biochemical research. These models are now used in network analysis, to classify human diseases into groups that share common proteins or metabolites.
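A common form of such mathematical models is flux balance analysis, which treats the reconstructed network as a linear program: steady-state mass balance S·v = 0 plus capacity bounds, with some flux (often biomass production) maximized. The sketch below uses a hypothetical three-metabolite toy network (the reactions and bounds are invented for illustration) and assumes NumPy and SciPy are available:

```python
# Toy flux balance analysis (FBA): maximize a "biomass" flux subject to
# steady-state mass balance S @ v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B, C; columns: reactions).
# R1: -> A (uptake), R2: A -> B, R3: A -> C, R4: B + C -> (biomass)
S = np.array([
    [1, -1, -1,  0],   # A: produced by R1, consumed by R2 and R3
    [0,  1,  0, -1],   # B: produced by R2, consumed by R4
    [0,  0,  1, -1],   # C: produced by R3, consumed by R4
])

# Flux bounds: uptake capped at 10 units, all fluxes irreversible here.
bounds = [(0, 10), (0, None), (0, None), (0, None)]

# linprog minimizes, so negate the biomass (R4) coefficient to maximize it.
c = np.array([0, 0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")

print(res.x)      # optimal flux distribution
print(-res.fun)   # maximal biomass flux
```

For genome-scale reconstructions the same formulation is solved with dedicated constraint-based modelling toolkits, but the underlying linear program is identical in structure.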
Bacterial metabolic networks are a striking example of bow-tie organization, an architecture able to input a wide range of nutrients and produce a large variety of products and complex macromolecules using relatively few common intermediate currencies.
A major technological application of this information is metabolic engineering. Here, organisms such as yeast, plants or bacteria are genetically modified to make them more useful in biotechnology and aid the production of drugs such as antibiotics or industrial chemicals such as 1,3-propanediol and shikimic acid. These genetic modifications usually aim to reduce the amount of energy used to produce the product, increase yields and reduce the production of wastes.
History
The term metabolism is derived from the Ancient Greek word μεταβολή ("metabole"), meaning "a change", which is in turn derived from μεταβάλλειν ("metaballein"), "to change".
Greek philosophy
Aristotle's The Parts of Animals sets out enough details of his views on metabolism for an open flow model to be made. He believed that at each stage of the process, materials from food were transformed, with heat being released as the classical element of fire, and residual materials being excreted as urine, bile, or faeces.
Ibn al-Nafis described metabolism in his 1260 AD work titled Al-Risalah al-Kamiliyyah fil Siera al-Nabawiyyah (The Treatise of Kamil on the Prophet's Biography) which included the following phrase "Both the body and its parts are in a continuous state of dissolution and nourishment, so they are inevitably undergoing permanent change."
Application of the scientific method and modern metabolic theories
The history of the scientific study of metabolism spans several centuries and has moved from examining whole animals in early studies, to examining individual metabolic reactions in modern biochemistry. The first controlled experiments in human metabolism were published by Santorio Santorio in 1614 in his book Ars de statica medicina. He described how he weighed himself before and after eating, sleep, working, sex, fasting, drinking, and excreting. He found that most of the food he took in was lost through what he called "insensible perspiration".
In these early studies, the mechanisms of these metabolic processes had not been identified and a vital force was thought to animate living tissue. In the 19th century, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that fermentation was catalyzed by substances within the yeast cells he called "ferments". He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." This discovery, along with the publication by Friedrich Wöhler in 1828 of a paper on the chemical synthesis of urea, notable as the first organic compound prepared from wholly inorganic precursors, proved that the organic compounds and chemical reactions found in cells were no different in principle from any other part of chemistry.
It was the discovery of enzymes at the beginning of the 20th century by Eduard Buchner that separated the study of the chemical reactions of metabolism from the biological study of cells, and marked the beginnings of biochemistry. The mass of biochemical knowledge grew rapidly throughout the early 20th century. One of the most prolific of these modern biochemists was Hans Krebs who made huge contributions to the study of metabolism. He discovered the urea cycle and later, working with Hans Kornberg, the citric acid cycle and the glyoxylate cycle.
See also
, a "metabolism first" theory of the origin of life
Microphysiometry
Oncometabolism
References
Further reading
Introductory
Advanced
External links
General information
The Biochemistry of Metabolism (archived 8 March 2005)
Sparknotes SAT biochemistry Overview of biochemistry. School level.
MIT Biology Hypertextbook Undergraduate-level guide to molecular biology.
Human metabolism
Topics in Medical Biochemistry Guide to human metabolic pathways. School level.
THE Medical Biochemistry Page Comprehensive resource on human metabolism.
Databases
Flow Chart of Metabolic Pathways at ExPASy
IUBMB-Nicholson Metabolic Pathways Chart
SuperCYP: Database for Drug-Cytochrome-Metabolism
Metabolic pathways
Metabolism reference Pathway
Underwater diving physiology
The Antiproton Decelerator (AD) is a storage ring at the CERN laboratory near Geneva. It was built from the Antiproton Collector (AC) to be a successor to the Low Energy Antiproton Ring (LEAR) and started operation in the year 2000. Antiprotons are created by impinging a proton beam from the Proton Synchrotron on an iridium target. The AD decelerates the resultant antiprotons to an energy of 5.3 MeV; they are then ejected to one of several connected experiments.
The major goals of experiments at AD are to observe antihydrogen spectroscopically and to study the effects of gravity on antimatter, though the individual experiments have varied aims, ranging from testing antimatter for cancer therapy to CPT symmetry and antigravity research.
History
From 1982 to 1996, CERN operated the Low Energy Antiproton Ring (LEAR), through which several experiments with slow-moving antiprotons were carried out. During the end stages of LEAR, the physics community involved in those antimatter experiments wanted to continue their studies with the slow antiprotons. The motivation to build the AD grew out of the Antihydrogen Workshop held in Munich in 1992. This idea was carried forward quickly and AD's feasibility study was completed by 1995.
In 1996, the CERN Council asked the Proton Synchrotron (PS) division to look into the possibility of generating slow antiproton beams. The PS division prepared a design study in 1996 with the solution to use the antiproton collector (AC), and transform it into a single Antiproton Decelerator Machine. The AD was approved in February 1997.
AC modification, AD installation, and the commissioning process were carried out over the next three years. By the end of 1999, the AC ring had been modified into a decelerator and cooling system, forming the Antiproton Decelerator.
Decelerator
AD's oval-shaped perimeter has four straight sections where the deceleration and cooling systems are placed. There are several dipole and quadrupole magnets in these sections to avoid beam dispersion. Antiprotons are cooled and decelerated in a single 100-second cycle in the AD synchrotron.
Production of antiprotons
AD requires about protons of momentum 26 GeV/c to produce antiprotons per minute. The high-energy protons coming from the proton synchrotron are made to collide with a thin, highly dense rod of iridium metal of 3-mm diameter and 55 cm in length. The iridium rod embedded in graphite and enclosed by a sealed water-cooled titanium case remains intact. But the collisions create a lot of energetic particles, including the antiprotons. A magnetic bi-conical aluminum horn-type lens collects the antiprotons emerging from the target. This collector takes in the antiprotons, and they are separated from other particles using deflection through electromagnetic forces.
Deceleration, accumulation and cooling down
Radio frequency (RF) systems decelerate and bunch the cooled antiprotons at 3.5 GeV/c. Numerous magnets inside focus the randomly moving antiprotons into a collimated beam and bend the beam. Simultaneously the electric fields further decelerate them.
Stochastic cooling and electron cooling stages designed inside the AD decrease the energy of beams as well as limit the antiproton beam from any significant distortions. Stochastic cooling is applied for antiprotons at 3.5 GeV/c and then at 2 GeV/c, followed by electron cooling at 0.3 GeV/c and at 0.1 GeV/c. The final output beam has a momentum of 0.1 GeV/c (kinetic energy equal to 5.3 MeV). These antiprotons move with the speed of about one-tenth that of light.
But the experiments need much lower energy beams (3 to 5 keV), so the antiprotons are decelerated again to ~5 keV using degrader foils. This step accounts for the loss of 99.9% of the antiprotons. The collected antiprotons are then temporarily stored in Penning traps before being fed into the various AD experiments. The Penning traps can also form antihydrogen by combining antiprotons with positrons.
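The 5.3 MeV kinetic energy and "one-tenth of light speed" figures quoted above can be checked with relativistic kinematics; the short sketch below (plain Python, taking the antiproton rest mass as 938.272 MeV/c²) recovers both:

```python
import math

m = 938.272   # antiproton rest mass, MeV/c^2
p = 100.0     # final AD beam momentum, MeV/c (0.1 GeV/c)

E = math.sqrt(p**2 + m**2)  # total relativistic energy, MeV
ke = E - m                  # kinetic energy, MeV
beta = p / E                # speed as a fraction of c

print(f"KE = {ke:.1f} MeV, beta = {beta:.2f}")  # KE = 5.3 MeV, beta = 0.11
```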
ELENA
ELENA (Extra Low ENergy Antiproton) is a 30 m hexagonal storage ring situated inside the AD complex. It is designed to further decelerate the antiproton beam to an energy of 0.1 MeV for more precise measurements. The first beam circulated in ELENA on 18 November 2016. GBAR was the first experiment to use a beam from ELENA, with the rest of the AD experiments to follow suit after LS2, when beam transfer lines from ELENA will have been laid to all the experiments using the facility.
AD experiments
ATHENA
ATHENA, AD-1 experiment, was an antimatter research project that took place at the Antiproton Decelerator. In August 2002, it was the first experiment to produce 50,000 low-energy antihydrogen atoms, as reported in Nature. In 2005, ATHENA was disbanded and many of the former members worked on the subsequent ALPHA experiment.
ATRAP
The Antihydrogen Trap (ATRAP) collaboration, responsible for the AD-2 experiment, is a continuation of the TRAP collaboration, which started taking data for the PS196 experiment in 1985. The TRAP experiment (PS196) pioneered cold antiprotons and cold positrons, and was the first to make the ingredients of cold antihydrogen interact. Later, ATRAP members pioneered accurate hydrogen spectroscopy and observed the first hot antihydrogen atoms.
ASACUSA
Atomic Spectroscopy and Collisions Using Slow Antiprotons (ASACUSA), AD-3, is an experiment testing for CPT-symmetry by laser spectroscopy of antiprotonic helium and microwave spectroscopy of the hyperfine structure of antihydrogen. It compares matter and antimatter using antihydrogen and antiprotonic helium and looks into matter-antimatter collisions. It also measures atomic and nuclear cross-sections of antiprotons on various targets at extremely low energies.
ACE
The Antiproton Cell Experiment (ACE), AD-4, started in 2003. It aims to assess fully the effectiveness and suitability of antiprotons for cancer therapy. The results showed that the number of antiprotons required to break down tumor cells was about one quarter of the number of protons required, and that the effect on healthy tissues was significantly less. Although the experiment ended in 2013, further research and validation still continue, owing to the long procedures involved in bringing in novel medical treatments.
ALPHA
The Antihydrogen Laser Physics Apparatus (ALPHA), the AD-5 experiment, is designed to trap neutral antihydrogen in a magnetic trap, and conduct experiments on them. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series). The ALPHA collaboration consists of some former members of the ATHENA collaboration (the first group to produce cold antihydrogen, in 2002), as well as a number of new members.
AEgIS
AEgIS, Antimatter Experiment: gravity, Interferometry, Spectroscopy, AD-6, is an experiment at the Antiproton Decelerator.
AEgIS would attempt to determine if gravity affects antimatter in the same way it affects normal matter by testing its effect on an antihydrogen beam. The first phase of the experiment created antihydrogen using the charge exchange reaction between antiprotons from the Antiproton Decelerator (AD) and positronium, producing a pulse of antihydrogen atoms. These atoms are sent through a series of diffraction gratings, ultimately hitting a surface and thus annihilating. The points where the antihydrogen annihilates are measured with a precise detector. Areas behind the gratings are shadowed, while those behind the slits are not. The annihilation points reproduce a periodic pattern of light and shadowed areas. Using this pattern, one can measure how far atoms of different velocities are vertically displaced by gravity during their horizontal flight, and therefore determine the Earth's gravitational force on antihydrogen.
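The vertical displacement in question follows from simple free fall during the time of flight; in the plainest ballistic approximation (notation chosen here for illustration, not taken from the collaboration's papers):

$$\Delta y = \frac{1}{2} g t^2, \qquad t = \frac{L}{v},$$

where $L$ is the horizontal flight distance and $v$ the horizontal speed of an antihydrogen atom, so slower atoms acquire a larger gravitational displacement, which is why the measurement sorts atoms by velocity.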
GBAR
GBAR (Gravitational Behaviour of Antihydrogen at Rest), the AD-7 experiment, is a multinational collaboration at the Antiproton Decelerator of CERN.
The GBAR project aims to measure the free-fall acceleration of ultra-cold neutral antihydrogen atoms in the terrestrial gravitational field. By measuring the free-fall acceleration of antihydrogen and comparing it with that of normal hydrogen, GBAR tests the equivalence principle proposed by Albert Einstein, which says that the gravitational force on a particle is independent of its internal structure and composition.
BASE
BASE (Baryon Antibaryon Symmetry Experiment), AD-8, is a multinational collaboration at the Antiproton Decelerator of CERN.
The goal of the Japanese/German BASE collaboration is the high-precision investigation of the fundamental properties of the antiproton, namely the charge-to-mass ratio and the magnetic moment. Single antiprotons are stored in an advanced Penning trap system, which has a double-trap system at its core, for high-precision frequency measurements and for single-particle spin-flip spectroscopy. By measuring the spin-flip rate as a function of the frequency of an externally applied magnetic drive, a resonance curve is obtained. Together with a measurement of the cyclotron frequency, the magnetic moment is extracted.
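The extraction sketched above rests on a standard relation, stated here in generic notation (the source does not give the formula): the ratio of the spin-precession (Larmor) frequency to the cyclotron frequency, measured in the same magnetic field, yields the g-factor directly,

$$\frac{g}{2} = \frac{\nu_L}{\nu_c},$$

so the magnetic moment in units of the nuclear magneton follows from a frequency ratio alone, without needing an absolute calibration of the trap's magnetic field.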
PUMA
The PUMA (antiProton Unstable Matter Annihilation experiment), AD-9, aims to look into the quantum interactions and annihilation processes between the antiprotons and the exotic slow-moving nuclei. PUMA's experimental goals require about one billion trapped antiprotons made by AD and ELENA to be transported to the ISOLDE-nuclear physics facility at CERN, which will supply the exotic nuclei. Antimatter has never been transported out of the AD facility before. Designing and building a trap for this transportation is the most challenging aspect for the PUMA collaboration.
See also
Gravitational interaction of antimatter
References
External links
GBAR experiment
Beams at AD
Alpha experiment results
AD's Antiproton source
AD website
ATHENA website
ATRAP website
ASACUSA website
ALPHA website
AEgIS website
Record for Antiproton Decelerator on INSPIRE-HEP
Further reading
Antimatter
CERN accelerators
Research projects
Particle experiments
Particle physics facilities
CERN facilities
Physics experiments | Antiproton Decelerator | Physics | 2,257
1,350,865 | https://en.wikipedia.org/wiki/Affine%20Lie%20algebra | In mathematics, an affine Lie algebra is an infinite-dimensional Lie algebra that is constructed in a canonical fashion out of a finite-dimensional simple Lie algebra. Given an affine Lie algebra, one can also form the associated affine Kac-Moody algebra, as described below. From a purely mathematical point of view, affine Lie algebras are interesting because their representation theory, like representation theory of finite-dimensional semisimple Lie algebras, is much better understood than that of general Kac–Moody algebras. As observed by Victor Kac, the character formula for representations of affine Lie algebras implies certain combinatorial identities, the Macdonald identities.
Affine Lie algebras play an important role in string theory and two-dimensional conformal field theory due to the way they are constructed: starting from a simple Lie algebra , one considers the loop algebra, , formed by the -valued functions on a circle (interpreted as the closed string) with pointwise commutator. The affine Lie algebra is obtained by adding one extra dimension to the loop algebra and modifying the commutator in a non-trivial way, which physicists call a quantum anomaly (in this case, the anomaly of the WZW model) and mathematicians a central extension. More generally,
if σ is an automorphism of the simple Lie algebra associated to an automorphism of its Dynkin diagram, the twisted loop algebra consists of -valued functions f on the real line which satisfy
the twisted periodicity condition . Their central extensions are precisely the twisted affine Lie algebras. The point of view of string theory helps to understand many deep properties of affine Lie algebras, such as the fact that the characters of their representations transform amongst themselves under the modular group.
Affine Lie algebras from simple Lie algebras
Definition
If is a finite-dimensional simple Lie algebra, the corresponding
affine Lie algebra is constructed as a central extension of the loop algebra , with one-dimensional center
As a vector space,
where is the complex vector space of Laurent polynomials in the indeterminate t. The Lie bracket is defined by the formula
for all and , where is the Lie bracket in the Lie algebra and is the Cartan-Killing form on
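The formula lost from the text above can be stated in standard notation (a reconstruction under the usual conventions, with $\langle\cdot,\cdot\rangle$ the Killing form on $\mathfrak{g}$ and $c$ the one-dimensional central element):

$$[\,a\otimes t^{m} + \alpha c,\; b\otimes t^{n} + \beta c\,] \;=\; [a,b]\otimes t^{m+n} \;+\; m\,\delta_{m+n,0}\,\langle a,b\rangle\,c,$$

for $a, b \in \mathfrak{g}$ and $\alpha, \beta \in \mathbb{C}$; the second term is the central extension that distinguishes the affine algebra from the plain loop algebra.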
The affine Lie algebra corresponding to a finite-dimensional semisimple Lie algebra is the direct sum of the affine Lie algebras corresponding to its simple summands. There is a distinguished derivation of the affine Lie algebra defined by
The corresponding affine Kac–Moody algebra is defined as a semidirect product by adding an extra generator d that satisfies [d, A] = δ(A).
Constructing the Dynkin diagrams
The Dynkin diagram of each affine Lie algebra consists of that of the corresponding simple Lie algebra plus an additional node, which corresponds to the addition of an imaginary root. Of course, such a node cannot be attached to the Dynkin diagram in just any location, but for each simple Lie algebra there exists a number of possible attachments equal to the cardinality of the group of outer automorphisms of the Lie algebra. In particular, this group always contains the identity element, and the corresponding affine Lie algebra is called an untwisted affine Lie algebra. When the simple algebra admits automorphisms that are not inner automorphisms, one may obtain other Dynkin diagrams and these correspond to twisted affine Lie algebras.
Classifying the central extensions
The attachment of an extra node to the Dynkin diagram of the corresponding simple Lie algebra corresponds to the following construction. An affine Lie algebra can always be constructed as a central extension of the loop algebra of the corresponding simple Lie algebra. If one wishes to begin instead with a semisimple Lie algebra, then one needs to centrally extend by a number of elements equal to the number of simple components of the semisimple algebra. In physics, one often considers instead the direct sum of a semisimple algebra and an abelian algebra . In this case one also needs to add n further central elements for the n abelian generators.
The second integral cohomology of the loop group of the corresponding simple compact Lie group is isomorphic to the integers. Central extensions of the affine Lie group by a single generator are topologically circle bundles over this free loop group, which are classified by a cohomology class known as the first Chern class of the fibration. Therefore, the central extensions of an affine Lie group are classified by a single parameter k which is called the level in the physics literature, where it first appeared. Unitary highest weight representations of the affine compact groups only exist when k is a natural number. More generally, if one considers a semi-simple algebra, there is a central charge for each simple component.
Structure
Cartan–Weyl basis
As in the finite case, determining the Cartan–Weyl basis is an important step in determining the structure of affine Lie algebras.
Fix a finite-dimensional, simple, complex Lie algebra with Cartan subalgebra and a particular root system . Introducing the notation , one can attempt to extend a Cartan–Weyl basis for to one for the affine Lie algebra, given by , with forming an abelian subalgebra.
The eigenvalues of and on are and respectively and independently of . Therefore the root is infinitely degenerate with respect to this abelian subalgebra. Appending the derivation described above to the abelian subalgebra turns the abelian subalgebra into a Cartan subalgebra for the affine Lie algebra, with eigenvalues for
Killing form
The Killing form can almost be completely determined using its invariance property. Using the notation for the Killing form on and for the Killing form on the affine Kac–Moody algebra,
where only the last equation is not fixed by invariance and instead chosen by convention. Notably, the restriction of to the subspace gives a bilinear form with signature .
Write the affine root associated with as . Defining , this can be rewritten
The full set of roots is
Then is unusual as it has zero length: where is the bilinear form on the roots induced by the Killing form.
Affine simple root
In order to obtain a basis of simple roots for the affine algebra, an extra simple root must be appended, and is given by
where is the highest root of , using the usual notion of height of a root. This allows definition of the extended Cartan matrix and extended Dynkin diagrams.
Representation theory
The representation theory for affine Lie algebras is usually developed using Verma modules. Just as in the case of semi-simple Lie algebras, these are highest weight modules. There are no finite-dimensional representations; this follows from the fact that the null vectors of a finite-dimensional Verma module are necessarily zero, whereas those for the affine Lie algebras are not. Roughly speaking, this follows because the Killing form is Lorentzian in the directions spanned by the central element and the derivation, which are thus sometimes called "lightcone coordinates" on the string. The "radially ordered" current operator products can be understood to be time-like normal ordered by taking the time-like direction along the string world sheet and the spatial direction.
Vacuum representation of rank k
The representations are constructed in more detail as follows.
Fix a Lie algebra and basis . Then is a basis for the corresponding loop algebra, and is a basis for the affine Lie algebra .
The vacuum representation of rank , denoted by where , is the complex representation with basis
and where the action of on is given by:
Affine Vertex Algebra
The vacuum representation in fact can be equipped with vertex algebra structure, in which case it is called the affine vertex algebra of rank . The affine Lie algebra naturally extends to the Kac–Moody algebra, with the differential represented by the translation operator in the vertex algebra.
Weyl group and characters
The Weyl group of an affine Lie algebra can be written as a semi-direct product of the Weyl group of the zero-mode algebra (the Lie algebra used to define the loop algebra) and the coroot lattice.
The Weyl character formula of the algebraic characters of the affine Lie algebras generalizes to the Weyl-Kac character formula. A number of interesting constructions follow from these. One may construct generalizations of the Jacobi theta function. These theta functions transform under the modular group. The usual denominator identities of semi-simple Lie algebras generalize as well; because the characters can be written as "deformations" or q-analogs of the highest weights, this led to many new combinatoric identities, including many previously unknown identities for the Dedekind eta function. These generalizations can be viewed as a practical example of the Langlands program.
Applications
Due to the Sugawara construction, the universal enveloping algebra of any affine Lie algebra has the Virasoro algebra as a subalgebra. This allows affine Lie algebras to serve as symmetry algebras of conformal field theories such as WZW models or coset models. As a consequence, affine Lie algebras also appear in the worldsheet description of string theory.
Example
The Heisenberg algebra defined by generators satisfying commutation relations
can be realized as the affine Lie algebra .
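In standard notation (reconstructed here, since the source formulas were stripped), the generators $a_n$, $n \in \mathbb{Z}$, together with a central element $c$, satisfy

$$[a_n, a_m] = n\,\delta_{n+m,0}\,c, \qquad [a_n, c] = 0,$$

which is precisely the central extension one obtains by affinizing the one-dimensional abelian Lie algebra with respect to the obvious bilinear form.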
References
Lie algebras
Representation theory | Affine Lie algebra | Mathematics | 1,913 |
50,275,381 | https://en.wikipedia.org/wiki/Provirus%20silencing | Provirus silencing, or proviral silencing, is the repression of expression of proviral genes in cells.
A provirus is a viral DNA that has been incorporated into the chromosome of a host cell, often by retroviruses such as HIV.
Endogenous retroviruses are always in the provirus state in the host cell and replicate through reverse transcription. By integrating their genome into the host cell genome, they make use of the host cell's transcription and translation machinery to achieve their own propagation. This often has a harmful impact on the host. However, in recent gene therapy techniques, retroviruses are often used to deliver desired genes, instead of their own viral genome, into the host genome. As such, researchers are interested in the host cell's mechanisms for silencing such gene expression, to find out, firstly, how the host cell manages provirus transcription to eliminate the deleterious effects of retroviruses, and secondly, how researchers can ensure stable and long-term expression of retrovirus-mediated gene transfer.
Mechanisms and Pathways
It has been found that the level of transcription of integrated retroviruses depends on both genetics and chromatin remodeling at the site of integration. Mechanisms such as DNA methylation and histone modification appear to play important roles in the suppression of provirus transcription, such that proviral activity can be silenced. The location of integration also plays a crucial role in the level of silencing observed; one example is integration into H3K4me3 regions, areas of the genome wrapped around histone H3 proteins that are tri-methylated at the fourth lysine residue. It has been reported that the manipulation or insertion of CpG dinucleotide islands can disrupt proviral silencing. Silencing frequently begins with the binding of a zinc finger DNA-binding protein to the primer-binding site, targeting the expression of the provirus itself rather than the integrated sequence. The protein then recruits other enzymes that complete the silencing through DNA or histone methylation.
However, studies within the field do note that the patterns are species-specific with regards to the virus in question, thus caution should be taken when attempting to generalize to all cases. Additionally, many studies focus on proviral silencing within murine embryonic cells as opposed to human cells. Some researchers also posit that proviral silencing may be more complex than a simple question of whether the virus is repressed or not. They suggest that proviruses played more of a role with transcriptional regulation as they integrated and evolved with the host sequence over time, occasionally serving as promoters or enhancers.
Challenges with Effective Silencing
It has been shown that the orientation of proviruses can have dramatic effects on their expression. With regard to HIV-1, the viral genome is frequently inserted into the introns of active genes. Perhaps unsurprisingly, when the viral genome is oriented in the same direction as the host gene, expression is increased. The converse is also true, with proviruses oriented in the opposite direction to the host gene showing reduced expression. This poses challenges for effective therapeutics because it can lead to large variations in detectability, and to struggles for physicians attempting to manage HIV latency. HIV reservoirs, or cells that are infected with HIV but not actively producing viral particles, additionally contribute to this problem. CD4+ T cells are considered to be the main reservoir and are reported to have a half-life of over three years. While these cells effectively silence the expression of HIV temporarily, the result is that the condition is essentially impossible to eradicate.
Additionally, DNA methylation has been linked to aging and geriatric disease. Increases in DNA methylation have been linked to diseases including various types of cancer, Alzheimer's disease, Type 2 Diabetes, and cardiovascular disease. From a proviral silencing standpoint, this does make logical sense as individuals would naturally accumulate more proviruses over their lifetimes. This does pose a slight concern because as groups have researched the utility of DNA methylation clocks to predict age, there is the risk that treatments which treat DNA methylation with the goal of reducing biological age inadvertently result in the increase of proviral expression within their patients. Additionally, it must be emphasized that most of the work in this field is correlational rather than causational.
Managing Proviral Silencing in a Gene Therapy Context
The expression of transgenes is often hindered by mechanisms associated with proviral silencing. This naturally proves to be an issue when attempting to create longer-lasting gene therapies or transgenic cell lines. Most methods center around choosing a specific locus of integration.
Recently, researchers have demonstrated that targeted integration of a lentiviral payload using homology-directed repair can result in stable integration and expression. In this approach, CRISPR-associated ribonucleoprotein complexes (CRISPR RNP complexes) are used to create double-stranded breaks upstream of an endogenously promoted essential gene. The payload is designed so that it contains the transgene flanked by two regions of DNA that are homologous/identical to the regions upstream of the gene, enabling it to integrate in the same reading frame as the gene. This approach is similar to other strategies that seek to integrate in areas that are less susceptible to silencing through more mechanistic methods.
References
Genetic engineering
Viral genes | Provirus silencing | Chemistry,Engineering,Biology | 1,141 |
30,926,764 | https://en.wikipedia.org/wiki/Dose%20rate | A dose rate is the quantity of radiation absorbed or delivered per unit time.
It is often indicated in micrograys per hour (μGy/h) or as an equivalent dose rate ḢT in rems per hour (rem/hr) or sieverts per hour (Sv/h).
Dose and dose rate are used to measure different quantities, in the same way that distance and speed measure different quantities. When considering stochastic radiation effects, only the total dose is relevant: each incremental unit of dose increases the probability that the stochastic effect occurs. When considering deterministic effects, the dose rate also matters: the total dose can be above the threshold for a deterministic effect, yet if it is spread out over a long enough period of time, the effect is not observed. Consider sunburn, a deterministic effect:
exposure to bright sunlight for only ten minutes at a high UV index, that is to say a high average dose rate,
can turn the skin red and painful. The same total amount of energy from indirect sunlight spread out over several years - a low average dose rate - would not cause a sunburn at all, although it may still cause skin cancer.
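The relationship between dose rate and total dose is a simple integral over exposure time; as a numerical illustration (the rate below is a made-up value, not taken from the article):

```python
dose_rate = 0.2            # hypothetical dose rate, microgray per hour
hours = 24 * 365           # one year of continuous exposure
total_dose = dose_rate * hours  # accumulated absorbed dose, microgray

print(total_dose)          # 1752.0 microgray, i.e. about 1.75 mGy for the year
```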
References | Dose rate | Mathematics | 252 |
15,501,978 | https://en.wikipedia.org/wiki/List%20of%20waves%20named%20after%20people | This is a list of waves named after people (eponymous waves).
See also
Eponym
List of eponymous laws
Waves
Scientific phenomena named after people
References
Waves
Fluid dynamics
Water waves
Waves in plasmas
Mountain meteorology
Atmospheric dynamics | List of waves named after people | Physics,Chemistry,Engineering | 46 |
8,320,597 | https://en.wikipedia.org/wiki/Operation%20Moonwatch | Operation Moonwatch (also known as Project Moonwatch and, more simply, as Moonwatch) was an amateur science program formally initiated by the Smithsonian Astrophysical Observatory (SAO) in 1956. The SAO organized Moonwatch as part of the International Geophysical Year (IGY) which was probably the largest single scientific undertaking in history. Its initial goal was to enlist the aid of amateur astronomers and other citizens who would help professional scientists spot the first artificial satellites. Until professionally staffed optical tracking stations came on-line in 1958, this network of amateur scientists and other interested citizens played a critical role in providing crucial information regarding the world's first satellites.
Origins of Moonwatch
Moonwatch's origins can be traced to two sources. In the United States, there was a thriving culture of amateur scientists, including thousands of citizens who pursued astronomy as an avocation. During the Cold War, the United States also encouraged thousands of citizens to take part in the Ground Observer Corps, a nationwide program to spot Soviet bombers. Moonwatch brought together these two activities and attitudes, melding curiosity and vigilance into a thriving activity for citizens. Moonwatch, in other words, was an expression of 1950s popular culture, properly understood within the context of the Cold War.
Moonwatch was the brainchild of Harvard astronomer Fred L. Whipple. In 1955, as the recently appointed director of the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, Whipple proposed that amateurs could play a vital role in efforts to track the first satellites. He overcame the objections of colleagues who doubted ordinary citizens could do the job or who wanted the task for their own institutions. Eventually, Whipple carved out a place for amateurs in the IGY.
Moonwatch's members
In the late 1950s, thousands of teenagers, housewives, amateur astronomers, school teachers, and other citizens served on Moonwatch teams around the globe. Initially conceived as a way for citizens to participate in science and as a supplement to professionally staffed optical and radio tracking stations, Moonwatchers around the world found themselves an essential component of the professional scientists’ research program. Using specially designed telescopes, hand-built or purchased from vendors like Radio Shack, scores of Moonwatchers nightly monitored the skies. Their prompt response was aided by the extensive training they had done by spotting pebbles tossed in the air, registering the flight of moths, and participating in national alerts organized by the Civil Air Patrol.
Once professional scientists had accepted the idea that ordinary citizens could spot satellites and contribute to legitimate scientific research, Whipple and his colleagues organized amateurs around the world. Citizens formed Operation Moonwatch teams in towns and cities all around the globe, built their own equipment, and courted sponsors. In many cases, Moonwatch was not just a fad but an expression of real interest in science. By October 1957, Operation Moonwatch had some 200 teams ready to go into action, including observers in Hawaii and Australia.
How Moonwatch worked
Whipple envisioned a global network of specially designed instruments that could track and photograph satellites. This network, aided by a corps of volunteer satellite spotters and a computer at the MIT Computation Center, would establish ephemerides – predictions of where a satellite will be at particular times. The instruments at these stations were eventually designed by Dr. James G. Baker and Joseph Nunn and hence known as Baker-Nunn cameras. Based on a series of super-Schmidt wide-angle telescopes and strategically placed around the globe at 12 locations, the innovative cameras could track rapidly moving targets while simultaneously viewing large swaths of the sky.
From the start, Whipple planned that the professionally staffed Baker-Nunn stations would be complemented by teams of dedicated amateurs. Amateur satellite spotters would inform the Baker-Nunn stations as to where to look, an important task given that scientists working on the Vanguard program likened finding a satellite in the sky to finding a golf ball tossed out of a jet plane. Amateur teams would relay the information back to the SAO in Cambridge where professional scientists would use it to generate accurate satellite orbits. At this point, professionals at the Baker-Nunn stations would take over the full-time task of photographing them.
During the IGY
Sputnik 1's sudden launch was followed less than a month later with the Soviets orbiting Sputnik 2 and the dog Laika. It was Moonwatch teams, networked around the world, who provided tracking information needed by scientists in Western nations. For the opening months of the Space Age, members of Moonwatch were the only organized worldwide network that was prepared to spot and help track satellites. The information they provided was complemented by the radio tracking program called Minitrack the United States Navy operated as well as some information from amateur radio buffs.
In many cases, Moonwatch teams also had the responsibility of communicating news of Sputnik and the first American satellites to the public. The public responded, in turn, with infectious enthusiasm as local radio stations aired times to spot satellites and local and national newspapers ran hundreds of articles that described the nighttime activities of Moonwatchers.
Moonwatch caught the attention of those citizens interested in science or the Space Race during the late 1950s and much of the general public as well. Newspapers and popular magazines featured stories about Moonwatch regularly; dozens of articles appeared in the Los Angeles Times, The New Yorker, and the New York Times alone. In the U.S., local businesses sponsored teams with monikers like Spacehounds and The Order of Lunartiks. Meanwhile, Moonwatch teams in Peru, Japan, Australia, and even the Arctic regularly sent their observations to the Smithsonian.
Moonwatch complemented the professional system of satellite tracking stations that Fred Whipple organized around the globe. These two networks – one composed of amateurs and the other of seasoned professionals – helped further Whipple's personal goals of expanding his own astronomical empire. Operation Moonwatch was the most successful amateur activity of the IGY and it became the public face of a satellite tracking network that expanded the Smithsonian's global reach. Whipple used satellite tracking as a gateway for his observatory to participate in new research opportunities that appeared in the early years of space exploration.
In February 1958, President Dwight D. Eisenhower publicly thanked the SAO, Fred Whipple, and the global corps of satellite spotters that comprised Moonwatch for their efforts in tracking the first Soviet and American satellites.
Moonwatch after the IGY
Even after the IGY ended, the Smithsonian maintained Operation Moonwatch. Hundreds of dedicated amateur scientists continued to help NASA and other agencies track satellites. Their observations often rivaled those of professional tracking stations, blurring the boundary between professional and amateur. Moonwatch members and the Smithsonian were important contributors to US Department of Defense satellite tracking research and development efforts, 1957–1961; see Project Space Track.
Moonwatch continued long after the IGY ended in 1958. In fact, the Smithsonian operated Moonwatch until 1975, making it one of the longest-running amateur science activities ever. As the fad of satellite spotting passed, the Smithsonian refashioned Operation Moonwatch to perform new functions. It encouraged teams of dedicated amateurs to contribute increasingly precise data for satellite tracking. Moonwatchers adapted to the needs of the Smithsonian through the activities of "hard core" groups in places like Walnut Creek, California. Throughout the 1960s, the Smithsonian gave them ever more challenging assignments such as locating extremely faint satellites and tracking satellites as they re-entered the Earth's atmosphere.
At times, the precise observations and calculations of dedicated skywatchers surpassed the work of professionals.
One of the most notable activities of Moonwatchers after the IGY was the observation of Sputnik 4 when it reentered the atmosphere in September 1962. Moonwatchers and other amateur scientists near Milwaukee, Wisconsin observed the flaming re-entry, and their observations eventually led to the recovery and analysis of several fragments from the Soviet satellite.
Moonwatch's legacy
Moonwatch affected the lives of participants long after they stopped looking for satellites. When the Smithsonian discontinued the program in 1975, one long-time Moonwatcher compared his participation to "winning the Medal of Honor." Moonwatch inspired some future scientists, for example, James A. Westphal, a Moonwatcher from Oklahoma, who eventually helped design instruments for the Hubble Space Telescope at Caltech. The program boosted science programs at many schools throughout the country and helped revitalize amateur science in the United States.
The United States Space Surveillance Network and other modern tracking systems are professional and automated, but amateurs remain active in satellite watching.
References
Further reading
Gavaghan, Helen. (1998) Something New Under the Sun: Satellites and the Beginning of the Space Age, Copernicus, pp. 38–42, 49
Hayes, E. Nelson. (1968) Trackers of the Skies. Cambridge, Massachusetts: Howard A. Doyle Publishing Co.
McCray, W. Patrick. (2008) Keep Watching the Skies! The Story of Operation Moonwatch and the Dawn of the Space Age, Princeton University Press.
External links
Smithsonian Astronomers Keep Hectic Pace – The Harvard Crimson
The IGY Period – University of Hawaii
Role of NAS and TPESP, 1955–1956 – NASA
The tracking systems – NASA
Eyes on the Sky – Xavier University
Tom Van Flandern and Victor Slabinski – American Institute of Physics
Citizen Science, Old-School Style: The True Tale of Operation Moonwatch – Universe Today
Observational astronomy
Scientific observation
Citizen science | Operation Moonwatch | Astronomy | 1,903 |