id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
11,128,355 | https://en.wikipedia.org/wiki/Polystigma%20fulvum | Polystigma fulvum is a plant pathogen infecting almonds.
References
Fungal tree pathogens and diseases
Fruit tree diseases
Phyllachorales
Fungus species | Polystigma fulvum | [
"Biology"
] | 38 | [
"Fungi",
"Fungus species"
] |
11,128,359 | https://en.wikipedia.org/wiki/Pseudoseptoria%20donacis | Pseudoseptoria donacis is an ascomycete fungus that is a plant pathogen infecting barley, rye and wheat.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Barley diseases
Rye diseases
Wheat diseases
Enigmatic Ascomycota taxa
Fungus species | Pseudoseptoria donacis | [
"Biology"
] | 63 | [
"Fungi",
"Fungus species"
] |
11,128,365 | https://en.wikipedia.org/wiki/Puccinia%20cacabata | Puccinia cacabata (southwestern cotton rust) is a fungal species and plant pathogen that causes rust on cotton.
It was originally found on the leaves of Chloris ciliata (fringed windmill grass) in Bolivia.
Description
The first symptom on cotton is the appearance of small, somewhat inconspicuous, pale-green lesions. The lesions then develop into bright yellow spermogonial (conceptacle containing spermatia) pustules, usually on the upper leaf surfaces of the host plant. Spermogonial pustules may also appear on any of the above-ground plant parts, such as the stems. Within about 10 days of spermogonium formation, cup-like aecia (spore reproductive structures) erupt through the lower leaf epidermis. The aecia are large and easily observed. They appear as orange-yellow, circular, slightly raised lesions on the lower leaf surfaces, bracts, green bolls and stems of cotton plants. Severe infections may cause defoliation and dwarfing of bolls. Infection of cotton seedlings may cause death. The first symptoms on grama grass are small, pale-brown uredinial lesions on the leaves. The telia are produced on grama grass either in the uredinia (pustules) or separately as dark-brown to black raised pustules.
See also
List of Puccinia species
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Cotton diseases
cacabata
Fungi described in 1925
Fungus species | Puccinia cacabata | [
"Biology"
] | 332 | [
"Fungi",
"Fungus species"
] |
11,128,372 | https://en.wikipedia.org/wiki/Puccinia%20schedonnardii | Puccinia schedonnardii is a basidiomycete fungus that affects cotton. More commonly known as a “rust,” this pathogen typically affects cotton leaves, which can decrease the quality of the boll at time of harvest. As large percentages of cotton in the United States are resistant to various rust varieties, there is little economic importance to this disease. In places where rust is prevalent, however, growers could see up to a 50% reduction in yield due to rust infection.
Hosts and symptoms
For Puccinia schedonnardii the host range is specific to cotton, but not to any particular cotton species. Alternate hosts are necessary to complete the life cycle and include many types of grama grasses. Symptoms are similar to those of other rust species. First appearing as small yellow pustules on leaves, bolls, and stems, the spots then transform into larger, orange-red pustules which release aeciospores. Rust lesions can cause leaves or stems to weaken and break or fall off, resulting in decreased photosynthetic ability and extreme difficulty during harvest. Symptoms on the alternate host grasses are small, oval, red-brown (rust-colored) powdery lesions, which release uredospores.
Disease cycle and pathogenesis
The disease cycle of Puccinia schedonnardii does not differ from other rust disease cycles. The pathogen is heteroecious and exhibits a polycyclic disease cycle. Puccinia schedonnardii overwinters as teliospores that are produced in telia on the alternate host. In the spring, the teliospores germinate to produce basidiospores. The basidiospores are then windblown to the cotton host, where they enter via stomata. When basidiospores germinate, they produce a mycelium from which flask-shaped pycnia as well as receptive hyphae are formed. Development then pauses until the pycniospores produced by the pycnia fertilize receptive hyphae of a different mating type. Over a period of five to ten days, the dikaryotic mycelium formed by the joining of the receptive hypha and the pycniospore grows through the cotton leaf to produce aecia, the pustules seen on the leaves of the cotton. When conditions are right and adequate moisture is achieved, aeciospores are released from the aecia. These aeciospores land on the alternate grass host and infect it via a germ tube. Eventually, a uredinium is formed from this germ tube. The uredospores released by the uredinium can then do one of two things. As a polycyclic disease, the uredospores present an opportunity for secondary infection within a single season: they can reinfect grasses, producing new uredinia and more uredospores and potentially an epidemic, or they can give rise to overwintering teliospores, preparing for the cycle to begin again in the spring.
Environment
Rust infections proliferate under humid conditions and periods of prolonged wetness. Periods of rain followed by at least 12 hours of high humidity are needed for the disease to develop. Because it is moisture on the leaf surfaces that leads to infection and to spore release and germination, even well-drained soils are susceptible to rust outbreaks. However, evidence has shown that poorly-drained soils may have an increased incidence of fungal pathogens like rust, due to the increased relative humidity underneath the canopy.
Management and importance
Disease management can be difficult, as it is impossible to eliminate the alternate host grass species. However, including non-susceptible crops in a rotation can decrease the infection rate in future cotton crops. An application of mancozeb foliar fungicide can be used to prevent the disease, but little can be done after infection; the fungicide is effective only as a prevention tool, not as a treatment. Growers therefore must be proactive in preventing the disease, ensuring the proper fungicide applications are made before infection occurs.
As previously stated, this disease can be severe, but growers tend to plant resistant varieties in areas where rust has been prevalent. Resistance has been transferred into the commonly cultivated Gossypium hirsutum from G. arboreum and G. anomalum. Without resistance, Puccinia schedonnardii can cause a 50% yield loss, which is why resistant varieties are so widely used. Cotton is one of the most important textile fibers, and the United States is ranked third in cotton production. For that reason, a large share of agronomic research funding has gone to developing solutions to cotton rust.
See also
List of Puccinia species
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Cotton diseases
schedonnardii
Fungi described in 1888
Fungus species | Puccinia schedonnardii | [
"Biology"
] | 1,050 | [
"Fungi",
"Fungus species"
] |
11,128,376 | https://en.wikipedia.org/wiki/Puccinia%20striiformis%20var.%20striiformis | Puccinia striiformis is a fungal species and plant pathogen. It causes stripe rust on wheat, but has other hosts as well. The species is common in Europe and in more recent years has become a problem in Australia. Crop infections can cause losses of up to 40%, and the fungus will infect both winter wheat and spring wheat.
Taxonomy
The taxonomy of P. striiformis was revised by Liu & Hambleton in 2010. These strains – commonly called stripe rusts of wheat and other grasses – were treated as a species sensu lato and separated into four species based on molecular and morphological studies: one on Aegilops, Elymus, Hordeum and Triticum spp.; one on Poa spp.; one on Dactylis glomerata (which Liu & Hambleton believe to generalize to all Dactylis); and Puccinia gansensis (a sp. nov. they found on Achnatherum inebrians).
P. striiformis can greatly decrease wheat yield in northern Punjab and Khyber Pakhtunkhwa (NWFP).
See also
Wheat yellow rust
List of Puccinia species
References
Fungal plant pathogens and diseases
Wheat diseases
striiformis var. striiformis
Fungi described in 1854
Fungus species | Puccinia striiformis var. striiformis | [
"Biology"
] | 272 | [
"Fungi",
"Fungus species"
] |
11,128,384 | https://en.wikipedia.org/wiki/Pucciniastrum%20americanum | Pucciniastrum americanum is a plant pathogen infecting caneberries.
References
Fungal plant pathogens and diseases
Small fruit diseases
Pucciniales
Fungi described in 1908
Fungus species | Pucciniastrum americanum | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
11,128,390 | https://en.wikipedia.org/wiki/Pucciniastrum%20arcticum | Pucciniastrum arcticum is a plant pathogen infecting caneberries.
References
Fungal plant pathogens and diseases
Small fruit diseases
Pucciniales
Fungi described in 1895
Fungus species | Pucciniastrum arcticum | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
11,128,393 | https://en.wikipedia.org/wiki/Pyrenophora%20seminiperda | Pyrenophora seminiperda is a minor plant pathogen that causes leaf spots on many grasses. It is an important generalist grass seed pathogen which causes visible cylindrical masses of black fungal hyphae (stromata) to grow from infected seeds. Hence the common name "black fingers of death" It has been hypothesized that the fungus arrived in North America with invasive grasses from Eurasia. BFOD has been suggested as a method of biocontrol of the invasive cheatgrass, one of the most important invasive species in the USA. Various secondary metabolites of the fungus, including Cytochalasin B, Pyrenophoric Acid-B, and Spirostaphylotrichin W, appears to be responsible for the seed killing.
References
External links
Dr. Erin Mordecai discusses BFOD and Cheatgrass during a seminar at the Department of Ecology and Evolutionary Biology at University of Michigan, Feb. 16, 2017
Index Fungorum
USDA ARS Fungal Database
Pyrenophora
Fungal plant pathogens and diseases
Wheat diseases
Fungus species | Pyrenophora seminiperda | [
"Biology"
] | 223 | [
"Fungi",
"Fungus species"
] |
11,128,472 | https://en.wikipedia.org/wiki/MNOS%20%28operating%20system%29 | MNOS (MobilNaya Operatsionnaya Sistema, МобильНая Операционная Система (МНОС), or Portable Operating System) is a Unix-like operating system developed in the Soviet Union.
Overview
The system is derived from Version 6 Unix and then modified to incorporate features of Berkeley Software Distribution (BSD) Unix. From 1983 until 1986, it enjoyed popularity in the Soviet Union and other Eastern Bloc countries, due to its small size and faster performance than that of other Version 7 Unix (and later BSD Unix-based) alternatives.
Its development began in the IPK Minavtoproma in Moscow in 1981, and continued in cooperation with other institutes, including Kurchatov Institute. MNOS and its alternative, DEMOS version 1.x, were gradually merged from 1986 until 1990 resulting in the joint OS, DEMOS version 2.x. MNOS became the first fully bilingual version of Unix, and uses a proprietary 8-bit Cyrillic script character set, U-code, which was dropped in favor of KOI-8 in the process of merging with DEMOS.
The version qualifier RL stands for Rabochaya Loshadka ("workhorse").
See also
DEMOS
MOS (operating system)
References
Computing in the Soviet Union
Soviet inventions
Unix variants
Discontinued operating systems | MNOS (operating system) | [
"Technology"
] | 291 | [
"Computing in the Soviet Union",
"History of computing"
] |
11,128,474 | https://en.wikipedia.org/wiki/Septocyta%20ruborum | Septocyta ruborum is a species of fungus in the Ascomycota. Its taxonomic relationship to other taxa in the Ascomycota is unknown, and it has not been assigned with certainty to any order or family (incertae sedis). It is a plant pathogen and grows on Rubus laciniatus, R. nessensis, R. procerus and wild blackberry (R. fruticosus); it causes purple blotch or stem spot disease, also known as dieback of blackberries.
References
External links
Index Fungorum
Fungal plant pathogens and diseases
Small fruit diseases
Enigmatic Ascomycota taxa
Fungi described in 1834
Fungus species | Septocyta ruborum | [
"Biology"
] | 144 | [
"Fungi",
"Fungus species"
] |
11,128,483 | https://en.wikipedia.org/wiki/Slafractonia%20leguminicola | Slafractonia leguminicola (formerly Rhizoctonia leguminicola) is a fungus that is a plant pathogen that most often attaches itself to the Trifolium pratense or red clover. It is also called black patch disease. The infection is first seen as small black patches on the leaves of red clover (often on the bottom of the leaves first) and spreads to cover the entire plant, killing it. The hyphae of this fungus are white and web-like in appearance, just like those of other fungi. The consumption of the fungus can be harmful to any grazing animals, especially to horses and cows.
Effects of consumption
If Slafractonia leguminicola is consumed excessively by horses or cattle, the most noticeable symptom will be an increase in salivation (drool). Other symptoms include "eye discharge, bloat, frequent urination, watery diarrhea, reduced milk production, weight loss and abortion." The consumption must be exorbitant to cause these symptoms in such large animals, but if the pasture consists mainly of clover, and the hay also contains clover, the danger of more serious symptoms rises. This is known as slobbers, also slobbers syndrome, and is attributed in part to the alkaloid toxins slaframine and swainsonine.
References
External links
Index Fungorum
USDA ARS Fungal Database
Description of Black Patch Disease
Intoxication
Fungal plant pathogens and diseases
Fungi described in 1956
Fungus species | Slafractonia leguminicola | [
"Biology"
] | 314 | [
"Fungi",
"Fungus species"
] |
11,128,488 | https://en.wikipedia.org/wiki/Monilinia%20rubi | Monilinia rubi is a species of fungus in the family Sclerotiniaceae. The species is a plant pathogen causing "dry berry disease" of caneberries. The species was formerly known by the invalidly published name Rhizoctonia rubi.
References
Fungal plant pathogens and diseases
Small fruit diseases
Sclerotiniaceae
Fungi described in 2022
Fungus species | Monilinia rubi | [
"Biology"
] | 78 | [
"Fungi",
"Fungus species"
] |
11,128,493 | https://en.wikipedia.org/wiki/Rhizomorpha%20subcorticalis | Rhizomorpha subcorticalis is a species name that has been used to characterize certain fungal plant pathogen observations where the pathogen is evident only through mycelial cords ("rhizomorphs"). The species in question very likely also produces reproductive structures which would allow it to be situated in the normal taxonomic tree, especially if DNA analysis is available. A name like R. subcorticalis should only be used where such identification is impossible.
History
The species name was first validly published by Persoon in 1801, with acknowledgement of an earlier description by P. Micheli, describing the fungus as consisting of brownish-black branches and being found between bark and heartwood (which is the meaning of "subcorticalis"). Rhizomorpha fragilis is listed as a synonym, indicating that the "real" species may have been Armillaria mellea ("honey fungus"). Another given synonym is Clavaria phosphorea, which was described in 1797 by James Sowerby; the epithet phosphorea indicates that it glows in the dark, which is also true of Armillaria rhizomorphs. Someone has annotated Sowerby's plate illustrating the species with the word Rhizomorpha.
The name has been used for a crown and root rot disease of strawberry and avocado, where according to the American Phytopathological Society the fungus in question is indeed actually Armillaria mellea.
In his 1871 book on diseases of forest trees, Robert Hartig mentioned R. subcorticalis as well as R. fragilis, contrasting them as different species, the former being thinner and softer and having little tissue in the interior of the strands. Here, then, R. subcorticalis must represent something other than Armillaria mellea.
References
External links
USDA ARS Fungal Database
Fungal strawberry diseases
Avocado tree diseases
Fungi described in 1801
Enigmatic fungus taxa
Fungus species | Rhizomorpha subcorticalis | [
"Biology"
] | 414 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
11,128,498 | https://en.wikipedia.org/wiki/Rhizophydium%20graminis | Rhizophydium graminis is a plant pathogen infecting the roots of both monocots including wheat and other Poaceae, and a few dicots.
References
Fungal plant pathogens and diseases
Wheat diseases
Chytridiomycota
Fungus species | Rhizophydium graminis | [
"Biology"
] | 56 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
11,128,508 | https://en.wikipedia.org/wiki/Rhizopus%20circinans | Rhizopus circinans is a plant pathogen infecting almond, apricot and peach.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Stone fruit tree diseases
Mucoraceae
Fungus species | Rhizopus circinans | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,128,523 | https://en.wikipedia.org/wiki/Brasiliomyces%20malachrae | Brasiliomyces malachrae is a species of fungus in the family Erysiphaceae. It is a plant pathogen that grows on Gossypium, Lavatera assurgentiflora, Malachra capitata, Malvastrum coromandelianum, and species of Malvaceae. It is found in South America.
Taxonomy and phylogeny
Brasiliomyces malachrae was named after its origin country of Brazil, with the suffix "-myces" derived from the Greek "mykēt-", meaning "mushroom". When discovered, it was found growing on Malvastrum coromandelianum, using it as a host. B. malachrae was first depicted in "Scientific Survey of Porto Rico and the Virgin Islands, vol. VIII, Part 1. Botany of Porto Rico and the Virgin Islands. Mycology", authored by C.E. Chardon and F.J. Seaver.
Brasiliomyces malachrae belongs to the genus Brasiliomyces and is part of the family Erysiphaceae. Phylogenetic analyses using rRNA gene sequencing placed B. malachrae between the groups Golovinomyceteae and Erysiphe. It is closely related to B. chiangmaiensis, B. entadae, and B. setosus.
Morphology
Brasiliomyces malachrae forms white colonies of superficial mycelium on its host's leaves. These colonies grow scattered along the surface of the foliage, giving it a powdery mildew appearance. B. malachrae is an ascomycete, a group characterized by ascospores borne within an ascocarp. Powdery mildews form fruiting bodies known as chasmothecia (cleistothecia), which are considered overwintering organs. These organs give the fungus an advantage by allowing it to survive during the winter season, when low temperatures make survival difficult, or during a dry season, when resources are limited. In this species, every chasmothecium contains sac-like asci (singular: ascus). Each ascus contains 3–5 oval-shaped ascospores, with 5 being the most common.
Brasiliomyces is distinguished from all other Erysiphales by its "thin, semitransparent chasmothecia peridia", which are made of a single cell layer. In addition, it is ecologically unique in being the only powdery mildew that forms abundant ascocarps in tropical climates.
Brasiliomyces malachrae has two morphs, one sexual and one asexual. The asexual morph has hyaline ectophytic mycelium and catenescent conidia (singular: conidium); its mycelium is present on the leaves, petioles and stems of plants. The sexual morph forms white chasmothecia which cover the surface of its host's leaves.
Ecology
Brasiliomyces malachrae is an obligate biotroph of plants; powdery mildews as a group are known to infect some 10,000 angiosperm host species. It gets its energy and nutrients by parasitizing flowering plants related to false mallows (Malvaceae) and oak trees. Powdery mildews are plant pathogens known to cause polycyclic diseases which damage a plant's ability to photosynthesize. Such infection decreases plant growth and increases the rate at which the host's tissues deteriorate. Its geographic distribution is restricted to tropical areas, particularly in South America. B. malachrae is found in abundance in parts of Brazil, Venezuela, Chile and Puerto Rico.
Biology and relevance for humans
Brasiliomyces malachrae has an economic impact on human life because it infects wild and cultivated cotton. Like other plant pathogens, powdery mildews cause significant decreases in crop yields and lower the quality of the crops produced. It is unknown what other specific crops B. malachrae affects.
References
Erysiphales
Fungi described in 1926
Fungi of South America
Fungal plant pathogens and diseases
Fungus species
Taxa named by Fred Jay Seaver | Brasiliomyces malachrae | [
"Biology"
] | 899 | [
"Fungi",
"Fungus species"
] |
11,128,529 | https://en.wikipedia.org/wiki/Schizothyrium%20pomi | Schizothyrium pomi is a plant pathogen of the sooty blotch and flyspeck complex, infecting apple, pear and citrus trees and carnations.
It has been presumed to be the teleomorph of Zygophiala jamaicensis.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Apple tree diseases
Pear tree diseases
Fungal citrus diseases
Microthyriales
Fungus species | Schizothyrium pomi | [
"Biology"
] | 100 | [
"Fungi",
"Fungus species"
] |
11,128,544 | https://en.wikipedia.org/wiki/Sclerotinia%20sclerotiorum | Sclerotinia sclerotiorum is a plant pathogenic fungus and can cause a disease called white mold if conditions are conducive. S. sclerotiorum can also be known as cottony rot, watery soft rot, stem rot, drop, crown rot and blossom blight. A key characteristic of this pathogen is its ability to produce black resting structures known as sclerotia and white fuzzy growths of mycelium on the plant it infects. These sclerotia give rise to a fruiting body in the spring that produces spores in a sac which is why fungi in this class are called sac fungi (Ascomycota). This pathogen can occur on many continents and has a wide host range of plants. When S. sclerotiorum is onset in the field by favorable environmental conditions, losses can be great and control measures should be considered.
Hosts and symptoms
S. sclerotiorum is among the most omnivorous of plant pathogens, and so would not make a good mycoherbicide. Economically significant hosts include Vicia faba, for which Lithourgidis et al. have done extensive work over the years. Common hosts of white mold are herbaceous, succulent plants, particularly flowers and vegetables. Sunflowers are common hosts for white mold. It can also occasionally affect woody ornamentals, usually on juvenile tissue. White mold can affect its hosts at any stage of growth, including seedlings, mature plants, and harvested products. It can usually be found on tissues with high water content and in close proximity to the soil. One of the first symptoms noticed is an obvious area of white, fluffy mycelial growth, usually preceded by pale to dark brown lesions on the stem at the soil line. The mycelium then covers this necrotic area. Once the xylem is affected, other symptoms occur higher up in the plant. These can include chlorosis, wilting, and leaf drop; death quickly follows. On fruits, the initial dark lesions occur on the tissue that comes in contact with the soil. Next, white fungal mycelium covers the fruit and it decays. This can occur when the fruit is in the field or in storage.
Importance
White mold affects a wide range of hosts and causes sclerotinia stem rot. It is known to infect 408 plant species. Its nonspecificity, diverse host range, and ability to infect plants at any stage of growth make white mold a very serious disease. The fungus can survive on infected tissues, in the soil, and on living plants. It affects young seedlings, mature plants, and fruit in the field or in storage. White mold can spread quickly in the field from plant to plant, and it can also spread throughout a harvested crop in a storage facility. Crops it commonly affects include soybeans, green beans, sunflowers, canola, and peanuts. White mold is the most common pathogen affecting sunflower and has been found to reduce yields throughout the world, including the United States, northern Europe, Great Britain and Russia.
Sclerotinia stem rot (or 'white stem rot') causes large yield losses in temperate climates, especially during cool and moist growing seasons. An analysis of soybean yields from 1996 to 2009 in the United States found that sclerotinia stem rot reduced yields by over ten million bushels in half of the studied growing seasons. During particularly bad years, these soybean yield reductions caused producers to lose millions of dollars. Compared with 23 common soybean diseases, sclerotinia stem rot was the second most problematic disease in the United States from 1996 to 2009. For soybeans, crop yields are inversely correlated with the incidence of sclerotinia stem rot; an estimated 0.25 metric tons per hectare is lost for each 10% increment of diseased plants.
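The yield-loss relationship quoted above (roughly 0.25 t/ha lost per 10% increment of diseased soybean plants) can be sketched as a small helper; the function name and interface are illustrative, not from any cited source, and the linear relationship is only the rule of thumb stated here:

```python
def estimated_yield_loss_t_per_ha(incidence_percent: float) -> float:
    """Rough soybean yield loss (t/ha) from sclerotinia stem rot incidence,
    using the quoted rule of thumb: 0.25 t/ha per 10% of diseased plants."""
    if not 0 <= incidence_percent <= 100:
        raise ValueError("incidence must be between 0 and 100 percent")
    return 0.25 * (incidence_percent / 10)
```

For example, a field with 40% diseased plants would be estimated to lose about 1.0 t/ha under this simple linear rule.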
Environment
The pathogenic fungus Sclerotinia sclerotiorum proliferates in moist and cool environments. Under moist field conditions, S. sclerotiorum is capable of completely invading a plant host, colonizing nearly all of the plant's tissues with mycelium. Optimal temperatures for growth range from 15 to 21 degrees Celsius. Under wet conditions, S. sclerotiorum will produce an abundance of mycelium and sclerotia. Like most fungi, S. sclerotiorum prefers darker, shadier conditions as opposed to direct exposure to sunlight. For soybeans specifically, optimal conditions include canopy temperatures less than 28 °C and plant surface wetness for 12–16 h on a daily basis or continuous surface wetness for 42–72 h.
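The temperature and leaf-wetness thresholds described above for soybean can be encoded as a simple check; this is an illustrative sketch of the stated rules only (function name and thresholds-as-code are my own framing, not a published disease model):

```python
def conditions_favorable(canopy_temp_c: float,
                         daily_wet_hours: float,
                         continuous_wet_hours: float = 0.0) -> bool:
    """True if conditions match the favorable ranges quoted for soybean:
    canopy temperature below 28 C, plus either ~12-16 h of daily surface
    wetness or ~42-72 h of continuous surface wetness."""
    cool_enough = canopy_temp_c < 28
    wet_enough = daily_wet_hours >= 12 or continuous_wet_hours >= 42
    return cool_enough and wet_enough
```

A cool, wet canopy (e.g. 20 °C with 14 h daily wetness) would satisfy the check, while the same wetness at 30 °C would not.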
Life cycle
The life cycle of Sclerotinia sclerotiorum can be described as monocyclic, as no secondary inoculum is produced. During late summer to early fall, the fungus produces a survival structure called a sclerotium either on or inside the tissues of a host plant. S. sclerotiorum sclerotia can remain viable for at least three years and germinate to produce fruiting bodies called apothecia: small, thin stalks ending in a cup-like structure about 3–6 mm in diameter. The cup of the apothecium is lined with asci, in which the ascospores are contained. When the ascospores are released from the asci, they are carried by the wind until they land on a suitable host. The ascospores infect above-ground plant tissue and begin to invade the host's tissues via mycelium, causing infection. S. sclerotiorum is capable of invading nearly all tissue types, including stems, foliage, flowers, fruits, and roots. Eventually, white, fluffy mycelium begins to grow on the surface of the infected tissues. At the end of the growing season, S. sclerotiorum once again produces sclerotia. The sclerotia then remain on the surface of the ground or in the soil, on either living or dead plant parts, until the next season, when the cycle begins again.
There are two theories contending to explain the majority of S. sclerotiorum virulence: the oxalate-dependent theory and the pH-dependent theory. The oxalate theory was long credible because ultraviolet-generated mutants in which oxalic acid production is knocked out do have drastically reduced virulence. Similar results have also been obtained with similar knockouts in Botrytis cinerea, likewise an oxalic acid-producing pathogen. However, Davidson et al. 2016 and others have created transgenic hosts expressing oxalate oxidase and oxalate decarboxylase and charted the results day by day. They find that initial infection is not noticeably dependent on oxalate (although lesion expansion does require it, for pH reduction and chelation of calcium). This supports the pH theory, with oxalate acting largely through its effect on pH.
Control
Control of white mold on crops can depend greatly on cultural, biological, and chemical practices. Cultural practices include planting disease-resistant crops and planting at lower densities with higher row spacing to promote air circulation, creating microclimates that are less favorable for disease development. In addition, excessive irrigation should be avoided until flowering (the most active period of infection) has ceased. Furthermore, in susceptible areas, crop rotations should include at least two to three years of non-host crops (for example, cereals and corn). Good weed control can also limit the number of host plants in a field and reduce white mold pressure. Fields with heavy disease pressure may also be flooded for a period of four to five weeks so that the sclerotia lose their viability. Tillage reduction can also reduce the number of viable S. sclerotiorum spores.
Coniothyrium minitans, a coelomycete distributed worldwide, is a pathogen of S. sclerotiorum and a commercial biocontrol agent for sclerotinia stem rot. C. minitans should be applied and incorporated into the soil three months before S. sclerotiorum development. Correct use of C. minitans can reduce S. sclerotiorum by 95% and sclerotinia stem rot by 10 to 70%.
Systemic and contact fungicides are registered for white mold. For instance, in soybeans, three classes of fungicides are labeled for white mold control: methyl benzimidazole carbamates, succinate dehydrogenase inhibitors, and demethylation inhibitors. Additionally, herbicides containing lactofen have been reported to indirectly control white mold. However, the use of lactofen herbicides can harm crops in years without high disease potential.
References
External links
Index Fungorum
USDA ARS Fungal Database
Crop Knowledge Master
ag.ndsu.edu
ars.usda.gov
Fungi described in 1801
Fungal plant pathogens and diseases
Vegetable diseases
Sclerotiniaceae
Soybean diseases
Fungus species | Sclerotinia sclerotiorum | [
"Biology"
] | 1,900 | [
"Fungi",
"Fungus species"
] |
11,128,549 | https://en.wikipedia.org/wiki/Sclerotinia%20trifoliorum | Sclerotinia trifoliorum is a plant pathogen infecting alfalfa, red clover, chickpea, and fava bean. Lithourgidis et al. have done much work on this disease and fava bean.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Sclerotiniaceae
Fungi described in 1880
Fungus species | Sclerotinia trifoliorum | [
"Biology"
] | 82 | [
"Fungi",
"Fungus species"
] |
11,128,558 | https://en.wikipedia.org/wiki/Septoria%20darrowii | Septoria darrowii is a species of fungus in the family Mycosphaerellaceae. It is a plant pathogen infecting caneberries.
References
darrowii
Fungal plant pathogens and diseases
Small fruit diseases
Fungi described in 1916
Fungus species | Septoria darrowii | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,128,575 | https://en.wikipedia.org/wiki/Coryneopsis%20rubi | Coryneopsis rubi is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Xylariales
Fungus species | Coryneopsis rubi | [
"Biology"
] | 36 | [
"Fungi",
"Fungus species"
] |
11,128,582 | https://en.wikipedia.org/wiki/Podosphaera%20macularis | Podosphaera macularis (formerly Sphaerotheca macularis) is a plant pathogen infecting several hosts including chamomile, caneberrie, strawberries, hop, hemp and Cineraria. It causes powdery mildew of hops.
Host range and symptoms of Podosphaera macularis
The pathogen that causes powdery mildew of hops was once considered to be Sphaerotheca macularis, which is capable of infecting many plants; however, in recent years, the pathogen that causes powdery mildew of hops has been taxonomically classified as Podosphaera macularis. This ascomycete is only known to be pathogenic on hop plants, including both ornamental and wild hops, and Cannabis sativa. The host range of many Podosphaera macularis strains is restricted by the existence of resistant hop varieties, such as the “Nugget” variety of Washington state and Oregon, although in recent years, resistance within this hop variety has been overcome in the laboratory. When disease does occur, early symptoms include chlorotic spots on the leaves of hop plants. Spots may fade to gray or white as the season progresses. Signs include white clusters of hyphae, which are often present on the leaves; in some cases the fungus can infect the cone itself, and a brown, necrotic lesion may develop. When both mating types exist within a population, chasmothecia can form and are visible as small, black dots on the undersides of leaves.
Disease cycle
Podosphaera macularis overwinters on the soil surface in debris as fungal survival structures (chasmothecia) or as mycelia in plant buds. These chasmothecia are formed toward the end of the growing season. Chasmothecia of hop powdery mildew are characteristically spherical, black structures with spiked appendages. When favorable conditions are encountered during early spring, the asci (sac-like structures) within chasmothecia rupture and discharge ascospores. Favorable conditions for ascospore release include low light, excess fertility, and high soil moisture, and optimal infection is observed when the temperature is between 18 and 25 °C. The ascospores act as the primary inoculum and are dispersed passively by wind. Upon encountering a susceptible host plant, the ascospores germinate and cause infection. Following infection, masses of asexual spores (conidia) are produced during the season; it is these masses of conidia that give infected plants their characteristic white, powdery appearance. The lower leaves are the most affected, but the disease can appear on any above-ground part of the plant. The conidia are also dispersed by wind, so Podosphaera macularis is a polycyclic pathogen: conidia produced and dispersed during the growing season can infect additional host plants. The disease becomes noticeable on infected plants as soon as the hop shoots start to emerge, with a latent period of approximately 10 days at 12–15 °C compared to 5 days at 18–27 °C. Spore-covered shoots that emerge from infected buds are called “flag shoots” and are stunted with distorted leaves. Periods of rapid plant growth are the most favorable for infection, and the period of lateral branch development is also particularly vulnerable to the disease.
Due to Podosphaera macularis causing local infection, only the location of the host plant tissue where spores have landed will develop the disease.
Optimal environment
Under optimal conditions, this polycyclic disease can complete as many as 20 generations in a growing season. Favorable environmental conditions for Podosphaera macularis include low sun exposure, high soil moisture, and excessive fertilization. The optimal temperature range for spore and mycelium growth is 18 to 25 °C. In addition, periods with small temperature differences between night and day, with a minimum of 10 °C at night and a daily high of 20 °C, increase the risk of infection. High humidity and optimal temperatures are necessary for primary infection between the middle and end of May, when the cleistothecia swell and burst due to increased turgor pressure, releasing ascospores. During the secondary infection period from mid-July to August, conidia infectivity and germination are highest around 18 °C. Leaf wetness is not essential for the formation and germination of conidia; rather, slight rain has an indirect effect through high humidity and low sunlight. Since the life cycle exists mainly on the plant exterior, with only haustoria inside the host, supra-optimal temperatures and low relative humidity are unfavorable for germination, infection, and sporulation of powdery mildew. Temperatures exceeding 30 °C for more than three hours reduce the chance of infection by up to 50%. Periods of intense rain and wind that blow spores throughout the hop yard also limit powdery mildew. In addition, solar irradiation can kill released spores, but as hops grow, sunlight cannot penetrate the dense canopy.
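As an illustration of the temperature rules above, the following sketch scores infection favorability and estimates the latent period from the figures reported for Podosphaera macularis. It is a minimal illustration, not an established epidemiological model; the function names, the 0–1 scoring scheme, and the sub-optimal band below 18 °C are assumptions.

```python
def infection_favorability(temp_c, hours_above_30=0.0):
    """Rough 0-1 favorability score for P. macularis infection (illustrative)."""
    if 18.0 <= temp_c <= 25.0:
        score = 1.0                  # reported optimal range
    elif 10.0 <= temp_c < 18.0 or 25.0 < temp_c <= 30.0:
        score = 0.5                  # assumed sub-optimal but conducive band
    else:
        score = 0.0                  # supra-optimal heat or cold
    if hours_above_30 >= 3.0:
        score *= 0.5                 # >30 °C for 3+ hours cuts infection up to 50%
    return score


def latent_period_days(temp_c):
    """Approximate latent period: ~10 days at 12-15 °C, ~5 days at 18-27 °C."""
    if 12.0 <= temp_c <= 15.0:
        return 10
    if 18.0 <= temp_c <= 27.0:
        return 5
    raise ValueError("latent period not reported for this temperature")
```

For example, `infection_favorability(22.0)` returns 1.0, while a day with three or more hours above 30 °C halves the score.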
Management
The two primary ways to control Podosphaera macularis are cultural and chemical control. The most effective way to manage hop powdery mildew is through preventative measures. Cultural control of the disease includes growing powdery-mildew tolerant/resistant varieties of the host plant. Cultural practices that can help prevent the disease include carefully monitoring water and nutrient inputs, reducing initial inoculum, and removing basal growth. Furthermore, pruning, crowning, and/or scratching aid in further reduction of the disease. Pruning consists of removing shoots before training. Crowning refers to removing the top 1–2 inches of the crown before budbreak. Scratching is done by disturbing the soil surface to remove the top 1–2 inches of buds. All of these methods disturb the overwintering stage of the life cycle of Podosphaera macularis. Chemical control primarily consists of early, continuous fungicide applications during the growing season; prophylactic fungicide programs can be a very effective way of preventing the disease. Because the fungicides are a preventative measure, they are of little use once an infection is well established. The use of fungicides disrupts the release of spores and further infection within the disease cycle of Podosphaera macularis. As several fungicides are effective against powdery mildew, it is important to apply them at specific times. If powdery mildew is known to be present, spray programs should be started as soon as the shoots emerge. Because powdery mildew can quickly develop resistance to fungicides, it is important to rotate the fungicides that are used. However, few or no fungicide applications should be made during burr development, as the burrs are especially vulnerable to damage.
In this case, removing basal growth before flowering and applying a protectant fungicide with long-term residual action should be employed.
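The rotation advice above can be sketched as a simple scheduler that cycles through fungicide mode-of-action groups so the same group is never applied in consecutive sprays. The group labels are hypothetical placeholders, and the scheduler itself is an illustrative assumption rather than a documented spray program.

```python
from itertools import cycle, islice

def rotation_schedule(groups, n_sprays):
    """Cycle through distinct mode-of-action groups for n_sprays applications."""
    if len(set(groups)) < 2:
        raise ValueError("rotation requires at least two distinct groups")
    return list(islice(cycle(groups), n_sprays))

# Hypothetical mode-of-action groups; no two consecutive sprays share a group.
schedule = rotation_schedule(["group A", "group B", "group C"], 5)
assert all(a != b for a, b in zip(schedule, schedule[1:]))
```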
Disease importance
In 1997, hop powdery mildew was reported for the first time in hop yards in the United States Pacific Northwest. In Washington, severe infections led to the loss of 800 hectares of crops (US $10 million). At the time, sulfur was the only registered pesticide used on hop that was effective against powdery mildew. In 1998, the disease was confirmed in Idaho and Oregon. As a result, Yakima Valley growers managed the disease using approaches developed in Europe, such as labor-intensive cultural practices, mechanical or chemical removal of spring growth, and intensive fungicide programs, despite the small number of fungicides available for hop at the time. Although these methods successfully limited disease development, the depressed market for hops could not sustain the expensive production costs ($1400/ha annually in 1998). In 2001, a contracting brewery rejected 50% of an aroma hop grown in Oregon because of cone browning after drying, resulting in an additional US $5 million in losses that year. These losses have contributed to economic depression in the hop market and have forced several growers to declare bankruptcy. Currently, hop powdery mildew occurs annually in all production regions in the United States. While more research is necessary to understand Podosphaera macularis and its control, the current management system has restored profitability to the hop industry: disease levels have decreased and control costs have been reduced to $740/ha on average. Unlike in New York and California, hop production in the Pacific Northwest is likely to continue.
Pathogenesis of Podosphaera macularis
In order for pathogenesis to occur, a viable pathogen, susceptible host, and conducive environment must simultaneously be present. The germ tube of P. macularis plays an important role in determining the pathogen's viability, because it can penetrate its host in approximately 15 hours. The germ tube begins branching, leading to as many as three potentially conidia-forming germ tubes. As the pathogen invades host tissue, it establishes a haustorium to facilitate the collection of nutrients from the host cells. Despite this invasion, only certain hosts are susceptible, because there are seven R genes in hop varieties that can be activated in response to infection. Many of them operate by either causing the initial haustorium to lyse, or by preventing the pathogen from spreading. The spread is stopped by a hypersensitive response, which is often associated with the establishment of large callose and lignin deposits surrounding infected cells. Although susceptible plants can increase callose and lignin deposits in response to infection, the hypersensitive response is only found in resistant varieties. Finally, although powdery mildew can grow in a relatively hot and dry environment compared to downy mildew, conidia production peaks at temperatures of approximately 20 °C. Conidia can be produced at temperatures above 25 °C, but their infectivity is often reduced.
References
External links
Fungal plant pathogens and diseases
Food plant pathogens and diseases
Small fruit diseases
Fungal strawberry diseases
Hemp diseases
Ornamental plant pathogens and diseases
macularis
Fungi described in 2000
Fungus species | Podosphaera macularis | [
"Biology"
] | 2,154 | [
"Fungi",
"Fungus species"
] |
11,128,590 | https://en.wikipedia.org/wiki/Podosphaera%20pannosa | Podosphaera pannosa is a plant pathogen. It produces a powdery mildew on members of the rose family.
Summary
Rose powdery mildew is caused by the fungus Podosphaera pannosa, a member of the Ascomycete fungi. It infects a wide variety of roses, especially those grown in drier climates: unusually, the fungus not only does not need free water to germinate and reproduce, but can actually be inhibited by it.
Disease cycle
The disease cycle of rose powdery mildew starts when the sexual spores, ascospores, of the pathogen survive the winter in a structure composed of hyphae called an ascocarp. The specific ascocarp is a chasmothecium, or cleistothecium, which is spherical in shape. Under the right conditions the chasmothecium breaks open to reveal the asci, long tube-like structures containing the ascospores. The ascospores are then released and spread by wind, insects, and rain until they land on a susceptible rose host, where they attach and germinate. The fungus also produces conidia, the asexual spores of Podosphaera pannosa, which spread throughout the summer. It is these long chains of white conidia that give the fungus its characteristic “powdery” appearance. Late in the year, as the plant is dying, cleistothecia form again when the ascogonium receives a nucleus from the antheridium.
Environment
Optimal conditions for rose powdery mildew are 16–27 °C, with the optimal temperature for fungal growth at 23 °C in a shaded area. The spores do not need water to germinate and infect the rose; in fact, if too much water is present on plant surfaces for a prolonged period of time, fungal growth is inhibited and the spores can die. Rose powdery mildew can grow in any conditions where roses can grow and has been found everywhere from China to the United States.
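The conditions above can be condensed into a small scoring sketch. The triangular peak at 23 °C and the 12-hour wetness cutoff are illustrative assumptions; the source states only that growth occurs at 16–27 °C, peaks near 23 °C, and is inhibited by prolonged surface water.

```python
def mildew_growth_score(temp_c, hours_leaf_wetness=0.0):
    """0-1 growth score: zero outside 16-27 °C or under prolonged wetness."""
    if not 16.0 <= temp_c <= 27.0:
        return 0.0
    if hours_leaf_wetness > 12.0:    # assumed cutoff for "prolonged" wetness
        return 0.0
    # linear ramp up to the reported 23 °C optimum, then down to 27 °C
    if temp_c <= 23.0:
        return (temp_c - 16.0) / 7.0
    return (27.0 - temp_c) / 4.0
```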
Hosts, signs, and symptoms
A wide variety of rose species are susceptible to powdery mildew. In light of this, it is more practical to discuss the rose varieties that are resistant rather than those that are susceptible. Two varieties that have been found to show resistance to rose powdery mildew are the "Paul's Pink" and "Magic" rose varieties. Other research has shown that many chestnut rose (Rosa roxburghii) varieties are also resistant to powdery mildew. Rosa sterilis, Rosa kweichowensis, Rosa laevigata, Rosa lucidissima, and Rosa chinensis have all been shown to be resistant to powdery mildew, while R. multiflora var. multiflora and R. multiflora var. cathayensis have been shown to be susceptible. Symptoms caused by rose powdery mildew include dwarfing of plant growth and twisting and deformation of leaves, but more noticeable is a sign of the disease: the white conidia, the “powder” that appears on plant surfaces such as leaves, shoots, flowers, and buds. The fungus may grow on both new and old leaves, but is generally more concentrated on the underside of the leaf.
Management
Effective management of rose powdery mildew begins with using resistant rose varieties, but the disease can also be managed with fungicides or by planting in full sun, since rose powdery mildew prefers shade. In fact, increasing the exposure of rose powdery mildew from 18 to 24 hours of light per day reduced the production of conidia, the asexual spores of the fungus, by as much as 62%. A variety of fungicides have proven effective, including myclobutanil, azoxystrobin, triadimefon, and thiophanate-methyl formulations. Chemical fungicides are not always necessary, however; it is possible to use more environmentally friendly solutions such as a water-vinegar spray or mixtures of baking soda and insecticidal soaps. Recent studies have also shown that using a planting medium that includes silicon can reduce the occurrence of powdery mildew by as much as 57%.
Importance
Powdery mildew affects more than 7,600 species of hosts worldwide, including subsistence crops. Although rose powdery mildew most directly affects the rose connoisseur, it is part of this larger family of powdery mildews, which can affect the crops used for food and survival in many countries, and thereby has economic and human impacts beyond that of an unsightly rose bush. Research shows that total yield loss from powdery mildew on cereal crops alone can vary between 2 and 30%, depending on the host and the environmental conditions. Additionally, the wholesale value of roses exceeds $100 million annually in the United States, so the national economic impact on the flower industry cannot be ignored.
References
Fungal plant pathogens and diseases
Rose diseases
pannosa
Fungi described in 1870
Fungus species | Podosphaera pannosa | [
"Biology"
] | 1,119 | [
"Fungi",
"Fungus species"
] |
11,128,602 | https://en.wikipedia.org/wiki/Sporonema%20phacidioides | Sporonema phacidioides is a plant pathogen infecting alfalfa.
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Helotiaceae
Fungus species | Sporonema phacidioides | [
"Biology"
] | 43 | [
"Fungi",
"Fungus species"
] |
11,128,606 | https://en.wikipedia.org/wiki/Stagonospora%20meliloti | Stagonospora meliloti is a plant pathogen infecting alfalfa.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporales
Fungi described in 1919
Fungus species | Stagonospora meliloti | [
"Biology"
] | 46 | [
"Fungi",
"Fungus species"
] |
11,128,619 | https://en.wikipedia.org/wiki/Stemphylium%20alfalfae | Stemphylium alfalfae is a plant pathogen infecting alfalfa.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporaceae
Fungus species | Stemphylium alfalfae | [
"Biology"
] | 44 | [
"Fungi",
"Fungus species"
] |
11,128,626 | https://en.wikipedia.org/wiki/Stemphylium%20globuliferum | Stemphylium globuliferum is a plant pathogen infecting alfalfa.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporaceae
Fungus species | Stemphylium globuliferum | [
"Biology"
] | 46 | [
"Fungi",
"Fungus species"
] |
11,128,635 | https://en.wikipedia.org/wiki/Stemphylium%20vesicarium | Stemphylium vesicarium is a plant pathogen infecting many plants including onion, garlic, asparagus, and pear.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Vegetable diseases
Mango tree diseases
Pleosporaceae
Fungi described in 1833
Fungus species | Stemphylium vesicarium | [
"Biology"
] | 63 | [
"Fungi",
"Fungus species"
] |
11,128,642 | https://en.wikipedia.org/wiki/Stigmina%20carpophila | Stigmina carpophila (syn. Wilsonomyces carpophilus) is a fungal plant pathogen causing shot hole disease in stone fruits (Prunus spp.).
References
External links
Index Fungorum
USDA ARS Fungal Database
Coryneum or Shothole Blight. Utah State University Extension fact sheet PLP-008. 2008.
Fungal plant pathogens and diseases
Stone fruit tree diseases
Capnodiales
Fungi described in 1959
Taxa named by Joseph-Henri Léveillé
Fungus species | Stigmina carpophila | [
"Biology"
] | 103 | [
"Fungi",
"Fungus species"
] |
11,128,650 | https://en.wikipedia.org/wiki/Taphrina%20deformans | Taphrina deformans is a fungus and plant pathogen, and the causal agent of peach leaf curl. Peach trees infected with T. deformans will experience leaf puckering and distortion, acquiring a characteristic downward and inward curl. Leaves will also undergo chlorosis, turning a pale green or yellow, and later show a red or purple tint. Fruit can either drop prematurely or show surface distortions. Severe infection can also produce lesions on the flowers. The host tree will experience defoliation if the leaves are badly diseased. If a seedling is severely infected, it may die. Almond trees display similar symptoms.
Life cycle
Taphrina deformans infects species of the genus Prunus (including P. amygdalus (almond) and P. persica (peach)). It is best known as the causative agent of peach leaf curl in peaches and nectarines. It has been observed that this fungus also infects the fruits of some South American trees like Ocotea puberula, Nectandra megapotamica and other Lauraceae species. An additional form has been identified which infects apricots. Some researchers posit that these forms are separate varieties altogether, which is supported by recent genetic evidence based on hybridization. Furthermore, almond strains of T. deformans seem to be unable to induce peach leaf curl on peach trees, and vice versa.
Infected leaves develop a whitish bloom as the infection progresses. This bloom consists of asci that break through the cuticle of the leaf by way of lysis. One ascus contains eight ascospores, which are ejected in early summer and moved by rain and wind. This fungus is believed to survive the winter by staying on the surface of the new host plant, such as on bark or buds. In the spring, new buds are infected by the conidia as the leaves emerge from the buds. The disease does not occur every year, as it requires a minimum of 3 mm of rainfall followed by at least 12 days during which the developing conidia remain damp and at temperatures below 19 °C. The fungus has higher infection rates following cooler, damper winters.
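The infection requirement described above (at least 3 mm of rain followed by at least 12 damp days below 19 °C) can be expressed as a simple check. The list-based inputs and function name are illustrative assumptions.

```python
def infection_possible(rain_mm, daily_temps_c, daily_damp):
    """True if >=3 mm rain is followed by 12+ days that are damp and below 19 °C."""
    if rain_mm < 3.0:
        return False                      # insufficient rainfall to start infection
    if len(daily_temps_c) < 12 or len(daily_damp) < 12:
        return False                      # fewer than the 12 required days observed
    return all(t < 19.0 and damp
               for t, damp in zip(daily_temps_c[:12], daily_damp[:12]))
```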
Taphrina deformans hyphae growing in intercellular spaces secrete polysaccharide-degrading enzymes, such as cellulase, causing partial dissolution of the host cell wall. This process also results in changes in the plasma membrane.
T. deformans also produces the auxin indole-3-acetic acid from L-tryptophan via indole-3-pyruvic acid and indole-3-acetaldehyde. This process is thought to be responsible for the hyperplastic effect of the infection.
Peach leaf curl management
Because infection depends on a wet environment, appropriate irrigation of crops can help control pathogen dispersal. Although some sources also suggest thinning fruit to control the spread of disease, sanitation and culturing practices alone are insufficient to manage the pathogen. Fungicide is preferred; chlorothalonil and ziram are favored, and copper is an organic option. Fungicide application requires the correct timing and complete coverage of the crop. It is recommended that growers spray fungicides after leaf-fall, or after 90% senescence of leaves. In wetter climates, where multiple sprays may be necessary, spraying is recommended in the late fall and in late winter or early spring. Post-infection spraying of fungicide is inadequate to control the disease.
Although most commercial cultivars are susceptible, there are several genotypes of Prunus persica that have been identified as resistant to infection by T. deformans. These resistant genotypes appear to use molecular and biochemical mechanisms to manage the spread and development of the disease. For example, chlorogenic acid, which is known to have antifungal activity in vitro, is present in resistant strains but not susceptible strains. The chloroplasts are also activated as a site of defense signaling. Additionally, upregulation of the pattern of accumulation of isochorismate synthase indicates increased salicylic acid production. The activation of salicylic acid-dependent pathways suggests a mechanism to achieve systemic acquired resistance.
Importance
Peach leaf curl is present wherever peaches or nectarines are grown. The economic impact of the disease varies regionally, as pathogen spread and symptom severity depend on environmental factors. In the United States, $2.5 to $3 million is lost because of peach leaf curl. Sixty to ninety percent of peach shoots in Italy can be infected by T. deformans. Although peach leaf curl is mostly manageable with fungicide spraying in dry climates, improper timing or incomplete coverage of the crop can result in control failure. Wetter climates, which require multiple applications of fungicide, are more susceptible to human error. Additionally, unexpected winter warming can allow the pathogen to establish itself within buds before the late winter or early spring application of fungicide. Without fungicides, or through control failure, the disease can result in total yield loss, along with the stunting and death of shoots. Yield loss can result from tree defoliation, leading to decreased photosynthesis, and from infection of fruit, which decreases marketability.
Taphrina deformans genome
The Taphrina deformans genome has been sequenced. The genome carries characteristic genes that are important for the plant infection process.
References
External links
Taphrina deformans at Fungal Databases
Taphrinomycetes
Fungi described in 1857
Fungal tree pathogens and diseases
Stone fruit tree diseases
Taxa named by Miles Joseph Berkeley
Fungus species | Taphrina deformans | [
"Biology"
] | 1,158 | [
"Fungi",
"Fungus species"
] |
11,128,655 | https://en.wikipedia.org/wiki/Tilletia%20caries | Tilletia caries (synonymous with Tilletia tritici) is a basidiomycete that causes common bunt of wheat. The common names of this disease are stinking bunt of wheat and stinking smut of wheat. This pathogen infects wheat, rye, and various other grasses. T. caries is economically and agriculturally important because it reduces both the wheat yield and grain quality.
Life cycle
Infection of the wheat occurs during germination of the plant seed and is favored by cool, wet conditions. Optimum conditions for spore germination are soil temperatures in the range of . Bunt fungi overwinter as dikaryotic teliospores, typically on seed and occasionally in soil. The fungus infects the shoots of wheat seedlings before the plants emerge from the soil. After karyogamy, the teliospores germinate to form a basidium, on which 8–16 haploid basidiospores (primary sporidia) develop. There are two mating types of basidiospores (+ and −), which fuse to form H-shaped structures and establish a dikaryon. This dikaryon yields infectious hyphae, which can produce either more hyphae or secondary sporidia. The pathogen grows as mycelium within the terminal meristem and completes its life cycle by transforming the mycelial cells into teliospores. The smutted wheat kernels, full of teliospores, break open upon harvest and release their spores, which overwinter on the seed or are blown by air currents onto the soil, thus completing the life cycle.
Morphology
Teliospores are thick-walled, globose, reticulate, and 13–23 μm in diameter.
Hosts
Agropyron (wheatgrass), Bromus (bromegrasses), Elymus (wildrye), Festuca (fescues), Hordeum (barleys), Lolium (ryegrasses), Poa (meadow grass), Secale cereale (rye), Triticale, Triticum spp. (wheats) – including T. aestivum (common wheat), T. dicoccum (hulled wheat), T. turgidum (durum wheat) – and other Poaceae (other grasses).
Signs and symptoms
It is difficult to detect Tilletia caries early in its infection period; by the time symptoms can be detected, the pathogen is systemic throughout the plant. Plants that are infected will be stunted, ranging from a few centimeters below average to half the average height of a healthy plant. Additionally, the heads are slender and remain green longer than healthy heads. A symptom that is indicative of T. caries is the replacement of yellow heads with grey bunt balls in the head of infected plants. The infected bunt balls are about the same shape and size as normal kernels. When the mature kernels are broken, they are full of a black, powdery mass of fungal spores. These fungal spores give off a distinctive fishy smell and are oily to the touch.
History and significance
From the late 1800s until the 1930s, stinking smut was a devastating disease of wheat. For example, infection levels over 20% were common in Washington State in the early 1900s. One of the most extreme cases was in Kansas in 1890 where the yield was reduced 20–50% because of Tilletia caries. Only when seed treatments became available after 1930 did losses from smut drop to much lower levels. Today, losses from smut rarely occur unless a grower chooses not to plant treated seed, but, if left untreated, bunt can reduce yield by more than 50%. In modern agriculture, if an infection occurs, losses are 5–10%. If there is a significant T. caries infection, the dusty and oily spore masses released during harvest can lead to combine explosions. Static electricity that develops around the combine machinery ignites the teliospore dust released from the combine.
Use as a biological weapon
It was used as a biological weapon by Iraq against Iran during the Iran–Iraq War in the 1980s.
Management
The most effective and widely used management strategy for common bunt is to treat seed with fungicide before planting. According to the University of Nebraska-Lincoln Institute of Agriculture and Natural Resources, it is recommended to buy certified, fungicide-treated seed or have it cleaned and treated by a commercial seed conditioner. Farmers can also manipulate the severity of infection to a certain extent. For example, they can plant the seed when the soil temperature is higher than what is ideal for teliospore germination, e.g., above . For winter wheat this means planting in early fall, and for spring wheat planting in late spring. This tactic can reduce the amount of smut that occurs, but it typically does not eliminate the disease. Smut typically poses more of a problem in winter wheat than in spring wheat because in autumn, when winter wheat is planted, there is a longer period of temperatures favorable for teliospore germination than during the planting season for spring wheat. There are no current wheat cultivars on the market with good resistance to common bunt. However, research efforts utilizing DNA markers for resistant cultivars are attempting to identify the specific genes that confer resistance against common bunt. This may be applied in future breeding of commercially available resistant wheat cultivars.
See also
Common bunt
Notes
References
Ustilaginomycotina
Fungal plant pathogens and diseases
Wheat diseases
Fungi described in 1775
Fungus species | Tilletia caries | [
"Biology"
] | 1,184 | [
"Fungi",
"Fungus species"
] |
11,128,659 | https://en.wikipedia.org/wiki/Tilletia%20controversa | Tilletia controversa is a fungal plant pathogen. It causes the smut disease TCK smut in soft white and hard red winter wheats. It stunts the growth of the plants and leaves smut balls in the grain heads. When the grain is milled, the smut balls emit a fishy odor that lowers the quality of the flour.
TCK smut exists in the western and northwestern United States, but is not considered a major problem. The disease took on policy significance because China applied a zero tolerance on the presence of TCK spores, resulting in a ban from 1974 to 1999 on shipments from the Pacific Northwest. Until the summer of 1996, China accepted shipments of U.S. wheat from the Gulf Coast, and negotiated price discounts with the shippers to cover the cost of decontamination if traces of TCK were found. Then in June 1996, China rejected all cargoes of U.S. wheat with traces of TCK. The November 1999 U.S.-China Agricultural Cooperation Agreement removed the ban and allowed imports of U.S. wheat and other grains that meet a specific TCK tolerance level, thus improving the competitiveness of U.S. wheat with Canadian and Australian exports.
Symptomology
Symptoms present in the dough stage of grain development.
References
External links
Fungal plant pathogens and diseases
Wheat diseases
Ustilaginomycotina
Fungi described in 1874
Fungus species | Tilletia controversa | [
"Biology"
] | 291 | [
"Fungi",
"Fungus species"
] |
11,128,686 | https://en.wikipedia.org/wiki/Typhula%20incarnata | Typhula incarnata is a fungal plant pathogen in the family Typhulaceae.
Hosts and symptoms
Typhula incarnata is, along with Typhula ishikariensis, the causal agent of gray snow mold (also known as speckled snow mold or Typhula blight). This plant pathogen destroys cool season turfgrasses grown in areas with extended periods of snow cover. “Turfgrass hosts include but are not limited to: annual bluegrass, colonial bentgrass, creeping bentgrass, fine-leaf fescues, Kentucky bluegrass, perennial ryegrass, and tall fescue”. Signs of the pathogen can be observed in the spring as circular grayish-brown patches of mycelium, about 15 cm in diameter. T. incarnata can be distinguished from T. ishikariensis by its sclerotia: T. incarnata has reddish-brown sclerotia 1.5–3 mm in diameter, whereas T. ishikariensis has black sclerotia 0.5–1.5 mm in diameter.
Disease cycle
The cool (−1 to 13 °C) and damp conditions of the fall allow Typhula incarnata to begin producing sclerotia. Young sclerotia of T. incarnata start out whitish-pink in color and eventually mature into hard reddish-brown spheres about 5 mm in diameter. Mature sclerotia produce spore-bearing structures known as clavula, where basidia and basidiospores can form. During the winter, the sclerotia germinate and produce mycelium under snow cover. The mycelium eventually spreads, produces infection cushions, and penetrates plant tissue. A severe disease outbreak normally requires a persistent snow cover, although outbreaks have occasionally occurred with little to no snow. In the spring, when the snow melts, sclerotia and gray mycelia can be seen on dead plant tissue. As the plant tissue decomposes, sclerotia drop to the ground, where they oversummer. Throughout the summer, other fungi tend to infest the sclerotia of T. incarnata, reducing germination rates by up to 90%.
Environment
Typhula incarnata develops when a prolonged snow cover (around 60 days) sits on unfrozen ground with soil temperatures around freezing (−1 to 4.4 °C). T. incarnata typically doesn’t develop if the ground freezes before snow accumulates.
Snow molds are opportunistic pathogens meaning they only become pathogenic when host immunity is low. During the winter, plants usually have little resistance to disease because their carbohydrate reserves are low. This weakened immune response, along with little competition from other microorganisms, gives snow molds the perfect opportunity to infect their host.
Cold tolerance is one of the most important features of snow molds. T. incarnata, and other gray snow molds, survive the winter by producing extracellular antifreeze proteins. These antifreeze proteins cause thermal hysteresis, a depression of the freezing point of a solution below its melting point. The proteins bind to ice crystals and create curved ice fronts that are energetically unfavorable for the further addition of water molecules, preventing the ice crystals from growing.
Management
Turf infected by Typhula incarnata will typically recover when growth resumes in the spring, because T. incarnata doesn’t kill the crowns of the grass. To avoid T. incarnata, plant species that aren’t susceptible (e.g., fine fescues). Also, avoid applying high-nitrogen fertilizers late in the fall, since high levels of nitrogen can promote late-season growth that encourages the disease. Finally, continue mowing throughout the fall; this can help “prevent excessive turf top growth that is more easily infected by Typhula”.
Fungicides aren’t typically used to treat T. incarnata because the grass can usually recover naturally in the spring. Fungicides should only be used to prevent severe snow mold damage. They should be applied right before a snow cover is permanent or when snow begins to melt. Mixtures of fungicides, such as a demethylase inhibitor (DMI) with a chlorothalonil product or thiophanate-methyl product, can also be very effective at preventing snow mold damage.
Importance
Typhula incarnata is common in Wisconsin and other areas of the Midwest. The disease can reduce turf quality long into the summer months. Around $20,000 is spent each year on fungicides to prevent snow molds. Despite this expenditure, T. incarnata and other snow molds often still develop, especially after harsh or variable winter conditions.
References
Fungal plant pathogens and diseases
Cereal diseases
Typhulaceae
Fungi described in 1838
Fungus species | Typhula incarnata | [
"Biology"
] | 1,060 | [
"Fungi",
"Fungus species"
] |
11,128,691 | https://en.wikipedia.org/wiki/Typhula%20ishikariensis | Typhula ishikariensis is, along with Typhula incarnata, the causal agent of grey snow mould (also called speckled snow mould or Typhula blight), an obligately parasitic plant pathogen that can destroy turfgrass covered for long periods by snow. It is a particular problem on golf courses established in unsuitable areas. More importantly, it can also damage crops of winter wheat. The species was described as new to science in 1930 by Japanese mycologist Sanshi Imai. The varieties canadensis and ishikariensis (the former as a new combination) were described in 1978. There is wide variation within the species, and not all authorities agree on its subspecies, or even on whether it is monophyletic.
Taxonomy
There is a wide range of morphology, physiology, and genetics. Some tests have shown Typhula idahoensis to be interfertile with T. ishikariensis and others have not; together with significant morphological and range differences, this means T. idahoensis is sometimes regarded as a subspecies or as a separate species entirely. Some schemes recognize var. ishikariensis, var. idahoensis, and var. canadensis along the lines of basidiocarp and sclerotial morphology. North American populations all have high genetic similarity. Japanese populations appear to comprise two intersterile biotypes, A and B. Norwegian populations have been proposed to form groups I, II, and III based on culture preferences and differences in interfertility with Japanese populations; I and II are also differentiated from III by being cold temperate, while III is Arctic adapted. Another proposal divides the worldwide population into two species, I and II, based on morphology and interfertility: I including Japanese A above, North American ishikariensis and idahoensis, and Norwegian I and III, with hosts including monocots, dicots, conifer seedlings, and in Russia the roots of hops; II including Japanese B, North American canadensis, and Norwegian II, harming only monocots. Genetic factors governing sclerotial size vary widely across the world, and differences between Japanese B and Polish populations have been studied and are pronounced.
It is broadly agreed that there is some degree of differentiation within the species along the lines of winter weather in the various locales.
Physiology
Temperature
Minimum growth temperature is below . Optimal growth range is . Maximum growth temperature . Norwegian groups I and II are colder temperate populations (optimal growth ), while group III is purely Arctic (irregular growth at , no hyphal growth at ). Canadian population exposed to and then incubated at optimal growth temperature showed irregular growth similar to Norwegian III's reaction to above, suggesting similar ill-adaptation to temperatures outside the Arctic.
Maximum oxygen consumption is at , which is higher than the optimal growth temperature.
After being stored at and then incubated at , Norwegian I (southern Norway) showed delayed resumption of growth, while III from Finnmark (northern Norway) stored at the same temperature and incubated at (optimal growth temperature) immediately resumed growth. Isolates from Moscow died from the stress of freezing, but there was no lethality or even delay due to freezing of isolates from Novosibirsk in central Siberia (considered equivalent to Norwegian III).
Norwegian III does not actually avoid freezing, in fact freezing before reaching , and so its freeze tolerance may not be (or may not be entirely) due to antifreeze proteins; instead, extracellular ice formation may play some protective role.
Freeze/thaw cycling killed off significant numbers of sclerotia of Norwegian I and Moscow isolates, while Norwegian III and Siberian showed no mortality.
When divided into the two worldwide divisions I and II (as described in §Taxonomy above), a period of freezing halved the growth rate of I, but only brought it down to 80% for II.
Exposure to lethal heat of or both decreased protein content of the mycelia.
Osmoregulation
T. ishikariensis does not grow well on potato dextrose agar with lowered water potential, unlike some snow moulds (such as Sclerotinia borealis, which is more adapted to continue parasitizing plant tissues in frozen soil).
Lipid metabolism
T. ishikariensis produces betaine lipids.
Protein metabolism
Low amounts of sclerotial proteins do occur in the vegetative hyphae (whether produced there or transported into them) during normal growth at .
Morphology
Sclerotia are dark amber to dark chestnut when not desiccated, dark brown to almost black when desiccated, and not gelatinous. Clavulae of sporophores are pale yellow to gray-white, transitioning to gray-brown on the stipes. Genetic factors governing sclerotial size vary widely across the world: in Japanese biotype B, long snow cover selects for larger sclerotia and brief snow cover for smaller ones, while in Polish populations incubation temperature was significant and overwhelmed genetic factors, more often producing smaller sclerotia. Across the world, smaller sclerotia are an adaptation to shorter or highly variable duration of snow cover, and especially to the combination of the two.
Symptomology
Affected turf appears speckled, hence the common name.
Hosts
Grasses, forages, and winter cereals.
Distribution
T. ishikariensis is found in cool temperate areas, frigid zones, and into the Arctic, including northern Japan, Russia, northern Scandinavia, and North America. Arctic records include Alaska, the Yukon, Finnmark county in Norway (especially group III), Finnish Lapland, Swedish Lapland, Svalbard (especially group III), and Greenland (especially group III).
Further south, Switzerland, southern Siberia, southern and central Norway (especially groups I and II), and the Mie Prefecture on Honshu in Japan.
Any grassland getting more than 150 days of snow cover.
References
Typhulaceae
Fungi described in 1930
Fungi of Asia
Fungi of North America
Fungal plant pathogens and diseases
Fungus species | Typhula ishikariensis | [
"Biology"
] | 1,240 | [
"Fungi",
"Fungus species"
] |
11,128,703 | https://en.wikipedia.org/wiki/Urocystis%20agropyri | Urocystis agropyri is a fungal plant pathogen that causes flag smut on wheat.
Biology
The flag smut fungus, U. agropyri, is a basidiomycete. It produces basidiospores and teliospores. This pathogen is found globally, but is most problematic in Australia and India.
Hosts
Urocystis agropyri infects wheat (Triticum aestivum L.), among other grass species. Relatives of U. agropyri infect other grasses and are frequently identified as the causal organism of flag smut on wheat, but there is debate still as to whether they are the same organism or different strains.
Symptoms
Flag smut is a systemic disease that starts in young tissues. Early symptoms include "leprous" spots and bending or twisting of coleoptiles. Older plant leaves have white striations that eventually turn silvery gray, which is evidence of the pathogen's impending sporulation. Additionally, infected plants may have stunted growth, increased leaf production, sterile seeds, and failure to produce heads or have successful leaf expansion.
Diagnosis
Leaves will appear streaked with stripes, which are white, gray or black. The leaf discoloration is due to fungal structures called sori, which are clusters of spore-bearing structures. During sporulation, sori burst through leaves releasing teliospores and cause leaves to appear tattered. These spores are reddish brown, smoothly rounded, and they tend to be in clumps of 5-6 with sterile cells around them. The clumped spores are often referred to as "spore balls" and measure about 20-50 microns. Large quantities of U. agropyri spores look like brown or black dust.
Disease cycle
Urocystis agropyri produces teliospores, which may be wind dispersed or distributed through soils via machinery or animals. In soil, a dikaryotic teliospore germinates, meiosis occurs, then mitosis, and this gives rise to up to four basidiospores, each containing a single nucleus. Basidiospores germinate on seedlings, and each hypha undergoes plasmogamy with a compatible hypha. In this, one nucleus transfers to the other hypha, reestablishing the dikaryotic state of the fungus. The hyphae form appressoria, which penetrate the coleoptile of an emerging seed's shoot through the epidermal tissue; the hyphae then grow between the vascular bundles of the leaves. Some hyphal cells give rise to smut sori, bearing teliospores, which emerge through the leaf tissue for wind dispersal. Teliospores come to rest in soils, and when conditions are right, they give rise to more basidiospores, further spreading the infection. Alternatively, teliospores can form in seeds when the mycelium grows throughout the plant, in which case they germinate within the seed to give rise to new infection, again via basidiospore production. Teliospores overwinter in the soil, senescent plant tissues, and seeds, and remain viable for 3–7 years.
Environmental conditions
This pathogen prefers arid summers, moderate temperatures, and mild winters. Flag smut fungi germinate in dry soils when the temperature ranges from 40–80 °F (4–27 °C). Flag smut has been reported in Australia, the United States, Canada, South Africa, China, Japan, India, Egypt, and Pakistan. Cultivation practices that leave plant debris on soil surfaces enhance U. agropyri's success, as does sowing wheat in winter rather than spring. Mild winters improve the pathogen's ability to establish infections for seeds sown in autumn or winter; spring plantings give the fungus less opportunity to establish.
Disease management
Generally, strategies to prevent flag smut include use of disease resistant cultivars, chemical seed treatments, and crop rotation to reduce amount of inocula present. Since U. agropyri spores germinate in dry soils, maintaining wetter soils helps to diminish their viability. Carboxin is a commonly used fungicide on seeds, which works well to prevent onset of disease. In addition to seed treatments, application of systemic fungicides early in the growing season and at low doses is effective at controlling the disease. Sowing seeds shallowly in soil also helps to reduce disease occurrence.
Impact
Yield loss
Losses from flag smut are usually considered on a field by field basis. Fields planted with susceptible cultivars that happen to become infected will generally be total losses. Australia is among a handful of places that still suffer losses resulting from flag smut. Average losses in Australia are greater than AUS$50 million. Australia experienced the greatest loss known early in the 20th century due to susceptibility of popular wheat varieties and lack of fungicides at the time. In the 1960s, new varieties were planted, which supported a resurgence of disease. Occasionally, total crop losses still occur, but the typical loss range is 5-20%. Losses may be either direct or indirect, as affected plants may be more susceptible to other biotic or abiotic stresses. India reported losses from flag smut in the 1940s through the 1970s, and this averaged about 15,000 tons each year. India and the United States currently have low incidence of this disease due to deployment of resistant cultivars. The occurrence of the disease in the United States was limited to the Pacific Northwest and was the result of the introduction of susceptible cultivars in the mid-twentieth century.
Historical
Urocystis agropyri is a global problem in wheat-growing regions, especially where environmental conditions suit the fungus. Flag smut was first reported from Agropyron spp. in Europe in 1848, misattributed to Uredo agropyri. It was probably present in South Australia before 1868, known as "black rust", but was only definitely identified from that year. Wolff in 1873 identified it as U. occulta, but Körnicke in 1877 differentiated it on the basis of spore morphology. Later, other countries identified it as a pathogen introduced to world crops through trading of seeds and infected cultivars. The development of flag smut in Australia in the nineteenth century, followed by the United States in the twentieth century, is likely due to the spread and exchange of contaminated seeds. Quarantine regulations restricting the movement of infected seed, chaff, and farm machinery from endemic areas were first enacted in the USA in the twentieth century and effectively inhibited the spread of disease. Similar regulations were later adopted in Belgium, Germany, the Netherlands, the United Kingdom, and many other countries. Flag smut is widespread across Australia and was particularly problematic in the 1920s, until resistant cultivars were discovered and put into use. Between 1955 and 1971, flag smut damage and distribution increased in the Pacific Northwest of the United States when the use of several susceptible wheat cultivars was coupled with deep seeding in early autumn planting. Through use of resistant cultivars, flag smut no longer poses a significant threat to yields in the Pacific Northwest.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Wheat diseases
Ustilaginomycotina
Fungi described in 1848
Fungus species | Urocystis agropyri | [
"Biology"
] | 1,527 | [
"Fungi",
"Fungus species"
] |
11,128,709 | https://en.wikipedia.org/wiki/Urocystis%20brassicae | Urocystis brassicae is a gall smut and a plant pathogen that stunts the infected plants. It is known to create root galls on Indian Mustard (Brassica campestris) and other Brassica species.
Physical characteristics
Gall characteristics
The wart-like galls of U. brassicae grow on the roots of plants. When young, the galls are white; as they age, they become grayish-black galls 2–5 cm in diameter.
Spore characteristics
Urocystis brassicae has two types of spores. One is fertile and the other is sterile. Fertile spores are a deep brown color with dimensions of approximately 20 by 16 μm. These fertile spores are surrounded by sterile spores that are elongated and a brighter brown than the fertile spores. These sterile spores are approximately 9.9 by 6.1 μm.
Distribution of Urocystis brassicae
Urocystis brassicae is commonly found in India (Bihar) and China.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ustilaginomycotina
Fungi described in 1938
Fungus species | Urocystis brassicae | [
"Biology"
] | 238 | [
"Fungi",
"Fungus species"
] |
11,128,713 | https://en.wikipedia.org/wiki/Ustilago%20avenae | Ustilago avenae is a plant pathogen. Semiloose smut of oats occurs wherever oats are grown, because oat seed is seldom treated: oats command a lower sale price than other cereals, so seed treatment is thought to be uneconomic. As a result, semiloose smut of oats, unlike loose smut of wheat and barley, can infect up to 80 per cent of a crop.
As with wheat and barley, the grains of oat plants infected with semiloose smut are completely replaced by the smut fungus. The disease is not evident until heads emerge and is very difficult to distinguish from covered smut.
Life cycle
The life cycle resembles that of loose smut of wheat and barley; however, the semiloose smut fungus is carried on the seed surface rather than inside the embryo of the seed. The pathogen is activated when the infected seed germinates, and it grows toward the growing point of the plant.
The disease becomes evident from flowering onwards. When the plant begins to form the head, the fungus invades all of the young head tissue except that of the rachis (backbone). Production of plant growth hormones by the fungus causes infected heads to reach flowering earlier than healthy heads.
The head produced by the infected plant contains black spore masses in place of the grain. The spores are loosely held and are easily spread by wind onto neighboring healthy plants. Because flowering of infected heads occurs earlier than healthy heads, production and release of spores occurs when the rest of the crop is flowering. Spores are blown by the wind into the flowers of the healthy plants. The spores enter the ovaries and become part of the developing grain. In this way, seed for the following year becomes contaminated.
References
Fungal plant pathogens and diseases
Oats diseases
Ustilaginomycotina
Taxa named by Christiaan Hendrik Persoon
Fungus species | Ustilago avenae | [
"Biology"
] | 386 | [
"Fungi",
"Fungus species"
] |
11,128,721 | https://en.wikipedia.org/wiki/Ustilago%20tritici | Ustilago tritici is a plant pathogen infecting barley, rye, and wheat.
References
Fungal plant pathogens and diseases
Barley diseases
Rye diseases
Wheat diseases
Ustilaginomycotina
Fungus species | Ustilago tritici | [
"Biology"
] | 42 | [
"Fungi",
"Fungus species"
] |
11,128,728 | https://en.wikipedia.org/wiki/Valsa%20auerswaldii | Valsa auerswaldii is a plant pathogen infecting apples.
See also
List of apple diseases
References
Fungal tree pathogens and diseases
Apple tree diseases
Diaporthales
Fungi described in 1928
Taxa named by Theodor Rudolph Joseph Nitschke
Fungus species | Valsa auerswaldii | [
"Biology"
] | 53 | [
"Fungi",
"Fungus species"
] |
11,128,733 | https://en.wikipedia.org/wiki/Valsa%20ceratosperma | Valsa ceratosperma is a plant pathogen infecting apples and pears.
See also
List of apple diseases
List of pear diseases
References
Fungal tree pathogens and diseases
Apple tree diseases
Pear tree diseases
Diaporthales
Fungi described in 1937
Fungus species | Valsa ceratosperma | [
"Biology"
] | 54 | [
"Fungi",
"Fungus species"
] |
11,128,743 | https://en.wikipedia.org/wiki/Verticillium%20albo-atrum | Verticillium albo-atrum is a plant pathogen with many hosts.
Infected plants
See:
List of potato diseases
List of alfalfa diseases
List of African daisy diseases
List of beet diseases
List of caneberries diseases
List of tobacco diseases
List of tomato diseases
List of sunflower diseases
List of strawberry diseases
List of sapphire flower diseases
List of rose diseases
List of pocketbook plant diseases
List of Capsicum diseases
List of peanut diseases
List of mint diseases
List of mango diseases
List of Jerusalem cherry diseases
List of impatiens diseases
List of hop diseases
List of hemp diseases
List of geranium diseases
List of fuchsia diseases
List of elm diseases
List of dahlia diseases
List of cucurbit diseases
List of crucifer diseases
List of cineraria diseases
List of chickpea diseases
List of Ficus diseases
References
External links
Index Fungorum
USDA ARS Fungal Database
Verticillium albo-atrum By: Travis DeSavigny PP 728 Soilborne Plant Pathogens North Carolina State University
Fungal plant pathogens and diseases
Food plant pathogens and diseases
Ornamental plant pathogens and diseases
Enigmatic Hypocreales taxa
Fungi described in 1879
Fungus species | Verticillium albo-atrum | [
"Biology"
] | 241 | [
"Fungi",
"Fungus species"
] |
11,128,746 | https://en.wikipedia.org/wiki/Verticillium%20dahliae | Verticillium dahliae is a fungal plant pathogen. It causes verticillium wilt in many plant species, making leaves curl and discolor, and may kill some plants. Over 400 plant species are affected by the Verticillium complex.
Management
Verticillium dahliae has a wide host range and can persist as microsclerotia in the soil for years, so management via fallowing or crop rotation generally has little success. The exception to this is rotation using broccoli, which has been shown to decrease Verticillium severity and incidence in cauliflower fields. This is likely due to the production of allyl isothiocyanate in broccoli, which can suppress the growth of plant pathogenic fungi.
Seed choice may reduce disease presence. Purchasing seed stock from certified Verticillium-free growers and utilizing resistant or partially resistant cultivars can decrease disease incidence. Even resistant cultivars may show symptoms if the field has a high concentration of Verticillium, so site selection is still essential to minimizing disease incidence.
Using fertilizers high in nitrogen and overwatering crops, especially early in the season, may increase disease incidence, so proper fertilizer ratios and drip irrigation are recommended. Following harvest, burning crop residues will limit the amount of Verticillium that can enter the soil and overwinter.
Hosts and symptoms
There are many strains of Verticillium dahliae which are categorized into vegetative compatibility groups (VCG). These groups comprise strains that are able to exchange genetic material via anastomosis. Each VCG affects a few or only one host and the virulence of the pathogen varies by host. While individual V. dahliae strains are relatively host specific, as a species it has a wide range.
Verticillium dahliae has a very wide host range, affecting over 300 plant species. Some susceptible crops include Brussels sprouts, cabbage, eggplant, cucumbers, mint, pepper, potatoes, pumpkin, spinach, tomato, watermelon, honeydew, and cantaloupe. Of these, tomato, potato, and eggplant have resistant or tolerant varieties.
Symptoms of this disease are seen throughout the plant. Leaves may have abnormal coloration, necrotic areas, wilt, and/or fall off the plant. The stem may have discolored vascular tissue, exhibit rosetting (shortened internodes of the plant caused by reduced growth, resulting in a rosette-like appearance), and/or be stunted. Early senescence and dieback may also occur.
Microsclerotia can be seen under a lens as small black structures in the vasculature of living and dead plants. This feature can be used to distinguish V. dahliae from V. albo-atrum, the other verticillium wilt pathogen.
Disease cycle
Verticillium dahliae invades the host plant via natural wounds or by penetrating the root tissue. Following entry, the pathogen enters the xylem where conidia are spread throughout the host. The plant responds to the pathogen by producing tyloses which block the xylem, resulting in decreased water flow and wilting. When the plant dies, Verticillium survives as mycelia in dead tissue, as long-term resting spores in the form of microsclerotia, or saprophytically in the soil. Microsclerotia can be spread via wind and rain, resulting in infection of previously pathogen-free fields. Additionally, the disease can spread locally from the roots of affected plants to healthy plants, live in the vascular tissue of some resistant species, and spread via wind from host leaf tissue.
With this pathogen's ability to survive saprophytically or form resting spores that can survive for over a decade, once a site is infected, it will most likely never be Verticillium-free again.
Recombination
V. dahliae, a fungus in the division Ascomycota, has a strongly clonal population structure. Recombination events have occurred between different clonal lineages, and less frequently within lineages. Two mating types have been identified. Homologs of eight meiosis specific genes are present in the V. dahliae genome. These findings suggest that the capability for meiotic sexual reproduction has been adaptively maintained in the clonal lineages of V. dahliae, and can occasionally be expressed as recombination between genetic markers. Perhaps, as suggested by Wallen and Perlin for Ascomycota fungi generally, in V. dahliae homologous recombination during sexual reproduction functions to repair DNA damage, especially under stressful conditions.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Fungi described in 1913
Enigmatic Hypocreales taxa
Fungus species | Verticillium dahliae | [
"Biology"
] | 1,022 | [
"Fungi",
"Fungus species"
] |
11,128,751 | https://en.wikipedia.org/wiki/Verticillium%20longisporum | Verticillium longisporum is a fungal plant pathogen that causes a Verticillium wilt, most commonly of canola. It can also attack other brassicas as well as woody ornamentals. The main symptom of infection is wilting. In America, V. longisporum primarily affects eudicot plants. The pathogen can be very devastating and hard to eradicate, responding only to expensive fumigation or the use of resistant plants.
Host and symptoms
The main host of V. longisporum is canola. However, it can also affect other brassicas, including cabbage, broccoli, mustard, and cauliflower. It is common for Verticillium spp. to be virulent on more than one host.
Symptoms of wilt caused by V. longisporum include leaf wilting, necrosis, and stunting. Brown stripes also become visible on the canola stem as the disease develops, and as the plant matures, peeling back the top layer of the stem reveals microsclerotia produced by the fungus. Wilts caused by V. longisporum are identified based on the main symptom of wilting: when the pathogen enters the plant, it clogs the xylem so that key nutrients cannot reach stems or leaves, causing the wilt. Another key feature is that symptoms are typically found on outer or lower parts of the plant in a localized area. Most plants can survive the disease, depending on its severity. Older plants, especially trees, have an increased chance of survival, while younger plants are more likely to be killed. A pre-existing disease increases a plant's chance of being infected and killed by V. longisporum.
Life cycle
Like most Verticillium spp., Verticillium longisporum is a fungus that primarily lives in the soil, and its life cycle is very similar to that of V. dahliae. V. longisporum survives using overwintering structures called microsclerotia, masses of hyphae that are produced in the dying parts of a plant and that can persist in the soil to infect healthy plants. When a canola root grows near microsclerotia, root exudates cause them to germinate and the fungus grows toward the root. Once the fungus has entered the root, it produces hyphae to move through the plant, and also spreads as conidia. Once it reaches the xylem, it can quickly colonize the plant, which then begins slowly dying as fungal growth and plant defenses clog the xylem and prevent the uptake of nutrients. Once there is enough dead plant tissue, the fungus produces more microsclerotia, and the process repeats. This is the only known life cycle of V. longisporum; the pathogen is known to reproduce only asexually. The fungus is spread primarily via soil cultivation, harvesting, or any other action that moves soil. Verticillium spp. are able to germinate, spread within the plant, and reproduce in most weather conditions, although water is usually necessary for the resting spores to germinate and infect plants. The disease is therefore most infectious during the spring, when there is abundant water in the ground and young plants are present.
Environment
Verticillium longisporum is favored by hot, dry conditions, with soil and air temperatures of 15–19 °C and 15–23 °C respectively. The disease easily enters the vascular system when root and xylem function is reduced by stressful conditions. Excess moisture and warm soil temperatures protect the plant and make V. longisporum less of a problem, because the survival structures rapidly lose viability. The pathogen can survive harsh, cold weather and winters thanks to the thick-walled microsclerotia it produces, and it is extremely difficult to eradicate due to its ability to endure fluctuating conditions.
Importance
Verticillium longisporum causes Verticillium wilt in oilseed crops, a disease responsible for major yield losses, with reports as high as 72%. Oilseed is grown worldwide, including in China, Canada, the United Kingdom, and Australia. Historically, oilseed was used to produce lamp oil, soap, and high-temperature lubricating oils; today it is grown for canola oil, biodiesel, and, as a byproduct, animal feed. With the world's drive to find sustainable energy sources and its large demand for meat, large yield losses can be very damaging. Only low-level resistance against this pathogen is found in European cultivars. Because the microsclerotia survive in the soil, an infested field cannot grow oilseed for many years in order to avoid disease.
See also
List of canola diseases
Verticillium dahliae
References
Fungal plant pathogens and diseases
Canola diseases
Fungi described in 1961
Enigmatic Hypocreales taxa
Fungus species | Verticillium longisporum | [
"Biology"
] | 1,138 | [
"Fungi",
"Fungus species"
] |
11,128,761 | https://en.wikipedia.org/wiki/Xylaria%20mali | Xylaria mali is a plant pathogen that causes black rot on apple.
References
Fungal tree pathogens and diseases
Apple tree diseases
Xylariales
Fungi described in 1928
Fungus species | Xylaria mali | [
"Biology"
] | 37 | [
"Fungi",
"Fungus species"
] |
11,128,763 | https://en.wikipedia.org/wiki/Xylaria%20polymorpha | Xylaria polymorpha, commonly known as dead man's fingers, is a cosmopolitan saprobic fungus. It is characterized by its elongated upright, clavate, or strap-like stromata poking up through the ground, much like fingers.
Taxonomy
The genus Xylaria contains about 100 species of cosmopolitan fungi. The specific epithet polymorpha means "many forms". As its name suggests, it has a variable but often club-shaped fruiting body (stroma) resembling burned wood.
Description
X. polymorpha belongs to the phylum Ascomycota, the sac fungi, which are characterized by a saclike structure, the ascus, containing four to eight ascospores in the sexual stage. The sac fungi are separated into subgroups based on whether asci arise singly or are borne in one of several types of fruiting structures, or ascocarps, and on the method of discharge of the ascospores. Unlike some species in this class, X. polymorpha is inedible.
Often this fungus is found with a multitude of separate "digits", but at times the individual parts will be fused together. In maturity, the fruiting bodies can be tall, externally colored black or brown, sometimes with shades of blue or green. It is white on the inside, with a blackened dotted area all around. This blackened surrounding area is made up of tiny structures called perithecia. The perithecia hold a layer of asci which contain the ascospores. The asci elongate into the ostiole, and discharge the ascospores outward. The spore distribution is a lengthy process, sometimes taking several months to complete. The spore print is black.
In springtime this fungus often produces a layer of white or bluish asexual spores called conidia, which grow on its surface and surrounding area.
Distribution and habitat
X. polymorpha is geographically distributed across all six inhabited continents. It is a common inhabitant of forest and woodland areas, usually growing from the bases of rotting or injured tree stumps and decaying wood. It has also been known to colonize substrates like woody legume pods, petioles, and herbaceous stems.
References
External links
Index Fungorum
USDA ARS Fungal Database
Xylaria polymorpha (Dead Man's Fingers)
Images
Dead Man's Fingers
Mushroom Observer, Xylaria polymorpha
Fungal tree pathogens and diseases
Xylariales
Fungi of Europe
Inedible fungi
Taxa named by Christiaan Hendrik Persoon
Fungi described in 1824
Fungus species | Xylaria polymorpha | [
"Biology"
] | 552 | [
"Fungi",
"Fungus species"
] |
11,128,986 | https://en.wikipedia.org/wiki/Cascade%20effect%20%28ecology%29 | An ecological cascade effect is a series of secondary extinctions that are triggered by the primary extinction of a key species in an ecosystem. Secondary extinctions are likely to occur when the threatened species are: dependent on a few specific food sources, mutualistic (dependent on the key species in some way), or forced to coexist with an invasive species that is introduced to the ecosystem. Species introductions to a foreign ecosystem can often devastate entire communities, and even entire ecosystems. These exotic species monopolize the ecosystem's resources, and since they have no natural predators to decrease their growth, they are able to increase indefinitely. Olsen et al. showed that exotic species have caused lake and estuary ecosystems to go through cascade effects due to loss of algae, crayfish, mollusks, fish, amphibians, and birds. However, the principal cause of cascade effects is the loss of top predators as the key species. As a result of this loss, a dramatic increase (ecological release) of prey species occurs. The prey is then able to overexploit its own food resources, until the population numbers decrease in abundance, which can lead to extinction. When the prey's food resources disappear, they starve and may go extinct as well. If the prey species is herbivorous, then their initial release and exploitation of the plants may result in a loss of plant biodiversity in the area. If other organisms in the ecosystem also depend upon these plants as food resources, then these species may go extinct as well. An example of the cascade effect caused by the loss of a top predator is apparent in tropical forests. When hunters cause local extinctions of top predators, the predators' prey's population numbers increase, causing an overexploitation of a food resource and a cascade effect of species loss. 
Recent studies have been performed on approaches to mitigate extinction cascades in food-web networks.
Current example
One example of the cascade effect caused by the loss of a top predator has to do with sea otters (Enhydra lutris). Starting before the 17th century and not phased out until 1911 when an international treaty was signed to prevent their further exploitation, sea otters were hunted aggressively for their pelts, which caused a cascade effect through the kelp forest ecosystems along the Pacific Coast of North America. One of the sea otters' primary food sources is the sea urchin (Class: Echinoidea). When hunters caused sea otter populations to decline, an ecological release of sea urchin populations occurred. The sea urchins then overexploited their main food source, kelp, creating urchin barrens where no life exists. No longer having food to eat, the sea urchins populations became locally extinct as well. Also, since kelp forest ecosystems are homes to many other species, the loss of the kelp ultimately caused their extinction as well. In conclusion, the loss of sea otters in local areas along the Pacific coast seems to have caused a cascade effect of secondary extinctions, continuing into the present day.
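The otter–urchin–kelp chain described above can be caricatured in a few lines of code. The sketch below is purely illustrative — every rate (kelp growth, carrying capacity, grazing, predation, urchin mortality) is an invented placeholder, not an empirical estimate — but it reproduces the qualitative cascade: remove the top predator and the prey booms, overgrazes its resource, and then collapses.

```python
def simulate(otters_present, steps=50):
    """Discrete-time caricature of a three-level food chain.

    All rates are hypothetical, chosen only to illustrate the
    qualitative dynamics of a trophic cascade.
    """
    kelp, urchins = 100.0, 10.0
    carrying_capacity = 200.0                        # cap on kelp biomass
    for _ in range(steps):
        grazing = min(kelp, 0.8 * urchins)           # urchins graze kelp
        predation = 0.2 * urchins if otters_present else 0.0
        kelp = max(0.0, min(carrying_capacity, kelp * 1.1) - grazing)
        urchins = max(0.0, 0.9 * urchins + 0.3 * grazing - predation)
    return kelp, urchins

kelp_with_otters, _ = simulate(otters_present=True)
kelp_without_otters, _ = simulate(otters_present=False)
# Losing the predator cascades two levels down: the kelp forest collapses.
assert kelp_with_otters > 150 > kelp_without_otters
```

Without otters, the urchins first boom, then starve once the kelp is gone — the same two-step pattern of ecological release followed by secondary collapse that the sea otter example describes.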
See also
Competition (biology)
Critical transition
Defaunation
Ecological release
Generalist and specialist species
Greenpeace
IUCN
Mutualism
Overexploitation
Trophic cascade
World Wide Fund for Nature
References
Ecology terminology
Doomsday scenarios
Conservation biology | Cascade effect (ecology) | [
"Biology"
] | 672 | [
"Ecology terminology",
"Conservation biology"
] |
11,130,010 | https://en.wikipedia.org/wiki/Misattribution%20of%20arousal | In psychology, misattribution of arousal is the process whereby people make a mistake in assuming what is causing them to feel aroused. For example, when actually experiencing physiological responses related to fear, people mislabel those responses as romantic arousal. The reason physiological symptoms may be attributed to incorrect stimuli is because many stimuli have similar physiological symptoms such as increased blood pressure or shortness of breath.
One of the initial studies looking into this phenomenon conducted by Schachter and Singer (1962) was based on the idea that the experience of arousal could be ambiguous and therefore misattributed to an incorrect stimulus. Operating under this assumption, the researchers developed the two factor theory of emotion. Misattribution of arousal, which is an influence on emotion processing, can be found in multiple situations, such as romantic situations and physiological responses from exercise.
An example of the possible effects of misattribution of arousal is perceiving a potential partner as more attractive because of a heightened state of physiological stress. A study done by White et al. (1981) investigated this phenomenon and found that those in an unrelated aroused state will rate an attractive confederate more highly than a rater without arousal. The researchers also found that aroused raters would dislike an unattractive confederate more than those without arousal.
Initial demonstration
Experiment
Donald Dutton and Arthur Aron's study (1974) on the causes of misattribution of arousal had an attractive female confederate wait at the end of a bridge that was either a suspension bridge (that would induce fear) or a sturdy bridge (that would not induce fear). After the men crossed the bridge, they were stopped by the female confederate and took a Thematic Apperception Test, in which they had to compose a short story based on an ambiguous image; the image chosen was deliberately not sexual at all. Afterward, the story was analyzed for sexual content. As the men finished the survey, the female confederate, who was not aware of the experimental hypothesis, explained she would be available to answer any questions regarding her project, giving the male subjects her name and phone number. Dutton and Aron wondered whether the participants were more likely to call the woman because they were physically attracted to her. However, they had to take into consideration confounding factors, such as the possibility that some men were already involved in a relationship, or how an individual man interpreted the woman's body language.
Therefore, Dutton and Aron had the woman survey the men under two conditions: immediately after they crossed the long suspension bridge, or after they had crossed and had had enough time to rest. In the first condition, the men surveyed were still in a state of heightened arousal as they spoke to the woman: conditions such as the wind during their walk and their nervousness may have contributed to fast-paced heartbeats and rapid breathing.
In the other condition, the woman had approached the men after they had crossed the bridge. They had enough time to rest and get their heartbeat and breathing back to normal.
Results
More males contacted the female experimenter when they had just walked down the suspension bridge, which was thought to be due to their misattribution of their arousal (they believed that they were feeling sexual arousal at the sight of the female instead of feeling the leftover physiological arousal from the fear of walking across the suspension bridge). This was interpreted by the researchers to mean that the men found the woman more attractive when they had more anxiety about crossing the bridge. There were no significant differences on either bridge when the researchers used a male confederate to give the Thematic Apperception Test and his phone number for any further questions about the experiment.
Subsequent research
Experiment: expansion on initial demonstration
Fear is not the only emotion that can result from misattribution of arousal. One of the earlier studies focusing on misattribution of arousal was done by Schachter and Singer in 1962. The researchers told the participants in their study that they were testing how their vision responded to a shot of vitamins called Suproxin. Suproxin was actually a shot of epinephrine or a placebo. Epinephrine activates the sympathetic nervous system and produces symptoms such as an elevated heart rate and blood pressure. The researchers then either told the participants the effects of the epinephrine, said nothing about any potential effects of the epinephrine, or told them they might feel some effects that had nothing to do with the epinephrine (for example, the researchers told the participants they could experience a headache from the shot). The participants were told that they had to wait 20 minutes for the Suproxin to get into their bloodstreams. While they were waiting, a confederate (a covert associate of the researchers) who was either euphoric or angry waited with the participant, to see whether the emotion of the participant could be manipulated by the epinephrine and the emotion of the confederate. The confederate played basketball with pieces of balled-up scrap paper in the euphoric state, or had to take a questionnaire and became very angry at its personal questions in the anger state. The participants were then observed to see whether they appeared angry or euphoric, and took a self-reported survey about their mood.
They found that if someone was physiologically aroused without being aware of it, they would attribute their arousal to a recent thought in their memory. The researchers also found that emotions (in this study, euphoria and anger) could be manipulated by providing a participant with a shot of epinephrine.
Experiment: influence on confidence
Misattribution of arousal can also influence how much confidence one feels before completing a task. One study conducted by Savitsky, Medvec, Charlton, and Gilovich focused on how confidence can be affected by misattribution of arousal. Typically people feel more confident before they are supposed to do a task, but the closer they get to having to perform that task, the less confident they feel, which could be due to the arousal from the expectation of the performance. The researchers told participants that they would be exposed to a subliminal noise and were then asked if they could predict how well they would do on two tasks. After they made their predictions, the researchers either told them that the noise could make them nervous, the noise would have no effect on them, or they were told that they would not have to do the tasks until the next session a month away after they were exposed to the noise. The tasks were to unscramble anagrams or to recall as many nonsense syllables as they could after seeing them briefly. They could earn money for the tasks (more money was earned for each anagram unscrambled or each syllable correctly recalled). The participants then predicted how well they did on the tasks, and how well they believed everyone else did on the task. A second experiment replicated this first experiment. The researchers had participants attribute their arousal to noises that they heard, which resulted in those participants feeling more confident that they did well on the tasks than those that attributed their arousal to the performance anxiety from the task.
Experiment: polarity
White, Fishbein, and Rutsein generated further research on the misattribution of arousal with their 1981 study, "Passionate Love and the Misattribution of Arousal". The researchers performed their study using two experiments. In the first experiment, 54 male participants were placed under varying degrees of physical exertion, in order to create a state of arousal. The participants were then asked to view a video of a female confederate either manipulated to be attractive or unattractive. The video was a self-disclosure statement where the female discussed hobbies and other typically dating related material. After completing the video, participants were given thirteen attraction trait criteria on which to rate the female. The experiment confirmed the researcher's hypothesis that individuals in a neutral aroused state were more likely to rate a target as attractive than an unaroused individual.
White, Fishbein, and Rutsein hypothesized that the polarity of an individual's arousal could influence the impact of the misattribution of arousal. To test this hypothesis the researchers created three separate videos to prime the subjects with a positive, negative, or neutral arousal. Sixty six male participants were asked to watch one of the videos and then rate the female as in the first study. The study found that regardless of the stimuli's polarity, the participants in the aroused state found the attractive confederate more attractive and the unattractive confederate as less attractive than the unaroused participants.
Experiment: conditional emotional responses
A study done by Loftis and Ross in 1974 looked at the effects of misattribution of arousal upon acquisition and extinction of a conditional emotional response. They conducted two experiments with 89 female undergraduates to show that misattribution procedures can alter physiological response to a conditioned source of a fear or arousal. The results suggested that self perception and attribution play a major role in emotional response.
Experiment: alternate model
Additionally, a study conducted by Allen, Kenrick, Linder and McCall in 1989 suggested an alternative explanation to the misattribution of arousal in terms of attraction-arousal (as there are other types of arousal that can be misattributed). The original model of misattribution of arousal essentially states that the person who is aroused will only be attracted to the target person, so long as they are unaware of the true cause of arousal. Allen et al. propose a different model called Response-Facilitation, in which they suggest that the subject that is aroused will be more attracted to the target person (compared to people who are unaroused), regardless of if they know the true source of the stimulation. To test this new model, they ran two studies in which they manipulated the type of arousal the participants experienced and the salience (awareness) of the arousal. In one study, they put the participants into three different groups: control (no arousal), arousal-awareness (where they were put in front of an attractive person, but knew what the actual cause of arousal (exercise) was), and arousal-unaware (where they were put in front of an attractive person but didn't realize the true cause of arousal). After conducting the study, the researchers found that regardless of whether the person was aware of the true cause of arousal, they were still more attracted to the target person than those in the control condition. In other words, whereas the original theory of misattribution of arousal said that people could only misattribute their arousal if they were unaware of the true cause, the Response-Facilitation model states that it is possible that people could be aware of the true cause of arousal and still find themselves attracted to someone, compared to people who are unaroused. This demonstrates another potential model that could explain the attraction-arousal component of the misattribution of arousal theory.
See also
Alexithymia
Two factor theory of emotion
Misattribution theory of humor
There are no atheists in foxholes
References
Aronson, E., Wilson, T. & Akert, R., Social Psychology (6th edition). 2005. 145-147.
External links
"The Capilano Suspension Bridge experiment"
Sexology | Misattribution of arousal | [
"Biology"
] | 2,323 | [
"Behavioural sciences",
"Behavior",
"Sexology"
] |
11,130,126 | https://en.wikipedia.org/wiki/Music%20Temple | Music Temple is a geological feature near the Colorado River in Glen Canyon, southern Utah, on the Navajo Nation, United States, which consists of a grotto with high, vaulting walls and a deep central pool of water.
It was named by John Wesley Powell in 1869 because of its unusual acoustic properties: reportedly, a one-second note would reverberate for eleven seconds. It was considered one of the more beautiful natural formations of the area in the late 19th century and into the mid-20th century.
Music Temple is within the area that was inundated by Lake Powell, formed by the construction of Glen Canyon Dam. It was submerged by the mid-1960s as the lake filled. As the lake level has declined over the years due to a long-term drought, the feature has become partially exposed.
Sources
University of Utah
Lake Powell
Landforms of San Juan County, Utah
Geography of the Navajo Nation | Music Temple | [
"Engineering"
] | 181 | [
"Colorado River Storage Project",
"Lake Powell"
] |
11,130,631 | https://en.wikipedia.org/wiki/Avidyne%20Entegra | Avidyne Entegra is an integrated aircraft instrumentation system ("glass cockpit"), produced by Avidyne Corporation, consisting of a primary flight display (PFD), and multi-function display (MFD). Cirrus became the first customer of the Entegra system and began offering it on the SR20 and SR22 aircraft in 2003 as the first integrated flight deck for light general aviation (GA). The original Entegra system was designed to use third-party components such as a GPS from Garmin and an autopilot system from S-TEC Corporation.
One of the advantages of these glass flight deck systems is upgradeability. Avidyne has demonstrated this with a continuous stream of hardware and software upgrades, including:
2004: Added CMax Electronic Charts and first to certify XM datalink for light GA.
2005: Added Primary Engine Instrumentation on PFD.
2006: Introduced Release 6, which added Flight Director, V-Speed & Heading on ADI, additional datalink weather products on the MFD, and support for the USB memory-stick data loader.
2007: Introduced Release 7, which added support for WAAS/LPV Approach guidance among other things.
2008: Introduced Release 8, which expanded weather product for Canadian, Mexico and Caribbean (METARS, TAFs, Color Lightning).
2009: Release 9, a hardware and software upgrade that was certified in April 2009.
Avidyne has also introduced the DFC90 digital, attitude-based autopilot for Entegra installations, which replaces the rate-based S-TEC 55X autopilot and adds advanced features such as a "straight & level" button, envelope protection, and IAS climb. To install the DFC90 autopilot (a slide-in replacement for the 55X), the PFD has to be upgraded to the WAAS standard.
With the introduction of the IFD navigator product range (IFD440/540/550), Entegra 8.x is no longer dependent on Garmin navigators. The IFD440 COM/GPS/WAAS navcoms are direct slide-in replacements for the GNS430W navigators. Original Entegra systems with the non-WAAS GNS430 navigators need a PFD hardware and software upgrade before they can utilize 430W or IFD440 navigators, which are capable of GPS/WAAS 3D approaches such as LNAV/VNAV or LPV.
Navdata and Approach Charts on the MFD can be updated via the USB port on the MFD (which is not suitable for charging).
System Redundancy
Entegra Release 9 system was designed with a fully redundant dual-databus architecture that eliminates traditional "Reversionary Modes."
A typical Entegra Release 9 installation features two large-format IFD5000 Integrated Flight Displays (IFDs), which are fully interchangeable for use as PFD or MFD. Since each IFD5000 is fully capable of performing the functions of the other, no unfamiliar or limited reversionary modes are required. In the event of a display failure, the remaining IFD5000 continues to operate as either display format with no loss of functionality.
Some competing glass flight deck systems have limited redundancy, lose critical functionality such as datalink weather, traffic, or even autopilot, and their failure modes force the pilot to learn composite display symbology and "reversionary modes."
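The difference between role reassignment and a reversionary mode can be sketched in a few lines of hypothetical code (this is an illustration of the design idea, not Avidyne's actual software; all names are invented). Because either display can run either function, a failure costs a screen but no capability.

```python
class Display:
    """Toy model of an interchangeable flight display (hypothetical)."""
    def __init__(self, name, role):
        self.name = name
        self.role = role          # "PFD" or "MFD"
        self.healthy = True

def surviving_roles(displays):
    """On a single-display failure, the survivor assumes both roles intact
    rather than dropping into a limited 'reversionary' composite mode."""
    alive = [d for d in displays if d.healthy]
    if len(alive) == 1:
        alive[0].role = "PFD+MFD"   # full functionality retained
    return [(d.name, d.role) for d in alive]

ifds = [Display("IFD5000-1", "PFD"), Display("IFD5000-2", "MFD")]
ifds[0].healthy = False             # simulate losing the PFD screen
assert surviving_roles(ifds) == [("IFD5000-2", "PFD+MFD")]
```

The design choice being modeled is symmetry: since nothing about a display's hardware ties it to one role, failover is a reassignment rather than a degraded mode the pilot must learn separately.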
GA Glass history
Avidyne was first to certify big glass for light GA with the 2003 launch of Entegra in Cirrus aircraft. This is considered a "first generation" big-glass system that integrates the six 3-inch instruments (the "6-pack") into a more usable package, along with an exceptionally reliable Air Data and Attitude Heading Reference System (ADAHRS) that replaces the "spinning mass" attitude and directional gyros. Entegra Release 8 still relies on a 'federated' radio stack (dual G430s) for GPS/NAV/COM capability, as well as audio and transponder functions.
Entegra R9 was meant to replace the original Entegra system in Cirrus aircraft in 2007, but Cirrus went with the new G1000/Perspective system instead. For a short time Cirrus aircraft could be ordered with either Avidyne or Garmin avionics; today the Garmin G1000 is the only option, and it has become the general aviation market leader in glass cockpits. Avidyne still supports R8 (Entegra) and R9 systems and gave existing Entegra customers an upgrade path with the introduction of the DFC90 digital autopilot and the IFD4/5xx series of GPS navigators.
Use
Avidyne Entegra systems are found in aircraft from such companies as:
Cirrus Aircraft
Columbia/Lancair Aircraft
Piper Aircraft
Spectrum Aeronautical
Extra Aircraft
Competition
The Avidyne Entegra competes with the Garmin G1000 and Chelton FlightLogic EFIS glass cockpits. However, there are significant differences with regard to features, degree of integration, intuitiveness of the design, and overall product utility. Note that the Chelton system is not typically found in airplanes that include the less expensive G1000 or Avidyne systems. Other competitors are Aspen and Dynon.
External links
Avidyne Corporation
Avidyne Entegra
Release9.com
Plastic Pilot
References
Avionics
Aircraft instruments
Glass cockpit | Avidyne Entegra | [
"Technology",
"Engineering"
] | 1,135 | [
"Glass cockpit",
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
11,130,776 | https://en.wikipedia.org/wiki/Poison%20laboratory%20of%20the%20Soviet%20secret%20services | The poison laboratory of the Soviet secret services, alternatively known as Laboratory 1, Laboratory 12, and Kamera (which means "The Cell" in Russian), was a covert research-and-development facility of the Soviet secret police agencies. Prior to the dissolution of the Soviet Union, the laboratory manufactured and tested poisons, and was reportedly reactivated by the Russian government in the late 1990s.
The laboratory activities were mentioned in the Mitrokhin archive.
Chronology
1921: First poison laboratory within the Soviet secret services was established under the name "Special Office". It was operated by the Cheka and headed by professor of medicine Ignatii Kazakov, according to Pavel Sudoplatov.
1926: The laboratory was under the supervision of Genrikh Yagoda, a deputy of OGPU chairman Vyacheslav Menzhinsky, who became NKVD chief in 1934 after Menzhinsky's death.
February 20, 1939: It becomes Laboratory 1 headed by Grigory Mairanovsky. The laboratory was under the direct supervision of NKVD director Lavrenty Beria and his deputy Vsevolod Merkulov from 1939 to March 1953. Victims included the American Isaiah Oggins.
March 14, 1953: It was renamed to Laboratory 12. V. Naumov became the newly appointed head. Lavrenty Beria and Vsevolod Merkulov were executed after Stalin's death. Immediate NKVD supervisor of the laboratory, Pavel Sudoplatov, received a long term in prison.
1978: Expanded into the Central Investigation Institute for Special Technology within the First Chief Directorate of the KGB.
Since 1991: Several laboratories of the SVR (headquartered in Yasenevo near Moscow) were responsible for the "creation of biological and toxin weapons for clandestine operations in the West".
Human experimentation
Mairanovsky and his colleagues tested a variety of lethal poisons on prisoners from the Gulags, including mustard gas, ricin, digitoxin, curare, cyanide, and many others. The objective of these experiments was to identify a tasteless, odorless chemical that could not be detected. Candidate poisons were administered to the victims along with a meal or drink, disguised as "medication".
Ultimately, a preparation meeting the desired criteria was developed and referred to as C-2 or K-2 (carbylamine choline chloride). According to witness testimonies, the victims experienced physical changes, such as a rapid weakening and diminishment in height, followed by a calm and silent demeanor, culminating in death within 15 minutes. Mairanovsky intentionally brought individuals of various physical conditions and ages into the laboratory to comprehensively understand the effects of each poison.
Pavel Sudoplatov and Nahum Eitingon only approved specialized equipment (namely, poisons) if it had been tested on "humans", as revealed in the testimony of Mikhail Filimonov. Vsevolod Merkulov stated that these experiments received authorization from NKVD chief Lavrentiy Beria. Following Stalin's death and Beria's subsequent arrest, Beria attested on August 28, 1953, that "I gave orders to Mairanovsky to conduct experiments on people sentenced to the highest measure of punishment, but it was not my idea".
In addition to human experimentation, Mairanovsky personally executed people with poisons, under the supervision of Sudoplatov.
Prominent victims
1930: The leader of the Russian All-Military Union, general Alexander Kutepov, was drugged and kidnapped in Paris and died from a heart attack due to an overdose of the administered drug.
1936: Nestor Lakoba, Abkhaz Communist leader
1937: One of the leaders of the White movement and head of the Russian All-Military Union, Russian general Evgenii Miller, was drugged and kidnapped in Paris and later executed in Russia.
1938: Abram Slutsky (17 February 1938)
1940: Nikolai Koltsov, famous Russian biologist
1947: Cy Oggins was taken to Laboratory Number One (the "Kamera"), where Grigory Mairanovsky injected him with curare, which takes 10 to 15 minutes to kill
1947: Archbishop Theodore Romzha of the Ukrainian Catholic Church was killed by injection of curare provided by Mairanovsky and administered by a medical nurse who was a Ministry for State Security agent.
1971: Nobel prize laureate and dissident Alexander Solzhenitsyn was poisoned with what was later determined to be ricin. Solzhenitsyn survived the attempt.
1978: Dissident Bulgarian writer Georgi Markov was assassinated in London using a tiny pellet from an umbrella gun poisoned with ricin; the necessary equipment was prepared in this laboratory. In a Discovery Channel television program about his illustrated book of espionage equipment called The Ultimate Spy, espionage historian H. Keith Melton indicates that once the Bulgarian secret service had decided to kill Markov, KGB specialists from the Laboratory gave the Bulgarians a choice between two KGB tools that could be provided for the task: either a poisonous topical gelatin to be smeared on Markov, or an instrument to administer a poison pellet, as was eventually done.
1979: Attempted poisoning of the second President of Afghanistan Hafizullah Amin on December 13, 1979. Department 8 of KGB succeeded in infiltrating the illegal agent Mitalin Talybov (codenamed SABIR) as a chef of Amin's presidential palace. However, Amin switched his food and drink as if he expected to be poisoned, so his son-in-law became seriously ill and, ironically, was flown to a hospital in Moscow.
Alleged victims
Russian writer Maxim Gorky and his son Max Peshkov. During the Trial of the Twenty-One in 1938, NKVD chief Genrikh Yagoda admitted that he poisoned Gorky and his son and unsuccessfully tried to poison his own deputy (and eventual successor) Nikolai Yezhov. The attempted poisoning of Yezhov was later officially dismissed as falsification, but Vyacheslav Molotov believed that the poisoning accusations were true. Yagoda was never officially rehabilitated (recognized as an innocent victim of political repressions) by Soviet authorities.
Soviet leader Joseph Stalin. Russian historians Anton Antonov-Ovseenko and Edvard Radzinsky believe that Stalin was poisoned by associates of NKVD chief Lavrentiy Beria, based on the interviews of a former Stalin bodyguard and numerous pieces of circumstantial evidence. Stalin planned to dismiss and execute Molotov and other senior members of the Soviet regime in 1953. According to Radzinsky, Stalin was poisoned by Khrustalev, a senior bodyguard briefly mentioned in the memoirs of Svetlana Alliluyeva, Stalin's daughter.
Georgi Dimitrov, the first Communist leader of Bulgaria, abruptly fell ill in 1949 and died in a Moscow hospital. According to some historians, Dimitrov was poisoned by the Soviet authorities on the orders of Stalin, due in part to his support for the proposed Balkan Federation.
Nikolai Khokhlov, a KGB defector who survived a thallium poisoning attempt in Frankfurt in 1957.
Alleged FSB victims
Lechi Ismailov, a Chechen rebel commander sentenced in Russia to nine years in prison, died in September 2002 after an unsuccessful FSB attempt to recruit him as an informer. Shortly after being transferred from the Lefortovo prison to a regular prison, he had a "farewell" cup of tea with an FSB officer; he then fell fatally ill, lost his hair, and died shortly afterward.
Roman Tsepov, a Russian businessman close to Vladimir Putin and Tambov Gang circles.
Amir Khattab, who was poisoned by "a fast-acting nerve agent, possibly sarin or a derivative" transferred on a letter delivered by an FSB-recruited courier.
Yuri Shchekochikhin, a Russian journalist for the newspaper Novaya Gazeta, Shchekochikhin investigated apartment bombings allegedly directed by the Russian secret services and the Three Whales Corruption Scandal which involved high-ranking FSB. Shchekochikhin died from a fast and mysterious disease shortly before his departure to the US to testify before FBI investigators. His medical documentation was classified as "state secret" by Russian authorities.
Journalist Anna Politkovskaya. During the Beslan school hostage crisis in September 2004 and while on her way to Beslan to help in negotiations with the hostage-takers, Politkovskaya fell violently ill and lost consciousness after drinking tea given to her by an Aeroflot flight attendant. She survived. The drug was allegedly prepared in the FSB poison facility. Politkovskaya was later shot to death in her Moscow apartment building in 2006.
Former KGB agent Alexander Litvinenko. He was poisoned in a sushi bar in London in 2006. Traces of polonium-210 were found in his body. In a farewell letter, Litvinenko accused President Vladimir Putin of being behind the attack on his life. Litvinenko was critical of the Putin regime and accused the FSB of being behind the 1999 attacks in Russia. He died on 23 November 2006.
Viktor Kalashnikov, a freelance journalist and former KGB colonel, and his wife Marina Kalashnikova. In December 2010, the Charité hospital in Berlin discovered that they had been poisoned with mercury. Viktor Kalashnikov claimed it was the work of the FSB.
Karinna Moskalenko, a human rights lawyer who defended Litvinenko and other anti-Putin dissidents in court. She fell ill from mercury poisoning in October 2008, just prior to a hearing regarding the assassination of Anna Politkovskaya. Although initially alleged to be an attempt on her life, it was found by French police to be the result of a barometer broken in the car by the previous owner.
Viktor Yushchenko, the third President of Ukraine. Yuschenko was found to have been poisoned with TCDD dioxin during the 2004 Ukrainian presidential election campaign. In 2009, he accused Russia of shielding a number of witnesses to his poisoning, and called on the Russian government to turn them over.
Pyotr Verzilov, spokesman for the protest band Pussy Riot. Verzilov was admitted to a hospital in Moscow in September 2018, before being transferred to the Charité in Berlin. The German doctors believed it was "highly probable" that Verzilov was poisoned.
Vladimir Kara-Murza, opposition politician. Kara-Murza suddenly fell ill during a meeting in Moscow in May 2015, and was in a coma for more than a month. Coming on the heels of the assassination of Boris Nemtsov in Moscow that February, his family suspected he had been poisoned. Kara-Murza was hospitalized again for an alleged poisoning in February 2017.
Sergei Skripal, former GRU officer and double agent for the British SIS, and his daughter Yulia. On 4 March 2018, the Skripals were poisoned with a Novichok agent in Salisbury, United Kingdom, where Sergei had been living since 2010. Both eventually recovered; in 2020, they were reported to be living under new identities in New Zealand.
Emilian Gebrev, Bulgarian arms dealer. Gebrev, his son, and one of his business partners were allegedly poisoned in April 2015. British investigators traveled to Bulgaria in 2019 to investigate an alleged connection between Gebrev's poisoning and that of the Skripals in England in 2018.
Alexei Navalny, anti-corruption advocate and opposition leader. Navalny fell ill on a flight from Tomsk to Moscow on 20 August 2020, and was placed into an induced coma at a hospital in Omsk. He was transferred to the Charité in Berlin two days later. Five laboratories certified by the Organisation for the Prohibition of Chemical Weapons confirmed the presence of a new type of Novichok agent.
Planned victims
President of the Socialist Federal Republic of Yugoslavia Josip Broz Tito. In the late 1940s, the laboratory manufactured a powdered plague agent designed to be released from a small container by an assassin who had been vaccinated against plague. The device was to be used against Tito, but MGB agent Iosif Grigulevich, who had previously organized the assault on the villa of Leon Trotsky and had now received the assignment to kill Tito, was recalled after the death of Joseph Stalin.
FSB era
The first democratically elected President of the Republic of Georgia, Zviad Gamsakhurdia. According to former Deputy Director of Biopreparat Ken Alibek, this laboratory was possibly involved in the design of an undetectable chemical or biological agent to assassinate Gamsakhurdia. BBC News reported that some Gamsakhurdia friends believed he committed suicide, "although his widow insists that he was murdered."
Threatened dissidents
The New York Times reported that Garry Kasparov, the chess champion and Putin opponent, drinks bottled water and eats prepared meals carried by his bodyguards.
See also
Active measures
Biopreparat
List of poisonings
Nazi human experimentation
North Korean human experimentation
Project MKULTRA
Unit 731 (Japan)
United States chemical weapons program
Notes and references
Sources
PETLIURA, KONOVALETS, BANDERA - Three Leaders of Ukrainian Liberation Movement murdered by the Order of Moscow. Ukrainian Publishers Limited. 237, Liverpool Road, London, United Kingdom. 1962. (audiobook).
Ken Alibek and S. Handelman. Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World - Told from Inside by the Man Who Ran it. 1999. Delta (2000)
Vadim J. Birstein. The Perversion Of Knowledge: The True Story of Soviet Science. Westview Press (2004) .
Vasili Mitrokhin and Christopher Andrew, The World Was Going Our Way: The KGB and the Battle for the Third World, Basic Books (2005) hardcover, 677 pages
The Laboratory 12 poison plot, by Martin Sixsmith, The Sunday Times, April 8, 2007
The KGB's Poison Factory, by Boris Volodarsky, Wall Street Journal, 7 April 2005
History of Soviet poisonings (Russian) by Boris Sokolov grani.ru
Organic poison (Russian) by Vladimir Abarinov, grani.ru
Boris Volodarsky, The KGB’s Poison Factory: From Lenin to Litvinenko (London: Frontline Books, 2009)
Boris Volodarsky, Assassins: The KGB's Poison Factory 10 Years On (London: Frontline Books, 2019) ISBN 978-1-52673-392-4
Research institutes in Russia
Research institutes in the Soviet Union
Toxicology
Biological warfare facilities
Chemical warfare
Medical experimentation on prisoners of war
Human rights abuses in the Soviet Union
Human subject research in Russia
Soviet intelligence agencies
KGB
1921 establishments in Russia
Research institutes established in 1921
Government agencies established in 1921
Soviet chemical weapons program
Chemical warfare facilities
Soviet biological weapons program
Human rights in the Soviet Union
Military history of the Soviet Union during World War II
Soviet World War II crimes
Crimes against humanity
NKVD | Poison laboratory of the Soviet secret services | [
"Chemistry",
"Biology",
"Environmental_science"
] | 3,074 | [
"Medical experimentation on prisoners of war",
"Toxicology",
"Chemical warfare facilities",
"Biological warfare facilities",
"Biological warfare",
"nan"
] |
11,131,291 | https://en.wikipedia.org/wiki/Extra%20element%20theorem | The Extra Element Theorem (EET) is an analytic technique developed by R. D. Middlebrook for simplifying the process of deriving driving point and transfer functions for linear electronic circuits. Much like Thévenin's theorem, the extra element theorem breaks down one complicated problem into several simpler ones.
Driving point and transfer functions can generally be found using Kirchhoff's circuit laws. However, several complicated equations may result that offer little insight into the circuit's behavior. Using the extra element theorem, a circuit element (such as a resistor) can be removed from a circuit, and the desired driving point or transfer function is found. By removing the element that most complicates the circuit (such as an element that creates feedback), the desired function can be easier to obtain. Next, two correctional factors must be found and combined with the previously derived function to find the exact expression.
The general form of the extra element theorem is called the N-extra element theorem and allows multiple circuit elements to be removed at once.
General formulation
The (single) extra element theorem expresses any transfer function as a product of the transfer function with that element removed and a correction factor. The correction factor term consists of the impedance of the extra element and two driving point impedances seen by the extra element: the double-null-injection driving point impedance and the single-injection driving point impedance. Because an extra element can in general be removed by either short-circuiting or open-circuiting it, there are two equivalent forms of the EET:

H(s) = H∞(s) · [1 + Z_n(s)/Z(s)] / [1 + Z_d(s)/Z(s)]

or,

H(s) = H0(s) · [1 + Z(s)/Z_n(s)] / [1 + Z(s)/Z_d(s)]

where the Laplace-domain transfer functions and impedances in the above expressions are defined as follows: H(s) is the transfer function with the extra element present; H∞(s) is the transfer function with the extra element open-circuited; H0(s) is the transfer function with the extra element short-circuited; Z(s) is the impedance of the extra element; Z_d(s) is the single-injection driving point impedance "seen" by the extra element; Z_n(s) is the double-null-injection driving point impedance "seen" by the extra element.
The extra element theorem incidentally proves that any electric circuit transfer function can be expressed as no more than a bilinear function of any particular circuit element.
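As a concrete sanity check of the open-circuit form, H(s) = H∞(s)·[1 + Z_n(s)/Z(s)]/[1 + Z_d(s)/Z(s)], the short Python sketch below applies it to a hypothetical first-order RC low-pass filter (series R, shunt C, output taken across C), with the capacitor chosen as the extra element. The circuit and component values are illustrative assumptions, not taken from this article; for this circuit H∞ = 1 (capacitor open-circuited, so the output follows the input), Z_d = R (input source shorted), and Z_n = 0 (nulling the output forces zero volts across the capacitor port).

```python
# Check the open-circuit form of the EET, H = H_inf*(1 + Zn/Z)/(1 + Zd/Z),
# on a hypothetical RC low-pass filter: series R, shunt C, output across C,
# with the capacitor chosen as the extra element.

R, C = 1.0e3, 1.0e-6   # illustrative component values (assumed, not from the article)

def h_direct(s):
    """Transfer function from straightforward voltage-divider analysis."""
    zc = 1 / (s * C)
    return zc / (R + zc)             # = 1 / (1 + s*R*C)

def h_eet(s):
    """Same transfer function assembled from the three EET ingredients."""
    z = 1 / (s * C)    # impedance of the extra element (the capacitor)
    h_inf = 1.0        # C open-circuited: output follows the input
    z_n = 0.0          # output nulled => zero volts across the C port
    z_d = R            # input source shorted: C sees only R
    return h_inf * (1 + z_n / z) / (1 + z_d / z)

# Compare the two expressions at a few frequencies s = jw.
for w in (10.0, 1.0e3, 1.0e5):       # rad/s
    s = 1j * w
    assert abs(h_direct(s) - h_eet(s)) < 1e-12
```

Evaluating both expressions at several frequencies confirms that the correction factor reproduces the familiar 1/(1 + sRC) response without re-deriving it from node equations.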
Driving point impedances
Single Injection Driving Point Impedance
Z_d is found by making the input to the system's transfer function zero (short-circuit a voltage source or open-circuit a current source) and determining the impedance across the terminals to which the extra element will be connected, with the extra element absent. This impedance is the same as the Thévenin equivalent impedance.
Double Null Injection Driving Point Impedance
Z_n is found by replacing the extra element with a second test signal source (either a current source or a voltage source, as appropriate). Then, Z_n is defined as the ratio of the voltage across the terminals of this second test source to the current leaving its positive terminal when the output of the system's transfer function is nulled for any value of the primary input to the system's transfer function.
In practice, Z_n can be found by working backward from the facts that the output of the transfer function is made zero and that the primary input to the transfer function is unknown, then using conventional circuit analysis techniques to express both the voltage across the extra element test source's terminals, v(s), and the current leaving its positive terminal, i(s), and calculating Z_n(s) = v(s)/i(s). Although the computation of Z_n is an unfamiliar process for many engineers, its expressions are often much simpler than those for Z_d, because nulling the transfer function's output often forces other voltages and currents in the circuit to zero, which may allow certain components to be excluded from the analysis.
Special case with transfer function as a self-impedance
As a special case, the EET can be used to find the input impedance of a network with the addition of an element designated as "extra". In this case, Z_d is the same as the impedance seen by the extra element with the input test current source signal made zero, or equivalently with the input open-circuited. Likewise, since the transfer function's output signal can be considered to be the voltage at the input terminals, Z_n is found when the input voltage is zero, i.e. when the input terminals are short-circuited. Thus, for this particular application, the EET can be written as:

Z_in(s) = Z_in^∞(s) · [1 + Z_n(s)/Z(s)] / [1 + Z_d(s)/Z(s)]

where
Z is the impedance chosen as the extra element
Z_in^∞ is the input impedance with Z removed (or made infinite)
Z_n is the impedance seen by the extra element Z with the input shorted (or made zero)
Z_d is the impedance seen by the extra element Z with the input open (or made infinite)
Computing these three terms may seem like extra effort, but they are often easier to compute than the overall input impedance.
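To illustrate that claim, the Python sketch below applies the input-impedance form Z_in = Z_in^∞·(1 + Z_n/Z)/(1 + Z_d/Z) to a hypothetical circuit (not the Figure 1 circuit of this article): a resistor R1 in series with the parallel combination of R2 and a capacitor C, with the capacitor treated as the extra element and all values set to unity. Each of the three EET terms is found by inspection.

```python
# Numerical sanity check of the input-impedance form of the EET on a
# simple hypothetical circuit: R1 in series with (R2 parallel C), with the
# capacitor treated as the "extra" element.  All component values are unity.

def z_parallel(a, b):
    """Impedance of two elements in parallel."""
    return a * b / (a + b)

def z_in_direct(s, r1=1.0, r2=1.0, c=1.0):
    """Input impedance computed directly from the circuit."""
    zc = 1 / (s * c)                 # impedance of the capacitor
    return r1 + z_parallel(r2, zc)

def z_in_eet(s, r1=1.0, r2=1.0, c=1.0):
    """Input impedance assembled from the three EET ingredients."""
    z = 1 / (s * c)                  # extra element impedance Z
    z_inf = r1 + r2                  # input impedance with C removed (open)
    z_n = z_parallel(r1, r2)         # seen by C with the input shorted
    z_d = r2                         # seen by C with the input open
    return z_inf * (1 + z_n / z) / (1 + z_d / z)

# Compare the two expressions at a few complex frequencies s = jw.
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    assert abs(z_in_direct(s) - z_in_eet(s)) < 1e-12
```

Each of the three EET terms here is a resistive combination readable off the schematic, whereas the direct computation already requires manipulating a complex-valued parallel combination; the gap in effort grows quickly with circuit size.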
Example
Consider the problem of finding Z_in for the circuit in Figure 1 using the EET (note that all component values are unity for simplicity). If the capacitor (gray shading) is denoted the extra element, then its impedance is Z = 1/s.
Removing this capacitor from the circuit,
Calculating the impedance seen by the capacitor with the input shorted,
Calculating the impedance seen by the capacitor with the input open,
Therefore, using the EET,
This problem was solved by calculating three simple driving point impedances by inspection.
Feedback amplifiers
The EET is also useful for analyzing single and multi-loop feedback amplifiers. In this case, the EET can take the form of the asymptotic gain model.
See also
Asymptotic gain model
Blackman's theorem
Return ratio
Signal-flow graph
Further reading
Christophe Basso, Linear Circuit Transfer Functions: An Introduction to Fast Analytical Techniques, first edition, Wiley-IEEE Press, 2016, ISBN 978-1119236375
References
External links
Examples of applying the EET
Derivation and examples
Fast Analytical Techniques at Work in Small-Signal Modeling
Circuit theorems | Extra element theorem | [
"Physics"
] | 1,179 | [
"Equations of physics",
"Circuit theorems",
"Physics theorems"
] |
11,131,769 | https://en.wikipedia.org/wiki/Long%20Tom%20%28rocket%29 | Long Tom was the first Australian sounding rocket. It was first launched from the Woomera Test Range in October 1957. It was a two-stage rocket developed to test the range's instrumentation for later projects. In the early 1960s it was superseded by the HAD and HAT sounding rockets.
See also
Australian Space Research Institute
References
Sounding rockets of Australia | Long Tom (rocket) | [
"Astronomy"
] | 71 | [
"Rocketry stubs",
"Astronomy stubs"
] |
11,132,183 | https://en.wikipedia.org/wiki/Normative%20social%20influence | Normative social influence is a type of social influence that leads to conformity. It is defined in social psychology as "...the influence of other people that leads us to conform in order to be liked and accepted by them." The power of normative social influence stems from the human identity as a social being, with a need for companionship and association.
Normative social influence involves a change in behaviour that is deemed necessary in order to fit in with a particular group. The need for a positive relationship with the people around us leads to conformity. This often results in public compliance (but not necessarily private acceptance) with the group's social norms in order to be accepted by the group. Social norms are the unwritten rules that govern social behavior: customary standards for behavior that are widely shared by members of a culture.
In many cases, normative social influence serves to promote social cohesion. When a majority of group members conform to social norms, the group generally becomes more stable. This stability translates into social cohesion, which allows group members to work together toward a common understanding, or "good", but also has the unintended impact of making the group members less individualistic.
Research
Classic research
In 1955, Solomon Asch conducted his classic conformity experiments in an attempt to discover if people still conform when the right answer is obvious. Specifically, he asked participants in his experiment to judge the similarity of lines, an easy task by objective standards. Using accomplices to the plot, also known as confederates, Asch created the illusion that an entire group of participants believed something that was clearly false (i.e., that dissimilar lines were actually similar). In this situation, participants conformed over 36% of the time on trials where the confederates gave blatantly false answers. When asked to make the judgments in private, participants gave the right answer more than 99% of the time. Asch's results cannot be explained by informational social influence, because in this case, the task was easy and the correct answer was obvious. Thus, participants were not necessarily looking to others to figure out the right answer, as informational social influence predicts. Instead, they were seeking acceptance and avoiding disapproval. Follow-up interviews with participants of the original Asch studies confirmed this. When participants were asked why they conformed, many provided reasons other than a need for accuracy.
Current research
In more recent research, Schultz (1999) found that households receiving normative messages describing the frequency and amount of their neighbors' weekly recycling changed both the frequency and the amount of their own curbside recycling. The change occurred because the neighbors' recycling habits had a direct normative effect on each household's behavior. Similar results were apparent in another study in which researchers increased household energy conservation through the use of normative messages. Participants in this conservation study did not believe that such messages could influence their behavior; they attributed their conservation efforts to environmental concerns or social responsibility. Thus, normative social influence can be a very powerful, yet unconscious, motivator of behavior.
Consequences
Lastly, different studies have illustrated the consequences of deviation from a group's influence. In a study by Schachter (1951), participants were placed in groups and asked to discuss what to do with a juvenile delinquent they had read about. A "deviant" was instructed by the experimenter to take a stand strongly opposing that of the rest of the group and to hold this position in the midst of any arguments from other members. After the conclusion of the discussions, participants chose to reject this deviant the most, considering him the least desirable of the members, and relegating him to the least important tasks. Recent work by Berns et al. (2005) examined the physiological effects of deviation by using fMRI to scan participants' brains as they completed an object rotation task with other "participants", who were actually confederates. The researchers were interested in examining participants' brain activity when they were under pressure to conform to an incorrect group majority. The amygdala region (which is associated with negative emotions) was activated when participants sought to break off from the influence of the majority; providing support for the point that resisting normative social influence can often lead to negative emotional consequences for individuals.
Affecting factors
Social impact theory
Latané's social impact theory posits that three factors influence the extent to which we conform to group norms: personal importance, immediacy, and size. As the group becomes more important to a person, physically closer to him/her, and larger in number, social impact theory predicts that conformity to group norms will increase. However, group size affects conformity only up to a point: as a group expands past 3–5 members, the effect levels off.
Unanimity
When a group is unanimous in its support of a norm, an individual feels greater pressure to follow suit. However, even a small break in unanimity can lead to a decrease in the power of such normative influence. In Asch's study, when even one other confederate dissented from the majority and provided the correct answer, the participant answered incorrectly on fewer trials (about a fourth less). In addition, participants experienced positive emotions towards such dissenters. A similar reduction in conformity even occurred when the dissenting confederate provided an answer that was false (but still different from that of the majority).
In some versions of the experiment, Asch had dissenting confederates eventually rejoin the majority opinion after several trials; when this occurred, participants experienced greater pressure from normative influence and conformed as if they had never had the dissenter on their side. However, when the conditions were altered and the dissenting confederate left the room after several trials, the participants did not experience a similar pressure to conform as they had when the confederate rejoined the majority—they made fewer mistakes than they had in the condition where the confederate rejoined the others.
Private vs. public
The pressure to bend to normative influence increases for actions performed in public, whereas this pressure decreases for actions done in private. In another variation of the Asch study, the researchers allowed the participant to privately write down his answer after all of the confederates had publicly stated their answers; this variation reduced the level of conformity among participants. In addition, the control condition of the Asch study revealed that participants were almost perfectly accurate when answering independently.
Minority influence
It is possible for a vocal minority to stem the normative influence of a larger majority. In the versions of the Asch study where a dissenter was inserted into the group (see Unanimity section), his presence as a minority member gave the participant the confidence to exert his independence to a greater extent. However, as soon as the dissenter waffled on his opinions and rejoined the majority, participant conformity increased. Thus, a minority must consistently stand by its beliefs to be effective.
In addition, there are other factors that increase the power of the minority: when the majority is forced to think about the beliefs and perspective of the minority, when the majority and minority are similar to one another, and when the minority exhibits some willingness to compromise and be flexible, although there is debate over the degree to which consistency and compromise should be balanced.
It is often the case that whereas a majority influences public compliance with a norm, a minority can engender private acceptance of a new norm, with the result often being conversion (public and private acceptance of a norm).
Cultural differences
There is a distinction between individualistic (e.g., United States) and collectivistic (e.g., Japan) cultures. While some predict that collectivistic cultures would exhibit stronger conformity under normative social influence, this is not necessarily the case—the identity of the group acts as a potential moderator. Because collectivists emphasize the importance of in-group members (e.g., family and friends), normative pressure from in-groups can lead to higher conformity than pressures from strangers.
Gender differences
Many have long wondered whether there is a gender gap in conformity under normative influence, with women possibly conforming more than men. A meta-analysis by Eagly and Carli (1981) shows that this gap is small, and driven by public vs. private situations. Women do conform (slightly) more under normative influence than do men when in public situations as opposed to private ones. Eagly and Carli found that male researchers reported higher levels of conformity among female participants than did female researchers; the authors speculate that each gender could be implicitly biased towards portraying itself in a positive light, thus leading to actions (e.g., setting up experimental conditions under which males or females may be more comfortable) that might favor one gender over the other.
Examples
Fashion choices are often impacted by normative social influence. To feel accepted by a particular crowd, men and women often dress similarly to individuals in that group. Fashion conformity promotes social cohesion within the group and can be a result of both conscious and unconscious motivations.
Similar to fashion conformity, the male and the female views of the ideal body image are often affected by normative social influence. Social media and marketing helps to portray what is commonly considered the current view of physical attractiveness by the masses. As each generation defines the ideal female figure, women feel the pressure to conform to avoid the disapproval of others. Likewise, as society continues to define the ideal male body type as muscular and fit, men also come under pressure to conform, which often leads to changes in eating habits to reach that ideal.
See also
Acceptance
Groupthink
Obedience
Normative
Social control
References
Conformity
Human behavior | Normative social influence | [
"Biology"
] | 1,996 | [
"Behavior",
"Conformity",
"Human behavior"
] |
11,132,625 | https://en.wikipedia.org/wiki/Tuanku%20Zara%20Salim | Tuanku Zara Salim (Jawi: توانكو زارا سليم; born Zara Salim Davidson; 22 March 1973) is the Raja Permaisuri (Queen consort) of Perak as the wife of Sultan Nazrin Muizzuddin Shah, the current Sultan of Perak.
A chemical engineer by training, she was heading an oil and gas consultancy firm based in Kuala Lumpur before her marriage to the Sultan. She and the Sultan, who had been the country's most eligible royal bachelor for decades, have known each other since the mid-1990s. She was officially installed as the Raja Permaisuri of Perak during Sultan Nazrin's enthronement ceremony as the 35th Sultan of Perak on 6 May 2015.
Early life
Born in the city of Ipoh on 22 March 1973, she is the youngest of four children of William Stanley Walker Davidson (Salim Davidson), an Englishman and his ethnic Malay wife of mixed Arab and Thai descent, Sharifah Azaliah binti Syed Omar Shahabudin who is from Alor Setar, Kedah. She is also an extended member of Kedah Royal Family on her maternal side and has three elder brothers. Her father is a prominent lawyer in Perak and Kuala Lumpur.
Zara, who has a strong interest in foreign languages, studied at SMK Convent Ipoh and represented her school in squash and tennis from 1988 to 1990. She also represented Perak in swimming between 1981 and 1987.
After completing her A-Levels at Prime College in 1992, she left for the United Kingdom to study chemical engineering at the University of Nottingham and graduated with first class honours in July 1995. She also won the top student award for her final-year project.
Coincidentally her father-in-law, the late Sultan Azlan Shah, read law at the same university and was conferred the Bachelor of Laws degree in 1953 before being admitted to the English Bar in 1954.
Career
Zara joined the Business Evaluation Department in the Corporate Planning Unit of Petronas in December 1995 and was part of the team responsible for the successful establishment of the Kertih and Kuantan integrated petrochemical complexes, whose foreign partners included BP, BASF, Dow Chemicals and Mitsubishi.
She then became a project analyst in the Petronas Petrochemical Business Unit and was part of the core team developing the Petronas brand essence, which now forms part of the Petronas global branding strategy.
Between February 1999 and October 2000, she was a product manager at Petlin (Malaysia) Sdn Bhd, a Petronas joint venture with DSM of the Netherlands and Sasol of South Africa. She was also part of the Petronas project team to operationalise the largest single-train low-density polyethylene (LDPE) plant in the world at Kertih.
Zara left Petronas in November 2001 to become an account manager at Formis Network Services Sdn Bhd and then assumed the post of vice-president of partnerships and alliances at Formis (Malaysia) Berhad, a technology-based company listed on Bursa Malaysia, between 2003 and 2005.
Between 2005 and 2007, Zara, who is a certified life-saver and adventure sports enthusiast, became the managing director of Forthwave Consulting Sdn Bhd, a hydrocarbon technical engineering and software development company in Kuala Lumpur.
Royal connections
Zara Salim Davidson is the great-grandchild of Kedah's 24th ruler Sultan Abdul Hamid Halim Shah, Utusan Malaysia reported.
The Sultan ruled Kedah for 62 years from 1881 to 1943 and one of the Sultan's sons was Tunku Abdul Rahman, Malaysia's first prime minister. Therefore, she is also the grandniece of the Tunku and a niece of Sultan Abdul Halim Mu'adzam Shah and of the current sultan, Sultan Sallehuddin.
According to Zara's uncle Syed Mohd Aldinuri Syed Omar, Zara's mother Sharifah Azaliah Syed Omar was the daughter of Tunku Aminah, who was the daughter of Sultan Abdul Hamid and Che Manjalara.
From the marriage to Che Manjalara, Sultan Abdul Hamid had seven children, including Tunku Abdul Rahman Putra Al-Haj and Tunku Aminah.
Zara also has a close relationship with the late Sultan of Kedah, Sultan Abdul Halim Mu'adzam Shah Sultan Badlishah, who is the grandson of Sultan Abdul Hamid and a nephew to Tunku Abdul Rahman.
She is also the granddaughter of the former Kedah Menteri Besar Tan Sri Syed Omar Syed Abdullah Shahabudin. Tunku Aminah married Syed Omar, who became Kedah's second Menteri Besar after independence, from July 1959 to December 1967.
Marriage and children
Zara's wedding to Raja Nazrin Shah was held at Istana Iskandariah on 17 May 2007.
A day after the solemnisation of their vows, a special proclamation ceremony bestowed upon her the honorific prefix of Tuanku and the official title of Raja Puan Besar (Crown Princess) of Perak, a title reserved for the royal wife of the Raja Muda (Crown Prince) of Perak that had been vacant since April 1987.
The royal wedding reception took place on 19 May 2007.
The couple's first child, a son named Raja Azlan Muzzaffar Shah, the Raja Kecil Besar of Perak was born 14 March 2008. Their second child, a daughter named Raja Nazira Safya, was born 2 August 2011.
Current activities and social contributions
On 9 July 2007, a purple hybrid orchid was named Dendrobium Tuanku Zara Salim in honour of her visit to the 7th Ipoh International Orchid Festival.
She is the current Chancellor of the Sultan Idris Education University (UPSI), and was proclaimed on 1 January 2012.
She has been awarded an Honorary Fellowship by the Institution of Chemical Engineers (IChemE) in April 2014 in recognition of her interest in the continued improvement of Malaysia's education and academic performance.
She was then made the first Royal Patron of the IChemE in Malaysia in October 2016. On accepting the honour, Tuanku Zara said that chemical engineering can be a challenging but rewarding career: chemical engineers are highly employable, have wide and diverse career options, and can make a big impact on society and the environment. As Royal Patron of the IChemE, she intends to do her part to inspire the younger generation to consider chemical engineering as a viable career choice.
She is also a patron of the Family Health Society of Perak, Convent Girls Alumni and Perak Girl Guides.
Styles and honours
Recipient of the Royal Family Order of Perak (DK) (6 May 2015).
Superior Class of the Perak Family Order of Sultan Azlan Shah (DKSA) (18 May 2007)
Member First Class of the Azlanii Royal Family Order (DKA)
References
|-
1973 births
People from Alor Setar
Living people
Alumni of the University of Nottingham
Chemical engineers
Malaysian female swimmers
Malaysian people of English descent
Malaysian people of Arab descent
Malaysian people of Malay descent
Malaysian people of Thai descent
Malaysian Muslims
People from Ipoh
Royal House of Perak
Perak royal consorts
Royal House of Kedah | Tuanku Zara Salim | [
"Chemistry",
"Engineering"
] | 1,507 | [
"Chemical engineering",
"Chemical engineers"
] |
11,133,153 | https://en.wikipedia.org/wiki/Polarization%20spectroscopy | Polarization spectroscopy comprises a set of spectroscopic techniques based on polarization properties of light (not necessarily visible one; UV, X-ray, infrared, or in any other frequency range of the electromagnetic radiation). By analyzing the polarization properties of light, decisions can be made about the media that emitted the light (or the media the light passes/scatters through). Alternatively, a source of polarized light may be used to probe a media; in this case, the changes in the light polarization (compared to the incidental light) allow inferences about the media's properties.
In general, any kind of anisotropy in the media results in some sort of change in polarization. Such an anisotropy can be either inherent to the media (e.g., in the case of a crystal substance), or imposed externally (e.g., in the presence of magnetic field in plasma or by another laser beam).
See also
Faraday effect
Plasma diagnostics
Stark effect
Zeeman effect
References
Spectroscopy | Polarization spectroscopy | [
"Physics",
"Chemistry",
"Astronomy"
] | 215 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astronomy stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
2,280,325 | https://en.wikipedia.org/wiki/Metabolic%20disorder | A metabolic disorder is a disorder that negatively alters the body's processing and distribution of macronutrients, such as proteins, fats, and carbohydrates. Metabolic disorders can happen when abnormal chemical reactions in the body alter the normal metabolic process. It can also be defined as inherited single gene anomaly, most of which are autosomal recessive.
Signs and symptoms
Some of the symptoms that can occur with metabolic disorders are lethargy, weight loss, jaundice and seizures. The symptoms expressed would vary with the type of metabolic disorder. There are four categories of symptoms: acute symptoms, late-onset acute symptoms, progressive general symptoms and permanent symptoms.
Causes
Inherited metabolic disorders are one cause of metabolic disorders, and occur when a defective gene causes an enzyme deficiency. These diseases, of which there are many subtypes, are known as inborn errors of metabolism. Metabolic diseases can also occur when the liver or pancreas do not function properly.
Types
The principal classes of metabolic disorders are:
Diagnosis
Metabolic disorders can be present at birth, and many can be identified by routine screening. If a metabolic disorder is not identified early, then it may be diagnosed later in life, when symptoms appear. Specific blood and DNA tests can be done to diagnose genetic metabolic disorders.
The gut microbiota, which is a population of microbes that live in the human digestive system, also has an important part in metabolism and generally has a positive function for its host. In terms of pathophysiological/mechanism interactions, an abnormal gut microbiota can play a role in metabolic disorder related obesity.
Screening
Metabolic disorder screening can be done in newborns via blood, skin, or hearing tests.
Management
Metabolic disorders can be treatable by nutrition management, especially if detected early. It is important for dieticians to have knowledge of the genotype to create a treatment that will be more effective for the individual.
See also
Metabolic syndrome
Metabolic Myopathies
Lysosomal storage disease
Deficiency disease
Hypermetabolism
Citrullinemia
References
Further reading
External links
Metabolism
Endocrinology
Medical genetics | Metabolic disorder | [
"Chemistry",
"Biology"
] | 431 | [
"Biochemistry",
"Metabolic disorders",
"Metabolism",
"Cellular processes"
] |
2,280,670 | https://en.wikipedia.org/wiki/Naturalisation%20%28biology%29 | Naturalisation (or naturalization) is the ecological phenomenon through which a species, taxon, or population of exotic (as opposed to native) origin integrates into a given ecosystem, becoming capable of reproducing and growing in it, and proceeds to disseminate spontaneously. In some instances, the presence of a species in a given ecosystem is so ancient that it cannot be presupposed whether it is native or introduced.
Generally, any introduced species may (in the wild) either go extinct or naturalise in its new environment.
Some populations do not sustain themselves reproductively, but exist because of continued influx from elsewhere. Such a non-sustaining population, or the individuals within it, are said to be adventive. Cultivated plants, sometimes called nativars, are a major source of adventive populations.
Botany
In botany, naturalisation is the situation in which an exogenous plant reproduces and disperses on its own in a new environment. For example, northern white cedar is naturalised in the United Kingdom, where it reproduces on its own, while it is not in France, where human intervention via cuttings or seeds is essential for its dissemination.
Two categories of naturalisation are defined from two distinct parameters: one, archaeonaturalised, refers to introduction before a given time (introduced over a hundred years ago), while the second, amphinaturalised or eurynaturalised, implies a notion of spatial extension (taxon assimilated indigenous and present over a vast space, opposed to stenonaturalised).
Degrees of naturalisation
The degrees of naturalisation are defined in relation to the status of nativity or introduction of taxons or species:
Accidental taxon: non-native taxon growing spontaneously, which appears sporadically as a result of accidental introduction due to human activities (as opposed to intentional introductions)
Subspontaneous taxon: taxon naturalised following an introduction of accidental origin (fortuitous introduction linked to human activities) or unknown, and which, after acclimatization, can reproduce like native plants but is still poorly established
Spontaneous taxon: native or non-native taxon growing and reproducing naturally, without intentional human intervention in the territory considered, and is well established (mixes with local flora or fauna)
Zoology
Animal naturalisation is mainly carried out through breeding and by commensalism following human migrations.
The concerned species are thus:
either introduced voluntarily into an ecosystem where they are not native;
either accidentally introduced or become feral;
or by naturally following human migratory flows through commensalism (e.g., the arrival of the house sparrow in Western Europe following the Huns, and previously in Eastern Europe from Asia Minor in antiquity).
It sometimes happens that a naturalised species hybridizes with a native.
Introduction and origin areas
The introduction site or introduction area is the place or, more broadly, the new environment where the candidate species for naturalisation takes root. It is generally contrasted with the origin area, where that same species is native.
There is also a more ambiguous notion that is the "natural distribution area" or "natural distribution range", particularly when it comes to anthropophilic species or some species benefiting from anthropogenic land settlement (canals, bridges, deforestation, etc.) that have connected two previously isolated areas (e.g. the Suez Canal, which causes Lessepsian migration).
Impact on the ecosystem
Naturalisation is sometimes done with human help in order to replace another species having suffered directly or indirectly from anthropogenic activities, or deemed less profitable for human use.
Naturalised species may become invasive species if they become sufficiently abundant to have an adverse effect on native species (e.g. microbes affected by invasive plants) or on biotope.
Examples of naturalised species that have become invasive include the European rabbit, native to Europe and which abounds in Australia; or the Japanese knotweed which is invading Europe and America where it is considered to be amongst the one hundred most invasive species in the 21st century. Apart from direct competition between native and introduced populations, genetic pollution by hybridization can add up cumulatively to environmental effects that compromise the conservation of native populations.
Some naturalised species, such as palms, can act as ecosystem engineers, by changing the habitat and creating new niches that can sometimes have positive effects on an ecosystem. Potential and/or perceived positive impacts of naturalised species are less studied than potential and/or perceived negative impacts.
However, the impact on local species is not easy to assess in a short period. For instance, the African sacred ibis (Threskiornis aethiopicus) escaped in 1990 from an animal park in Morbihan (France), gave rise to an eradication campaign in 2008. In 2013, however, the CNRS stated that this bird species is not a threat in France, and may even promote Eurasian spoonbill and limit the development of the invasive Louisiana crayfish.
See also
Adventitious plant
Adventive species
Colonisation (biology)
Cosmopolitan distribution
Endemism
Hemerochory
Indigenous (ecology)
References
Ecological processes
Ecology terminology
| Naturalisation (biology) | [
"Physics",
"Biology"
] | 1,043 | [
"Ecology terminology",
"Physical phenomena",
"Ecological processes",
"Earth phenomena"
] |
2,280,725 | https://en.wikipedia.org/wiki/Nucleotide%20exchange%20factor | Nucleotide exchange factors (NEFs) are proteins that stimulate the exchange (replacement) of nucleoside diphosphates for nucleoside triphosphates bound to other proteins.
Function
Many cellular proteins cleave (hydrolyze) nucleoside triphosphates – adenosine triphosphate (ATP) or guanosine triphosphate (GTP) – to their diphosphate forms (ADP and GDP) as a source of energy and to drive conformational changes. These changes in turn affect the structural, enzymatic, or signalling properties of the protein.
Nucleotide exchange factors actively assist in the exchange of depleted nucleoside diphosphates for fresh nucleoside triphosphates. NEFs are specific for the nucleotides they exchange (ADP or GDP, but not both) and are often specific to a single protein or class of proteins with which they interact.
See also
Nucleoside-diphosphate kinase
Guanine nucleotide exchange factor
References
External links
Alfred Wittinghofer's Seminar: GTP-Binding Proteins as Molecular Switches
Proteins
Articles containing video clips | Nucleotide exchange factor | [
"Chemistry"
] | 244 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
2,280,942 | https://en.wikipedia.org/wiki/IEC%2060228 | IEC 60228 is the International Electrotechnical Commission (IEC)'s international standard on conductors of insulated cables. the current version is Third Edition 2004-11
Among other things, it defines a set of standard wire cross-sectional areas:
In engineering applications, it is often most convenient to describe a wire in terms of its cross-section area, rather than its diameter, because the cross section is directly proportional to its strength and weight, and inversely proportional to its resistance. The cross-sectional area is also related to the maximum current that a metallic wire can carry safely.
This document is considered fundamental in that it does not contain references to any other standard.
Description
The document describes several aspects of the conductors for electrical cables.
Class
This refers to the flexibility and thermal effects (i.e., temperature) of a conductor.
Class 1: Solid conductor
Class 2: Stranded conductor intended for fixed installation
Class 5: Flexible conductor
Class 6: Very Flexible conductor
Size
The nominal (see below) cross-sectional areas for standard conductors, including the following:
Class 2: Minimum number of strands required to make particular conductor size
Class 5 and 6: Maximum diameter of any component strand of the conductor
Resistance
The maximum permissible resistance per unit length (in ohms per kilometre – Ω/km) of each conductor size, class and type (both plain copper and metal coated)
Purpose of the document
This document and its precursors were created due to a need for a standard definition of cable conductor size. The main problem is that not all copper has the same resistivity, so, for example, a 4 mm2 conductor from two different suppliers may have different resistance values. This document instead describes conductors by their nominal size, determined by resistance rather than physical dimensions. This is a key distinction, as it provides a standardized definition of conductors based solely on their electrical characteristics.
Almost all characteristics of a conductor (resistance, current-carrying capacity, etc.) depend on its physical dimensions. However, this document provides an easy reference whereby the standard conductor sizes and their reference to physical dimensions are maintained, but are given an exact meaning in terms of the electrical characteristics of a conductor.
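As an illustrative sketch of why resistance per unit length pins down a nominal size, the ideal relation R = ρL/A connects a cross-section to a resistance per kilometre. The resistivity figure below is the conventional value for annealed copper at 20 °C, not a value taken from the standard, whose tabulated maxima are slightly higher to allow for manufacturing tolerances:

```python
# Conventional resistivity of annealed copper at 20 °C, in ohm·mm²/m.
RHO_COPPER = 0.017241

def resistance_per_km(area_mm2, rho=RHO_COPPER):
    """Ideal DC resistance in ohms per kilometre for a solid conductor
    of the given cross-sectional area: R = rho * L / A."""
    return rho * 1000.0 / area_mm2

# A nominal 4 mm² copper conductor:
r_4mm2 = resistance_per_km(4.0)  # ≈ 4.31 Ω/km
```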
Footnotes
See also
Circular mil, Unusual unit used as the North American Electrical industry standard for wires larger than 4/0.
American wire gauge (AWG), used primarily in the US and Canada
Standard wire gauge (SWG), the British imperial standard BS3737, superseded by the metric.
Stubs Iron Wire Gauge
Jewelry wire gauge
Body jewelry sizes
Electrical wiring
Number 8 wire, a term used in the New Zealand vernacular
References
External links
60228
Wire gauges | IEC 60228 | [
"Technology"
] | 523 | [
"Computer standards",
"IEC standards"
] |
2,281,338 | https://en.wikipedia.org/wiki/Court-bouillon | Court-bouillon or court bouillon (in Louisiana, pronounced coo-bee-yon) is a quickly-cooked broth used for poaching other foods, most commonly fish or seafood. It is also sometimes used for poaching vegetables, eggs, sweetbreads, cockscombs, and delicate meats. It includes seasonings and salt but lacks animal gelatin.
Description
Court bouillon loosely translates from French as "short broth" because the cooking time is brief in comparison with a rich and complex stock, and generally is not served as part of the finished dish. Because delicate foods do not cook for very long, it is prepared before the foods are added. Typically, cooking times do not exceed 60 minutes.
Although a court bouillon may become the base for a stock or fumet, in traditional terms it is differentiated by the inclusion of acidulating ingredients such as wine, vinegar, or lemon juice. In addition to contributing their own flavor, acids help to draw flavors from the vegetable aromatics during the short preparation time prior to use. Court bouillon also includes salt and lacks animal gelatin.
Types
Traditionally, court bouillon is made from water, salt, white wine, vegetable aromatics (mirepoix of carrot, onion, and celery), and flavored with bouquet garni and black pepper.
Court-bouillon need not be elaborate. Court bouillon used to prepare lobster may be as simple as water, salt, lemon juice, and perhaps thyme and bay leaf; that for poached eggs may be salt, water, and vinegar.
In Louisiana Creole and Cajun cuisines, court-bouillon — often spelled "courtbouillon" — refers to a thick, rich fish stew most often prepared with redfish and thickened with roux.
See also
Nage
References
Further reading
McGee, Harold. On Food and Cooking: The Science and Lore of the Kitchen. (, 2004)
Larousse Gastronomique
Oxford Companion to Food
French cuisine
Cajun cuisine
Food ingredients | Court-bouillon | [
"Technology"
] | 427 | [
"Food ingredients",
"Components"
] |
2,281,448 | https://en.wikipedia.org/wiki/AutoIt | AutoIt is a freeware programming language for Microsoft Windows. In its earliest release, it was primarily intended to create automation scripts (sometimes called macros) for Microsoft Windows programs but has since grown to include enhancements in both programming language design and overall functionality.
The scripting language in AutoIt 1 and 2 was statement-driven and designed primarily for simulating user interaction. From version 3 onward, the AutoIt syntax is similar to that found in the BASIC family of languages. In this form, AutoIt is a general-purpose, third-generation programming language with a classical data model and a variant data type that can store several types of data, including arrays.
An AutoIt automation script can be converted into a compressed, stand-alone executable which can be run on computers even if they do not have the AutoIt interpreter installed. A wide range of function libraries (known as UDFs, or "User Defined Functions") are also included as standard or are available from the website to add specialized functionality. AutoIt is also distributed with an IDE based on the free SciTE editor. The compiler and help text are fully integrated and provide a de facto standard environment for developers using AutoIt.
History
AutoIt1 and AutoIt2 were closed-source projects, and had a very different syntax from AutoIt3, whose syntax is more like VBScript and BASIC.
AutoIt3 was initially free and open-source, licensed under the terms of the GNU General Public License, with its initial public release 3.0.100 in February 2004, and had open-source releases in March 2004 and August 2004. Version 3.0.102, released in August 2004, was initially open-source, but by January 2005 was distributed as closed-source. Subsequent releases, starting from the February 2005 release of version 3.1.0, were all closed-source. Version 3.1.0 was also the first release with support for GUI scripts.
Related projects
The free and open-source AutoHotkey project derived 29 of its functions from the AutoIt 3.1 source code. The AutoHotkey syntax is quite different from AutoIt3 syntax, and rather resembles AutoIt2 syntax.
Features
AutoIt is typically used to produce utility software for Microsoft Windows and to automate routine tasks, such as systems management, monitoring, maintenance, or software installation. It is also used to simulate user interaction, whereby an application is "driven" (via automated form entry, keypresses, mouse clicks, and so on) to do things by an AutoIt script.
AutoIt can also be used in low-cost laboratory automation. Applications include instrument synchronization, alarm monitoring and results gathering. Devices such as CNC routers and 3D-printers can also be controlled.
64-bit code support from version 3.2.10.0
Add-on libraries and modules for specific apps
Automate sending user input and keystrokes to apps, as well as to individual controls within an app
Call functions in DLL files
Compatible with User Account Control
Compiling into standalone executables
Create graphical user interfaces, including message and input boxes
Include data files in the compiled file to be extracted when running
Manipulate windows and processes
Object-oriented design through a library
Play sounds, pause, resume, stop, seek, get the current position of the sound and get the length of the sound
Run console apps and access the standard streams
Scripting language with BASIC-like structure for Windows
Simulate mouse movements
Supports component object model (COM)
Supports regular expressions
Supports TCP and UDP protocols
Unicode support from version 3.2.4.0
Examples
Hello world
; Make available a library of constant values.
#include <MsgBoxConstants.au3>
; Displays "Hello, world!" in a message box.
MsgBox($MB_SYSTEMMODAL, "Title", "Hello, world!")
Automating the Windows Calculator
; Make available a library of constant values.
#include <MsgBoxConstants.au3>
; Display a message box with a timeout of 6 seconds.
MsgBox($MB_OK, "Attention", "Avoid touching the keyboard or mouse during automation.", 6)
; Run the Windows Calculator.
Run("calc.exe")
; Wait for the calculator to become active with a timeout of 10 seconds.
WinWaitActive("[CLASS:CalcFrame]", "", 10)
; If the calculator did not appear after 10 seconds then exit the script.
If WinExists("[CLASS:CalcFrame]") = 0 Then Exit
; Automatically type the current year into the calculator.
Send(@YEAR)
; Let's slow the script down a bit so we can see what's going on.
Sleep(600)
; Automatically type in 'divide by 4', and then sleep 600 ms.
Send("/4")
Sleep(600)
; Hit the return key to display the result, and sleep 600 ms.
Send("{ENTER}")
Sleep(600)
; Copy the result to the clipboard using the Windows shortcut Ctrl+C.
Send("^c")
; Declare, and assign the contents of the clipboard to, a variable.
Local $fResult = ClipGet()
; Check to see if the variable contains a decimal point or not.
If StringInStr($fResult, ".") Then
; Display a message box with a timeout of 5 seconds.
MsgBox($MB_OK, "Leap Year", @YEAR & " is not a leap year.", 5)
Else
; This message will only display if the current year is a leap year.
MsgBox($MB_OK, "Leap Year", @YEAR & " is a leap year.", 5)
EndIf
; Close the Windows calculator - always tidy up afterwards.
WinClose("[CLASS:CalcFrame]")
Find average
; Find Average by JohnOne, modified by czardas
#include <MsgBoxConstants.au3>
_Example() ; Run the example.
Func _Example()
; Display an input box and ask the user to enter some numbers separated by commas.
Local $sInput = InputBox("Find Average", "Enter some numbers separated by commas: 1,2,42,100,3")
; If an error occurred then exit the script.
If @error Then Exit
; Populate an array with the user's input.
Local $aSplit = StringSplit($sInput, ",")
; Pass the array to the function _Find_Average() and then check for errors.
Local $fAverage = _Find_Average($aSplit)
If @error Then Exit
; Display the result in a message box.
MsgBox($MB_OK, "Find Average", "Result: " & $fAverage)
EndFunc ;==>_Example
Func _Find_Average($aArray)
; If the input is not of the correct type (an array), then return an error along with the details.
If Not IsArray($aArray) Then Return SetError(1, 0, VarGetType($aArray))
; More detailed checks are possible, but for brevity just one is performed here.
; Declare a variable to store the sum of the numbers.
Local $iArraySum = 0
; Loop through the array.
For $i = 1 To $aArray[0]
; Increment the sum by the number in each array element.
$iArraySum += Number($aArray[$i])
Next
; Return the average rounded to 2 decimal places.
Return Round($iArraySum / $aArray[0], 2)
EndFunc ;==>_Find_Average
See also
AutoHotkey
Automator (for Macintosh)
Expect
iMacros
Keyboard Maestro (for Macintosh)
KiXtart
Macro Express
Winbatch
References
External links
Scripting languages
Automation software
Proprietary software
BASIC programming language family | AutoIt | [
"Engineering"
] | 1,706 | [
"Automation software",
"Automation"
] |
2,281,449 | https://en.wikipedia.org/wiki/NetLander | In late 2007 (or 2009) CNES (the French Space Agency) and ESA (the European Space Agency) had planned to send a remote sensing orbiter and four small Netlanders to Mars. They planned to land them in four different locations. The CNES and ESA cancelled this mission because it was too expensive; both agencies planned to send other orbiters and landers for missions like ExoMars.
The landers' mission was to take pictures of the landing sites and study Mars' interior, surface and atmosphere. Each of the four landers was to carry instruments for the following measurements:
Seismometer, IPG, France
Panoramic camera, DLR, Germany
Atmospheric sensors, FMI, Finland
Atmospheric electricity sensor, CETP, France
Geodesic and ionospheric measurements, GRGS, France
Soil properties measurements, University of Münster, Germany
Ground Penetrating Radar, CETP, France
Magnetometer, CETP, France
Microphone, University of California, Berkeley, USA
The Orbiter's mission was to take pictures from orbit, explore Mars' atmosphere and relay information and images from the Netlanders.
The MetNet multi-lander mission to Mars is based on the legacy of NetLander. MetNet is scheduled for launches in 2011–2019. The NetLander design has been examined as the basis for a small Solar System body lander, such as a comet.
References
See also
ExoMars (Europe's big Mars mission in the 2010s)
Missions to Mars
Cancelled spacecraft
European Space Agency space probes | NetLander | [
"Astronomy"
] | 310 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
2,281,512 | https://en.wikipedia.org/wiki/Delta%20Eridani | Delta Eridani, which is Latinized from δ Eridani, is the fifth-brightest star in the constellation of Eridanus.
The star is visible to the naked eye and has been observed to vary slightly in brightness between magnitudes 3.51 and 3.56, although subsequent observations did not bear this out. It is relatively near to the Sun, with a distance of about 29.5 light years as determined from parallax. The star is drifting closer with a radial velocity of −6 km/s.
Delta Eridani is sometimes called Rana: Rana means "the frog" in Latin, but the derivation of this name is uncertain. The name was approved by the International Astronomical Union on 4 April 2022.
Characteristics
The stellar classification of this star is K0 IV, matching a subgiant star that has exhausted its core hydrogen. This has caused the star to expand and become cooler than a comparable main sequence star. Stellar modelling indicates it is near the end of the subgiant stage and about to transition into a giant. It is an estimated six billion years old with 33% more mass than the Sun. The star has 2.35 times the size of the Sun and is radiating three times the Sun's luminosity from its photosphere at an effective temperature of 5,022 K.
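These figures are mutually consistent under the Stefan-Boltzmann law, L = 4πR²σT⁴; dividing by the corresponding solar values (assuming the nominal solar effective temperature of 5772 K) recovers roughly three solar luminosities:

```python
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def luminosity_ratio(r_ratio, t_eff):
    """L/L_sun from the Stefan-Boltzmann law L = 4*pi*R^2*sigma*T^4,
    so L/L_sun = (R/R_sun)^2 * (T/T_sun)^4."""
    return r_ratio ** 2 * (t_eff / T_SUN) ** 4

# Delta Eridani: 2.35 solar radii at an effective temperature of 5022 K.
l_delta_eri = luminosity_ratio(2.35, 5022.0)  # ≈ 3.2 L_sun
```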
Delta Eridani was catalogued as a suspected RS Canum Venaticorum variable in 1983, but the activity level of the star is so low that this is considered a mistake. A low projected rotational velocity of under 1 km/s and the lack of radial velocity variation suggest that this star is being viewed nearly pole-on. However, an examination of the star using interferometry did not detect the presence of a companion at the expected distance.
Chinese name
In Chinese, (), meaning Celestial Meadows, refers to an asterism consisting of δ Eridani, γ Eridani, π Eridani, ε Eridani, ζ Eridani, η Eridani, π Ceti, τ1 Eridani, τ2 Eridani, τ3 Eridani, τ4 Eridani, τ5 Eridani, τ6 Eridani, τ7 Eridani, τ8 Eridani and τ9 Eridani. Consequently, the Chinese name for δ Eridani itself is (, .)
See also
List of star systems within 25–30 light-years
References
External links
nStars entry
K-type subgiants
Eridani, Delta
Suspected variables
Eridanus (constellation)
Eridani, Delta
BD-10 0728
Eridani, 23
0150
023249
017378
1136
Rana | Delta Eridani | [
"Astronomy"
] | 558 | [
"Eridanus (constellation)",
"Constellations"
] |
2,281,782 | https://en.wikipedia.org/wiki/History%20of%20thermodynamics | The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Due to the relevance of thermodynamics in much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics.
Antiquity
The ancients viewed heat as that related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies. The ancient Indian philosophy including Vedic philosophy believed that five classical elements (or pancha mahā bhūta) are the basis of all cosmic creations. In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water.
Vacuum-abhorrence
The 5th century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature, uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum, could not occur in nature. This view was supported by the arguments of Aristotle, but was criticized by Leucippus and Hero of Alexandria. From antiquity to the Middle Ages, various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct a vacuum, but all proved unsuccessful.
Atomism
Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, by advancing atomism, laid the foundations for the later atomic theory. Until experimental proof of atoms was later provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition.
17th century
Early thermometers
The European scientists Cornelius Drebbel, Robert Fludd, Galileo Galilei and Santorio Santorio in the 16th and 17th centuries were able to gauge the relative "coldness" or "hotness" of air, using a rudimentary air thermometer (or thermoscope). These may have been influenced by an earlier device that could expand and contract air, constructed by Philo of Byzantium and Hero of Alexandria.
"Heat is motion" (Francis Bacon)
The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by the English philosopher and scientist Francis Bacon in 1620 in his Novum Organum. Bacon surmised: "Heat itself, its essence and quiddity is motion and nothing else", being "not ... of the whole, but of the small particles of the body."
René Descartes
Precursor to work
In 1637, in a letter to the Dutch scientist Christiaan Huygens, the French philosopher René Descartes wrote:
In 1686, the German philosopher Gottfried Leibniz wrote essentially the same thing: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard.
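In modern terms, Leibniz's statement says that the work of lifting depends only on the product of weight and height, W = mgh; a minimal check of this equivalence (the units and the value of g are illustrative):

```python
G = 9.81  # m/s^2, standard gravity (illustrative)

def lifting_work(mass, height, g=G):
    """Work W = m * g * h needed to raise a body of the given mass
    through the given height against gravity."""
    return mass * g * height

# Leibniz's claim: 1 unit of weight through 4 units of height equals
# 4 units of weight through 1 unit of height.
assert lifting_work(1, 4) == lifting_work(4, 1)
```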
Quantity of motion
In Principles of Philosophy (Principia Philosophiae, 1644), Descartes defined "quantity of motion" (Latin: quantitas motus) as the product of size and speed, and claimed that the total quantity of motion in the universe is conserved: merely by letting things run their course, God preserves the same amount of motion as He created.
Boyle's law
Irish physicist and chemist Robert Boyle, in coordination with English scientist Robert Hooke, built an air pump in 1656. Using this pump, Boyle and Hooke noticed the pressure-volume correlation: PV = constant. At that time, air was assumed to be a system of motionless particles, not interpreted as a system of moving molecules; the concept of thermal motion came two centuries later. Boyle's publication of 1660 therefore speaks of a mechanical concept: the spring of the air. Later, after the invention of the thermometer, the property of temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly afterwards to the ideal gas law.
Gas laws in brief
Boyle's law (1662): at constant temperature, the product of pressure and volume is constant (PV = k).
Charles's law was first published by Joseph Louis Gay-Lussac in 1802, but he referenced unpublished work by Jacques Charles from around 1787. The relationship had been anticipated by the work of Guillaume Amontons in 1702.
Gay-Lussac's law (1802)
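These historical laws are special cases of the combined (ideal) gas relation PV/T = constant, obtained by holding one of the three variables fixed; a brief illustrative sketch:

```python
def combined_gas_final_pressure(p1, v1, t1, v2, t2):
    """Solve for p2 from the combined gas law p1*v1/t1 = p2*v2/t2
    (temperatures in kelvin; any consistent pressure/volume units)."""
    return p1 * v1 / t1 * t2 / v2

# Boyle's law (constant T): halving the volume doubles the pressure.
p2 = combined_gas_final_pressure(100.0, 2.0, 300.0, 1.0, 300.0)  # 200.0

# Pressure law (constant V): pressure scales with absolute temperature.
p3 = combined_gas_final_pressure(100.0, 2.0, 300.0, 2.0, 600.0)  # 200.0
```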
Steam digester
Denis Papin, an associate of Boyle's, built a bone digester in 1679: a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine, though he did not follow through with his design. Nevertheless, in 1698 engineer Thomas Savery, drawing on Papin's designs, built an early "fire engine", and in 1712 Thomas Newcomen greatly improved upon it by incorporating a piston. This made it suitable for mechanical work in addition to pumping to heights beyond 30 feet, and the Newcomen engine is thus often considered the first true steam engine.
Heat transfer (Halley and Newton)
The phenomenon of heat conduction is immediately grasped in everyday life. The fact that warm air rises, and the importance of this phenomenon to meteorology, were first realised by Edmond Halley in 1686.
In 1701, Sir Isaac Newton published his law of cooling.
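Newton's law of cooling states that a body loses heat at a rate proportional to its temperature excess over the surroundings, so that excess decays exponentially. A minimal sketch (the cooling constant k is an illustrative assumption):

```python
import math

def temperature(t, t0, t_env, k):
    """Closed-form solution of Newton's law of cooling,
    dT/dt = -k (T - T_env):  T(t) = T_env + (T0 - T_env) exp(-k t)."""
    return t_env + (t0 - t_env) * math.exp(-k * t)

# A body at 90 C cooling in 20 C air; k = 0.1 per minute is illustrative.
t0, t_env, k = 90.0, 20.0, 0.1
t_half = math.log(2) / k                         # time to halve the excess
T_at_half = temperature(t_half, t0, t_env, k)    # 20 + 70/2 = 55 C
```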
18th century
Phlogiston theory
The theory of phlogiston arose in the 17th century, late in the period of alchemy. Its replacement by caloric theory in the 18th century is one of the historical markers of the transition from alchemy to chemistry. Phlogiston was a hypothetical substance that was presumed to be liberated from combustible substances during burning, and from metals during the process of rusting.
Limit to the "degree of cold"
In 1702 Guillaume Amontons introduced the concept of absolute zero based on observations of gases.
Kinetic theory (18th century)
An early scientific reflection on the microscopic and kinetic nature of matter and heat is found in a work by Mikhail Lomonosov, in which he wrote: "Movement should not be denied based on the fact it is not seen. ... leaves of trees move when rustled by a wind, despite it being unobservable from large distances. Just as in this case motion ... remains hidden in warm bodies due to the extremely small sizes of the moving particles."
During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proved that this pressure is two thirds the average kinetic energy of the gas in a unit volume. Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz's vis viva principle, an early formulation of the principle of conservation of energy, and the two theories became intimately entwined throughout their history.
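Bernoulli's result corresponds to the modern kinetic expression P = (1/3) N m ⟨v²⟩ / V, which is exactly two thirds of the kinetic energy per unit volume. A quick numerical check (all values illustrative):

```python
# Kinetic pressure of a gas: P = (1/3) N m <v^2> / V,
# which is two thirds of the kinetic energy density (1/2) N m <v^2> / V.
# All numbers below are illustrative.
N = 1e23        # number of molecules
m = 4.65e-26    # molecular mass, kg (roughly a nitrogen molecule)
v_sq = 2.5e5    # mean-square speed, m^2/s^2
V = 0.01        # container volume, m^3

kinetic_energy_density = 0.5 * N * m * v_sq / V
pressure = N * m * v_sq / (3.0 * V)
ratio = pressure / kinetic_energy_density   # Bernoulli's two thirds
```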
Thermochemistry and steam engines
Heat capacity
Bodies were capable of holding a certain amount of this fluid, leading to the term heat capacity, named and first investigated by Scottish chemist Joseph Black in the 1750s.
In the mid- to late 19th century, heat became understood as a manifestation of a system's internal energy. Today heat is seen as the transfer of disordered thermal energy. Nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English.
Steam engines
Prior to 1698 and the invention of the Savery engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen engine, and later the Watt engine. In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born.
Caloric theory
In the mid- to late 18th century, heat was thought to be a measurement of an invisible fluid, known as the caloric. Like phlogiston, caloric was presumed to be the "substance" of heat that would flow from a hotter body to a cooler body, thus warming it. The utility and explanatory power of kinetic theory, however, soon started to displace the caloric theory. Nevertheless, William Thomson, for example, was still trying to explain James Joule's observations within a caloric framework as late as 1850. The caloric theory was largely obsolete by the end of the 19th century.
Calorimetry
Joseph Black and Antoine Lavoisier made important contributions in the precise measurement of heat changes using the calorimeter, a subject which became known as thermochemistry. The development of the steam engine focused attention on calorimetry and the amount of heat produced from different types of coal. The first quantitative research on the heat changes during chemical reactions was initiated by Lavoisier using an ice calorimeter following research by Joseph Black on the latent heat of water.
Thermal conduction and thermal radiation
Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777.
In the 17th century, it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities. Suggestions that this might not be the case came from the new science of electricity, in which it was readily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz made some of the earliest measurements in 1785-89, as did Benjamin Thompson during the same period.
In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Sir John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation.
Heat and friction (Rumford)
In the 19th century, scientists abandoned the idea of a physical caloric. The first substantial experimental challenges to the caloric theory arose in Benjamin Thompson's (Count Rumford's) work of 1798, in which he showed that boring cast-iron cannons produced great amounts of heat, which he ascribed to friction. His work was among the first to undermine the caloric theory.
As a result of his experiments in 1798, Thompson suggested that heat was a form of motion, though no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle.
Early 19th century
Modern thermodynamics (Carnot)
Although early steam engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot, the "father of thermodynamics", who in 1824 published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. Most cite this book as the starting point for thermodynamics as a modern science. (The name "thermodynamics", however, did not arrive until 1854, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermo-dynamics in his paper On the Dynamical Theory of Heat.)
Carnot defined "motive power" to be the expression of the useful effect that a motor is capable of producing. Herein, Carnot introduced us to the first modern day definition of "work": weight lifted through a height. The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern day thermodynamics.
Even though he was working with the caloric theory, Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process.
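Carnot's reasoning was later sharpened into the efficiency bound η = 1 − T_cold/T_hot (absolute temperatures), which makes clear how far the early engines' sub-2% efficiency fell below the theoretical limit. A sketch with illustrative reservoir temperatures:

```python
def carnot_efficiency(t_hot, t_cold):
    """Upper bound on heat-engine efficiency between two reservoirs
    at absolute temperatures t_hot and t_cold (kelvin)."""
    return 1.0 - t_cold / t_hot

# Illustrative early-steam conditions: boiler near 373 K (boiling water),
# heat rejected near 300 K ambient.
eta_max = carnot_efficiency(373.0, 300.0)   # roughly 0.20
actual = 0.02                               # the "<2%" quoted for early engines
fraction_of_limit = actual / eta_max        # about a tenth of the ideal
```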
Reflection, refraction, and polarisation of radiant heat
Though it had come to be suspected from Scheele's work, in 1831 Macedonio Melloni demonstrated that radiant heat could be reflected, refracted and polarised in the same way as light.
Kinetic theory (early 19th century)
John Herapath independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy. His work ultimately failed peer review, even from someone as well-disposed to the kinetic principle as Humphry Davy, and was neglected.
John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review.
Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann.
Mechanical equivalent of heat
Quantitative studies by Joule from 1843 onwards provided soundly reproducible phenomena, and helped to place the subject of thermodynamics on a solid footing. In 1843, Joule experimentally found the mechanical equivalent of heat. In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work.
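The two figures quoted for Joule's estimate are the same measurement in different units; converting 819 ft·lbf/Btu with modern conversion factors recovers about 4.41 J/cal, within roughly 5% of the modern 4.19 J/cal:

```python
# Joule's paddle-wheel figure, 819 ft*lbf/Btu, converted with modern
# factors to joules per calorie.
FT_LBF_TO_J = 1.3558179   # joules in one foot-pound-force
CAL_PER_BTU = 251.9958    # calories in one British thermal unit

joule_estimate = 819.0 * FT_LBF_TO_J / CAL_PER_BTU   # ~4.41 J/cal
modern_value = 4.1868                                # J per calorie
relative_error = abs(joule_estimate - modern_value) / modern_value
```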
Absolute zero and the Kelvin scale
The idea of absolute zero was generalised in 1848 by Lord Kelvin.
Late 19th century
Entropy and the second law of thermodynamics
Lord Kelvin
In March 1851, while grappling to come to terms with the work of Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe.
William Rankine
In 1854, William John Macquorn Rankine started to make use of what he called thermodynamic function in calculations. This has subsequently been shown to be identical to the concept of entropy formulated by the famed mathematical physicist Rudolf Clausius.
Rudolf Clausius
In 1865, Clausius coined the term "entropy" (das Wärmegewicht, symbolized S) to denote heat lost or turned into waste. ("Wärmegewicht" translates literally as "heat-weight"; the corresponding English term stems from the Greek τρέπω, "I turn".) Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year.
Statistical thermodynamics
Temperature is average kinetic energy of molecules
In his 1857 work On the Nature of the Motion Called Heat, Clausius for the first time clearly stated that heat is the average kinetic energy of molecules.
Maxwell–Boltzmann distribution
Clausius' above statement interested the Scottish mathematician and physicist James Clerk Maxwell, who in 1859 derived the momentum distribution later named after him. The Austrian physicist Ludwig Boltzmann subsequently generalized this distribution to gases in external fields. In association with Clausius, in 1871, Maxwell formulated a new branch of thermodynamics called statistical thermodynamics, which analyzes large numbers of particles at equilibrium, i.e., systems in which no changes are occurring, so that only average properties such as temperature T, pressure P, and volume V remain important.
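The Maxwell-Boltzmann speed distribution can be checked numerically: averaging the kinetic energy ½mv² over the distribution recovers the equipartition value (3/2)kT. A sketch for helium at 300 K using a plain Riemann-sum integration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.6464731e-27    # mass of a helium atom, kg
T = 300.0            # temperature, K

def f(v):
    """Maxwell-Boltzmann speed distribution for an ideal gas."""
    a = m / (2.0 * math.pi * k_B * T)
    return 4.0 * math.pi * a**1.5 * v * v * math.exp(-m * v * v / (2.0 * k_B * T))

# Average (1/2) m v^2 over the distribution with a simple Riemann sum;
# the result should match equipartition, (3/2) k T.
dv = 1.0
mean_ke = sum(0.5 * m * v * v * f(v) * dv for v in range(0, 20000))
expected = 1.5 * k_B * T
```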
Degrees of freedom
Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell–Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom. The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy.
Definition of entropy
In 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy S and molecular motion:

S = k log W

with S defined in terms of the number of possible states W that such motion could occupy, where k is the Boltzmann constant.
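The logarithm in Boltzmann's formula is what makes entropy additive: combining two independent systems multiplies their microstate counts while their entropies add. A toy illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(W):
    """Boltzmann entropy S = k ln W for W equally probable microstates."""
    return k_B * math.log(W)

W1, W2 = 10.0**6, 10.0**9
combined = entropy(W1 * W2)            # microstate counts multiply...
separate = entropy(W1) + entropy(W2)   # ...while entropies add
```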
Gibbs free energy
In 1876, the American mathematical physicist Josiah Willard Gibbs published an obscure 300-page paper titled On the Equilibrium of Heterogeneous Substances, wherein he formulated one grand equality, the Gibbs free energy equation, which gives a measure of the amount of "useful work" attainable in reacting systems.
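In modern notation, Gibbs's measure of useful work at constant temperature and pressure is written ΔG = ΔH − TΔS, with a reaction spontaneous when ΔG < 0. A sketch with illustrative (not measured) values, showing how an endothermic reaction can become spontaneous at high temperature:

```python
def gibbs_free_energy(dH, T, dS):
    """Delta G = Delta H - T * Delta S.
    dH in J/mol, T in K, dS in J/(mol K)."""
    return dH - T * dS

# Illustrative endothermic reaction: dH > 0, but a large entropy gain
# makes it spontaneous (dG < 0) once T is high enough.
dH = 40_000.0   # J/mol
dS = 150.0      # J/(mol K)
dG_cold = gibbs_free_energy(dH, 200.0, dS)   # +10 kJ/mol, not spontaneous
dG_hot = gibbs_free_energy(dH, 400.0, dS)    # -20 kJ/mol, spontaneous
```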
Enthalpy
Gibbs also originated the concept we now know as enthalpy H, calling it "a heat function for constant pressure". The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes, who based it on the Greek word enthalpein meaning to warm.
Stefan–Boltzmann law
James Clerk Maxwell's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Jožef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law. The law was derived theoretically by Ludwig Boltzmann in 1884.
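The Stefan-Boltzmann law, j = σT⁴, implies that doubling the absolute temperature multiplies the radiated flux sixteenfold. A short numerical check:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(T):
    """Total blackbody emissive power per unit area: j = sigma * T^4."""
    return SIGMA * T ** 4

ratio = radiant_flux(600.0) / radiant_flux(300.0)   # 2^4 = 16
solar = radiant_flux(5772.0)   # Sun's effective temperature: ~6.3e7 W/m^2
```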
20th century
Quantum thermodynamics
In 1900 Max Planck found an accurate formula for the spectrum of black-body radiation. Fitting new data required the introduction of a new constant, known as the Planck constant, the fundamental constant of modern physics. Looking at the radiation as coming from a cavity oscillator in thermal equilibrium, the formula suggested that energy in a cavity occurs only in multiples of frequency times the constant. That is, it is quantized. This avoided a divergence to which the theory would lead without the quantization.
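The divergence that quantization avoided can be seen by comparing Planck's formula with the classical Rayleigh-Jeans result: the two agree at low frequency, but at high frequency the classical spectrum grows without bound while Planck's is exponentially suppressed. A numerical sketch:

```python
import math

h = 6.62607015e-34    # Planck constant, J s
k_B = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8      # speed of light, m/s

def planck(nu, T):
    """Planck spectral energy density u(nu, T)."""
    return (8.0 * math.pi * h * nu**3 / c**3) / (math.exp(h * nu / (k_B * T)) - 1.0)

def rayleigh_jeans(nu, T):
    """Classical spectral density; grows as nu^2 without bound."""
    return 8.0 * math.pi * nu**2 * k_B * T / c**3

T = 300.0
low_ratio = planck(1e11, T) / rayleigh_jeans(1e11, T)    # ~1: theories agree
high_ratio = planck(1e15, T) / rayleigh_jeans(1e15, T)   # ~0: no divergence
```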
Third law of thermodynamics
In 1906, Walther Nernst stated the third law of thermodynamics.
Erwin Schrödinger
Building on the foundations above, Lars Onsager, Erwin Schrödinger, Ilya Prigogine and others brought these engine "concepts" into the thoroughfare of almost every modern-day branch of science.
Branches of thermodynamics
The following list is a rough disciplinary outline of the major branches of thermodynamics and their time of inception:
Thermochemistry – 1780s
Classical thermodynamics – 1824
Chemical thermodynamics – 1876
Statistical mechanics – c. 1880s
Equilibrium thermodynamics
Engineering thermodynamics
Chemical engineering thermodynamics – c. 1940s
Non-equilibrium thermodynamics – 1941
Small systems thermodynamics – 1960s
Biological thermodynamics – 1957
Ecosystem thermodynamics – 1959
Relativistic thermodynamics – 1965
Rational thermodynamics – 1960s
Quantum thermodynamics – 1968
Black hole thermodynamics – c. 1970s
Theory of critical phenomena and use of renormalization group theory in statistical physics – 1966-1974
Geological thermodynamics – c. 1970s
Biological evolution thermodynamics – 1978
Geochemical thermodynamics – c. 1980s
Atmospheric thermodynamics – c. 1980s
Natural systems thermodynamics – 1990s
Supramolecular thermodynamics – 1990s
Earthquake thermodynamics – 2000
Drug-receptor thermodynamics – 2001
Pharmaceutical systems thermodynamics – 2002
Concepts of thermodynamics have also been applied in other fields, for example:
Thermoeconomics – c. 1970s
See also
History of chemistry
Timeline of heat engine technology
Timeline of low-temperature technology
Timeline of thermodynamics
References
Further reading
External links
History of Thermodynamics – University of Waterloo
Thermodynamic History Notes – WolframScience.com
Brief History of Thermodynamics – Berkeley [PDF]
Thermodynamics
Chemical engineering
Viscous coupling unit

A viscous coupling is a mechanical device which transfers torque and rotation by the medium of a viscous fluid.
Design
Rotary viscous couplings with interleaved, perforated plates and filled with viscous fluids are used in automotive systems to transmit torque. The device consists of a number of circular plates with tabs or perforations, fitted very close to each other in a sealed drum. Alternate plates are connected to a driving shaft at one end of the assembly and a driven shaft at the other end. The drum is filled with a dilatant fluid, often silicone-based, to about 80% by volume. When the two sets of plates are rotating in unison, the fluid stays cool and remains liquid. When the plates start rotating at different speeds, the shear effect of the tabs or perforations on the fluid will cause it to heat and become nearly solid because the viscosity of dilatant fluids rapidly increases with shear. The fluid in this state will effectively glue the plates together and transmit power from one set of plates to the other. The size of the tabs or perforations, the number of plates, and the fluid used will determine the strength and onset of this mechanical transfer.
This type of device essentially differs from fluid couplings such as torque converters by using the viscosity of the medium to transfer torque, rather than its momentum. This makes it potentially useful even on very small scales and requires less cooling. The torque transmitted is sensitive to the difference in speeds of the input and output but is almost independent of their common rate.
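The behavior described above can be caricatured with a toy torque model: a Newtonian fluid transmits torque roughly in proportion to the speed difference, while a dilatant fluid's rising viscosity makes the transmitted torque grow much faster with slip. The power-law form and all constants below are illustrative assumptions, not data for any real coupling:

```python
def torque_newtonian(slip_rpm, coeff=0.5):
    """Constant-viscosity fluid: torque roughly linear in slip."""
    return coeff * slip_rpm

def torque_dilatant(slip_rpm, coeff=0.5, exponent=2.5):
    """Shear-thickening fluid: effective viscosity rises with shear
    rate, so torque grows faster than linearly (toy power law)."""
    return coeff * slip_rpm ** exponent

# The dilatant/Newtonian torque ratio grows with slip, mimicking the
# progressive "lock-up" of the plates at large speed differences.
ratio_small_slip = torque_dilatant(2.0) / torque_newtonian(2.0)
ratio_large_slip = torque_dilatant(50.0) / torque_newtonian(50.0)
```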
Vehicles
Viscous couplings are used as the center differential in some four-wheel-drive (4WD) vehicles.
The first mass-produced viscous couplings for a permanent 4WD off-road-capable vehicle were in the AMC Eagle, which was produced from 1980 to 1988 model years. The AMC Eagle's single-speed model 119 New Process central differential used a viscous coupling filled with a liquid silicone-based material. It linked the front and rear differentials for quiet and smooth transfer of power to the axle with the greatest traction, on wet or dry pavement.
Viscous couplings are used as the center differential in cars such as the Toyota Celica GT-Four, and also as a limited slip differential (LSD) in rear axles. They offer a cheaper way to implement four-wheel-drive than technologies like the mechanical-transfer Torsen differentials.
Volvo, Subaru, Land Rover, Vauxhall/Opel, and many others have also used viscous couplings in their drivelines at various times. They are now mostly superseded by electronically controlled devices.
Attributions
Tony Rolt is often credited with the original idea and development as applied to automatic couplings in vehicle transmission systems, particularly four-wheel drive, working with Freddie Dixon at the time. Ferguson Research and FF Developments were companies formed in the 1970s to commercialize much early work into 4WD systems and devices.
It is thought that GKN Driveline owns any remaining worldwide patents today and offers several variants and combinations of viscous couplings integrated with other driveline components.
See also
Hele-Shaw clutch
Limited-slip differential
References
External links
Interactive animation
The Viscous Coupling: how to test & replace it
Different types of 4WD
Some torque~temp VC calibration curves
Mechanical devices using viscosity
Vehicle parts
Clutches
Parametrization (climate modeling)

Parameterization in a weather or climate model is a method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process. This can be contrasted with other processes, e.g., large-scale flow of the atmosphere, that are explicitly resolved within the models. Associated with these parameterizations are various parameters used in the simplified processes. Examples include the descent rate of raindrops, convective clouds, simplifications of the atmospheric radiative transfer on the basis of atmospheric radiative transfer codes, and cloud microphysics. Radiative parameterizations are important to both atmospheric and oceanic modeling alike. Atmospheric emissions from different sources within individual grid boxes also need to be parameterized to determine their impact on air quality.
Clouds
Weather and climate model gridboxes have sides of between and . A typical cumulus cloud has a scale of less than , and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by processes of various sophistication. In the earliest models, if a column of air in a model gridbox was unstable (i.e., the bottom warmer than the top) then it would be overturned, and the air in that vertical column mixed. More sophisticated schemes add enhancements, recognizing that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides between and can explicitly represent convective clouds, although they still need to parameterize cloud microphysics.
The formation of large-scale (stratus-type) clouds is more physically based: they form when the relative humidity reaches some prescribed value. Still, sub grid scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical relative humidity of 70% for stratus-type clouds, and at or above 80% for cumuliform clouds, reflecting the sub grid scale variation that would occur in the real world. Portions of the precipitation parameterization include the condensation rate, energy exchanges dealing with the change of state from water vapor into liquid drops, and the microphysical component which controls the rate of change from water vapor to water droplets.
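The critical-relative-humidity idea can be encoded as a diagnostic cloud-fraction scheme that ramps from zero at the threshold to full cover at saturation. The quadratic ramp below is one simple illustrative form (operational schemes vary); the 70% and 80% thresholds are those quoted above:

```python
def cloud_fraction(rh, rh_crit):
    """Diagnostic cloud fraction: 0 at or below rh_crit, 1 at
    saturation, with a quadratic ramp in between (illustrative form)."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return ((rh - rh_crit) / (1.0 - rh_crit)) ** 2

# Thresholds from the text: 0.70 for stratus-type, 0.80 for cumuliform.
stratus = cloud_fraction(0.85, 0.70)      # 0.25: partial cover
cumuliform = cloud_fraction(0.85, 0.80)   # 0.0625: smaller fraction
clear = cloud_fraction(0.50, 0.70)        # 0.0: below threshold
```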
Radiation and atmosphere-surface interaction
The amount of solar radiation reaching ground level in rugged terrain, or due to variable cloudiness, is parameterized as this process occurs on the molecular scale. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Also, the grid size of the models is large when compared to the actual size and roughness of clouds and topography. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere. Thus, they are important to parameterize.
Air quality
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by transport, diffusion, chemical transformation, and ground deposition. Alongside pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.
Eddies
The ocean (and, although more variably, the atmosphere) is stratified through density. At rest, surfaces of constant density (known as isopycnals in the ocean) will be parallel to surfaces of constant pressure (isobars). However, various processes such as geostrophy and upwelling can result in isopycnals becoming tilted relative to isobars. These tilted density surfaces represent a source of potential energy and, if the slope becomes steep enough, a fluid instability known as baroclinic instability can be triggered. Eddies are generated through baroclinic instability, which act to flatten density surfaces through the slantwise exchange of fluid.
The resulting eddies are formed at a characteristic scale called the Rossby deformation radius. This scale depends on the strength of stratification and the Coriolis parameter (which in turn depends on the latitude). As a result, baroclinic eddies form on scales of around 1° (~100 km) at the tropics, but less than 1/12° (~10 km) at the poles and in some shelf seas. Most climate models, such as those run as part of CMIP experiments, are run at a resolution of 1° to 1/4° in the ocean, and can therefore not resolve baroclinic eddies across large parts of the ocean, particularly at the poles. However, high-latitude baroclinic eddies are important for many ocean processes such as the Atlantic Meridional Overturning Circulation (AMOC), which affects global climate. As a result, the effects of eddies are parameterised in climate models, such as through the widely used Gent-McWilliams (GM) parameterisation, which represents the isopycnal-flattening effects of eddies as a diffusivity. This parameterisation is not perfect; for instance, it may overpredict the sensitivity of the Antarctic Circumpolar Current and AMOC to the strength of winds over the Southern Ocean. As a result, alternative parameterisations are being developed to improve the representation of eddies in ocean models.
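The latitude dependence quoted above follows from the deformation radius L_d = NH/|f| with the Coriolis parameter f = 2Ω sin(latitude). With illustrative ocean values for the buoyancy frequency N and vertical scale H, the radius shrinks by roughly a factor of five to ten from the tropics to high latitudes:

```python
import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
N = 2.0e-3          # buoyancy frequency, 1/s (illustrative ocean value)
H = 1000.0          # vertical scale of the flow, m (illustrative)

def deformation_radius_km(lat_deg):
    """First baroclinic Rossby deformation radius, L_d = N H / |f|,
    with Coriolis parameter f = 2 Omega sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return N * H / abs(f) / 1000.0

tropics = deformation_radius_km(10.0)   # order 100 km: resolvable at ~1 deg
polar = deformation_radius_km(70.0)     # order 10 km: unresolved by most models
```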
Problems with increased resolution
As model resolution increases, errors associated with moist convective processes are increased as assumptions which are statistically valid for larger grid boxes become questionable once the grid boxes shrink in scale towards the size of the convection itself. At resolutions greater than T639, which has a grid box dimension of about , the Arakawa-Schubert convective scheme produces minimal convective precipitation, making most precipitation unrealistically stratiform in nature.
Calibration
When a physical process is parameterized, two choices have to be made: what is the structural form (for instance, two variables can be related linearly) and what is the exact value of the parameters (for instance, the constant of proportionality). The process of determining the exact values of the parameters in a parameterization is called calibration, or sometimes, less precisely, tuning. Calibration is a difficult process, and different strategies are used to do it. One popular method is to run a model, or a submodel, and compare it to a small set of selected metrics, such as temperature. The parameters that lead to the model run which resembles reality best are chosen.
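The calibration strategy described, running the model over candidate parameter values and keeping the run that best matches a selected metric, can be sketched as a simple grid search minimizing root-mean-square error. The toy "model" and "observations" below are purely illustrative:

```python
import math

def toy_model(forcing, sensitivity):
    """Stand-in for a (sub)model run: linear temperature response."""
    return [sensitivity * f for f in forcing]

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

forcing = [0.0, 1.0, 2.0, 3.0]
observed = [0.1, 0.9, 2.1, 2.9]   # illustrative "observations"

# Grid search: run the model for each candidate parameter value and
# keep the one whose output best matches the observed metric.
candidates = [0.5 + 0.1 * i for i in range(11)]   # 0.5 .. 1.5
best = min(candidates, key=lambda s: rmse(toy_model(forcing, s), observed))
```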
See also
Global climate model
Climate ensemble
Parametrization
References
Further reading
Numerical climate and weather models
Francisco J. Ayala

Francisco José Ayala Pereda (March 12, 1934 – March 3, 2023) was a Spanish-American evolutionary biologist and philosopher who was a longtime faculty member at the University of California, Irvine, and University of California, Davis.
Ayala was previously president and chairman of the board of the American Association for the Advancement of Science. At University of California, Irvine, his academic appointments included University Professor and Donald Bren Professor of Biological Sciences, Ecology & Evolutionary Biology (School of Biological Sciences), Professor of Philosophy (School of Humanities), and Professor of Logic and the Philosophy of Science (School of Social Sciences).
Ayala's employment at UC Irvine ended in 2018 after the university issued a report relating to allegations of sexual harassment claims against him. Ayala denied having "intentionally caused sexual harassment to anybody." His name was removed from the School of Biological Sciences, the Science Library, as well as various graduate fellowships, scholarship programs, and endowed chairs.
Early life and education
Earlier in life, Ayala was a Dominican priest, ordained in 1960 and leaving the priesthood that same year. After graduating from the University of Salamanca, he moved to the United States in 1961 to study for a PhD at Columbia University. There, he studied for his doctorate under Theodosius Dobzhansky, graduating in 1964. He became a US citizen in 1971.
Career
Ayala was known for his research on population and evolutionary genetics, and has been called the "Renaissance Man of Evolutionary Biology". His "discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide", including demonstrating that the reproduction of Trypanosoma cruzi, the agent of Chagas disease, is mostly the product of cloning, and that only a few clones account for most of this widespread disease.
Ayala served on the advisory board of the now defunct Campaign to Defend the Constitution, an organization that has lobbied in support of the separation of church and state. He has been publicly critical of U.S. restrictions on federal funding of embryonic stem cell research. He was also a critic of intelligent design theories, claiming that they are not only pseudoscience, but also misunderstood from a theological point of view. He suggested that the theory of evolution resolves the problem of evil, thus being a kind of theodicy. Although Ayala generally did not discuss his religious views, he has stated that "science is compatible with religious faith in a personal, omnipotent and benevolent God." He also briefly served, in 1960, as a Dominican priest. Ayala did not say whether he remained a religious believer, not wanting to be "tagged by one side or the other."
Ayala attended the Beyond Belief symposium in November 2006. Ayala debated Christian apologist William Lane Craig in November 2009 on the topic of intelligent design.
On October 18, 2011, the University of California, Irvine (UCI) announced that Ayala would be donating $10 million to the university's School of Biological Sciences. The gift was to be "$1 million a year for the next decade."
Sexual harassment investigation
Four women (professor Kathleen Treseder, another professor, an assistant dean, and one graduate student) alleged that Ayala had sexually harassed them, prompting an investigation led by Erik Pelowitz at UC Irvine's Office of Equal Opportunity and Diversity. The university investigators concluded that Ayala violated the university's policies on sexual harassment and sex discrimination in the cases of three of the four women, and found that the conduct at issue extended to 2004 and had led to previous warnings to Ayala. The investigators' report also concluded that Ayala "engaged in a campaign with the highest University officials to influence the outcome of this investigation." Ayala denied most allegations against him, and wrote to the university's chancellor, Howard Gillman, "I have never intentionally caused sexual harassment to anybody. To the extent that my actions may have caused harm to others ... I apologize from the deepest of my heart and of my mind." He resigned effective July 1, 2018.
Ayala was represented in the investigation by attorney Susan Estrich. The investigation against Ayala extended to more than 60 witnesses, and the outcome divided scholars. Camilo José Cela Conde and Elizabeth Loftus defended him, the latter saying that she was "shocked that this man's life was ruined over this collection of reactions to his behavior" and described the allegations as "thin." In contrast, T. Jane Zelikova, the founder of 500 Women Scientists, supported Ayala's ouster. Ann Olivarius, a lawyer and sexual harassment expert who reviewed the report at the request of Science magazine, said that Ayala did not "have sex with students or pressure them directly for sex" but "clearly made multiple women feel degraded" and continued to do so "after senior university officials warned him to stop acting in these ways."
Ayala returned to his private life, and the university removed his name from the School of Biological Sciences, the Science Library, and endowed chairs that had been named after Ayala. The American Association for the Advancement of Science removed his fellowship status. The U.S. National Academy of Sciences rescinded Ayala's membership for violation of Section 4 of the NAS Code of Conduct, effective June 23, 2021.
Awards and honors
In 2001, Ayala was awarded the National Medal of Science. On April 13, 2007, he was awarded the first of 100 bicentennial medals at Mount Saint Mary's University for lecturing there as the first presenter for the Bicentennial Distinguished Lecture Series. His lecture was entitled "The Biological Foundations of Morality". Other awards he received include the Gold Honorary Gregor Mendel Medal of the Czech Academy of Sciences, the Gold Medal of the Accademia Nazionale dei Lincei, the Gold Medal of the Stazione Zoologica in Naples, the President's Award of the American Institute of Biological Sciences, the AAAS Award for Scientific Freedom and Responsibility and 150th Anniversary Leadership Medal of the AAAS, the Medal of the College of France, the UCI Medal of the University of California, the 1998 Distinguished Scientist Award from the SACNAS, and Sigma Xi's William Procter Prize for Scientific Achievement, 2000. In 2010, he was awarded the Templeton Prize. The science library at UCI was named after him from 2010 until 2018, when his name was removed after a university investigation concluded that his conduct with respect to three women violated university policies. Ayala delivered a lecture at the Trotter Prize ceremony in 2011 entitled "Darwin's Gift to Science and Religion." In 2014, UCI named its School of Biological Sciences the Francisco J. Ayala School of Biological Sciences after Ayala. UCI removed his name from the library and school in 2018, after finding that he sexually harassed at least four women.
Ayala was elected a Fellow of the American Academy of Arts and Sciences in 1977, and the National Academy of Sciences in 1980, though his fellowship status in these institutions was later revoked. He was a member of the American Philosophical Society. He was also a foreign member of the Russian Academy of Sciences, the Accademia Nazionale dei Lincei in Rome, the Spanish Royal Academy of Sciences, the Mexican Academy of Sciences, and the Serbian Academy of Sciences and Arts. He had honorary degrees from the University of Athens, the University of Bologna, the University of Barcelona, the University of the Balearic Islands, the University of León, the University of Madrid, the University of Salamanca, the University of Valencia, the University of Vigo, Far Eastern National University, Masaryk University and University of Warsaw.
Personal life
Francisco Ayala was born to Francisco Ayala and Soledad Pereda. In the late 1960s he met Mary Henderson, and they married on May 27, 1968. They had two sons: Francisco José (born 1969) and Carlos Alberto (born 1972). Their marriage ended in divorce, and in 1985 he married the ecologist Hana Ayala (née Lostáková, born 1956). They lived in Irvine, California. He died from a heart attack on March 3, 2023, in Newport Beach, at age 88.
Books
Ayala published some 950 papers and 30 books. His more recent books include:
Ayala, F.J. Evolution, Explanation, Ethics and Aesthetics: Towards a Philosophy of Biology. Academic Press: 2016.
Ayala, F.J. Am I a Monkey: Six Big Questions About Evolution. Johns Hopkins University Press: Baltimore, MD, US 2010.
Ayala, F.J. and Robert Arp, eds. Contemporary Debates in Philosophy of Biology. Wiley-Blackwell: London, 2009.
Avise, J.C. and F.J. Ayala, eds. In the Light of Evolution: Adaptation and Complex Design. National Academy Press: Washington, DC. 2007.
Cela Conde, C.J. and F.J. Ayala. Human Evolution. Trails from the Past. Oxford University Press: Oxford, 2007.
Ayala, F.J. Darwin y el Diseño Inteligente. Creacionismo, Cristianismo y Evolución. Alianza Editorial: Madrid, Spain, 231 pp. 2007.
Ayala, F.J. Darwin's Gift to Science and Religion. Joseph Henry Press: Washington, DC, xi + 237 pp. 2007
Ayala, F.J. La Evolución de un Evolucionista. Escritos Seleccionados. University of Valencia: Valencia, Spain, 441 pp. 2006.
Ayala, F.J. Darwin and Intelligent Design. Fortress Press: Minneapolis, MN, xi + 116 pp. 2006.
Ayala, F.J. and C.J. Cela Conde. La piedra que se volvió palabra. Las claves evolutivas de la humanidad. Alianza Editorial: Madrid, Spain. 184 pp. 2006
Hey, J., W.M. Fitch and F.J. Ayala, eds. Systematics and the Origin of Species. On Ernst Mayr's 100th Anniversary. National Academies Press: Washington, DC. xiii + 367 pp. 2005
Wuketits, F.M. and F.J. Ayala, eds. Handbook of Evolution: The Evolution of Living Systems (Including Hominids), Volume 2. Wiley-VCH: Weinheim, Germany. 292 pp. 2005.
Ayala, F.J. Le Ragioni dell'Evoluzione. Di Renzo Editore: Rome. 109 pp. 2005.
Ayala, F.J. Human Evolution: Biology, Culture, Ethics. In: J.B. Miller, ed., The Epic of Evolution. Science and Religion in Dialogue (Pearson Education, Inc.: Upper Saddle River, New Jersey), pp. 166–180. 2004.
See also
List of celebrities who own wineries and vineyards
References
External links
Francisco J. Ayala at the University of California, Irvine
Francisco J. Ayala Biography
1934 births
2023 deaths
People from Madrid
Human evolution theorists
Members of the United States National Academy of Sciences
Members of the Serbian Academy of Sciences and Arts
Foreign members of the Russian Academy of Sciences
Members of the Lincean Academy
Presidents of the American Association for the Advancement of Science
Templeton Prize laureates
National Medal of Science laureates
Spanish biologists
University of California, Irvine faculty
University of California, Davis faculty
Philosophers from California
21st-century Spanish philosophers
20th-century Spanish philosophers
University of Salamanca alumni
People from Irvine, California
Charles Darwin biographers
Columbia University alumni
Spanish emigrants to the United States
Fellows of the American Academy of Arts and Sciences
Former Dominicans
21st-century American biologists
Hispanic and Latino American scientists
Writers about religion and science
Critics of New Atheism
American critics of atheism
Theistic evolutionists
Science activists
Genealogical DNA test

A genealogical DNA test is a DNA-based genetic test used in genetic genealogy that looks at specific locations of a person's genome in order to find or verify ancestral genealogical relationships, or (with lower reliability) to estimate the ethnic mixture of an individual. Since different testing companies use different ethnic reference groups and different matching algorithms, ethnicity estimates for an individual vary between tests, sometimes dramatically.
Three principal types of genealogical DNA tests are available, with each looking at a different part of the genome and being useful for different types of genealogical research: autosomal (atDNA), mitochondrial (mtDNA), and Y-chromosome (Y-DNA).
Autosomal tests may result in a large number of DNA matches to both males and females who have also tested with the same company. Each match will typically show an estimated degree of relatedness, i.e., a close family match, 1st-2nd cousins, 3rd-4th cousins, etc. The furthest degree of relationship is usually the "6th-cousin or further" level. However, due to the random nature of which, and how much, DNA is inherited by each tested person from their common ancestors, precise relationship conclusions can only be made for close relations. Traditional genealogical research and the sharing of family trees are typically required for interpretation of the results. Autosomal tests are also used in estimating ethnic mix.
MtDNA and Y-DNA tests are much more objective. However, they give considerably fewer DNA matches, if any (depending on the company doing the testing), since they are limited to relationships along a strict female line and a strict male line respectively. MtDNA and Y-DNA tests are utilized to identify archeological cultures and migration paths of a person's ancestors along a strict mother's line or a strict father's line. Based on MtDNA and Y-DNA, a person's haplogroup(s) can be identified. The mtDNA test can be taken by both males and females, because everyone inherits their mtDNA from their mother, as the mitochondrial DNA is located in the egg cell. However, a Y-DNA test can only be taken by a male, as only males have a Y-chromosome.
Procedure
A genealogical DNA test is performed on a DNA sample obtained by cheek-scraping (also known as a buccal swab), spit-cups, mouthwash, or chewing gum. Typically, the sample collection uses a home test kit supplied by a service provider such as 23andMe, AncestryDNA, Family Tree DNA, or MyHeritage. After following the kit instructions on how to collect the sample, it is returned to the supplier for analysis. The sample is then processed using a technology known as DNA microarray to obtain the genetic information.
Types of tests
There are three major types of genealogical DNA tests: Autosomal (which includes X-DNA), Y-DNA, and mtDNA.
Autosomal DNA tests look at chromosome pairs 1–22 and the X part of the 23rd chromosome. The autosomes (chromosome pairs 1–22) are inherited from both parents and all recent ancestors. The X-chromosome follows a special inheritance pattern, because females (XX) inherit an X-chromosome from each of their parents, while males (XY) inherit an X-chromosome from their mother and a Y-chromosome from their father. Ethnicity estimates are often included with this sort of testing.
Y-DNA looks at the Y-chromosome, which is passed down from father to son. Thus, the Y-DNA test can only be taken by males to explore their direct paternal line.
mtDNA looks at the mitochondria, which is passed down from mother to child. Thus, the mtDNA test can be taken by both males and females, and it explores one's direct maternal line.
Y-DNA and mtDNA cannot be used for ethnicity estimates, but can be used to find one's haplogroup, which is unevenly distributed geographically. Direct-to-consumer DNA test companies have often labeled haplogroups by continent or ethnicity (e.g., an "African haplogroup" or a "Viking haplogroup"), but these labels may be speculative or misleading.
Autosomal DNA (atDNA) testing
Testing
Autosomal DNA is contained in the 22 pairs of chromosomes not involved in determining a person's sex. Autosomal DNA recombines in each generation, and new offspring receive one set of chromosomes from each parent. These are inherited exactly equally from both parents and roughly equally from grandparents to about 3x great-grandparents. Therefore, the number of markers (one of two or more known variants in the genome at a particular location – known as Single-nucleotide polymorphisms or SNPs) inherited from a specific ancestor decreases by about half with each successive generation; that is, an individual receives half of their markers from each parent, about a quarter of those markers from each grandparent; about an eighth of those markers from each great-grandparent, etc. Inheritance is more random and unequal from more distant ancestors. Generally, a genealogical DNA test might test about 700,000 SNPs (specific points in the genome).
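The halving rule described above can be stated as a one-line calculation. This is only the expectation: beyond the parental generation, actual inheritance is random, so real shares scatter around these values.

```python
def expected_shared_fraction(generations_back: int) -> float:
    """Expected fraction of autosomal DNA inherited from one ancestor
    `generations_back` generations ago (1 = parent, 2 = grandparent, ...).
    Actual inheritance beyond parents varies randomly around this value."""
    return 0.5 ** generations_back

# A parent contributes 50%, a grandparent about 25%, and so on.
for g, label in [(1, "parent"), (2, "grandparent"), (3, "great-grandparent")]:
    print(f"{label}: {expected_shared_fraction(g):.1%}")
```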
Reporting process
The preparation of a report on the DNA in the sample proceeds in multiple stages:
identification of the DNA base pair at specific SNP locations
comparison with previously stored results
interpretation of matches
Base pair identification
All major service providers use equipment with chips supplied by Illumina. The chip determines which SNP locations are tested. Different versions of the chip are used by different service providers. In addition, updated versions of the Illumina chip may test different sets of SNP locations. The list of SNP locations and base pairs at that location is usually available to the customer as "raw data". The raw data can be uploaded to some other genealogical service providers to produce an additional interpretation and matches. For additional genealogical analysis the data can also be uploaded to GEDmatch (a third-party web based set of tools that analyzes raw data from the main service providers). Raw data can also be uploaded to services that provide health risk and trait reports using SNP genotypes. These reports may be free or inexpensive, in contrast to reports provided by DTC testing companies, who charge about double the cost of their genealogy-only services. The implications of individual SNP results can be ascertained from raw data results by referring to SNPedia.com.
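For illustration, the sketch below parses a raw-data export in a common tab-separated layout (a header of "#" comment lines, then one SNP per line: rsid, chromosome, position, genotype, as in 23andMe-style files). Other providers' layouts differ, so the exact column order here should be treated as an assumption, and the rsids shown are only sample values.

```python
import io

def parse_raw_data(lines):
    """Parse a tab-separated raw-data export: '#' comment lines followed
    by one SNP per line (rsid, chromosome, position, genotype)."""
    snps = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and header comments
        rsid, chrom, pos, genotype = line.split("\t")
        snps[rsid] = (chrom, int(pos), genotype)
    return snps

# A two-SNP example file, inlined as a string for demonstration.
sample = io.StringIO(
    "# example raw-data export\n"
    "rs4477212\t1\t82154\tAA\n"
    "rs3094315\t1\t752566\tAG\n"
)
snps = parse_raw_data(sample)
print(snps["rs3094315"])  # ('1', 752566, 'AG')
```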
Identification of Matches
The major component of an autosomal DNA test is matching other individuals. Where the individual being tested has a number of consecutive SNPs in common with a previously tested individual in the company's database, it can be inferred that they share a segment of DNA at that part of their genomes. If the segment is longer than a threshold amount set by the testing company, then these two individuals are considered to be a match. Unlike the identification of base pairs, the databases against which the new sample is tested, and the algorithms used to determine a match, are proprietary and specific to each company.
The unit for segments of DNA is the centimorgan (cM). For comparison, a full human genome is about 6500 cM. The shorter a match, the greater the chance that it is spurious. An important statistic for subsequent interpretation is the length of the shared DNA (or the percentage of the genome that is shared).
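The matching idea can be sketched even though the real algorithms are proprietary. The toy function below calls two unphased genotypes "half-identical" when they share at least one allele, collects runs of consecutive half-identical SNPs, and keeps runs above a length threshold. Converting physical distance to centimorgans at a flat 1 Mb ≈ 1 cM is a gross simplification; real pipelines use genetic maps and often phase the data first.

```python
def half_identical(g1, g2):
    # Unphased genotypes are "half-identical" if they share at least one allele.
    return bool(set(g1) & set(g2))

def shared_segments(positions, genos_a, genos_b, min_cm=7.0):
    """Toy half-identical-region finder; physical distance is crudely
    converted at 1 Mb ~= 1 cM, unlike real genetic-map-based pipelines."""
    segments, start = [], None
    for i, (a, b) in enumerate(zip(genos_a, genos_b)):
        if half_identical(a, b):
            if start is None:
                start = i
            continue
        if start is not None:
            length_cm = (positions[i - 1] - positions[start]) / 1e6
            if length_cm >= min_cm:
                segments.append((positions[start], positions[i - 1], length_cm))
            start = None
    if start is not None:  # run extends to the last SNP
        length_cm = (positions[-1] - positions[start]) / 1e6
        if length_cm >= min_cm:
            segments.append((positions[start], positions[-1], length_cm))
    return segments

# 11 SNPs spanning 10 Mb, all half-identical ("AA" shares an allele with "AG"):
positions = [i * 1_000_000 for i in range(1, 12)]
segments = shared_segments(positions, ["AA"] * 11, ["AG"] * 11)
print(segments)  # one segment of about 10 cM under the toy 1 Mb = 1 cM map
```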
Interpretation of Autosomal matches
Most companies will show the customers how many cMs they share and across how many segments. From the number of cMs and segments, the relationship between the two individuals can be estimated; however, due to the random nature of DNA inheritance, relationship estimates, especially for distant relatives, are only approximate. Some more distant cousins will not match at all. Although information about specific SNPs can be used for some purposes (e.g., suggesting likely eye color), the key information is the percentage of DNA shared by two individuals. This can indicate the closeness of the relationship. However, it does not show the roles of the two individuals, e.g., 50% shared suggests a parent/child relationship, but it does not identify which individual is the parent.
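As a rough illustration (not any company's actual algorithm), an observed shared-cM total can be compared against the expected shared fractions for common relationships. The 6500 cM genome size follows the figure above; the expected fractions are standard genealogical approximations, and real observed values scatter widely around them.

```python
TOTAL_CM = 6500.0  # approximate size of the full genome in centimorgans

# Rough expected shared fractions for common relationships.
EXPECTED = [
    (0.50, "parent/child or full sibling"),
    (0.25, "grandparent, aunt/uncle, or half-sibling"),
    (0.125, "first cousin"),
    (0.03125, "second cousin"),
]

def likely_relationship(shared_cm):
    """Return the relationship whose expected share is closest to the
    observed shared cM total. A rough guide only, not a determination:
    it cannot distinguish which role each person plays."""
    frac = shared_cm / TOTAL_CM
    return min(EXPECTED, key=lambda e: abs(e[0] - frac))[1]

print(likely_relationship(3250))  # parent/child or full sibling
print(likely_relationship(850))   # first cousin
```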
Various advanced techniques and analyses can be done on this data. This includes features such as In-common/Shared Matches, Chromosome Browsers, and Triangulation. This analysis is often required if DNA evidence is being used to prove or disprove a specific relationship.
X-chromosome DNA testing
The X-chromosome SNP results are often included in autosomal DNA tests. Both males and females receive an X-chromosome from their mother, but only females receive a second X-chromosome from their father. The X-chromosome follows a special inheritance pattern that can be useful in significantly narrowing down possible ancestor lines compared to autosomal DNA. For example, an X-chromosome match with a male can only have come from his maternal side. Like autosomal DNA, X-chromosome DNA undergoes random recombination at each generation (except for father-to-daughter X-chromosomes, which are passed down unchanged). There are specialized inheritance charts which describe the possible patterns of X-chromosome DNA inheritance for males and females.
STRs
Some genealogical companies offer autosomal STRs (short tandem repeats). These are similar to Y-DNA STRs. The number of STRs offered is limited, and results have been used for personal identification, paternity cases, and inter-population studies.
Law enforcement agencies in the US and Europe use autosomal STR data to identify criminals.
Mitochondrial DNA (mtDNA) testing
The mitochondrion is a component of a human cell, and contains its own DNA. Mitochondrial DNA usually has 16,569 base pairs (the number can vary slightly depending on addition or deletion mutations) and is much smaller than the nuclear genome, which has 3.2 billion base pairs. Mitochondrial DNA is transmitted from mother to child, as it is contained in the egg cell. Thus, a direct maternal ancestor can be traced using mtDNA. Transmission occurs with relatively rare mutations compared to autosomal DNA. A perfect match to another person's mtDNA test results indicates a shared ancestor somewhere between roughly 1 and 50 generations ago. More distant matching to a specific haplogroup or subclade may be linked to a common geographic origin.
Test
The mtDNA, by current conventions, is divided into three regions. They are the coding region (00577-16023) and two Hyper Variable Regions (HVR1 [16024-16569], and HVR2 [00001-00576]).
The two most common mtDNA tests are a sequence of HVR1 and HVR2 and a full sequence of the mitochondria. Generally, testing only the HVRs has limited genealogical use, so full sequencing has become increasingly popular and accessible. Among the major testing companies, the full mtDNA sequence is only offered by Family Tree DNA, and it is somewhat controversial because the coding region DNA may reveal medical information about the test-taker.
Haplogroups
All humans descend in the direct female line from Mitochondrial Eve, a female who lived probably around 150,000 years ago in Africa. Different branches of her descendants are different haplogroups. Most mtDNA results include a prediction or exact assertion of one's mtDNA haplogroup. Mitochondrial haplogroups were greatly popularized by the book The Seven Daughters of Eve, which explores mitochondrial DNA.
Understanding mtDNA test results
Test results do not normally give a base-by-base listing. Instead, results are reported as differences from the Cambridge Reference Sequence (CRS), the first human mtDNA sequence to be published (in 1981, from a European individual; revised in 1999). Differences between a tester and the CRS are usually very few, so listing them is more convenient than listing the raw result for each base pair.
Examples
Note that in HVR1 the leading "16" of a position is often dropped; for example, position 16,111 is reported as 111. The letters refer to one of the four bases (A, T, G, C) that make up DNA.
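The difference-based reporting convention can be illustrated with a short made-up stretch of sequence. The strings below are not real CRS data; only the position arithmetic and the notation are the point.

```python
def mtdna_differences(reference, observed, start_position):
    """List differences from a reference sequence as <position><new base>,
    the conventional way mtDNA results are reported (e.g. '16126T').
    `reference` and `observed` are illustrative strings, not real CRS data."""
    diffs = []
    for offset, (ref, obs) in enumerate(zip(reference, observed)):
        if ref != obs:
            diffs.append(f"{start_position + offset}{obs}")
    return diffs

def hvr1_shorthand(diff):
    # In HVR1 reports the leading '16' of the position is often dropped:
    # 16126T becomes 126T.
    return diff[2:] if diff.startswith("16") else diff

diffs = mtdna_differences("TACCA", "TACTA", start_position=16123)
print(diffs)                                # ['16126T']
print([hvr1_shorthand(d) for d in diffs])   # ['126T']
```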
Y-chromosome (Y-DNA) testing
The Y-chromosome is one of the 23rd pair of human chromosomes. Only males have a Y-chromosome, because women have two X chromosomes in their 23rd pair. A man's patrilineal ancestry, or male-line ancestry, can be traced using the DNA on his Y-chromosome (Y-DNA), because the Y-chromosome is transmitted from a father to son nearly unchanged. A man's test results are compared to another man's results to determine the time frame in which the two individuals shared a most recent common ancestor, or MRCA, in their direct patrilineal lines. If their test results are very close, they are related within a genealogically useful time frame. A surname project is where many individuals whose Y-chromosomes match collaborate to find their common ancestry.
Women who wish to determine their direct paternal DNA ancestry can ask their father, brother, paternal uncle, paternal grandfather, or a paternal uncle's son (their cousin) to take a test for them.
There are two types of Y-DNA testing: STRs and SNPs.
STR markers
The most common test examines STRs (short tandem repeats). A certain section of DNA is examined for a pattern that repeats (e.g. ATCG). The number of times it repeats is the value of the marker. Typical tests examine between 12 and 111 STR markers. STRs mutate fairly frequently. The results of two individuals are then compared to see if there is a match. DNA companies will usually provide an estimate of how closely related two people are, in terms of generations or years, based on the difference between their results. Because STR values mutate frequently and are not permanent, false matches between two men can occur when they have the same STR values by chance. A true genetic relationship can only be determined using Y-DNA SNP mutations, but sequencing these was historically more time-consuming and expensive.
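A simple way to compare two STR haplotypes, shown here as an illustration rather than any company's exact method, is the stepwise genetic distance: the sum of absolute repeat-count differences across shared markers. The marker names below are real Y-STR loci, but the repeat values are invented.

```python
def genetic_distance(haplotype_a, haplotype_b):
    """Stepwise genetic distance between two Y-STR haplotypes: the sum of
    absolute repeat-count differences at each marker tested in both.
    Testing companies refine this with marker-specific mutation models."""
    shared = set(haplotype_a) & set(haplotype_b)
    return sum(abs(haplotype_a[m] - haplotype_b[m]) for m in shared)

# Invented repeat counts at four real Y-STR loci:
a = {"DYS393": 13, "DYS390": 24, "DYS19": 14, "DYS391": 11}
b = {"DYS393": 13, "DYS390": 25, "DYS19": 14, "DYS391": 10}
print(genetic_distance(a, b))  # 2
```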
SNP markers and Haplogroups
A person's haplogroup can often be inferred from their STR results, but can be proven only with a Y-chromosome SNP test (Y-SNP test).
A single-nucleotide polymorphism (SNP) is a change to a single nucleotide in a DNA sequence. Typical Y-DNA SNP tests test about 20,000 to 35,000 SNPs. Getting a SNP test allows a much higher resolution than STRs. It can be used to provide additional information about the relationship between two individuals and to confirm haplogroups. Unique permanent SNP mutations occur in every male line every 83 years on average, providing excellent time resolution.
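Using the average figure above of one new SNP per line roughly every 83 years, a crude point estimate of the time to the most recent common ancestor splits the observed SNP differences between the two branches, since both male lines accumulate mutations independently after the split. This is an illustrative simplification of the confidence-interval estimates actually used.

```python
def estimate_tmrca_years(snp_differences, years_per_snp=83):
    """Crude TMRCA point estimate for two men from the number of Y-SNPs
    that differ between them. Both lines mutate independently since the
    common ancestor, so the differences are split between two branches."""
    return years_per_snp * snp_differences / 2

print(estimate_tmrca_years(10))  # 415.0 years
```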
All human men descend in the paternal line from a single man dubbed Y-chromosomal Adam, who lived probably between 200,000 and 300,000 years ago. A 'family tree' can be drawn showing how men today descend from him. Different branches of this tree are different haplogroups. Most haplogroups can be further subdivided multiple times into sub-clades. Some known sub-clades were founded in the last 1000 years, meaning their timeframe approaches the genealogical era (c.1500 onwards).
New sub-clades of haplogroups may be discovered when an individual tests, especially if they are non-European. Most significant of these new discoveries was in 2013 when the haplogroup A00 was discovered, which required theories about Y-chromosomal Adam to be significantly revised. The haplogroup was discovered when an African-American man tested STRs at FamilyTreeDNA and his results were found to be unusual. SNP testing confirmed that he does not descend patrilineally from the "old" Y-chromosomal Adam and so a much older man became Y-Chromosomal Adam.
Using DNA test results
Ethnicity estimates
Many companies offer a percentage breakdown by ethnicity or region. Generally the world is divided into about 20–25 regions, and the approximate percentage of DNA inherited from each is stated. This is usually done by comparing the frequency of each autosomal DNA marker tested to many population groups. The reliability of this type of test depends on comparative population size, the number of markers tested, the ancestry-informative value of the SNPs tested, and the degree of admixture in the person tested. Earlier ethnicity estimates were often wildly inaccurate, but they have become more accurate as companies accumulate more samples over time. Testing companies such as Ancestry.com regularly update their ethnicity estimates, which has caused some controversy among customers as their results change. Usually the results at the continental level are accurate, but more specific assertions of the test may turn out to be incorrect.
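The frequency comparison can be sketched as a naive maximum-likelihood assignment: score the tester's genotypes against each reference population's allele frequencies and pick the best fit. The population names and frequencies below are entirely made up; real panels use hundreds of thousands of SNPs and admixture models that report mixtures rather than a single best population.

```python
import math

# Illustrative (made-up) frequencies of allele "A" at three SNPs
# in two hypothetical reference populations.
FREQ = {
    "pop1": {"rs1": 0.9, "rs2": 0.2, "rs3": 0.7},
    "pop2": {"rs1": 0.3, "rs2": 0.8, "rs3": 0.4},
}

def log_likelihood(genotypes, freqs):
    """Log-likelihood of unphased genotypes ('AA', 'AB', 'BB') given
    per-SNP frequencies of allele A, assuming Hardy-Weinberg proportions."""
    ll = 0.0
    for snp, geno in genotypes.items():
        p = freqs[snp]
        ll += math.log({"AA": p * p, "AB": 2 * p * (1 - p), "BB": (1 - p) ** 2}[geno])
    return ll

def best_population(genotypes):
    # Pick the reference population under which the genotypes are most likely.
    return max(FREQ, key=lambda pop: log_likelihood(genotypes, FREQ[pop]))

print(best_population({"rs1": "AA", "rs2": "BB", "rs3": "AA"}))  # pop1
```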
Audience
The interest in genealogical DNA tests has been linked to both an increase in curiosity about traditional genealogy and to more general personal origins. Those who test for traditional genealogy often utilize a combination of autosomal, mitochondrial, and Y-Chromosome tests. Those with an interest in personal ethnic origins are more likely to use an autosomal test. However, answering specific questions about the ethnic origins of a particular lineage may be best suited to an mtDNA test or a Y-DNA test.
Maternal origin tests
For recent genealogy, exact matching on the mtDNA full sequence is used to confirm a common ancestor on the direct maternal line between two suspected relatives. Because mtDNA mutations are very rare, however, even a near-perfect match can reflect a common ancestor further back than the most recent 1 to 16 generations. In cultures lacking matrilineal surnames to pass down, neither relative is likely to have as many generations of ancestors in their matrilineal information table as in the patrilineal or Y-DNA case; for further information on this difficulty in traditional genealogy, due to lack of matrilineal surnames (or matrinames), see Matriname. However, the foundation of testing is still two suspected descendants of one person. This hypothesize-and-test pattern is the same one used for autosomal DNA and Y-DNA.
Tests for ethnicity and membership of other groups
As discussed above, autosomal tests usually report the ethnic proportions of the individual. These attempt to measure an individual's mixed geographic heritage by identifying particular markers, called ancestry informative markers or AIM, that are associated with populations of specific geographical areas. Geneticist Adam Rutherford has written that these tests "don’t necessarily show your geographical origins in the past. They show with whom you have common ancestry today."
The haplogroups determined by Y-DNA and mtDNA tests are often unevenly geographically distributed. Many direct-to-consumer DNA tests described this association to infer the test-taker's ancestral homeland. Most tests describe haplogroups according to their most frequently associated continent (e.g., a "European haplogroup"). When Leslie Emery and collaborators performed a trial of mtDNA haplogroups as a predictor of continental origin on individuals in the Human Genetic Diversity Panel (HGDP) and 1000 Genomes (1KGP) datasets, they found that only 14 of 23 haplogroups had a success rate above 50% among the HGDP samples, as did "about half" of the haplogroups in the 1KGP. The authors concluded that, for most people, "mtDNA-haplogroup membership provides limited information about either continental ancestry or continental region of origin."
African ancestry
Y-DNA and mtDNA testing may be able to determine with which peoples in present-day Africa a person shares a direct line of part of his or her ancestry, but patterns of historic migration and historical events cloud the tracing of ancestral groups. Due to their joint long histories in the US, approximately 30% of African American males have a European Y-chromosome haplogroup. Approximately 58% of African Americans have at least the equivalent of one great-grandparent (13%) of European ancestry. Only about 5% have the equivalent of one great-grandparent of Native American ancestry. By the early 19th century, substantial families of Free Persons of Color had been established in the Chesapeake Bay area who were descended from people free during the colonial period; most of those have been documented as descended from white men and African women (servant, slave or free). Over time various groups married more within mixed-race, black or white communities.
According to authorities like Salas, nearly three-quarters of the ancestors of African Americans taken in slavery came from regions of West Africa. The African-American movement to discover and identify with ancestral tribes has burgeoned since DNA testing became available. African Americans usually cannot easily trace their ancestry during the years of slavery through surname research, census and property records, and other traditional means. Genealogical DNA testing may provide a tie to regional African heritage.
United States – Melungeon testing
Melungeons are one of numerous multiracial groups in the United States with origins wrapped in myth. The historical research of Paul Heinegg has documented that many of the Melungeon groups in the Upper South were descended from mixed-race people who were free in colonial Virginia and the result of unions between the Europeans and Africans. They moved to the frontiers of Virginia, North Carolina, Kentucky and Tennessee to gain some freedom from the racial barriers of the plantation areas. Several efforts, including a number of ongoing studies, have examined the genetic makeup of families historically identified as Melungeon. Most results point primarily to a mixture of European and African, which is supported by historical documentation. Some may have Native American heritage as well. Though some companies provide additional Melungeon research materials with Y-DNA and mtDNA tests, any test will allow comparisons with the results of current and past Melungeon DNA studies.
Native American ancestry
The pre-Columbian indigenous peoples of the United States are called "Native Americans" in American English. Autosomal, Y-DNA, and mtDNA testing can be conducted to investigate the ancestry of Native Americans. A mitochondrial haplogroup determination test based on mutations in Hypervariable Regions 1 and 2 may establish whether a person's direct female line belongs to one of the canonical Native American haplogroups: A, B, C, D or X. The vast majority of Native American individuals belong to one of these five mtDNA haplogroups, so membership in one of them provides evidence of potential Native American descent. However, DNA ethnicity results cannot be used as a substitute for legal documentation. Native American tribes have their own requirements for membership, often based on at least one of a person's ancestors having been included on tribal-specific Native American censuses (or final rolls) prepared during treaty-making, relocation to reservations or apportionment of land in the late 19th century and early 20th century. One example is the Dawes Rolls.
Cohanim ancestry
The Cohanim (or Kohanim) is a patrilineal priestly line of descent in Judaism. According to the Bible, the ancestor of the Cohanim is Aaron, brother of Moses. Many believe that descent from Aaron is verifiable with a Y-DNA test: the first published study in genealogical Y-Chromosome DNA testing found that a significant percentage of Cohens had distinctively similar DNA, rather more so than general Jewish or Middle Eastern populations. These Cohens tended to belong to Haplogroup J, with Y-STR values clustered unusually closely around a haplotype known as the Cohen Modal Haplotype (CMH). This could be consistent with a shared common ancestor, or with the hereditary priesthood having originally been founded from members of a single closely related clan.
Nevertheless, the original studies tested only six Y-STR markers, which is considered a low-resolution test. In response to the low resolution of the original 6-marker CMH, the testing company FTDNA released a 12-marker CMH signature that was more specific to the large closely related group of Cohens in Haplogroup J1.
A further academic study published in 2009 examined more STR markers and identified a more sharply defined SNP haplogroup, J1e* (now J1c3, also called J-P58*) for the J1 lineage. The research found "that 46.1% of Kohanim carry Y chromosomes belonging to a single paternal lineage (J-P58*) that likely originated in the Near East well before the dispersal of Jewish groups in the Diaspora. Support for a Near Eastern origin of this lineage comes from its high frequency in our sample of Bedouins, Yemenis (67%), and Jordanians (55%) and its precipitous drop in frequency as one moves away from Saudi Arabia and the Near East (Fig. 4). Moreover, there is a striking contrast between the relatively high frequency of J-P58* in Jewish populations (≈20%) and Kohanim (≈46%) and its vanishingly low frequency in our sample of non-Jewish populations that hosted Jewish diaspora communities outside of the Near East."
Recent phylogenetic research for haplogroup J-M267 placed the "Y-chromosomal Aaron" in a subhaplogroup of J-L862, L147.1 (age estimate 5631–6778 yBP): YSC235>PF4847/CTS11741>YSC234>ZS241>ZS227>Z18271 (age estimate 2731 yBP).
European testing
Benefits
Genealogical DNA tests have become popular due to the ease of testing at home and their usefulness in supplementing genealogical research. Genealogical DNA tests allow an individual to determine with high accuracy whether he or she is related to another person within a certain time frame, or with certainty that he or she is not related. DNA tests are perceived as more scientific, conclusive and expeditious than searching the civil records, though they are limited by restrictions on the lines that may be studied. Civil records, for their part, are only as accurate as the information that the individuals who created them provided or recorded.
Y-DNA testing results are normally stated as probabilities: For example, with the same surname a perfect 37/37 marker test match gives a 95% likelihood of the most recent common ancestor (MRCA) being within 8 generations, while a 111 of 111 marker match gives the same 95% likelihood of the MRCA being within only 5 generations back.
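Probability statements like these come from mutation-rate models. A simplified sketch (not the full TMRCA model used by testing companies) computes the chance that a given number of STR markers all still match after g generations separate two men on each side, assuming an average per-marker, per-transmission mutation rate of about 0.002 (an assumed figure; real rates vary by marker).

```python
def prob_perfect_match(markers, generations, mutation_rate=0.002):
    """Probability that all `markers` STRs still match when two men are
    separated by `generations` generations on each side of the MRCA
    (2*generations transmissions in total), assuming one average
    per-marker, per-transmission mutation rate for every marker."""
    transmissions = 2 * generations
    return (1 - mutation_rate) ** (markers * transmissions)

# The likelihood of a perfect 37-marker match falls off with distance:
for g in (4, 8, 16):
    print(f"{g} generations: {prob_perfect_match(37, g):.2f}")
```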
As presented above in mtDNA testing, if a perfect match is found, the mtDNA test results can be helpful. In some cases, research according to traditional genealogy methods encounters difficulties due to the lack of regularly recorded matrilineal surname information in many cultures (see Matrilineal surname).
Autosomal DNA combined with genealogical research has been used by adoptees to find their biological parents, to find the name and family of unidentified bodies, and by law enforcement agencies to apprehend criminals (for example, the Contra Costa County District Attorney's office used the "open-source" genetic genealogy site GEDmatch to find relatives of the suspect in the Golden State Killer case). The Atlantic magazine commented in 2018 that "Now, the floodgates are open. ... a small, volunteer-run website, GEDmatch.com, has become ... the de facto DNA and genealogy database for all of law enforcement." Family Tree DNA announced in February 2019 it was allowing the FBI to access its DNA data for cases of murder and rape. However, in May 2019 GEDmatch initiated stricter rules for accessing their autosomal DNA database and Family Tree DNA shut down their Y-DNA database ysearch.org, making it more difficult for law enforcement agencies to solve cases.
Drawbacks
Common concerns about genealogical DNA testing are cost and privacy issues. Some testing companies, such as 23andMe and Ancestry, retain samples and results for their own use without a privacy agreement with subjects.
Autosomal DNA tests can identify relationships, but the results can be misinterpreted. For example, stem-cell or bone-marrow transplants will produce matches with the donor, and identical twins (who have identical DNA) can give unexpected results.
Testing of the Y-DNA lineage from father to son may reveal complications, due to unusual mutations, secret adoptions, and non-paternity events (i.e., that the perceived father in a generation is not the father indicated by written birth records). According to the Ancestry and Ancestry Testing Task Force of the American Society of Human Genetics, autosomal tests cannot detect "large portions" of DNA from distant ancestors because it has not been inherited.
With the increasing popularity of DNA tests for ethnicity estimation, uncertainties and errors in those estimates are a drawback for genetic genealogy. While ethnicity estimates at the continental level should be accurate (with the possible exception of East Asia and the Americas), sub-continental estimates, especially in Europe, are often inaccurate. Customers may be misinformed about the uncertainties and errors of the estimates.
Some have recommended government or other regulation of ancestry testing to ensure its performance to an agreed standard.
A number of law enforcement agencies took legal action to compel genetic genealogy companies to release genetic information that could match cold case crime victims or perpetrators. Several companies fought the requests.
Common misunderstandings of genetics
The popular consciousness of DNA testing and of DNA generally is subject to a number of misconceptions involving the reliability of testing, the nature of the connections with one's ancestors, the connection between DNA and personal traits, etc.
Medical information
Though genealogical DNA tests are not designed mainly for medical purposes, autosomal DNA tests can be used to analyze the probability of hundreds of heritable medical conditions, although the results are complex to understand and may confuse a non-expert. 23andMe provides medical and trait information from their genealogical DNA test, and for a fee the Promethease web site analyses genealogical DNA test data from Family Tree DNA, 23andMe, or AncestryDNA for medical information. Promethease, and its research-paper-crawling database SNPedia, has received criticism for technical complexity and a poorly defined "magnitude" scale that causes misconceptions, confusion and panic among its users.
The testing of full mtDNA and Y-DNA sequences is still somewhat controversial as it may reveal even more medical information. For example, a correlation exists between a lack of the Y-DNA marker DYS464 and infertility, and between mtDNA haplogroup H and protection from sepsis. Certain haplogroups have been linked to longevity in some population groups. The field of linkage disequilibrium, the unequal association of genetic disorders with certain mitochondrial lineages, is in its infancy, but those mitochondrial mutations that have been linked are searchable in the genome database Mitomap. Family Tree DNA's MtFull Sequence test analyses the full mtDNA genome, and the National Human Genome Research Institute operates the Genetic And Rare Disease Information Center, which can assist consumers in identifying an appropriate screening test and help locate a nearby medical center that offers such a test.
DNA testing for consumers
The first company to provide direct-to-consumer genealogical DNA tests was the now-defunct GeneTree. However, it did not offer multi-generational genealogy tests. In fall 2001, GeneTree sold its assets to the Salt Lake City-based Sorenson Molecular Genealogy Foundation (SMGF), which originated in 1999.
While in operation, SMGF provided free Y-chromosome and mitochondrial DNA tests to thousands.
Later, GeneTree returned to genetic testing for genealogy in conjunction with the Sorenson parent company and eventually was part of the assets acquired in the Ancestry.com buyout of SMGF in 2012.
In 2000, Family Tree DNA, founded by Bennett Greenspan and Max Blankfeld, was the first company dedicated to direct-to-consumer testing for genealogy research. They initially offered eleven-marker Y-Chromosome STR tests and HVR1 mitochondrial DNA tests. They originally tested in partnership with the University of Arizona.
In 2007, 23andMe was the first company to offer saliva-based direct-to-consumer genetic testing. It was also the first to implement the use of autosomal DNA for ancestry testing, which other major companies (e.g., Ancestry, Family Tree DNA, and MyHeritage) now use.
MyHeritage launched its genetic testing service in 2016, allowing users to collect samples with cheek swabs. In 2019, new analysis tools were introduced: autoclusters (grouping all matches visually into clusters) and family tree theories (suggesting conceivable relations between DNA matches by combining several MyHeritage trees as well as the Geni global family tree).
Living DNA, founded in 2015, also provides a genetic testing service, using SNP chips to report on autosomal, Y-chromosome, and mtDNA ancestry. It offers detailed reports on ancestry from the UK as well as detailed Y-chromosome and mtDNA reports.
In 2019 it was estimated that the large genealogical testing companies held about 26 million DNA profiles. Many customers transferred their test results for free to multiple testing sites, and also to genealogical services such as Geni.com and GEDmatch. GEDmatch said in 2018 that about half of their one million profiles were from the USA.
DNA in genealogy software
Some genealogy software programs – such as Family Tree Maker, Legacy Family Tree (Deluxe Edition) and the Swedish program Genney – allow recording DNA marker test results. This allows for tracking of both Y-chromosome and mtDNA tests, and recording results for relatives.
See also
Archaeogenetics
Common misunderstandings of genetics
DNA paternity testing
Electropherogram
Family name (patrilineal surname)
Genetic fingerprinting
Genetic Information Nondiscrimination Act
Genographic Project (Geno 2.0 Next Generation)
International HapMap Project
International Society of Genetic Genealogy
List of DNA tested mummies
Genetic genealogy companies, including:
23andMe
Ancestry.com
Family Tree DNA
Living DNA
MyHeritage
References
Sources
Further reading
Y-Haplogroups brief descriptions and regional origins
AncestryDNA Markets Genetic "Greatness" During the Olympics, by Leah Lowthorp | 03.16.2018, Biopolitical Times.
DNA
Genetic genealogy
Genetics techniques
Human population genetics | Genealogical DNA test | [
"Engineering",
"Biology"
] | 7,156 | [
"Genetics techniques",
"Genetic engineering"
] |
2,282,444 | https://en.wikipedia.org/wiki/Rare-earth%20magnet | A rare-earth magnet is a strong permanent magnet made from alloys of rare-earth elements. Developed in the 1970s and 1980s, rare-earth magnets are the strongest type of permanent magnets made, producing significantly stronger magnetic fields than other types such as ferrite or alnico magnets. The magnetic field typically produced by rare-earth magnets can exceed 1.2 teslas, whereas ferrite or ceramic magnets typically exhibit fields of 0.5 to 1 tesla.
There are two types: neodymium magnets and samarium–cobalt magnets. Rare-earth magnets are extremely brittle and also vulnerable to corrosion, so they are usually plated or coated to protect them from breaking, chipping, or crumbling into powder.
The development of rare-earth magnets began around 1966, when K. J. Strnat and G. Hoffer of the US Air Force Materials Laboratory discovered that an alloy of yttrium and cobalt, YCo5, had by far the largest magnetic anisotropy constant of any material then known.
The term "rare earth" can be misleading, as some of these metals are as abundant in the Earth's crust as tin or lead; however, rare earth ores do not occur in seams (like coal or copper), so in any given cubic kilometre of crust they are "rare". China has the highest production, but it imports significant amounts of REE ore from Myanmar. Some countries classify rare earth metals as strategically important, and Chinese export restrictions on these materials have led other countries, including the United States, to initiate research programs to develop strong magnets that do not require rare earth metals.
Properties
The rare-earth (lanthanide) elements are metals that are ferromagnetic, meaning that like iron they can be magnetized to become permanent magnets, but their Curie temperatures (the temperature above which their ferromagnetism disappears) are below room temperature, so in pure form their magnetism only appears at low temperatures. However, they form compounds with the transition metals such as iron, nickel, and cobalt, and some of these compounds have Curie temperatures well above room temperature. Rare-earth magnets are made from these compounds.
The greater strength of rare-earth magnets is mostly due to two factors:
Firstly, their crystalline structures have very high magnetic anisotropy. This means that a crystal of the material preferentially magnetizes along a specific crystal axis but is very difficult to magnetize in other directions. Like other magnets, rare-earth magnets are composed of microcrystalline grains, which are aligned in a powerful magnetic field during manufacture, so their magnetic axes all point in the same direction. The resistance of the crystal lattice to turning its direction of magnetization gives these compounds a very high magnetic coercivity (resistance to being demagnetized), so that the strong demagnetizing field within the finished magnet does not reduce the material's magnetization.
Secondly, atoms of rare-earth elements can have high magnetic moments. Their orbital electron structures contain many unpaired electrons; in other elements, almost all of the electrons exist in pairs with opposite spins, so their magnetic fields cancel out, but in rare earths there is much less magnetic cancellation. This is a consequence of incomplete filling of the f-shell, which can contain up to 7 unpaired electrons. In a magnet, it is the unpaired electrons, aligned so they spin in the same direction, which generate the magnetic field. This gives the materials high remanence (saturation magnetization J). The maximal energy density B·Hmax is proportional to J², so these materials have the potential for storing large amounts of magnetic energy. The magnetic energy product B·Hmax of neodymium magnets is about 18 times greater than that of "ordinary" magnets by volume. This allows rare-earth magnets to be smaller than other magnets with the same field strength.
Some important properties used to compare permanent magnets are: remanence (Br), which measures the strength of the magnetic field; coercivity (Hci), the material's resistance to becoming demagnetized; energy product (B·Hmax), the density of magnetic energy; and Curie temperature (TC), the temperature at which the material loses its magnetism. Rare-earth magnets have higher remanence, much higher coercivity and energy product, but (for neodymium) lower Curie temperature than other types. The table below compares the magnetic performance of the two types of rare-earth magnets, neodymium (Nd2Fe14B) and samarium–cobalt (SmCo5), with other types of permanent magnets.
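The scale of the energy-product gap can be illustrated with the square-loop ideal BHmax = Br²/(4μ0). The remanence figures below are typical published values assumed for illustration, and real magnets fall somewhat below the ideal:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def ideal_bh_max(remanence_tesla):
    """Ideal maximum energy product (J/m^3) for a given remanence Br.

    Real magnets with finite coercivity fall below this square-loop ideal.
    """
    return remanence_tesla ** 2 / (4 * MU0)

# Illustrative remanence values (T) -- assumed typical figures.
for name, br in [("Nd2Fe14B", 1.3), ("SmCo5", 1.0), ("ferrite", 0.4)]:
    kj_per_m3 = ideal_bh_max(br) / 1e3
    mgoe = ideal_bh_max(br) / 7958  # 1 MGOe ≈ 7.958 kJ/m^3
    print(f"{name}: {kj_per_m3:.0f} kJ/m^3 ({mgoe:.0f} MGOe)")
```

With Br = 1.3 T the ideal comes out near 336 kJ/m³ (~42 MGOe), an order of magnitude above the ferrite figure, consistent with the qualitative comparison above.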
Types
Samarium–cobalt
Samarium–cobalt magnets (chemical formula: SmCo5), the first family of rare-earth magnets invented, are less used than neodymium magnets because of their higher cost and lower magnetic field strength. However, samarium–cobalt has a higher Curie temperature, creating a niche for these magnets in applications where high field strength is needed at high operating temperatures. They are highly resistant to oxidation, but sintered samarium–cobalt magnets are brittle and prone to chipping and cracking and may fracture when subjected to thermal shock.
Neodymium
Neodymium magnets, invented in the 1980s, are the strongest and most affordable type of rare-earth magnet. They are made of an alloy of neodymium, iron, and boron (Nd2Fe14B), sometimes abbreviated as NIB. Neodymium magnets are used in numerous applications requiring strong, compact permanent magnets, such as electric motors for cordless tools, hard disk drives, magnetic hold-downs, and jewellery clasps. They have the highest magnetic field strength and have a higher coercivity (which makes them magnetically stable), but they have a lower Curie temperature and are more vulnerable to oxidation than samarium–cobalt magnets.
Corrosion can cause unprotected magnets to spall off a surface layer or to crumble into a powder. Use of protective surface treatments such as gold, nickel, zinc, and tin plating and epoxy-resin coating can provide corrosion protection; the majority of neodymium magnets use nickel plating to provide a robust protection.
Originally, the high cost of these magnets limited their use to applications requiring compactness together with high field strength. Both the raw materials and the patent licenses were expensive. However, since the 1990s, NIB magnets have become steadily less expensive, and their lower cost has inspired new uses such as magnetic construction toys.
Applications
Since their prices became competitive in the 1990s, neodymium magnets have been replacing alnico and ferrite magnets in the many applications in modern technology requiring powerful magnets. Their greater strength allows smaller and lighter magnets to be used for a given application.
Common applications of rare-earth magnets include:
Computer hard disk drives
Wind turbine generators
Speakers / headphones
Bicycle dynamos
MRI scanners
Fishing reel brakes
Permanent magnet motors in cordless tools
High-performance AC servo motors
Traction motors and integrated starter-generators in hybrid and electric vehicles
Mechanically powered flashlights, employing rare earth magnets for generating electricity in a shaking motion or rotating (hand-crank-powered) motion
Industrial uses such as maintaining product purity, equipment protection, and quality control
Capture of fine metallic particles in lubricating oils (crankcases of internal combustion engines, also gearboxes and differentials), so as to keep said particles out of circulation, thereby rendering them unable to cause abrasive wear of moving machine parts
Other applications of rare-earth magnets include:
Linear motors (used in maglev trains, etc.)
Stop motion animation: as tie-downs when the use of traditional screw and nut tie-downs is impractical.
Diamagnetic levitation experimentation, the study of magnetic field dynamics and superconductor levitation.
Electrodynamic bearings
Launched roller coaster technology found on roller coaster and other thrill rides.
LED Throwies, small LEDs attached to a button cell battery and a small rare earth magnet, used as a form of non-destructive graffiti and temporary public art.
Desk toys
Electric guitar pickups
Miniature figures, for which rare-earth magnets have gained popularity in the miniatures gaming community for their small size and relative strength assisting in basing and swapping weapons between models.
Research on cancer treatment is exploring the use of magnetic nanoparticles (MNPs) made from rare earth metals. In magnetic hyperthermia, MNPs generate localized heat within tumor cells, leading to their selective destruction. In targeted delivery systems, MNPs are attached to therapeutics and guided by an external magnetic field to concentrate and retain them at the desired site.
Hazards and legislation
The greater force exerted by rare-earth magnets creates hazards that are not seen with other types of magnet. Magnets larger than a few centimeters are strong enough to cause injuries to body parts pinched between two magnets or a magnet and a metal surface, even causing broken bones. Magnets allowed to get too near each other can strike each other with enough force to chip and shatter the brittle material, and the flying chips can cause injuries. Starting in 2005, powerful magnets breaking off toys or from magnetic construction sets started causing injuries and deaths. Young children who have swallowed several magnets have had a fold of the digestive tract pinched between the magnets, causing injury and in one case intestinal perforations, sepsis, and death.
The swallowing of small magnets such as neodymium magnetic spheres can result in intestinal injury requiring surgery. The magnets attract each other through the walls of the stomach and intestine, perforating the bowel. The U.S. Centers for Disease Control reported 33 cases as of 2010 requiring surgery and one death. The magnets have been swallowed by both toddlers and teens (who were using the magnets to pretend to have tongue piercings).
North America
A voluntary standard for toys, permanently fusing strong magnets to prevent swallowing, and capping unconnected magnet strength, was adopted in 2007. In 2009, a sudden growth in sales of magnetic desk toys for adults caused a surge in injuries, with emergency room visits estimated at 3,617 in 2012. In response, the U.S. Consumer Product Safety Commission passed a rule in 2012 restricting rare-earth magnet size in consumer products, but it was vacated by a US federal court decision in November 2016, in a case brought by the one remaining manufacturer. After the rule was nullified, the number of ingestion incidents in the country rose sharply, and is estimated to exceed 1,500 in 2019, leading the CPSC to advise children under the age of 14 to not use the magnets.
In 2009 US company Maxfield & Oberton, maker of Buckyballs, decided to repackage sphere magnets and sell them as toys. Buckyballs launched at New York International Gift Fair in 2009 and sold in the hundreds of thousands before the U.S. Consumer Product Safety Commission issued a recall on packaging labeled 13+. According to the CPSC, 175,000 units had been sold to the public. Fewer than 50 were returned. Buckyballs labeled "Keep Away From All Children" were not recalled. Subsequently, Maxfield & Oberton changed all mentions of "toy" to "desk toy", positioning the product as a stress-reliever for adults and restricted sales from stores that sold primarily children's products.
In the United States, as a result of an estimated 2,900 emergency room visits between 2009 and 2013 due to either "ball-shaped" or "high-powered" magnets, or both, the U.S. Consumer Product Safety Commission (CPSC) has undergone rulemaking to attempt to restrict their sale.
Further investigation by the CPSC published in 2012 found an increasing trend of magnet ingestion incidents in young children and teens since 2009. Incidents involving older children and teens were unintentional and the result of using the magnets to mimic body piercings such as tongue studs. The commission cited hidden complications if more than one magnet becomes attached across tissue inside the body. Another recall was issued for Buckyballs in 2012 along with similar products marketed as toys in the US. Recalls and administrative complaints were filed against other similar US companies. Maxfield & Oberton refused the recall and continued selling their desktop toys. The company launched a political campaign against the CPSC, and Craig Zucker, the company's co-founder, debated the safety commission on FOX News.
In June 2012, due to a letter by U.S. Senator Kirsten Gillibrand to U.S. Consumer Product Safety Commission Chairwoman Inez Tenenbaum, the United States Consumer Product Safety Commission filed administrative complaints, attempting to ban the sale of Buckyballs and Zen Magnets. Zen Magnets LLC is the first company to ever receive this sort of complaint without record of injury. In November 2012, Buckyballs announced that they had stopped production due to a CPSC lawsuit.
In March 2016, Zen Magnets (a manufacturer of neodymium magnet spheres) prevailed in a major court case, first heard in 2014, concerning the danger posed by allegedly "defective" warning labels on its spherical magnets. A DC court decided (CPSC Docket No. 12-2) that "Proper use of Zen Magnets and Neoballs creates no exposure to danger whatsoever." As of January 2017, many brands of magnet spheres, including Zen Magnets, had resumed the sale of small neodymium magnet spheres following a successful appeal by Zen Magnets in the Tenth Circuit US Court of Appeals, which vacated the 2012 CPSC regulation banning these products and thereby made the sale of small neodymium magnets once again legal in the United States. It was the CPSC's first such loss in more than 30 years.
A study published in the Journal of Pediatric Gastroenterology and Nutrition found a significant increase in magnet ingestions by children after 2017, including "a 5-fold increase in the escalation of care for multiple magnet ingestions". On June 3, 2020, the CPSC submitted a "Petition Response Staff Briefing Package" to the commission, even after the petition was rescinded. It outlines a desire to conduct research in 2021 with a suggested rule proposal in 2022 for a vote.
On October 26, 2017, the CPSC filed an administrative complaint against Zen Magnets, alleging that the magnet sets contained product defects that created a substantial risk of injury to children, declaring that "It is illegal under federal law for any person to sell, offer for sale, manufacture, distribute in commerce, or import into the United States any Zen Magnets and Neoballs." As of 2019, manufacturers were working on a similar voluntary standard at the ASTM.
Sales of "certain products with small, powerful magnets" are prohibited in Canada since 2015.
Oceania
In November 2012, following an interim ban in New South Wales, a permanent ban on the sale of neodymium magnets went into effect throughout Australia.
In January 2013, Consumer Affairs Minister Simon Bridges announced a ban on the import and sale of neodymium magnet sets in New Zealand, effective from January 24, 2013.
Environmental impact
The European Union's ETN-Demeter project (European Training Network for the Design and Recycling of Rare-Earth Permanent Magnet Motors and Generators in Hybrid and Full Electric Vehicles) is examining sustainable design of electric motors used in vehicles. They are, for example, designing electric motors in which the magnets can be easily removed for recycling the rare earth metals.
The European Union's European Research Council also awarded to Principal Investigator, Prof. Thomas Zemb, and co-Principal Investigator, Dr. Jean-Christophe P. Gabriel, an Advanced Research Grant for the project "Rare Earth Element reCYCling with Low harmful Emissions : REE-CYCLE", which aimed at finding new processes for the recycling of rare earth.
Alternatives
The United States Department of Energy has identified a need to find substitutes for rare-earth metals in permanent-magnet technology and has begun funding such research. The Advanced Research Projects Agency-Energy (ARPA-E) has sponsored a Rare Earth Alternatives in Critical Technologies (REACT) program, to develop alternative materials. In 2011, ARPA-E awarded $31.6 million to fund Rare-Earth Substitute projects.
See also
References
Further reading
Furlani, Edward P. (2001). Permanent Magnet and Electromechanical Devices: Materials, Analysis and Applications. Academic Press Series in Electromagnetism.
Campbell, Peter (1996). Permanent Magnet Materials and their Application. Cambridge Studies in Magnetism.
External links
Standard Specifications for Permanent Magnet Materials (Magnetic Materials Producers Association)
Ferromagnetic materials
Loudspeaker technology
Magnetic levitation
Types of magnets | Rare-earth magnet | [
"Physics"
] | 3,469 | [
"Materials",
"Ferromagnetic materials",
"Matter"
] |
2,282,614 | https://en.wikipedia.org/wiki/Antonius%20van%20den%20Broek | Antonius Johannes van den Broek (4 May 1870 – 25 October 1926) was a Dutch mathematical economist and amateur physicist, notable for being the first who realized that the position of an element in the periodic table (now called atomic number) corresponds to the charge of its atomic nucleus. This hypothesis was published in 1911 and inspired the experimental work of Henry Moseley, who found good experimental evidence for it by 1913.
Life
Van den Broek was the son of a civil law notary and trained to be a lawyer himself. He studied at Leiden University and at the Sorbonne in Paris, obtaining a degree in Leiden in 1895. From 1895 to 1900 he practised law in The Hague, after which he studied mathematical economy in Vienna and Berlin. From 1903 on, however, his main interest was physics. Much of the time between 1903 and 1911 he lived in France and Germany. He wrote most of his papers between 1913 and 1916 while living in Gorssel. He married Elisabeth Margaretha Mauve in 1906, with whom he had five children.
Major contribution to science
The idea of the direct correlation of the charge of the atom nucleus and the periodic table was contained in his paper published in Nature on 20 July 1911, just one month after Ernest Rutherford published the results of his experiments that showed the existence of a small charged nucleus in an atom (see Rutherford model). However, Rutherford's original paper noted only that the charge on the nucleus was large, on the order of about half of the atomic weight of the atom, in whole number units of hydrogen mass. Rutherford on this basis made the tentative suggestion that atomic nuclei are composed of numbers of helium nuclei, each with a charge corresponding to half of its atomic weight. This consideration would make the nuclear charge nearly equal to atomic number in smaller atoms, with some deviation from this rule for the largest atoms, such as gold. For example, Rutherford found the charge on gold to be about 100 units and thought perhaps that it might be exactly 98 (which would be close to half its atomic weight). But gold's place in the periodic table (and thus its atomic number) was known to be 79.
Thus Rutherford did not make the proposal that the number of charges in the nucleus of an atom might be exactly equal to its place on the periodic table (atomic number). This hypothesis was put forward by Van den Broek. At that time, the number of an element's place in the periodic table (its atomic number) was not thought by most physicists to be a physical property. It was not until the work of Henry Moseley, who used the Bohr model of the atom with the explicit aim of testing Van den Broek's hypothesis, that atomic number was recognized as a purely physical property (the charge of the nucleus) which could be measured, and Van den Broek's original guess was shown to be correct, or very nearly so. Moseley's work in fact found (see Moseley's law) that the nuclear charge was best described by the Bohr equation with a charge of Z − 1, where Z is the atomic number.
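The Z − 1 screening that Moseley found can be checked with a short calculation. Moseley's law for Kα lines, f = (3/4)·R∞c·(Z − 1)², follows from the Bohr model; the sketch below applies it to copper (Z = 29) as an illustration, not a precision calculation:

```python
# Moseley's law for K-alpha X-ray lines: f = (3/4) * R * c * (Z - 1)^2,
# illustrating the Z - 1 effective nuclear charge.
RYDBERG_FREQ = 3.2898e15   # Rydberg constant times c, in Hz
SPEED_OF_LIGHT = 2.9979e8  # m/s

def k_alpha_wavelength(Z):
    """Predicted K-alpha wavelength (m) for an element of atomic number Z."""
    frequency = 0.75 * RYDBERG_FREQ * (Z - 1) ** 2
    return SPEED_OF_LIGHT / frequency

# Copper (Z = 29): about 1.55e-10 m, close to the measured
# Cu K-alpha line near 1.54e-10 m.
print(k_alpha_wavelength(29))
```

The closeness of the predicted and measured values for one element after another is what let Moseley identify atomic number with nuclear charge.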
Henry Moseley, in his paper on atomic number and X-ray emission, mentions only the models of Rutherford and Van den Broek.
References
H. A. M. Snelders (1979) BROEK, Antonius Johannes van den (1870-1926), Biografisch Woordenboek van Nederland 1, The Hague. (in Dutch)
E. R. Scerri (2007) The Periodic Table, Its Story and Its Significance, Oxford University Press
E.R. Scerri (2016) A Tale of Seven Scientists and A New Philosophy of Science, chapter 3, Oxford University Press
External links
1870 births
1926 deaths
20th-century Dutch lawyers
20th-century Dutch physicists
People involved with the periodic table
Leiden University alumni
People from Zoetermeer
University of Paris alumni
Dutch expatriates in France | Antonius van den Broek | [
"Chemistry"
] | 808 | [
"Periodic table",
"People involved with the periodic table"
] |
2,282,753 | https://en.wikipedia.org/wiki/Orthocarbonic%20acid | Orthocarbonic acid, carbon hydroxide, methanetetrol is the name given to a hypothetical compound with the chemical formula or . Its molecular structure consists of a single carbon atom bonded to four hydroxyl groups. It would be therefore a fourfold alcohol. In theory it could lose four protons to give the hypothetical oxocarbon anion orthocarbonate , and is therefore considered an oxoacid of carbon.
Orthocarbonic acid is highly unstable. Calculations show that it decomposes into carbonic acid and water:

C(OH)4 → H2CO3 + H2O
Orthocarbonic acid is one of a group of ortho acids. The term ortho acid is also used to refer to the most hydroxylated acid in a set of oxoacids.
Researchers predict that orthocarbonic acid is stable at high pressure; hence it may form in the interior of the ice giant planets Uranus and Neptune, where water and methane are common.
Orthocarbonate anions
By loss of one through four protons, orthocarbonic acid could yield four anions: H3CO4^− (trihydrogen orthocarbonate), H2CO4^2− (dihydrogen orthocarbonate), HCO4^3− (hydrogen orthocarbonate), and CO4^4− (orthocarbonate).
Numerous salts of the fully deprotonated orthocarbonate anion CO4^4−, such as Ca2CO4 (calcium orthocarbonate) or Sr2CO4 (strontium orthocarbonate), have been synthesized under high-pressure conditions and structurally characterized by X-ray diffraction. Strontium orthocarbonate, Sr2CO4, is stable at atmospheric pressure. The orthocarbonate anion is tetrahedral in shape and is isoelectronic with orthonitrate. The C–O distance is 1.41 Å. Sr3(CO4)O (tristrontium orthocarbonate oxide) is an oxide orthocarbonate, also stable at atmospheric pressure.
Orthocarbonate esters
The tetravalent C(OR)4 moiety is found in stable organic compounds; these are formally esters of orthocarbonic acid and are therefore called orthocarbonates. For example, tetraethoxymethane can be prepared by the reaction between chloropicrin and sodium ethoxide in ethanol. Polyorthocarbonates are stable polymers that might have applications in absorbing organic solvents in waste treatment processes, or in dental restorative materials. The explosive trinitroethyl orthocarbonate possesses an orthocarbonate core.
A linear polymer that can be described as a (spiro) orthocarbonate ester of pentaerythritol was synthesized in 2002.
The carbon atom in the spiro ester bis-catechol orthocarbonate was found to have tetrahedral bond geometry, contrasting with the square planar geometry of the silicon atom in the analogous orthosilicate ester.
Orthocarbonates may exist in several conformers, that differ by the relative rotation of the C–O–C bridges. The conformation structures of some esters, such as tetraphenoxymethane, tetrakis(3,5-dimethyl-phenoxy)methane, and tetrakis(4-bromophenoxy)methane have been determined by X-ray diffraction.
See also
Pentaerythritol, C(CH2OH)4
Silicic acid, Si(OH)4
Carbonic acid, H2CO3
References
Hypothetical chemical compounds
Oxoacids
Tetrols
Hydroxides | Orthocarbonic acid | [
"Chemistry"
] | 742 | [
"Hydroxides",
"Hypotheses in chemistry",
"Theoretical chemistry",
"Hypothetical chemical compounds",
"Bases (chemistry)"
] |
2,282,947 | https://en.wikipedia.org/wiki/Advanced%20Telecommunications%20Computing%20Architecture | Advanced Telecommunications Computing Architecture (ATCA or AdvancedTCA) is the largest specification effort in the history of the PCI Industrial Computer Manufacturers Group (PICMG), with more than 100 companies participating. Known as AdvancedTCA, the official specification designation PICMG 3.x (see below) was ratified by the PICMG organization in December 2002. AdvancedTCA is targeted primarily to requirements for "carrier grade" communications equipment, but has recently expanded its reach into more ruggedized applications geared toward the military/aerospace industries as well. This series of specifications incorporates the latest trends in high speed interconnect technologies, next-generation processors, and improved Reliability, Availability and Serviceability (RAS).
Mechanical specifications
An AdvancedTCA board (blade) is 280 mm deep and 322 mm high. The boards have a metal front panel and a metal cover on the bottom of the printed circuit board to limit electromagnetic interference and to limit the spread of fire. The locking injector-ejector handle (lever) actuates a microswitch to let the Intelligent Platform Management Controller (IPMC) know that an operator wants to remove a board, or that the board has just been installed, thus activating the hot-swap procedure. AdvancedTCA boards support the use of PCI Mezzanine Card (PMC) or Advanced Mezzanine Card (AMC) expansion mezzanines.
The shelf supports RTMs (Rear Transition Modules). RTMs plug into the back of the shelf in slot locations that match the front boards. The RTM and the front board are interconnected through a Zone-3 connector. The Zone-3 connector is not defined by the AdvancedTCA specification.
Each shelf slot is 30.48 mm wide. This allows 14 boards to be installed in a 19-inch rack-mountable system and 16 boards in an ETSI rack-mountable system. A typical 14-slot system is 12 or 13 rack units high. The large AdvancedTCA shelves are targeted at the telecommunication market, so the airflow goes in the front of the shelf, across the boards from bottom to top, and out the rear of the shelf. Smaller shelves that are used in enterprise applications typically have horizontal air flow.
The small and medium AdvancedTCA shelves are also targeted at the telecommunication market; for lab research use, some shelves have an open cover to make testing easier.
Backplane architecture
The AdvancedTCA backplane provides point-to-point connections between the boards and does not use a data bus. The backplane definition is divided into three sections; Zone-1, Zone-2, and Zone-3. The connectors in Zone-1 provide redundant −48 VDC power and Shelf Management signals to the boards. The connectors in Zone-2 provide the connections to the Base Interface and Fabric Interface. All Fabric connections use point-to-point 100 Ω differential signals. Zone-2 is called "Fabric Agnostic" which means that any Fabric that can use 100 Ω differential signals can be used with an AdvancedTCA backplane.
The connectors in Zone-3 are user defined and are usually used to connect a front board to a Rear Transition Module. The Zone-3 area can also hold a special backplane to interconnect boards with signals that are not defined in the AdvancedTCA specification.
The AdvancedTCA Fabric specification uses Logical Slots to describe the interconnections. The Fabric Switch Boards go in Logical Slots 1 and 2. The chassis manufacturer is free to decide the relationship between Logical and Physical Slots in a chassis. The chassis Field Replaceable Units (FRU) data includes an Address Table that describes the relationship between the Logical and Physical slots.
The Shelf Managers communicate with each board and FRU in the chassis with IPMI (Intelligent Platform Management Interface) protocols running on redundant I²C buses on the Zone-1 connectors.
The Base Interface is the primary Fabric on the Zone-2 connectors and allocates 4 differential pairs per Base Channel. It is wired as a Dual-Star with redundant fabric hub slots at the core. It is commonly used for out of band management, firmware uploading, OS boot, etc.
The Fabric Interface on the backplane supports many different Fabrics and can be wired as a Dual-Star, Dual-Dual-Star, Mesh, Replicated-Mesh or other architectures. It allocates 8 differential pairs per Fabric Channel and each Channel can be divided into four 2-pair Ports. The Fabric Interface is typically used to move data between the boards and the outside network.
The Synchronization Clock Interface routes MLVDS (Multipoint Low-voltage differential signaling) clock signals over multiple 130 Ω buses. The clocks are typically used to synchronize telecom interfaces.
Update Channel Interface is a set of 10 differential signal pairs that interconnect two slots. Which slots are interconnected depends on the particular backplane design. These are signals commonly used to interconnect two hub boards, or redundant processor boards.
Fabrics
The Base Interface can only be 10BASE-T, 100BASE-TX, or 1000BASE-T Ethernet. Since all boards and hubs are required to support one of these interfaces there is always a network connection to the boards.
The Fabric is commonly SerDes Gigabit Ethernet, but can also be Fibre Channel, XAUI 10-Gigabit Ethernet, InfiniBand, PCI Express, or Serial RapidIO. Any Fabric that can use the point-to-point 100 Ω differential signals can be used with an AdvancedTCA backplane.
The PICMG 3.1 Ethernet/Fibre Channel specification has been revised to include IEEE 100GBASE-KR4 signaling to the existing IEEE 40GBASE-KR4, 10GBASE-KX4, 10GBASE-KR, and XAUI signaling.
Blades (boards)
AdvancedTCA blades can be Processors, Switches, AMC carriers, etc. A typical shelf will contain one or more switch blades and several processor blades.
When they are first inserted into the shelf the onboard IPMC is powered from the redundant −48 V on the backplane. The IPMC sends an IPMI event message to the Shelf Manager to let it know that it has been installed. The Shelf Manager reads information from the blade and determines if there is enough power available. If there is, the Shelf Manager sends a command to the IPMC to power-up the payload part of the blade. The Shelf Manager also determines what fabric ports are supported by the blade. It then looks at the fabric interconnect information for the backplane to find out what fabric ports are on the other end of the fabric connections. If the fabric ports on both ends of the backplane wires match then it sends an IPMI command to both blades to enable the matching ports.
Once the blade is powered-up and connected to the fabrics the Shelf Manager listens for event messages from the sensors on the blade. If a temperature sensor reports that it is too warm then the Shelf Manager will increase the speed of the fans.
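The activation sequence above reduces to a power-budget check followed by fabric-port matching. The following Python sketch models that decision logic; the data structures and field names are illustrative inventions for this sketch, not anything defined by the PICMG specifications:

```python
# Illustrative model of a Shelf Manager's blade-activation decision:
# power up the payload only if the shelf power budget allows it, then
# enable only those fabric ports whose types match across the backplane.

def activate_blade(blade, shelf):
    """Return the actions a Shelf Manager would take for a newly inserted blade."""
    actions = []
    if blade["power_draw_w"] > shelf["power_available_w"]:
        # Not enough power: leave the blade on management power only.
        return ["hold in management-power-only state"]
    shelf["power_available_w"] -= blade["power_draw_w"]
    actions.append("power up payload")
    # Enable a fabric channel only when both ends support the same fabric type.
    for channel, peer_fabric in shelf["peer_fabrics"].items():
        if blade["fabrics"].get(channel) == peer_fabric:
            actions.append(f"enable {peer_fabric} on channel {channel}")
    return actions

shelf = {"power_available_w": 300,
         "peer_fabrics": {1: "1000BASE-T", 2: "10GBASE-KX4"}}
blade = {"power_draw_w": 200,
         "fabrics": {1: "1000BASE-T", 2: "PCIe"}}
print(activate_blade(blade, shelf))
# Channel 1 matches on both ends and is enabled; channel 2 does not match
# (10GBASE-KX4 vs. PCIe) and stays disabled.
```

The real protocol exchanges IPMI messages over the redundant IPMBs; this sketch only captures the shape of the decisions, not the wire format.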
The FRU data in the board contains descriptive information like the manufacturer, model number, serial number, manufacturing date, revision, etc. This information can be read remotely to perform an inventory of the blades in a shelf.
Shelf Management
The Shelf Manager monitors and controls the boards (blades) and FRU in the shelf. If any sensor reports a problem the Shelf Manager can take action or report the problem to a System Manager. This action could be something simple like making the fans go faster, or more drastic such as powering off a board. Each board and FRU contains inventory information (FRU Data) that can be retrieved by the Shelf Manager. The FRU data is used by the Shelf Manager to determine if there is enough power available for a board or FRU and if the Fabric ports that interconnect boards are compatible. The FRU data can also reveal the manufacturer, manufacturing date, model number, serial number, and asset tag.
Each blade, intelligent FRU, and Shelf Manager contains an Intelligent Platform Management Controller (IPMC). The Shelf Manager communicates with the boards and intelligent FRUs with IPMI protocols running on redundant I²C buses. IPMI protocols include packet checksums to ensure that data transmission is reliable. It is also possible to have non-intelligent FRUs managed by an intelligent FRUs. These are called Managed FRUs and have the same capabilities as an intelligent FRU.
The interconnection between the Shelf Manager and the boards is a redundant pair of Intelligent Platform Management Buses (IPMBs). The IPMB architecture can be a pair of buses (Bused IPMB) or a pair of radial connections (Radial IPMB). Radial IPMB implementations usually include the capability to isolate individual IPMB connections to improve reliability in the event of an IPMC failure.
The Shelf Manager communicates with outside entities with RMCP (IPMI over TCP/IP), HTTP, SNMP over an Ethernet network. Some Shelf Managers support the Hardware Platform Interface, a technical specification defined by the Service Availability Forum.
New specification activity
Two new working groups have been started to adapt ATCA to the specific requirements of physics research.
WG1: Physics xTCA I/O, Timing and Synchronization Working Group
WG1 will define rear I/O for AMC modules and a new component called the μRTM. Additions will be made to the μTCA Shelf specification to accommodate the μRTM, and to the ATCA specification to accommodate AMC rear I/O for an ATCA carrier RTM. Signal lines will be identified for use as clocks, gates, and triggers, which are commonly used in physics data acquisition systems.
WG2: Physics xTCA Software Architectures and Protocols Working Group
WG2 will define a common set of software architectures and supporting infrastructure to facilitate interoperability and portability of both hardware and software modules among the various applications developed for the Physics xTCA platform, minimizing the development effort and time required to construct experiments and systems using that platform.
A working group was formed to extend ATCA to non-telecom markets.
PICMG 3.7 ATCA Extensions for Applications Outside the Telecom Central Office
The goals of this new working group are to define enhanced features to support double-wide boards; add enhancements to support 600 W single-slot boards and 800 W double-slot boards; add support for double-sided shelves with full-sized boards plugged into both the front and rear of the shelf; and add support for 10 Gb/s signaling on the Base Interface.
PICMG specifications
3.0 is the "base" or "core" specification. The AdvancedTCA definition alone defines a Fabric agnostic chassis backplane that can be used with any of the Fabrics defined in the following specifications:
3.1 Ethernet (and Fibre Channel)
3.2 InfiniBand
3.3 StarFabric
3.4 PCI Express (and PCI Express Advanced Switching)
3.5 RapidIO
See also
AdvancedMC - expansion cards for AdvancedTCA; also can be used standalone in MicroTCA systems.
AXIe - A new modular instrumentation standard formally launched in November 2009, based on the ATCA standard.
References
External links
Official AdvancedTCA Site
Official PICMG Site
coreIPM Project: Free & Open Source Software for ATCA Platform Management
AdvancedTCA Systems Magazine
Computer buses
Telecommunications equipment
Computer standards
Telecommunications standards | Advanced Telecommunications Computing Architecture | [
"Technology"
] | 2,342 | [
"Computer standards"
] |
2,282,957 | https://en.wikipedia.org/wiki/William%20Forsell%20Kirby | William Forsell Kirby (14 January 1844 – 20 November 1912) was an English entomologist and folklorist. He specialized in the study of the stick insects, describing nearly 70 species and 22 genera. His collection filled 120 cabinets and claimed that on reorganization, it would need 500 drawers. The stick insect Phobaeticus kirbyi described from Borneo and named by Brunner in 1907 after Kirby is one of the largest stick insects in the world.
Life and work
Kirby was born in Leicester. He was the eldest son of banker Samuel Kirby and Lydia Forsell. He was educated privately, and became interested in butterflies and moths at an early age. The family moved to Brighton, where he became acquainted with Henry Cooke, Frederick Merrifield and J. N. Winter through the Brighton and Sussex entomological society. He published his first entomological article in 1856. He was elected fellow of the Entomological Society of London in 1861. He published the Manual of European Butterflies in 1862.
Kirby lived in Germany in 1866, where he met and married Johanna Maria Kappel (1835–1893). Johanna, known as "Hannchen", was the second daughter of farmer Wilhelm Kappel and Sibilla Gertraud Kirberg. They had a son, William Egmont Kirby, who also became an entomologist. Johanna's younger brother was the entomologist August Kappel (1840–1915), who became a fellow of the Linnean Society. In 1867 Kirby became a curator in the Museum of the Royal Dublin Society, and produced a Synonymic Catalogue of Diurnal Lepidoptera (1871; Supplement 1877). He moved to London in 1879, when he joined the staff of the British Museum (Natural History) as an assistant after the death of Frederick Smith. Here he lived close to his friend H. W. Bates, after whom he named a couple of stick insect species.
Kirby published a number of catalogues, as well as Rhopalocera Exotica (1887–1897) and an Elementary Text-book of Entomology. He also did important work on orthopteroid insects including a three volume Catalogue of all known species (1904, 1906, 1910). He retired in 1909.
Kirby had a wide range of interests, knew many languages (a working knowledge of German, Italian, Persian, Portuguese, Spanish, Russian, Swedish, Danish and Finnish) and fully translated Finland's national epic, the Kalevala, from Finnish into English. Kirby's translation, which carefully reproduces the Kalevala meter, was a major influence on the writings of J.R.R. Tolkien, who first read it in his teens. Kirby also provided many footnotes to Sir Richard Burton's translation of the Arabian Nights.
Kirby died at Chiswick and was buried in Chiswick cemetery. An obituary was written by his son, who noted that his "never tiring assistance to all who required help or counsel endeared him to a large circle of friends and acquaintances".
Evolution
Kirby was an advocate of theistic evolution. In his book Evolution and Natural Theology, he argued that evolution and theism are compatible. He noted that creationism was scientifically untenable and refuted its arguments. He viewed nature as a "vast self-adjusting machine".
Publications
Entomology
Collaborative works, and other acknowledgements
, with index
, index and introductions
, index and introductions
Other biology
Literary
, translator and editor
, compiled and translated from Estonian and German
, translator
, translator
Notes
Further reading
The Natural History Museum at South Kensington by William T. Stearn
External links
1844 births
1912 deaths
19th-century British translators
Critics of creationism
Employees of the Natural History Museum, London
English lepidopterists
Irish entomologists
Scientists from Leicester
Theistic evolutionists
Translators from Finnish | William Forsell Kirby | [
"Biology"
] | 780 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
2,283,060 | https://en.wikipedia.org/wiki/Gable%20stone | Gable stones () are carved and often colourfully painted stone tablets, which are set into the walls of buildings, usually at about 4 metres from the ground. They serve both to identify and embellish the building. They are also called "stone tablets" by the Rijksmuseum, which sometimes appends "from a facade". A "wall stone" is another suggested translation from the Dutch term.
The content of gable stones may explain something about the house's owner and are a feature of the urban fabric of Amsterdam. Some 2,500 of these stones can still be found in the Netherlands, of which around 850 are in Amsterdam and 250 in Maastricht, while others are also found in cities such as Brussels, Liège, Lille, Oslo, Bergen, Munich, Copenhagen, Bucharest, Zürich, Stockholm and Warsaw.
History
Gable stones came into use in the 16th century, in the days before house numbers, taking over from hanging signs as a way of simultaneously and memorably identifying and adorning a house.
The tradition is alive and has moved with the times – new stones are still commissioned, and for instance the Rabobank at Frederiksplein 54 in Amsterdam wistfully commemorates the introduction of the euro with a stone entitled De eerste en de laatste gulden (The first and the last guilder), created by Zutphen sculptor Hans 't Mannetje.
In Amsterdam, many gable stones have been conserved by the Vereniging Vrienden van Amsterdamse Gevelstenen (VVAG) or Friends of Amsterdam Gable Stones.
Features
They normally combine a picture with an inscription, or sometimes just a date. Some illustrate the name or profession of the owner, for instance a quill pen as a badge for an author, or a ship for a sailor. Some are named after notable people (The King of Bohemia) or faraway trading destinations (Königsberg). Some stones act as talismans, quoting from holy scripture. A pious motto repeatedly found on Dutch gable stones is Nooit Volmaakt (Never Perfect), a testimony to the householder's belief that only God can achieve perfection. Going beyond practicality or superstition, some stones make a joke, usually a visual pun. For instance the "Batenburg" stone from Prinsengracht, Amsterdam, shown here, puns on the words baten (to profit) and burg (castle) to form a playful honor to André Batenburg, a supporter of the city's historic preservation movement who helped conserve the house on which the stone is mounted.
A variation on the theme is a tablet bearing a biblical quotation or more worldly motto, but without an image. (Mediaeval builders also often carved mottos in wooden beams or painted them on plaster panels.) An interesting example is the Dutch-language tablet in Österlånggatan in the Hanseatic old town of Stockholm, Sweden. It reads:
Gaet het wel men heeft veel vrinden
kert het luck wie kan se vinden
[When things go well one has many friends
If your luck changes who can find them?]
Gallery
See also
Gable
Gaper
Plaquette
Pub signs
Marriage stone
Datestone
References
Stonemasonry
Ornaments (architecture)
Architectural elements
Architecture in the Netherlands | Gable stone | [
"Technology",
"Engineering"
] | 681 | [
"Building engineering",
"Construction",
"Stonemasonry",
"Architectural elements",
"Components",
"Architecture"
] |
2,283,094 | https://en.wikipedia.org/wiki/NGC%206822 | NGC 6822 (also known as Barnard's Galaxy, IC 4895, or Caldwell 57) is a barred irregular galaxy approximately 1.6 million light-years away in the constellation Sagittarius. Part of the Local Group of galaxies, it was discovered by E. E. Barnard in 1884, with a six-inch refractor telescope. It is the closest non-satellite galaxy to the Milky Way, but lies just outside its virial radius. It is similar in structure and composition to the Small Magellanic Cloud. It is about 7,000 light-years in diameter.
Observational history
NGC 6822 was discovered by E. E. Barnard in 1884 using a six-inch refractor telescope.
Edwin Hubble, in the paper N.G.C. 6822, A Remote Stellar System, identified 15 variable stars (11 of which were Cepheids) in this galaxy. He also surveyed the distribution of the galaxy's stars down to magnitude 19.4. He provided spectral characteristics, luminosities and dimensions for the five brightest "diffuse nebulae" (giant H II regions), which included the Bubble Nebula and the Ring Nebula. He also computed the absolute magnitude of the entire galaxy.
Hubble's detection of eleven Cepheid variable stars was a milestone in astronomy. Utilizing the Cepheid period–luminosity relationship, Hubble determined a distance to the system. This was the first system beyond the Magellanic Clouds to have its distance determined. (Hubble continued this process with the Andromeda Galaxy and the Triangulum Galaxy.) This distance was far beyond Harlow Shapley's value of 300,000 light-years for the size of the universe. In the paper, Hubble effectively settled the "Great Debate" of 1920 between Heber Curtis and Shapley over the scale of the universe and the nature of the "spiral nebulae". It soon became evident that all spiral nebulae were in fact spiral galaxies far outside our own Milky Way.
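Hubble's distance determination rests on the distance modulus: the period–luminosity relation gives a Cepheid's absolute magnitude M, and together with the observed apparent magnitude m the distance follows from m − M = 5 log10(d / 10 pc). A short illustrative computation in Python (the magnitudes below are hypothetical, not Hubble's actual measurements):

```python
def distance_parsecs(m_apparent, M_absolute):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# Hypothetical Cepheid: apparent magnitude 19.4, absolute magnitude -4.0.
d_pc = distance_parsecs(19.4, -4.0)
d_ly = d_pc * 3.2616  # light-years per parsec
print(f"{d_ly:.3g} light-years")
```

The same relation, applied to eleven Cepheids and averaged, is what placed NGC 6822 well outside the Milky Way.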
An analysis of Hubble's plates by Susan Kayser in 1966 remained the most complete study of this galaxy until 2002.
In 1977, Paul W. Hodge extended the list of known H II regions in Barnard to 16. Today, there are over 150 of these regions catalogued in Barnard's Galaxy.
Star formation
Observations of the galaxy show stars forming in the dense cores of giant clouds of molecular hydrogen gas, cold enough to collapse under their own gravity. The distribution of hydrogen gas is disk-shaped, but, mysteriously, it is angled at about 60° relative to the stellar distribution. Most of its stars formed within the last 3 to 5 billion years.
NGC 6822 has spent most of its life in relative isolation. However, it likely passed within the virial radius of the Milky Way some 3 to 4 billion years ago, which may be coincident with its increase in star formation.
See also
Large Magellanic Cloud, another irregular galaxy within the Local Group
Notes
Sources
Robert Burnham, Jr, Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol 3, p. 1559
External links
The Milky Way's Tiny but Tough Galactic Neighbour — ESO Press Release
NGC 6822: Barnard's Galaxy
SEDS – NGC 6822
Barred irregular galaxies
Local Group
Sagittarius (constellation)
6822
IC objects
063616
057b
18840817 | NGC 6822 | [
"Astronomy"
] | 701 | [
"Sagittarius (constellation)",
"Constellations"
] |
2,283,222 | https://en.wikipedia.org/wiki/Endohedral%20fullerene | Endohedral fullerenes, also called endofullerenes, are fullerenes that have additional atoms, ions, or clusters enclosed within their inner spheres. The first lanthanum C60 complex called La@C60 was synthesized in 1985. The @ (at sign) in the name reflects the notion of a small molecule trapped inside a shell. Two types of endohedral complexes exist: endohedral metallofullerenes and non-metal doped fullerenes.
Notation
In a traditional chemical formula notation, a buckminsterfullerene (C60) with an atom (M) was simply represented as MC60 regardless of whether M was inside or outside the fullerene. In order to allow for more detailed discussions with minimal loss of information, a more explicit notation was proposed in 1991,
where the atoms listed to the left of the @ sign are situated inside the network composed of the atoms listed to the right. The example above would then be denoted M@C60 if M were inside the carbon network. A more complex example is K2(K@C59B), which denotes "a 60-atom fullerene cage with one boron atom substituted for a carbon in the geodesic network, a single potassium trapped inside, and two potassium atoms adhering to the outside."
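For the simple M@Cn case, the notation can be parsed mechanically by splitting at the @ sign. A minimal Python sketch (it deliberately does not handle the full nested syntax of examples like K2(K@C59B)):

```python
import re

def parse_endohedral(formula):
    """Split a simple endohedral formula like 'Sc3N@C80' into
    (encapsulated species, cage). Raises ValueError for anything else."""
    m = re.fullmatch(r"([A-Za-z0-9]+)@(C\d+[A-Za-z0-9]*)", formula)
    if not m:
        raise ValueError(f"not a simple endohedral formula: {formula}")
    return m.group(1), m.group(2)

print(parse_endohedral("La@C60"))    # ('La', 'C60')
print(parse_endohedral("Sc3N@C80"))  # ('Sc3N', 'C80')
```

The cage pattern `C\d+[A-Za-z0-9]*` also admits substituted cages such as C59B, but the grammar here is an illustrative simplification of the notation, not a formal definition from the 1991 proposal.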
The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene.
Endohedral metallofullerenes
Doping fullerenes with electropositive metals takes place in an arc reactor or via laser evaporation. The metals can be transition metals like scandium, yttrium as well as lanthanides like lanthanum and cerium. Also possible are endohedral complexes with elements of the alkaline earth metals like barium and strontium, alkali metals like potassium and tetravalent metals like uranium, zirconium and hafnium. The synthesis in the arc reactor is however unspecific. Besides unfilled fullerenes, endohedral metallofullerenes develop with different cage sizes like La@C60 or La@C82 and as different isomer cages. Aside from the dominant presence of mono-metal cages, numerous di-metal endohedral complexes and the tri-metal carbide fullerenes like Sc3C2@C80 were also isolated.
In 1999 a discovery drew wide attention: with the synthesis of Sc3N@C80 by Harry Dorn and coworkers, the inclusion of a molecular fragment in a fullerene cage had succeeded for the first time. This compound can be prepared by arc-vaporization, at temperatures up to 1100 °C, of graphite rods packed with scandium(III) oxide, iron nitride and graphite powder in a K-H generator in a nitrogen atmosphere at 300 Torr.
Endohedral metallofullerenes are characterised by the fact that electrons transfer from the metal atom to the fullerene cage and that the metal atom takes a position off-center in the cage. The size of the charge transfer is not always simple to determine. In most cases it is between 2 and 3 charge units; in the case of La2@C80, however, it can even be about 6 electrons, as in Sc3N@C80, which is better described as [Sc3N]+6@[C80]−6. These anionic fullerene cages are very stable molecules and do not have the reactivity associated with ordinary empty fullerenes. They are stable in air up to very high temperatures (600 to 850 °C).
The lack of reactivity in Diels-Alder reactions is utilised in a method to purify [C80]−6 compounds from a complex mixture of empty and partly filled fullerenes of different cage size. In this method Merrifield resin is modified as a cyclopentadienyl resin and used as a solid phase against a mobile phase containing the complex mixture in a column chromatography operation. Only very stable fullerenes such as [Sc3N]+6@[C80]−6 pass through the column unreacted.
In Ce2@C80 the two metal atoms exhibit a non-bonded interaction. Since all the six-membered rings in C80-Ih are equal, the two encapsulated Ce atoms exhibit a three-dimensional random motion. This is evidenced by the presence of only two signals in the 13C-NMR spectrum. It is possible to force the metal atoms to a standstill at the equator, as shown by X-ray crystallography, when the fullerene is exohedrally functionalized by an electron-donating silyl group in a reaction of Ce2@C80 with 1,1,2,2-tetrakis(2,4,6-trimethylphenyl)-1,2-disilirane.
Gd@C82(OH)22, an endohedral metallofullerenol, can competitively inhibit activation of the WW domain in the oncogene YAP1. It was originally developed as an MRI contrast agent.
Non-metal doped fullerenes
Endohedral complexes He@C60 and Ne@C60 are prepared by pressurizing C60 to ca. 3 bar in a noble-gas atmosphere. Under these conditions about one out of every 650,000 C60 cages was doped with a helium atom.
The formation of endohedral complexes with helium, neon, argon, krypton and xenon, as well as numerous adducts of the He@C60 compound, was also demonstrated at pressures of 3 kbar, with incorporation of up to 0.1% of the noble gases.
While noble gases are chemically very inert and commonly exist as individual atoms, this is not the case for nitrogen and phosphorus and so the formation of the endohedral complexes N@C60, N@C70 and P@C60 is more surprising.
The nitrogen atom is in its electronic initial state (4S3/2) and is highly reactive. Nevertheless, N@C60 is sufficiently stable that exohedral derivatization from the mono- to the hexa adduct of the malonic acid ethyl ester is possible.
In these compounds no charge transfer from the nitrogen atom at the center to the carbon atoms of the cage takes place. Therefore, 13C-couplings, which are observed very easily with the endohedral metallofullerenes, could only be observed in the case of N@C60 in a high-resolution spectrum, as shoulders of the central line.
The central atom in these endohedral complexes is located in the center of the cage. While other atomic traps require complex equipment, e.g. laser cooling or magnetic traps, endohedral fullerenes represent an atomic trap that is stable at room temperature and for an arbitrarily long time. Atomic or ion traps are of great interest since particles are present free from (significant) interaction with their environment, allowing unique quantum mechanical phenomena to be explored. For example, the compression of the atomic wave function as a consequence of the packing in the cage could be observed with ENDOR spectroscopy. The nitrogen atom can be used as a probe, in order to detect the smallest changes of the electronic structure of its environment.
Unlike the endohedral metallofullerenes, these complexes cannot be produced in an arc. Atoms are implanted into the fullerene starting material using a gas discharge (nitrogen and phosphorus complexes) or by direct ion implantation. Alternatively, endohedral hydrogen fullerenes can be produced by opening and closing a fullerene by organic chemistry methods.
A recent example of endohedral fullerenes includes single molecules of water encapsulated in C60.
Noble gas endofullerenes are predicted to exhibit unusual polarizability. The calculated mean polarizability of Ng@C60 does not equal the sum of the polarizabilities of the fullerene cage and the trapped atom; that is, an exaltation of polarizability occurs. The sign of the exaltation Δα depends on the number of atoms n in the fullerene molecule: for small fullerenes it is positive, while for larger ones it is negative (depression of polarizability). The following formula, describing the dependence of Δα on n, has been proposed: Δα = αNg(2e^(−0.06(n − 20)) − 1). It describes the DFT-calculated mean polarizabilities of Ng@Cn endofullerenes with sufficient accuracy. The calculated data allow using the C60 fullerene as a Faraday cage, which isolates the encapsulated atom from an external electric field. The relations mentioned should also hold for more complicated endohedral structures (e.g., C60@C240 and giant fullerene-containing "onions").
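Taking the fitted dependence Δα = αNg(2e^(−0.06(n − 20)) − 1) at face value, the exaltation changes sign where the exponential factor drops to 1/2, i.e. near n ≈ 20 + ln 2/0.06 ≈ 31.6. A quick numerical check (the helium polarizability below is an approximate literature value in atomic units):

```python
import math

def polarizability_exaltation(alpha_ng, n):
    """Delta-alpha for a noble-gas atom of polarizability alpha_ng
    trapped inside an n-atom fullerene cage (fitted formula)."""
    return alpha_ng * (2 * math.exp(-0.06 * (n - 20)) - 1)

alpha_he = 1.38  # helium static polarizability, atomic units (approximate)
print(polarizability_exaltation(alpha_he, 20) > 0)  # small cage: exaltation
print(polarizability_exaltation(alpha_he, 60) < 0)  # C60: depression
```

By this fit C60 (n = 60) already lies well on the depression side, consistent with its description as a Faraday cage for the trapped atom.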
Molecular endofullerenes
Closed fullerenes encapsulating small molecules have been synthesized. Representative are the synthesis of the dihydrogen endofullerene H2@C60, the water endofullerene H2O@C60, the hydrogen fluoride endofullerene HF@C60, and the methane endofullerene CH4@C60. The encapsulated molecules display unusual physical properties which have been studied by a variety of physical methods. As shown theoretically, compression of molecular endofullerenes (e.g., H2@C60) may lead to dissociation of the encapsulated molecules and reaction of their fragments with interiors of the fullerene cage. Such reactions should result in endohedral fullerene adducts, which are currently unknown.
See also
Fullerene ligands
Inclusion compounds
References
External links
Movie "Helium atom trapped in fullerene (C60) and dodecahedrane (C20H20)" (Youtube)
Fullerenes
Supramolecular chemistry | Endohedral fullerene | [
"Chemistry",
"Materials_science"
] | 2,124 | [
"Nanotechnology",
"nan",
"Supramolecular chemistry"
] |
2,283,276 | https://en.wikipedia.org/wiki/Arame | , sea oak is a species of kelp, of the brown algae, best known for its use in Japanese cuisine.
Description
Eisenia bicyclis is indigenous to temperate Pacific Ocean waters centered near Japan, although it is deliberately cultured elsewhere, including South Korea. It grows and reproduces seasonally. Two flattened oval fronds rise from a stiff woody stipe. The fronds are shed and new ones formed annually. The plant appears both branched and feathered. It may be harvested by divers manually or mechanically, and the dried form is available year-round.
Cuisine
It is one of many species of seaweed used in Asian cuisine.
Usually purchased in a dried state, it is reconstituted quickly, taking about five minutes. Arame comes in dark brown strands, has a mild, semi-sweet flavor, and a firm texture. It is added to appetizers, casseroles, muffins, pilafs, soups, toasted dishes, and many other types of food. Its mild flavor makes it adaptable to many uses.
Chemistry
Arame is high in calcium, iodine, iron, magnesium, and vitamin A as well as being a dietary source of many other minerals. It also is harvested for alginate, fertilizer and iodide. It contains the storage polysaccharide laminarin and the tripeptide eisenin, a peptide with immunological activity.
Lignan content in arame is noted by several sources. It also contains the phlorotannins phlorofucofuroeckol A, dioxinodehydroeckol, fucofuroeckol A, eckol, dieckol, triphloroethol A and 7-phloroethol. Extracts of this alga have been tested to combat MRSA staph infections.
See also
Edible seaweed
Seafood allergy
References
Further reading
Kristina Turner. 1996. The Self-Healing Cookbook: A Macrobiotic Primer for the Healing Body. p. 122
Iwata, Kayoko. Tagami, Keiko. Uchida, Shigeo. (16 July 2013). "Ecological Half-Lives of Radiocesium in 16 Species in Marine Biota after the TEPCO's Fukushima Daiichi Nuclear Power Plant Accident". Environmental Science and Technology. Vol. 47. Issue. 14. Web of Science Core Collection.
External links
AlgaeBase Profile, M.D. Guiry in Guiry, M.D. & Guiry, G.M. 2013. AlgaeBase. National University of Ireland, Galway, retrieved 8 February 2013.
Lessoniaceae
Edible seaweeds | Arame | [
"Biology"
] | 559 | [
"Algae stubs",
"Eukaryotes",
"Algae",
"Eukaryote stubs"
] |
2,283,477 | https://en.wikipedia.org/wiki/Sextans%20Dwarf%20Spheroidal | The Sextans Dwarf Spheroidal is a dwarf spheroidal galaxy that was discovered in 1990 by Mike Irwin as the 8th satellite of the Milky Way, located in the constellation of Sextans. It is also an elliptical galaxy, and displays a redshift because it is receding from the Sun at 224 km/s (72 km/s from the Galaxy). The distance to the galaxy is 320,000 light-years and the diameter is 8,400 light-years along its major axis.
Like other dwarf spheroidal galaxies, the Sextans Dwarf's population consists of old, metal-poor stars: one study found that the majority of stars have a metallicity between [Fe/H] = −3.2 and −1.4. An analysis of several stars found them to also be deficient in barium, except for one star.
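Since [Fe/H] is a base-10 logarithm of the iron-to-hydrogen ratio relative to the Sun, the quoted range can be converted to linear fractions of the solar iron abundance; a minimal sketch (the conversion is the standard definition, not from the article):

```python
def feh_to_linear(feh):
    """Convert logarithmic metallicity [Fe/H] to a linear iron
    abundance as a fraction of the solar value:
    [Fe/H] = log10((Fe/H)_star / (Fe/H)_sun)."""
    return 10.0 ** feh

# Most stars in the Sextans dwarf fall between [Fe/H] = -3.2 and -1.4,
# i.e. roughly 1/1600 down to 1/25 of the solar iron abundance.
low, high = feh_to_linear(-3.2), feh_to_linear(-1.4)
print(f"{low:.5f} to {high:.5f} of solar")
```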
References
Dwarf spheroidal galaxies
Local Group
Milky Way Subgroup
Sextans
| Sextans Dwarf Spheroidal | [
"Astronomy"
] | 200 | [
"Sextans",
"Constellations"
] |
2,283,598 | https://en.wikipedia.org/wiki/Infrared%20multiphoton%20dissociation | Infrared multiple photon dissociation (IRMPD) is a technique used in mass spectrometry to fragment molecules in the gas phase usually for structural analysis of the original (parent) molecule.
How it works
An infrared laser is directed through a window into the vacuum of the mass spectrometer where the ions are. The mechanism of fragmentation involves the absorption by a given ion of multiple infrared photons. The parent ion becomes excited into more energetic vibrational states until a bond(s) is broken resulting in gas phase fragments of the parent ion. In the case of powerful laser pulses, the dissociation proceeds via inner-valence ionization of electrons.
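The multiple-photon requirement can be illustrated by comparing a typical bond dissociation energy with the energy of a single IR photon; the 10.6 µm CO2-laser wavelength and the ~350 kJ/mol bond energy below are illustrative assumptions, not values from the article:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def photons_to_break(bond_kj_per_mol, wavelength_m):
    """Number of IR photons whose combined energy matches one bond."""
    bond_j = bond_kj_per_mol * 1e3 / N_A   # energy of a single bond, J
    photon_j = H * C / wavelength_m        # energy of one photon, J
    return math.ceil(bond_j / photon_j)

# A ~350 kJ/mol bond vs. 10.6 um CO2-laser photons: a few dozen photons,
# which is why IRMPD proceeds through many sequential absorptions.
print(photons_to_break(350, 10.6e-6))
```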
IRMPD is most often used in Fourier transform ion cyclotron resonance mass spectrometry.
Infrared photodissociation spectroscopy
By applying intense tunable IR lasers, like IR-OPOs or IR free electron lasers, the wavelength dependence of the IRMPD yield can be studied. This infrared photodissociation spectroscopy allows for the measurement of vibrational spectra of (unstable) species that can only be prepared in the gas phase. Such species include molecular ions but also neutral species like metal clusters that can be gently ionized after interaction with the IR light for their mass spectrometric detection.
Analytical applications
The combination of mass spectrometry and IRMPD with tunable lasers (IR ion spectroscopy) is increasingly recognized as a powerful tool for small-molecule identification. Examples are metabolomics, where biomarkers are identified in body fluids (urine, blood, cerebrospinal fluid), and forensic science, where isomeric designer drugs have been identified in seized samples.
Isotope separation
Due to the relatively large differences in IR absorption frequencies that arise from the different resonance frequencies of molecules containing different isotopes, this technique has been suggested as a way to perform isotope separation on difficult-to-separate isotopes in a single pass. For example, molecules of UF6 containing U-235 might be selectively dissociated as a result of such a laser resonance, leaving UF6 containing the heavier U-238 intact.
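The selectivity rests on a small vibrational isotope shift: in a harmonic oscillator the frequency scales as the inverse square root of the reduced mass. A crude diatomic U-F sketch of the size of that shift (the real UF6 normal modes are more complicated; the masses are standard atomic masses, not from the article):

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a two-body oscillator."""
    return m1 * m2 / (m1 + m2)

# Harmonic-oscillator frequency scales as 1/sqrt(reduced mass).
mu_235 = reduced_mass(235.044, 18.998)   # 235U-F pair
mu_238 = reduced_mass(238.051, 18.998)   # 238U-F pair
ratio = math.sqrt(mu_238 / mu_235)       # f(235UF6) / f(238UF6)
print(f"fractional frequency shift ~ {ratio - 1:.2e}")
```

The shift is only a few parts in ten thousand, which is why a narrow-band laser is needed to excite one isotopologue and not the other.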
See also
Infrared spectroscopy
Tandem mass spectrometry
Photodissociation
Blackbody infrared radiative dissociation
Electron capture dissociation
Collision-induced dissociation is another, more common, method to fragment gas phase ions.
References
Tandem mass spectrometry
Measuring instruments | Infrared multiphoton dissociation | [
"Physics",
"Technology",
"Engineering"
] | 482 | [
"Mass spectrometry",
"Spectrum (physical sciences)",
"Measuring instruments",
"Tandem mass spectrometry"
] |
2,283,810 | https://en.wikipedia.org/wiki/Garage%20kit | A garage kit (ガレージキット) or resin kit is an assembly scale model kit most commonly cast in polyurethane resin.
They are often model figures portraying humans or other living creatures. In Japan, kits often depict anime characters, and in the United States, depictions of movie monsters are common. However, kits can be produced depicting a wide range of subjects, from characters in horror, science fiction, fantasy films, television and comic books to nudes, pin-up girls and original works of art, as well as upgrade and conversion kits for existing models and airsoft guns.
Originally garage kits were amateur-produced, and the term originated with dedicated hobbyists using their garages as workshops. Unable to find model kits of subjects they wanted on the market, they began producing kits of their own. As the market expanded, professional companies began making similar kits. Sometimes a distinction is made between true garage kits, made by amateurs, and resin kits, manufactured professionally by companies.
Because of the labor-intensive casting process, garage kits are usually produced in limited numbers and are more expensive than injection-molded plastic kits. The parts are glued together using cyanoacrylate (Super Glue) or an epoxy cement and the completed figure is painted. Some figures are sold completed, but most commonly they are sold in parts for the buyer to assemble and finish.
Japan
Japanese garage kits are often anime figures depicting popular characters. Another major subject is "Kaiju" monsters such as Godzilla, and they may also include subjects such as mecha and science fiction spaceships. Garage kits can be as simple as a one piece figure, or as complex as kits with well over one hundred parts. Most commonly they are cast in polyurethane resin, but may also be fabricated of diverse substances such as soft vinyl, white metal (a type of lead alloy) and fabric.
Originally the kits were sold and traded between hobbyists at conventions like Wonder Festival. As the market grew, a number of companies began producing resin kits professionally, such as Federation Models, Volks, WAVE/Be-J, Kaiyodo, Kotobukiya and B-Club, a subsidiary of Bandai producing Gundam kits (Gunpla).
The scale of figure kits varies, but as of 2008, 1/8 seems to be the predominant scale. Prior to 1990 the dominant scale was 1/6. This scale shrink coincided with the rise in material, labor, and licensing costs. Other scales, such as 1/3, 1/4, 1/6, 1/7 also exist, but are less common. Larger kits (1/3, 1/4, etc.) generally command higher prices due to the greater amounts of material required to produce them.
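Scale arithmetic is simple division: the finished figure's height is the subject's height divided by the scale denominator. A quick sketch (the 160 cm character height is a made-up example):

```python
def figure_height_cm(character_height_cm, scale_denominator):
    """Height of a finished figure at a given scale, e.g. 1/8."""
    return character_height_cm / scale_denominator

# A hypothetical 160 cm character at the now-common 1/8 scale versus
# the formerly dominant 1/6 scale:
print(figure_height_cm(160, 8))  # 20.0
print(figure_height_cm(160, 6))
```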
Japanese garage kits are usually cast as separate parts which are packed with instructions and sometimes photographs of the final product. Most professionally manufactured kits come in a box while amateur-produced kits sold at conventions come in plastic bags, blank boxes or even boxes with copied sheet information glued onto them. They are not painted, but some of them do have decals provided by the sculptor or circle. The builder then paints and assembles the model, ideally using an airbrush. However, they can also be painted with a regular brush using a variety of techniques to achieve similar effects as when painting with a conventional airbrush.
United States
In the 1950s and 60s, Aurora and other companies produced cheap plastic models of movie monsters, comic book heroes, and movie and television characters. This market has since disappeared, but through the 1980s an underground market grew through which enthusiasts could acquire the old plastic model kits.
In the early to mid-1980s, hobbyists began creating their own garage kits of movie monsters. There was a small but enthusiastic market for these new model kits. They were poured into flexible molds which could produce rigid reproductions of new figures which were then sculpted more accurately and with more detail than the old plastic model kits. They were usually produced in limited numbers and sold primarily by mail order and at toy and hobby conventions.
In the mid- to late 1980s the monster model kit hobby grew toward the mainstream. By the 1990s, model kits were produced in the US and the UK, as well as in Japan, and distributed through hobby and comic stores. There was an unprecedented variety of licensed model figure kits.
In the late 1990s, model kit sales went down. Hobby and comic stores and their distributors began either carrying fewer garage kits or closing down, along with their producers.
As of 2009, there are two American garage kit magazines, Kitbuilders Magazine and Amazing Figure Modeler, and there are garage kit conventions held annually, like WonderFest USA in Louisville, Kentucky.
Production
Garage kits are generally produced in small quantities, from the tens to a few hundred copies, compared to injection-molded plastic kits which are produced in many thousands. This is due to the labor-intensive nature of the manufacturing process and the relatively low market demand.
Resin casting garage kit production is the most labor-intensive. The upside is that creating the initial mold is much less costly than in the injection-molding process.
Vinyl garage kits are produced by using liquid vinyl Plastisol in a spin casting process known as slush molding. It is more complex than resin casting, but less expensive and less sophisticated than the injection molding used for most plastic products. It is not something that is commonly done in a basement or garage.
Intellectual property issues
The legality of amateur garage kits can be questionable as they are not always properly licensed. The model might be of a copyrighted character or design that was produced by fans because no official model exists.
The relatively low initial investment and ease of resin casting mean that it is also easy to create recast copies of existing original kits. Recasts are produced by making molds of parts from original model kits and then casting new copies from the new molds. This can be done for personal use, such as modification of an existing kit, but unlicensed recast copies are sometimes sold unlawfully. In some cases the original kit is no longer available, but in others it is still in active production. The recasts can be of officially licensed model kits, but when they are of unlicensed kits the sculptor usually has a hard time pursuing litigation. Recasts produced in Thailand are usually of inferior quality; however, some recasters in Hong Kong rival the originals in quality and casting, offered at prices that undercut the original. Recast kits can be found on online auction sites, where they can be difficult to control due to cumbersome site policies and seller pseudonymity. Many recasters are in East Asia, but they can be found all over the globe.
In an effort to legitimize amateur garage kit production and sales in Japan, it is not uncommon for a license holder to issue a "single day license" (当日版権システム) where, for one day only, license is granted for the sale of amateur garage kits. These licensing agreements are typically negotiated between an event organizer (Wonder Festival, Character Hobby, Figure Mania, etc.) and various licensing entities for license to characters from specific TV shows and movies. Typically, the event organizer publishes a list of licenses available in advance, and sculptors intending to sell their sculptures then submit applications (including photos of their sculpture) for approval. Applications may be rejected.
References
External links
Federation Models
Volks
Be-J
Kaiyodo
Kotobukiya
Scale modeling
Toy figurines | Garage kit | [
"Physics"
] | 1,537 | [
"Scale modeling"
] |
2,284,446 | https://en.wikipedia.org/wiki/Sebastes | Sebastes is a genus of marine ray-finned fish belonging to the subfamily Sebastinae part of the family Scorpaenidae, most of which have the common name of rockfish. A few are called ocean perch, sea perch or redfish instead. They are found in the Atlantic and Pacific Oceans.
Taxonomy
Sebastes was first described as a genus in 1829 by the French zoologist Georges Cuvier. In 1876 the Dutch ichthyologist Pieter Bleeker designated Perca norvegica, which may have been originally described by the Norwegian zoologist Peter Ascanius in 1772, as the type species. The genus is the type genus of both the tribe Sebastini and the subfamily Sebastinae, although some authorities treat these as the subfamily Sebastinae and the family Sebastidae, separating the Sebastidae as a distinct family from the Scorpaenidae, while other authorities place the genus in the order Perciformes in the suborder Scorpaenoidei.
Some authorities subdivide this large genus into subgenera as follows:
Sebastes Cuvier, 1829
S. fasciatus
S. mentella
S. norvegicus
S. viviparus
Acutomentum Eigenmann & Beeson, 1893
S. alutus
S. baramenuke
S. brevispinis
S. entomelas
S. flammeus
S. hopkinsi
S. iracundus
S. kiyomatsui
S. macdonaldi
S. minor
S. ovalis
S. rufus
S. scythropus
S. wakiyai
Allosebastes Hubbs, 1951
S. cortezi
S. diploproa
S. emphaeus
S. peduncularis
S. proriger
S. rufinanus
S. saxicola
S. semicinctus
S. sinensis
S. variegatus
S. varispinis
S. wilsoni
S. zacentrus
Auctospina Eigenmann & Beeson 1893
S. auriculatus
S. dallii
Emmelas Jordan & Evermann 1898
S. glaucus
Eosebastes Jordan & Evermann, 1896
S. aurora
S. crameri
S. melanosema
S. melanostomus
Hatumeus Matsubara, 1943
S. owstoni
Hispaniscus Jordan & Evermann, 1896
S. elongatus
S. levis
S. rubrivinctus
Mebarus Matsubara 1943
S. atrovirens
S. cheni
S. inermis
S. joyneri
S. taczanowskii
S. thompsoni
S. ventricosus
Murasoius Matsubara 1943
S. nudus
S. pachycephalus
Neohispaniscus Matsubara 1943
S. schlegelii
S. vulpes
S. zonatus
Pteropodus Eigenmann & Beeson, 1893
S. carnatus
S. caurinus
S. chrysomelas
S. hubbsi
S. longispinis
S. maliger
S. nebulosus
S. nivosus
S. rastrelliger
S. trivittatus
Rosicola Jordan & Evermann, 1896
S. babcocki
S. miniatus
S. pinniger
Sebastichthys Gill, 1862
S. nigrocinctus
Sebastocarus Jordan & Evermann, 1927
S. serriceps
Sebastodes Gill, 1861
S. goodei
S. itinus
S. jordani
S. paucispinis
S. steindachneri
Sebastomus Gill, 1864
S. capensis
S. chlorostictus
S. constellatus
S. ensifer
S. eos
S. exsul
S. helvomaculatus
S. lentiginosus
S. notius
S. oculatus
S. rosaceus
S. rosenblatti
S. serranoides
S. simulator
S. spinorbis
S. umbrosus
Sebastopyr Jordan & Evermann, 1927
S. ruberrimus
Sebastosomus Gill, 1864
S. ciliatus
S. diaconus
S. flavidus
S. melanops
S. mystinus
S. variabilis
Takenokius Matsubara, 1943
S. oblongus
Zalopyr Jordan & Evermann, 1898
S. aleutianus
S. borealis
S. matsubarae
S. melanostictus
Incertae sedis
S. gilli
S. koreanus
S. moseri
S. phillipsi
S. polyspinis
S. reedi
The genus name is derived from the Greek Sebastos, an honorific used in ancient Greek for the Roman imperial title of Augustus, an allusion to the old name for S. norvegicus on Ibiza, its type locality, which Cuvier translated as "august" or "venerable".
The fossil record of rockfish goes back to the Miocene, with unequivocal whole body fossils and otoliths from California and Japan (although fossil otoliths from Belgium, "Sebastes" weileri, may push the record back as far as the early Oligocene).
Species
Sebastes contains 109 recognized extant species.
Characteristics
Sebastes species have bodies which vary from elongate to deep and which may be moderately to highly compressed, with a comparatively large head. Their eyes vary from large to small. Spines on the head may be present or absent; if present, they can be small and weak or robust, and there can be up to 8 of them. They lack a spiny horizontal ridge below the eye. The jaws have many small conical teeth, and there are teeth on the roof of the mouth. The single dorsal fin is typically strongly incised at the posterior of the spiny portion, which contains 12–15 robust, venom-bearing spines; to the rear of these are 9–16 soft rays. The anal fin has 2–4 spines and 6 to 11 soft rays. There is a spine in each of the pelvic fins as well as 5 soft rays, and these fins are placed under the pectoral fins. The pectoral fins are large and may be rounded or pointed in shape, with 14–22 soft rays, the longest being the central rays. The caudal fin is straight to slightly concave. The lateral line may have pored or tubed scales. They vary in size from a maximum total length of in S. koreanus to in S. borealis.
Distribution
Sebastes rockfish are found in the temperate North and South Pacific and Atlantic Oceans. Rockfish range from the intertidal zone to almost deep, usually living benthically on various substrates, often, as the name suggests, around rock outcrops.
Biology
Sebastes rockfish may be long-lived, amongst the longest-living fish on earth, with several species known to surpass 100 years of age, and a maximum reported age of 205 years for S. aleutianus.
Ecotoxicology, radioecology
Like all carnivores, these fish can bioaccumulate pollutants and radionuclides such as cesium. Highly radioactive rockfish have been caught in a port near Fukushima city, Japan, not far from the Fukushima Daiichi Nuclear Power Plant, nearly two years after the nuclear disaster: for example, 107,000 Bq/kg (2013-02-12), 116,000 Bq/kg (2013-02-13) and 132,000 Bq/kg (2013-02-13), respectively 1,070, 1,160, and 1,320 times more than the maximum allowed by Japanese authorities (as updated on April 1, 2012).
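The quoted multiples follow directly from the 100 Bq/kg radiocesium limit for general foodstuffs that the article's figures imply; a minimal check:

```python
JAPAN_LIMIT_BQ_PER_KG = 100  # general-foodstuff radiocesium limit, April 2012

def times_over_limit(measured_bq_per_kg):
    """How many times a measurement exceeds the regulatory limit."""
    return measured_bq_per_kg / JAPAN_LIMIT_BQ_PER_KG

for sample in (107_000, 116_000, 132_000):
    print(f"{sample} Bq/kg = {times_over_limit(sample):.0f}x the limit")
```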
Fisheries
Sebastes rockfish are important sport and commercial fish, and many species have been overfished. As a result, seasons are tightly controlled in many areas. Sebastes species are sometimes fraudulently substituted for the more expensive northern red snapper (Lutjanus campechanus).
References
Sebastini
Ray-finned fish genera
Game fish
Negligibly senescent organisms
Extant Rupelian first appearances
Taxa named by Georges Cuvier
Rupelian genus first appearances | Sebastes | [
"Biology"
] | 1,725 | [
"Senescence",
"Negligibly senescent organisms",
"Organisms by adaptation"
] |
2,284,519 | https://en.wikipedia.org/wiki/Stepper | A stepper or wafer stepper is a device used in the manufacture of integrated circuits (ICs). It is an essential part of the process of photolithography, which creates millions of microscopic circuit elements on the surface of silicon wafers out of which chips are made. It is similar in operation to a slide projector or a photographic enlarger. The ICs that are made form the heart of computer processors, memory chips, and many other electronic devices.
Stepper is short for step-and-repeat camera.
The stepper emerged in the late 1970s but did not become widespread until the 1980s. This was because it was replacing an earlier technology, the mask aligner. Aligners imaged the entire surface of a wafer at the same time, producing many chips in a single operation. In contrast, the stepper imaged only one chip at a time, and was thus much slower to operate. The stepper eventually displaced the aligner when the relentless forces of Moore's Law demanded that smaller feature sizes be used. Because the stepper imaged only one chip at a time it offered higher resolution and was the first technology to exceed the 1 micron limit. The addition of auto-alignment systems reduced the setup time needed to image multiple ICs, and by the late 1980s, the stepper had almost entirely replaced the aligner in the high-end market.
The stepper was itself replaced by the step-and-scan systems (scanners) which offered an additional order of magnitude resolution advance. Step-and-scan systems work by scanning only a small portion of the mask for an individual IC, and thus require much longer operation times than the original steppers. Step-and-scan systems became widespread during the 1990s and essentially universal by the 2000s. Today, step-and-scan systems are so widespread that they are often simply referred to as steppers. An example of a step-and-scan system is the PAS 5500 from ASML.
History
1957: Attempts to miniaturize electronic circuits started back in 1957, when Jay Lathrop and James Nall of the U.S. Army's Diamond Ordnance Fuze Laboratories were granted US patent 2,890,395 for a photolithographic technique that could be used to deposit thin-film metal strips that were in turn used to connect discrete transistors on a ceramic plate. It was also used to etch holes in silicon dioxide (SiO2) layers to microfabricate diode arrays. Later, in 1959, Lathrop went to Texas Instruments, working for Jack Kilby, and Nall joined Fairchild Semiconductor.
1958: Based on their works, Jay Last and Robert Noyce at Fairchild Semiconductor built one of the first «step-and-repeat» cameras that repeated identical patterns of the transistors on a single wafer using photolithography.
1959 (or no later than 1961): The David W. Mann division of GCA Corporation became the first company to make commercial step-and-repeat mask reduction devices called photo-repeaters, which were the predecessors of modern-day photolithography steppers. The company was later sold to GCA Corporation/Mann and Perkin Elmer.
1970: The Cobilt company was founded by a group of three engineers from Germany and England (from Kasper Instruments) and one salesman, Peter Wolken. The company made what would later be called wafer steppers or lithography machines, at the time referred to as mask aligners. The throughput of this machine was one 2-inch wafer at a time.
Cobilt, which also traded abroad and had plants in Hong Kong, was originally represented in Europe by a company called Advanced Semiconductor Materials (ASM), founded in Holland in the mid-1960s, which later co-founded ASML.
Around 1971, Cobilt was acquired by Computervision, which greatly automated the Cobilt machine.
1973: Perkin-Elmer introduced the Micralign projection aligner. It helped to decrease the number of defective chips that resulted in low yields, and it greatly boosted the IC industry by helping to lower prices on chips.
GCA introduced the first successful stepper, the DSW 4800, in 1975. It could reach critical dimensions of 1 micron, better than any other system at the time.
The stepper's role in photolithography
Integrated circuits (ICs) are produced in a process known as photolithography. The process starts with a large highly purified cylindrical crystal of the semiconductor material known as a boule. Thin slices are cut off the boule to form disks, and then undergo initial processing and treatment to create a blank silicon wafer.
Elements of the circuit to be created on the IC are reproduced in a pattern of transparent and opaque areas on the surface of a glass or plastic plate called a photomask or reticle. The wafer is coated with a photosensitive material called photoresist. The mask is positioned over the wafer and bright light, normally ultraviolet, is shone through the mask. Exposure to the light causes sections of the resist to either harden or soften, depending on the process.
After exposure, the wafer is developed like photographic film, causing the photoresist to dissolve in certain areas according to the amount of light the areas received during exposure. These areas of photoresist and no photoresist reproduce the pattern on the reticle. The developed wafer is then exposed to solvents. The solvent etches away the silicon in the parts of the wafer that are no longer protected by the photoresist coating. Other chemicals are used to change the electrical characteristics of the silicon in the bare areas.
The wafer is then cleaned, recoated with photoresist, then passed through the process again in a process that creates the circuit on the silicon, layer by layer. Once the entire process is complete, the wafer is sawn apart into individual chips, tested, and packaged for sale.
Aligners vs. steppers
Before steppers, wafers were exposed using mask aligners, which patterned the entire wafer at once. Masks for these systems would contain many individual ICs patterned across the mask. Between each step, the operator would use a microscope to align the wafer with the next mask to be applied. During the 1970s, aligners generally worked at a one-to-one magnification, which limited the amount of detail on the wafer to about whatever could be produced on the mask.
As feature sizes shrank, following Moore's law, the construction of these complex multi-chip masks became very difficult. In 1975, GCA introduced the first step-and-repeat camera, which simplified the process of making masks. In this system, a single parent mask, known as the reticle, was produced at large scale so it could be mechanically robust. This was imaged through a photographic projector, shrinking the projected image 5 to 10 times. The mechanism imaged the reticle onto a photographic plate, moved the reticle to another position, and repeated this process. The result was a mask containing many precise images of the original reticle pattern.
GCA continued development of the hardware as a direct-to-wafer system, eliminating the need to produce a mask from the reticle and instead using the reticle to expose the wafer directly. Because the reticle was at a much larger scale than the final image, the resolution could be improved, as this was formerly limited to the resolution of the mask itself. To pattern the entire wafer, the mask is repeatedly moved, or stepped, across the surface of the wafer. This requires the stepping mechanism to be incredibly accurate, demanding precise alignment. The alignment process is normally automated, eliminating manual operation. As each exposure takes as long as the entire mask in an aligner, steppers are inherently slower to use than aligners, so aligners remain in use for roles where higher resolutions are not required.
Steppers increased the possible resolution many times over that of the aligners and were the first systems to allow features smaller than 1 micron. However, the relentless drive of Moore's law pushed the industry to the point where even the maximum magnifications possible in the projection system were not enough to continue shrinking the feature sizes. This led to the 1990 introduction of the step-and-scan systems, which combine a stepper system with a scanner that images only a portion of the mask at a time. Doing so allows much better focus over the tiny part of the mask, although it also makes the IC production process much slower. As of 2008, step-and-scan systems are the most widely used systems for high-end semiconductor device fabrication.
Major subassemblies
A typical stepper has the following subassemblies: wafer loader, wafer stage, wafer alignment system, reticle loader, reticle stage, reticle alignment system, reduction lens, and illumination system. Process programs for each layer printed on the wafer are executed by a control system centering on a computer that stores the process program, reads it, and communicates with the various subassemblies of the stepper in carrying out the program's instructions. The components of the stepper are contained in a sealed chamber that is maintained at a precise temperature to prevent distortions in the printed patterns that might be caused by expansion or contraction of the wafer due to temperature variations. The chamber also contains other systems that support the process, such as air conditioning, power supplies, control boards for the various electrical components, and others.
Basic operation
The silicon wafers are coated with photoresist, and placed in a cassette or "boat" that holds a number of wafers. This is then placed in a part of the stepper called the wafer loader, usually located at the lower front of the stepper.
A robot in the wafer loader picks up one of the wafers from the cassette and loads it onto the wafer stage where it is aligned to enable another, finer alignment process that will occur later on.
The pattern of the circuitry for each chip is contained in a pattern etched in chrome on the reticle, which is a plate of transparent quartz. A typical reticle used in steppers is 6 inches square and has a usable area of 104 mm by 132 mm.
A variety of reticles, each appropriate for one stage in the process, are contained in a rack in the reticle loader, usually located at the upper front of the stepper. Before the wafer is exposed a reticle is loaded onto the reticle stage by a robot, where it is also very precisely aligned. Since the same reticle can be used to expose many wafers, it is loaded once before a series of wafers is exposed, and is realigned periodically.
Once the wafer and reticle are in place and aligned, the wafer stage, which is moved very precisely in the X and Y directions (front to back and left to right) by worm screws or linear motors, carries the wafer so that the first of the many patterns (or "shots") to be exposed on it is located below the lens, directly under the reticle.
Although the wafer is aligned after it is placed on the wafer stage, this alignment is not sufficient to ensure that the layer of circuitry to be printed onto the wafer exactly overlays previous layers already there. Therefore, each shot is aligned using special alignment marks that are located in the pattern for each final IC chip. Once this fine alignment is completed, the shot is exposed by light from the stepper's illumination system that passes through the reticle, through a reduction lens, and on to the surface of the wafer. A process program or "recipe" determines the length of the exposure, the reticle used, as well as other factors that affect the exposure.
Each shot is located in a grid pattern on the wafer and is exposed in turn as the wafer is stepped back and forth under the lens. When all shots on the wafer are exposed, the wafer is unloaded by the wafer loader robot, and another wafer takes its place on the stage. The exposed wafer is eventually moved to a developer where the photoresist on its surface is exposed to developing chemicals that wash away areas of the photoresist, based on whether or not they were exposed to the light passing through the reticle. The developed surface is then subjected to other processes of photolithography.
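The number of shots stepped across a wafer can be estimated from the wafer and exposure-field sizes. A rough sketch that counts grid fields lying entirely within the circular wafer (the 300 mm wafer and 26 x 33 mm field dimensions are illustrative assumptions, not from the article):

```python
import math

def count_full_shots(wafer_diameter_mm, field_w_mm, field_h_mm):
    """Count exposure fields, on a grid centered on the wafer,
    that fit entirely within the circular wafer."""
    r = wafer_diameter_mm / 2.0
    nx = math.ceil(wafer_diameter_mm / field_w_mm)
    ny = math.ceil(wafer_diameter_mm / field_h_mm)
    shots = 0
    for i in range(-nx, nx + 1):
        for j in range(-ny, ny + 1):
            # the four corners of this field, relative to the wafer center
            xs = (i * field_w_mm, (i + 1) * field_w_mm)
            ys = (j * field_h_mm, (j + 1) * field_h_mm)
            if all(math.hypot(x, y) <= r for x in xs for y in ys):
                shots += 1
    return shots

# A 300 mm wafer with a 26 x 33 mm field: on the order of 60 full shots,
# each of which must be aligned and exposed in turn.
print(count_full_shots(300, 26, 33))
```

Real steppers also expose partial edge fields, so this undercounts the total; it only illustrates why per-shot alignment and stepping dominate throughput.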
Illumination and the challenges of improving resolution
The greatest limitation on the ability to produce increasingly finer lines on the surface of the wafer has been the wavelength of the light used in the exposure system. As the required lines have become narrower and narrower, illumination sources producing light with progressively shorter wavelengths have been put into service in steppers and scanners. An alternative to conventional light based lithography is nanoimprint lithography.
The ability of an exposure system, such as a stepper, to resolve narrow lines is limited by the wavelength of the light used for illumination, the ability of the lens to capture light (or actually orders of diffraction) coming at increasingly wider angles (called numerical aperture or N.A.), and various improvements in the process itself. This is expressed by the following equation:
CD = k1 × λ / NA, where CD is the critical dimension, or finest line resolvable, k1 is a coefficient expressing process-related factors, λ is the wavelength of the light, and NA is the numerical aperture. Decreasing the wavelength of the light in the illumination system increases the resolving power of the stepper.
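Plugging representative numbers into the Rayleigh criterion CD = k1 × λ / NA shows how shorter wavelengths and larger apertures shrink the printable line; the k1 and NA values below are illustrative assumptions, not figures from the article:

```python
def critical_dimension_nm(k1, wavelength_nm, numerical_aperture):
    """Rayleigh criterion: CD = k1 * wavelength / NA."""
    return k1 * wavelength_nm / numerical_aperture

# Illustrative source/lens generations (k1 and NA values are assumed):
for name, wl, na in [("g-line", 436, 0.35),
                     ("i-line", 365, 0.50),
                     ("KrF", 248, 0.60),
                     ("ArF", 193, 0.90)]:
    print(f"{name}: ~{critical_dimension_nm(0.6, wl, na):.0f} nm")
```

With these assumed values, the g-line row comes out near 750 nm, consistent with the line widths quoted for mercury-lamp steppers below.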
Twenty years ago, the "g-line" (436 nm) of the mercury spectrum was used to create lines in the 750 nm range in steppers that employed mercury lamps as their illumination source. Several years later systems employing the ultraviolet "i-line" (365 nm) from mercury lamps were introduced to create lines as low as 350 nm. As the desired line widths approached and eventually became narrower than the wavelength of the light used to create them, a variety of resolution enhancement techniques were developed to make this possible, such as phase shifting reticles and various techniques for manipulating the angles of the exposure light in order to maximize the resolving power of the lens.
Eventually, however, the desired line widths became narrower than what was possible using mercury lamps, and near the middle of the 2000s the semiconductor industry moved towards steppers that employed krypton-fluoride (KrF) excimer lasers producing 248 nm light. Such systems are currently being used to produce lines in the 110 nm range. Lines as low as 32 nm are being resolved by production-capable steppers using argon-fluoride (ArF) excimer lasers that emit light with a wavelength of 193 nm. Although fluorine (F2) lasers are available that produce 157 nm light, they are not practical because of their low power and because they quickly degrade photoresist and other materials used in the stepper.
Since practical light sources with wavelengths narrower than these lasers have not been available, manufacturers have sought to improve resolution by reducing the process coefficient k1. This is done by further improving techniques for manipulating the light as it passes through the illumination system and the reticle, as well as improving techniques for processing the wafer before and after exposure. Manufacturers have also introduced ever larger and more expensive lenses as a means of increasing the numerical aperture. However, these techniques are approaching their practical limit, and line widths in the 45 nm range appear to be near the best that can be achieved with conventional design.
Ultimately, other sources of illumination will have to be put to use, such as electron beams, x-rays or similar sources of electromagnetic energy with wavelengths much shorter than visible light. However, in order to delay as long as possible the vast expense and difficulty of adopting a whole new type of illumination technology, manufacturers have turned to a technique, previously used in microscopes, for increasing the numerical aperture of the lens by allowing the light to pass through water instead of air. This method, called immersion lithography, is the current cutting edge of practical production technology. It works because numerical aperture is a function of the maximum angle of light that can enter the lens and the refractive index of the medium through which the light passes. When water is employed as the medium, it greatly increases numerical aperture, since it has a refractive index of 1.44 at 193 nm, while air has an index of 1.0003. Current production machines employing this technology are capable of resolving lines in the 32 nm range, and may eventually be able to achieve lines of 30 nm.
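The gain from immersion can be illustrated with the definition NA = n·sin(θ). The maximum acceptance half-angle below is an assumed figure, not a measured lens property; the refractive indices are those given above:

```python
import math

def numerical_aperture(n_medium: float, theta_max_deg: float) -> float:
    """NA = n * sin(theta): refractive index of the medium times the sine of
    the maximum half-angle of light the lens can accept."""
    return n_medium * math.sin(math.radians(theta_max_deg))

theta = 70.0  # assumed maximum acceptance half-angle, in degrees
na_air = numerical_aperture(1.0003, theta)   # dry lithography
na_water = numerical_aperture(1.44, theta)   # water at 193 nm
print(f"dry NA = {na_air:.2f}, immersion NA = {na_water:.2f}")
```

With the same lens geometry, switching the medium from air to water raises the numerical aperture by a factor of about 1.44, which is why immersion tools can exceed NA = 1 while dry tools cannot.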
Scanners
Modern scanners are steppers that increase the length of the area exposed in each shot (the exposure field) by moving the reticle stage and wafer stage in opposite directions to each other during the exposure. Instead of exposing the entire field at once, the exposure is made through an "exposure slit" that is as wide as the exposure field, but only a fraction of its length (such as a 9x25 mm slit for a 35x25 mm field). The image from the exposure slit is scanned across the exposure area.
There are several benefits to this technique. The field can be exposed with a lesser reduction of size from the reticle to the wafer (such as 4x reduction on a scanner, compared with 5x reduction on a stepper), while allowing a field size much larger than that which can be exposed with a typical stepper. Also the optical properties of the projection lens can be optimized in the area through which the image of the projection slit passes, while optical aberrations can be ignored outside of this area, because they will not affect the exposed area on the wafer.
Successful scanning requires extremely precise synchronization between the moving reticle and wafer stages during the exposure. Accomplishing this presents many technological challenges.
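The synchronization constraint can be made concrete with a small sketch. The reduction ratio and the slit and field dimensions come from the description above, while the wafer-stage scan speed is an assumed figure:

```python
# In a step-and-scan system the reticle image is reduced (e.g. 4x) onto the
# wafer, so during a scan the reticle stage must travel REDUCTION times the
# wafer-stage distance, in the opposite direction, in the same time.
REDUCTION = 4          # reticle-to-wafer reduction ratio
FIELD_LEN_MM = 35.0    # exposure-field length along the scan direction
SLIT_LEN_MM = 9.0      # slit length along the scan direction
WAFER_SPEED = 250.0    # wafer-stage scan speed in mm/s (assumed)

reticle_travel = REDUCTION * FIELD_LEN_MM   # distance the reticle stage covers
reticle_speed = REDUCTION * WAFER_SPEED     # required reticle-stage speed
# The slit must enter and fully clear the field, so one scan covers the
# field length plus the slit length at the wafer-stage speed.
scan_time_s = (FIELD_LEN_MM + SLIT_LEN_MM) / WAFER_SPEED
print(f"reticle: {reticle_travel} mm at {reticle_speed} mm/s, "
      f"scan time {scan_time_s * 1000:.0f} ms")
```

Any velocity mismatch between the two stages during the scan smears the projected image, which is why the synchronization must be so precise.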
See also
Stepping level
Photolithography
Extreme ultraviolet lithography
Semiconductor
Integrated circuit
Stepper makers:
ASML
Ultratech
Nikon, Precision division
Canon
Ushio, Inc.
Rudolph Technologies
SUSS MicroTec
MAPPER LLC.
Notes
References
Lithography (microfabrication)
Semiconductor device fabrication | Stepper | [
"Materials_science"
] | 3,754 | [
"Semiconductor device fabrication",
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
2,284,563 | https://en.wikipedia.org/wiki/Conditioned%20taste%20aversion | Conditioned taste aversion occurs when an animal acquires an aversion to the taste of a food that was paired with aversive stimuli. Such aversions develop more strongly for stimuli that cause nausea than for other aversive stimuli. This is considered an adaptive trait or survival mechanism that enables the organism to avoid poisonous substances (e.g., poisonous berries) before they cause harm. The aversion reduces the likelihood of consuming the same substance (or something that tastes similar) in the future, thus avoiding poisoning.
Studies on conditioned taste aversion that involved irradiating rats were conducted in the 1950s by John Garcia, leading to it sometimes being called the Garcia effect.
Conditioned taste aversion can occur when sickness is merely coincidental to, and not caused by, the substance consumed. For example, a person who becomes very sick after consuming tequila-and-orange-juice cocktails may then become averse to the taste of orange juice, even though the sickness was caused by the over-consumption of alcohol. Under these circumstances, conditioned taste aversion is sometimes known as the Sauce-Béarnaise syndrome, a term coined by Seligman and Hager.
Garcia's study
While studying the effects of radiation on various behaviors in the mid to late 1950s, Garcia noticed that rats developed an aversion to substances consumed prior to being irradiated. To examine this, Garcia put together a study in which three groups of rats were given sweetened water followed by either no radiation, mild radiation, or strong radiation. When rats were subsequently given a choice between sweetened water and regular tap water, rats who had been exposed to radiation drank much less sweetened water than those who had not.
This finding was surprising in that the aversion could occur after just a single trial and with a long delay between the stimuli. Most research at the time found that learning required multiple trials and shorter latencies. Many scientists were skeptical of Garcia's findings because they conflicted with a broad array of previous results indicating that any neutral stimulus could be made aversive or reinforcing by pairing it with aversive or reinforcing stimuli. However, Garcia's results were clear, and replicated in different settings. A compelling experiment compared pairing of nausea with taste, versus pairing with light and sound stimuli ("bright noisy water"). Strong aversions developed to the taste stimuli but not to the light and sound. This demonstrated that the particular stimulus used in conditioning can matter: some stimulus pairings generate stronger aversion than others. Conditioned taste aversion can also be species-specific, which was also a challenge to previous results.
Overview
Taste aversion does not require cognitive awareness to develop—that is, the organism does not have to consciously recognize a connection between the perceived cause (the taste) and effect (the aversion). In fact, the subject may hope to enjoy the substance, but the aversion persists.
Also, taste aversion generally requires only one trial. Classical conditioning requires several pairings of the neutral stimulus (e.g., a ringing bell) with the unconditioned stimulus (i.e., meat powder) before the neutral stimulus elicits the response (salivation). With taste aversion, the food can become aversive after just one pairing with sickness. In addition, respondent behavior generally requires brief intervals between the neutral stimulus and the unconditioned stimulus. With taste aversion, however, the hotdog a person eats at lunch may become aversive as a result of the vomiting that person has in the evening.
If the flavor has been encountered before the organism becomes ill, the effect will not be as strong or will not be present. This quality is called latent inhibition. Conditioned taste aversion is often used in laboratories to study gustation and learning in rats.
Aversions can be developed to odors as well as tastes.
Common vampire bats (Desmodus rotundus) do not learn taste aversions despite being closely related to other species of bats that do. The diet of common vampire bats only consists of vertebrate blood and therefore it is hypothesized that the development of a taste aversion to their only food source would not be advantageous for these animals.
In humans
Taste aversion is common in humans. When humans eat bad food (e.g., spoiled meat) and get sick, they may later find that particular food aversive. The food does not have to cause the sickness for it to become aversive. A human who eats sushi for the first time and who happens to come down with an unrelated stomach virus may still develop an aversion to sushi. Even something as obvious as riding a roller coaster (causing nausea) after eating the sushi will influence the development of taste aversion to sushi. Humans might also develop aversions to certain types of alcoholic beverages because of vomiting during intoxication. This is the mechanism of disulfiram, a drug used to treat alcohol dependence: it inhibits aldehyde dehydrogenase, causing a rapid buildup of the hangover-causing compound acetaldehyde when alcoholic beverages are consumed, thereby pairing a negative stimulus with the consumption of alcohol. Certain cephalosporin antibiotics, such as cefotetan and cefazolin, are metabolized to a compound, 1-methylthiotetrazole (1-MTT), that similarly blocks acetaldehyde metabolism and can cause the same reaction upon alcohol ingestion.
Taste aversion is a common problem with chemotherapy patients, who become nauseated because of the drug therapy and then develop aversions to food.
Applications
Taste aversion has been demonstrated in a wide variety of both captive and free-ranging predators. In these studies, animals that consume a bait laced with an undetectable dose of an aversion agent avoid both baits and live prey with the same taste and scent as the baits. When predators detect the aversion agent in the baits, they quickly form aversions to the baits, but discriminate between these and different-tasting live prey.
Stimulus generalization
Stimulus generalization is another learning phenomenon that can be illustrated by conditioned taste aversion. This phenomenon demonstrates that we tend to develop aversions even to types of food that resemble the foods which caused us illness. For example, if one eats an orange and gets sick, one might also avoid eating tangerines and clementines because they smell and taste similar to oranges.
Stimulus generalization applies beyond food tastes and aversion. Trauma and aversive events of all kinds create aversions and generalizations to other events. And like taste aversion, the generalization may or may not be conscious. Stimulus generalization is a factor in "superstitious behavior", racism and prejudice of all kinds.
Compared with taste avoidance
Although the terms "taste avoidance" and "taste aversion" are often used interchangeably, studies with rats indicate they are not necessarily synonymous. A rat may avoid a food yet still enjoy it and choose it over others. Rats tested with a sucrose solution paired with a drug that gave positive, euphoric effects, such as amphetamine, cocaine, or morphine, emitted positive reactions to the drugs, yet they avoided approaching these solutions. When one of these solutions was placed next to a solution to which the rats had acquired a conditioned taste aversion, the rats would choose the aversive solution. Scientists theorize that in terms of evolution, because rats are unable to vomit and immediately purge toxins, rats have developed a strong "first line of defense", which is their sense of taste and smell. This further shows the importance of taste and the correlation between taste and any change in physiological state, whether it be good or bad. Because rats rely upon taste and pairing it with a reaction rather than relying on later responses that involve the gastrointestinal tract, taste avoidance is just as prevalent as taste aversion, though the two don't necessarily go hand in hand.
See also
Pavlovian-instrumental transfer
Poison shyness
References
Animal cognition
Behaviorism
Eating behaviors
Gustation
Ethology | Conditioned taste aversion | [
"Biology"
] | 1,697 | [
"Behavior",
"Animals",
"Biological interactions",
"Eating behaviors",
"Behavioural sciences",
"Behaviorism",
"Animal cognition",
"Ethology"
] |
2,285,007 | https://en.wikipedia.org/wiki/Regenerative%20medicine | Regenerative medicine deals with the "process of replacing, engineering or regenerating human or animal cells, tissues or organs to restore or establish normal function". This field holds the promise of engineering damaged tissues and organs by stimulating the body's own repair mechanisms to functionally heal previously irreparable tissues or organs.
Regenerative medicine also includes the possibility of growing tissues and organs in the laboratory and implanting them when the body cannot heal itself. When the cell source for a regenerated organ is derived from the patient's own tissue or cells, the challenge of organ transplant rejection via immunological mismatch is circumvented. This approach could alleviate the problem of the shortage of organs available for donation.
Some of the biomedical approaches within the field of regenerative medicine may involve the use of stem cells. Examples include the injection of stem cells or progenitor cells obtained through directed differentiation (cell therapies); the induction of regeneration by biologically active molecules administered alone or as a secretion by infused cells (immunomodulation therapy); and transplantation of in vitro grown organs and tissues (tissue engineering).
History
As early as the 700s BC, the ancient Greeks speculated about whether parts of the body could be regenerated. Skin grafting, invented in the late 19th century, can be thought of as the earliest major attempt to recreate bodily tissue to restore structure and function. Advances in transplanting body parts in the 20th century further supported the idea that body parts could regenerate and grow new cells. These advances led to tissue engineering, and from this field, the study of regenerative medicine expanded and began to take hold. This began with cellular therapy, which led to the stem cell research that is widely being conducted today.
The first cell therapies were intended to slow the aging process. This began in the 1930s with Paul Niehans, a Swiss doctor who was known to have treated famous historical figures such as Pope Pius XII, Charlie Chaplin, and King Ibn Saud of Saudi Arabia. Niehans would inject cells of young animals (usually lambs or calves) into his patients in an attempt to rejuvenate them. In 1956, a more sophisticated process was created to treat leukemia by inserting bone marrow from a healthy person into a patient with leukemia. This process worked mostly due to both the donor and receiver in this case being identical twins. Nowadays, bone marrow can be taken from donors who are immunologically similar enough to the patient to prevent rejection.
The term "regenerative medicine" was first used in a 1992 article on hospital administration by Leland Kaiser. Kaiser's paper closes with a series of short paragraphs on future technologies that will impact hospitals. One paragraph had "Regenerative Medicine" as a bold print title and stated, "A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems."
The term was brought into the popular culture in 1999 by William A. Haseltine when he coined the term during a conference on Lake Como, to describe interventions that restore to normal function that which is damaged by disease, injured by trauma, or worn by time. Haseltine was briefed on the project to isolate human embryonic stem cells and embryonic germ cells at Geron Corporation in collaboration with researchers at the University of Wisconsin–Madison and Johns Hopkins School of Medicine. He recognized that these cells' unique ability to differentiate into all the cell types of the human body (pluripotency) had the potential to develop into a new kind of regenerative therapy. Explaining the new class of therapies that such cells could enable, he used the term "regenerative medicine" in the way that it is used today: "an approach to therapy that ... employs human genes, proteins and cells to re-grow, restore or provide mechanical replacements for tissues that have been injured by trauma, damaged by disease or worn by time" and "offers the prospect of curing diseases that cannot be treated effectively today, including those related to aging".
Later, Haseltine would go on to explain that regenerative medicine acknowledges the reality that most people, regardless of which illness they have or which treatment they require, simply want to be restored to normal health. Designed to be applied broadly, the original definition includes cell and stem cell therapies, gene therapy, tissue engineering, genomic medicine, personalized medicine, biomechanical prosthetics, recombinant proteins, and antibody treatments. It also includes more familiar chemical pharmacopeia—in short, any intervention that restores a person to normal health. In addition to functioning as shorthand for a wide range of technologies and treatments, the term "regenerative medicine" is also patient friendly. It solves the problem that confusing or intimidating language discourages patients.
The term regenerative medicine is increasingly conflated with research on stem cell therapies. Some academic programs and departments retain the original broader definition while others use it to describe work on stem cell research.
From 1995 to 1998 Michael D. West, PhD, organized and managed the research between Geron Corporation and its academic collaborators James Thomson at the University of Wisconsin–Madison and John Gearhart of Johns Hopkins University that led to the first isolation of human embryonic stem and human embryonic germ cells, respectively.
In March 2000, Haseltine, Antony Atala, M.D., Michael D. West, Ph.D., and other leading researchers founded E-Biomed: The Journal of Regenerative Medicine. The peer-reviewed journal facilitated discourse around regenerative medicine by publishing innovative research on stem cell therapies, gene therapies, tissue engineering, and biomechanical prosthetics. The Society for Regenerative Medicine, later renamed the Regenerative Medicine and Stem Cell Biology Society, served a similar purpose, creating a community of like-minded experts from around the world.
In June 2008, at the Hospital Clínic de Barcelona, Professor Paolo Macchiarini and his team, of the University of Barcelona, performed the first tissue engineered trachea (wind pipe) transplantation. Adult stem cells were extracted from the patient's bone marrow, grown into a large population, and matured into cartilage cells, or chondrocytes, using an adaptive method originally devised for treating osteoarthritis. The team then seeded the newly grown chondrocytes, as well as epithelial cells, into a decellularised (free of donor cells) tracheal segment that was donated from a 51-year-old transplant donor who had died of cerebral hemorrhage. After four days of seeding, the graft was used to replace the patient's left main bronchus. After one month, a biopsy elicited local bleeding, indicating that the blood vessels had already grown back successfully.
In 2009, the SENS Foundation was launched, with its stated aim as "the application of regenerative medicine – defined to include the repair of living cells and extracellular material in situ – to the diseases and disabilities of ageing". In 2012, Professor Paolo Macchiarini and his team improved upon the 2008 implant by transplanting a laboratory-made trachea seeded with the patient's own cells.
On September 12, 2014, surgeons at the Institute of Biomedical Research and Innovation Hospital in Kobe, Japan, transplanted a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells, which were differentiated from iPS cells through directed differentiation, into an eye of an elderly woman, who suffers from age-related macular degeneration.
In 2016, Paolo Macchiarini was dismissed from the Karolinska Institute in Sweden due to falsified test results and lies. The TV show Experimenten, aired on Swedish Television, detailed the lies and falsified results.
Research
Widespread interest and funding for research on regenerative medicine has prompted institutions in the United States and around the world to establish departments and research institutes that specialize in regenerative medicine including: The Department of Rehabilitation and Regenerative Medicine at Columbia University, the Institute for Stem Cell Biology and Regenerative Medicine at Stanford University, the Center for Regenerative and Nanomedicine at Northwestern University, the Wake Forest Institute for Regenerative Medicine, and the British Heart Foundation Centers of Regenerative Medicine at the University of Oxford. In China, institutes dedicated to regenerative medicine are run by the Chinese Academy of Sciences, Tsinghua University, and the Chinese University of Hong Kong, among others.
In dentistry
Regenerative medicine has been studied by dentists to find ways that damaged teeth can be repaired and restored to obtain natural structure and function. Dental tissues are often damaged due to tooth decay, and are often deemed to be irreplaceable except by synthetic or metal dental fillings or crowns, which requires further damage to be done to the teeth by drilling into them to prevent the loss of an entire tooth.
Researchers from King's College London have used a drug called Tideglusib, which is claimed to have the ability to regrow dentin, the second layer of the tooth beneath the enamel, which encases and protects the pulp (often referred to as the nerve).
Animal studies conducted on mice in Japan in 2007 showed great possibilities in regenerating an entire tooth. Some mice had a tooth extracted, and cells from bioengineered tooth germs were implanted into them and allowed to grow. The results were perfectly functioning and healthy teeth, complete with all three layers, as well as roots. These teeth also had the necessary ligaments to stay rooted in their sockets and allow for natural shifting. They contrast with traditional dental implants, which are restricted to one spot as they are drilled into the jawbone.
A person's baby teeth are known to contain stem cells that can be used for regeneration of the dental pulp after a root canal treatment or injury. These cells can also be used to repair damage from periodontitis, an advanced form of gum disease that causes bone loss and severe gum recession. Research is still being done to see if these stem cells are viable enough to grow into completely new teeth. Some parents even opt to keep their children's baby teeth in special storage with the thought that, when older, the children could use the stem cells within them to treat a condition.
Extracellular matrix
Extracellular matrix materials are commercially available and are used in reconstructive surgery, treatment of chronic wounds, and some orthopedic surgeries; as of January 2017 clinical studies were under way to use them in heart surgery to try to repair damaged heart tissue.
The use of fish skin, which naturally contains omega-3 fatty acids, has been developed by the Icelandic company Kerecis. Omega-3 is a natural anti-inflammatory, and the fish-skin material acts as a scaffold for cell regeneration. In 2016 their product Omega3 Wound was approved by the FDA for the treatment of chronic wounds and burns. In 2021 the FDA approved Omega3 Surgibind for use in surgical applications, including plastic surgery.
Cord blood
Though uses of cord blood beyond blood and immunological disorders are speculative, some research has been done in other areas. Any such potential beyond blood and immunological uses is limited by the fact that cord cells are hematopoietic stem cells (which can differentiate only into blood cells), and not pluripotent stem cells (such as embryonic stem cells, which can differentiate into any type of tissue). Cord blood has been studied as a treatment for diabetes. However, apart from blood disorders, the use of cord blood for other diseases is not a routine clinical modality and remains a major challenge for the stem cell community.
Along with cord blood, Wharton's jelly and the cord lining have been explored as sources for mesenchymal stem cells (MSC), and as of 2015 had been studied in vitro, in animal models, and in early stage clinical trials for cardiovascular diseases, as well as neurological deficits, liver diseases, immune system diseases, diabetes, lung injury, kidney injury, and leukemia.
See also
References
Further reading
Non-technical further reading
Regenerative Medicine, gives more details about Regenerative Stem Cells.
Kevin Strange and Viravuth Yin, "A Shot at Regeneration: A once abandoned drug compound shows an ability to rebuild organs damaged by illness and injury", Scientific American, vol. 320, no. 4 (April 2019), pp. 56–61.
Technical further reading
Vertebrate developmental biology
Regenerative biomedicine
Tissue engineering | Regenerative medicine | [
"Chemistry",
"Engineering",
"Biology"
] | 2,608 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
2,285,013 | https://en.wikipedia.org/wiki/Cell%20therapy | Cell therapy (also called cellular therapy, cell transplantation, or cytotherapy) is a therapy in which viable cells are injected, grafted or implanted into a patient in order to effectuate a medicinal effect, for example, by transplanting T-cells capable of fighting cancer cells via cell-mediated immunity in the course of immunotherapy, or grafting stem cells to regenerate diseased tissues.
Cell therapy originated in the nineteenth century when scientists experimented by injecting animal material in an attempt to prevent and treat illness. Although such attempts produced no positive benefit, further research found in the mid twentieth century that human cells could be used to help prevent the human body rejecting transplanted organs, leading in time to successful bone marrow transplantation as has become common practice in treatment for patients that have compromised bone marrow after disease, infection, radiation or chemotherapy. In recent decades, however, stem cell and cell transplantation research has gained significant interest among researchers as a potential new therapeutic strategy for a wide range of diseases, in particular for degenerative and immunogenic pathologies.
Background
Cell therapy can be defined as therapy in which cellular material is injected or otherwise transplanted into a patient. The origins of cell therapy can perhaps be traced to the nineteenth century, when Charles-Édouard Brown-Séquard (1817–1894) injected animal testicle extracts in an attempt to stop the effects of aging. In 1931 Paul Niehans (1882–1971) – who has been called the inventor of cell therapy – attempted to cure a patient by injecting material from calf embryos. Niehans claimed to have treated many people for cancer using this technique, though his claims have never been validated by research.
In 1953 researchers found that laboratory animals could be helped not to reject organ transplants by pre-inoculating them with cells from donor animals; in 1968, in Minnesota, the first successful human bone marrow transplantation took place. In more recent work, cell encapsulation is pursued as a means to shield therapeutic cells from the host immune response. Recent work includes micro-encapsulating cells in a gel core surrounded by a solid, but permeable, shell.
Bone marrow transplants are the most common and well-established cell transplantation therapies. The first recorded successful bone marrow transplant dates back to 1956, when Dr. E. Donnall Thomas treated a leukemia patient with bone marrow from the patient's identical twin. In general, for patients presenting with damaged or destroyed bone marrow, for example after chemotherapy and/or radiation for acute myeloid leukemia (AML), bone marrow derived cells can be infused into the patient's blood stream. Here the injected cells are able to home to the affected bone marrow, integrate, proliferate, and recover or re-establish its biological function, e.g. haematopoiesis. Annually an estimated 18,000 patients require potentially life-saving bone marrow transplants in the US. For a long time, bone marrow transplantation was the only clinically applicable method of cell transplantation; however, since the 1990s, cell therapy has been investigated for a wide range of pathologies and disorders. Cell therapy provided a novel approach to effectuate therapeutic efficacy. Previously, medical agents could only be effective by directing and inducing the patient's own cells. However, in many diseases and disorders, cells are compromised by e.g. senescence, limited blood supply (ischemia), inflammation, or simply a reduction in cell number. Cell therapy offers a new strategy that supports the introduction of new and active cells to restore previously compromised or deteriorated tissue and organ structures. As such, in recent times, cell therapy has been recognized as an important field in the treatment of human disease, and investigations are ongoing in articular cartilage, brain tissue, spine, heart, cancers, etc. As a consequence, cell therapy as a strategy has been attracting significant investments by commercial entities, which suggests strong prospects for future growth.
In 2021 Atara Biotherapeutics became the first company to have an allogeneic T-cell therapy reviewed by a regulatory agency anywhere in the world (the EMA).
Mechanisms of action
Cell therapy is targeted at many clinical indications in multiple organs and by several modes of cell delivery. Accordingly, the specific mechanisms of action involved in the therapies are wide-ranging. However, there are two main principles by which cells facilitate therapeutic action:
Stem, progenitor, or mature cell engraftment, differentiation, and long-term replacement of damaged tissue. In this paradigm multipotent or unipotent cells differentiate into a specific cell type in the lab or after reaching the site of injury (via local or systemic administration). These cells then integrate into the site of injury, replacing damaged tissue, and thus facilitate improved function of the organ or tissue. An example of this is the use of cells to replace cardiomyocytes after myocardial infarction, to facilitate angiogenesis in ischemic limb disease, or the production of cartilage matrix in intervertebral disc degeneration.
Cells that have the capacity to release soluble factors such as cytokines, chemokines, and growth factors which act in a paracrine or endocrine manner. These factors facilitate self-healing of the organ or region by inducing local (stem) cells or attracting cells to migrate towards the transplantation site. Cells at early passages have been shown to have more efficient paracrine activity than cells at later passages. The delivered cells (via local or systemic administration) remain viable for a relatively short period (days-weeks) and then die. This includes cells that naturally secrete the relevant therapeutic factors, or which undergo epigenetic changes or genetic engineering that causes the cells to release large quantities of a specific molecule. Examples of this include cells that secrete factors which facilitate angiogenesis, anti-inflammation, and anti-apoptosis. This mode of action is proposed by companies such as Pluristem and Pervasis that use adherent stromal cells or mature endothelial cells to treat peripheral artery disease and arteriovenous access complications.
Cell therapy strategies
Allogeneic
In allogeneic cell therapy the donor is a different person from the recipient of the cells. In pharmaceutical manufacturing, the allogeneic methodology is promising because unmatched allogeneic therapies can form the basis of "off the shelf" products. There is research interest in attempting to develop such products to treat conditions including Crohn's disease and a variety of vascular conditions.
Autologous
In autologous cell therapy, cells are transplanted that are derived from the patient's own tissues. Multiple clinical studies are ongoing that obtain stromal cells from bone marrow, adipose tissue, or peripheral blood to be transplanted at sites of injury or stress; this approach is being actively explored for, e.g., cartilage and muscle repair. It could also involve the isolation of matured cells from diseased tissues, to be later re-implanted at the same or neighboring tissues; a strategy being assessed in clinical trials for, e.g., the spine in preventing disc reherniation or adjacent disc disease. The benefit of an autologous strategy is that there is limited concern for immunogenic responses or transplant rejection. Nevertheless, an autologous strategy is often costly due to patient-by-patient processing, thus preventing the option to create large quality-controlled batches. Moreover, autologous strategies generally do not allow for product quality and effectiveness testing prior to transplantation, as they are highly donor (thus patient) dependent. This is a particular concern as the patient functioning as donor is often diseased, and this can impact cell potency and quality.
Xenogeneic
In xenogeneic cell therapies, the recipient will receive cells from another species, for example, the transplantation of pig-derived cells to humans. Currently, xenogeneic cell therapies primarily involve human cell transplantation into experimental animal models for assessment of efficacy and safety, however future advances could potentially enable xenogeneic strategies in humans as well.
Types of cells
Human embryonic stem cells
Research into human embryonic stem cells is controversial, and regulation varies from country to country, with some countries banning it outright. Nevertheless, these cells are being investigated as the basis for a number of therapeutic applications, including possible treatments for diabetes and Parkinson's disease.
Neural stem cell therapy
Neural stem cells (NSCs) are the subject of ongoing research for possible therapeutic applications, for example for treating a number of neurological disorders such as Parkinson's disease and Huntington's disease.
Mesenchymal stem cell therapy
MSCs are immunomodulatory, multipotent and fast proliferating and these unique capabilities mean they can be used for a wide range of treatments including immune-modulatory therapy, bone and cartilage regeneration, myocardium regeneration and the treatment of Hurler syndrome, a skeletal and neurological disorder.
Researchers have demonstrated the use of MSCs for the treatment of osteogenesis imperfecta (OI). Horwitz et al. transplanted bone marrow (BM) cells from human leukocyte antigen (HLA)-identical siblings to patients with OI. Results show that MSCs can develop into normal osteoblasts, leading to fast bone development and reduced fracture frequencies. A more recent clinical trial showed that allogeneic fetal MSCs transplanted in utero in patients with severe OI can engraft and differentiate into bone in a human fetus.
Besides bone and cartilage regeneration, cardiomyocyte regeneration with autologous BM MSCs has also been reported recently. Introduction of BM MSCs following myocardial infarction (MI) resulted in significant reduction of damaged regions and improvement in heart function. Clinical trials for treatment of acute MI with Prochymal by Osiris Therapeutics are underway. Also, a clinical trial revealed substantial improvements in nerve conduction velocities in Hurler's Syndrome patients infused with BM MSCs from HLA-identical siblings.
Hematopoietic stem cell transplantation
Hematopoietic stem cells (HSCs), derived from bone marrow or blood, are cells with the abilities to self-renew and to differentiate into all types of blood cells, especially those involved in the human immune system. Thus, they can be used to treat blood and immune disorders. Since human bone marrow grafting was first published in 1957, there have been significant advancements in HSCs therapy. Following that, syngeneic marrow infusion and allogeneic marrow grafting were performed successfully. HSCs therapy can also render its cure by reconstituting damaged blood-forming cells and restoring the immune system after high-dose chemotherapy to eliminate disease.
There are three types of HSC transplantation: syngeneic, autologous, and allogeneic transplants. Syngeneic transplantations occur between identical twins. Autologous transplantations use the HSCs obtained directly from the patient and hence avoid complications of tissue incompatibility; whereas allogeneic transplantations involve the use of donor HSCs, either genetically related or unrelated to the recipient. To lower the risks of transplant, which include graft rejection and graft-versus-host disease (GVHD), allogeneic HSCT must satisfy compatibility at the HLA loci (i.e. genetic matching to reduce the immunogenicity of the transplant).
In addition to bone marrow-derived HSCs, the use of alternative sources such as umbilical cord blood (UCB) and peripheral blood stem cells (PBSCs) has been increasing. In comparison with bone marrow-derived HSC recipients, PBSC recipients who had myeloid malignancies reported a faster engraftment and better overall survival. The use of UCB requires less stringent HLA loci matching, although the time of engraftment is longer and graft failure rate is higher.
Differentiated or mature cell transplantation
As an alternative to stem or progenitor cells, investigations are exploring the transplantation of differentiated cells that possess only low or no proliferative ability. This tends to involve specialized cells able to facilitate a specific function in the patient's body (for example, transplantation of cardiomyocytes to repair heart function, or islet cell transplantation to establish insulin homeostasis in diabetes patients) or to support/regenerate the extracellular matrix production of specific tissues (for example, intervertebral disc repair by transplanting chondrocytes).
Alternative medicine
In alternative medicine, cell therapy is defined as the injection of non-human cellular animal material in an attempt to treat illness. Quackwatch labels this as "senseless", since "cells from the organs of one species cannot replace the cells from the organs of other species" and because a number of serious adverse effects have been reported. Of this alternative, animal-based form of cell therapy, the American Cancer Society say: "Available scientific evidence does not support claims that cell therapy is effective in treating cancer or any other disease. It may in fact be lethal ...".
Manufacturing
Despite being one of the fastest-growing areas within the life sciences, the manufacturing of cell therapy products is largely hindered by small-scale batches and labour-intensive processes.
A number of manufacturers are turning to automated methods of production, eliminating human involvement and risk of human error. Automated methods of cell therapy manufacturing have opened up larger scale production of higher quality products at lower cost.
Supply chain
Logistics departments of biopharma companies face new obstacles because of the introduction of new cell and gene therapy products, such as CAR T-cell therapies and allogeneic therapies. Cell and gene therapies require manufacturers and distributors alike to implement new systems and processes in order to ensure safe handling and delivery. On-demand inventory therefore becomes increasingly important, especially with regard to unforeseeable events like the COVID-19 pandemic, so that supply chain interruptions can be prevented. Furthermore, recent changes resulting from the COVID-19 pandemic and from political instability in Europe, secondary to Brexit, have further impacted the logistics chain for cellular therapies.
See also
Stem cell
Stem cell therapy
Allotransplantation
Autotransplantation
Xenotransplantation
Regenerative medicine
Mesenchymal stem cell
Hematopoietic stem cell transplantation
Stem cell therapy for macular degeneration
References
External links
International Society for Cellular Therapy
International Society for Stem Cell Research
Biotechnology
Cell biology
Stem cells | Cell therapy | [
"Biology"
] | 3,004 | [
"Cell therapies",
"Cell biology"
] |
2,285,143 | https://en.wikipedia.org/wiki/Encryption%20software | Encryption software is software that uses cryptography to prevent unauthorized access to digital information. Cryptography is used to protect digital information on computers as well as the digital information that is sent to other computers over the Internet.
Classification
There are many software products which provide encryption. Software encryption uses a cipher to obscure the content into ciphertext. One way to classify this type of software is the type of cipher used. Ciphers can be divided into two categories: public key ciphers (also known as asymmetric ciphers), and symmetric key ciphers. Encryption software can be based on either public key or symmetric key encryption.
Another way to classify software encryption is to categorize its purpose. Using this approach, software encryption may be classified into software which encrypts "data in transit" and software which encrypts "data at rest". Data in transit generally uses public key ciphers, and data at rest generally uses symmetric key ciphers.
Symmetric key ciphers can be further divided into stream ciphers and block ciphers. Stream ciphers typically encrypt plaintext a bit or byte at a time, and are most commonly used to encrypt real-time communications, such as audio and video information. The key is used to establish the initial state of a keystream generator, and the output of that generator is used to encrypt the plaintext. Block cipher algorithms split the plaintext into fixed-size blocks and encrypt one block at a time. For example, AES processes 16-byte blocks, while its predecessor DES encrypted blocks of eight bytes.
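The stream-cipher structure described above (a key seeds a keystream generator, whose output is combined with the plaintext) can be sketched in a few lines of Python. This is a toy for illustration only: the hash-based generator and all function names below are invented for the example, and the construction is not a vetted cipher.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream generator: hash the key, a nonce and a counter
    to produce successive 32-byte blocks of pseudo-random output."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt (or decrypt -- XOR is its own inverse) by XORing
    the data with the keystream, one byte at a time."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(b"secret key", b"nonce-01", msg)
pt = xor_cipher(b"secret key", b"nonce-01", ct)
assert pt == msg and ct != msg
```

Applying the same operation twice restores the plaintext, which is why stream ciphers use one function for both directions.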
There is also a well-known case where PKI is used for the transit of data at rest.
Data in transit
Data in transit is data that is being sent over a computer network. When the data is between two endpoints, any confidential information may be vulnerable. The payload (confidential information) can be encrypted to secure its confidentiality, as well as its integrity and validity.
Often, the data in transit is between two entities that do not know each other - such as in the case of visiting a website. To establish a relationship and securely share an encryption key to secure the information that will be exchanged, a set of roles, policies, and procedures has been developed; it is known as the public key infrastructure, or PKI. Once PKI has established a secure connection, a symmetric key can be shared between endpoints. A symmetric key is preferred over the private and public keys as a symmetric cipher is much more efficient (uses fewer CPU cycles) than an asymmetric cipher. There are several methods for encrypting data in transit, such as IPsec, SCP, SFTP, SSH, OpenPGP and HTTPS.
Data at rest
Data at rest refers to data that has been saved to persistent storage. Data at rest is generally encrypted by a symmetric key.
Encryption may be applied at different layers in the storage stack. For example, encryption can be configured at the disk layer, on a subset of a disk called a partition, on a volume, which is a combination of disks or partitions, at the layer of a file system, or within user space applications such as database or other applications that run on the host operating system.
With full disk encryption, the entire disk is encrypted (except for the bits necessary to boot or access the disk when not using an unencrypted boot/preboot partition). As disks can be partitioned into multiple partitions, partition encryption can be used to encrypt individual disk partitions. Volumes, created by combining two or more partitions, can be encrypted using volume encryption. File systems, also composed of one or more partitions, can be encrypted using filesystem-level encryption. Directories are referred to as encrypted when the files within the directory are encrypted. File encryption encrypts a single file. Database encryption acts on the data to be stored, accepting unencrypted information and writing that information to persistent storage only after it has encrypted the data. Device-level encryption, a somewhat vague term that includes encryption-capable tape drives, can be used to offload the encryption tasks from the CPU.
Transit of data at rest
When there is a need to securely transmit data at rest, without the ability to create a secure connection, user space tools have been developed that support this need. These tools rely upon the receiver publishing their public key, and the sender being able to obtain that public key. The sender is then able to create a symmetric key to encrypt the information, and then use the receiver's public key to securely protect the transmission of the information and the symmetric key. This allows secure transmission of information from one party to another.
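The sequence described above — encrypt the payload with a fresh symmetric key, then protect that key with the receiver's public key — can be made concrete with a short Python sketch. This is a deliberately insecure illustration: it uses textbook RSA with tiny fixed primes and a toy hash-derived keystream, and every name in it is invented for the example.

```python
import hashlib, secrets

# Textbook RSA with tiny demonstration primes (p=61, q=53) -- utterly
# insecure, shown only to make the key-wrapping step concrete.
n, e, d = 3233, 17, 2753          # public modulus/exponent, private exponent

def rsa_wrap(sym_key: bytes) -> list[int]:
    """Receiver's public key (n, e) protects the symmetric key."""
    return [pow(b, e, n) for b in sym_key]

def rsa_unwrap(wrapped: list[int]) -> bytes:
    """Receiver's private exponent d recovers the symmetric key."""
    return bytes(pow(c, d, n) for c in wrapped)

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric step: XOR with a hash-derived keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: create a fresh symmetric key, encrypt the payload with it,
# then wrap the key using the receiver's public key.
sym_key = secrets.token_bytes(16)
payload = sym_encrypt(sym_key, b"data at rest, in transit")
wrapped_key = rsa_wrap(sym_key)

# Receiver: unwrap the symmetric key, then decrypt the payload.
recovered = sym_encrypt(rsa_unwrap(wrapped_key), payload)
assert recovered == b"data at rest, in transit"
```

Only `wrapped_key` and `payload` need to travel; the symmetric key itself is never transmitted in the clear.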
Performance
The performance of encryption software is measured relative to the speed of the CPU. Thus, cycles per byte (sometimes abbreviated cpb), a unit indicating the number of clock cycles a microprocessor will need per byte of data processed, is the usual unit of measurement. Cycles per byte serve as a partial indicator of real-world performance in cryptographic functions. Applications may offer their own encryption, called native encryption, including database applications such as Microsoft SQL Server, Oracle, and MongoDB, and commonly rely on direct usage of CPU cycles for performance. This often impacts the desirability of encryption in businesses seeking greater security and ease of satisfying compliance, by affecting the speed and scale at which data moves within organizations and on to their partners.
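As a rough illustration, cycles per byte can be estimated from wall-clock time when the CPU clock frequency is known (or assumed). The sketch below times Python's built-in SHA-256 and assumes a 3 GHz clock; the helper name and the frequency are assumptions for the example, and interpreter overhead makes the resulting figure only indicative.

```python
import hashlib, time

def cycles_per_byte(func, data: bytes, cpu_hz: float, repeats: int = 50) -> float:
    """Estimate cycles/byte: time `func` over `repeats` runs and convert
    elapsed wall-clock seconds to clock cycles via the assumed cpu_hz."""
    start = time.perf_counter()
    for _ in range(repeats):
        func(data)
    elapsed = time.perf_counter() - start
    cycles = elapsed * cpu_hz
    return cycles / (len(data) * repeats)

data = b"\x00" * 1_000_000
# Assume a 3 GHz clock; a real measurement would query the actual frequency.
cpb = cycles_per_byte(lambda d: hashlib.sha256(d).digest(), data, 3e9)
print(f"SHA-256: ~{cpb:.1f} cycles/byte (assuming 3 GHz)")
```

Dedicated benchmarks read the CPU's cycle counter directly instead of converting from wall-clock time, which removes the dependence on an assumed frequency.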
See also
Cryptographic Protocol
Public Key (Asymmetric) Algorithms
Symmetric Algorithms
Transport Layer Security
Comparison of disk encryption software
Defense strategy (computing)
Ransomware: Malicious software using encryption
References
External links | Encryption software | [
"Mathematics"
] | 1,187 | [
"Cryptographic software",
"Mathematical software"
] |
2,285,184 | https://en.wikipedia.org/wiki/Surface%20metrology | Surface metrology is the measurement of small-scale features on surfaces, and is a branch of metrology. Surface primary form, surface fractality, and surface finish (including surface roughness) are the parameters most commonly associated with the field. It is important to many disciplines and is mostly known for the machining of precision parts and assemblies which contain mating surfaces or which must operate with high internal pressures.
Surface finish may be measured in two ways: contact and non-contact methods. Contact methods involve dragging a measurement stylus across the surface; these instruments are called profilometers. Non-contact methods include: interferometry, digital holography, confocal microscopy, focus variation, structured light, electrical capacitance, electron microscopy, photogrammetry and non-contact profilometers.
Overview
The most common method is to use a diamond stylus profilometer. The stylus is run perpendicular to the lay of the surface. The probe usually traces along a straight line on a flat surface or in a circular arc around a cylindrical surface. The length of the path that it traces is called the measurement length. The wavelength of the lowest frequency filter that will be used to analyze the data is usually defined as the sampling length. Most standards recommend that the measurement length should be at least seven times longer than the sampling length, and according to the Nyquist–Shannon sampling theorem it should be at least two times longer than the wavelength of interesting features. The assessment length or evaluation length is the length of data that will be used for analysis. Commonly one sampling length is discarded from each end of the measurement length. 3D measurements can be made with a profilometer by scanning over a 2D area on the surface.
The disadvantage of a profilometer is that it is not accurate when the size of the features of the surface are close to the same size as the stylus. Another disadvantage is that profilometers have difficulty detecting flaws of the same general size as the roughness of the surface. There are also limitations for non-contact instruments. For example, instruments that rely on optical interference cannot resolve features that are less than some fraction of the operating wavelength. This limitation can make it difficult to accurately measure roughness even on common objects, since the interesting features may be well below the wavelength of light. The wavelength of red light is about 650 nm, while the average roughness (Ra) of a ground shaft might be 200 nm.
The first step of analysis is to filter the raw data to remove very high frequency data (called "micro-roughness") since it can often be attributed to vibrations or debris on the surface. Filtering out the micro-roughness at a given cut-off threshold also makes roughness assessments obtained with profilometers having different stylus ball radii (e.g. 2 μm and 5 μm) more comparable. Next, the data is separated into roughness, waviness and form. This can be accomplished using reference lines, envelope methods, digital filters, fractals or other techniques. Finally, the data is summarized using one or more roughness parameters, or a graph. In the past, surface finish was usually analyzed by hand. The roughness trace would be plotted on graph paper, and an experienced machinist decided what data to ignore and where to place the mean line. Today, the measured data is stored on a computer, and analyzed using methods from signal analysis and statistics.
Equipment
Contact (tactile measurement)
Stylus-based contact instruments have the following advantages:
The system is very simple and sufficient for basic roughness, waviness or form measurement requiring only 2D profiles (e.g. calculation of the Ra value).
The system is never lured by the optical properties of a sample (e.g. highly reflective, transparent, micro-structured).
The stylus ignores the oil film that covers many metal components during industrial processing.
Technologies:
Contact Profilometers – traditionally use a diamond stylus and work like a phonograph.
Atomic force microscope are sometimes also considered contact profilers operating at atomic scale.
Non-contact (optical microscopes)
Optical measurement instruments have some advantages over the tactile ones as follows:
no touching of the surface (the sample cannot be damaged)
the measurement speed is usually much higher (up to a million 3D points can be measured in a second)
some of them are genuinely built for 3D surface topography rather than single traces of data
they can measure surfaces through transparent medium such as glass or plastic film
non-contact measurement may sometimes be the only solution when the component to measure is very soft (e.g. pollution deposit) or very hard (e.g. abrasive paper).
Vertical scanning:
Coherence scanning interferometry
Confocal microscopy
Focus variation
Confocal chromatic aberration
Horizontal scanning:
Scanning laser microscope (SLM)
Structured-light scanning
Non-scanning
Digital holographic microscopy
Choice of the right measurement instrument
Because every instrument has advantages and disadvantages, the operator must choose the right instrument depending on the measurement application. In the following, some advantages and disadvantages of the main technologies are listed:
Interferometry: This method has the highest vertical resolution of any optical technique and lateral resolution equivalent to most other optical techniques except for confocal which has better lateral resolution. Instruments can measure very smooth surfaces using phase shifting interferometry (PSI) with high vertical repeatability; such systems can be dedicated for measuring large parts (up to 300mm) or microscope-based. They can also use coherence scanning interferometry (CSI) with a white-light source to measure steep or rough surfaces, including machined metal, foam, paper and more. As is the case with all optical techniques, the interaction of light with the sample is not fully understood for these instruments. This means that measurement errors can occur, especially for roughness measurement.
Digital holography: This method provides 3D topography with a similar resolution to interferometry. Moreover, as it is a non-scanning technique, it is ideal for the measurement of moving samples, deformable surfaces, MEMS dynamics, chemical reactions, the effect of magnetic or electric fields on samples, and measurement in the presence of vibrations, in particular for quality control.
Focus variation: This method delivers color information, can measure on steep flanks and can measure on very rough surfaces. The disadvantage is that this method cannot measure surfaces with very low roughness, such as a silicon wafer. The main application is metal (machined parts and tools), plastic or paper samples.
Confocal microscopy: this method has the advantage of high lateral resolution because of the use of a pin hole but has the disadvantage that it can not measure on steep flanks. Also, it quickly loses vertical resolution when looking at large areas since the vertical sensitivity depends on the microscope objective in use.
Confocal chromatic aberration: This method has the advantage of measuring certain height ranges without a vertical scan, can measure very rough surfaces with ease, and smooth surfaces down to the single nm range. The fact that these sensors have no moving parts allows for very high scan speeds and makes them very repeatable. Configurations with a high numerical aperture can measure on relatively steep flanks. Multiple sensors, with the same or different measurement ranges, can be used simultaneously, allowing differential measurement approaches (TTV) or expanding the use case of a system.
Contact profilometer: this method is the most common surface measurement technique. The advantages are that it is a cheap instrument and has higher lateral resolution than optical techniques, depending on the stylus tip radius chosen. New systems can do 3D measurements in addition to 2D traces and can measure form and critical dimensions as well as roughness. However, the disadvantages are that the stylus tip has to be in physical contact with the surface, which may alter the surface and/or stylus and cause contamination. Furthermore, due to the mechanical interaction, the scan speeds are significantly slower than with optical methods. Because of the stylus shank angle, stylus profilometers cannot measure up to the edge of a rising structure, causing a "shadow" or undefined area, usually much larger than what is typical for optical systems.
Resolution
The scale of the desired measurement will help decide which type of microscope will be used.
For 3D measurements, the probe is commanded to scan over a 2D area on the surface. The spacing between data points may not be the same in both directions.
In some cases, the physics of the measuring instrument may have a large effect on the data. This is especially true when measuring very smooth surfaces. For contact measurements, the most obvious problem is that the stylus may scratch the measured surface. Another problem is that the stylus may be too blunt to reach the bottom of deep valleys and it may round the tips of sharp peaks. In this case the probe is a physical filter that limits the accuracy of the instrument.
Roughness parameters
The real surface geometry is so complicated that a finite number of parameters cannot provide a full description. If the number of parameters used is increased, a more accurate description can be obtained. This is one of the reasons for introducing new parameters for surface evaluation. Surface roughness parameters are normally categorised into three groups according to its functionality. These groups are defined as amplitude parameters, spacing parameters, and hybrid parameters.
Profile roughness parameters
Parameters used to describe surfaces are largely statistical indicators obtained from many samples of the surface height. Some examples include the arithmetic average roughness (Ra), the root mean square roughness (Rq), the maximum height of the profile (Rz), and the skewness (Rsk) and kurtosis (Rku) of the height distribution.
This is a small subset of the available parameters described in standards like ASME B46.1 and ISO 4287.
Most of these parameters originated from the capabilities of profilometers and other mechanical probe systems.
In addition, new measures of surface dimensions have been developed which are more directly related to the measurements made possible by high-definition optical gauging technologies.
Most of these parameters can be estimated using the SurfCharJ plugin for ImageJ.
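For illustration, several common ISO 4287 amplitude parameters can be computed directly from sampled profile heights using their standard statistical definitions (arithmetic mean deviation, RMS deviation, skewness, kurtosis). The sketch below assumes the profile has already been filtered; the function name is invented for the example.

```python
import math

def amplitude_parameters(profile):
    """Compute common ISO 4287 amplitude parameters from sampled
    profile heights (assumed already filtered, e.g. in micrometres)."""
    n = len(profile)
    mean = sum(profile) / n
    z = [h - mean for h in profile]               # deviations from the mean line
    ra = sum(abs(v) for v in z) / n               # arithmetic mean deviation
    rq = math.sqrt(sum(v * v for v in z) / n)     # root-mean-square deviation
    rsk = sum(v ** 3 for v in z) / (n * rq ** 3)  # skewness of the heights
    rku = sum(v ** 4 for v in z) / (n * rq ** 4)  # kurtosis of the heights
    return {"Ra": ra, "Rq": rq, "Rsk": rsk, "Rku": rku}

heights = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4]
params = amplitude_parameters(heights)
assert params["Rq"] >= params["Ra"] > 0   # RMS is always >= mean of |z|
```

A symmetric height distribution, as in the sample data above, yields a skewness of zero; a negative Rsk would indicate a surface dominated by valleys.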
Areal surface parameters
The surface roughness can also be calculated over an area. This gives Sa instead of Ra values. The ISO 25178 series describes all these roughness values in detail. The advantage over the profile parameters are:
more significant values
more relation to the real function possible
faster measurement with current instruments possible (optical areal-based instruments can measure Sa at a higher speed than Ra)
Because surfaces have fractal properties, multi-scale measurements can also be made, such as length-scale fractal analysis or area-scale fractal analysis.
Filtering
To obtain the surface characteristic almost all measurements are subject to filtering. It is one of the most important topics when it comes to specifying and controlling surface attributes such as roughness, waviness, and form error. These components of the surface deviations must be distinctly separable in measurement to achieve a clear understanding between the surface supplier and the surface recipient as to the expected characteristics of the surface in question.
Typically, either digital or analog filters are used to separate form error, waviness, and roughness resulting from a measurement. Main multi-scale filtering methods are Gaussian filtering, Wavelet transform and more recently Discrete Modal Decomposition. There are three characteristics of these filters that should be known in order to understand the parameter values that an instrument may calculate. These are the spatial wavelength at which a filter separates roughness from waviness or waviness from form error, the sharpness of a filter or how cleanly the filter separates two components of the surface deviations and the distortion of a filter or how much the filter alters a spatial wavelength component in the separation process.
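As an illustration of the Gaussian filtering step, the sketch below implements the ISO 16610-21 Gaussian profile filter by direct convolution, using the weighting function s(x) = (1/(a·lc))·exp(-pi·(x/(a·lc))^2) with a = sqrt(ln 2 / pi), so that the filter transmits 50% at the cutoff wavelength lc. It uses naive clamping at the profile ends (production implementations apply proper end-effect corrections); the function name and the synthetic test profile are invented for the example.

```python
import math

def gaussian_mean_line(profile, spacing, cutoff):
    """Compute the Gaussian mean line (waviness) of a sampled profile;
    subtracting it from the profile leaves the roughness component."""
    alpha = math.sqrt(math.log(2) / math.pi)
    half = int(cutoff / spacing)               # truncate kernel at +/- one cutoff
    xs = [k * spacing for k in range(-half, half + 1)]
    w = [math.exp(-math.pi * (x / (alpha * cutoff)) ** 2) for x in xs]
    total = sum(w)
    w = [v / total for v in w]                 # normalise weights to unit sum
    n = len(profile)
    mean_line = []
    for i in range(n):
        acc = 0.0
        for k, wk in enumerate(w):
            j = min(max(i + k - half, 0), n - 1)   # clamp at the profile ends
            acc += wk * profile[j]
        mean_line.append(acc)
    return mean_line

# Long-wavelength term passes into the waviness; short-wavelength ripple
# (well below the 50-unit cutoff) is left in the roughness.
profile = [math.sin(i / 40.0) + 0.05 * math.sin(i / 2.0) for i in range(400)]
waviness = gaussian_mean_line(profile, spacing=1.0, cutoff=50.0)
roughness = [p - w for p, w in zip(profile, waviness)]
```

With the cutoff at 50 sample spacings, the slow sinusoid (wavelength ≈ 251) survives almost unattenuated in the mean line, while the fast ripple (wavelength ≈ 12.6) is almost entirely assigned to the roughness component.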
See also
References
External links
"Surface Metrology Guide", Digital Surf
Metrology
Geometric measurement | Surface metrology | [
"Physics",
"Mathematics"
] | 2,392 | [
"Geometric measurement",
"Quantity",
"Physical quantities",
"Geometry"
] |
2,285,191 | https://en.wikipedia.org/wiki/Hime-chan%27s%20Ribbon | is a magical girl manga series created by Megumi Mizusawa that was serialized in Ribon Magazine from August 1990 to January 1994. It was later developed into a 61 episode anime series, produced by Studio Gallop, that aired from October 2, 1992 to December 3, 1993. Hajime Watanabe's first project as a character designer was with Hime-chan no Ribbon. The manga series was collected into ten volumes in Japan, where it received a full anime DVD release. A stage musical of the show was produced in December 1993 starring the idol group SMAP. The musical was presented in three episodes, each a week apart. SMAP performed the opening theme and the three ending themes for the anime and each member appears in animated form in episode 13. Both manga and anime are currently unlicensed in the United States.
Plot
The story is about Himeko Nonohara (野々原姫子 Nonohara Himeko), also known as Hime-chan (姫ちゃん, using the Chinese character for "princess"), an aloof, childlike, yet boastful thirteen-year-old girl who frets over the fact that she is the biggest tomboy in the school. Himeko would like nothing more than to be a proper, feminine young lady, like her older sister Aiko, so that she could approach her secret crush, Hasekura.
One night, Himeko is unexpectedly approached by a girl, who is a near-mirror image of herself, floating outside of her bedroom window. She discovers that the girl is Princess Erika of the Magical Kingdom. Erika explains that people in the Magical Kingdom have an exact counterpart in the Human World and that, in order to prove herself worthy as a princess, she must give Himeko a magical item that she has created. Himeko is allowed the use of this item, a red hair ribbon, for one year to determine whether it is useful, and consequently, if Erika will inherit the crown.
The ribbon allows Himeko to transform into anyone in the Human World for one hour. If Himeko is unable to recite the magic incantation in reverse before the hour is up, she will be trapped in that person's form for the rest of her life. She is unable to reveal the secret of the ribbon and the existence of the Magical Kingdom to anyone. If she does, her memory will be erased as punishment. Assisting her with this is her stuffed lion, Pokota, whom the ribbon brought to life. Erika will watch her in the Magic Kingdom through her crystal ball for one year, at which point the ribbon will be returned.
Themes
In authoring Hime-chan's Ribbon, Megumi Mizusawa used themes common to magical girl manga. The concept of being able to transform into other people had been a feature of Himitsu no Akko-chan (1962).
Characters
Himeko is the protagonist of the series. She progresses from being a tomboy at the beginning of the series to becoming a refined, feminine and mature girl by the series' end. Erika, her counterpart in the Land of Magic is an elegant princess, but warms to Hime-chan's playfulness which often lands them both in trouble with the King and Queen and Erika's magical broom, Chappy.
Himeko had a childhood crush on the much older Hasekura-senpai, but she gradually falls in love with Daichi instead. The name Himeko means "little princess". Hime-chan's motto is "Ike! Ike! Go! Go! Jump!". (Come on, Come on, go, go, jump)
Pokota is Himeko's favorite stuffed animal that is only animate when Himeko is wearing the magic ribbon. He is Himeko's friend and voice of reason, often helping her out of various difficult situations. Daichi was suspicious of Pokota from the first time that he saw him and treated Pokota harshly before he could confirm that Pokota was "alive". Pokota, being a toy has no direct equal in the magic world, but he becomes close friends with a pink cat, Pink-chan.
Daichi is a trouble maker at Himeko's school. By chance they meet at an abandoned house that Himeko was passing by on her way to give Hasekura-senpai a good luck charm. Daichi, who kept to himself prior to meeting Himeko, initially teases her because of her tomboyish behavior. Daichi warms to Hime-chan and they become friends; he eventually falls in love with her. He discovers her secret about the ribbon and Pokota. The King and Queen make an exception to the rule of secrecy for Daichi and he often travels with Himeko to the Land of Magic, where Camille is his exact counterpart.
Hikaru attends the same class as Himeko and Daichi. She shows interest in Daichi and thinks of Hime-chan as her rival. She becomes suspicious of Hime-chan's secretiveness and she comes close to exposing Hime-chan's secret, but always is foiled at the last minute by one of Hime-chan's friends or by Hime-chan herself. Robelia is Hikaru's mirror match from the Land of Magic.
Sei and his bird, Kantaro are from the Land of Magic. Sei is initially only after Himeko's magic, though after his guise of pretending to be a transfer student is revealed, he befriends Daichi and Hime-chan instead. Hime-chan does not forgive his earlier threatening behavior and rejects Sei's attempts at becoming better acquainted.
Hasekura takes the same bus as, and is in the same grade as, Himeko's older sister Aiko, and is a senior student to Hime-chan. Hime-chan's crush goes unnoticed by Hasekura, and he falls in love with her older sister instead.
Aiko is Himeko's sister. (All of the females in the family have names that end with "ko" (子)) Hasekura and Aiko fall in love which saddens Hime-chan. Aiko is treated as a role model by both of her younger sisters, particularly Hime-chan who chooses Aiko as her first ever transformation. She is also incredibly beautiful and domesticated.
Yumeko is Aiko and Himeko's younger sister. Her exploits with Daichi's younger brother, Shintaro often cause their older siblings much grief.
Shintaro is Daichi's younger brother. He is friends with Yumeko.
Manami and Ichiko
Voiced by: Katsuyo Endou (Manami) and Minami Takayama (Ichiko)
Himeko, Manami and Ichiko are best friends. Manami claims ownership of Himeko calling her "My hime-chan". She refers to many things as being "cool" and enjoys cooking. Ichiko is referred to as Ii-chan. She is adept at archery.
Masshi and Hiroshi
Voiced by: Chika Sakamoto (Masshi) and Masami Kikuchi (Hiroshi)
Masshi and Hiroshi are mischievous sprites from the Land of Magic. They cause great destruction and are eventually punished by becoming Hime-chan's assistants. They provide Hime-chan new powers through new accessories other than the ribbon. Hiroshi is a fox-like sprite who has the ability to fly. Masshi does not have a fox-like tail, but looks similar and is lifted from place to place by his partner.
Incantations
To perform magic, several different incantations are recited. When Hime-chan first receives the ribbon she is told to recite "Parallel, Parallel, (person's name) ni nare". In the excitement of her first transformation, Hime-chan dismisses Erika's important instruction for undoing it, which is to recite "Parallel" backwards: "Rurerapa rurerapa, moto no sugata ni nare". This nearly leaves her stuck as the Principal of the school forever, and Pokota has to remind her of the correct pronunciation of "Rurerapa" to avoid this.
Episode list
Music
Reception
Jennifer B from THEM Anime Reviews stated that Hime-chan's Ribbon is "a fun, cute series that's worth watching if you like magical girl shows" and that "Hime-chan herself is a likable character". Andrew Sheldon from Anime-Meta Review felt that "The writing is well done, has a great sense of character and can be touching". Kelly Mayback from the Anime Cafe: A Parent's Guide to Anime described it as "an EXCELLENT series to use when exploring Japanese culture".
Remake
In 2009 the series was remade, with manga creator Shiho Komiyuno penning the series. Hime-chan no Ribon Colorful premiered in the October 2009 issue of Ribon, the same magazine that the original series ran in. Differences between the two series included the setting being moved to modern day as well as the character of Pokota being replaced by a shape changing Princess Erika.
References
External links
1990 manga
1992 anime television series debuts
1992 Japanese television series debuts
1993 Japanese television series endings
2009 manga
Fiction about size change
Gallop (studio)
Magical girl anime and manga
Shueisha franchises
Shueisha manga
TV Tokyo original programming | Hime-chan's Ribbon | [
"Physics",
"Mathematics"
] | 1,945 | [
"Fiction about size change",
"Quantity",
"Physical quantities",
"Size"
] |
12,104,224 | https://en.wikipedia.org/wiki/Rubin%E2%80%93Ford%20effect | The Rubin–Ford effect is, per Ian Ridpath's Dictionary of Astronomy, an apparent rather than actual "anisotropy in the expansion of the Universe on a scale of around 100 million [light years] as revealed by a study of the motions of a sample of spiral galaxies," as initially described by Vera Rubin, William Kent Ford Jr., and Norbert Thonnard of the Carnegie Institution of Washington, Morton S. Roberts of the National Radio Astronomy Observatory, and John A. Graham of the Cerro Tololo Inter-American Observatory, in La Serena, Chile. Specifically, their conclusion was that a sample of galaxies (referred to as ScI) was, on the whole, moving with a velocity of 885 km s−1 toward a specific astronomical position (l = 304°, b = 26°) relative to the microwave background radiation, which, because it is isotropic, provided a frame of reference for the measurement.
The description by Rubin and colleagues, first appearing in 1976, is considered pioneering work, and has been the subject of "intense discussion" since its first report. The effect, however, is now seen as being apparent, and the data from which it was derived "probably only reflect the inhomogeneous distribution of galaxies in the region of the Rubin–Ford sample"—i.e., that "the Universe is not homogeneous on these scales"—and are actually "consistent with isotropic expansion, an unperturbed galaxy velocity field and hence a low density Universe."
As indicated by the opening definition, and as described by Bernard J.T. Jones in review in 2002, the data on which the conclusion of an effect was based are:
References
Observational astronomy
Extragalactic astronomy | Rubin–Ford effect | [
"Astronomy"
] | 356 | [
"Extragalactic astronomy",
"Observational astronomy",
"Astronomical sub-disciplines"
] |
12,104,271 | https://en.wikipedia.org/wiki/Pochhammer%20k-symbol | In the mathematical theory of special functions, the Pochhammer k-symbol and the k-gamma function, introduced by Rafael Díaz and Eddy Pariguan are generalizations of the Pochhammer symbol and gamma function. They differ from the Pochhammer symbol and gamma function in that they can be related to a general arithmetic progression in the same manner as those are related to the sequence of consecutive integers.
Definition
The Pochhammer k-symbol (x)n,k is defined as (x)n,k = x(x + k)(x + 2k) ⋯ (x + (n − 1)k),
and the k-gamma function Γk, with k > 0, is defined as Γk(x) = lim_(n→∞) n! k^n (nk)^(x/k − 1) / (x)n,k.
When k = 1 the standard Pochhammer symbol and gamma function are obtained.
Díaz and Pariguan use these definitions to demonstrate a number of properties of the hypergeometric function. Although Díaz and Pariguan restrict these symbols to k > 0, the Pochhammer k-symbol as they define it is well-defined for all real k, and for negative k gives the falling factorial, while for k = 0 it reduces to the power xn.
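These special cases can be illustrated with a short sketch; the helper name `pochhammer_k` is hypothetical (not from the Díaz and Pariguan paper) and simply computes the product x(x + k) ⋯ (x + (n − 1)k):

```python
def pochhammer_k(x, n, k):
    """Pochhammer k-symbol (x)_{n,k}: product of x + i*k for i = 0, ..., n-1."""
    result = 1
    for i in range(n):
        result *= x + i * k
    return result

# k = 1 recovers the ordinary Pochhammer symbol (rising factorial):
assert pochhammer_k(3, 4, 1) == 3 * 4 * 5 * 6    # → 360
# k = -1 gives the falling factorial:
assert pochhammer_k(6, 3, -1) == 6 * 5 * 4       # → 120
# k = 0 reduces to the power x^n:
assert pochhammer_k(2, 5, 0) == 2 ** 5           # → 32
```

The empty product convention gives (x)0,k = 1 for every k, which the `range(n)` loop reproduces automatically.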
The Díaz and Pariguan paper does not address the many analogies between the Pochhammer k-symbol and the power function, such as the fact that the binomial theorem can be extended to Pochhammer k-symbols. It is true, however, that many equations involving the power function xn continue to hold when xn is replaced by (x)n,k.
Continued Fractions, Congruences, and Finite Difference Equations
Jacobi-type J-fractions for the ordinary generating function of the Pochhammer k-symbol, denoted in slightly different notation by for fixed and some indeterminate parameter , are considered in
in the form of the next infinite continued fraction expansion given by
The rational convergent function, , to the full generating function for these products expanded by the last equation is given by
where the component convergent function sequences, and , are given as closed-form sums in terms of the ordinary Pochhammer symbol and the Laguerre polynomials by
The rationality of the convergent functions for all , combined with known enumerative properties of the J-fraction expansions, imply the following finite difference equations both exactly generating for all , and generating the symbol modulo for some fixed integer :
The rationality of also implies the next exact expansions of these products given by
where the formula is expanded in terms of the special zeros of the Laguerre polynomials, or equivalently, of the confluent hypergeometric function, defined as the finite (ordered) set
and where denotes the partial fraction decomposition of the rational convergent function.
Additionally, since the denominator convergent functions, , are expanded exactly through the Laguerre polynomials as above, we can exactly generate the Pochhammer k-symbol as the series coefficients
for any prescribed integer .
Special Cases
Special cases of the Pochhammer k-symbol, , correspond to the following special cases of the falling and rising factorials, including the Pochhammer symbol, and the generalized cases of the multiple factorial functions (multifactorial functions), or the -factorial functions studied in the last two references by Schmidt:
The Pochhammer symbol, or rising factorial function: (x)n = (x)n,1 = x(x + 1) ⋯ (x + n − 1)
The falling factorial function: (x)n,−1 = x(x − 1) ⋯ (x − n + 1)
The single factorial function: n! = (1)n,1
The double factorial function: (2n − 1)!! = (1)n,2
The multifactorial functions, defined recursively by n!(α) = n · (n − α)!(α) for n > 0 (with n!(α) = 1 for −α < n ≤ 0) and some integer offset α ≥ 1: (αn)!(α) = (α)n,α and (αn − α + 1)!(α) = (1)n,α
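These factorial correspondences can be checked numerically. A minimal sketch, again using a hypothetical helper `pochhammer_k` for the product x(x + k) ⋯ (x + (n − 1)k):

```python
from math import factorial

def pochhammer_k(x, n, k):
    """Pochhammer k-symbol: product of x + i*k for i = 0, ..., n-1."""
    result = 1
    for i in range(n):
        result *= x + i * k
    return result

n = 6
# Single factorial: n! = (1)_{n,1}
assert pochhammer_k(1, n, 1) == factorial(n)
# Double factorial of odd integers: (2n - 1)!! = (1)_{n,2}
assert pochhammer_k(1, n, 2) == 1 * 3 * 5 * 7 * 9 * 11
# Double factorial of even integers: (2n)!! = (2)_{n,2} = 2^n * n!
assert pochhammer_k(2, n, 2) == 2 ** n * factorial(n)
```

Each identity holds because both sides are the same arithmetic-progression product, just written in factorial rather than k-symbol notation.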
The expansions of these k-symbol-related products, considered termwise with respect to the coefficients of the powers of x for each finite n, are defined in the article on generalized Stirling numbers of the first kind and generalized Stirling (convolution) polynomials.
References
Gamma and related functions
Factorial and binomial topics | Pochhammer k-symbol | [
"Mathematics"
] | 756 | [
"Factorial and binomial topics",
"Combinatorics"
] |
12,104,288 | https://en.wikipedia.org/wiki/Spring%201945%20offensive%20in%20Italy | The Spring 1945 offensive in Italy, codenamed Operation Grapeshot, was the final Allied attack during the Italian Campaign in the final stages of the Second World War. The attack in the Lombard Plain by the 15th Allied Army Group started on 6 April 1945 and ended on 2 May with the surrender of all Axis forces in Italy.
Background
The Allies had launched their last major offensive on the Gothic Line in August 1944, with the British Eighth Army (Lieutenant-General Oliver Leese) attacking up the coastal plain of the Adriatic and the U.S. Fifth Army (Lieutenant General Mark Clark) attacking through the central Apennine Mountains. Although they managed to breach the formidable Gothic Line defenses, the Allies failed to break into the Po Valley before the winter weather made further attempts impossible. The Allied forward formations spent the rest of the winter of 1944 in inhospitable conditions while preparations were being made for a spring offensive in 1945.
Command changes
When Field Marshal Sir John Dill, the head of the British Mission in Washington, D.C., died on 5 November, Field Marshal Sir Henry Maitland Wilson was appointed his replacement. General Harold Alexander, having been promoted to Field Marshal, replaced Wilson as Allied Supreme Commander Mediterranean on 12 December. Clark succeeded Alexander as commander of the Allied forces in Italy (renamed 15th Army Group), but without promotion. Lieutenant General Lucian Truscott, the commander of the U.S. VI Corps from the Battle of Anzio and the capture of Rome to Alsace, landed in the South of France during Operation Dragoon and returned to Italy to assume command of the Fifth Army.
On 23 March, Albert Kesselring was appointed Commander-in-Chief West, replacing General-Field Marshal Gerd von Rundstedt. Heinrich von Vietinghoff returned from the Baltic to take over from Kesselring and Traugott Herr, the experienced commander of the LXXVI Panzer Corps, took over the 10th Army. Joachim Lemelsen, who had temporarily commanded the 10th Army, returned to command the 14th Army.
Orders of battle
Allied manpower shortages continued in October 1944. The 4th Indian Infantry Division had been sent to Greece and the British 4th Infantry Division had followed them in November along with the 139th Brigade of the British 46th Infantry Division. The rest of the division followed in December along with the 3rd Greek Mountain Brigade. In early January 1945, the British 1st Infantry Division was sent to Palestine and at the end of the month the I Canadian Corps and the British 5th Infantry Division were ordered to the North West Europe Campaign. This reduced the Eighth Army, now commanded by Lieutenant-General Richard McCreery, to seven divisions. Two other British divisions were to follow them to North-Western Europe, but Alexander kept them in Italy.
The U.S. Fifth Army had been reinforced between September and November 1944 with the 1st Brazilian Division, and in January 1945, with the specialist U.S. 10th Mountain Division. Allied strength amounted to 17 divisions and eight independent brigades (including 4 Italian groups of volunteers from the Italian Co-Belligerent Army which were equipped and trained by the British), equivalent to just under 20 divisions. The 15th Army Group ration strength was 1,334,000 men, the Eighth Army having an effective strength of 632,980 men, and the Fifth Army 266,883.
As of 9 April, the Axis in Italy had 21 much weaker German divisions and four Italian National Republican Army (ENR) divisions, with about 349,000 German and 45,000 Italian troops. There were another 91,000 German troops on the lines of communication, and Germans commanded about 100,000 Italian police. Three of the Italian divisions were allocated to the Ligurian Army under Rodolfo Graziani which guarded the western flank facing France. Finally, the fourth division was with the 14th Army in a sector thought less likely to be attacked.
Plan of attack
On 18 March, Clark set out his battle plan. Its objective was "to destroy the maximum number of enemy forces south of the Po, force crossings of the Po and capture Verona". In Phase I, the Eighth Army would cross the Senio and Santerno rivers and then make a dual thrust, one towards Budrio parallel to the Bologna road, Route 9 (the Via Emilia) and the other northwest along Route 16, the , towards Bastia and the Argenta Gap which was a narrow strip of dry terrain through the flooded land west of Lake Comacchio.
An amphibious operation across the lake and parachute drop would bring pressure to bear on the flank and help to break the Argenta position. Depending on the relative success of these actions, a decision would be made on whether the Eighth Army's prime objective would become Ferrara on the or remain Budrio. The U.S. Fifth Army was to launch the Army Group's main effort at 24 hours notice from two days after the Eighth Army attack, and break into the Po valley. The capture of Bologna was looked upon as a secondary task.
In Phase II, the Eighth Army was to drive northwest to capture Ferrara and Bondeno, blocking routes of potential retreat across the Po. The U.S. Fifth Army was to push past Bologna, north to link with Eighth Army in the Bondeno region, to complete an encirclement of German forces south of the Po. The Fifth Army was to make a secondary thrust further west towards Ostiglia, the crossing point on the Po of the main route to Verona. Phase III involved the establishment of bridgeheads across the Po and exploitation north.
The Eighth Army plan (Operation Buckland) had to deal with the difficult task of getting across the Senio, with its raised artificial banks varying between and in height and honeycombed with tunnels and bunkers front and rear. V Corps was ordered to make an attack on the salient formed by the river into the Allied line at Cotignola. On the right of the river's salient was 8th Indian Infantry Division, reprising the role they played crossing the Rapido in the final Battle of Monte Cassino. To the left of the 8th Indian Division, on the left of the salient, the 2nd New Zealand Division would attack across the river to form a pincer. To the left of V Corps, on Route 9, the II Polish Corps would widen the front further by attacking across the Senio towards Bologna. The Poles had been desperately under strength in the autumn of 1944, but had received 11,000 reinforcements during the early months of 1945, mainly from Polish conscripts in the German Army taken prisoner in the Battle of Normandy.
Once across the Senio, the assault divisions were to advance to cross the Santerno. Once the Santerno was crossed, the British 78th Infantry Division would reprise their Cassino role and pass through the bridgehead established by the Indians and New Zealanders and drive for Bastia and the Argenta gap, behind the Senio, where the dry land narrowed to a front of only , bounded on the right by Lake Comacchio, a huge lagoon running to the Adriatic coast, and on the left by a marshland. At the same time, the British 56th (London) Infantry Division would launch the amphibious flank attack along Lake Comacchio. On the left flank of V Corps, the New Zealand Division would advance to the left of the marshland on the west side of Argenta while the 8th Indian Infantry Division would pass into army reserve.
The Fifth Army plan (Operation Craftsman) envisaged an initial thrust by IV Corps along to straighten the army front and to draw German reserves away from . II Corps would then attack along Strada statale 65 towards Bologna. The weight of the attack would then switch westward again to break into the Po valley skirting Bologna.
Battle
In the first week of April, diversionary attacks were launched on the extreme right and left of the Allied front to draw German reserves away from the main assaults. Operation Roast was an assault by 2nd Commando Brigade and tanks to capture the seaward isthmus of land bordering Lake Comacchio and seize Port Garibaldi on the lake's north side. Damage to other transport infrastructure forced Axis forces to use sea, canal, and river routes for supply. During this time, Axis shipping was being attacked in bombing raids such as Operation Bowler.
The build-up to the main assault started on 6 April with heavy artillery bombardment of the Senio defenses. In the early afternoon of 9 April, 825 heavy bombers dropped fragmentation bombs on the support zone behind the Senio, followed by medium and fighter bombers. From 15:20 to 19:10, five heavy artillery barrages were fired, each lasting 30 minutes, interspersed with fighter bomber attacks. In support of the New Zealand operations, 28 Churchill Crocodiles and 127 Wasp flamethrower vehicles were deployed along the front. The 8th Indian Infantry Division, 2nd New Zealand Division, and 3rd Carpathian Division (on the Polish Corps front at Route 9) attacked at dusk. Two Victoria Crosses were won by the 8th Indian Infantry Division in the fighting. They had reached the Santerno beyond by dawn on 11 April. The New Zealanders had reached the Santerno at nightfall on 10 April and succeeded in making a crossing at dawn on 11 April. The Poles had also closed on the Santerno by the night of 11 April.
By late morning of 12 April, after an all-night assault, the 8th Indian Infantry Division was established on the far side of the Santerno and the 78th Infantry Division started their pass through to make the assault on Argenta. In the meantime, the 24th Guards Brigade, part of the 56th (London) Infantry Division, had launched an amphibious flanking attack from the water to the right of the Argenta Gap. Although they gained a foothold, they were still held up at positions on the Fossa Marina on the night of 14 April. The 78th Infantry Division was also held up that same day on the Reno River at Bastia.
The Fifth Army began its assault on 14 April after a bombardment by 2,000 heavy bombers and 2,000 guns along with attacks by IV Corps (1st Brazilian, 10th Mountain and 1st Armored Divisions) on the left. This was followed on the night of 15 April by II Corps attacking with 6th South African Armoured Division and the 88th Infantry Division advancing towards Bologna between Highway 64 and 65 and the 91st and 34th Infantry Divisions along Highway 65.
Progress against a determined German defense was slow, but ultimately the superior Allied firepower and lack of German reserves allowed the Allies to break through the mountain defenses and reach the plains of the Po valley. The 10th Mountain Division was directed to bypass Bologna on the right and push north leaving II Corps to deal with Bologna, along with Eighth Army units advancing from their right.
By 19 April, on the Eighth Army front, the Argenta Gap had been forced and the 6th Armoured Division was released through the left wing of the advancing 78th Infantry Division to swing left to race northwest along the line of the river Reno to Bondeno and link up with the Fifth Army to complete the encirclement of the German armies defending Bologna.
On the same day, the Italian National Liberation Committee for Northern Italy, in command of the Italian resistance movement, ordered a general insurrection; in the following days, fighting between Italian partisans and the German and RSI forces broke out in Turin and Genoa (as well as in many other towns across Northern Italy), while German forces prepared to withdraw from Milan. On all fronts, the German defense continued to be strong and effective, but Bondeno was captured on 23 April. The 6th Armoured Division linked with the 10th Mountain Division the next day at Finale some upstream along the river Panaro from Bondeno. Bologna was entered in the morning of 21 April by the 3rd Carpathian Infantry Division of the II Polish Corps and the Combat Group of the Italian Co-belligerent Army advancing up the line of Route 9, followed two hours later by II US Corps from the south. On 24 April, Parma and Reggio Emilia were liberated by the partisans.
IV Corps had continued its northwards advance and reached the Po river at San Benedetto on 22 April. The river was crossed the next day and they advanced north to Verona which they entered on 26 April. To the right of Fifth Army on Eighth Army's left wing, XIII Corps crossed the Po at Ficarolo on 22 April, while V Corps were crossing the Po by 25 April, heading towards the Venetian Line, a defensive line built behind the line of the river Adige.
As Allied forces pushed across the Po, on the left flank, the Brazilian Division, 34th Infantry Division, and 1st Armored Division of IV Corps were pushed west and northwest along the line of Highway 9 towards Piacenza and across the Po to seal possible escape routes into Austria and Switzerland via Lake Garda. On 27 April, the 1st Armored Division entered Milan which had been liberated by the partisans on 25 April and the IV Corps commander Willis D. Crittenberger entered the city on 30 April. Turin was also liberated by partisan forces on 25 April, after five days of fighting. On 27 April, General Günther Meinhold surrendered his 14,000 troops to the partisans in Genoa. To the south of Milan, at Collecchio-Fornovo, the Brazilian Division bottled up the remaining German and RSI units, taking 13,500 prisoners on 28 April. On the Allied far right flank, V Corps, met by lessened resistance, traversed the Venetian Line and entered Padua in the early hours of 29 April to find that partisans had locked up the German garrison of 5,000.
Aftermath
Secret surrender negotiations between representatives of the Germans and Western Allies had taken place in Switzerland (Operation Crossword) in March, but had resulted only in protests from the Soviets that the Western Allies were attempting to negotiate a separate peace. On 28 April, Vietinghoff sent emissaries to the Allied Army headquarters. On 29 April, they signed an instrument of surrender at the Royal Palace of Caserta stating that hostilities would formally end on 2 May. Confirmation from Vietinghoff did not reach the 15th Army Group headquarters until the morning of 2 May. It emerged that Kesselring had his authority as Commander of the West extended to include Italy and had replaced Vietinghoff with General Friedrich Schulz from Army Group G on hearing of the plans. After a period of confusion, during which the news of Hitler's death arrived, Schulz obtained Kesselring's agreement to the surrender and Vietinghoff was reinstated to see it through.
On 1 May 1945, the Chief of Staff of the National Republican Army, Marshal Rodolfo Graziani, announced the unconditional surrender of the Italian Social Republic and ordered the forces under his command to lay down their arms. Lieutenant General Max-Josef Pemsel, Chief of General Staff of the Army Liguria, consisting of three German and three Italian divisions, followed Graziani's orders and declared in a broadcast message: "I confirm without reserve the words of my Commander, Marshal Graziani. You must obey his orders."
See also
Mediterranean and Middle East theatre of World War II
Italian campaign (World War II)
Surrender of Caserta
European Theatre of World War II
Gothic Line order of battle
Notes
References
Bibliography
External links
Website of the project by
Further reading
Vito Paticchia, Luigi Arbizzani, Biblioteca comunale dell'Archiginnasio (Bologna, Italy), Combat photo, 1944-1945: l'amministrazione militare alleata dell'Appennino e la liberazione di Bologna nelle foto e nei documenti (Bologna: Grafis, 1994)
Italian campaign (World War II)
World War II defensive lines
World War II operations and battles of the Italian Campaign
Battles and operations of World War II involving Germany
1945 in Italy
Conflicts in 1945
Battles of World War II involving the United States
Battles of World War II involving the United Kingdom
Battles and operations of World War II involving India
Battles and operations of World War II involving Poland
Battles and operations of World War II involving New Zealand
Battles and operations of World War II involving South Africa
Battles of World War II involving Brazil
Battles of World War II involving Italy
April 1945
May 1945 | Spring 1945 offensive in Italy | [
"Engineering"
] | 3,303 | [
"World War II defensive lines",
"Fortification lines"
] |
12,104,505 | https://en.wikipedia.org/wiki/Changaa | Changaa or Chang'aa is a traditional home-brewed spirit, popular in Kenya. It is made by fermentation and distillation from grains like millet, maize and sorghum, and is very potent.
Regulation
After being illegal in Kenya for many years, the Kenyan government legalised the traditional home-brewed spirit in 2010, in an effort to take business away from establishments where toxic chemicals are added to the brew to make it stronger. Under the new law, chang'aa must be manufactured, distributed and sold in glass bottles, and retailers must display health warning signs. Sale to individuals under age 18 is still prohibited, as is sale through automatic vending machines. Anyone making or selling adulterated chang'aa risks penalties of five million shillings, five years in jail, or both. Chang'aa is usually much cheaper and stronger than other alcoholic drinks, making it the beverage of choice for many.
Production and distribution
Its production and distribution in urban slums has to some extent continued to be controlled in many cases by criminal gangs like the Mungiki who run protection cartels for illicit brewers. However, in the rest of the country production is still under traditional brewers. Illegally brewed chang'aa could be purchased for around US$0.20 to $0.40 per glass.
Health concerns
The drink is sometimes adulterated by adding substances like jet fuel, embalming fluid or battery acid, which has the effect of giving the beverage more 'kick'. Drinkers have suffered blindness or death due to methanol poisoning. The water used to make the drink in illegal breweries is also often below acceptable health standards and sometimes contaminated with sewage.
Origin of name
The spirit's name, Chang'aa, means literally "kill me quick."
The name Chang'aa was adopted in the 1950s when Oyuga Muganda, an AP in Kisumu, once narrated the story in the presence of Tom Omuga how Pelele (aka Woraj) got its name as Chang'aa.
Women from the Kano area of Kisumu used to ferry fresh milk in pots to sell to Kisumu residents. The colonial government had banned the sale of local liquor (Pelele) and so the women carrying milk also carried Pelele disguised as milk.
One day a white policeman who had been tipped off about the underground trafficking of Pelele stopped the women milk sellers on the road to inspect their pots of "milk". One of the pots had Pelele and not milk.
The policeman ordered all the pots lined on the roadside and asked "Maziwa ya nani?"
An interpreter repeated in Dholuo "Ma Chag ng'a?" (meaning "whose milk is this?").
So the policeman charged the women with transporting and selling Chang'aa. That is allegedly how the name Chang'aa came about.
See also
Waragi (in Uganda)
References
Distilled drinks
Kenyan cuisine
Adulteration
Alcohol in Kenya | Changaa | [
"Chemistry"
] | 610 | [
"Adulteration",
"Distillation",
"Drug safety",
"Distilled drinks"
] |
12,105,320 | https://en.wikipedia.org/wiki/Macrophage%20migration%20inhibitory%20factor | Macrophage migration inhibitory factor (MIF), also known as glycosylation-inhibiting factor (GIF), L-dopachrome isomerase, or phenylpyruvate tautomerase is a protein that in humans is encoded by the MIF gene. MIF is an important regulator of innate immunity. The MIF protein superfamily also includes a second member with functionally related properties, the D-dopachrome tautomerase (D-DT). CD74 is a surface receptor for MIF.
Bacterial antigens stimulate white blood cells to release MIF into the blood stream. The circulating MIF binds to CD74 on other immune cells to trigger an acute immune response. Hence, MIF is classified as an inflammatory cytokine. Furthermore, glucocorticoids also stimulate white blood cells to release MIF, and hence MIF partially counteracts the inhibitory effects that glucocorticoids have on the immune system. Finally, trauma activates the anterior pituitary gland to release MIF.
Structure
Macrophage migration inhibitory factor assembles into a trimer composed of three identical subunits. Each of these monomers contain two antiparallel alpha helices and a four-stranded beta sheet. The monomers surround a central channel with 3-fold rotational symmetry.
Response to injury
Cytokines play an important role in promoting wound healing and tissue repair. Cell injury results in MIF release which then interacts with CD74. MIF-CD74 signaling activates pro-survival and proliferative pathways that protects the host during injury.
Enzymatic activity
MIF contains two motifs with catalytic activity. The first is a 27-amino-acid motif located at the N-terminus that functions as a phenylpyruvate tautomerase, which can catalyze the conversion of 2-carboxy-2,3-dihydroindole-5,6-quinone (dopachrome) into 5,6-dihydroxyindole-2-carboxylic acid (DHICA). MIF also contains a Cys-Ala-Leu-Cys catalytic site between residues 57 and 60 that appears to function as a disulfide reductase.
Function
This gene encodes a lymphokine involved in cell-mediated immunity, immunoregulation, and inflammation. MIF plays a role in the regulation of macrophage function in host defense through the suppression of anti-inflammatory effects of glucocorticoids. This lymphokine and the JAB1 protein form a complex in the cytosol near the peripheral plasma membrane, which may indicate a role in integrin signaling pathways.
Mechanism of action
MIF binds to CD74, inducing its phosphorylation and the recruitment of CD44, which then activates non-receptor tyrosine kinases, leading ultimately to extracellular signal-regulated kinase phosphorylation. In addition to ERK, stimulation of CD74 activates other signaling pathways such as the PI3K-Akt, NF-κB, and AMP-activated protein kinase (AMPK) pathways.
Interactions
Macrophage migration inhibitory factor has been reported to interact with:
BNIPL,
CD74,
COPS5,
CXCR4, and
RPS19.
Clinical significance
MIF is a potential drug target for sepsis, rheumatoid arthritis, and cancer.
Parasite-produced MIF homologs
Multiple protozoan parasites produce MIF homologs that have inflammatory functions similar to human MIF and play a role in their pathogenesis, invasion and immune evasion. A preclinical study showed that blocking parasite MIF improves outcomes in severe protozoan infections. Protozoans with reported MIF homologs include:
Entamoeba histolytica,
Plasmodium falciparum,
Toxoplasma gondii,
Leishmania,
Trichomonas vaginalis.
References
External links
Immune system | Macrophage migration inhibitory factor | [
"Biology"
] | 850 | [
"Immune system",
"Organ systems"
] |
12,106,301 | https://en.wikipedia.org/wiki/Partition%20%28politics%29 | In international relations, a partition is a division of a previously unified territory into two or more parts.
Brendan O'Leary distinguishes partition, a change of political borders cutting through at least one territory considered a homeland by some community, from secession, which takes place within existing recognized political units. For Arie Dubnov and Laura Robson, partition is the physical division of territory along ethno-religious lines into separate nation-states.
History
Dubnov and Robson locate partition in the context of post-World War I peacebuilding and the "new conversations surrounding ethnicity, nationhood, and citizenship" that emerged out of it. The post-war agreements, such as the League of Nations mandate system, promoted "a new political language of ethnic separatism as a central aspect of national self-determination, while protecting and disguising continuities and even expansions of French and, especially, British imperial powers". After World War II, they argue, partition transformed from "an imperial tactic into an organizing principle" of world diplomacy.
Ranabir Samaddar agrees that partition gained prominence following World War I, particularly with the partition of the Ottoman Empire and the Dissolution of Austria-Hungary, resulting from competing national ambitions. By this point, he argues, ethnicity had become the primary justification of border proposals.
Scholarship has closely linked partition to violence. Tracing the precedent for the Partition of Ireland in population resettlements across former Ottoman Empire territories and the making of national 'majorities' and 'minorities', Dubnov and Robson emphasise how partitions after Ireland contained proposals to transfer "inconvenient populations in addition to forcible territorial division into separate states," which they note had violent consequences for local actors who were devolved the task of "carving out physically separate political entities on the ground and making them ethnically homogenous".
T.G. Fraser notes that Britain proposed partition in both Ireland and Palestine as a method of resolving conflict between competing national groups, but in neither case did it end communal violence. Rather, Fraser argues, partition merely gave these conflicts a "new dimension".
Similarly, A. Dirk Moses asserts partition does not "so much solve minority issues as deposit them into different containers as minority issues reappear in partitioned units", rejecting what he calls "divine cartographies" that seek to "neatly map peoples as naturally emplaced in their homelands" for disregarding the heterogeneous reality of identity in the real world.
Arguments for
historicist – that partition is inevitable, or already in progress
last resort – that partition should be pursued to avoid the worst outcomes (genocide or large-scale ethnic expulsion), if all other means fail
cost–benefit – that partition offers a better prospect of conflict reduction than if the existing borders are not changed
better tomorrow – that partition will reduce current violence and conflict, and that the new more homogenized states will be more stable
rigorous end – heterogeneity leads to problems, hence homogeneous states should be the goal of any policy
Arguments against
national territorial unity will be lost
bi-nationalism and multi-nationalism are not undesirable
the impossibility of a just partition
difficult in deciding how the new border(s) will be drawn
the likelihood of disorder and violence
partitioning alone does not lead to the desired homogenization
security issues arising within the borders of the new states
Daniel Posner has argued that partitions of diverse communities into homogenous communities is unlikely to solve problems of communal conflict, as the boundary changes will alter the actors' incentives and give rise to new cleavages. For example, while the Muslim and Hindu cleavages might have been the most salient amid the Indian independence movement, the creation of a religiously homogenous Hindu state (India) and a religiously homogeneous Muslim state (Pakistan) created new social cleavages on lines other than religion in both of those states. Posner writes that relatively homogenous countries can be more violence-prone than countries with a large number of evenly matched ethnic groups.
Examples
Europe and the Middle East
Partition, multiple times, of the Roman Empire into the Eastern and Western Roman Empire, following the Crisis of the Third Century.
Partition of Prussia by the Second Peace of Thorn in 1466, creating Royal Prussia, and the Duchy of Prussia in 1525
Partition of Catalonia by the Treaty of the Pyrenees in 1659: Northern Catalan territories (Roussillon) were given to France by Spain.
In the Treaty of Versailles (1757), France and Austria agreed upon the partition of Prussia
German occupation of Czechoslovakia: The Sudetenland was ceded to Nazi Germany under the Munich Agreement of 1938, and the country was later divided into the German-administered Protectorate of Bohemia and Moravia and the nominally independent Slovak Republic; later reunified at the end of World War II.
Three Partitions of Luxembourg, the last of which in 1839, divided Luxembourg between France, Prussia, Belgium, and the independent Grand Duchy of Luxembourg.
Three Partitions of Poland in 1772, 1793, and 1795, which led to the complete annihilation of the Polish–Lithuanian Commonwealth.
The Treaty of Bucharest in 1913 partitioned the region of Macedonia between Serbia (now North Macedonia), Greece and Bulgaria.
Partition of Tyrol by the London Pact of 1915 ratified during World War I.
Partition of the German Empire in 1919 by the Treaty of Versailles.
Partition of Prussia in 1919.
Partition of the Ottoman Empire.
Partition of the Austro-Hungarian Empire by the Treaty of Saint-Germain-en-Laye and the Treaty of Trianon.
Partition of Ireland in 1920 into Northern Ireland and Southern Ireland
This partition was only partially implemented as, following the Irish War of Independence, Southern Ireland became the Irish Free State
Treaty of Kars of 1921, which partitioned Ottoman Armenia between Turkey and the Soviet Union (Western and Eastern Armenia).
Partition of Allied-occupied Germany and Berlin after World War II
The Morgenthau Plan proposed independent states in North and South Germany, an international zone in the Ruhr Area, and the transfer of disputed border areas to France and Poland
The actual post-war settlement created West Germany and East Germany and included the annexation of former eastern territories of Germany by Poland and the Soviet Union. Later, East and West Germany were unified at the end of the Cold War.
Partition of East Prussia between Poland and the Soviet Union
The 1947 United Nations Partition Plan for Palestine was never fully implemented
Following the expiration of the British Mandate, David Ben-Gurion declared the establishment of a Jewish state in unspecified borders
The 1948 Palestine war and 1949 Armistice Agreements resulted in the territories of the proposed Arab state in the 1947 plan being occupied by Israel, Transjordan, and Egypt.
Breakup of Yugoslavia in the 1990s.
Independence of Croatia, Bosnia and Herzegovina, North Macedonia and Slovenia from Yugoslavia (leaving Serbia and Montenegro).
Failed partition of the Republic of Serbian Krajina in Croatia after the Croatian War
Ethno-political partition of Bosnia and Herzegovina into two entities, the Serb-majority Republika Srpska and the Bosniak-Croat-majority Federation of Bosnia and Herzegovina, after the Bosnian War.
Partition of Czechoslovakia in 1993 into the independent entities of the Czech Republic and Slovakia.
Partition of Cyprus in 1974 (de facto), into Greek-majority Cyprus and Turkish-majority Northern Cyprus after the Turkish invasion of Cyprus.
Possible Partition of Kosovo after disputed independence (partition from Serbia) in 2008. See also Kosovo independence precedent.
Everywhere else
Partition of Africa (Scramble for Africa), between 1881 and 1914 under the General Act of the Berlin Conference.
Partition of the U.S. state of Virginia in 1863: after Virginia joined the Confederacy in the American Civil War, 50 northwestern counties rejoined the Union as the State of West Virginia.
1947 Partition of Punjab into East Punjab and West Punjab.
1905 Partition of Bengal and 1947 Partition of Bengal into East Bengal and West Bengal.
Partition of Korea in 1945 into American and Soviet zones of occupation.
Division of Korea in 1953 between North Korea and South Korea after the Korean War.
Partition of India (colonial British India) in 1947 into the independent dominions (later republics) of India and Pakistan (which included modern-day Bangladesh).
Partition of China after the Chinese Civil War in 1946–1950 separated the original territory of the Republic of China into the People's Republic of China in Mainland China and the Republic of China on Taiwan and other island groups.
Partition of Punjab in 1966 into the states of Punjab, Haryana and Himachal Pradesh.
Partition of Vietnam in 1954 between North Vietnam and South Vietnam under the Geneva Accord after the First Indochina War. Later reunified in 1976 after the Vietnam War.
The hypothetical partition of the Canadian province of Quebec.
Partition of Sudan into two entities in 2011, the Muslim-majority Sudan and the Christian-majority South Sudan.
See also
Separatism
Secession
References
Further reading
Berg, Eiki. "Re-examining sovereignty claims in changing territorialities: reflections from ‘Kosovo Syndrome’." Geopolitics 14.2 (2009): 219-234.
Downes, Alexander B. "More Borders, Less Conflict? Partition as a Solution to Ethnic Civil Wars." SAIS Review of International Affairs 26.1 (2006): 49–61.
Fearon, James D. "Separatist wars, partition, and world order." Security Studies 13.4 (2004): 394–415.
Horowitz, Michael C., Alex Weisiger, and Carter Johnson. "The limits to partition." International Security 33.4 (2009): 203–210.
Kumar, Radha. "The Partition Debate: Colonialism Revisited or New Policies?." The Brown Journal of World Affairs 7.1 (2000): 3–11.
Kumar, Radha. "Settling Partition Hostilities: Lessons Learned, Options Ahead." The Fate of the Nation-state (2004): 247.
O'Leary, Brendan. "Debating partition: justifications and critiques." Revised version of portion of a paper presented at final conference of the Mapping frontiers, plotting pathways: routes to north–south cooperation in a divided island programme, City Hotel, Armagh, 19–20 January 2006. University College Dublin. Institute for British-Irish Studies (2006).
Robson, Laura. States of Separation: Transfer, Partition, and the Making of the Modern Middle East. University of California Press (2017).
Sambanis, Nicholas, and Jonah Schulhofer-Wohl. "What's in a line? Is partition a solution to civil war?." International Security 34.2 (2009): 82–118.
Borders | Partition (politics) | [
"Physics"
] | 2,150 | [
"Spacetime",
"Borders",
"Space"
] |
12,106,314 | https://en.wikipedia.org/wiki/Jenkins%E2%80%93Traub%20algorithm | The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders".
This article describes the complex variant. Given a polynomial P,
P(z) = \sum_{i=0}^{n} a_i z^i, \qquad a_n \neq 0,
with complex coefficients, it computes approximations to the n zeros of P(z), one at a time in roughly increasing order of magnitude. After each root is computed, its linear factor is removed from the polynomial. Using this deflation guarantees that each root is computed only once and that all roots are found.
The real variant follows the same pattern, but computes two roots at a time, either two real roots or a pair of conjugate complex roots. By avoiding complex arithmetic, the real variant can be faster (by a factor of 4) than the complex variant. The Jenkins–Traub algorithm has stimulated considerable research on theory and software for methods of this type.
Overview
The Jenkins–Traub algorithm calculates all of the roots of a polynomial with complex coefficients. The algorithm starts by checking the polynomial for the occurrence of very large or very small roots. If necessary, the coefficients are rescaled by a rescaling of the variable. In the algorithm, proper roots are found one by one and generally in increasing size. After each root is found, the polynomial is deflated by dividing off the corresponding linear factor. Indeed, the factorization of the polynomial into the linear factor and the remaining deflated polynomial is already a result of the root-finding procedure. The root-finding procedure has three stages that correspond to different variants of the inverse power iteration. See Jenkins and Traub.
A description can also be found in Ralston and Rabinowitz p. 383.
The algorithm is similar in spirit to the two-stage algorithm studied by Traub.
Root-finding procedure
Starting with the current polynomial P(X) of degree n, the aim is to compute the smallest root \alpha_1 of P(x). The polynomial can then be split into a linear factor and the remaining polynomial factor,
P(X) = (X - \alpha_1) \cdot P_1(X).
Other root-finding methods strive primarily to improve the root and thus the first factor. The main idea of the Jenkins-Traub method is to incrementally improve the second factor.
To that end, a sequence of so-called H polynomials is constructed. These polynomials are all of degree n − 1 and are supposed to converge to the factor of P(X) containing (the linear factors of) all the remaining roots. The sequence of H polynomials occurs in two variants, an unnormalized variant that allows easy theoretical insights and a normalized variant of polynomials that keeps the coefficients in a numerically sensible range.
The construction of the H polynomials is guided by a sequence of complex numbers called shifts. These shifts themselves depend, at least in the third stage, on the previous H polynomials. The H polynomials are defined as the solution to the implicit recursion
H^{(0)}(z) = P'(z)
and
(z - s_\lambda) \, H^{(\lambda+1)}(z) = H^{(\lambda)}(z) - \frac{H^{(\lambda)}(s_\lambda)}{P(s_\lambda)} P(z).
A direct solution to this implicit equation is
H^{(\lambda+1)}(z) = \frac{1}{z - s_\lambda} \left( H^{(\lambda)}(z) - \frac{H^{(\lambda)}(s_\lambda)}{P(s_\lambda)} P(z) \right),
where the polynomial division is exact.
Algorithmically, one would use long division by the linear factor as in the Horner scheme or Ruffini rule to evaluate the polynomials at s_\lambda and obtain the quotients at the same time. With the resulting quotients p(X) and h(X) as intermediate results,
P(X) = p(X) (X - s_\lambda) + P(s_\lambda), \qquad H^{(\lambda)}(X) = h(X) (X - s_\lambda) + H^{(\lambda)}(s_\lambda),
the next H polynomial is obtained as
H^{(\lambda+1)}(X) = h(X) - \frac{H^{(\lambda)}(s_\lambda)}{P(s_\lambda)} p(X).
Since the highest degree coefficient is obtained from P(X), the leading coefficient of H^{(\lambda+1)}(X) is -\frac{H^{(\lambda)}(s_\lambda)}{P(s_\lambda)} a_n. If this is divided out, the normalized H polynomial is
\bar H^{(\lambda+1)}(X) = \frac{1}{a_n} \left( p(X) - \frac{P(s_\lambda)}{H^{(\lambda)}(s_\lambda)} h(X) \right).
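As a concrete sketch (Python, with polynomials stored as coefficient lists, highest degree first; helper names are illustrative, not taken from the original CPOLY/RPOLY codes), one update of the H polynomial can be computed with two synthetic divisions:

```python
def horner(poly, s):
    """Synthetic division of poly (coeffs, highest degree first) by (x - s).
    Returns (quotient coefficients, remainder); the remainder equals poly(s)."""
    q, acc = [], 0.0
    for c in poly:
        acc = acc * s + c
        q.append(acc)
    return q[:-1], q[-1]

def next_h(P, H, s):
    """One unnormalized update: H_next = (H - (H(s)/P(s)) * P) / (x - s),
    computed from the two division quotients as h - (H(s)/P(s)) * p."""
    p, P_s = horner(P, s)
    h, H_s = horner(H, s)
    t = H_s / P_s
    h = [0.0] * (len(p) - len(h)) + h    # align degrees (h is one shorter)
    return [hc - t * pc for hc, pc in zip(h, p)]

# P(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, with H0 = P'(x).
P = [1.0, -6.0, 11.0, -6.0]
H1 = next_h(P, [3.0, -12.0, 11.0], 0.0)   # one step with shift s = 0
```

The quotient-based update avoids an explicit polynomial division: the division by X − s is exact by construction, so only the two Horner passes are needed.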
Stage one: no-shift process
For \lambda = 0, 1, \dots, M-1 set s_\lambda = 0. Usually M=5 is chosen for polynomials of moderate degrees up to n = 50. This stage is not necessary from theoretical considerations alone, but is useful in practice. It emphasizes in the H polynomials the cofactor(s) (of the linear factor) of the smallest root(s).
Stage two: fixed-shift process
The shift for this stage is determined as some point close to the smallest root of the polynomial. It is quasi-randomly located on the circle with the inner root radius, which in turn is estimated as the positive solution R of the equation
|a_n| R^n + |a_{n-1}| R^{n-1} + \cdots + |a_1| R = |a_0|.
Since the left side is a convex function and increases monotonically from zero to infinity, this equation is easy to solve, for instance by Newton's method.
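A hedged sketch of this radius estimate (assuming the equation has the lower-bound form |a_n| R^n + … + |a_1| R = |a_0|; function names are illustrative):

```python
def inner_root_radius(coeffs, tol=1e-12):
    """Positive solution R of |a_n| R^n + ... + |a_1| R = |a_0|, found by
    Newton's method; the left side is convex and increasing for R > 0, so
    the positive root is unique.  `coeffs` is [a_n, ..., a_0]."""
    a = [abs(c) for c in coeffs]
    n = len(a) - 1
    def f(R):
        v = 0.0
        for c in a[:-1]:                 # Horner pass for sum |a_i| R^i, i >= 1
            v = (v + c) * R
        return v - a[-1]
    def df(R):
        return sum((n - k) * c * R ** (n - k - 1)
                   for k, c in enumerate(a[:-1]))
    R = 1.0
    for _ in range(100):
        step = f(R) / df(R)
        R -= step
        if R <= 0.0:                     # keep the iterate positive
            R = tol
        if abs(step) < tol * max(1.0, abs(R)):
            break
    return R

# P(x) = x^2 - 3x + 2 has roots 1 and 2; the estimate is a lower bound.
R = inner_root_radius([1.0, -3.0, 2.0])
```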
Now choose the shift s on the circle of this radius. The sequence of polynomials H^{(\lambda+1)}, \lambda = M, M+1, \dots, L-1, is generated with the fixed shift value s_\lambda = s. This creates an asymmetry relative to the previous stage which increases the chance that the H polynomial moves towards the cofactor of a single root.
During this iteration, the current approximation for the root
t_\lambda = s - \frac{P(s)}{\bar H^{(\lambda+1)}(s)}
is traced. The second stage is terminated as successful if the conditions
|t_{\lambda+1} - t_\lambda| < \tfrac{1}{2} |t_\lambda| \quad \text{and} \quad |t_\lambda - t_{\lambda-1}| < \tfrac{1}{2} |t_{\lambda-1}|
are simultaneously met. This limits the relative step size of the iteration, ensuring that the approximation sequence stays in the range of the smaller roots. If there was no success after some number of iterations, a different random point on the circle is tried. Typically one uses a number of 9 iterations for polynomials of moderate degree, with a doubling strategy for the case of multiple failures.
Stage three: variable-shift process
The polynomials H^{(\lambda+1)} are now generated using the variable shifts s_\lambda, which are generated by
s_L = t_L
being the last root estimate of the second stage and
s_{\lambda+1} = s_\lambda - \frac{P(s_\lambda)}{\bar H^{(\lambda+1)}(s_\lambda)},
where \bar H^{(\lambda+1)} is the normalized H polynomial, that is H^{(\lambda+1)} divided by its leading coefficient.
If the step size in stage three does not fall fast enough to zero, then stage two is restarted using a different random point. If this does not succeed after a small number of restarts, the number of steps in stage two is doubled.
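The pieces above — synthetic division, the H-polynomial update, and the variable shift — can be combined into a toy stage-three loop (a sketch under simplifying assumptions, not the production CPOLY/RPOLY logic; stages one and two are skipped by starting from a crude shift near the smallest root):

```python
def horner(poly, s):
    """Synthetic division by (x - s): returns (quotient, poly(s))."""
    q, acc = [], 0.0
    for c in poly:
        acc = acc * s + c
        q.append(acc)
    return q[:-1], q[-1]

def variable_shift(P, H, s, iters=20, tol=1e-12):
    """Toy stage-three iteration: update H via the recursion, then shift
    s <- s - P(s) / Hbar(s), with Hbar = H divided by its leading coefficient."""
    for _ in range(iters):
        p, P_s = horner(P, s)
        if abs(P_s) < tol:               # s is (numerically) a root
            return s
        h, H_s = horner(H, s)
        t = H_s / P_s
        h = [0.0] * (len(p) - len(h)) + h
        H = [hc - t * pc for hc, pc in zip(h, p)]
        Hbar_s = horner(H, s)[1] / H[0]
        s -= P_s / Hbar_s
    return s

# P(x) = (x-1)(x-2)(x-3); start near the smallest root with H0 = P'.
P = [1.0, -6.0, 11.0, -6.0]
root = variable_shift(P, [3.0, -12.0, 11.0], 0.4)
```

Started from s = 0.4, the iterate is drawn to the nearby root at 1; the H polynomial simultaneously aligns with the cofactor (x − 2)(x − 3).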
Convergence
It can be shown that, provided L is chosen sufficiently large, sλ always converges to a root of P.
The algorithm converges for any distribution of roots, but may fail to find all roots of the polynomial. Furthermore, the convergence is slightly faster than the quadratic convergence of the Newton–Raphson method; however, it uses one and a half times as many operations per step, two polynomial evaluations for Newton vs. three polynomial evaluations in the third stage.
What gives the algorithm its power?
Compare with the Newton–Raphson iteration
z_{k+1} = z_k - \frac{P(z_k)}{P'(z_k)}.
The iteration uses the given P and P'. In contrast the third-stage of Jenkins–Traub,
s_{\lambda+1} = s_\lambda - \frac{P(s_\lambda)}{\bar H^{(\lambda+1)}(s_\lambda)},
is precisely a Newton–Raphson iteration performed on certain rational functions. More precisely, Newton–Raphson is being performed on a sequence of rational functions
W^{(\lambda)}(z) = \frac{P(z)}{H^{(\lambda)}(z)}.
For \lambda sufficiently large,
\frac{P(z)}{\bar H^{(\lambda)}(z)}
is as close as desired to a first degree polynomial
z - \alpha_1,
where \alpha_1 is one of the zeros of P. Even though Stage 3 is precisely a Newton–Raphson iteration, differentiation is not performed.
Analysis of the H polynomials
Let \alpha_1, \dots, \alpha_n be the roots of P(X). The so-called Lagrange factors of P(X) are the cofactors of these roots,
P_m(X) = \frac{P(X)}{X - \alpha_m}.
If all roots are different, then the Lagrange factors form a basis of the space of polynomials of degree at most n − 1. By analysis of the recursion procedure one finds that the H polynomials have the coordinate representation
H^{(\lambda)}(X) = \sum_{m=1}^{n} \left[ \prod_{\kappa=0}^{\lambda-1} (\alpha_m - s_\kappa)^{-1} \right] P_m(X).
Each Lagrange factor has leading coefficient 1, so that the leading coefficient of the H polynomials is the sum of the coefficients. The normalized H polynomials are thus
\bar H^{(\lambda)}(X) = \frac{\sum_{m=1}^{n} \left[ \prod_{\kappa=0}^{\lambda-1} (\alpha_m - s_\kappa)^{-1} \right] P_m(X)}{\sum_{m=1}^{n} \prod_{\kappa=0}^{\lambda-1} (\alpha_m - s_\kappa)^{-1}}.
Convergence orders
If the condition |\alpha_1 - s_\kappa| < |\alpha_m - s_\kappa| for m = 2, \dots, n holds for almost all iterates, the normalized H polynomials will converge at least geometrically towards the Lagrange factor P_1(X) of the smallest root.
Under the condition that
one gets the asymptotic estimates for
stage 1:
for stage 2, if s is close enough to : and
and for stage 3: estimates giving rise to a higher than quadratic convergence order of 1 + \varphi = \varphi^2 \approx 2.618, where \varphi = \tfrac{1}{2}(1 + \sqrt{5}) is the golden ratio.
Interpretation as inverse power iteration
All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix. This matrix is the coordinate representation of a linear map in the n-dimensional space of polynomials of degree n − 1 or less. The principal idea of this map is to interpret the factorization
P(X) = (X - \alpha) \cdot Q(X)
with a root \alpha and Q(X) the remaining factor of degree n − 1 as the eigenvector equation for the multiplication with the variable X, followed by remainder computation with divisor P(X),
M_X(H) = (X \cdot H(X)) \bmod P(X).
This maps polynomials of degree at most n − 1 to polynomials of degree at most n − 1. The eigenvalues of this map are the roots of P(X), since the eigenvector equation reads
M_X(Q) = (X \cdot Q(X)) \bmod P(X) = \alpha \cdot Q(X),
which implies that (X - \alpha) \cdot Q(X) = P(X), that is, X - \alpha is a linear factor of P(X). In the monomial basis the linear map M_X is represented by a companion matrix of the polynomial P; for monic P the resulting transformation matrix is
A = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{pmatrix}.
To this matrix the inverse power iteration is applied in the three variants of no shift, constant shift and generalized Rayleigh shift in the three stages of the algorithm. It is more efficient to perform the linear algebra operations in polynomial arithmetic and not by matrix operations, however, the properties of the inverse power iteration remain the same.
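A minimal illustration of this multiplication-then-remainder map in polynomial arithmetic (pure Python, monic P assumed; names are illustrative): applied to the cofactor of a root, the map reproduces the eigenvector equation.

```python
def mulx_mod(h, P):
    """Apply M_X: H(X) -> (X * H(X)) mod P(X).
    Coefficients are highest degree first; P is monic, deg h <= deg P - 1."""
    r = h + [0.0]                        # multiply by X
    if len(r) == len(P):                 # degree reached deg P: reduce once
        r = [ri - r[0] * pi for ri, pi in zip(r, P)][1:]
    return r

# P(X) = (X-1)(X-2)(X-3); the cofactor of the root 2 is Q(X) = (X-1)(X-3).
P = [1.0, -6.0, 11.0, -6.0]
Q = [1.0, -4.0, 3.0]
MQ = mulx_mod(Q, P)                      # eigenvector equation: M_X(Q) = 2*Q
```

Here MQ comes out as twice Q, confirming that the eigenvalue attached to the cofactor of a root is the root itself.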
Real coefficients
The Jenkins–Traub algorithm described earlier works for polynomials with complex coefficients. The same authors also created a three-stage algorithm for polynomials with real coefficients. See Jenkins and Traub A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration. The algorithm finds either a linear or quadratic factor working completely in real arithmetic. If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast. The real algorithm always converges and the rate of convergence is greater than second order.
A connection with the shifted QR algorithm
There is a surprising connection with the shifted QR algorithm for computing matrix eigenvalues. See Dekker and Traub The shifted QR algorithm for Hermitian matrices. Again the shifts may be viewed as Newton-Raphson iteration on a sequence of rational functions converging to a first degree polynomial.
Software and testing
The software for the Jenkins–Traub algorithm was published as Jenkins and Traub Algorithm 419: Zeros of a Complex Polynomial. The software for the real algorithm was published as Jenkins Algorithm 493: Zeros of a Real Polynomial.
The methods have been extensively tested by many people. As predicted they enjoy faster than quadratic convergence for all distributions of zeros.
However, there are polynomials which can cause loss of precision, as illustrated by the following example. The polynomial has all its zeros lying on two half-circles of different radii. Wilkinson notes that for stable deflation it is desirable that smaller zeros be computed first. The second-stage shifts are chosen so that the zeros on the smaller half circle are found first. After deflation the polynomial with the zeros on the half circle is known to be ill-conditioned if the degree is large; see Wilkinson, p. 64. The original polynomial was of degree 60 and suffered severe deflation instability.
References
External links
A free downloadable Windows application using the Jenkins–Traub Method for polynomials with real and complex coefficients
RPoly++ An SSE-Optimized C++ implementation of the RPOLY algorithm.
Numerical analysis
Polynomial factorization algorithms | Jenkins–Traub algorithm | [
"Mathematics"
] | 2,286 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
12,106,733 | https://en.wikipedia.org/wiki/2-Chloro-9%2C10-bis%28phenylethynyl%29anthracene | 2-Chloro-9,10-bis(phenylethynyl)anthracene is a fluorescent dye used in lightsticks. It emits green light, used in 12-hour low-intensity Cyalume sticks.
See also
9,10-Bis(phenylethynyl)anthracene
1-Chloro-9,10-bis(phenylethynyl)anthracene
Fluorescent dyes
Organic semiconductors
Anthracenes
Alkyne derivatives
Chloroarenes | 2-Chloro-9,10-bis(phenylethynyl)anthracene | [
"Chemistry"
] | 112 | [
"Semiconductor materials",
"Molecular electronics",
"Organic semiconductors"
] |
12,106,740 | https://en.wikipedia.org/wiki/Random%20geometric%20graph | In graph theory, a random geometric graph (RGG) is the mathematically simplest spatial network, namely an undirected graph constructed by randomly placing N nodes in some metric space (according to a specified probability distribution) and connecting two nodes by a link if and only if their distance is in a given range, e.g. smaller than a certain neighborhood radius, r.
Random geometric graphs resemble real human social networks in a number of ways. For instance, they spontaneously demonstrate community structure - clusters of nodes with high modularity. Other random graph generation algorithms, such as those generated using the Erdős–Rényi model or Barabási–Albert (BA) model do not create this type of structure. Additionally, random geometric graphs display degree assortativity according to their spatial dimension: "popular" nodes (those with many links) are particularly likely to be linked to other popular nodes.
Percolation theory on the random geometric graph (the study of its global connectivity) is sometimes called the Gilbert disk model after the work of Edgar Gilbert, who introduced these graphs and percolation in them in a 1961 paper. A real-world application of RGGs is the modeling of ad hoc networks. Furthermore they are used to perform benchmarks for graph algorithms.
Definition
In the following, let G = (V, E) denote an undirected graph with a set of vertices V and a set of edges E ⊆ V × V. The set sizes are denoted by n = |V| and m = |E|. Additionally, if not noted otherwise, the metric space [0,1)^d with the euclidean distance is considered, i.e. for any points x, y ∈ [0,1)^d the euclidean distance of x and y is defined as
||x - y||_2 = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}.
A random geometric graph (RGG) is an undirected geometric graph with nodes randomly sampled from the uniform distribution of the underlying space [0,1)^d. Two vertices p, q ∈ V are connected if, and only if, their distance is less than a previously specified parameter r ∈ (0,1), excluding any loops. Thus, the parameters n and r fully characterize a RGG.
Algorithms
Naive algorithm
The naive approach is to calculate the distance of every vertex to every other vertex. As there are n(n-1)/2 possible connections that are checked, the time complexity of the naive algorithm is \Theta(n^2). The samples are generated by using a random number generator (RNG) on [0,1). Practically, one can implement this using d random number generators on [0,1), one RNG for every dimension.
Pseudocode
V := generateSamples(n) // Generates n samples in the unit cube.
for each p ∈ V do
for each q ∈ V\{p} do
if distance(p, q) ≤ r then
addConnection(p, q) // Add the edge (p, q) to the edge data structure.
end if
end for
end for
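The pseudocode above translates directly into runnable Python (a sketch; `math.dist` requires Python 3.8+, and the function name is illustrative):

```python
import math
import random

def naive_rgg(n, r, d=2, seed=0):
    """Naive O(n^2) RGG generator: n uniform samples in the unit cube
    [0,1)^d, with an edge whenever the euclidean distance is <= r."""
    rng = random.Random(seed)
    V = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
    E = [(p, q)
         for p in range(n) for q in range(p + 1, n)   # each pair checked once
         if math.dist(V[p], V[q]) <= r]
    return V, E

V, E = naive_rgg(200, 0.1)
```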
As this algorithm is not scalable (every vertex needs information of every other vertex), Holtgrewe et al. and Funke et al. have introduced new algorithms for this problem.
Distributed algorithms
Holtgrewe et al.
This algorithm, which was proposed by Holtgrewe et al., was the first distributed RGG generator algorithm for dimension 2. It partitions the unit square into equal sized cells with side length of at least r. For a given number of processors P, each processor is assigned an equal share of the cells. For simplicity, P is assumed to be a square number, but this can be generalized to any number of processors. Each processor then generates n/P vertices, which are then distributed to their respective owners. Then the vertices are sorted by the cell number they fall into, for example with Quicksort. Next, each processor sends its adjacent processors the information about the vertices in the border cells, such that each processing unit can calculate the edges in their partition independent of the other units. The expected running time is . An upper bound for the communication cost of this algorithm is given by , where denotes the time for an all-to-all communication with messages of length bits to communication partners. is the time taken for a point-to-point communication for a message of length bits.
Since this algorithm is not communication free, Funke et al. proposed a scalable distributed RGG generator for higher dimensions, which works without any communication between the processing units.
Funke et al.
The approach used in this algorithm is similar to the approach in Holtgrewe: Partition the unit cube into equal sized chunks with side length of at least r. So in d = 2 these will be squares, in d = 3 cubes. As at most \lfloor 1/r \rfloor chunks can fit per dimension, the number of chunks is capped at \lfloor 1/r \rfloor^d. As before, each processor is assigned chunks, for which it generates the vertices. To achieve a communication free process, each processor then generates the same vertices in the adjacent chunks by exploiting pseudorandomization of seeded hash functions. This way, each processor calculates the same vertices and there is no need for exchanging vertex information.
For dimension 3, Funke et al. showed that the expected running time is , without any cost for communication between processing units.
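The chunking idea can be illustrated sequentially (a sketch, not the distributed algorithms of Holtgrewe et al. or Funke et al.: a single process buckets points into cells of side at least r, so only same-cell and neighboring-cell pairs need to be tested):

```python
import math
import random
from collections import defaultdict

def grid_rgg(n, r, seed=0):
    """Sequential cell-grid RGG generator in the unit square.

    Points are bucketed into a k x k grid of cells with side 1/k >= r, so
    an edge can only connect points in the same or in neighboring cells;
    expected work is O(n + number of edges) for fixed r."""
    rng = random.Random(seed)
    V = [(rng.random(), rng.random()) for _ in range(n)]
    k = max(1, int(1.0 / r))                      # cells per side
    cells = defaultdict(list)
    for i, (x, y) in enumerate(V):
        cells[(min(int(x * k), k - 1), min(int(y * k), k - 1))].append(i)
    # Half of the neighborhood, so each unordered cell pair is seen once.
    offsets = [(0, 0), (0, 1), (1, -1), (1, 0), (1, 1)]
    E = []
    for (cx, cy), pts in cells.items():
        for dx, dy in offsets:
            other = cells.get((cx + dx, cy + dy), [])
            for i in pts:
                for j in other:
                    if ((dx, dy) != (0, 0) or j > i) and \
                            math.dist(V[i], V[j]) <= r:
                        E.append((min(i, j), max(i, j)))
    return V, E

V, E = grid_rgg(300, 0.08)
```

The half-neighborhood offsets plus the `j > i` test within a cell ensure every candidate pair is examined exactly once, mirroring how each processor in the distributed versions only handles its own and adjacent chunks.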
Properties
Isolated vertices and connectivity
The probability that a single vertex is isolated in a RGG is . Let be the random variable counting how many vertices are isolated. Then the expected value of is . The term provides information about the connectivity of the RGG. For , the RGG is asymptotically almost surely connected. For , the RGG is asymptotically almost surely disconnected. And for , the RGG has a giant component that covers more than vertices and is Poisson distributed with parameter . It follows that if , the probability that the RGG is connected is and the probability that the RGG is not connected is .
For any \ell_p-norm (1 \le p \le \infty) and for any number of dimensions d, a RGG possesses a sharp threshold of connectivity at r_c \sim \left( \frac{\ln n}{\alpha_{p,d}\, n} \right)^{1/d} with constant \alpha_{p,d} the volume of the unit \ell_p-ball in d dimensions. In the special case of a two-dimensional space and the euclidean norm (d = 2 and p = 2) this yields r_c = \sqrt{\frac{\ln n}{\pi n}}.
Hamiltonicity
It has been shown, that in the two-dimensional case, the threshold also provides information about the existence of a Hamiltonian cycle (Hamiltonian Path). For any , if , then the RGG has asymptotically almost surely no Hamiltonian cycle and if for any , then the RGG has asymptotically almost surely a Hamiltonian cycle.
Clustering coefficient
The clustering coefficient of RGGs only depends on the dimension d of the underlying space. The clustering coefficient is
for even d and for odd d, where . For large d, this simplifies to .
Generalized random geometric graphs
In 1988 Waxman generalised the standard RGG by introducing a probabilistic connection function as opposed to the deterministic one suggested by Gilbert. The example introduced by Waxman was a stretched exponential where two nodes x and y connect with probability given by H(r) = \beta e^{-r/r_0}, where r is the euclidean separation of x and y and \beta, r_0 are parameters determined by the system. This type of RGG with probabilistic connection function is often referred to as a soft random geometric graph, which now has two sources of randomness: the location of nodes (vertices) and the formation of links (edges). This connection function has been generalized further in the literature, H(r) = \beta e^{-(r/r_0)^\eta}, which is often used to study wireless networks without interference. The parameter \eta represents how the signal decays with distance: \eta = 2 is free space, \eta > 2 models a more cluttered environment like a town (\eta = 6 models cities like New York) whilst \eta < 2 models highly reflective environments. We notice that for \eta = 1 this is the Waxman model, whilst as \eta \to \infty and \beta = 1 we have the standard RGG. Intuitively these types of connection functions model how the probability of a link being made decays with distance.
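Sampling such a soft RGG is a small change to the hard-disk generator: instead of a deterministic distance test, each pair flips a coin weighted by the connection function. A sketch with a Waxman-style stretched exponential (the parameter names `beta` and `d0` are illustrative, not the article's notation):

```python
import math
import random

def soft_rgg(n, beta=0.6, d0=0.15, seed=0):
    """Soft random geometric graph in the unit square.

    Nodes u, v are linked with probability beta * exp(-d(u, v)/d0), a
    Waxman-style connection function.  There are two sources of
    randomness: node positions and link formation."""
    rng = random.Random(seed)
    V = [(rng.random(), rng.random()) for _ in range(n)]
    E = [(i, j)
         for i in range(n) for j in range(i + 1, n)
         if rng.random() < beta * math.exp(-math.dist(V[i], V[j]) / d0)]
    return V, E

V, E = soft_rgg(300)
```

Distant pairs can still connect (with exponentially small probability), which is exactly what distinguishes the soft model from the hard threshold at r.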
Overview of some results for Soft RGG
In the high density limit for a network with exponential connection function the number of isolated nodes is Poisson distributed, and the resulting network contains a unique giant component and isolated nodes only. Therefore, by ensuring there are no isolated nodes, in the dense regime, the network is a.a.s. fully connected; similar to the results shown for the disk model. Often the properties of these networks, such as betweenness centrality and connectivity, are studied in the limit as the density tends to infinity, which often means border effects become negligible. However, in real life where networks are finite, although they can still be extremely dense, border effects will impact on full connectivity; in fact it has been shown that full connectivity, with an exponential connection function, is greatly impacted by boundary effects, as nodes near the corner/face of a domain are less likely to connect compared with those in the bulk. As a result full connectivity can be expressed as a sum of the contributions from the bulk and the boundaries of the geometry. A more general analysis of the connection functions in wireless networks has shown that the probability of full connectivity can be well approximated by a few moments of the connection function and the region's geometry.
References
Geometric graphs
Random graphs | Random geometric graph | [
"Mathematics"
] | 1,751 | [
"Mathematical relations",
"Graph theory",
"Random graphs"
] |
12,106,984 | https://en.wikipedia.org/wiki/Schaefer%E2%80%93Bergmann%20diffraction | Schaefer–Bergmann diffraction is the resulting diffraction pattern of light interacting with sound waves in transparent crystals or glasses.
External links
http://prola.aps.org/abstract/PR/v52/i3/p223_1
References
Diffraction | Schaefer–Bergmann diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 67 | [
"Materials science stubs",
"Spectrum (physical sciences)",
"Crystallography stubs",
"Crystallography",
"Diffraction",
"Spectroscopy"
] |
12,107,810 | https://en.wikipedia.org/wiki/Pepstatin | Pepstatin is a potent inhibitor of aspartyl proteases. It is a hexapeptide containing the unusual amino acid statine (Sta, (3S,4S)-4-amino-3-hydroxy-6-methylheptanoic acid), having the sequence Isovaleryl-Val-Val-Sta-Ala-Sta (Iva-Val-Val-Sta-Ala-Sta). It was originally isolated from cultures of various species of Actinomyces due to its ability to inhibit pepsin at picomolar concentrations. It was later found to inhibit nearly all acid proteases with high potency and, as such, has become a valuable research tool, as well as a common constituent of protease inhibitor cocktails.
Pepstatin A is well known to be an inhibitor of aspartic proteases such as pepsin and cathepsins D and E. Beyond its role as a protease inhibitor, however, the pharmacological action of pepstatin A upon cells remains unclear. Pepstatin A suppresses receptor activator of NF-κB ligand (RANKL)–induced osteoclast differentiation, inhibiting the formation of multinuclear osteoclasts dose-dependently. This inhibition affected only osteoclast cells, i.e., not osteoblast-like cells. Furthermore, pepstatin A also dose-dependently suppresses differentiation from pre-osteoclast cells to mononuclear osteoclast cells. This inhibition seems to be independent of the activities of proteases such as cathepsin D, because the formation of osteoclasts was not suppressed at the concentration that inhibited the activity of cathepsin D. Cell signaling analysis indicated that the phosphorylation of ERK was inhibited in pepstatin A-treated cells, while the phosphorylation of IκB and Akt showed almost no change. Furthermore, pepstatin A decreased the expression of nuclear factor of activated T cells c1 (NFATc1). These results suggest that pepstatin A suppresses the differentiation of osteoclasts through the blockade of ERK signaling and the inhibition of NFATc1 expression.
Pepstatin is practically insoluble in water, chloroform, ether, and benzene; however, it can be dissolved in methanol, ethanol, and DMSO with acetic acid, at concentrations between 1 and 5 mg/ml.
See also
Amastatin
Bestatin
References
Peptides
Protease inhibitors | Pepstatin | [
"Chemistry"
] | 565 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
12,108,187 | https://en.wikipedia.org/wiki/Superconducting%20camera | The superconducting camera, SCAM, is an ultra-fast photon-counting camera developed by the European Space Agency. It is cooled to just 0.3 K (three-tenths of a degree Celsius above absolute zero). This enables its sensitive electronic detectors, known as superconducting tunnel junction detectors, to register almost every photon of light that falls onto it.
Its advantage over a charge-coupled device (CCD) is that it can measure both the brightness (rate) of the incoming photon stream and the color (wavelength or energy) of each individual photon.
The number of free primary electrons generated per photon event is proportional to the photon energy and amounts to ~18,000 per electronvolt. As a result, if the device is operated in single-photon counting mode, the energy of each captured photon can be calculated even in the visible-light range, where photons have energies of a few electronvolts and each generates >20,000 electrons. In a normal CCD, only one primary electron is generated per photon, except for very energetic photons such as X-rays, for which a normal CCD can operate in a similar way to a SCAM.
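The proportionality above can be checked with a back-of-the-envelope calculation. The sketch below uses the ~18,000 electrons per electronvolt figure quoted in the text; the function names are illustrative and not part of any real SCAM software.

```python
PLANCK_EV_S = 4.135667696e-15   # Planck constant, eV*s
LIGHT_SPEED = 2.99792458e8      # speed of light, m/s
ELECTRONS_PER_EV = 18_000       # figure quoted in the text above

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c / lambda, in electronvolts."""
    return PLANCK_EV_S * LIGHT_SPEED / (wavelength_nm * 1e-9)

def estimated_primary_electrons(wavelength_nm: float) -> float:
    """Rough count of primary electrons released by one detected photon."""
    return photon_energy_ev(wavelength_nm) * ELECTRONS_PER_EV
```

For a 500 nm (green) photon this gives an energy of roughly 2.5 eV and on the order of 45,000 primary electrons, consistent with the ">20,000 electrons" figure for visible photons.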
In 2006 the SCAM instrument was mounted on the ESA's Optical Ground Station telescope in order to observe the disintegration of Comet 73P/Schwassmann-Wachmann 3.
References
European Space Agency: ESA's new camera will revolutionise the way astronomers observe the Universe
S-Cam 3, the detector
Infrared imaging
Infrared telescopes
Astronomy image sensors | Superconducting camera | [
"Astronomy"
] | 318 | [
"Astronomy image sensors",
"Astronomy stubs",
"Astronomical instruments"
] |