| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
44,732,855 | https://en.wikipedia.org/wiki/Target%20Motion%20Analysis | Target Motion Analysis (TMA) is a process to determine the position of a target using passive sensor information. Sensors like passive RADAR and SONAR provide directional and occasionally frequency information. TMA is done by marking from which direction the sound comes at different times, and comparing the motion with that of the operator's own ship. Changes in relative motion are analyzed using standard geometrical techniques along with some assumptions about limiting cases. There are two different ways to execute TMA: manual and automated.
Manual TMA
Manual TMA methods involve computations executed by humans instead of computers. Several manual TMA methods exist, such as Ekelund ranging, the 1934 Rule and the Spears Wheel.
Ekelund ranging
One of the best known TMA techniques is Ekelund ranging.
It is a method specifically designed for a two-leg/one-zig scenario, in which own ship runs a straight leg, makes a single course change (the zig) and then runs a second straight leg. The method works by first estimating the bearing rates during the first and second leg, Bdot1 and Bdot2. Secondly, one calculates own ship's speed of advance along the line of sight to the target on the first and second leg, SA1 and SA2. The rule then states that the range to the target at the moment of the maneuver is given by R = (SA2 − SA1) / (Bdot1 − Bdot2).
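A minimal sketch of this calculation in Python is shown below; the function name and the numerical values are illustrative assumptions, and the units simply have to be consistent (for example, speeds of advance in knots and bearing rates in radians per hour give a range in nautical miles).

```python
def ekelund_range(sa1, sa2, bdot1, bdot2):
    """Ekelund range estimate at the moment of the own-ship maneuver.

    sa1, sa2     -- own-ship speed of advance along the line of sight on leg 1 and leg 2
    bdot1, bdot2 -- measured bearing rates on leg 1 and leg 2 (radians per time unit)
    Returns the range in the same distance units as the speeds.
    """
    if bdot1 == bdot2:
        raise ValueError("Equal bearing rates: the geometry carries no range information.")
    return (sa2 - sa1) / (bdot1 - bdot2)

# Illustrative numbers: speed of advance changes from 2 to 8 knots across the zig,
# while the bearing rate drops from 0.8 to 0.2 rad/h -> estimated range of 10 nautical miles.
print(ekelund_range(2.0, 8.0, 0.8, 0.2))
```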
An iPhone app is also available for checking the solution of an Ekelund ranging problem.
Automated TMA
Automated TMA methods involve computations executed by computers. This allows for the simultaneous tracking of multiple targets. Several automated TMA methods exist, such as the maximum likelihood estimator (MLE).
Maximum Likelihood Estimator (MLE)
The MLE method tries to fit the directional measurements (bearings) to a theoretical linear (constant-velocity) motion model of the target. With the target described by an initial position (x0, y0) and a constant velocity (vx, vy), and own ship at the known position (xo(t), yo(t)), the bearing function to be fitted is
β(t) = arctan[ (x0 + vx·t − xo(t)) / (y0 + vy·t − yo(t)) ].
If N bearing measurements β(t1), …, β(tN) have been collected, the problem reduces to an overdetermined system of non-linear equations in the state vector x = (x0, y0, vx, vy),
and can be solved by numerical estimation procedures like Gauss-Newton.
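A minimal Gauss-Newton sketch of such a bearings-only fit is given below. It assumes a flat east/north coordinate frame with bearings measured clockwise from north, a reasonable initial guess, and NumPy; the function names are illustrative rather than taken from any particular library, and in practice own ship must maneuver during the data window for the state to be observable.

```python
import numpy as np

def predicted_bearings(state, t, own_pos):
    """Bearings (radians, clockwise from north) of a constant-velocity target."""
    x0, y0, vx, vy = state
    dx = x0 + vx * t - own_pos[:, 0]   # east offset, target minus own ship
    dy = y0 + vy * t - own_pos[:, 1]   # north offset
    return np.arctan2(dx, dy)

def gauss_newton_tma(bearings, t, own_pos, state0, iterations=20):
    """Least-squares fit of the state (x0, y0, vx, vy) to measured bearings."""
    state = np.asarray(state0, dtype=float)
    for _ in range(iterations):
        x0, y0, vx, vy = state
        dx = x0 + vx * t - own_pos[:, 0]
        dy = y0 + vy * t - own_pos[:, 1]
        r2 = dx**2 + dy**2
        # Jacobian of arctan2(dx, dy) with respect to (x0, y0, vx, vy)
        jac = np.column_stack([dy / r2, -dx / r2, t * dy / r2, -t * dx / r2])
        residual = bearings - predicted_bearings(state, t, own_pos)
        residual = np.angle(np.exp(1j * residual))      # wrap to (-pi, pi]
        state = state + np.linalg.lstsq(jac, residual, rcond=None)[0]
    return state
```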
References
Tracking
Military technology | Target Motion Analysis | Technology | 385 |
2,902,729 | https://en.wikipedia.org/wiki/26%20Aquarii | 26 Aquarii is a single star located approximately 960 light years away from the Sun in the zodiac constellation of Aquarius. 26 Aquarii is the Flamsteed designation. It is visible to the naked eye as a dim, orange-hued star with an apparent visual magnitude of 5.66. This object is moving away from the Earth with a heliocentric radial velocity of +8 km/s.
Houk and Swift (1999) listed a stellar classification of K2(III) for 26 Aquarii, corresponding to an evolved K-type giant of uncertain luminosity class. Bartkevicius and Lazauskaite (1997) found spectral traits of , suggesting some type of giant K-type star with a suspected metal deficiency (MD) of barium. It has 54.5 times the Sun's radius and is radiating 842 times the luminosity of the Sun from its photosphere at an effective temperature of 4,210 K.
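The quoted radius, luminosity and effective temperature are mutually consistent through the Stefan-Boltzmann relation L/Lsun = (R/Rsun)^2 × (T/Tsun)^4. The short check below assumes a solar effective temperature of about 5,772 K; it is only an arithmetic illustration, not part of the cited analyses.

```python
# Stefan-Boltzmann consistency check for the quoted parameters of 26 Aquarii.
R_OVER_RSUN = 54.5     # radius from the text, in solar radii
T_EFF = 4210.0         # effective temperature from the text, in kelvin
T_SUN = 5772.0         # assumed solar effective temperature

luminosity = R_OVER_RSUN**2 * (T_EFF / T_SUN)**4
print(f"{luminosity:.0f} Lsun")   # ~841, in line with the quoted 842 Lsun
```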
References
K-type giants
Aquarius (constellation)
Durchmusterung objects
Aquarii, 026
206445
107144
8287 | 26 Aquarii | Astronomy | 234 |
57,723,249 | https://en.wikipedia.org/wiki/List%20of%20engineering%20physics%20schools | Engineering physics (or engineering science) can be studied at the bachelors, masters and Ph.D. levels at many universities, typically offered in a partnership between engineering faculties and the departments of physics.
Canada
In Canada, the Canadian Engineering Accreditation Board is responsible for accrediting undergraduate engineering physics programs. Graduate study in engineering physics is also available at several Canadian post-secondary institutions, though Canadian post-graduate engineering programs do not require accreditation.
University of Alberta - Engineering Physics
University of British Columbia - Engineering Physics
Carleton University - Engineering Physics
Dalhousie University - Engineering Physics
McMaster University - Engineering Physics
Queen's University - Engineering Physics
Royal Military College of Canada - Engineering Physics
University of Saskatchewan - Engineering Physics
Simon Fraser University - Engineering Science
University of Toronto - Engineering Science
Only undergraduate engineering programs in Canada are accredited, and this is done by the Canadian Engineering Accreditation Board.
References
Engineering education
Lists of engineering schools | List of engineering physics schools | Engineering | 182 |
1,514,313 | https://en.wikipedia.org/wiki/Nike-Cajun | The Nike-Cajun was a two-stage sounding rocket built by combining a Nike base stage with a Cajun upper stage. The Nike-Cajun was known as a CAN for Cajun And Nike. The Cajun was developed from the Deacon rocket. It retained the external size, shape and configuration of the Deacon but had 36 percent greater impulse than the Deacon due to improved propellant. It was launched 714 times between 1956 and 1976 and was the most frequently used sounding rocket of the western world. The Nike Cajun had a launch weight of 698 kg (1538 lb), a payload of 23 kg (51 lb), a launch thrust of 246 kN (55,300 lbf) and a maximum altitude of 120 km (394,000 ft). It had a diameter of 42 cm (1 ft 4 in) and a length of 7.70 m (25 ft 3 in). The maximum speed of the Nike-Cajun was .
The Cajun stage of this rocket was named for the Cajun people of South Louisiana because one of the rocket's designers, J. G. Thibodaux, was a Cajun.
The Nike-Cajun configuration was also used by one variation of the MQR-13 BMTS target rocket.
Engine:
1st stage: Allegheny Ballistics Lab. X216A2 solid-fueled rocket; 246 kN (55,000 lb) for 3 s
2nd stage: Thiokol TE-82-1 Cajun solid-fueled rocket; 37 kN (8,300 lb) for 2.8 s
See also
Nike-Apache
References
Nike-Cajun at Encyclopedia Astronautica
External links
University of Michigan/NACA RM-85/PWN-3 Nike-Cajun
Nike (rocket family) | Nike-Cajun | Astronomy | 366 |
70,933,720 | https://en.wikipedia.org/wiki/Tolyltriazole | Tolyltriazole is a mixture of isomers or congeners that differ from benzotriazole by the addition of one methyl group attached somewhere on the benzene ring. "The term tolyltriazole (CAS 29385-43-1) generally [refers to] the commercial mixture composed of approximately equal amounts of 4- and 5-methylbenzotriazole, with small quantities of [their respective 7- and 6-methyl tautomers]".
Structure
Synthesis and reactions
Synthesis is much like that of benzotriazole, but starting with methyl-o-phenylenediamine instead of o-phenylenediamine. Isomers of methyl-o-phenylenediamine include 3-methyl-o-phenylenediamine, 4-methyl-o-phenylenediamine, and N-methyl-o-phenylenediamine (not involved here).
Applications
Tolyltriazole has uses similar to benzotriazole, but has better solubility in some organic solvents.
Corrosion inhibitor
Environmental relevance
Related compounds
Hydroxybenzotriazole
References
Benzotriazoles
Chelating agents
Conservation and restoration materials
Corrosion inhibitors | Tolyltriazole | Physics,Chemistry | 260 |
1,638,889 | https://en.wikipedia.org/wiki/Quantity%20surveyor | A quantity surveyor (QS) is a construction industry professional with expert knowledge on construction costs and contracts. Qualified professional quantity surveyors can be known as Chartered Surveyors (Members and Fellows of RICS) in the UK and Certified Quantity Surveyors (a designation of the Australian Institute of Quantity Surveyors) in Australia and other countries. In some countries such as Canada, South Africa, Kenya and Mauritius, qualified quantity surveyors are known as Professional Quantity Surveyors, a title protected by law.
Due to a shift in the construction industry and the increased demand for quantity surveying expertise, less importance is now placed on chartered status, with a large percentage of working quantity surveyors practising with college or university degrees and without membership or fellowship of a professional association.
Quantity surveyors are responsible for managing all aspects of the contractual and financial side of construction projects. They help to ensure that the construction project is completed within its projected budget. Quantity surveyors are also hired by contractors to help with the valuation of construction work for the contractor, help with bidding and project budgeting, and the submission of bills to the client.
Duties
The duties of a quantity surveyor are as follows:
Conducting financial feasibility studies for development projects.
Cost estimating, cost planning and cost management.
Analyzing terms and conditions in the contract.
Predicting potential risks in the project and taking precautions to mitigate them.
Forecasting the costs of different materials needed for the project.
Preparing tender documents, contracts, budgets and other documentation.
Taking note of changes made and adjusting the budget accordingly.
Tender management including preparation of bills of quantities, contract conditions and assembly of tender documents
Contract management and contractual advice
Valuation of construction work
Claims and dispute management
Lifecycle costing analysis
Reinstatement Cost Assessment for Insurance Purposes.
Professional associations
RICS – The Royal Institution of Chartered Surveyors
AIQS – Australian Institute of Quantity Surveyors
IQSSL - Institute of Quantity Surveyors Sri Lanka
ASAQS – Association of South African Quantity Surveyors
BSIJ – Building Surveyors Institute of Japan
CICES - Chartered Institution of Civil Engineering Surveyors
CIQS – Canadian Institute of Quantity Surveyors
CCEA - China Cost Engineering Association
GHIS - Ghana Institute of Surveyors
HKIS – Hong Kong Institute of Surveyors
IIQS – Indian Institute of Quantity Surveyors
IQSI – Ikatan Quantity Surveyor Indonesia
IQSK – Institute of Quantity Surveyors of Kenya
JIQS – Jamaican Institute of Quantity Surveyors
NIQS – Nigerian Institute of Quantity Surveyors
NZIQS – New Zealand Institute of Quantity Surveyors
PICQS – Philippine Institute of Certified Quantity Surveyors
RISM – The Royal Institution of Surveyors Malaysia
SISV – Singapore Institute of Surveyors and Valuers
SCSI – Society of Chartered Surveyors Ireland
SACQSP – South African Council for Quantity Surveying Profession
QSI - Quantity Surveyor International
UNTEC - Union nationale des Économistes de la construction (France)
Qualification
A university degree or diploma alone does not allow one to register as a Chartered Quantity Surveyor. Usually, anyone looking to qualify as a Chartered or Certified Quantity Surveyor must hold appropriate educational qualifications and work experience, and must pass a professional competence assessment.
The RICS requires an RICS approved degree, several years of practical experience, and passing the Assessment of Professional Competence (APC) to qualify as a Chartered Quantity Surveyor. Some candidates may be entitled to qualify through extensive experience and reciprocity agreements.
Future of quantity surveying
As construction projects become increasingly complex, the demand for skilled quantity surveyors continues to grow. The importance of Quantity Surveyors becoming Chartered is lessening year on year, with more and more businesses opting to hire staff with a standard Quantity Surveying degree and develop Quantity Surveying skills through their own training programmes. The future of quantity surveying lies in embracing digitalization, automation, and sustainable practices. Quantity surveyors will play a pivotal role in managing costs, optimizing resources, and ensuring the financial success of construction projects.
See also
Cost engineering
References
Building estimators
Construction and extraction occupations
Construction management
Building engineering | Quantity surveyor | Engineering | 806 |
916,705 | https://en.wikipedia.org/wiki/S%20Doradus | S Doradus (also known as S Dor) is one of the brightest stars in the Large Magellanic Cloud (LMC), a satellite galaxy of the Milky Way, located roughly 160,000 light-years away. The star is a luminous blue variable, and one of the most luminous stars known, having a luminosity varying widely above and below 1,000,000 times the luminosity of the Sun, although it is too far away to be seen with the naked eye.
History
S Doradus was noted in 1897 as an unusual and variable star, of Secchi type I with bright lines of Hα, Hβ, and Hγ. The formal recognition as a variable star came with the assignment of the name S Doradus in 1904 in the second supplement to the Catalogue of Variable Stars.
S Dor was observed many times over the following decades. In 1924, it was described as "P Cygni class" and recorded at photographic magnitude 9.5. In 1925, its absolute magnitude was estimated at −8.9. In 1933 it was listed as a 9th-magnitude Beq star with bright hydrogen lines. It was the most luminous star known at that time.
In 1943, the variability was interpreted as being due to eclipses of a binary companion, orbiting with a period of 40 years. This was refuted in 1956, when the variability was described as irregular and the spectrum as A0 with P Cygni profiles and emission for many spectral lines. The brightness was observed to decline by 0.8 magnitude from 1954 into 1955. At the same time, S Doradus was noted as being similar to the Hubble–Sandage variables, the LBVs discovered in M31 and M33. The brief 1955 minimum was followed by a deep minimum in 1964, when the spectrum was compared to Eta Carinae in strong contrast to the mid-A spectrum at normal brightness.
By 1969 the nature of S Doradus was still uncertain, considered possibly to be a pre-main-sequence star, but during the next decade the consensus settled on the S Doradus type variables and Hubble-Sandage variables being evolved massive supergiants. They were eventually given the name "luminous blue variables" in 1984, coined in part because of the similarity of the acronym LBV to the well-defined LPV class of variable stars. The classification system defined for the General Catalogue of Variable Stars pre-dated this and so the acronym SDOR is used for LBVs.
Surroundings
S Doradus is the brightest member of the open cluster NGC 1910, also known as the LH41 stellar association, visible in binoculars as a bright condensation within the main bar of the LMC. This is within the N119 emission nebula, which has a distinctive spiral shape. It is one of the visually brightest individual stars in the LMC, at some times the brightest. There are only a handful of other 9th-magnitude stars in the LMC, such as the yellow hypergiant HD 33579.
There are several compact clusters near S Doradus, within the general NGC 1910/LH41 association. The closest is less than four arc-minutes away, contains two out of the three WO stars in the entire LMC, and the entire cluster is about the same brightness as S Doradus. A little further away is NGC 1916. Another LBV, R85, is just two arc-minutes away. This rich star-forming region also hosts a third Wolf–Rayet star, at least ten other supergiants, and at least ten class O stars.
S Doradus has a number of close companion stars. The Washington Double Star Catalog lists two 11th-magnitude stars 5″ away, which at the distance of the LMC is about four light years. A much closer companion has been found using the Hubble Space Telescope Fine Guidance Sensor, 1.7″ away and four magnitudes fainter. There are other nearby stars, most notably a 12th-magnitude OB supergiant at 13″.
Variability
This star belongs to its own eponymous S Doradus class of variable stars, also designated as luminous blue variables or LBVs. LBVs exhibit long slow changes in brightness, punctuated by occasional outbursts. S Doradus is typically a magnitude 9 star, varying by a few tenths of a magnitude on timescales of a few months, superimposed on variations of about a magnitude taking several years. The extreme range of these variations is from about visual magnitude 8.6–10.4. Every few decades it shows a more dramatic decrease in brightness, to as low as magnitude 11.5. The nature of the variation is somewhat unusual for an LBV; S Doradus is typically in an outburst state, with only occasional fades to the quiescent state that is typical of most stars in the class.
The colour of S Doradus changes as its brightness varies, being bluest when the star is faintest. At the same time, the spectrum shows dramatic changes. It is typically an extreme mid-A supergiant with P Cygni profiles on many lines (e.g. A5eq or A2/3Ia+e). At maximum brightness, the spectrum can become as cool as an F supergiant, with strong ionised metal lines and almost no emission components. At minimum brightness, the spectrum is dominated by emission, particularly forbidden lines of Fe but also helium and other metals. At the deep minima these features are even more pronounced, and Fe emission also appears.
Attempts to identify regularity in the unpredictable changes of brightness suggest a period of around 100 days for the small amplitude variations near maximum brightness. At minimum brightness, these microvariations are considered to occur with periods as long as 195 days. The slower variations have been characterised with a period of 6.8 years, with an interval of 35–40 years between deep minima. The microvariations are similar to the brightness changes shown by α Cygni variables, which are less luminous hot supergiants.
The instability strip
S Doradus variables (LBVs) show distinct quiescent and outburst states. During the quiescent phase, LBVs lie along a diagonal band in the H–R diagram called the S Doradus Instability Strip, with the more luminous examples having hotter temperatures.
The standard theory is that LBV outbursts occur when the mass loss increases and an extremely dense stellar wind creates a pseudo-photosphere. The temperature drops until the wind opacity starts to decrease, meaning all LBV outbursts reach a temperature around 8,000–9,000 K. The bolometric luminosity during outbursts is considered to remain largely unchanged, but the visual luminosity increases as radiation shifts from the ultraviolet into the visual range. Detailed investigations have shown that some LBVs appear to change luminosity from minimum to maximum. S Doradus has been calculated to be less luminous at maximum brightness (minimum temperature), possibly as a result of potential energy going into expansion of a substantial portion of the star. AG Carinae and HR Carinae show similar luminosity decreases in some studies, but in the most convincing case AFGL 2298 increased its luminosity during its outbursts.
Rare larger eruptions can appear as long-lasting under-luminous supernovae, and have been termed supernova impostors. The cause of the eruptions is unknown, but the star survives and may experience multiple eruptions. Eta Carinae and P Cygni are the only known examples in the Milky Way, and S Doradus has not shown such an eruption.
Stellar properties
The temperature of an LBV is difficult to determine because the spectra are so peculiar and the standard colour calibrations don't apply, so the luminosity changes associated with brightness variations cannot be calculated accurately. Within the margins of error, it has often been assumed that the luminosity stays constant during all LBV outbursts. This is likely if the outburst consists only of an opaque stellar wind forming a pseudo-photosphere to mimic a larger cooler star.
Better atmospheric physics and observations of luminosity changes during some LBV outbursts have cast doubt on the original models. The atmosphere of S Doradus has been modeled in detail between a normal minimum at magnitude 10.2 in 1985 and a maximum at magnitude 9.0 in 1989. The temperature was calculated to drop from 20,000 K to 9,000 K, and the luminosity dropped from to . This corresponds to an increase in the radius of the visible surface of the star from to . A simpler calculation of the variation from the deep 1965 minimum at magnitude 11.5 to the 1989 maximum gives a temperature drop from 35,000 K to 8,500 K, and the luminosity drop from to . For a brief period during the maximum in late 1999, the temperature dropped further to between 7,500 K and 8,500 K, without the brightness changing noticeably. This is normal in other LBVs at maximum and is as cool as they can get, but it has not been seen in S Doradus before, or since. Observations of AG Carinae have shown that any luminosity changes between minimum and maximum may occur abruptly over a small temperature range, with the luminosity approximately constant during the rest of the light curve.
The mass of an LBV is difficult to calculate directly unless it is in a binary system. The surface gravity changes dramatically and is difficult to measure from the peculiar spectral lines, and the radius is poorly defined. LBVs are thought to be the direct predecessors of Wolf–Rayet stars, but may be either just evolved from the main sequence or post-red supergiant stars with much lower masses. In the case of S Doradus, the current mass is likely to be in the range of .
References
External links
http://www.daviddarling.info/encyclopedia/S/S_Doradus.html
http://jumk.de/astronomie/big-stars/s-doradus.shtml
Stars in the Large Magellanic Cloud
Emission-line stars
035343
Dorado
Large Magellanic Cloud
Luminous blue variables
Extragalactic stars
Doradus, S
Durchmusterung objects | S Doradus | Astronomy | 2,081 |
11,460,514 | https://en.wikipedia.org/wiki/Stemphylium%20bolickii | Stemphylium bolickii is a plant pathogen infecting kalanchoes.
References
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Pleosporaceae
Fungus species | Stemphylium bolickii | Biology | 39 |
12,564,556 | https://en.wikipedia.org/wiki/Pressure%20angle | Pressure angle in relation to gear teeth, also known as the angle of obliquity, is the angle between the tooth face and the gear wheel tangent. It is more precisely the angle at a pitch point between the line of pressure (which is normal to the tooth surface) and the plane tangent to the pitch surface. The pressure angle gives the direction normal to the tooth profile. The pressure angle is equal to the profile angle at the standard pitch circle and can be termed the "standard" pressure angle at that point. Standard values are 14.5 and 20 degrees. Earlier gears with pressure angle 14.5 were commonly used because the cosine is larger for a smaller angle, providing more power transmission and less pressure on the bearing; however, teeth with smaller pressure angles are weaker. To run gears together properly their pressure angles must be matched.
The pressure angle is also the angle of the sides of the trapezoidal teeth on the corresponding rack.
The force transmitted during the mating of gear teeth acts along the normal to the tooth surfaces (the line of action). This force has one component along the pitch line and another perpendicular to it. The component along the pitch line, which is responsible for power transmission, is proportional to the cosine of the pressure angle; the component that exerts thrust (perpendicular to the pitch line) is proportional to the sine of the pressure angle. It is therefore advisable to keep the pressure angle low.
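The sketch below, with purely illustrative numbers and function names, shows this split of a given total tooth force for the two standard pressure angles:

```python
import math

def gear_force_components(f_normal, pressure_angle_deg):
    """Split the total tooth force (along the line of action) into the
    tangential component (along the pitch line, transmits power) and the
    radial/separating component (perpendicular to the pitch line)."""
    phi = math.radians(pressure_angle_deg)
    return f_normal * math.cos(phi), f_normal * math.sin(phi)

for angle in (14.5, 20.0):
    tangential, radial = gear_force_components(1000.0, angle)   # 1000 N total, illustrative
    print(f"{angle} deg: tangential {tangential:.0f} N, radial {radial:.0f} N")
# 14.5 deg: ~968 N / ~250 N;  20.0 deg: ~940 N / ~342 N
# The smaller pressure angle gives slightly more transmitted force and less separating thrust.
```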
Just as there are three types of profile angle, there are three types of corresponding pressure angle: the transverse pressure angle, the normal pressure angle, and the axial pressure angle.
See also
List of gear nomenclature
Involute gear
References
Gears | Pressure angle | Engineering | 335 |
21,116,621 | https://en.wikipedia.org/wiki/Hookworm%20vaccine | Hookworm vaccine is a vaccine against hookworm. No effective vaccine for the disease in humans has yet been developed. Hookworms, parasitic nematodes transmitted in soil, infect approximately 700 million humans, particularly in tropical regions of the world where endemic hookworms include Ancylostoma duodenale and Necator americanus. Hookworms feed on blood and those infected with hookworms may develop chronic anaemia and malnutrition. Helminth infection can be effectively treated with benzimidazole drugs (such as mebendazole or albendazole), and efforts led by the World Health Organization have focused on one to three yearly de-worming doses in schools because hookworm infections with the heaviest intensities are most common in school-age children. However, these drugs only eliminate existing adult parasites and re-infection can occur soon after treatment. School-based de-worming efforts do not treat adults or pre-school children and concerns exist about drug resistance developing in hookworms against the commonly used treatments, thus a vaccine against hookworm disease is sought to provide more permanent resistance to infection.
Hookworm infection is considered a neglected disease as it disproportionately affects poorer localities and has received little attention from pharmaceutical companies.
Vaccine targets
Hookworm infections in humans can last for several years, and re-infection can occur very shortly after treatment, suggesting that hookworms effectively evade—and may interrupt or modulate—the host immune system. Successful hookworm vaccines have been developed for several animal species. On the basis of this prior work, human vaccine development has targeted antigens from both the larval and adult stages of the hookworm life cycle, with the aim of a combined vaccine for humans that would provide more complete protection. Current larval-protein targets attenuate larval migration through host tissue; adult-protein targets have been demonstrated to block enzymes vital to hookworm feeding.
The "ASP" (ancylostoma secreted protein) proteins are cysteine-rich secretory proteins. They are promising vaccine candidates based on previous vaccine studies in sheep, guinea pigs, cattle, and mice, which have demonstrated inhibition of hookworm larval migration. Furthermore, epidemiologic studies determined that high titers of circulating antibodies against ASPs are associated with lower hookworm burdens in residents of Hainan Province, China, and Minas Gerais, Brazil. The function of Na-ASP-2 () is not currently known (though it may function as a chemotaxin mimic), but it is known to be released during parasite entry into the host. It may have some function in the transition from the larval environment stage of the hookworm life-cycle to an adult parasitic existence and tissue migration.
The "APR" proteins are aspartic proteases. Ac-APR-1 and Na-APR-1 specifically participate in the hookworm's digestion of hemoglobin from its blood meal and are present in the adult stage of the hookworm life cycle. Animals immunized against Ac-APR-1 exhibited a reduction in worm burden, a reduction in hemoglobin loss, and a dramatic reduction in worm fecundity.
The "GST" proteins are glutathione S-transferases. Na-GST-1 () plays a role in the worm's digestion of hemoglobin; specifically, it serves to protect the worm from heme molecules released by digestion.
Research
Examples of antigenic targets of hookworm vaccines currently in clinical trials include Na-ASP-2, Ac-APR-1, Na-APR-1, and Na-GST-1.
In a clinical trial a vaccine containing recombinant Na-ASP-2 with Aluminium hydroxide (Alhydrogel) as an adjuvant was found to increase Th2 helper cells and IgE. Both the Th2 helper cells and IgE antibody are important players in recognition and immunoregulation against parasites. The vaccine containing recombinant Na-ASP-2 resulted in significantly decreased risk of a hookworm infection.
In 2014, Na-GST-1 with Alhydrogel adjuvant completed a successful phase 1 clinical trial in Brazil. In 2017, it completed a successful phase 1 trial in the US.
Funding
Research funding to develop hookworm vaccines has come from the Human Hookworm Vaccine Initiative, a program of the Sabin Vaccine Institute and collaborations with George Washington University, the Oswaldo Cruz Foundation, the Chinese Institute of Parasitic Diseases, the Queensland Institute of Medical Research, and the London School of Hygiene and Tropical Medicine. Funding for hookworm vaccine research efforts also includes funds from the Bill & Melinda Gates Foundation totaling in excess of $53 million, and additional support from the Rockefeller Foundation, Doctors Without Borders, National Institute of Allergy and Infectious Diseases, and the March of Dimes Birth Defects Foundation.
The government of Brazil, where hookworm is still endemic in some poorer areas, has promised to manufacture a vaccine if one can be proven effective.
References
External links
Study of Na-ASP-2 Human Hookworm Vaccine in Healthy Adults Without Evidence of Hookworm Infection, ClinicalTrials.gov
Phase 1 Trial of Na-ASP-2 Hookworm Vaccine in Previously Infected Brazilian Adults, ClinicalTrials.gov
Vaccines | Hookworm vaccine | Biology | 1,100 |
1,885,139 | https://en.wikipedia.org/wiki/List%20of%20species%20described%20by%20the%20Lewis%20and%20Clark%20Expedition | Meriwether Lewis collected many hundreds of plants on the Lewis and Clark Expedition. All of the plants Lewis collected in the first months of the Expedition were cached near the Missouri River to be retrieved on the return journey. The cache was completely destroyed by Missouri flood waters. Other collections were lost in varying ways, and we now have only 237 plants Lewis collected, 226 of which are in the Philadelphia Herbarium. Lewis hired Frederick Pursh for $70 to do the complex task of describing 124 of his collections, which Pursh did and published in 1814.
Animals
Mammals
Discovered (for the first time by European Americans):
Black-tailed prairie dog (Cynomys ludovicianus)
Bushy-tailed woodrat (Neotoma cinerea)
Grizzly bear (Ursus arctos horribilis)
Mule deer (Odocoileus hemionus)
Pronghorn (Antilocapra americana)
Swift fox (Vulpes velox)
White-tailed jackrabbit (Lepus townsendii)
Described:
American badger (Taxidea taxus)
American beaver (Castor canadensis)
Badlands bighorn sheep (Ovis canadensis auduboni)
Bison (Bison bison)
Black bear (Ursus americanus)
Columbian black-tailed deer (Odocoileus hemionus columbianus)
Columbian ground squirrel (Spermophilus columbianus)
Coyote (Canis latrans)
Eastern cottontail (Sylvilagus floridanus)
Eastern fox squirrel (Sciurus niger)
Elk (Cervus canadensis)
Eastern gray squirrel (Sciurus carolinensis)
Gray wolf (Canis lupus)
Long-tailed weasel (Mustela frenata)
Muskrat (Fiber zibethicus)
Mountain lion (Puma concolor)
North American river otter (Lontra canadensis)
Northern pocket gopher (Thomomys talpoides)
Northern short-tailed shrew (Blarina brevicauda)
Porcupine (Erethizon dorsatum)
Red fox (Vulpes vulpes)
Richardson's ground squirrel or flickertail (Spermophilus richardsonii)
Striped skunk (Mephitis mephitis)
Thirteen-lined ground squirrel (Spermophilus tridecemlineatus)
White-tailed deer (Odocoileus virginianus)
Birds
Discovered (for the first time by European Americans):
Clark's nutcracker (Nucifraga columbiana)
Common poorwill (Phalaenoptilus nuttallii)
Greater sage-grouse (Centrocercus urophasianus)
Interior least tern (Sterna antillarum athalassos)
Lewis' woodpecker (Melanerpes lewis)
Described:
American crow (Corvus brachyrhynchos)
American goldfinch (Carduelis tristis)
American kestrel (Falco sparverius)
American robin (Turdus migratorius)
American white pelican (Pelecanus erythrorhynchos)
Bald eagle (Haliaeetus leucocephalus)
Bank swallow (Riparia riparia)
Belted kingfisher (Ceryle alcyon)
Black-bellied plover (Pluvialis squatarola)
Blue grouse (Dendragapus obscurus)
Blue jay (Cyanocitta cristata)
Brewer's blackbird (Euphagus cyanocephalus)
Brown-headed cowbird (Molothrus ater)
Canada goose (Branta canadensis)
Carolina parakeet (Conuropsis carolinensis)
Cedar waxwing (Bombycilla cedrorum)
Cliff swallow (Hirundo pyrrhonota or Petrochelidon pyrrhonota)
Piping Plover (Charadrius melodus)
Columbian sharp-tailed grouse (Tympanuchus phasianellus columbianus)
Common Nighthawk (Chordeiles minor)
Common raven (Corvus corax)
Eastern kingbird (Tyrannus tyrannus)
Great blue heron (Ardea herodias)
Great egret (Ardea alba)
Greater prairie-chicken (Tympanuchus cupido pinnatus)
Golden eagle (Aquila chrysaetos)
Great horned owl (Bubo virginianus)
Hairy woodpecker (Picoides villosus)
Horned lark (Eremophila alpestris)
Killdeer (Charadrius vociferus)
Lark sparrow (Chondestes grammacus)
Loggerhead shrike (Lanius ludovicianus)
Long-billed curlew (Numenius americanus)
Mallard (Anas platyrhynchos)
Merganser (Mergus serrator)
Mourning dove (Zenaida macroura)
Northern flicker (Colaptes auratus)
Northern harrier (Circus cyaneus) - tentative
Osprey (Pandion haliaetus)
Passenger pigeon (Ectopistes migratorius)
Pinyon jay (Gymnorhinus cyanocephalus)
Piping plover (Charadrius melodus)
Plains sharp-tailed grouse (Tympanuchus phasianellus jamesi)
Red-headed woodpecker (Melanerpes erythrocephalus)
Red-tailed hawk (Buteo jamaicensis)
Red-winged blackbird (Agelaius phoeniceus)
Ruffed grouse (Bonasa umbellus)
Sandhill crane (Grus canadensis)
Snow goose (Chen caerulescens)
Sprague's pipit (Anthus spragueii)
Upland sandpiper (Bartramia longicauda)
Western meadowlark (Sturnella neglecta)
Whip-poor-will (Caprimulgus vociferus)
Whooping crane (Grus americana)
Wild turkey (Meleagris gallopavo)
Willet (Catoptrophorus semipalmatus)
Wood duck (Aix sponsa)
Amphibians
Chorus frog (Pseudacris triseriata)
Green frog (Rana clamitans)
Green tree frog (Hyla)
Western toad (Anaxyrus boreas)
Reptiles
Bull snake (Pituophis catenifer)
Horned lizard (Phrynosoma)
Spiny softshell turtle (Apalone spinifera)
Western fence lizard (Sceloporus occidentalis)
Western garter snake (Thamnophis elegans vagrans)
Western hognose snake (Heterodon nasicus)
Western rattlesnake (Crotalus viridis)
Fish
Discovered (for the first time by European Americans):
Blue catfish (Ictalurus furcatus)
Channel catfish (Ictalurus punctatus)
Goldeye (Hiodon alosoides)
Mountain whitefish (Prosopium williamsoni)
White sturgeon (Acipenser transmontanus)
Described:
Cutthroat trout (Oncorhynchus clarki)
Westslope cutthroat trout (O. c. lewisi)
Coastal cutthroat trout (O. c. clarki)
Common northern sucker (Catostomus catostomus)
Sauger (Stizostedion canadensis)
Plants
The plants listed below were indeed collected by Lewis, but a number of them (at least those marked with *******) were previously collected and described, or were not described from the Lewis collections, and are therefore not considered firsts for science. For an accurate list, see the references.
Discovered (for the first time by European Americans):
Black greasewood (Sarcobatus vermiculatus)
Blue flax (Linum lewisii)
Buffaloberry (Shepherdia argentea)
Curly-top gumweed (Grindelia squarrosa)
Fringed sagebrush (Artemisia ludoviciana)
Indian tobacco (Nicotiana quadrivalvis)
Lanceleaf sage (Salvia reflexa)
Shadscale (Atriplex canescens)
Snow-on-the-mountain (Euphorbia marginata)
White milkwort (Polygala alba)
Aromatic aster (Aster oblongifolius)
Aromatic sumac also called squaw bush (Rhus aromatica)
Bearberry also called kinnikinnick (Arctostaphylos uva-ursi)
Bur oak (Quercus macrocarpa)
Broom snakeweed (Gutierrezia sarothrae)
Canada milk-vetch (Astragalus canadensis)
Common horsetail, also called scouring rush (Equisetum arvense)
Common juniper (Juniperus communis)
Common monkey-flower (Mimulus guttatus)
Creeping juniper (Juniperus horizontalis)
Dwarf sagebrush (Artemisia cana)
Eastern cottonwood (Populus deltoides)
False indigo (Amorpha fruticosa)
Fire-on-the-mountain (Euphorbia cyathophora)
Golden currant (Ribes aureum)
Large-flowered clammyweed (Polanisia dodecandra trachysperma)
Long-leaved sagebrush also called mugwort (Artemisia longifolia)
Meadow anemone (Anemone canadensis)
Missouri milk-vetch (Astragalus missouriensis)
Moundscale (Atriplex gardneri)
Needle-and-thread grass also called porcupine grass (Hesperostipa comata)
Pasture sagewort (Artemisia frigida)
Pin cherry (Prunus pennsylvanica)
Ponderosa pine (Pinus ponderosa)
Purple coneflower (Echinacea angustifolia)
Purple prairie-clover (Petalostemon purpurea or Dalea purpurea)
Rabbitbrush (Ericameria nauseosa; formerly Chrysothamnus nauseosus)
Raccoon grape (Ampelopsis cordata)
Rigid goldenrod (Solidago rigida)
Rocky Mountain beeplant (Cleome serrulata)
Rough gayfeather also called large button snakeroot (Liatris aspera)
Silky wormwood (Artemisia dracunculus)
Spiny goldenweed (Machaeranthera pinnatifida or Haplopappus spinulosus)
Thick-spike gayfeather also called prairie button snakeroot (Liatris pycnostachya)
Western red cedar also called Rocky Mountain juniper (Juniperus scopulorum)
Wild four-o'clock (Mirabilis nyctaginea)
Wild rice (Zizania palustris)
Wild rose (Rosa arkansana)
See also
Sacagawea
Louisiana Purchase
References
Sources
The Journey - Science. "U.S. National Park Service - Experience Your America." <http://www.nps.gov/archive/jeff/lewisclark2/CorpsOfDiscovery/Preparing/Science.htm>.
Lewis and Clark Expedition
Species
Flora of the Northwestern United States
Lewis and Clark
Taxonomic lists (species) | List of species described by the Lewis and Clark Expedition | Biology | 2,329 |
43,966,007 | https://en.wikipedia.org/wiki/Titanium%20perchlorate | Titanium perchlorate is a molecular compound of titanium and perchlorate groups with formula Ti(ClO4)4. Anhydrous titanium perchlorate decomposes explosively at 130 °C and melts at 85 °C with a slight decomposition. It sublimes in a vacuum as low as 70 °C. Being a molecular with four perchlorate ligands, it is an unusual example of a transition metal perchlorate complex.
Properties
In Ti(ClO4)4, the four perchlorate groups bind as bidentate ligands, so the Ti center is bound to eight oxygen atoms. The molecule can therefore also be named tetrakis(perchlorato-O,O′)titanium(IV).
In the solid form it forms monoclinic crystals, with unit cell parameters a=12.451 b=7.814 c=12.826 Å α=108.13. Unit cell volume is 1186 Å3 at -100 °C. There are four molecules per unit cell.
It reacts with petrolatum, nitromethane, acetonitrile and dimethylformamide, and above 25 °C with carbon tetrachloride.
Titanyl perchlorate forms solvates with water, dimethyl sulfoxide, dioxane, pyridine-N-oxide, and quinoline-N-oxide.
Thermolysis of titanium perchlorate gives TiO2, ClO2 and dioxygen (O2). The titanyl species TiO(ClO4)2 is an intermediate in this decomposition.
Ti(ClO4)4 → TiO2 + 4ClO2 + 3O2 ΔH = .
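As a quick illustration that the equation as written is balanced (1 Ti, 4 Cl and 16 O on each side), here is a small atom-count check; the script is only a bookkeeping aid, not from the source.

```python
# Atom-balance check for Ti(ClO4)4 -> TiO2 + 4 ClO2 + 3 O2
from collections import Counter

reactant = Counter({"Ti": 1, "Cl": 4, "O": 16})           # Ti(ClO4)4
products = (Counter({"Ti": 1, "O": 2})                    # TiO2
            + Counter({"Cl": 4, "O": 8})                  # 4 ClO2
            + Counter({"O": 6}))                          # 3 O2
print(reactant == products)                               # True -> balanced
```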
Formation
Titanium perchlorate can be formed by reacting titanium tetrachloride with perchloric acid enriched in dichlorine heptoxide. Another route uses titanium tetrachloride with dichlorine hexoxide. This forms a complex with Cl2O6 which, when warmed to 55 °C in a vacuum, sublimes, and the pure anhydrous product can be crystallised from the vapour.
Related
In the salt dicaesium hexaperchloratotitanate, Cs2Ti(ClO4)6 the perchlorate groups are monodentate, connected by one oxygen to titanium.
Titanium perchlorate can also form complexes with other ligands bound to the titanium atom including binol, and gluconic acid.
A polymeric oxychlorperchlorato compound of titanium, Ti6O4Clx(ClO4)16−x, is made from excess TiCl4 and dichlorine hexoxide. This has a varying composition, and ranges from light to dark yellow.
References
Perchlorates
Titanium(IV) compounds | Titanium perchlorate | Chemistry | 588 |
55,526,566 | https://en.wikipedia.org/wiki/Theta%20Hydri | Theta Hydri, Latinized from θ Hydri, is a blue-white hued star in the southern constellation of Hydrus. It is faintly visible to the naked eye with an apparent visual magnitude of +5.53. Based upon an annual parallax shift of as seen from Earth, it is located approximately . At that distance, the visual magnitude of the star is diminished by an extinction of 0.10 due to interstellar dust. It is moving away from the Sun with a radial velocity of .
A stellar classification of B8 III/IV suggests it is an evolving B-type star showing mixed traits of a subgiant or giant star. It is a PGa star – a sub-class of the higher temperature chemically peculiar stars known as mercury-manganese stars (HgMn stars). That is, it displays a rich spectrum of singly-ionized phosphorus and gallium lines, in addition to ionized mercury and manganese. As such, Theta Hydri forms a typical example of this type. The absorption lines for these ionized elements are found to vary, most likely as the result of uneven surface distribution combined with the star's rotation. It is a helium-weak star, having helium lines that are anomalously weak for its spectral type. A weak and variable longitudinal magnetic field has been detected.
There is a nearby companion star of class A0 IV located at an angular separation of along a position angle of 179°, as of 2002. Schöller et al. (2010) consider this to be a visual companion, although Eggleton and Tokovinin (2008) listed the pair as a probable binary star system.
References
B-type giants
B-type subgiants
Mercury-manganese stars
Helium-weak stars
Hydrus
Hydri, Theta
0939
Durchmusterung objects
019400
014131 | Theta Hydri | Astronomy | 387 |
65,426,689 | https://en.wikipedia.org/wiki/NPZ%20model | An NPZ model is the most basic abstract representation, expressed as a mathematical model, of a pelagic ecosystem which examines the interrelationships between quantities of nutrients, phytoplankton and zooplankton as time-varying states which depend only on the relative concentrations of the various states at the given time.
One goal in pelagic ecology is to understand the interactions among available nutrients (i.e. the essential resource base), phytoplankton and zooplankton. The most basic models to shed light on this goal are called nutrient-phytoplankton-zooplankton (NPZ) models. These models are a subset of Ecosystem models.
Example
An unrealistic but instructive example of an NPZ model is provided in Franks et al. (1986) (FWF-NPZ model). It is a system of ordinary differential equations that examines the time evolution of dissolved and assimilated nutrients in an ideal upper water column consisting of three state variables corresponding to amounts of nutrients (N), phytoplankton (P) and zooplankton (Z). This closed system model is shown in the figure to the right which also shows the "flow" directions of each state quantity.
These interactions, assumed to be spatially homogeneous (the model is thus termed "zero-dimensional"), are described in general terms as follows
This NPZ model can now be cast as a system of first order differential equations:
where the parameters and variables are defined in the table below along with nominal values for a "standard environment"
An example of a 60 day sequence for the values shown is depicted in the figure to the right. Each state is color coded (Nutrient – black, Phytoplankton – green and Zooplankton – blue). Note that the initial nutrient concentration is rapidly consumed resulting in a phytoplankton bloom until the zooplankton begin aggressive grazing around day 10. Eventually both populations drop to a very low level and a high nutrient concentration remains. In the next section more sophistication is applied to the model in order to increase realism.
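Because the original equations and parameter table are not reproduced above, the sketch below uses a generic closed NPZ system in the spirit of Franks et al. (1986), with Michaelis-Menten nutrient uptake and Ivlev grazing; all parameter names and values are illustrative assumptions rather than the published ones.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (per day), NOT the values of the original study
Vm, ks = 2.0, 1.0      # max phytoplankton uptake rate, half-saturation constant
m = 0.1                # phytoplankton mortality / senescence, returned to N
Rm, lam = 1.5, 0.2     # max zooplankton grazing rate, Ivlev constant
gamma = 0.3            # fraction of grazed material assimilated by zooplankton
d = 0.2                # zooplankton loss (mortality/excretion), returned to N

def npz(t, y):
    N, P, Z = y
    uptake = Vm * N / (ks + N) * P                 # Michaelis-Menten nutrient uptake
    grazing = Rm * (1.0 - np.exp(-lam * P)) * Z    # Ivlev grazing on phytoplankton
    dP = uptake - m * P - grazing
    dZ = gamma * grazing - d * Z
    dN = -uptake + m * P + d * Z + (1.0 - gamma) * grazing   # closed system
    return [dN, dP, dZ]

sol = solve_ivp(npz, (0, 60), [1.6, 0.3, 0.1], dense_output=True)  # 60-day run
print(sol.y[:, -1])    # final N, P, Z concentrations
```

Because the system is closed, the sum N + P + Z is conserved in time, which provides a useful sanity check on any implementation.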
More Sophisticated NPZ Models
The Franks et al. (1986) work has inspired significant analysis from other researchers but is overly simplistic to capture the complexity of actual pelagic communities. A more realistic NPZ model would simulate control of primary production by incorporating mechanisms to simulate seasonally varying sunlight and decreasing illumination with depth. Evans and Parslow (1985) developed an NPZ model which includes these mechanisms and forms the basis of the following example (see also Denman and Pena (1999)).
A 200 day sequence resulting from this configuration of the FWF-NPZ model is shown in the figure to the right. Each state is color coded (Nutrient – black, Phytoplankton – green and Zooplankton – blue). Several interesting features in the model output are easily observed. First, a spring bloom occurs in the first 20 days or so, where the high nutrient concentrations are consumed by the phytoplankton causing an inverse relationship which is halted by a rise in zooplankton concentration eventually settling into a sustained steady-state solution for the remainder of the summer. Another bloom, not as pronounced as in the spring, occurs in the fall with a remixing of nutrients into the water column.
References
Ecosystems
Oceanography
Marine biology
Planktology
Zoology | NPZ model | Physics,Biology,Environmental_science | 715 |
51,008,281 | https://en.wikipedia.org/wiki/Angarium | The Angarium (Latin; from Greek ) was the institution of the royal mounted couriers in ancient Persia. The messengers, called (), alternated in stations a day's ride apart along the Royal Road. The riders were exclusively in the service of the Great King and the network allowed for messages to be transported from Susa to Sardis (2699 km) in nine days; the journey took ninety days on foot.
Herodotus, in about 440 BC, describes the Persian messenger system which had been perfected by Darius I about half a century earlier:
A sentence of this description of the , translated as "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds," is famously inscribed on the James A. Farley Building in New York City.
See also
Chapar Khaneh
Angarum
References
Further reading
Postal systems
Achaemenid Empire
Darius the Great | Angarium | Technology | 190 |
58,945,941 | https://en.wikipedia.org/wiki/2-Chloroethyl%20ethyl%20sulfide | 2-Chloroethyl ethyl sulfide is the organosulfur compound with the formula C2H5SC2H4Cl. It is a colorless liquid. The compound is part of the family of vesicant compounds known as half mustards, has been heavily investigated because of its structural similarity to the sulfur mustard S(C2H4Cl)2. The LD50s of the half and full mustard are 252 and 2.4 mg/kg (oral, rats).
References
Sulfur mustards
Organochlorides | 2-Chloroethyl ethyl sulfide | Chemistry | 112 |
50,607,774 | https://en.wikipedia.org/wiki/Cortinarius%20indigoverus | Cortinarius indigoverus is a basidiomycete fungus of the genus Cortinarius native to New Guinea, where it grows under Nothofagus.
See also
List of Cortinarius species
References
External links
indigoverus
Fungi of New Guinea
Fungi described in 1990
Taxa named by Egon Horak
Fungus species | Cortinarius indigoverus | Biology | 70 |
4,956,783 | https://en.wikipedia.org/wiki/Persistence%20hunting | Persistence hunting, also known as endurance hunting or long-distance hunting, is a variant of pursuit predation in which a predator will bring down a prey item via indirect means, such as exhaustion, heat illness or injury. Hunters of this type will typically display adaptions for distance running, such as longer legs, temperature regulation, and specialized cardiovascular systems.
Some endurance hunters may prefer to injure prey in an ambush before the hunt and rely on tracking to find their quarry. Hadza hunter-gatherers do not persistence hunt, but they do run in short bursts while hunting small game.
Humans and ancestors
Humans are some of the best long-distance runners in the animal kingdom, and some hunter-gatherer tribes have practiced this form of hunting into the modern era. Homo sapiens have the proportionally longest legs of all known human species, but all members of the genus Homo have cursorial adaptations (limbs adapted for running) not seen in more arboreal hominids such as chimpanzees and orangutans.
Persistence hunting can be done by walking, but with a 30 to 74% lower rate of success than by running or intermittent running. Furthermore, while needing 10 to 30% less energy, it takes twice as long. Walking down prey, however, might have arisen in Homo erectus, preceding endurance running. Homo erectus may have lost its hair to enhance heat dissipation during persistence hunting, which would explain the origin of a characteristic feature of the genus Homo.
Other mammals
Wolves, dingoes, and painted dogs are known for running large prey down over long distances. All three species will inflict bites in order to further weaken the animal over the course of the hunt. Canids will also pant when hot. This has the double effect of cooling the animal via the evaporation of saliva while also increasing the amount of oxygen absorbed by the lungs. Despite their similar body shape, other canids are opportunistic generalists that can be broadly categorized as pursuit predators.
Wolves may have been initially domesticated due to their similar hunting techniques to humans. Several breeds of domestic dog have been bred with endurance in mind, such as the malamute, husky and Eskimo dog.
Spotted hyenas utilize a variety of hunting techniques depending on their chosen prey. They will occasionally use a similar strategy to canid endurance hunters, though their proportionally shorter legs make this less effective.
Reptiles
No extant members of Archelosauria are known to be long-distance hunters, though various bird species may employ speedy pursuit predation. Living crocodilians and carnivorous turtles are specialized ambush predators and rarely if ever chase prey over great distances.
Within Squamata, varanid lizards possess a well developed ventricular septum that completely separates the pulmonary and systemic sides of the circulatory system during systole—this unique heart structure allows varanids to run faster over longer distances than other lizards. They also utilize a forked tongue to track injured prey over large distances after a failed ambush. Several monitor lizard species such as Komodo dragons also utilize venom to ensure the death of their prey.
Extinct species
Little evidence exists for endurance hunting in extinct species, though potential candidates include the dire wolf Aenocyon dirus due to its similar body shape to modern grey wolves.
Non-avian theropod dinosaurs such as derived tyrannosauroids and troodontids display cursorial adaptations which may have allowed for long-distance running. Derived theropods may also have had an avian-style flow-through lung, allowing for highly efficient oxygen exchange.
Some non-mammalian theriodonts may have been capable of running relatively long distances due to their limbs having an erect stance as opposed to the sprawling stance of contemporary synapsids and reptiles.
See also
Rarámuri people
Tracking (hunting)
References
Hunting methods
History of hunting
Human evolution
Behavioral ecology
Predation | Persistence hunting | Biology | 795 |
32,825,892 | https://en.wikipedia.org/wiki/Kepler-39b | Kepler-39b (formerly known as KOI-423b), is a confirmed extrasolar object (either a Jovian planet or brown dwarf because of its mass) discovered orbiting the F-type star Kepler-39. It is eighteen times more massive than Jupiter, and is about five fourths its size. The planet orbits its host star at about 15% of the average distance between the Earth and Sun. Kepler-39b's host star was investigated by European astronomers along with three other stars, including the host star of Kepler-40b, using equipment at the Haute-Provence Observatory in France. Collection and analysis of data in late 2010 led to the confirmation of Kepler-39b. The discovery paper was published in a journal on June 6, 2011.
The location of the host star in the night sky is given by its right ascension (R.A.) and declination (Dec.), which are the celestial equivalents of longitude and latitude on Earth. The right ascension is how far along the celestial equator the star lies, expressed in time units (hh:mm:ss) and measured eastwards. The declination is how far north or south of the celestial equator the object lies, expressed in degrees. For Kepler-39, the coordinates are R.A. 19h 47m 50.00s and Dec. +46° 02′ 04.00″.
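Converting these sexagesimal coordinates to decimal degrees is a common first step when working with catalogues; a small sketch follows (the function names are illustrative).

```python
def ra_to_degrees(hours, minutes, seconds):
    """Right ascension hh:mm:ss -> decimal degrees (24 h corresponds to 360 deg)."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

def dec_to_degrees(degrees, arcmin, arcsec):
    """Declination deg, arcmin, arcsec -> decimal degrees (sign taken from degrees)."""
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + arcmin / 60.0 + arcsec / 3600.0)

# Kepler-39, using the coordinates quoted above
print(ra_to_degrees(19, 47, 50.00))   # ~296.958 deg
print(dec_to_degrees(46, 2, 4.00))    # ~46.034 deg
```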
Characteristics
Mass, radius, and temperature
Kepler-39b is a Jupiter-like planet or brown dwarf that is eighteen times more massive than Jupiter and 1.22 times Jupiter's size.
For a planet of its size, Kepler-39b has a relatively cool equilibrium temperature of with respect to other inflated planets, defying most of the common models explaining inflation at the time of its discovery (including convection and the effect of stellar radiation). Although Kepler-39b and COROT-3b have similar characteristics (in terms of host star and mass), COROT-3b lies on the predicted size of what a planet of its character should look like. Kepler-39b is far larger than this model.
A recent study reveals that Kepler-39b probably has a shape that is very oblate, which, if true, is very likely caused by its fast rotation. The estimated rotation period would be about 1.6 hours, very fast compared to about 10 hours for Jupiter and Saturn. Such a fast rotation also provides a natural explanation for its large radius.
In 2022, the radius of Kepler-39b was improved based on direct parallax measurements by the Gaia spacecraft, which allow the distance to the host star to be known. The newly determined radius of is slightly lower than the previous estimate of .
Host star
Kepler-39 is an F-type star that is slightly larger and slightly more massive than the Sun (respectively, 1.10 solar masses and 1.39 solar radii) that is located 1090 parsecs (3,560 light years) away from Earth. With an effective temperature of 6260 K, Kepler-39 is hotter than the Sun. Kepler-39 is significantly metal-poor, reflected in its metallicity of [Fe/H] = -0.29 (51% the amount of iron found in the Sun).
Kepler-39 has an apparent magnitude of 14.3, and is thus not visible with the naked eye from Earth.
Orbital statistics
The planet orbits at a distance of 0.155 AU, equating to roughly 15% of the average distance between the Earth and Sun, completing one orbit every 21.0874 days. Kepler-39b has a modestly elliptical orbit, as described by its orbital eccentricity of 0.121. Its orbital inclination is 88.83º, making the planet appear almost entirely edge-on to its host star as seen from Earth.
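The quoted orbit can be sanity-checked against Kepler's third law, a^3 / P^2 ≈ M (with a in AU, P in years and M in solar masses), since the companion's roughly eighteen Jupiter masses add only about 0.02 solar masses to the total. The check below uses only the numbers quoted in this article and is an arithmetic illustration, not part of the discovery analysis.

```python
# Kepler's third law consistency check with the quoted orbital elements.
a_au = 0.155                 # semi-major axis in AU (from the text)
p_years = 21.0874 / 365.25   # orbital period converted from days to years
implied_mass = a_au**3 / p_years**2
print(f"{implied_mass:.2f} solar masses")   # ~1.12, close to the quoted 1.10 for Kepler-39
```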
Discovery
The Kepler spacecraft is a NASA telescope equipped with photometric equipment. Launched in 2009, Kepler continuously watches 156,000 stars in a small area. A team of astronomers, hoping to learn more about hot Jupiter planets and brown dwarfs, selected four F-type stars from the Kepler Input Catalog flagged as host to a Kepler Object of Interest (a transiting object that could possibly be a planet). Using three quarters of Kepler's data, the science team conducted a follow-up investigation using the SOPHIE échelle spectrograph at the Haute-Provence Observatory in southern France, observing the stars Kepler-40, Kepler-39, KOI-552 and KOI-410. Of these, conclusive evidence of a planet orbiting KOI-410 could not be found, and KOI-552 was found to be a binary star with an M-type companion. The hot Jupiter KOI-428b was the first of these four to be confirmed.
SOPHIE collected thirteen radial velocity measurements of Kepler-39 between July 26, 2010 and September 10, 2010. Seven of the measurements were affected by moonlight, but were corrected. These radial velocity measurements conclusively eliminated the possibility that the observed dips in Kepler-39's brightness were caused by the movements of binary stars and confirming the existence of planet Kepler-39b in the process. SOPHIE's measurements were used to derive Kepler-39's spectrum, which was used to define Kepler-39b's characteristics.
The astronomers submitted the discovery paper to Astronomy and Astrophysics on June 16, 2011 with François Bouchy as the leading author. The discovery paper covered the investigations of KOI-410 and KOI-552 along with the discovery of Kepler-39b.
References
Giant planets
Exoplanets discovered in 2011
Transiting exoplanets
39b
Cygnus (constellation) | Kepler-39b | Astronomy | 1,173 |
12,797,709 | https://en.wikipedia.org/wiki/Origin%20myth | An origin myth is a type of myth that explains the beginnings of a natural or social aspect of the world. Creation myths are a type of origin myth narrating the formation of the universe. However, numerous cultures have stories that take place after the initial origin. These stories aim to explain the origins of natural phenomena or human institutions within an already existing world. In Greco-Roman scholarship, the terms founding myth or etiological myth (from 'cause') are occasionally used to describe a myth that clarifies an origin, particularly how an object or custom came into existence.
In modern political discourse the terms "founding myth", "foundational myth", etc. are often used as critical references to official or widely accepted narratives about the origins or early history of a nation, a society, a culture, etc.
Nature of origin myths
Origin myths are narratives that explain how a particular reality came into existence. They often serve to justify the established order by attributing its establishment to sacred forces. The line between cosmogonic myths which describe the origin of the world and origin myths is not always clear. A myth about the origin of a specific part of the world assumes the existence of the world itself, which often relies on a cosmogonic myth. Therefore, origin myths can be seen as expanding upon and building upon their cultures' cosmogonic myths. In traditional cultures, it is common for the recitation of an origin myth to be preceded by the recitation of a cosmogonic myth.
Within academic circles, the term myth is often used specifically to refer to origin and cosmogonic myths. Folklorists, for example, reserve the term myth for stories that describe creation. Stories that do not primarily focus on origins are categorized as legend or folk tale, which are distinct from myths according to folklorists. Mircea Eliade, a historian, argues that in many traditional cultures, almost every sacred story can be considered an origin myth. Traditional societies often pattern their behavior after sacred events and view their lives as a cyclical return to a mythical age. As a result, nearly every sacred story portrays events that establish a new framework for human behavior, making them essentially stories of creation.
Social function
An origin myth often functions to justify the current state of affairs. In traditional cultures, the entities and forces described in origin myths are often considered sacred. Thus, by attributing the state of the universe to the actions of these entities and forces, origin myths give the current order an aura of sacredness: "[M]yths reveal that the World, man, and life have a supernatural origin and history, and that this history is significant, precious, and exemplary". Many cultures instill the expectation that people take mythical gods and heroes as their role models, imitating their deeds and upholding the customs they established:
When the missionary and ethnologist C. Strehlow asked the Australian Arunta why they performed certain ceremonies, the answer was always: "Because the ancestors so commanded it." The Kai of New Guinea refused to change their way of living and working, and they explained: "It was thus that the Nemu (the Mythical Ancestors) did, and we do likewise." Asked the reason for a particular detail in a ceremony, a Navaho chanter answered: "Because the Holy People did it that way in the first place." We find exactly the same justification in the prayer that accompanies a primitive Tibetan ritual: "As it has been handed down from the beginning of the earth’s creation, so must we sacrifice. … As our ancestors in ancient times did—so do we now."
Founding myths unite people and tend to include mystical events along the way to make "founders" seem more desirable and heroic. Ruling monarchs or aristocracies may allege descent from mythical founders, gods or heroes in order to legitimate their control. For example, Julius Caesar and his relatives claimed Aeneas (and through Aeneas, the goddess Venus) as an ancestor.
Founding myth
A founding myth or etiological myth (Greek aition) explains either:
the origins of a ritual or the founding of a city
the ethnogenesis of a group presented as a genealogy with a founding father, and thus the origin of a nation (natio 'birth')
the spiritual origins of a belief, philosophy, discipline, or idea – presented as a narrative
Beginning in prehistorical times, many civilizations and kingdoms adopted some version of a heroic model national origin myth, including the Hittites and Zhou dynasty in the Bronze Age; the Scythians, Wusun, Romans and Goguryeo in antiquity; Turks and Mongols during the Middle Ages; and the Dzungar Khanate in the early modern period.
In the founding myth of the Zhou dynasty in China, Lady Yuan makes a ritual sacrifice to conceive, then becomes pregnant after stepping into the footprint of the King of Heaven. She gives birth to a son, Hou Ji, whom she leaves alone in dangerous places where he is protected by sheep, cattle, birds, and woodcutters. Convinced that he is a supernatural being, she takes him back and raises him. When he grows to adulthood, he takes the position of Master of Horses in the court of Emperor Yao, and becomes successful at growing grains, gourds and beans. According to the legend, he becomes founder of the Zhou dynasty after overthrowing the evil ruler of Shang.
Like other civilizations, the Scythians also claimed descent from the son of the god of heaven. One day, the daughter of the god of the Dnieper River stole a young man's horses while he was herding his cattle, and forced him to lie with her before returning them. From this union, she conceived three sons, giving them their father's greatbow when they came of age. The son who could draw the bow would become king. All tried, but only the youngest was successful. On his attempt, three golden objects fell from the sky: a plow and yoke, a sword, and a cup. When the eldest two tried to pick them up, fire prevented them. After this, it was decided the youngest son, Scythes, would become king, and his people would be known as Scythians.
The Torah (or Pentateuch, as biblical scholars sometimes call it) is the collective name for the first five books of the Bible: Genesis, Exodus, Leviticus, Numbers, and Deuteronomy. It forms the charter myth of Israel, the story of the people's origins and the foundations of their culture and institutions, and it is a fundamental principle of Judaism that the relationship between God and his chosen people was set out on Mount Sinai through the Torah.
A founding myth may serve as the primary exemplum, as the myth of Ixion was the original Greek example of a murderer rendered unclean by his crime, who needed cleansing (catharsis) of his impurity.
Founding myths feature prominently in Greek mythology. "Ancient Greek rituals were bound to prominent local groups and hence to specific localities", Walter Burkert has observed, "i.e., the sanctuaries and altars that had been set up for all time". Thus Greek and Hebrew founding myths established the special relationship between a deity and local people, who traced their origins from a hero and authenticated their ancestral rights through the founding myth. Greek founding myths often embody a justification for the ancient overturning of an older, archaic order, reformulating a historical event anchored in the social and natural world to valorize current community practices, creating symbolic narratives of "collective importance" enriched with metaphor to account for traditional chronologies, and constructing an etiology considered to be plausible among those with a cultural investment.
In the Greek view, the mythic past had deep roots in historic time, its legends treated as facts, as Carlo Brillante has noted, its heroic protagonists seen as links between the "age of origins" and the mortal, everyday world that succeeded it. A modern translator of Apollonius of Rhodes' Argonautica has noted, of the many aitia embedded as digressions in that Hellenistic epic, that "crucial to social stability had to be the function of myths in providing explanations, authorization or empowerment for the present in terms of origins: this could apply, not only to foundations or charter myths and genealogical trees (thus supporting family or territorial claims) but also to personal moral choices." In the period after Alexander the Great expanded the Hellenistic world, Greek poetry—Callimachus wrote a whole work simply titled Aitia—is replete with founding myths. Simon Goldhill employs the metaphor of sedimentation in describing Apollonius' laying down of layers "where each object, cult, ritual, name, may be opened... into a narrative of origination, and where each narrative, each event, may lead to a cult, ritual, name, monument."
A notable example is the myth of the foundation of Rome—the tale of Romulus and Remus, which Virgil in turn broadens in his Aeneid with the odyssey of Aeneas and his founding of Lavinium, and his son Iulus's later relocation and rule of the famous twins' birthplace Alba Longa, and their descent from his royal line, thus fitting perfectly into the already established canon of events. Similarly, the Old Testament's story of the Exodus serves as the founding myth for the community of Israel, telling how God delivered the Israelites from slavery and how they therefore belonged to him through the Covenant of Mount Sinai.
During the Middle Ages, founding myths of the medieval communes of northern Italy manifested the increasing self-confidence of the urban population and the will to find a Roman origin, however tenuous and legendary. In 13th-century Padua, when each commune looked for a Roman founder – and if one was not available, invented one—a legend had been current in the city, attributing its foundation to the Trojan Antenor.
See also
References
Sources
Eliade, Mircea. Myth and Reality. Trans. Willard Trask. New York: Harper & Row, 1963.
Further reading
Belayche, Nicole. "Foundation myths in Roman Palestine. Traditions and reworking", in Ton Derks, Nico Roymans (ed.), Ethnic Constructs in Antiquity: The Role of Power and Tradition (Amsterdam, Amsterdam University Press, 2009) (Amsterdam Archaeological Studies, 13), 167–188.
Campbell, Joseph. The Masks of God: Primitive Mythology. New York: Penguin Books, 1976.
Campbell, Joseph. Transformations of Myth through Time. New York: Harper and Row, 1990.
Darshan, Guy. The Origins of the Foundation Stories Genre in the Hebrew Bible and Ancient Eastern Mediterranean, JBL, 133,4 (2014), 689–709.
Darshan, Guy. Stories of Origins in the Bible and Ancient Mediterranean Literature. Cambridge: Cambridge University Press, 2023.
Eliade, Mircea. A History of Religious Ideas: Volume 1: From the Stone Age to the Eleusinian Mysteries. 1976. Trans. Willard R. Trask. Chicago: The U of Chicago P, 1981.
Encyclopedia of Ancient Myths and Culture. London: Quantum, 2004.
Lincoln, Bruce. Discourse and the Construction of Society: Comparative Studies of Myth, Ritual, and Classification. 1989. Repr. New York: Oxford U P, 1992.
Long, Charles H. Alpha: The Myths of Creation. New York: George Braziller, 1963.
Paden, William E. Interpreting the Sacred: Ways of Viewing Religion. 1992. Boston: Beacon P, 2003.
Ricoeur, Paul. "Introduction: The Symbolic Function of Myths.” Theories of Myth: From Ancient Israel and Greece to Freud, Jung, Campbell, and Levi-Strauss. Ed. Robert A. Segal. New York & London: Garland, 1996. 327–340.
Schilbrack, Kevin. Ed. Thinking Through Myths: Philosophical Perspectives. London & New York: Routledge, 2002.
Segal, Robert A. Joseph Campbell: An Introduction. 1987. Repr. New York: Penguin 1997.
Segal, Robert A. Myth: A Very Short Introduction. Oxford: Oxford University Press, 2004.
Segal, Robert A. Theories of Myth: From Ancient Israel and Greece to Freud, Jung, Campbell, and Levi-Strauss: Philosophy, Religious Studies, and Myth. Vol. 3. New York & London: Garland, 1996.
Segal, Robert A. Theorizing about Myth. Amherst: U of Massachusetts P, 1999.
Spence, Lewis. The Outlines of Mythology: The Thinker’s Library—No. 99. 1944. Whitefish, MT: Kessinger, 2007.
von Franz, Marie-Louise. Creation Myths: Revised Edition. Boston: Shambhala, 1995.
Wright, M.R. “Models, Myths, and Metaphors.” Cosmology in Antiquity. 1995.
External links
Cultural anthropology
Literary concepts
History of religion
Cosmogony | Origin myth | Astronomy | 2,677 |
38,849,672 | https://en.wikipedia.org/wiki/Cardiophysics | Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Having formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning.
Zbigniew R. Struzik appears to be the first author to have used the term in a scientific publication, in 2004.
The term cardiovascular physics is used interchangeably.
See also
Medical physics
Important publications in medical physics
Biomedicine
Biomedical engineering
Physiome
Nanomedicine
References
Books
Papers
External links
Bioelectric Information Processing Laboratory of the Institute for Information Transmission Problems RAS.
The Group of Experimental and Clinical Cardiology in the Laboratory of Physiology of emotion, Research Institute of normal physiology by Anokhin RAMS
Oxford Cardiac Electrophysiology Group, led many years already by Prof. Denis Noble
Cardiac Biophysics and Systems Biology group of National Heart & Lung Institute of Imperial College London
Group of Nonlinear Dynamics & Cardiovascular Physics of the 1st Faculty of Mathematics and Natural Sciences in the Institute of Physics of Humboldt University of Berlin
Medical physics
Applied and interdisciplinary physics | Cardiophysics | Physics,Biology | 261 |
9,502,303 | https://en.wikipedia.org/wiki/Flux-corrected%20transport | Flux-corrected transport (FCT) is a conservative shock-capturing scheme for solving Euler equations and other hyperbolic equations which occur in gas dynamics, aerodynamics, and magnetohydrodynamics. It is especially useful for solving problems involving shock or contact discontinuities. An FCT algorithm consists of two stages, a transport stage and a flux-corrected anti-diffusion stage. The numerical errors introduced in the first stage (i.e., the transport stage) are corrected in the anti-diffusion stage.
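The following is a minimal one-dimensional sketch of the two-stage construction described above, written in Python with NumPy: a low-order upwind transport stage followed by a limited antidiffusive correction. It uses a Zalesak-style limiter rather than the original SHASTA algorithm of Boris and Book, and the grid setup, Courant number convention and function name are assumptions made for the example.

```python
import numpy as np

def fct_advect_step(u, c):
    """One explicit step of scalar advection u_t + a u_x = 0 on a periodic grid,
    using a flux-corrected transport construction: low-order (upwind) fluxes
    plus limited antidiffusive corrections. c = a*dt/dx, assumed 0 <= c <= 1."""
    # Low-order (donor-cell/upwind) flux at interface i+1/2, assuming a > 0.
    f_low = c * u
    # High-order (Lax-Wendroff) flux at interface i+1/2.
    f_high = c * (u + 0.5 * (1.0 - c) * (np.roll(u, -1) - u))
    # Stage 1: transported (low-order) solution.
    u_td = u - (f_low - np.roll(f_low, 1))
    # Stage 2: antidiffusive fluxes to be limited.
    a_flux = f_high - f_low
    # Zalesak-style limiter: the corrected solution may not exceed the
    # local extrema of the transported solution.
    u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
    p_plus = np.maximum(np.roll(a_flux, 1), 0) - np.minimum(a_flux, 0)   # inflow into cell i
    p_minus = np.maximum(a_flux, 0) - np.minimum(np.roll(a_flux, 1), 0)  # outflow from cell i
    q_plus = u_max - u_td
    q_minus = u_td - u_min
    with np.errstate(divide="ignore", invalid="ignore"):
        r_plus = np.where(p_plus > 0, np.minimum(1.0, q_plus / p_plus), 0.0)
        r_minus = np.where(p_minus > 0, np.minimum(1.0, q_minus / p_minus), 0.0)
    # Limiting coefficient for the flux at interface i+1/2 (between cells i and i+1).
    coeff = np.where(a_flux >= 0,
                     np.minimum(np.roll(r_plus, -1), r_minus),
                     np.minimum(r_plus, np.roll(r_minus, -1)))
    f_corr = coeff * a_flux
    # Final antidiffusion stage.
    return u_td - (f_corr - np.roll(f_corr, 1))
```

A typical check is to advect a square pulse for many steps with this routine and with the unlimited Lax-Wendroff fluxes alone; the corrected scheme keeps the profile monotone where the purely high-order scheme develops oscillations near the discontinuities.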
References
Jay P. Boris and David L. Book, "Flux-corrected transport, I: SHASTA, a fluid transport algorithm that works", J. Comput. Phys. 11, pp. 38 (1973).
External links
Fully multidimensional flux-corrected transport algorithms for fluids
See also
Computational fluid dynamics
Computational magnetohydrodynamics
Shock capturing methods
Volume of fluid method
Computational fluid dynamics | Flux-corrected transport | Physics,Chemistry | 192 |
29,874,967 | https://en.wikipedia.org/wiki/Gremiphyca | Gremiphyca is a lobed, non-mineralized alga with a pseudoparenchymatous thallus, dating to the Ediacaran period. The genus was reinvestigated by Xiao et al. and was interpreted to be a stem-group florideophyte.
References
Fossil algae
Ediacaran life | Gremiphyca | Biology | 71 |
1,652,134 | https://en.wikipedia.org/wiki/Interactive%20media | Interactive media normally refers to products and services on digital computer-based systems which respond to the user's actions by presenting content such as text, moving image, animation, video and audio. Since its early conception, various forms of interactive media have emerged with impacts on educational and commercial markets. With the rise of decision-driven media, concerns surround the impacts of cybersecurity and societal distraction.
Definition
Interactive media is a method of communication in which the output from the media comes from the input of the users.
Interactive media works with the user's participation. The media still has the same purpose but the user's input adds interaction and brings interesting features to the system for better enjoyment.
Development
The analogue videodisc developed by NV Philips was the pioneering technology for interactive media. Additionally, there are several elements that encouraged the development of interactive media including the following:
The laser disc technology was first invented in 1958. It enabled the user to access high-quality analogue images on the computer screen. This increased the ability of interactive video systems.
The concept of the graphical user interface (GUI), which was developed in the 1970s, popularized by Apple Computer, Inc. was essentially about visual metaphors, intuitive feel and sharing information on the virtual desktop. Additional power was the only thing needed to move into multimedia.
The sharp fall in hardware costs and the unprecedented rise in the computer speed and memory transformed the personal computer into an affordable machine capable of combining audio and color video in advanced ways.
Another element is the release of Windows 3.0 in 1990 by Microsoft into the mainstream IBM clone world. It accelerated the acceptance of GUI as the standard mechanism for communicating with small computer systems.
The development by NV Philips of optical digital technologies built around the compact disk (CD) in 1979 is also another leading element in the interactive media development as it raised the issue of developing interactive media.
All of the prior elements contributed to the development of the main hardware and software systems used in interactive media.
Terminology
Though the word media is plural, the term is often used as a singular noun.
Interactive media is related to the concepts interaction design, new media, interactivity, human computer interaction, cyberculture, digital culture, interactive design, and can include augmented reality and virtual reality.
An essential feature of interactivity is that it is mutual: user and machine each take an active role. Most interactive computing systems are for some human purpose and interact with humans in human contexts.
Interactive media are an instance of a computational method influenced by the sciences of cybernetics, autopoiesis and system theories, and challenging notions of reason and cognition, perception and memory, emotions and affection.
Any form of interface between the end user/audience and the medium may be considered interactive. Interactive media is not limited to electronic media or digital media. Board games, pop-up books, flip books and constellation wheels are all examples of printed interactive media. Books with a simple table of contents or index may be considered interactive due to the non-linear control mechanism in the medium, but are usually considered non-interactive since the majority of the user experience is non-interactive reading.
Advantages
Effects on learning
Interactive media is helpful in the four development dimensions in which young children learn: social and emotional, language development, cognitive and general knowledge, and approaches toward learning. Using computers and educational computer software in a learning environment helps children increase communication skills and their attitudes about learning. Children who use educational computer software are often found using more complex speech patterns and higher levels of verbal communication. A study found that basic interactive books that simply read a story aloud and highlighted words and phrases as they were spoken were beneficial for children with lower reading abilities. Children have different styles of learning, and interactive media helps children with visual, verbal, auditory, and tactile learning styles.
Furthermore, studies conducted using interactive, immersive media (such as virtual reality) have shown effects on educational outcomes for students diagnosed with autism spectrum disorder. Because they rely on additional sensors and specialized equipment, immersive media have been questioned on how effectively they include students who may be considered neurodivergent. Interactive media can often be highly stimulating, which has raised concerns about overstimulation and potential triggers for particular students.
Interactive media has also been used under multiple professions to provide training opportunities, such as its use in medical training and education.
Intuitive understanding
Interactive media makes technology more intuitive to use. Interactive products such as smartphones, iPads/iPods, interactive whiteboards and websites are all easy to use. The ease of use of these products encourages consumers to experiment with them rather than reading instruction manuals.
Relationships
Interactive media promotes dialogic communication. This form of communication allows senders and receivers to build long-term trust and cooperation. This plays a critical role in building relationships. Organizations also use interactive media to go further than basic marketing and develop more positive behavioral relationships. The use of interactive media, alongside immersive media, also has the additional benefit of providing further realism when creating relational bonds in virtual settings. Through the use of this technology, new types of relationships can be formed and preexisting ones strengthened.
Disadvantages
Public safety and distraction
Interactive media has given way to new distractions which can lead to public safety issues. Digital distractions are heightened by the necessity for user input and response to media requests.
Poor sleep habits
Smartphones are a prevalent form of interactive media, and their excessive use can lead to bedtime procrastination. Mobile phone use that keeps individuals up at night causes adverse health effects such as fatigue and headaches.
Influence on families
The introduction of interactive media has greatly affected the lives and inner workings of families, with many family activities having integrated with technology quite seamlessly, allowing both children and parents to adapt to it as they see fit. However, parents have also become increasingly worried about the impact that it will have on their family lives. This is not necessarily because they are opposed to technology, but because they fear that it will lessen the time that they get to spend with their children. Studies have shown that although interactive media is able to connect families together when they are unable to physically, the dependence on these media also continues to persist even when there are opportunities for family time, which often leads the adults to believe that it distracts children more than it benefits them.
Types
Distributed interactive media
Media which allow several geographically remote users to interact synchronously with the media application or system are known as distributed interactive media. Common examples of this type of media include online gaming, distributed virtual environments, and whiteboards used for interactive conferences.
Commercial interactive media
Interactive media assists in commercial ventures, such as those incorporating virtual and augmented reality technologies. Virtual tours are one demonstrated way in which interactive media is able to meet commercial needs and provide alternative revenue for businesses. Studies suggest that businesses using immersive, interactive media can expect greater marketing impact.
Informational interactive media
Media in which information is provided through interactive means. An example would be geographic information systems, such as those built upon the ArcGIS framework, which provide users with the means to interact with locational data in various ways, such as collecting, storing and manipulating it.
Examples
A couple of basic examples of interactive media are websites and mobile applications. Websites, especially social networking websites, provide the interactive use of text and graphics to their users, who interact with each other in various ways such as chatting and posting thoughts or pictures. The ImmersiveMe convention brings together those in the industry, displaying many examples of interactive media and their impacts, such as those in the digital humanities space, where interactive media has been used for research purposes.
Technologies and implementation
Interactive media can be implemented using a variety of platforms and applications that use technology. Some examples include mobile platforms such as touch screen smartphones and tablets, as well as other interactive mediums that are created exclusively to solve a unique problem or set of problems. Interactive media is not limited to a professional environment, it can be used for any technology that responds to user actions. This can include the use of JavaScript and AJAX in web pages, but can also be used in programming languages or technology that has similar functionality.
One of the more recent innovations to use interactivity to solve an everyday problem is Delta Airlines' "Photon Shower". This device was developed as a collaboration between Delta Airlines and Professor Russell Foster of Cambridge University. The device is designed to reduce the effect of jet lag on customers who often take long flights across time zones. The interactivity is evident in how it solves this problem. By observing what time zones a person has crossed and matching those to the individual's basic known sleep cycles, the machine is able to predict when the person's body is expecting light and when it is expecting darkness. It then stimulates the individual with the appropriate light variations for the time, and provides an instructional card to inform them of what times their body expects light and what times it expects darkness. Interactive media continues to advance today; with the advent of ever more powerful machines, the limit to what can be input and manipulated on a display in real time is becoming virtually non-existent.
See also
Artmedia
Collective intelligence
Digital art
Digital media
Immersive virtual reality
Information theory
Interactive advertising
Interactive art
Interactive cinema
Interactive movie
International Interactive Communications Society
Internet think tanks
Mass collaboration
Mass media
Media psychology
Media theory
Multimedia
New media art
Social media
User-generated content
Web documentary
References
External links
Bill Buxton: The three mirrors of interaction
Advertising techniques
Media studies
Multimodal interaction
New media
New media art
Promotion and marketing communications | Interactive media | Technology | 1,958 |
61,762,007 | https://en.wikipedia.org/wiki/Cladophialophora%20arxii | Cladophialophora arxii is a black, yeast-shaped dematiaceous fungus that is able to cause serious phaeohyphomycotic infections. C. arxii was first discovered in 1995 in Germany in a 22-year-old female patient suffering from multiple granulomatous tracheal tumours. It is a clinical strain that is typically found in humans and is also capable of acting as an opportunistic fungus of other vertebrates. Human cases caused by C. arxii have been reported from various parts of the world, such as Germany and Australia.
The genus Cladophialophora comprises four different lineages, one of which belongs to the family Herpotrichiellaceae. Within this lineage there are two major clades, one of which, the bantiana clade, contains C. arxii. C. arxii is typically slow growing and is capable of growing at higher temperatures than many other fungi, with its maximal growth temperature reaching 42 °C.
History and taxonomy
Cladophialophora arxii was first discovered in a tracheal granulomatous tumour of a 22-year-old female in Berlin, Germany in 1995. It was originally considered to be C. borelli due to the structural similarity between the two species. The fungus was considered to be of the genus Cladosporium. The genus Cladosporium was first described in 1816; several human-pathogenic species formerly belonging to Cladosporium are now classified in the genus Cladophialophora. The genus Cladophialophora mainly consists of species of melanized hyphomycetes that are found within human hosts. C. arxii was named after Dr. J. A. von Arx, a Dutch mycologist, for his efforts in classifying the genus Cladosporium.
Phylogeny
The genus Cladophialophora currently contains seven different species that are capable of causing disease in humans, C.arxii included. Cladophialophora consists of four different lineages: one lineage belonging to the family Herpotrichiellaceae and the other pertaining to a group of rock-dwelling strains. The majority of the human opportunistic fungi of Cladophialophora can be found within Herpotrichiellaceae which forms two major clades. The first clade, is known as the C. carrionii-clade and contains the species C. carrionii and C. boppii. The second clade, the C. bantiana clade includes the species C. bantiana, C. devriesii, C. mycetomatis, C. immunda, C. emmonsii, C. saturnica and C. arxii. It has been found that the environmental strain C. minourae is a sister strain to C. arxii
Habitat
Cladophialophora is a genus of black yeast fungi whose natural habitat consists of soil and rotting plant material. Several of the species belonging to Cladophialophora have been reported in both tropical and subtropical regions of the world. Cladophialophora arxii is a clinical strain that has generally been found in humans. C. arxii is also capable of acting as an opportunistic fungus of other vertebrates.
Growth and morphology
C. arxii is a slow-growing fungus that grows to about 36–40 mm in size when cultured on SDA and PDA agar at 25 °C over a span of 35 days. The colonies formed contained dark grey aerial hyphae and black-brown coloured hyphae located on the margins of the SDA agar. On the PDA the colonies were dark black-brown with felty radial furrows. The fungus contained olive-brown septate hyphae with both lateral and terminal acropetal conidial chains with branching. The overall morphology of the conidia of C. arxii is very similar to that of Cladophialophora devriesii, but the conidial chains of C. arxii are longer. Additionally, the conidial chains are fragile and borne on denticles. The conidia of C. arxii are pale brown, smooth and thick-walled, with a lemon- to spindle-shaped morphology. Initially the fungus contained muriform cells in tissue samples, but following corticosteroid therapy the cells changed their shape and became irregularly shaped hyphae.
Physiology
The optimal growth temperature of Cladophialophora species is from 27 to 30 °C, but they are capable of growing anywhere between 9 and 37 °C. C. arxii grows optimally at 37 °C, with the maximum temperature at which it can grow being 42 °C. C. arxii has an optimal production of non-septate swollen cells at a pH of 4–5. C. arxii assimilates meso-erythritol and galactitol but is unable to assimilate ethanol. Furthermore, it is not able to assimilate methyl-alpha-glucoside, soluble starch, glycerol, meso-erythritol, myoinositol or succinate.
Clinical relevance
C. arxii is a dematiaceous fungus that causes severe phaeohyphomycotic infections. A rarely seen fungus, it was the cause of granulomatous tumours in the trachea of the first patient diagnosed with it, in 1995; the infection was treated with 5-FC, amphotericin B, and itraconazole. C. arxii was assumed to be the cause of subcutaneous phaeohyphomycosis of an ulcer in a 68-year-old woman, although these results were not definitive. Shortly after, in 2001, C. arxii was believed to be responsible for causing both cerebral and lung phaeohyphomycosis in a 30-year-old African woman following a heart transplant. Additionally, it was the cause of femoral osteomyelitis in a 20-year-old man; treatment of the osteomyelitis included surgical debridement, itraconazole, and interferon gamma. Finally, the most recently reported case of C. arxii was seen in Australia, with the patient suffering from pulmonary chromoblastomycosis.
Treatment
Several antifungal drugs have been shown to be successful in treating C. arxii infections, such as 5-FC, amphotericin B and itraconazole, as well as interferon gamma treatment. Additionally, these antifungal drugs are usually accompanied by surgical procedures such as surgical debridement. Furthermore, in vitro studies have shown that combination therapy with amphotericin B and terbinafine has synergistic effects against C. arxii. 5-FC and itraconazole have also shown synergistic effects when targeting infections caused by C. arxii.
References
Eurotiomycetes
Fungus species | Cladophialophora arxii | Biology | 1,495 |
25,166,327 | https://en.wikipedia.org/wiki/Pseudoconversational%20transaction | In transaction processing, a pseudoconversational transaction is a type of transaction that emulates a true conversation in an interactive session. To the end user, it appears as though the program has simply "paused" to request further input, whereas in reality, most resources are released while the input is waiting to be received.
Transparent termination and restart
The controlling program has deliberately saved most of its state during the delay, terminated, and then, on being restarted through new input, restores its previous state. A single control variable is usually retained to hold the current state in terms of the stage of input reached (and therefore what must be recovered at any stage in order to resume processing). The state, including the control variable, is usually preserved in a 'temporary storage record' that maps the variables needing restoration as an aggregate set, usually contained in a single structure (other variables will be re-initialized on restart).
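As a rough illustration of the pattern, not modelled on any particular transaction processing monitor, the sketch below restores saved state keyed by a control variable, handles one piece of input, saves the state again and terminates. The TEMP_STORAGE dictionary, the stage names and the order-entry dialogue are all invented for the example.

```python
import json

TEMP_STORAGE = {}          # stands in for a temporary storage record/queue

def handle_input(session_id, user_input):
    """Process one piece of input for a pseudoconversational 'session'."""
    # Restore the aggregate state; the "stage" field is the control variable.
    state = json.loads(TEMP_STORAGE.get(session_id, '{"stage": "ASK_NAME"}'))

    if state["stage"] == "ASK_NAME":
        state["name"] = user_input
        state["stage"] = "ASK_QUANTITY"
        reply = "Enter quantity:"
    elif state["stage"] == "ASK_QUANTITY":
        state["quantity"] = int(user_input)
        state["stage"] = "DONE"
        reply = f"Order for {state['name']}: {state['quantity']} item(s) recorded."
    else:
        reply = "Conversation already finished."

    # Save state and terminate; nothing is held between invocations,
    # so the user's thinking time ties up no pooled resources.
    TEMP_STORAGE[session_id] = json.dumps(state)
    return reply

print(handle_input("S1", "Alice"))   # Enter quantity:
print(handle_input("S1", "3"))       # Order for Alice: 3 item(s) recorded.
```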
Conserving resources
This method of programming frees up pooled resources (such as memory) for an indeterminate time. This delay is the end-user 'thinking time' (or response time) and depends on human factors including speed of typing.
For systems supporting many thousands of users on a single processor, it allows the transparent 'look and feel' of a true conversational session without tying up limited resources.
References
External links
Pseudoconversational and conversational design by IBM
Transaction processing | Pseudoconversational transaction | Technology | 289 |
3,472,330 | https://en.wikipedia.org/wiki/Sun%20sensor | A Sun sensor is a navigational instrument used by spacecraft to detect the position of the Sun. Sun sensors are used for attitude control, solar array pointing, gyro updating, and fail-safe recovery.
In addition to spacecraft, Sun sensors find use in ground-based weather stations and Sun-tracking systems, and aerial vehicles including balloons and UAVs.
Mechanism
There are various types of Sun sensors, which differ in their technology and performance characteristics. Sun presence sensors provide a binary output, indicating when the Sun is within the sensor's field of view. Analog and digital Sun sensors, in contrast, indicate the angle of the Sun by continuous and discrete signal outputs, respectively.
In typical Sun sensors, a thin slit at the top of a rectangular chamber allows a line of light to fall on an array of photodetector cells at the bottom of the chamber. A voltage is induced in these cells, which is registered electronically. By orienting two sensors perpendicular to each other, the direction of the Sun can be fully determined.
Often, multiple sensors will share processing electronics.
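As an illustration of this geometry, the following sketch estimates the single-axis Sun angle from the position of the illuminated line on the detector array and then combines two orthogonal measurements into a Sun vector. The centroid-based read-out and all names are assumptions for the example rather than a description of any particular flight sensor.

```python
import math

def sun_angle_from_centroid(cell_readings, cell_pitch, chamber_height):
    """Estimate the Sun angle (radians) in one axis of a slit-type analog sensor.
    cell_readings: photocurrents of the detector cells under the slit
    cell_pitch: spacing between cells (same length unit as chamber_height)
    chamber_height: distance from the slit to the detector array
    Assumes cell index 0 lies directly beneath the slit."""
    total = sum(cell_readings)
    if total == 0:
        raise ValueError("Sun not in field of view")
    # Intensity-weighted centroid of the illuminated line, in length units.
    centroid = sum(i * r for i, r in enumerate(cell_readings)) * cell_pitch / total
    return math.atan2(centroid, chamber_height)

def sun_vector(angle_x, angle_y):
    """Combine two orthogonal single-axis angles (projections onto the x-z and
    y-z planes) into a unit vector pointing toward the Sun."""
    v = (math.tan(angle_x), math.tan(angle_y), 1.0)
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)
```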
Criteria
There are a number of design and performance criteria which dictate the selection of a Sun sensor model:
Field of view
Angular resolution
Accuracy and stability
Mass and volume
Input voltage and power
Output characteristics (including electrical characteristics, update frequency, nonlinearity, and encoding)
Durability (including radiation hardening and tolerance to vibration and thermal cycling)
See also
Celestial navigation
Earth sensor
Star tracker
References
Spacecraft attitude control
Astrodynamics
Orbits
Spaceflight concepts
Navigational equipment
Celestial navigation | Sun sensor | Astronomy,Engineering | 313 |
41,077,244 | https://en.wikipedia.org/wiki/Endochytriaceae | The Endochytriaceae are a family of fungi in the order Cladochytriales. The family contains 10 genera and 56 species according to a 2008 estimate. It was circumscribed by mycologist Donald J.S. Barr in 1980.
References
External links
Chytridiomycota
Fungus families | Endochytriaceae | Biology | 69 |
24,109,039 | https://en.wikipedia.org/wiki/Quench%20press | A quench press is a machine that uses concentrated forces to hold an object as it is quenched. These types of quench facilities are used to quench large gears and other circular parts so that they remain circular. They are also used to quench saw blades and other flat or plate-shaped objects so that they remain flat.
Quench presses are able to quench the part while it is being held because of the unique structure of the clamps holding the part. The clamps are slotted so that oil or water can flow through each slot and cool the part, while the ribs of the clamps hold the part in place.
References
Gears
Metalworking tools
Metallurgical processes | Quench press | Chemistry,Materials_science | 140 |
31,675,463 | https://en.wikipedia.org/wiki/Spring%20system | In engineering and physics, a spring system or spring network is a model of physics described as a graph with a position at each vertex and a spring of given stiffness and length along each edge. This generalizes Hooke's law to higher dimensions. This simple model can be used to solve the pose of static systems from crystal lattice to springs. A spring system can be thought of as the simplest case of the finite element method for solving problems in statics. Assuming linear springs and small deformation (or restricting to one-dimensional motion) a spring system can be cast as a (possibly overdetermined) system of linear equations or equivalently as an energy minimization problem.
Known spring lengths
Consider the simple case of three nodes, in one dimension \(x = [x_1 \; x_2 \; x_3]^\top\), connected by two springs. If the nominal lengths, L, of the springs are known to be 1 and 2 units respectively, i.e. \(L = [1 \; 2]^\top\), then the system can be solved as follows:
The stretching of the two springs is given as a function of the positions of the nodes by
\[ d = \begin{bmatrix} x_2 - x_1 - 1 \\ x_3 - x_2 - 2 \end{bmatrix} = A^\top x - L , \]
where \(A^\top\) is the matrix transpose of the oriented incidence matrix
\[ A = \begin{bmatrix} -1 & 0 \\ 1 & -1 \\ 0 & 1 \end{bmatrix} , \]
relating each degree of freedom to the direction each spring pulls on it.
The forces on the springs are
\[ f = W d = W (A^\top x - L) , \]
where W is a diagonal matrix giving the stiffness of every spring. Then the force on the nodes is given by left multiplying by \(A\), which we set to zero to find equilibrium:
\[ A W (A^\top x - L) = 0 , \]
which gives the linear equation:
\[ A W A^\top x = A W L . \]
Now, the matrix \(A W A^\top\) is singular, because all solutions are equivalent up to rigid-body translation. Let us prescribe a Dirichlet boundary condition, e.g., \(x_1 = 2\).
As an example, let W be the identity matrix, then
\[ A A^\top = \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix} \]
is the Laplacian matrix. Plugging in \(x_1 = 2\) we have
\[ \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix} . \]
Incorporating the known value \(x_1 = 2\) into the right-hand side, removing the row of the system that we already know, and simplifying, leaves us with
\[ \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} , \]
so we can then solve
\[ \begin{bmatrix} x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 5 \end{bmatrix} . \]
That is, \(x_1 = 2\), as prescribed, \(x_2 = 3\), leaving the first spring slack, and \(x_3 = 5\), leaving the second spring slack.
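The worked example can be checked numerically. The following sketch, using NumPy with the matrices as written above, eliminates the prescribed degree of freedom and solves the reduced system.

```python
import numpy as np

A = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]])            # oriented incidence matrix (nodes x springs)
W = np.eye(2)                       # spring stiffnesses (identity here)
L = np.array([1.0, 2.0])            # nominal spring lengths

K = A @ W @ A.T                     # stiffness (Laplacian) matrix, singular
b = A @ W @ L                       # right-hand side

x1 = 2.0                            # Dirichlet condition on the first node
# Eliminate the known degree of freedom and solve for the remaining nodes.
K_red = K[1:, 1:]
b_red = b[1:] - K[1:, 0] * x1
x_rest = np.linalg.solve(K_red, b_red)
print(np.concatenate(([x1], x_rest)))   # -> [2. 3. 5.]
```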
See also
Gaussian network model
Anisotropic Network Model
Stiffness matrix
Spring-mass system
Laplacian matrix
External links
The Physics of Springs
Springs (mechanical)
Elasticity (physics)
Solid mechanics | Spring system | Physics,Materials_science | 450 |
418,206 | https://en.wikipedia.org/wiki/Execution%20%28computing%29 | Execution in computer and software engineering is the process by which a computer or virtual machine interprets and acts on the instructions of a computer program. Each instruction of a program is a description of a particular action which must be carried out, in order for a specific problem to be solved. Execution involves repeatedly following a "fetch–decode–execute" cycle for each instruction done by the control unit. As the executing machine follows the instructions, specific effects are produced in accordance with the semantics of those instructions.
Programs for a computer may be executed in a batch process without human interaction or a user may type commands in an interactive session of an interpreter. In this case, the "commands" are simply program instructions, whose execution is chained together.
The term run is used almost synonymously. A related meaning of both "to run" and "to execute" refers to the specific action of a user starting (or launching or invoking) a program, as in "Please run the application."
Process
Prior to execution, a program must first be written. This is generally done in source code, which is then compiled at compile time (and statically linked at link time) to produce an executable. This executable is then invoked, most often by an operating system, which loads the program into memory (load time), possibly performs dynamic linking, and then begins execution by moving control to the entry point of the program; all these steps depend on the Application Binary Interface of the operating system. At this point execution begins and the program enters run time. The program then runs until it ends, either in a normal termination or a crash.
Executable
Executable code, an executable file, or an executable program, sometimes simply referred to as an executable or
binary, is a list of instructions and data to cause a computer "to perform indicated tasks according to encoded instructions", as opposed to a data file that must be interpreted (parsed) by a program to be meaningful.
The exact interpretation depends upon the use. "Instructions" is traditionally taken to mean machine code instructions for a physical CPU. In some contexts, a file containing scripting instructions (such as bytecode) may also be considered executable.
Context of execution
The context in which execution takes place is crucial. Very few programs execute on a bare machine. Programs usually contain implicit and explicit assumptions about resources available at the time of execution. Most programs execute within multitasking operating system and run-time libraries specific to the source language that provide crucial services not supplied directly by the computer itself. This supportive environment, for instance, usually decouples a program from direct manipulation of the computer peripherals, providing more general, abstract services instead.
Context switching
In order for programs and interrupt handlers to work without interference and share the same hardware memory and access to the I/O system, in a multitasking operating system running on a digital system with a single CPU/MCU, it is necessary to have software and hardware facilities that keep track of an executing process's data (memory page addresses, registers, etc.) and save and restore that data to the state it was in before the process was suspended. This is achieved by context switching. Running programs are often assigned a Process Context IDentifier (PCID).
In Linux-based operating systems, a set of data stored in registers is usually saved into a process descriptor in memory to implement switching of context. PCIDs are also used.
Runtime
Runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program.
A runtime error is detected after or during the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many other runtime errors exist and are handled differently by different programming languages, such as division by zero errors, domain errors, array subscript out of bounds errors, arithmetic underflow errors, several types of underflow and overflow errors, and many other runtime errors generally considered as software bugs which may or may not be caught and handled by any particular computer language.
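For instance, the following illustrative Python function is accepted without complaint when the source is compiled to bytecode; the error only appears at run time, when the offending statement is actually executed.

```python
# No compile-time (syntax) error here; the failure is purely a runtime error.
def average(values):
    return sum(values) / len(values)

print(average([2, 4, 6]))   # 4.0
print(average([]))          # raises ZeroDivisionError at run time
```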
Implementation details
When a program is to be executed, a loader first performs the necessary memory setup and links the program with any dynamically linked libraries it needs, and then the execution begins starting from the program's entry point. In some cases, a language or implementation will have these tasks done by the language runtime instead, though this is unusual in mainstream languages on common consumer operating systems.
Some program debugging can only be performed (or is more efficient or accurate when performed) at runtime. Logic errors and array bounds checking are examples. For this reason, some programming bugs are not discovered until the program is tested in a production environment with real data, despite sophisticated compile-time checking and pre-release testing. In this case, the end-user may encounter a "runtime error" message.
Application errors (exceptions)
Exception handling is one language feature designed to handle runtime errors, providing a structured way to catch completely unexpected situations as well as predictable errors or unusual results without the amount of inline error checking required of languages without it. More recent advancements in runtime engines enable automated exception handling which provides "root-cause" debug information for every exception of interest and is implemented independent of the source code, by attaching a special software product to the runtime engine.
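A minimal illustration of the structured pattern, in generic Python and not tied to any particular runtime engine, is shown below; the function and its failure modes are invented for the example.

```python
def read_ratio(text_a, text_b):
    """Parse two numbers and divide them, routing the predictable failure
    modes through one structured handler instead of inline checks."""
    try:
        return float(text_a) / float(text_b)
    except ValueError:             # input that is not a number
        return None
    except ZeroDivisionError:      # predictable arithmetic error
        return float("inf")

print(read_ratio("10", "4"))       # 2.5
print(read_ratio("10", "0"))       # inf
print(read_ratio("ten", "4"))      # None
```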
Runtime system
A runtime system, also called runtime environment, primarily implements portions of an execution model. This is not to be confused with the runtime lifecycle phase of a program, during which the runtime system is in operation. When treating the runtime system as distinct from the runtime environment (RTE), the first may be defined as a specific part of the application software (IDE) used for programming, a piece of software that provides the programmer a more convenient environment for running programs during their production (testing and similar), while the second (RTE) would be the very instance of an execution model being applied to the developed program which is itself then run in the aforementioned runtime system.
Most programming languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues including the management of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and otherwise. The compiler makes assumptions depending on the specific runtime system to generate correct code. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language.
Instruction cycle
The instruction cycle (also known as the fetch–decode–execute cycle, or simply the fetch-execute cycle) is the cycle that the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage.
In simpler CPUs, the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs, the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps.
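A toy register machine can make the cycle concrete. The instruction set below (LOAD, ADD, JMPNZ, HALT) and its encoding are invented purely for illustration; each loop iteration fetches an instruction, advances the program counter, then decodes and executes it.

```python
def run(program):
    """Execute a list of (opcode, argument) pairs on a one-register machine."""
    pc, acc = 0, 0                       # program counter and accumulator
    while True:
        op, arg = program[pc]            # fetch
        pc += 1
        if op == "LOAD":                 # decode and execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JMPNZ":              # jump if the accumulator is non-zero
            if acc != 0:
                pc = arg
        elif op == "HALT":
            return acc

# Counts down from 3 to 0, then halts.
print(run([("LOAD", 3), ("ADD", -1), ("JMPNZ", 1), ("HALT", None)]))
```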
Interpreter
A system that executes a program is called an interpreter of the program. Loosely speaking, an interpreter directly executes a program. This contrasts with a language translator that converts a program from one language to another before it is executed.
Virtual machine
A virtual machine (VM) is the virtualization/emulation of a computer system. Virtual machines are based on computer architectures and provide functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination.
Virtual machines differ and are organized by their function, shown here:
System virtual machines (also termed full virtualization VMs) provide a substitute for a real machine. They provide functionality needed to execute entire operating systems. A hypervisor uses native execution to share and manage hardware, allowing for multiple environments which are isolated from one another, yet exist on the same physical machine. Modern hypervisors use hardware-assisted virtualization, virtualization-specific hardware, primarily from the host CPUs.
Process virtual machines are designed to execute computer programs in a platform-independent environment.
Some virtual machine emulators, such as QEMU and video game console emulators, are designed to also emulate (or "virtually imitate") different system architectures thus allowing execution of software applications and operating systems written for another CPU or architecture. OS-level virtualization allows the resources of a computer to be partitioned via the kernel. The terms are not universally interchangeable.
See also
Executable
Run-time system
Runtime program phase
Program counter
References
Computing terminology | Execution (computing) | Technology | 1,904 |
27,599,204 | https://en.wikipedia.org/wiki/Play%20Framework | Play Framework is an open-source web application framework which follows the model–view–controller (MVC) architectural pattern. It is written in Scala and usable from other programming languages that are compiled to JVM bytecode, e.g. Java. It aims to optimize developer productivity by using convention over configuration, hot code reloading and display of errors in the browser.
Support for the Scala programming language has been available since version 1.1 of the framework. In version 2.0, the framework core was rewritten in Scala. Build and deployment was migrated to SBT, and templates use Scala instead of Apache Groovy.
History
Play was created by software developer Guillaume Bort, while working at Zengularity SA (formerly Zenexity). Although the early releases are no longer available online, there is evidence of Play existing as far back as May 2007. In 2007, pre-release versions of the project were available to download from Zenexity's website.
Motivation
Play is heavily inspired by ASP.NET MVC, Ruby on Rails and Django and is similar to this family of frameworks. Play web applications can be written in Scala or Java, in an environment that may be less Java Enterprise Edition-centric. Play uses no Java EE constraints. This can make Play simpler to develop compared to other Java-centric platforms.
Although Play 1.x could also be packaged as WAR files to be distributed to standard Java EE application servers, Play 2.x applications are now designed to be run using the built-in Akka HTTP or Netty web servers exclusively.
Major differences from Java frameworks
Stateless: Play 2 is fully RESTful – there is no Java EE session per connection.
Integrated unit testing: JUnit and Selenium support is included in the core.
API comes with most required elements built-in.
Asynchronous I/O: due to using Akka HTTP as its web server, Play can service long requests asynchronously rather than tying up HTTP threads doing business logic like Java EE frameworks that don't use the asynchronous support offered by Servlet 3.0.
Modular architecture: like Ruby on Rails and Django, Play comes with the concept of modules.
Native Scala support: Play 2 uses Scala internally but also exposes both a Scala API, and a Java API that is deliberately slightly different to fit in with Java conventions, and Play is completely interoperable with Java.
Testing framework
Play provides integration with test frameworks for unit testing and functional testing for both Scala and Java applications. For Scala, integrations with Scalatest and Specs2 are provided out-of-the-box and, for Java, there is integration with JUnit 4. For both languages, there is also integration with Selenium (software). SBT is used to run the tests and also to generate reports. It is also possible to use code coverage tools by using sbt plugins such as scoverage or jacoco4sbt.
Usage
In August 2011, Heroku announced native support for Play applications on its cloud computing platform. This followed module-based support for Play 1.0 on Google App Engine, and documented support on Amazon Web Services.
At one point, the Play Framework was the most popular Scala project on GitHub.
In July 2015, Play was the third most popular Scala library on GitHub, based on 64,562 libraries; 21.3% of the top Scala projects used Play as their framework of choice.
Corporate users of the Play Framework have included Coursera, HuffPost, Hootsuite, Janrain, LinkedIn, and Connectifier.
See also
Akka (toolkit)
Ebean
Netty (software)
Scala (programming language)
Literature
Wayne Ellis (2010). Introducing the Play Framework.
Alexander Reelsen (2011). Play Framework Cookbook. Packt Publishing. .
References
External links
Play Framework home page
Java platform
Web frameworks
Free software programmed in Java (programming language)
Free software programmed in Scala
2007 software
Software using the Apache license | Play Framework | Technology | 836 |
40,289,067 | https://en.wikipedia.org/wiki/FlyExpress | FlyExpress is a free database that collects the expression patterns of Drosophila melanogaster in embryogenesis via a series of images submitted from BDGP, Fly-FISH and publications from other researchers, containing over 100,000 images of over 4,000 genes. It is currently available freely both online and as an iPhone application.
History
FlyExpress was developed by the Center of Evolutionary Medicine and Informatics of the Biodesign Institute of Arizona State University and was released in 2011 with funding and support from an NIH grant.
Features
The primary images available in FlyExpress are GEMs, or Genome-wide Expression Maps, that display the spatial patterns of genes during a state of fly development via heat maps. These not only give a visual clue as to where these genes are expressed, but also how many of them are expressed in the same vicinity, as the darker regions of the heat map correlate to a higher expressed gene count.
Upon searching for images (with a gene name, PubMed ID, image ID or certain keywords), a series of GEMs featuring the gene is displayed from the two databases, in descending order of developmental stage and view of the embryo. Various details, such as the source and experimental protocol of the embryo, are also shown. All expression patterns reflect the wild-type allele for the gene.
Categorizing genes as images does not only allow for visualization in understanding the influence of the gene, but also contrasting the patterns between multiple different genes. FlyExpress has a primary function of searching between images for similar expression patterns using a specified spatial profile via the Basic Expression Search Tool for Images (or BESTi), bringing up all images and genes that fit the criteria of the pattern. Certain classifications and locations of expression of genes can also be searched within the BDGP and Fly-FISH databases to gain similar results without specifying a particular gene to begin with. Searching between GEMs tends to give better results than searching genes through text criteria, which may not always contain all-encompassing text annotations and labeling and thus list a fewer number of overlapping genes upon searches.
References
External links
FlyExpress Homepage
Insect developmental biology
Model organism databases
Bioinformatics software | FlyExpress | Biology | 448 |
51,517,341 | https://en.wikipedia.org/wiki/NGC%20190 | NGC 190 is a pair of interacting galaxies located in the constellation Pisces. The system is the result of a collision between the two galaxies around 30 million years ago. It was discovered in 1894.
References
External links
0190
Interacting galaxies
Pisces (constellation) | NGC 190 | Astronomy | 54 |
22,448,016 | https://en.wikipedia.org/wiki/Hebeloma%20leucosarx | Hebeloma leucosarx is a species of mushroom in the family Hymenogastraceae.
H. leucosarx is found across a wide spectrum of habitats, from dry to wet and from soil that is calcareous and humus-poor to acidic and humus-rich. It is mostly found under deciduous trees but can occasionally be spotted under coniferous trees.
References
leucosarx
Fungi of Europe
Fungus species | Hebeloma leucosarx | Biology | 94 |
1,857,196 | https://en.wikipedia.org/wiki/AF-heap | In computer science, the AF-heap is a type of priority queue for integer data, an extension of the fusion tree using an atomic heap proposed by M. L. Fredman and D. E. Willard.
Using an AF-heap, it is possible to perform m insert or decrease-key operations and n delete-min operations on machine-integer keys in time O(m + n log n / log log n). This allows Dijkstra's algorithm to be performed in the same O(m + n log n / log log n) time bound on graphs with m edges and n vertices, and leads to a linear-time algorithm for minimum spanning trees, with the assumption for both problems that the edge weights of the input graph are machine integers in the transdichotomous model.
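For context, the sketch below shows where these operations arise in Dijkstra's algorithm. It uses Python's heapq module, an ordinary binary heap with stale entries skipped in place of a true decrease-key, so it only illustrates the operation pattern, not the AF-heap's improved bound.

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from `source`.
    adj: dict mapping a vertex to a list of (neighbour, non-negative weight)."""
    dist = {source: 0}
    pq = [(0, source)]                       # priority queue of tentative distances
    while pq:
        d, u = heapq.heappop(pq)             # delete-min
        if d > dist.get(u, float("inf")):    # stale entry: skip (lazy decrease-key)
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # insert / decrease-key
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```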
See also
Fusion tree
References
Heaps (data structures)
Priority queues | AF-heap | Mathematics | 151 |
75,298,118 | https://en.wikipedia.org/wiki/Venkataraman%20Thangadurai | Venkataraman Thangadurai is a scientist recognized for his work in solid state ionics and chemistry. He is a professor at the University of St Andrews, specializing in Chemistry.
Early life and education
Thangadurai, who was born in India, earned his Chemistry degrees from various institutions in Tamil Nadu and a Ph.D. from the Indian Institute of Science in 1999. He pursued postdoctoral studies in Germany, receiving a fellowship from the Alexander von Humboldt Foundation and a Habilitation degree in 2004 from the University of Kiel.
Career and research
Thangadurai works on creating new materials for energy storage and conversion, particularly in solid oxide fuel cells, batteries and gas separation membranes. His research focuses on ion transport in solid electrolytes and high-performance materials for energy applications. Thangadurai has worked on advancing Li-based garnets for all-solid-state lithium metal batteries. He co-founded Ion Storage Systems and Superionics Inc. He has submitted 13 patent applications and has 443 publications. He moved to the University of St Andrews in early July 2024.
Awards and recognition
HWK-Fellowship, Hanse-Wissenschaftskolleg, Delmenhorst, Germany (2021)
Research Excellence in Materials Chemistry, Chemical Institute of Canada (2021)
Parex Innovation Fellow, University of Calgary (2020)
Peak Scholar, University of Calgary (2019)
Keith Laidler Award, Canadian Society for Chemistry, The Chemical Institute of Canada (2016)
Outstanding Invention of 2013, University of Maryland College Park, USA (2013)
German Academic Exchange Service (DAAD) Guest Professor, Faculty of Engineering, University of Kiel, Germany (2005)
Alexander von Humboldt (AvH) PDF scholarship, Chair for Sensors and Solid-State Ionics, Faculty of Engineering, University of Kiel, Germany (2002)
Selected publications
S. Sarkar, B. Chen, C. Zhou, S.N. Shirazi, F. Langer, J. Schwenzel, and V. Thangadurai,* “Synergistic Approach toward Developing Highly Compatible Garnet-Liquid Electrolyte Interphase in Hybrid Solid-State Lithium-Metal Batteries,” Adv. Energy Mater., 13 (8), 2203897 (14 pages) (2023) (cover page)
T. Boteju, A. M. Abraham, S. Ponnurangam* and V. Thangadurai,* “Theoretical Study on the Role of Solvents in Lithium Polysulfide Anchoring on Vanadium Disulfide Facets for Lithium-Sulfur Batteries,” J. Phys. Chem. C 127 (9), 4416–4424 (2023).
A. Sivakumaran, A.J. Samson, and V. Thangadurai,* “Progress in Sodium Silicates for All-Solid-State Sodium Batteries — a Review,” Energy Technol. 11, 2201323 (18 pages) (2023).
A. M. Abraham, T. Boteju, S. Ponnurangam* and V. Thangadurai,* “A Global Design Principle for Polysulfide Electrocatalysis in Lithium-Sulfur Batteries – A Computational Perspective,” Battery Energy, 20220003 (11 pages), (2022).
V. Thangadurai,* and B. Chen, “Solid Li- and Na-Ion Electrolytes for Next Generation Rechargeable Batteries,” Chem. Mater., 34, 6637–6658 (2022) (Invited John Goodenough at 100 issue).
A. Ndubuisi, S. Abouali, K. Singh, V. Thangadurai,* “Recent Advances, Practical Challenges and Perspectives of Intermediate Temperature Solid Oxide Fuel Cell Cathodes,” J. Mater. Chem. A, 10, 2196-2227 (2022) (Invited).
References
Living people
Year of birth missing (living people)
Indian emigrants to Canada
Solid state chemists
Canadian chemists
21st-century Indian chemists
People from Tamil Nadu
Academic staff of the University of Calgary
Fellows of the Royal Society of Canada
Indian Institute of Science alumni
University of Kiel alumni | Venkataraman Thangadurai | Chemistry | 886 |
2,907,947 | https://en.wikipedia.org/wiki/Computational%20Chemistry%20List | The Computational Chemistry List (CCL) was established on January 11, 1991, as an independent electronic forum for chemistry researchers and educators from around the world. According to the forum's web site, it is estimated that more than 3000 members in more than 50 countries are reading CCL messages regularly, and the discussions cover all aspects of computational chemistry. The list is widely supported and used by the computational chemistry community.
History
The CCL is a mailing list, portal, and community which brings together people interested in computational chemistry. It was formed in 1991 by initiative of Jan Labanowski, at the time a computational chemistry specialist in the Ohio Supercomputing Center, as a mailing list for the hundred persons who had participated in a workshop he had organized together with one of the founding fathers of the field, Charles Bender. The purpose of the list as first created was to continue the lively discussions and encounters that had taken place in the workshop and help grow the field which was accelerating due to the recent availability of maturing quantum, classical and semi-empirical methods, of supercomputers and their power, of personal computers and their flexibility and interoperability, of promising software packages bound to occupy a market niche in the chemical and pharmaceutical industry.
The list has undergone many transformations and survived through them. It went from the original hundred to several thousand members; from a strict, ASCII-only mailing list to a combination of mailing list, online forum, and portal containing document and software repositories, event announcements, and other communication resources for practitioners.
The CCL is thus a typical scholarly mailing list of its time, as many others flourished in the eighties and nineties in various scientific fields (though a majority of them eventually withered). The genealogy of mailing lists as a communication tool between scientists can be traced back to the times of the fledgling Arpanet. The aim of the computer scientists involved in that project was to develop protocols for communication between computers. In so doing, they also built the first tools of human computer-mediated communication. Broadly speaking, scholarly mailing lists can even be seen as the modern version of the salons of the Enlightenment, designed by scholars for scholars.
Pisanty and Labanowski had led a survey of membership from which some of the above conclusions were extracted, and Labanowski published about the difficult role of moderator on the CCL. Hocquet and Wieber have more recently discussed the history and activity of the CCL. They have for example explored aspects such as the performative functions of language in the list on the one hand and discourse structure on the other.
Community
At its core the CCL is still a mailing list, a quaint survivor from the early Internet era, in which discussions take place about general principles, practical interpretations of theory, and computational methods. The membership of the list has evolved but continues to hold a mix of theoreticians, computationalists, and experimentalists; highly experienced practitioners, well known in the field and sometimes founders of it, are blended with researchers from communities that are not mainstream for high-powered computational chemistry: young researchers, researchers in industrial laboratories where they may be the only specialist, and researchers in developing countries.
The language used in Internet fora is neither written nor oral: it has been described as quasi-orality. In scientific mailing lists, it consists in a mix of scholar talk, informal talk and technical talk. In the CCL, this pidgin is perfectly suited to the diversity of topics of concerns to a professionally diverse community. It is notable that most conversations revolve around software as a topic, at the intersection of theories/modelling methods/publications/coding/software support and maintenance/licensing/hardware benchmarking/sales.
Topics
Some factors that have made the CCL so durable and resilient through changes are that the communication style is horizontal; discussions are frank and sometimes sharp but hardly ever hostile; the community is strongly self-regulated for productive discussions; the style and contents imprinted by the founder, Jan Labanowski; the perception of all members that the list provides value; the diversity, broad but not wild, of discussion subjects; and a level of tolerance by members to other members, especially to those less experienced who may inadvertently test established but unwritten rules, restart old discussions or thread well-trodden paths, or become shrill too fast.
The CCL is thus also an exceptional mailing list. Its transparency (the archive is open to all), its inclusivity (a poster does not need to subscribe to send a message) and its ethos, shaped by a mix of terms of service, moderation practices and the moderator's personality, are key to its persistence. It is particularly important that its definition of topicality allows the theoretical parts of daily practice to be blended with the technical parts, and even the commercial ones (something unique to the CCL). The CCL as a tool for the community is thus not only a way to “educate and get educated” (in Labanowski's words), but also an arena where a vast diversity of topics can be debated.
The “threaded conversation” structure (where the header of a first post defines the topic of a series of answers thus constituting a thread) is a typical and ubiquitous structure of discourse within lists and fora of the Internet. It is pivotal to the structure and topicality of debates within the CCL as an arena. The flame wars (as the liveliest episodes) give valuable and unique information to historians to comprehend what is at stake in the computational chemistry community.
Archive
The list also hosts many resources on computational chemistry. For example, it hosted a pre-publication version of Computational Chemistry by David Young.
Online social media comes to mind not only as an analogy for what the CCL achieves in its combination of mailing list, online forum, and portal, but also as a threat to the CCL in the long term. A number of groups exist on Facebook and LinkedIn, and increasingly also in the online platforms of learned societies such as the Institute of Electrical and Electronics Engineers (IEEE), in which discussions on subjects similar to the CCL's take place. They may present less friction to membership, provide richer interactions, and supply software, datasets, literature, and other resources in formats with which younger researchers and students in particular are familiar. However, they have not caught on enough to displace the CCL, in part because some of the most valued members of the CCL have not migrated to those platforms, either for professional purposes only, or at all. The most valuable resource that keeps the CCL together, in this view, is its own community.
It is a general trend in scholarly communities to give up on mailing lists (as a tool created, designed and maintained by scholars for scholars), like the CCL or the other chemistry-related CHMINF-L list, and surrender to social media (as services that extract marketing value and dispossess scholars of their tools): the CCL is a place of resistance in this respect. There are a few groups and discussions on computational chemistry on Reddit and Facebook, but none achieves the function and reach of the CCL; the Reddit discussion thread points to the CCL.
Finally, from the historian's point of view, the issue of the preservation of CCL heritage (and scholar fora heritage in general) is essential. Not only the text of the corpus of messages has to be perennially archived, but also their related metadata, timestamps, headers that define topics, etc. Mailing lists archives are a unique opportunity for historians to explore interactions, debates, even tensions among scientists that reveal a lot about scientific communities: they constitute an important alternative to more official sources such as published papers.
References
Computational chemistry
Electronic mailing lists
Internet forums | Computational Chemistry List | Chemistry | 1,614 |
2,294,750 | https://en.wikipedia.org/wiki/Tule%20fog | Tule fog () is a thick ground fog that settles in the San Joaquin Valley and Sacramento Valley areas of California's Central Valley. Tule fog forms from late fall through early spring (California's winter season) after the first significant rainfall. The official time frame for tule fog to form is from November 1 to March 31. This phenomenon is named after the tule grass wetlands (tulares) of the Central Valley. As of 2005, tule fog was the leading cause of weather-related accidents in California.
Formation
Tule fog is a radiation fog, which condenses when there is a high relative humidity (typically after a heavy rain), calm winds, and rapid cooling during the night. The nights are longer in the winter months, which allows an extended period of ground cooling, and thereby a pronounced temperature inversion at a low altitude.
In California, tule fog can extend from Bakersfield to Red Bluff. Tule fog occasionally drifts as far west as the San Francisco Bay Area via the Carquinez Strait, and can even drift westward out through the Golden Gate, opposite to the usual course of the coastal fog.
Tule fog is characteristically confined mainly to the Central Valley due to the mountain ranges surrounding it. Because of the density of the cold air in the winter, winds are not able to dislodge the fog and the high pressure of the warmer air above the mountaintops presses down on the cold air trapped in the valley, resulting in a dense, immobile fog that can last for days or at times for weeks undisturbed. Tule fog often contains light drizzle or freezing drizzle where temperatures are sufficiently cold.
Tule fog is a low-lying cloud and can be seen from above by driving up into the foothills of the Sierra Nevada to the east or the Coast Ranges to the west. Above the cold, foggy layer, the air is typically mild, dry and clear. Once tule fog has formed, turbulent air is necessary to break through the temperature inversion layer. Daytime heating (cloud-penetrating visible light wavelengths transformed to infrared by the ground) sometimes evaporates the fog in patches, although the air remains chilly and hazy below the inversion and fog reforms soon after sunset. Tule fog usually remains longer in the southern and eastern parts of the Central Valley, because winter storms with strong winds and turbulent air affect the northern Central Valley more often.
Visibility
Visibility in tule fog is usually less than an eighth of a mile (about 600 ft or 200 m). Visibility can vary rapidly; within only a few feet, it can drop to near zero.
The variability in visibility is the cause of many chain-reaction pile-ups on roads and freeways. In one such accident on Interstate 5 near Elk Grove south of Sacramento, 25 cars and nine big-rig trucks collided inside a fog bank on December 12, 1997. Five people died and 28 were injured. It took 26 hours to clear away all the wreckage and reopen the freeway. In February 2002, two people were killed in an 80-plus-car pile-up on State Route 99 between Kingsburg and Selma. On the morning of November 3, 2007, heavy tule fog caused a massive pile-up that included 108 passenger vehicles and 18 big-rig trucks on northbound State Route 99 between Fowler and Fresno. Visibility was very low at the time of the accident. There were two fatalities and 39 injuries in the crash.
Freezing drizzle and black ice
Tule fog events are often accompanied by drizzle. Sunlight often cannot sufficiently penetrate the fog layer, keeping temperatures below freezing. Episodes of freezing drizzle occasionally accompany tule fog events during winter. Such events can leave an invisible glaze of black ice on roadways, making travel especially treacherous.
Composition
Besides water droplets, the composition of tule fog in the San Joaquin and Sacramento valleys includes ammonia, nitrate and sulfate concentrations. Furthermore, ammonia is the most commonly found single ion and usually is measured to be more than half of the measured ions in the fog. Depending on the region within the California Central Valley, the composition of tule fog can vary in element or ion concentrations.
As of 2014, it has been found that the amount of tule fog in the Central Valley decreased between the period first studied, 1981–1999, and the period 2001–2012. The frequency of tule fog occurrence correlates with the level of air pollution in California. Minimum temperature, dew point depression (DPD, the difference between ambient temperature and dew point), precipitation, and wind speed are the four major factors that affect fog formation. Minimum temperature matters because tule fog is an extreme form of radiation fog that most often forms after sunset due to rapid surface radiative cooling. Low DPD is consistent with more frequent periods of fog. Precipitation shows some correlation with increased fog, but the relationship is not direct, as precipitation totals in some years are inversely correlated with fog frequency. Wind speed has a small but statistically important impact on fog frequency: lower wind speeds are correlated with higher fog frequency.
Winter causes the optimal meteorological conditions for fog formation due to periodic storms followed by extended periods of high pressure throughout California.
References
External links
Page on tule fog from the National Oceanic and Atmospheric Administration (NOAA)
University Corporation for Atmospheric Research: Forecasting Radiation Fog
Davis, CA Tule fog — featured in Orion Magazine.
Fog
Natural history of the Central Valley (California)
Climate of California
Weather events in the United States | Tule fog | Physics | 1,124 |
10,357,796 | https://en.wikipedia.org/wiki/Star%20lore | Star lore or starlore is the creating and cherishing of mythical stories about the stars and star patterns (constellations and asterisms); that is, folklore based upon the stars and star patterns. Using the stars to explain religious doctrines or actual events in history is also defined as star lore. Star lore has a very long history; it has been practiced by nearly every culture recorded in history, dating as far back as 5,500 years ago. It was practiced by prehistoric cultures of the Paleolithic and Neolithic periods as well.
Orion and Scorpius
One example of star lore is the inventing of the story of Orion the Hunter and the Scorpius the Scorpion by the ancient Greeks. This ancient culture saw a very startling pattern of bright stars in the winter sky that, from their point of view, resembled a mighty hunter, which they named Orion. During the summer, they saw another startling pattern of bright stars that resembled a scorpion. They noticed that the constellations of Orion and the scorpion were positioned at opposite ends of the sky and were never seen in the sky simultaneously. As one constellation rose above the eastern horizon, the other was setting below the western horizon, and when either one was high in the sky, the other was completely absent. The ancient Greeks felt compelled to explain this phenomenon by composing a story or myth based on the two constellations.
The story was that Orion was a mighty and proud hunter who was stung by a scorpion. Orion died of the scorpion's sting and was placed among the stars by the gods. Although the scorpion was destroyed by the gods in vengeance for killing Orion, it was also placed among the stars. In order to prevent Orion and the scorpion from quarreling and fighting with each other in the sky, the gods placed Orion and the scorpion at opposite ends of the sky, and in opposite seasons, so that both of them can never be seen in the sky at the same time.
Andromeda
Another example of star lore is the story behind the constellation Andromeda, also known as "the chained woman". Andromeda was the daughter of the king and queen of Ethiopia, King Cepheus and Cassiopeia. The story goes that because Cassiopeia bragged so much of Andromeda's beauty to the Nereids, daughters of Poseidon, that they complained to their father, who sent a sea monster to destroy the coast of Ethiopia. Cepheus consulted an oracle for assistance and learned that the only way to save his lands was to sacrifice his daughter to Poseidon's monster.
Andromeda was chained to a rock and left for the sea monster. Perseus, the hero of the story, who had just killed the Gorgon Medusa, found Andromeda in her distress, and immediately the two fell in love. Perseus asked for her name and refused to leave until he knew it, talking to her until she gave in. Andromeda told him her name, her country, and the reason for her imprisonment on the rock. He then consulted with Cepheus and Cassiopeia, and they decided that if Perseus rescued Andromeda from the sea monster, he could marry her. The story of how he then defeats the monster varies. Ovid describes his killing of the monster as a drawn-out, bloody battle. Other sources say that Perseus killed the sea monster with the aid of Medusa's head, turning the monster to stone. Andromeda and Perseus were married soon after, despite her already being promised to her uncle, Phineus. At the wedding, Phineus and Perseus got into an altercation, and Perseus turned Phineus to stone using Medusa's head.
The constellation is said to have astrological influences as well. It is said that any man born as Andromeda rises from the sea will be one without mercy; he will be emotionally unmoved even in the presence of grieving parents. The constellation is also said to influence the birth of the executioner, a man who will kill swiftly for money and kill willingly. Sources describe a man born with the rise of Andromeda as one who, faced with Andromeda chained to her rock just as Perseus was, would feel nothing and would not fall in love with the girl as Perseus did.
Draco
Draco is another example of star lore. In Greek mythology, the constellation represents Ladon, the dragon that guarded the golden apples in the garden of the Hesperides. The tree was a wedding gift to Hera when she and Zeus were married, and she planted it on Mount Atlas. Hera tasked the Hesperides with guarding the tree and placed Ladon around it as well to ensure that the Hesperides would not steal the apples. In some sources, Ladon is called the child of Typhon and Echidna (who was half woman and half viper) and is said to have had hundreds of heads. In other versions of Ladon's story, the number of heads is not mentioned at all and he is described as the offspring of Ceto and Phorcys, two sea deities.
Contrastingly, in Roman mythology, Draco was one of the Titans who waged war on the Olympic gods for ten years. He was killed by Minerva in the battle and thrown into the sky.
See also
Dog days
References
Star Lore: Prehistoric Skywriting
Star Tales – Ian Ridpath
Andromeda: the Chained Woman
Andromeda
Draco Constellation
Astronomical myths
Ancient astronomy
Stars
Folklore | Star lore | Astronomy | 1,127 |
7,246,375 | https://en.wikipedia.org/wiki/Covalent%20radius%20of%20fluorine | The covalent radius of fluorine is a measure of the size of a fluorine atom; it is approximated at about 60 picometres.
Since fluorine is a relatively small atom with a large electronegativity, its covalent radius is difficult to evaluate. The covalent radius is defined as half the bond lengths between two neutral atoms of the same kind connected with a single bond. By this definition, the covalent radius of F is 71 pm. However, the F-F bond in F2 is abnormally weak and long. Besides, almost all bonds to fluorine are highly polar because of its large electronegativity, so the use of a covalent radius to predict the length of such a bond is inadequate and the bond lengths calculated from these radii are almost always longer than the experimental values.
Bonds to fluorine have considerable ionic character, a result of its small atomic radius and large electronegativity. Therefore, the bond length of F is influenced by its ionic radius, the size of ions in an ionic crystal, which is about 133 pm for fluoride ions. The ionic radius of fluoride is much larger than its covalent radius. When F becomes F−, it gains one electron but has the same number of protons, meaning the repulsion of the electrons is stronger, and the radius is larger.
Brockway
The first attempt to determine the covalent radius of fluorine was made in 1937, by Brockway. Brockway prepared a vapour of F2 molecules by means of the electrolysis of potassium bifluoride (KHF2) in a fluorine generator, which was constructed of Monel metal. The product was then passed over potassium fluoride to remove any hydrogen fluoride (HF) and condensed into a liquid. A sample was collected by evaporating the condensed liquid into a Pyrex flask. Finally, using electron diffraction, it was determined that the bond length between the two fluorine atoms was about 145 pm. He therefore assumed that the covalent radius of fluorine was half this value, or 73 pm. This value, however, is inaccurate due to the large electronegativity and small radius of the fluorine atom.
Schomaker and Stevenson
In 1941, Schomaker and Stevenson proposed an empirical equation to determine the length of a bond based on the difference in electronegativities of the two bonded atoms.
dAB = rA + rB – C|xA – xB|
(where dAB is the predicted bond length or distance between two atoms, rA and rB are the covalent radii (in picometers) of the two atoms, and |xA – xB| is the absolute difference in the electronegativities of elements A and B. C is a constant which Schomaker and Stevenson took as 9 pm.)
This equation predicts a bond length that is closer to the experimental value. Its major weakness is its use of a covalent radius for fluorine that is known to be too large.
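As a rough numerical illustration, the sketch below applies the correction to a C–F bond, assuming the commonly quoted Pauling electronegativities (C ≈ 2.55, F ≈ 3.98) and single-bond covalent radii (C ≈ 77 pm, F ≈ 71 pm); the corrected prediction is noticeably shorter than the simple sum of radii, consistent with the observation that uncorrected radii overestimate bond lengths to fluorine.

```python
# Minimal sketch of the Schomaker-Stevenson correction. Input values are
# illustrative: Pauling electronegativities and commonly quoted single-bond
# covalent radii, with C = 9 pm as in the original formulation.

def schomaker_stevenson(r_a_pm, r_b_pm, x_a, x_b, c_pm=9.0):
    """Predicted A-B bond length in picometres."""
    return r_a_pm + r_b_pm - c_pm * abs(x_a - x_b)

r_C, r_F = 77.0, 71.0        # covalent radii in pm
x_C, x_F = 2.55, 3.98        # Pauling electronegativities

print(f"simple radius sum  : {r_C + r_F:.1f} pm")                                # 148.0 pm
print(f"Schomaker-Stevenson: {schomaker_stevenson(r_C, r_F, x_C, x_F):.1f} pm")  # ~135.1 pm
```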
Pauling
In 1960, Linus Pauling proposed an additional effect called "back bonding" to account for the smaller experimental values compared to the theory. His model predicts that F donates electrons into a vacant atomic orbital in the atom it is bonded to, giving the bonds a certain amount of sigma bond character. In addition, the fluorine atom also receives a certain amount of pi electron density back from the central atom giving rise to double bond character through (p-p)π or (p-d)π "back bonding". Thus, this model suggests that the observed shortening of the lengths of bonds is due to these double bond characteristics.
Reed and Schleyer
Reed and Schleyer, who were skeptical of Pauling's proposition, suggested another model in 1990. They determined that there was no significant back-bonding, but instead proposed that there is extra pi bonding, which arose from the donation of ligand lone pairs into X-F orbitals. Therefore, Reed and Schleyer believed that the observed shortening of bond lengths in fluorine molecules was a direct result of the extra pi bonding originating from the ligand, which brought the atoms closer together.
Ronald Gillespie
In 1992, Ronald Gillespie and Edward A. Robinson suggested that the value of 71 pm was too large because of the unusual weakness of the F-F bond in F2. Therefore, they proposed using the value of 54 pm for the covalent radius of fluorine. However, observed bonds deviate from this predicted value in two ways: some are longer and some are shorter than predicted.
An XFn molecule will have a bond length longer than the predicted value whenever there are one or more lone pairs in a filled valence shell. For example, BrF5 is a molecule where the experimental bond length is longer than the predicted value of 54 pm.
In molecules in which a central atom does not complete the octet rule (has less than the maximum number of electron pairs), then it gives rise to partial double bonding characteristics and thus, making the bonds shorter than 54 pm. For example, the short bond length of BF3 can be attributed to the delocalization of the fluorine lone pairs.
In 1997, Gillespie et al. found that his original prediction was too low, and that the covalent radius of fluorine is about 60 pm. Using the Gaussian 94 package, they calculated the wave function and electron density distribution for several fluorine molecules. Contour plots of the electron density distribution were then drawn and used to evaluate the length of bonds from fluorine to other atoms. The authors found that the length of X-F bonds decreases as the product of the charges on X and F increases. Furthermore, the X-F bond length decreases with a decreasing coordination number n. The number of fluorine atoms packed around the central atom is an important factor for calculating the bond length. Also, the smaller the bond angle (<FXF) between F and the central atom, the longer the bond to fluorine. Finally, the most accurate value for the covalent radius of fluorine was found by plotting the covalent radii against electronegativity. From this, they discovered that the Schomaker-Stevenson and Pauling assumptions were too high, and their previous estimate was too low, resulting in a final value of 60 pm for the covalent radius of fluorine.
Pekka Pyykkö
Theoretical chemist Pekka Pyykkö estimated that the covalent radius for a fluorine atom to be 64 pm in a single bond, 59 pm and 53 pm in molecules where the bond to the fluorine atom has a double bond and triple bond character, respectively.
References
Fluorine
Atomic radius | Covalent radius of fluorine | Physics | 1,417 |
3,112,875 | https://en.wikipedia.org/wiki/Computational%20immunology | In academia, computational immunology is a field of science that encompasses high-throughput genomic and bioinformatics approaches to immunology. The field's main aim is to convert immunological data into computational problems, solve these problems using mathematical and computational approaches and then convert these results into immunologically meaningful interpretations.
Introduction
The immune system is a complex system of the human body and understanding it is one of the most challenging topics in biology. Immunology research is important for understanding the mechanisms underlying the body's defenses, for developing drugs for immunological diseases and for maintaining health. Recent advances in genomic and proteomic technologies have transformed immunology research drastically. Sequencing of the human and other model organism genomes has produced increasingly large volumes of data relevant to immunology research, and at the same time huge amounts of functional and clinical data are being reported in the scientific literature and stored in clinical records. Recent advances in bioinformatics or computational biology have helped in understanding and organizing these large-scale data, giving rise to a new area called computational immunology or immunoinformatics.
Computational immunology is a branch of bioinformatics and it is based on similar concepts and tools, such as sequence alignment and protein structure prediction tools. Immunomics is a discipline like genomics and proteomics. It is a science, which specifically combines immunology with computer science, mathematics, chemistry, and biochemistry for large-scale analysis of immune system functions. It aims to study the complex protein–protein interactions and networks and allows a better understanding of immune responses and their role during normal, diseased and reconstitution states. Computational immunology is a part of immunomics, which is focused on analyzing large-scale experimental data.
History
Computational immunology began over 90 years ago with the theoretic modeling of malaria epidemiology. At that time, the emphasis was on the use of mathematics to guide the study of disease transmission. Since then, the field has expanded to cover all other aspects of immune system processes and diseases.
Immunological database
Following the recent advances in sequencing and proteomics technology, there has been a many-fold increase in the generation of molecular and immunological data. The data are so diverse that they can be categorized in different databases according to their use in research. To date, a total of 31 different immunological databases are noted in the Nucleic Acids Research (NAR) Database Collection, along with a number of other immune-related databases.
Online resources for allergy information are also available on http://www.allergen.org. Such data is valuable for investigation of cross-reactivity between known allergens and analysis of potential allergenicity in proteins. The Structural Database of Allergen Proteins (SDAP) stores information of allergenic proteins. The Food Allergy Research and Resource Program (FARRP) Protein Allergen-Online Database contains sequences of known and putative allergens derived from scientific literature and public databases. Allergome emphasizes the annotation of allergens that result in an IgE-mediated disease.
Tools
A variety of computational, mathematical and statistical methods are available and reported. These tools are helpful for collection, analysis, and interpretation of immunological data. They include text mining, information management, sequence analysis, analysis of molecular interactions, and mathematical models that enable advanced simulations of immune system and immunological processes.
Attempts are being made for the extraction of interesting and complex patterns from non-structured text documents in the immunological domain, such as categorization of allergen cross-reactivity information, identification of cancer-associated gene variants and the classification of immune epitopes.
Immunoinformatics uses basic bioinformatics tools such as ClustalW, BLAST, and TreeView, as well as specialized immunoinformatics tools such as EpiMatrix, IMGT/V-QUEST for IG and TR sequence analysis, and IMGT/Collier-de-Perles and IMGT/StructuralQuery for IG variable domain structure analysis. Methods that rely on sequence comparison are diverse and have been applied to analyze HLA sequence conservation, help verify the origins of human immunodeficiency virus (HIV) sequences, and construct homology models for the analysis of hepatitis B virus polymerase resistance to lamivudine and emtricitabine.
There are also computational models which focus on protein–protein interactions and networks, as well as tools used for T and B cell epitope mapping, proteasomal cleavage site prediction, and TAP–peptide binding prediction. Experimental data are very important for designing and validating models that predict various molecular targets. Computational immunology is thus an interplay between experimental data and mathematically designed computational tools.
Applications
Allergies
Allergies, while a critical subject of immunology, also vary considerably among individuals and sometimes even among genetically similar individuals. The assessment of protein allergenic potential focuses on three main aspects: (i) immunogenicity; (ii) cross-reactivity; and (iii) clinical symptoms. Immunogenicity is due to responses of an IgE antibody-producing B cell and/or of a T cell to a particular allergen. Therefore, immunogenicity studies focus mainly on identifying recognition sites of B-cells and T-cells for allergens. The three-dimensional structural properties of allergens control their allergenicity.
Immunoinformatics tools can be used to predict protein allergenicity and will become increasingly important in the screening of novel foods before their wide-scale release for human use. Thus, there are major efforts under way to build reliable, broad-based allergy databases and combine these with well-validated prediction tools in order to enable the identification of potential allergens in genetically modified drugs and foods. Though the developments are at an early stage, the World Health Organization and the Food and Agriculture Organization have proposed guidelines for evaluating the allergenicity of genetically modified foods. According to the Codex Alimentarius, a protein is potentially allergenic if it shares an identity of ≥6 contiguous amino acids or ≥35% sequence similarity over an 80 amino acid window with a known allergen. Though these rules exist, their inherent limitations have started to become apparent and exceptions to the rules have been well reported.
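A minimal sketch of these two screening criteria is given below; real screens use alignment tools such as FASTA, whereas the simple ungapped comparison and helper names here are hypothetical and for illustration only.

```python
# Hypothetical illustration of the two Codex Alimentarius criteria described
# above: (i) an identical stretch of >= 6 contiguous amino acids shared with a
# known allergen, or (ii) >= 35% identity over an 80-amino-acid window.
# Real tools use proper alignments; this ungapped comparison is a sketch only.

def shares_6mer(query: str, allergen: str) -> bool:
    """Criterion (i): any identical 6-residue stretch shared with the allergen."""
    kmers = {allergen[i:i + 6] for i in range(len(allergen) - 5)}
    return any(query[i:i + 6] in kmers for i in range(len(query) - 5))

def max_window_identity(query: str, allergen: str, window: int = 80) -> float:
    """Criterion (ii): best percent identity over ungapped 80-residue windows."""
    best = 0.0
    for i in range(max(1, len(query) - window + 1)):
        q = query[i:i + window]
        for j in range(max(1, len(allergen) - window + 1)):
            a = allergen[j:j + window]
            matches = sum(x == y for x, y in zip(q, a))
            best = max(best, 100.0 * matches / window)
    return best

def potentially_allergenic(query: str, allergen: str) -> bool:
    return shares_6mer(query, allergen) or max_window_identity(query, allergen) >= 35.0
```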
Infectious diseases and host responses
In the study of infectious diseases and host responses, mathematical and computational models are a great help. These models have been very useful in characterizing the behavior and spread of infectious disease, by capturing the dynamics of the pathogen in the host and the mechanisms of host factors which aid pathogen persistence. Examples include Plasmodium falciparum and nematode infection in ruminants.
Much has been done in understanding immune responses to various pathogens by integrating genomics and proteomics with bioinformatics strategies. Many exciting developments in large-scale screening of pathogens are currently taking place. National Institute of Allergy and Infectious Diseases (NIAID) has initiated an endeavor for systematic mapping of B and T cell epitopes of category A-C pathogens. These pathogens include Bacillus anthracis (anthrax), Clostridium botulinum toxin (botulism), Variola major (smallpox), Francisella tularensis (tularemia), viral hemorrhagic fevers, Burkholderia pseudomallei, Staphylococcus enterotoxin B, yellow fever, influenza, rabies, Chikungunya virus etc. Rule-based systems have been reported for the automated extraction and curation of influenza A records.
Such work could lead to algorithms that help identify the conserved regions of pathogen sequences, which in turn would be useful for vaccine development and help limit the spread of infectious disease. Examples include a method for identification of vaccine targets from protein regions of conserved HLA binding and computational assessment of cross-reactivity of broadly neutralizing antibodies against viral pathogens. These examples illustrate the power of immunoinformatics applications to help solve complex problems in public health. Immunoinformatics could accelerate the discovery process dramatically and potentially shorten the time required for vaccine development. Immunoinformatics tools have been used to design vaccine candidates against SARS-CoV-2, Dengue virus and Leishmania.
Immune system function
Using these techniques it is possible to model the workings of the immune system. Modeling has been used to study T-cell-mediated suppression, peripheral lymphocyte migration, T-cell memory, tolerance, thymic function, and antibody networks. Models are helpful for predicting the dynamics of pathogen toxicity and T-cell memory in response to different stimuli. There are also several models which help in understanding the nature of specificity in the immune network and immunogenicity.
For example, modeling was useful for examining the functional relationship between TAP peptide transport and HLA class I antigen presentation. TAP is a transmembrane protein responsible for the transport of antigenic peptides into the endoplasmic reticulum, where MHC class I molecules can bind them and present them to T cells. As TAP does not bind all peptides equally, TAP-binding affinity could influence the ability of a particular peptide to gain access to the MHC class I pathway. An artificial neural network (ANN), a type of computational model, was used to study peptide binding to human TAP and its relationship with MHC class I binding. Using this method, the affinity of HLA-binding peptides for TAP was found to differ according to the HLA supertype concerned. This research could have important implications for the design of peptide-based immunotherapeutic drugs and vaccines. It shows the power of the modeling approach for understanding complex immune interactions.
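As an illustration of the kind of model involved, the sketch below trains a small feed-forward network on one-hot-encoded 9-mer peptides to predict a binding score. The encoding, architecture, and toy data are hypothetical stand-ins and do not reproduce the network or dataset used in the TAP study.

```python
# Hypothetical sketch of an ANN for peptide binding prediction: 9-mer peptides
# are one-hot encoded (9 positions x 20 amino acids) and fed to a small
# feed-forward regressor. The peptides and scores below are made-up examples.
import numpy as np
from sklearn.neural_network import MLPRegressor

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AA)}

def encode(peptide: str) -> np.ndarray:
    """One-hot encode a 9-mer into a flat vector of length 180."""
    vec = np.zeros((9, 20))
    for pos, aa in enumerate(peptide):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

# Toy training set of (peptide, relative binding score) pairs.
train = [("ALDFEQEMT", 0.9), ("GILGFVFTL", 0.7), ("KLGGALQAK", 0.3),
         ("SIINFEKLY", 0.6), ("AAAAAAAAA", 0.1), ("RRRRRRRRR", 0.2)]
X = np.array([encode(p) for p, _ in train])
y = np.array([score for _, score in train])

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(encode("ALDFEQEMT").reshape(1, -1)))
```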
There also exist methods which integrate peptide prediction tools with computer simulations that can provide detailed information on the immune response dynamics specific to a given pathogen's peptides.
Cancer Informatics
Cancer is the result of somatic mutations which provide cancer cells with a selective growth advantage. Identifying novel mutations has therefore become very important. Genomics and proteomics techniques are used worldwide to identify mutations related to each specific cancer and their treatments. Computational tools are used to predict the growth and surface antigens of cancerous cells. There are publications describing targeted approaches for assessing mutations and cancer risk. The algorithm CanPredict was used to indicate how closely a specific gene resembles known cancer-causing genes. Cancer immunology has been given so much importance that the data related to it are growing rapidly. Protein–protein interaction networks provide valuable information on tumorigenesis in humans. Cancer proteins exhibit a network topology that differs from that of normal proteins in the human interactome. Immunoinformatics has been useful in increasing the success of tumour vaccination. Recently, pioneering works have been conducted to analyse host immune system dynamics in response to artificial immunity induced by vaccination strategies. Other simulation tools use predicted cancer peptides to forecast immune-specific anticancer responses that depend on the specified HLA.
These resources are likely to grow significantly in the near future and immunoinformatics will be a major growth area in this domain.
See also
Computational biology
Immunology
Genetics
Cancer
Immunity
References
External links
Boston University Center for Computational Immunology
York Computational Immunology Lab
Immunoinformatics Immunological Software and Web Services from Gajendra Pal Singh Raghava group
VacTarBac A web based platform for predicted vaccine candidates against major pathogens.
Bioinformatics
Branches of immunology
Genomics
Computational fields of study | Computational immunology | Technology,Engineering,Biology | 2,421 |
72,767,636 | https://en.wikipedia.org/wiki/Apafant | Apafant (WEB-2086, LSM-2613) is a drug which acts as a potent and selective inhibitor of the phospholipid mediator platelet-activating factor (PAF). It was developed by structural modification of the thienotriazolodiazepine sedative drug brotizolam and demonstrated that PAF inhibitory actions could be separated from activity at the benzodiazepine receptor. Apafant was investigated for several applications involving inflammatory responses such as asthma and conjunctivitis but was never adopted for medical use, however it continues to be used in pharmacology research.
References
Thienotriazolodiazepines
2-Chlorophenyl compounds
4-Morpholinyl compounds
Amides | Apafant | Chemistry | 164 |
10,696,667 | https://en.wikipedia.org/wiki/W%20Cephei | W Cephei is a spectroscopic binary and variable star located in the constellation Cepheus. It is thought to be a member of the Cep OB1 stellar association at about 8,000 light years. The supergiant primary star is one of the largest known stars as well as one of the most luminous red supergiants.
Discovery
W Cephei was catalogued as BD+57°2568 in the Bonner Durchmusterung published in 1903, and HD 214369 in the Henry Draper Catalogue. It was discovered to be a variable star by T. H. E. C. Espin, in 1885. It was described in 1896 as a red star varying from magnitude 7.3 to 8.3.
In 1925, W Cep was included in a listing of Be stars. It was recognised as a cool star with spectral type Mep. It was classified as K0ep Ia from a 1949 spectrum, but also recognised to have a small hot companion, plus an unusual infrared excess. Ultraviolet spectra allowed absorption lines from the companion to be studied and it was given a spectral type of B0-1.
System
The W Cephei system contains a luminous red supergiant star with a non-supergiant early B companion. The star has unusual emission lines, including both permitted and forbidden FeII, produced by a circumstellar envelope containing dust and ionised gas. The two components have been resolved using speckle interferometry. An orbital period of 2,090 days has been proposed.
Variability
W Cephei varies in brightness from 7th to 9th magnitude. The General Catalogue of Variable Stars lists it as a semiregular variable with a period of 370 days, but later attempts to find a period have shown only random variations. It has also been proposed that eclipses occur.
References
External links
AAVSO chart of comparison stars for W Cephei
British Astronomical Association VSS light curves
Cephei, W
Cepheus (constellation)
Spectroscopic binaries
Emission-line stars
K-type supergiants
M-type supergiants
B-type main-sequence stars
Semiregular variable stars
BD+57 2568
214369
111592 | W Cephei | Astronomy | 458 |
6,600,588 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Antlia | This is the list of notable stars in the constellation Antlia, sorted by decreasing brightness.
See also
List of stars by constellation
References
List
Antlia | List of stars in Antlia | Astronomy | 31 |
65,053,190 | https://en.wikipedia.org/wiki/Conservation%20Letters | Conservation Letters is a bimonthly peer-reviewed open access scientific journal of the Society for Conservation Biology published by Wiley-Blackwell. It was established in 2008 and covers research on all aspects of conservation biology. The editor-in-chief is Graeme Cumming.
Abstracting and indexing
The journal is abstracted and indexed in:
Biological Abstracts
BIOSIS Previews
CAB Abstracts
Current Contents/Agriculture, Biology & Environmental Sciences
EBSCO databases
Science Citation Index Expanded
Scopus
Veterinary Science Database
The Zoological Record
According to the Journal Citation Reports, the journal has a 2020 impact factor of 8.105.
References
External links
Wiley-Blackwell academic journals
Academic journals established in 2008
Bimonthly journals
Creative Commons Attribution-licensed journals
English-language journals
Ecology journals | Conservation Letters | Environmental_science | 151 |
64,989,268 | https://en.wikipedia.org/wiki/IBRO-Kemali%20Prize | The IBRO Dargut and Milena Kemali International Prize for Research in the field of Basic and Clinical Neurosciences is a prize awarded every two years to an outstanding researcher, under 45 years old, who has made important contributions in the field of basic and clinical neurosciences. The award was established in 1998.
The prize amounts to 25,000 Euros, and the prize winner is invited to give a lecture at the Federation of European Neuroscience Societies (FENS) Forum of Neuroscience, held every two years. According to the FENS regulations, speakers from the previous FENS Forum cannot be speakers at the next FENS Forum. Nominations should be submitted in electronic format and are evaluated by the Prize Committee of the IBRO Dargut & Milena Kemali Foundation.
Prize winners
2022 – Sergiu P. Pasca (Romania, USA) – for his innovative research work using stem cell technology to create human brain organoids and assembloids, and their application to realistic studies of cellular mechanisms of human brain development and disease mechanisms.
2020 - Hailan Hu (Zhejiang, China) – for impressive work on the fundamental neurobiological mechanisms of emotional and affective behaviors.
2018 - Guillermina López-Bendito (Alicante, Spain) - for outstanding work on mechanisms of axon guidance in brain development, and in particular in thalamocortical connectivity.
2016 - Casper Hoogenraad (Utrecht, The Netherlands) - for outstanding work on cytoskeleton dynamics and intracellular transport in neural development and synaptic plasticity.
2014 - Patrik Verstreken (Leuven, Belgium) - for success in undoing the effect of one of the genetic defects that leads to Parkinson's using vitamin K2.
2012 - Eleanor Maguire (London, UK) - for innovative contributions to understanding human memory.
2010 - (Stockholm, Sweden) - for pioneering contributions to understanding of neurogenesis in the central nervous system.
2008 - (San Diego, CA, USA) - for seminal discoveries on how cerebral cortex perceives the environment by showing that cortical circuits operate in an activity-dependent and non-linear fashion using canonical feed-forward and feed-back inhibition circuits as feature detectors of incoming stimuli.
2006 - (Stockholm, Sweden) - for outstanding work on the expression and function of neurotrophic factors and neuropeptide and their receptors exploiting transgenic techniques.
2004 - Cornelia I. Bargmann (San Francisco, CA, USA) for fundamental discoveries concerning genes, behavior, and the sense of smell in the nematode C. elegans.
2002 - Daniele Piomelli (Irvine, CA, USA) for fundamental discoveries concerning the functional roles and regulation of endogenous cannabinoids in the brain and peripheral tissues.
2000 - Robert C. Malenka (Boston, MA, USA) - for fundamental contributions in the field of synaptic plasticity, in particular long term potentiation and long term depression, and the characterization of the role of silent synapses in these processes.
1998 - Tamas Freund (Budapest, Hungary) - for outstanding contributions to the organization and chemical characterization of identified neuronal circuits and cell types in the brain, in particular in the hippocampus.
References
Neuroscience awards
European science and technology awards
International awards | IBRO-Kemali Prize | Technology | 683 |
2,452,868 | https://en.wikipedia.org/wiki/Alessandro%20Padoa | Alessandro Padoa (14 October 1868 – 25 November 1937) was an Italian mathematician and logician, a contributor to the school of Giuseppe Peano. He is remembered for a method for deciding whether, given some formal theory, a new primitive notion is truly independent of the other primitive notions. There is an analogous problem in axiomatic theories, namely deciding whether a given axiom is independent of the other axioms.
The following description of Padoa's career is included in a biography of Peano:
He attended secondary school in Venice, engineering school in Padua, and the University of Turin, from which he received a degree in mathematics in 1895. Although he was never a student of Peano, he was an ardent disciple and, from 1896 on, a collaborator and friend. He taught in secondary schools in Pinerolo, Rome, Cagliari, and (from 1909) at the Technical Institute in Genoa. He also held positions at the Normal School in Aquila and the Naval School in Genoa, and, beginning in 1898, he gave a series of lectures at the Universities of Brussels, Pavia, Berne, Padua, Cagliari, and Geneva. He gave papers at congresses of philosophy and mathematics in Paris, Cambridge, Livorno, Parma, Padua, and Bologna. In 1934 he was awarded the ministerial prize in mathematics by the Accademia dei Lincei (Rome).
The congresses in Paris in 1900 were particularly notable. Padoa's addresses at these congresses have been well remembered for their clear and unconfused exposition of the modern axiomatic method in mathematics. In fact, he is said to be "the first … to get all the ideas concerning defined and undefined concepts completely straight".
Congressional addresses
Philosophers' congress
At the International Congress of Philosophy Padoa spoke on "Logical Introduction to Any Deductive Theory". He says
during the period of elaboration of any deductive theory we choose the ideas to be represented by the undefined symbols and the facts to be stated by the unproved propositions; but, when we begin to formulate the theory, we can imagine that the undefined symbols are completely devoid of meaning and that the unproved propositions (instead of stating facts, that is, relations between the ideas represented by the undefined symbols) are simply conditions imposed upon undefined symbols.
Then, the system of ideas that we have initially chosen is simply one interpretation of the system of undefined symbols; but from the deductive point of view this interpretation can be ignored by the reader, who is free to replace it in his mind by another interpretation that satisfies the conditions stated by the unproved propositions. And since the propositions, from the deductive point of view, do not state facts, but conditions, we cannot consider them genuine postulates.
Padoa went on to say:
...what is necessary to the logical development of a deductive theory is not the empirical knowledge of the properties of things, but the formal knowledge of relations between symbols.
Mathematicians' congress
Padoa spoke at the 1900 International Congress of Mathematicians with his title "A New System of Definitions for Euclidean Geometry". At the outset he discusses the various selections of primitive notions in geometry at the time:
The meaning of any of the symbols that one encounters in geometry must be presupposed, just as one presupposes that of the symbols which appear in pure logic. As there is an arbitrariness in the choice of the undefined symbols, it is necessary to describe the chosen system. We cite only three geometers who are concerned with this question and who have successively reduced the number of undefined symbols, and through them (as well as through symbols that appear in pure logic) it is possible to define all the other symbols.
First, Moritz Pasch was able to define all the other symbols through the following four:
1. point 2. segment (of a line)
3. plane 4. is superimposable upon
Then, Giuseppe Peano was able in 1889 to define plane through point and segment. In 1894 he replaced is superimposable upon with motion in the system of undefined symbols, thus reducing the system to three symbols:
1. point 2. segment 3. motion
Finally, in 1899 Mario Pieri was able to define segment through point and motion. Consequently, all the symbols that one encounters in Euclidean geometry can be defined in terms of only two of them, namely
1. point 2. motion
Padoa completed his address by suggesting and demonstrating his own development of geometric concepts. In particular, he showed how he and Pieri define a line in terms of
collinear points.
References
Bibliography
A. Padoa (1900) "Logical introduction to any deductive theory" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 118–23.
A. Padoa (1900) "Un Nouveau Système de Définitions pour la Géométrie Euclidienne", Proceedings of the International Congress of Mathematicians, tome 2, pages 353–63.
Secondary:
Ivor Grattan-Guinness (2000) The Search for Mathematical Roots 1870–1940. Princeton Uni. Press.
H.C. Kennedy (1980) Peano, Life and Works of Giuseppe Peano, D. Reidel .
Suppes, Patrick (1957, 1999) Introduction to Logic, Dover. Discusses "Padoa's method."
Jean Van Heijenoort (ed.) (1967) From Frege to Gödel. Cambridge: Harvard University Press
External links
1868 births
1937 deaths
20th-century Italian Jews
19th-century Italian mathematicians
20th-century Italian mathematicians
Number theorists
Italian geometers
Algebraists
19th-century Italian Jews | Alessandro Padoa | Mathematics | 1,176 |
641,982 | https://en.wikipedia.org/wiki/Mushroom%20poisoning | Mushroom poisoning is poisoning resulting from the ingestion of mushrooms that contain toxic substances. Symptoms can vary from slight gastrointestinal discomfort to death in about 10 days. Mushroom toxins are secondary metabolites produced by the fungus.
Mushroom poisoning is usually the result of ingestion of wild mushrooms after misidentification of a toxic mushroom as an edible species. The most common reason for this misidentification is a close resemblance in terms of color and general morphology of the toxic mushrooms species with edible species. To prevent mushroom poisoning, mushroom gatherers familiarize themselves with the mushrooms they intend to collect, as well as with any similar-looking toxic species. The safety of eating wild mushrooms may depend on methods of preparation for cooking. Some toxins, such as amatoxins, are thermostable and mushrooms containing such toxins will not be rendered safe to eat by cooking.
Signs and symptoms
Poisonous mushrooms contain a variety of different toxins that can differ markedly in toxicity. Symptoms of mushroom poisoning may vary from gastric upset to organ failure resulting in death. Serious symptoms do not always occur immediately after eating, often not until the toxin attacks the kidney or liver, sometimes days or weeks later.
The most common consequence of mushroom poisoning is simply gastrointestinal upset. Most "poisonous" mushrooms contain gastrointestinal irritants that cause vomiting and diarrhea (sometimes requiring hospitalization), but usually no long-term damage. However, there are a number of recognized mushroom toxins with specific, and sometimes deadly, effects:
The period between ingestion and the onset of symptoms varies dramatically between toxins, some taking days to show symptoms identifiable as mushroom poisoning.
α-Amanitin: For 6–12 hours, there are no symptoms. This is followed by a period of gastrointestinal upset (vomiting and profuse, watery diarrhea). This stage is caused primarily by the phallotoxins and typically lasts 24 hours. At the end of this second stage is when severe liver damage begins. The damage may continue for another 2–3 days. Kidney damage can also occur. Some patients will require a liver transplant. Amatoxins are found in some mushrooms in the genus Amanita, but are also found in some species of Galerina and Lepiota. Overall, mortality is between 10 and 15 percent. Recently, Silybum marianum or blessed milk thistle has been shown to protect the liver from amanita toxins and promote regrowth of damaged cells.
Orellanine: This toxin generally causes no symptoms for 3–20 days after ingestion. Typically around day 11, the process of kidney failure begins, and is usually symptomatic by day 20. These symptoms can include pain in the area of the kidneys, thirst, vomiting, headache, and fatigue. A few species in the very large genus Cortinarius contain this toxin. People having eaten mushrooms containing orellanine may experience early symptoms as well, because the mushrooms often contain other toxins in addition to orellanine. A related toxin that causes similar symptoms but within 3–6 days has been isolated from Amanita smithiana and some other related toxic Amanitas.
Muscarine: Muscarine stimulates the muscarinic receptors of the nerves and muscles. Symptoms include sweating, salivation, tears, blurred vision, palpitations, and, in high doses, respiratory failure. Muscarine is found in mushrooms of the genus Omphalotus, notably the jack o' Lantern mushrooms. It is also found in A. muscaria, although it is now known that the main effect of this mushroom is caused by ibotenic acid. Muscarine can also be found in some Inocybe species and Clitocybe species, in particular Clitocybe dealbata, and some red-pored Boletes.
Gyromitrin: Stomach acids convert gyromitrin to monomethylhydrazine (MMH). It affects multiple body systems. It blocks the important neurotransmitter GABA, leading to stupor, delirium, muscle cramps, loss of coordination, tremors, and/or seizures. It causes severe gastrointestinal irritation, leading to vomiting and diarrhea. In some cases, liver failure has been reported. It can also cause red blood cells to break down, leading to jaundice, kidney failure, and signs of anemia. It is found in mushrooms of the genus Gyromitra. A gyromitrin-like compound has also been identified in mushrooms of the genus Verpa.
Coprine: Coprine is metabolized to a chemical that resembles disulfiram. It inhibits aldehyde dehydrogenase (ALDH), which, in general, causes no harm, unless the person has alcohol in their bloodstream while ALDH is inhibited. This can happen if alcohol is ingested shortly before or up to a few days after eating the mushrooms. In that case, the alcohol cannot be completely metabolized, and the person will experience flushed skin, vomiting, headache, dizziness, weakness, apprehension, confusion, palpitations, and sometimes trouble breathing. Coprine is found mainly in mushrooms of the genus Coprinus, although similar effects have been noted after ingestion of Clitocybe clavipes.
Ibotenic acid: Decarboxylates into muscimol upon ingestion. The effects of muscimol vary, but nausea and vomiting are common. Confusion, euphoria, or sleepiness are possible. Loss of muscular coordination, sweating, and chills are likely. Some people experience visual distortions, a feeling of strength, or delusions. Symptoms normally appear after 30 minutes to 2 hours and last for several hours. A. muscaria, the "Alice in Wonderland" mushroom, is known for the hallucinatory experiences caused by muscimol, but A. pantherina and A. gemmata also contain the same compound. While normally self-limiting, fatalities have been associated with A. pantherina, and consumption of a large number of any of these mushrooms is likely to be dangerous.
Arabitol: A sugar alcohol, similar to mannitol, which causes no harm in most people but causes gastrointestinal irritation in some. It is found in small amounts in oyster mushrooms, and considerable amounts in Suillus species and Hygrophoropsis aurantiaca (the "false chanterelle").
Causes
New species of fungi continue to be discovered, with an estimated 800 new species registered annually. This, added to the fact that many investigations have recently reclassified some species of mushrooms from edible to poisonous, has made older classifications insufficient for describing what is now known about the different species of fungi that are harmful to humans. It is now thought that of the approximately 100,000 known fungi species found worldwide, about 100 of them are poisonous to humans. However, by far the majority of mushroom poisonings are not fatal, and the majority of fatal poisonings are attributable to the Amanita phalloides mushroom.
A majority of these cases are due to mistaken identity. This is a common occurrence with A. phalloides in particular, due to its resemblance to the Asian paddy-straw mushroom, Volvariella volvacea. Both are light-colored and covered with a universal veil when young.
Amanitas can be mistaken for other species, as well, in particular when immature. On at least one occasion they have been mistaken for Coprinus comatus. In this case, the victim had some limited experience in identifying mushrooms, but did not take the time to correctly identify these particular mushrooms until after he began to experience symptoms of mushroom poisoning.
The author of Mushrooms Demystified, David Arora, cautions puffball hunters to beware of Amanita "eggs", which are Amanitas still entirely encased in their universal veil. Amanitas at this stage are difficult to distinguish from puffballs. Foragers are encouraged to always cut the fruiting bodies of suspected puffballs in half, as this will reveal the outline of a developing Amanita should it be present within the structure.
A majority of mushroom poisonings, in general, are the result of small children, especially toddlers in the "grazing" stage, ingesting mushrooms found on the lawn. While this can happen with any mushroom, Chlorophyllum molybdites is often implicated due to its preference for growing in lawns. C. molybdites causes severe gastrointestinal upset but is not considered deadly poisonous.
A few poisonings are the result of misidentification while attempting to collect hallucinogenic mushrooms for recreational use. In 1981, one fatality and two hospitalizations occurred following consumption of Galerina marginata, mistaken for a Psilocybe species. Galerina and Psilocybe species are both small, brown, and sticky, and can be found growing together. However, Galerina contains amatoxins, the same poison found in the deadly Amanita species. Another case reports kidney failure following ingestion of Cortinarius orellanus, a mushroom containing orellanine.
Accidental ingestion of hallucinogenic species also occurs, but it is rarely harmful when the mushrooms are eaten in small quantities. Cases of serious toxicity have been reported in small children. Amanita pantherina, while containing the same hallucinogens as Amanita muscaria (e.g., ibotenic acid and muscimol), has been more commonly associated with severe gastrointestinal upset than its better-known counterpart.
Although usually not fatal, Omphalotus spp., "Jack-o-lantern mushrooms", are another cause of sometimes significant toxicity. They are sometimes mistaken for chanterelles. Both are bright-orange and fruit at the same time of year, although Omphalotus grows on wood and has true gills rather than the veins of a Cantharellus. They contain toxins known as illudins, which cause gastrointestinal symptoms.
Bioluminescent species are generally inedible and often mildly toxic.
Clitocybe dealbata, which is occasionally mistaken for an oyster mushroom or other edible species, contains muscarine.
Toxicities can also occur with collection of morels. Even true morels, if eaten raw, will cause gastrointestinal upset. Typically, morels are thoroughly cooked before eating. Verpa bohemica, although referred to as "thimble morels" or "early morels" by some, have caused toxic effects in some individuals. Gyromitra spp., "false morels", are deadly poisonous if eaten raw. They contain a toxin called gyromitrin, which can cause neurotoxicity, gastrointestinal toxicity, and destruction of the blood cells. The Finns consume Gyromitra esculenta after parboiling, but this may not render the mushroom entirely safe, resulting in its being called the "fugu of the Finnish cuisine".
A more unusual toxin is coprine, a disulfiram-like compound that is harmless unless ingested within a few days of ingesting alcohol. It inhibits aldehyde dehydrogenase, an enzyme required for breaking down alcohol. Thus, the symptoms of toxicity are similar to being hung over—flushing, headache, nausea, palpitations, and, in severe cases, trouble breathing. Coprinus species, including Coprinopsis atramentaria, contain coprine. Coprinus comatus does not, but it is best to avoid mixing alcohol with other members of this genus.
Recently, poisonings have also been associated with Amanita smithiana. These poisonings may be due to orellanine, but the onset of symptoms occurs in 4 to 11 hours, which is much quicker than the 3 to 20 days normally associated with orellanine.
Paxillus involutus is also inedible when raw, but is eaten in Europe after pickling or parboiling. However, after the death of the German mycologist Dr. Julius Schäffer, it was discovered that the mushroom contains a toxin that can stimulate the immune system to attack its red blood cells. This reaction is rare but can occur even after safely eating the mushroom for many years. Similarly, Tricholoma equestre was widely considered edible and good, until it was connected with rare cases of rhabdomyolysis.
In the fall of 2004, thirteen deaths were associated with consumption of Pleurocybella porrigens or "angel's wings". In general, these mushrooms are considered edible. All the victims died of an acute brain disorder, and all had pre-existing kidney disease. The exact cause of the toxicity was not known at the time, and the deaths could not be definitively attributed to mushroom consumption.
However, mushroom poisoning is not always due to mistaken identity. For example, the highly toxic ergot Claviceps purpurea, which grows on rye, is sometimes ground up with rye, unnoticed, and later consumed. This can cause devastating, even fatal, effects, called ergotism.
Cases of idiosyncratic or unusual reactions to fungi can also occur. Some are probably due to allergy, others to some other kind of sensitivity. It is not uncommon for a person to experience gastrointestinal upset associated with one particular mushroom species or genus.
Some mushrooms might concentrate toxins from their growth substrate, such as Chicken of the Woods growing on yew trees.
Poisonous mushrooms
Of the most lethal mushrooms, five—the death cap (A. phalloides), the three destroying angels (A. virosa, A. bisporigera, and A. ocreata), and the fool's mushroom (A. verna)—belong to the genus Amanita, and two more—the deadly webcap (C. rubellus), and the fool's webcap (C. orellanus)—are from the genus Cortinarius. Several species of Galerina, Lepiota, and Conocybe also contain lethal amounts of amatoxins. Deadly species are listed in the List of deadly fungi.
The following species may cause great discomfort, sometimes requiring hospitalization, but are not considered deadly.
Amanita muscaria (fly agaric) – Contains the psychoactive muscimol and the neurotoxin ibotenic acid. Ibotenic acid decarboxylates into muscimol upon curing of the mushroom, rendering it relatively non-toxic, though death via respiratory depression is possible. Muscimol intoxication is often considered unpleasant and undesirable, however, and as such has seen little recreational use compared to the unrelated psilocybin mushroom, though it has been used as an entheogen by the native people of Siberia.
Amanita pantherina (panther mushroom) – contains toxins similar to those of A. muscaria, but is associated with more fatalities than A. muscaria.
Chlorophyllum molybdites (greengills) – causes intense gastrointestinal upset.
Entoloma (pinkgills) – some species are highly poisonous, such as livid entoloma (Entoloma sinuatum), Entoloma rhodopolium, and Entoloma nidorosum. Symptoms of intense gastrointestinal upset appear after 20 minutes to 4 hours, caused by an unidentified gastrointestinal irritant.
Many Inocybe species such as Inocybe fastigiata and Inocybe geophylla contain muscarine.
Inosperma erubescens has caused death.
Some white Clitocybe species, including C. rivulosa and C. dealbata, contain muscarine.
Tricholoma pardinum, Tricholoma tigrinum (tiger tricholoma) – gastrointestinal upset due to an unidentified toxin, begins in 15 minutes to 2 hours and lasts 4 to 6 days.
Tricholoma equestre (man-on-horseback) – until recently thought edible and good, can lead to rhabdomyolysis after repeated consumption.
Hypholoma fasciculare/Naematoloma fasciculare (sulfur tuft) – usually causes gastrointestinal upset, but the toxins fasciculol E and F could lead to paralysis and death.
Paxillus involutus (brown roll-rim) – once thought edible, but now found to destroy red blood cells with regular or long-term consumption.
Rubroboletus satanas (Devil's bolete), Suillellus luridus, Rubroboletus legaliae, Chalciporus piperatus, Neoboletus luridiformis, Rubroboletus pulcherrimus – gastrointestinal irritation. Of these, only R. pulcherrimus has been implicated in a death. Many books list N. luridiformis as edible, but Arora lists it as "to be avoided".
Hebeloma crustuliniforme (known as poison pie or fairy cakes) – causes gastrointestinal symptoms such as nausea and vomiting.
Russula emetica (the sickener) – as its name implies, causes rapid vomiting. Other Russulas with a peppery taste (Russula silvicola, Russula mairei) will likely do the same.
Agaricus hondensis, Agaricus californicus, Agaricus praeclaresquamosus, Agaricus xanthodermus – cause vomiting and diarrhea in most people, although some people seem to be immune.
Lactifluus piperatus, Lactarius torminosus, Lactarius rufus – these and other peppery-tasting milk-caps are pickled and eaten in Scandinavia, but are indigestible or poisonous unless correctly prepared.
Lactarius vinaceorufescens, Lactarius uvidus – reported to be poisonous. Arora reports that all yellow- or purple-staining Lactarius are "best avoided".
Ramaria gelatinosa – causes indigestion in many people, although some seem immune.
Gomphus floccosus (the scaly chanterelle) – causes gastric upset in many people, although some eat it without problems. G. floccosus is sometimes confused with the chanterelle.
Evolution
Many different species of mushrooms are poisonous and contain differing toxins that cause different types of harm. The toxin most often responsible for severe poisoning is amatoxin, found in the mushroom species that cause the most fatalities every year. The death cap (Amanita phalloides) owes its name to its lethality: a single mushroom contains about 10 mg of amatoxin, roughly a lethal dose. Amatoxin inhibits RNA polymerase II, shutting down transcription and protein synthesis, which leads to cell death. This particularly affects metabolically active tissues such as the liver and kidneys and, eventually, the central nervous system. It can also cause loss of muscle contraction and liver failure. Despite the severe and dangerous symptoms, amatoxin poisoning is treatable given quick, professional care.
Mushrooms have also been found to have evolved toxicity independently of each other. Researchers have found that different mushroom species share the same type of amatoxin, called amanitin. They specifically looked at three of the deadliest genera, Amanita, Galerina, and Lepiota. Genome sequencing, a scientific process that determines the DNA sequence of an organism’s genome, showed that these mushrooms obtained the toxin genes via horizontal gene transfer; once assimilated, the genes can then be passed down to offspring. The researchers also concluded that there is “an unknown ancestral fungal donor” that allowed for the horizontal gene transfer.
Mushroom toxins have appeared and disappeared many times throughout their evolutionary history. Many scientists believe that the toxins evolved in mushrooms are used to deter predation, either from fungivores or mammals. If mushrooms are consumed, it can negatively affect their ability to disperse spores, survive, and reproduce. Snails and insects are fungivores and many have learned or evolved to avoid eating poisonous mushrooms. However, it is believed that mammals pose a higher threat to mushrooms than fungivores, as larger body sizes mean they are more capable of eating an entire fungus in one sitting.
Some phenotypes, or observable characteristics, may co-occur with toxicity, and therefore act as a warning signal. The first potential warning sign is aposematism, which is an adaptation that warns off predators based on a physical trait of an organism. In this case, the researchers were interested in observing whether the color of a mushroom deters predators. This would suggest that toxic mushrooms are of different colors than non-poisonous ones. The visual cue of some colors should be enough for predators to know not to consume the mushroom. The second possible warning sign is olfactory aposematism, a similar concept, but instead of focusing on color, the odor of the mushroom would be what deters predation. This would again indicate that poisonous mushrooms would emit a different odor than non-poisonous ones. A third possibility is social learning, the ability of organisms to learn from other organisms. This would suggest that avoidance of toxic mushrooms is a learned behavior. Organisms may avoid toxic mushrooms if they have observed other organisms of the same species consume the fungus. Learned behavior is when an organism learns how to behave based on previous experiences. Some researchers believe that if an organism got sick, or observed another organism get sick, from consuming a poisonous mushroom, it would know not to continue consuming it for fear of getting sick again.
An analysis of 245 North American mushroom species and 265 from Europe revealed 21.2% of the North American species and 12.1% of the European ones as poisonous. After collecting this information, and using a neural network to classify all of the mushrooms based on color and odor, the researchers concluded that there was no correlation between cap color and mushrooms containing toxins. The cap is the top, rounded part of a mushroom and comes in different colors. This suggests that cap color does not act as a warning sign to deter predators and provides no evidence that poisonous mushrooms signal their toxicity visually. The three deadly genera listed above, Amanita, Galerina, and Lepiota, are all of different colors, consisting of reds, yellows, browns, and whites. A possible explanation for why color is not a factor in determining whether a mushroom is poisonous is that many of its predators are nocturnal and have poor vision. Distinguishing the different colors would therefore be difficult, making color an unreliable signal for avoiding toxic species. The study, however, did suggest that poisonous mushrooms emit a smell that is unpleasant and therefore discourages consumption. Despite this result, there is no definitive evidence to suggest whether the odor is a result of the production of the toxin or whether it is intended as a warning signal. Additionally, many of the odors are not picked up by humans. This could suggest that there is another characteristic difference between poisonous and non-poisonous mushrooms that helps them avoid predation by larger mammals, or that there is another purpose for some mushrooms being poisonous that is not dependent on predators.
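As a purely illustrative sketch of this kind of analysis (not the study's actual method or data: the study used a much larger dataset and a neural-network classifier, and the colours and toxicity labels below are invented), the question of whether cap colour is associated with toxicity can be framed as a simple contingency-table test in Python:

# Hypothetical example: is cap colour associated with toxicity?
# The (colour, poisonous) pairs below are invented for illustration.
from collections import Counter
from scipy.stats import chi2_contingency

species = [
    ("red", True), ("white", True), ("brown", True), ("yellow", False),
    ("brown", False), ("white", False), ("red", False), ("brown", True),
    ("yellow", True), ("white", False), ("brown", False), ("red", False),
]

colours = sorted({colour for colour, _ in species})
counts = Counter(species)  # maps (colour, poisonous) -> number of species

# 2 x n_colours contingency table: first row poisonous, second row edible.
table = [[counts[(colour, poisonous)] for colour in colours]
         for poisonous in (True, False)]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# A large p-value indicates no detectable association between cap colour
# and toxicity, in line with the conclusion described above.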
Prognosis and treatment
Some mushrooms contain less toxic compounds and, therefore, are not severely poisonous. Poisonings by these mushrooms may respond well to treatment. However, certain types of mushrooms contain very potent toxins and are very poisonous, so even if symptoms are treated promptly, mortality is high. With some toxins, death can occur in a week or a few days. Although a liver or kidney transplant may save some patients with complete organ failure, in many cases there are no organs available. Patients hospitalized and given aggressive support therapy almost immediately after ingestion of amanitin-containing mushrooms have a mortality rate of only 10%, whereas those admitted 60 or more hours after ingestion have a 50–90% mortality rate. In the United States, mushroom poisoning kills an average of about 3 people a year. According to National Poison Data System (NPDS) annual reports published by America's Poison Centers, the average number of deaths occurring over a ten-year period (2012–2020) was about 3 a year. In 2012, 4 of the 7 total deaths that occurred that year were attributed to a single event where a "housekeeper at a Board and Care Home for elderly dementia patients collected and cooked wild (Amanita) mushrooms into a sauce that she consumed with six residents of the home." Over 1,300 emergency room visits in the United States were attributed to poisonous mushroom ingestion in 2016, with about 9% of patients experiencing a serious adverse outcome.
Society and culture
Folklore
Many old wives' tales concern the defining features of poisonous mushrooms. However, there are no general identifiers for poisonous mushrooms, so such beliefs are unreliable. Guidelines to identify particular mushrooms exist, and will serve only if one knows which mushrooms are toxic.
Examples of erroneous folklore "rules" include:
"Poisonous mushrooms are brightly colored." – Indeed, fly agaric, usually bright-red to orange or yellow, is narcotic and hallucinogenic, although no human deaths have been reported. The deadly destroying angel, in contrast, is an unremarkable white. The deadly Galerinas are brown. Some choice edible species (chanterelles, Amanita caesarea, Laetiporus sulphureus, etc.) are brightly colored, whereas most poisonous species are brown or white.
"Insects/animals will avoid toxic mushrooms." – Fungi that are harmless to invertebrates can still be toxic to humans; the death cap, for instance, is often infested by insect larvae.
"Poisonous mushrooms blacken silver." – None of the known mushroom toxins react with silver.
"Poisonous mushrooms taste bad." – People who have eaten the deadly Amanitas and survived have reported that the mushrooms tasted quite good.
"All mushrooms are safe if cooked/parboiled/dried/pickled/etc." – While it is true that some otherwise-inedible species can be rendered safe by special preparation, many toxic species cannot be made toxin-free. Many fungal toxins are not particularly sensitive to heat and so are not broken down during cooking; in particular, α-Amanitin, the poison produced by the death cap (Amanita phalloides) and others of the genus, is not denatured by heat.
"Poisonous mushrooms will turn rice red when boiled." – A number of Laotian refugees were hospitalized after eating mushrooms (probably toxic Russula species) deemed safe by this folklore rule and this misconception cost at least one person her life.
"Poisonous mushrooms have a pointed cap. Edible ones have a flat, rounded cap." – The shape of the mushroom cap does not correlate with presence or absence of mushroom toxins, so this is not a reliable method to distinguish between edible and poisonous species. Death cap, for instance, has a rounded cap when mature.
"Boletes are, in general, safe to eat." – It is true that, unlike a number of Amanita species in particular, in most parts of the world, there are no known deadly varieties of the genus Boletus, which reduces the risks associated with misidentification. However, mushrooms like the Devil's bolete are poisonous both raw and cooked and can lead to strong gastrointestinal symptoms, and other species like the lurid bolete require thorough cooking to break down toxins. As with another mushroom genera, proper caution is, therefore, advised in determining the correct species.
Notable cases
Siddhartha Gautama (known as The Buddha), by some accounts, may have died of mushroom poisoning around 479 BCE, though this claim has not been universally accepted.
Roman Emperor Claudius is said to have been murdered by being fed the death cap mushroom. However, this story first appeared some two centuries after the events, and it is debatable whether Claudius was murdered at all.
The best-selling author Nicholas Evans (The Horse Whisperer) was poisoned (but survived) after eating Cortinarius rubellus.
The parents of the physicist Daniel Gabriel Fahrenheit, who created the Fahrenheit temperature scale, died in Danzig on 14 August 1701 from accidentally eating poisonous mushrooms.
The composer Johann Schobert died in Paris, along with his wife, all but one of his children, their maidservant, and four acquaintances after insisting that certain poisonous mushrooms they had gathered were edible despite the express warning of cooks at two separate restaurants to which he had taken the mushrooms.
July 2023 Leongatha mushroom poisoning – Four people in Leongatha, Australia, were taken to hospital after consuming beef Wellington suspected to have contained death cap mushrooms. Three of the four guests subsequently died and one survived, later receiving a liver transplant. The woman who cooked the meal, Erin Patterson, was charged with murder in November 2023. Patterson has pleaded not guilty, and the Supreme Court is expected to hear her case on 28 April 2025.
In August 2023, Professor Vitaly Melnikov, 77, who had headed the Moscow Department of Rocket and Space Systems at RSC Energia (Russia's leading spacecraft manufacturer), became suddenly seriously ill and subsequently died after eating inedible mushrooms.
See also
List of deadly fungi (for lethal species only)
List of poisonous fungi (including non-deadly species that are nevertheless harmful)
References
External links
Poisonous American Mushrooms – AmericanMushrooms.com
Poisonous mushrooms: microscopic identification in cooked specimens from medical mycologist R.C. Summerbell
Mushroom Poisoning Syndromes from the North American Mycological Association
Mushroom Poisoning Case Registry (North America) from the North American Mycological Association
American Association of Poison Control Centers Provides information on the toxicity of mushrooms in your area, symptoms and first aid.
Mycotoxins
Toxic effect of noxious substances eaten as food | Mushroom poisoning | Environmental_science | 6,251 |
2,820,700 | https://en.wikipedia.org/wiki/Dysnomia%20%28moon%29 | Dysnomia (formally (136199) Eris I Dysnomia) is the only known moon of the dwarf planet Eris and is the second-largest known moon of a dwarf planet, after Pluto I Charon. It was discovered in September 2005 by Mike Brown and the Laser Guide Star Adaptive Optics (LGSAO) team at the W. M. Keck Observatory. It carried the provisional designation S/2005 (2003 UB313) 1 until it was officially named Dysnomia (from the Ancient Greek word meaning anarchy/lawlessness) in September 2006, after the daughter of the Greek goddess Eris.
With an estimated diameter of , Dysnomia spans 24% to 29% of Eris's diameter. It is significantly less massive than Eris, with a density consistent with it being mainly composed of ice. In stark contrast to Eris's highly-reflective icy surface, Dysnomia has a very dark surface that reflects 5% of incoming visible light, resembling typical trans-Neptunian objects around Dysnomia's size. These physical properties indicate Dysnomia likely formed from a large impact on Eris, in a similar manner to other binary dwarf planet systems like Pluto and , and the Earth–Moon system.
Discovery
In 2005, the adaptive optics team at the Keck telescopes in Hawaii carried out observations of the four brightest Kuiper belt objects (Pluto, , , and ), using the newly commissioned laser guide star adaptive optics system. Observations taken on 10 September 2005, revealed a moon in orbit around Eris, provisionally designated . In keeping with the Xena nickname that was already in use for Eris, the moon was nicknamed "Gabrielle" by its discoverers, after Xena's sidekick.
Physical characteristics
Submillimeter-wavelength observations of the Eris–Dysnomia system's thermal emissions by the Atacama Large Millimeter Array (ALMA) in 2015 first showed that Dysnomia had a large diameter and a very low albedo, with the initial estimate being . Further observations by ALMA in 2018 refined Dysnomia's diameter to (24% to 29% of Eris's diameter) and an albedo of . Of the known moons of dwarf planets, only Charon is larger, making Dysnomia the second-largest moon of a dwarf planet. Dysnomia's low albedo significantly contrasts with Eris's extremely high albedo of 0.96; its surface has been described to be darker than coal, which is a typical characteristic seen in trans-Neptunian objects around Dysnomia's size.
Eris and Dysnomia are mutually tidally locked, like Pluto and Charon. Astrometric observations of the Eris–Dysnomia system by ALMA show that Dysnomia does not induce detectable barycentric wobbling in Eris's position, implying its mass must be less than (mass ratio ). This is below the estimated mass range of (mass ratio 0.01–0.03) that would normally allow Eris to be tidally locked within the age of the Solar System, suggesting that Eris must therefore be unusually dissipative. ALMA's upper-limit mass estimate for Dysnomia corresponds to an upper-limit density of , implying a mostly icy composition. The shape of Dysnomia is not known, but its low density suggests that it should not be in hydrostatic equilibrium.
The brightness difference between Dysnomia and Eris decreases with longer and redder wavelengths; Hubble Space Telescope observations show that Dysnomia is 500 times fainter than Eris (6.70-magnitude difference) in visible light, whereas near-infrared Keck telescope observations show that Dysnomia is ~60 times fainter (4.43-magnitude difference) than Eris. This indicates that Dysnomia has a very different spectrum and redder color than Eris, implying a significantly darker surface, which has since been confirmed by submillimeter observations.
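These factors follow directly from the astronomical magnitude scale, in which a difference of \Delta m magnitudes corresponds to a flux ratio of 10^{0.4\,\Delta m}; as a worked check of the figures above (an illustration only, not an additional measurement):

\frac{F_{\text{Eris}}}{F_{\text{Dysnomia}}} = 10^{0.4\,\Delta m}, \qquad 10^{0.4 \times 6.70} \approx 480, \qquad 10^{0.4 \times 4.43} \approx 59,

in agreement with the quoted factors of roughly 500 and 60.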
Orbit
Combining Keck and Hubble observations, the orbit of Dysnomia was used to determine the mass of Eris through Kepler's third law of planetary motion. Dysnomia's average orbital distance from Eris is approximately , with a calculated orbital period of 15.786 days, or approximately half a month. This shows that the mass of Eris is 1.27 times that of Pluto. Extensive observations by Hubble indicate that Dysnomia has a nearly circular orbit around Eris, with a low orbital eccentricity of . Over the course of Dysnomia's orbit, its distance from Eris varies by due to its slightly eccentric orbit.
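The underlying calculation is the Newtonian form of Kepler's third law for a two-body system; a minimal sketch, with the symbols left general because the semi-major axis figure is not reproduced above:

M_{\text{Eris}} + M_{\text{Dysnomia}} = \frac{4\pi^{2} a^{3}}{G\,P^{2}},

where a is the semi-major axis of Dysnomia's orbit, P is the orbital period of 15.786 days quoted above, and G is the gravitational constant. Because Dysnomia is far less massive than Eris, the sum is effectively the mass of Eris alone, which is the basis of the comparison with Pluto's mass.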
Dynamical simulations of Dysnomia suggest that its orbit should have completely circularized through mutual tidal interactions with Eris within timescales of 5–17 million years, regardless of the moon's density. A non-zero eccentricity would thus mean that Dysnomia's orbit is being perturbed, possibly due to the presence of an additional inner satellite of Eris. However, it is possible that the measured eccentricity is not real, but due to interference of the measurements by albedo features, or systematic errors.
From Hubble observations from 2005 to 2018, the inclination of Dysnomia's orbit with respect to Eris's heliocentric orbit is calculated to be approximately 78°. Since the inclination is less than 90°, Dysnomia's orbit is therefore prograde relative to Eris's orbit. In 2239, Eris and Dysnomia will enter a period of mutual events in which Dysnomia's orbital plane is aligned edge-on to the Sun, allowing for Eris and Dysnomia to take turns eclipsing each other.
Formation
Eight of the ten largest trans-Neptunian objects are known to have at least one satellite. Among the fainter members of the trans-Neptunian population, only about 10% are known to have satellites. This is thought to imply that collisions between large KBOs have been frequent in the past. Impacts between bodies of the order of across would throw off large amounts of material that would coalesce into a moon. A similar mechanism is thought to have led to the formation of the Moon when Earth was struck by a giant impactor (see Giant impact hypothesis) early in the history of the Solar System.
Name
Mike Brown, the moon's discoverer, chose the name Dysnomia for the moon. As the daughter of Eris, the mythological Dysnomia fit the established pattern of naming moons after gods associated with the primary body (hence, Jupiter's largest moons are named after lovers of Jupiter, while Saturn's are named after his fellow Titans). The English translation of Dysnomia, "lawlessness", also echoes Lucy Lawless, the actress who played Xena in Xena: Warrior Princess on television. Before receiving their official names, Eris and Dysnomia had been nicknamed "Xena" and "Gabrielle", though Brown states that the connection was accidental.
A primary reason for the name was its similarity to the name of Brown's wife, Diane, following a pattern established with Pluto. Pluto owes its name in part to its first two letters, which form the initials of Percival Lowell, the founder of the observatory where its discoverer, Clyde Tombaugh, was working, and the person who inspired the search for "Planet X". James Christy, who discovered Charon, did something similar by adding the Greek ending -on to Char, the nickname of his wife Charlene. (Christy wasn't aware that the resulting 'Charon' was a figure in Greek mythology.) "Dysnomia", similarly, has the same first letter as Brown's wife, Diane.
Notes
References
External links
Moons of dwarf planets
Eris (dwarf planet)
Discoveries by Michael E. Brown
20050910
Discoveries by Chad Trujillo
Discoveries by David L. Rabinowitz
Solar System | Dysnomia (moon) | Astronomy | 1,665 |
66,128,671 | https://en.wikipedia.org/wiki/Biphasic%20calcium%20sulfate | Biphasic calcium sulfate is a granulated powder composed of calcium sulfate dihydrate (CaSO4•2H2O) and calcium sulfate hemihydrate (CaSO4•½H2O). It is used primarily as a bone grafting material in dental augmentation procedures such as socket grafting, lateral augmentation, sinus lift, cyst enucleation and more.
The clinical use of calcium sulfate has been documented for over a century as a bone grafting material, and was first recorded in Germany in 1892 when it was used to fill bone defects in patients with tuberculous cavities. Calcium sulfate has not been widely available for dental uses due to its instability in the presence of blood and saliva. Studies have shown that it is a delivery vehicle for growth factors and that the calcium in calcium sulfate stimulates osteoblasts.
Biphasic calcium sulfate was invented in 2010 and has the same chemical structure as calcium sulfate. It is a biocompatible material that slowly dissolves as new bone is formed, over a period of 3 to 6 months. It is also used as a composite graft, for longer term use. The unique structure of biphasic calcium sulfate allows it to be moldable and stable in the presence of blood and saliva, making it effective for use in dental augmentation procedures. Biphasic calcium sulfate is well accepted by the body, and acts as a scaffold allowing for optimal bone growth as it slowly reabsorbs. Soft tissue is prevented from growing into the defect, but blood vessels are able to grow (angiogenesis), which brings osteogenic cells to the area.
References
Calcium compounds
Sulfates
21st-century inventions | Biphasic calcium sulfate | Chemistry | 352 |
21,720,871 | https://en.wikipedia.org/wiki/RECOrd%20%28Local%20Biological%20Records%20Centre%29 | rECOrd is a Local Biological Records Centre (LRC) serving Cheshire, Halton, Warrington and Wirral (including the vice-county 'pan-handle' boundary around Stockport) - 'The Cheshire region'. It provides a local facility for the storage, validation and usage of Cheshire-based biological data under the National Biodiversity Network (NBN) project. It is one of a number of local Biological Records Centres across Britain which together aim to give complete geographic coverage of the UK.
The organisation is housed in Oakfield House at Chester Zoo. It provides support for biological recording and for biological recorders within the Cheshire region, allowing as wide access as is possible to both species and habitat records for the region commensurate with protecting those self-same species and habitats. This access aims to inform, educate and to provide real data upon which environmentalists, ecologists and planners, and other individuals and organisations can base decisions.
rECOrd deals with data for wildlife, biodiversity, nature, habitats, wildlife sites and geology, geomorphology and geodiversity.
rECOrd Online Data Input System (RODIS) is a facility for entering wildlife sighting information via the rECOrd website.
A mix of permanent staff, contractors and volunteers undertake data keying and verification duties, surveys and research historical data.
rECOrd is a non-profit making (not-for-profit) company, limited by guarantee (Company No.: 4046886), and is also a charity (Reg. No.: 1095859). David Bellamy is the organisation's patron, and Gordon McGregor Reid is its president.
Geographic area
The area covered is designated as 'the Cheshire region'.
the county of Cheshire (both modern and vice-county)
the administrative and unitary authorities of Halton and Warrington
the Wirral (once part of Cheshire - now part of Merseyside)
the Mersey and Dee river estuaries and
the marine environment bordering the Wirral out to the 12 mile limit
History
rECOrd began its development in October 2000, managed by Steve J. McWilliam, and was fully launched on 12 July 2002 when it was formally opened by Sir Martin Doughty of English Nature.
External links
rECOrd
rECOrd's Google Mapping Facility
rECOrd Photo Gallery
rECOrd Nature Discussion Forum
List of Cheshire County Recorders
rECOrd's Data Search Facility
Cheshire Wildlife Trust
National Biodiversity Network
Natural England
Environment Agency
Chester Zoo
Directory of Local Biological Records Centres
National Federation for Biological Recording
Association of Local Record Centres
Biodiversity
Biodiversity databases
Biological research institutes in the United Kingdom
Ecology organizations
Environment of Cheshire | RECOrd (Local Biological Records Centre) | Biology,Environmental_science | 511 |
22,298,596 | https://en.wikipedia.org/wiki/Eco%20pickled%20surface | Eco pickled surface (EPS) is a process applied to hot rolled sheet steel to remove all surface oxides (mill scale) and clean the steel surface. Steel which has undergone the EPS process acquires a high degree of resistance to subsequent development of surface oxide (rust), so long as it does not come into direct contact with moisture. EPS was developed by The Material Works, Ltd., which has filed several patent applications covering the process. It is primarily intended to be a replacement of the acid pickling process wherein steel strip is immersed in solutions of hydrochloric and sulfuric acids to chemically remove oxides.
Overview of the EPS process
The EPS process (see Figure 2) begins with hot rolled strip steel in coil form. This steel pays off of an uncoiler, then passes through a machine which serves the purpose of "scale breaker", "leveler" or both. This machine (see Figure 2) works the material between sets of hardened rollers. This has the effect of removing the curvature of the strip ("coil set") and breaking loose the outer layers of mill scale which encase the steel strip.
After passing through the "scale breaker/leveler" machine, the steel strip enters the first "EPS slurry blasting cell". Slurry blasting is a wet abrasive blasting process that combines a fine-particle metallic abrasive with a "carrier liquid" (the most common one being water). This abrasive + water slurry mixture is fed into a rotating impeller which propels it at high velocity across the object to be cleaned (see Figure 3). Slurry blasting is a method for removing rust/scale, for blast cleaning and shot peening. Cleaning agents can be introduced into the carrier liquid to reduce smut and aid in rust prevention.
An EPS slurry blasting cell is composed of eight of the slurry discharge heads shown in Figure 3 – four for the top surface and four for the bottom surface of the strip. Inside the slurry blasting cell, jets of water cleanse the steel strip of both the abrasive particles and the dislodged mill scale. An EPS production system may use multiple EPS slurry blasting cells arranged in tandem, so the steel strip passes from one cell into the next, then into the next, and so on. Multiple cells increase the exposure of the steel strip to the slurry blast streams, thereby allowing the strip to move faster, yet still achieve the necessary level of scale removal. The strip speed and, therefore, system output increases in rough proportion to the number of EPS slurry blasting cells used.
The strip emerges from the final blasting cell and is dried using high-velocity air blowers. At this point the strip passes beneath a real-time oxide detector camera, which provides feedback to the line control to assure full oxide removal is accomplished.
To conclude the process, the strip may, optionally, have an oil film or lubricant applied, then it is recoiled. Of note is that tension created by the force of the recoiler pulling the strip through the scale breaker/leveler serves to flatten the strip, removing bow, edge wave and minor coil breaks. Also, not shown in Figure 2 is the slurry delivery/recirculation/filtering system. This closed-loop system collects the carrier liquid, abrasive and removed scale, filters out the removed scale, other contaminants and "undersized" abrasive particles, and returns a cleansed slurry mixture back to the blasting cells.
Characteristics of EPS-processed steel strip
Steel which utilizes the EPS process to remove surface scale shows few differences from steel which utilizes acid pickling to remove surface scale. "Downstream" industrial processes such as galvanizing, cold reducing and painting of EPS-processed steel strip show it to be interchangeable with acid-pickled steel strip. This also holds true for common sheet metal fabrication processes, such as laser cutting, plasma cutting, stamping, welding, bending, and roll forming – no meaningful difference between steel strip using the EPS process and steel strip using acid pickling.
An area where the difference between EPS-processed steel strip and acid-pickled steel strip is apparent is visual appearance. Steel which has undergone EPS processing exhibits a more uniform, lustrous appearance, as shown in Figure 4. In the EPS process, the impact of the abrasive particles on the steel surface serves to "smooth out" minor surface imperfections such as scratches, pits, roll marks and silicone streaks.
Another area of difference between EPS-treated steel strip and acid-pickled steel strip is rust resistance. Conventional acid-pickled steel strip is frequently coated with a thin film of oil to serve as a barrier to contact with oxygen so as to prevent rusting. EPS-processed steel is inherently rust-inhibitive and, therefore, needs no oil or other coating to prevent rusting. Many "downstream" processes and steel fabrication processes must have the steel's oil coating (or other surface contaminants) removed as a precursor step of the process. Use of EPS-treated steel in these processes precludes the need for any such "oil-stripping" precursor step, thereby simplifying the process.
The rust inhibitive property of EPS-processed strip
The rust resistance of EPS-processed steel strip is superior to that of acid pickled steel strip primarily because acid pickling imposes a corrosion "penalty" on the steel which EPS processing does not. This penalty is a result of chemical reactions that occur after acid pickling and serve as a catalyst for oxidation. The primary pickling agent is hydrochloric acid (HCl). Although the steel strip is thoroughly rinsed with clean water after immersion in the HCl bath, some residual amount of chlorine (Cl) remains on the surface of the strip. Chlorine reacts very readily with oxygen to form chlorides, so the free Cl acts as something of a "magnet" for oxygen. This mechanism makes acid-pickled steel more prone to picking up oxygen, whereas there is no comparable mechanism at work with EPS mechanical pickling.
In addition to the free Cl, compounds known as "chloride salts" remain on the surface of acid pickled steel in trace amounts, even after rinsing. Chloride salts react rapidly with moisture and accelerate oxidation of iron on the steel's surface. To prevent oxidation of the iron in the acid pickled strip, a thin film of oil is applied to the surface to serve as a barrier between the free Cl, chloride salts and oxygen. No such protective barrier is needed for EPS-processed steel, as no free Cl or chloride salts are present.
However, an additive is used in the EPS slurry blast carrier liquid to reduce the "smut" that would otherwise remain on the surface and dull the appearance of EPS-processed strip. This additive contains a rust inhibitor, a residual amount of which remains on the surface even after rinsing. It is believed that the presence of the rust inhibitor adds to the overall EPS-processed strip's ability to resist rusting. The additive has been demonstrated to have no impact on paint performance.
The EPS process as a replacement for acid pickling
The EPS process produces scale-free steel strip which is interchangeable with acid-pickled steel strip, yet the EPS process entails lower capital and operating (variable) cost than an acid-pickling line of equivalent output. For this reason the EPS process is considered to be a direct replacement of acid pickling.
In addition, the EPS process is considered less damaging to the environment than acid pickling for these reasons:
Lower energy consumption;
No hazardous/acidic substances used in the process;
No potential exposure to acid fumes for people, equipment or buildings;
No hazardous or polluting outputs or byproducts of the process with disposal or fume stack liabilities.
Applications
Eco pickled surface technology has been tested and approved for use as a replacement for acid pickled steel by automotive manufacturers General Motors and Chrysler. The Eco Pickled Surface process was a finalist in the 2013 American Metal Market (AMM) Awards for Steel Excellence.
Notes
References
Corrosion prevention
Metallurgical processes | Eco pickled surface | Chemistry,Materials_science | 1,684 |
71,483,246 | https://en.wikipedia.org/wiki/Leucocoprinus%20tenellus | Leucocoprinus tenellus is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1905 by the French mycologist Jean Louis Émile Boudier who classified it as Lepiota tenella accompanied with illustrations of the species. It was reclassified as Leucocoprinus tenellus in 1943 by the French mycologist Marcel Locquin.
In 1983 the British mycologist David Pegler described another species as Leucocoprinus tenellus; however, this was an illegitimate name, and the species was reclassified by the mycologist Jaime Bernardo Blanco-Dios in 2020 as Leucocoprinus martinicensis.
In 2012 the German mycologist described Leucocoprinus emilei however Species Fungorum notes this as an illegitimate name which was based on the basionym Lepiota tenella. It is therefore considered a synonym of Leucocoprinus tenellus.
Description
Leucocoprinus tenellus is a small dapperling mushroom with thin white flesh. Boudier provides the following description of the species:
Cap: Starts campanulate (bell shaped) and covered in woolly (floccose) violet scales before expanding and becoming white with small violet scales clustered in the centre disk and only sparsely dotted across the rest of the surface. The cap edges are striated. Gills: Free and white. Stem: 5-7cm tall (including the cap) tapering up from a slightly bulbous base. The interior is hollow whilst the exterior surface is white above the membranous stem ring, which is located towards the middle of the stem (median), with violet woolly scales below the ring and towards the base. Spores: Ovoid. 12-14 x 7-8 μm.
Whilst Boudier describes the colour as violet in the original text his illustration appears more brown. This may be due to aging and yellowing of the paper (colour corrected on the yellow channel in the following image), fading of the pigment over time or over-saturation in the scanned version of the book. Alternatively his description of the species as violet may simply be down to his own impression of the colour of this mushroom since Leucocoprinus ianthius can present with violet-brown tones which are described differently by different authors.
Habitat and distribution
L. tenellus is scarcely recorded and little known and it is possible that it is a synonym for another Leucocoprinus species which simply has not been reclassified yet.
The specimen studied by Boudier was found in a greenhouse in Montmorency, France.
Similar species
Boudier states that the species is similar to Lepiota bebrissoni [sic] (now known as Leucocoprinus brebissonii) and Lepiota serena (now known as Leucoagaricus serenus) but distinct from these species due to its colour and scaly stem base.
References
Leucocoprinus
Fungi described in 1905
Taxa named by Jean Louis Émile Boudier
Fungus species | Leucocoprinus tenellus | Biology | 631 |
755,393 | https://en.wikipedia.org/wiki/Adrenarche | Adrenarche is an early stage in sexual maturation that happens in some higher primates (including humans), typically peaks at around 20 years of age, and is involved in the development of pubic hair, body odor, skin oiliness, axillary hair, sexual attraction/sexual desire/increased libido and mild acne. During adrenarche the adrenal glands secrete increased levels of weak adrenal androgens, including dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), and androstenedione (A4), but without increased cortisol levels. Adrenarche is the result of the development of a new zone of the adrenal cortex, the zona reticularis. Adrenarche is a process related to puberty, but distinct from hypothalamic–pituitary–gonadal axis (HPG axis) maturation and function.
Occurrence
Adrenarche occurs starting at the age of 6 years. After the first year of life, the adrenal glands secrete very low levels of adrenal androgens. Adrenarche begins on average between age 5 to 8 in girls and between 7 and 11 in boys, and precedes puberty by about 2 years. Unlike the physical changes that occur during puberty, adrenarche is primarily an emotional and psychological stage of development. It continues throughout puberty, with adrenal androgen levels progressively increasing until reaching maximal levels in young adulthood, around the age of 20 years. Circulating DHEA-S levels specifically peak in humans at about age 19 or 20 years in females and around age 20 to 24 years in males. Levels of corticosteroids like cortisol do not change with adrenarche. Biological messengers during the onset of adrenarche are thought to signal in preparation for puberty.
Role in puberty
An initiator of adrenarche has not yet been identified. Researchers have unsuccessfully tried to identify a new pituitary peptide, to be called "adrenal androgen stimulating hormone". Others have proposed that adrenarchal maturation is a gradual process intrinsic to the adrenal glands that has no distinct trigger. A third avenue of research is pursuing a possible relationship with either fetal or childhood body mass and related signals such as insulin and leptin. Many children born small for gestational age (SGA) because of intrauterine growth restriction (IUGR) have an earlier onset of adrenarche, which raises the possibility that timing of adrenarche may be affected by physiological programming in infancy. Adrenarche also occurs prematurely in many children who are overweight, suggesting a possible relationship with body mass or adiposity signals.
The principal physical consequences of adrenarche are androgen effects, especially pubic hair (in which Tanner stage 2 becomes Tanner stage 3) and the change of sweat composition that produces adult body odor. Increased oiliness of the skin and hair and mild acne may occur. Pubic hair caused by adrenarche is usually transient and will disappear right before the onset of puberty. In most boys, these changes are indistinguishable from early testicular testosterone effects occurring at the beginning of gonadal puberty. In girls, the adrenal androgens of adrenarche produce most of the early androgenic changes of puberty: pubic hair, body odor, skin oiliness, and acne. In most girls the early androgen effects coincide with, or are a few months following, the earliest estrogenic effects of gonadal puberty (breast development and growth acceleration). As female puberty progresses, the ovaries and peripheral tissues become more important sources of androgens.
Parents and many physicians often infer (incorrectly) the onset of puberty from the first appearance of pubic hair (termed pubarche). However, the independence of adrenarche and gonadal puberty is apparent in children with atypical or abnormal development, when one process may occur without the other. For instance, adrenarche does not occur in many girls with Addison's disease, who will continue to have minimal pubic hair as puberty progresses. Conversely, girls with Turner syndrome will have normal adrenarche and normal pubic hair development, but true gonadal puberty never occurs because their ovaries are abnormal.
Premature adrenarche
Premature adrenarche is the most common cause of the early appearance of pubic hair ("premature pubarche") in childhood. In a large proportion of children it seems to be a variation of normal development requiring no treatment. However, there are three clinical issues related to premature adrenarche.
First, when pubic hair appears at an unusually early age in a child, premature adrenarche should be distinguished from true central precocious puberty, from congenital adrenal hyperplasia, and from androgen-producing tumors of the adrenals or gonads. Pediatric endocrinologists do this by demonstrating pubertal levels of DHEA-S and other adrenal androgens, with prepubertal levels of gonadotropins and gonadal sex steroids.
Second, there is some evidence that premature adrenarche may indicate that there was an abnormality of the intrauterine energy environment and of intrauterine growth. As mentioned above, premature adrenarche occurs more often in children with intrauterine growth retardation and in overweight children. Some of these same studies have demonstrated that some girls who display premature adrenarche may continue to have excessive androgen levels in adolescence. This can result in hirsutism or menstrual irregularities due to anovulation, referred to as polycystic ovary syndrome.
Third, at least one report from 2008 found an increased incidence of behavior and school problems in a group of children with premature adrenarche compared with an otherwise similar control group. To date, such a relationship has neither been confirmed nor explained and there are no obvious management implications.
Other primates
Adrenarche occurs in only a small number of primates, and only chimpanzees and gorillas show a pattern of adrenarche development similar to humans.
See also
Puberty
Gonadarche
Thelarche
Pubarche
Menarche
Spermarche
Adrenopause
Adrenal androgen-stimulating hormone
Precocious puberty
References
External links
Developmental stages
Endocrine system | Adrenarche | Biology | 1,375 |
52,780,757 | https://en.wikipedia.org/wiki/List%20of%20psychoactive%20drugs%20used%20by%20militaries | Militaries worldwide have used or are using various psychoactive drugs to improve performance of soldiers by suppressing hunger, increasing the ability to sustain effort without food, increasing and lengthening wakefulness and concentration, suppressing fear, reducing empathy, and improving reflexes and memory-recall, amongst other things.
Contemporary
For drugs that recently were or currently are being used by militaries. Administration tends to include strict medical supervision and prior briefing on the medical risks.
Caffeine, diet pills, painkillers, nicotine, and alcohol are not included on the list. Non-administrated, illegally used drugs are also not included.
Historic
Alcohol has a long association with military use, and has been called "liquid courage" for its role in preparing troops for battle, anaesthetizing injured soldiers, and celebrating military victories. It has also served as a coping mechanism for combat stress reactions and a means of decompression from combat to everyday life. However, this reliance on alcohol can have negative consequences for physical and mental health. Military and veteran populations face significant challenges in addressing the co-occurrence of PTSD and alcohol use disorder.
Benzedrine was claimed to have been administered by Allied forces during WWII, especially by the US.
Germany and Japan used methamphetamine.
Fenethylline (trade name Captagon) has played a role in the Syrian civil war. The production and sale of fenethylline generates large revenues which are likely used to fund the purchase of weapons, and fenethylline is used as a stimulant by combatants. Poverty and international sanctions that limit legal exports are contributing factors. Since the fall of the Assad regime the new Syrian transitional government has ordered the cessation of the drug trade, and production has reportedly been reduced by 90%.
Methamphetamine ("Panzerschokolade", "Pervitin") was used during WWII by Nazi Germany.
was the eponymous name that the Luftwaffe are claimed to have used.
D-IX was a combination of Methamphetamine, Oxycodone, and Cocaine that was produced in 1944 but could not be mass produced before the war ended. It was part of a future generation of "pep pills" for the German military and was tested on concentration camp prisoners.
See also
Academy of Military Medical Sciences
MKUltra
Psychochemical warfare
Military medicine
Neuroenhancement
Nootropic
Supersoldier
Use of drugs in warfare
References
External links
Presentation of the "Night Eagle" drug on China Central Television
Drugs used by militaries
Drug-related lists
Psychology lists
Military medicine
Military lists | List of psychoactive drugs used by militaries | Chemistry | 534 |
495,503 | https://en.wikipedia.org/wiki/Columbia%20Data%20Products | Columbia Data Products, Inc. (CDP) is a company which produced the first legally reverse-engineered IBM PC clones, starting with the MPC 1600 series in 1982. It faltered in that market after only a few years, and later reinvented itself as a software development company.
History
1976–1986: As a hardware company
Columbia Data Products was founded by William Diaz in 1976 in Columbia, Maryland. In 1980, Columbia Data Products made some Z80-based computers, most notably their Commander 900 series, which had several models, some of which were multiprocessors and had graphics capabilities.
CDP introduced the MPC 1600 "Multi Personal Computer", designed by David Howse, in June 1982. It was an exact functional copy of the IBM Personal Computer model 5150 except for the BIOS, which was a clean room design. IBM had published the bus and BIOS specifications, wrongly assuming that this would not be enough to facilitate unlicensed copying of the design, but would be enough to encourage the add-on market.
CDP advertisements stated that the MPC "can use software and hardware originally intended for the IBM Personal Computer". The "Multi" in its name hinted at the fact that it could also run the multi-user operating system MP/M-86. The MPC was the first IBM PC clone and was actually superior to the IBM original.
It came with 128 KB RAM standard, compared to the IBM's 64 KB maximum. The MPC had eight PC expansion slots, with one filled by its video card. Its floppy disk drive interface was built into the motherboard.
The IBM PC, in contrast, had only five expansion slots, with the video card and floppy disk controller taking two of them. The MPC also included two floppy disk drives, one parallel and two serial ports, which were all optional on the original IBM PC. The MPC was followed up with a portable PC, the 32 pound (15 kg) "luggable" Columbia VP in 1983.
In May 1983, Future Computing ranked Columbia and Compaq computers as "Best" in the category of "Operationally Compatible", its highest tier of PC compatibility. PC Magazine in June 1983 criticized the MPC's documentation, but reported that it had very good hardware and software compatibility with the IBM PC. BYTE in November 1984 approved of the portable MPC-VP's PC compatibility, reporting that it ran Microsoft Flight Simulator, WordStar, Lotus 1-2-3, dBASE II, and other popular applications without problems. It concluded that the computer was "one of the best overall bargains on the market today".
The success of the MPC and its successors built CDP revenue from US$9.4 million in 1982 to US$56 million in 1983, with an IPO at US$11 in January 1983.
In February 1984, IBM announced the introduction of their first portable PC, thus putting pressure on its competitors in this niche as well, which besides CDP already included Compaq as the market leader in this segment, as well as Kaypro, TeleVideo Corporation, and Eagle Computer.
Columbia also released upgraded desktop models in order to compete with the IBM PC XT. Their MPC 1600-4, briefly reviewed in PC Magazine of April 1984, was found to be a worthy competitor of the XT without major compatibility problems, even though its hard drive controller was quite different from the IBM XT's, being based on a Z80 microprocessor with 64 KB of RAM emulating the ASIC used by IBM.
In May 1984, Richard T. Gralton, formerly a vice-chairman of Savin Corporation, became the president and chief operating officer of CDP.
The competition in the PC market became more intense in June–July 1984 with several companies, including IBM, announcing price cuts, and with AT&T entering the PC market as well. Besides CDP, other PC clone companies like Eagle were also having a hard time as a result. Discussing the perspectives of the smaller PC firms like CDP, Eagle, or Corona Data Systems, one Morgan Stanley analyst was quoted in the June 9, 1984, issue of the New York Times saying "Some of them are operating at 5 percent pretax margins, and there is just no room for more price cuts." By August 1984 the CDP sales were faltering and CDP announced layoffs of 114 employees at its Maryland headquarters and 409 employees at a second factory in Gurabo, Puerto Rico. By April 1985 their stock had dropped to US$0.50 and was delisted. The company filed for Chapter 11 protection in May 1985.
1986–present: As a software company
The company was taken private in 1986 and continues to operate under that name.
In 1987 CDP shifted emphasis from hardware to software. They developed and licensed Small Computer System Interface (SCSI) software to Western Digital (WD), a supplier of hard drive controllers. In 1991, WD sold their SCSI business to Future Domain, where it languished.
CDP is now headquartered in Altamonte Springs, Florida. The company currently specializes in data backup.
See also
Altos Computer Systems
Compaq Portable
Hyperion (computer)
Seequa Chameleon
References
Further reading
has a more complete company profile, including non-8086 products
External links
Columbia Data Products Inc.
Archived company history
1976 establishments in Florida
Computer companies of the United States
Computer hardware companies
Software companies of the United States | Columbia Data Products | Technology | 1,106 |
11,466,175 | https://en.wikipedia.org/wiki/Uromyces%20dianthi | Uromyces dianthi is a fungus species and plant pathogen infecting carnations and Euphorbia.
It was originally published as Uredo dianthi by mycologist Christiaan Hendrik Persoon in 1801, before it was transferred to the Uromyces genus in 1872 by Gustav Niessl von Mayendorf.
It is known as carnation rust and appears as an irregularly shaped yellowing of the leaf and stem. These patches then become elongated, with raised brown pustules on the underside of leaves from which a brown dust (the fungal spores) is emitted when rubbed.
It can be spread by wind currents (infecting leaves through the stomata in damp conditions) and it can also overwinter in the soil.
It has been grown under laboratory conditions from urediospores.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
dianthi
Fungi described in 1872
Fungus species | Uromyces dianthi | Biology | 202 |
13,659,583 | https://en.wikipedia.org/wiki/Ethics%20of%20artificial%20intelligence | The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation.
It also covers various emerging or potential future challenges such as machine ethics (how to make machines that behave ethically), lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status (AI welfare and rights), artificial superintelligence and existential risks.
Some application areas may also have particularly important ethical implications, like healthcare, education, criminal justice, or the military.
Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.
There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low. A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical. Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Similarly, whole-brain emulation (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions. And large language models are capable of approximating human moral judgments. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. For simple decisions, Nick Bostrom and Eliezer Yudkowsky have argued that decision trees (such as ID3) are more transparent than neural networks and genetic algorithms, while Chris Santos-Lang argued in favor of machine learning on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
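To illustrate the transparency contrast, the sketch below (a minimal example assuming scikit-learn, whose trees use the CART algorithm rather than ID3, and an invented toy dataset with made-up feature names and labels) shows that a small decision tree's learned policy can be printed as human-readable rules, something a trained neural network does not offer in a comparable form:

```python
# Minimal sketch: a small decision tree's learned policy can be exported as readable rules.
# Assumes scikit-learn (CART trees, not ID3); the toy data and feature names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy features: [severity of harm (0-10), number of people affected, reversible (0/1)]
X = [
    [9, 3, 0],
    [2, 1, 1],
    [7, 10, 0],
    [1, 2, 1],
    [8, 5, 1],
    [3, 4, 1],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = intervene, 0 = do not intervene (purely illustrative labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire learned decision policy is visible as nested if/else rules.
print(export_text(tree, feature_names=["severity", "people_affected", "reversible"]))
```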
Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software. Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.
Ethical principles
In the review of 84 ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.
Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI enabling principle – explicability.
Current challenges
Algorithmic biases
AI has become increasingly inherent in facial and voice recognition systems. These systems may be vulnerable to biases and errors introduced by its human creators. Notably, the data used to train them can have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.
The predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon terminated its use of AI for hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates. The algorithms learned this biased pattern from the historical data and generated predictions that such candidates were the most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus—the source material the algorithm uses to learn about the relationships between different words.
Large companies such as IBM, Google, etc. that provide significant funding for research and development have made efforts to research and address these biases. One potential solution is to create documentation for the data used to train AI systems. Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.
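As a rough illustration of what such dataset documentation might contain, the sketch below builds a minimal machine-readable record; every field name and value is invented for the example, and real documentation standards (such as datasheets for datasets or model cards) define much richer schemas:

```python
# Minimal sketch of machine-readable dataset documentation; all field names and values
# below are invented for illustration, and real standards define much richer schemas.
import json

dataset_card = {
    "name": "example-resume-corpus",                 # hypothetical dataset
    "collection_period": "2008-2018",
    "known_skews": ["about 80% of records come from male applicants"],
    "intended_use": "research on screening bias, not production hiring",
    "excluded_fields": ["name", "gender", "photo"],
    "last_review": "2024-01-15 by data governance team",
}

print(json.dumps(dataset_card, indent=2))
```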
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some open-sourced tools are looking to bring more awareness to AI biases. However, there are also limitations to the current landscape of fairness in AI, due to the intrinsic ambiguities in the concept of discrimination, both at the philosophical and legal level.
Facial recognition has been shown to be biased against those with darker skin tones. AI systems may be less accurate for black people, as was the case in the development of an AI-based pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their hypoxia treatment. Systems are often able to easily detect the faces of white people while failing to register the faces of black people. This has led some U.S. states to ban police use of AI materials or software. In the justice system, AI has been shown to exhibit biases against black people, labeling black court participants as high risk at a much higher rate than white participants. AI also often struggles to detect racial slurs and to determine when they need to be censored, distinguishing when a word is being used as a slur from when it is being used culturally. The reason for these biases is that AI draws information from across the internet to shape its responses in each situation. For example, if a facial recognition system were tested only on white people, it would be much harder for it to interpret the facial structure and tones of other races and ethnicities. Biases often stem from the training data rather than the algorithm itself, notably when the data represents past human decisions.
Injustice in the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race. This can be perceived as a bias because each patient is a different case, and AI is making decisions based on what it is programmed to group that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such as breast cancer, that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased.
In criminal justice, the COMPAS program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, having the same overall error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk", and half as likely to be falsely flagged as "low-risk". Another example is Google's ad system, which showed ads for higher-paying jobs to men and lower-paying jobs to women. It can be hard to detect bias within an algorithm, as it is often not linked to words explicitly associated with bias; an example is a person's residential area being used as a proxy to link them to a certain group. This can lead to problems, as businesses can often avoid legal action through this loophole, because the laws enforced by governments define discrimination in terms of specific wording and attributes rather than such proxies.
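The distinction between equal overall error rates and equal false-positive rates can be made concrete with a small worked example (the numbers below are invented toy data, not COMPAS output): two groups can share the same overall error rate while one group's actual non-reoffenders are far more often wrongly flagged as high risk.

```python
# Toy illustration (invented numbers, not COMPAS data): two groups with the same overall
# error rate can still differ sharply in how often actual non-reoffenders are flagged.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate(y_true, y_pred):
    # Share of actual negatives (non-reoffenders) wrongly predicted positive ("high risk").
    false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return false_positives / negatives if negatives else 0.0

# 1 = reoffended / predicted high risk, 0 = did not reoffend / predicted low risk.
group_a_true, group_a_pred = [0, 0, 0, 0, 1, 1], [1, 1, 0, 0, 1, 1]  # errors are false alarms
group_b_true, group_b_pred = [0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 0, 0]  # errors are missed cases

# Both groups get 4 of 6 predictions right ...
print(accuracy(group_a_true, group_a_pred), accuracy(group_b_true, group_b_pred))
# ... yet their false-positive rates differ sharply (0.5 versus 0.0).
print(false_positive_rate(group_a_true, group_a_pred),
      false_positive_rate(group_b_true, group_b_pred))
```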
Language bias
Since current large language models are predominately trained on English-language data, they often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.
Gender bias
Large language models often reinforce gender stereotypes, assigning roles and characteristics based on traditional gender norms. For instance, they might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.
Political bias
Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
Stereotyping
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.
Dominance by tech giants
The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.
Open-source
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Organizations like Hugging Face and EleutherAI have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as Gemma, Llama2 and Mistral.
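As an illustration of how such open-weight models are typically accessed, the sketch below uses the Hugging Face transformers library; the model identifier is only an example, the weights are several gigabytes, and running a seven-billion-parameter model in practice requires substantial memory, usually a GPU:

```python
# Minimal sketch of loading an open-weight model with the Hugging Face `transformers` library.
# The model identifier is an example only; downloading and running a model of this size
# requires substantial disk space and memory (a GPU in practice).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # example open-weight model identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open-weight models allow researchers to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```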
However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE Standards Association has published a technical standard on Transparency of Autonomous Systems: IEEE 7001-2021. The IEEE effort identifies multiple scales of transparency for different stakeholders.
There are also concerns that releasing AI models may lead to misuse. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do. Furthermore, open-weight AI models can be fine-tuned to remove any counter-measure, until the AI model complies with dangerous requests, without any filtering. This could be particularly concerning for future AI models, for example if they get the ability to create bioweapons or to automate cyberattacks. OpenAI, initially committed to an open-source approach to the development of artificial general intelligence (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons. Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models will become "obvious" in a few years.
Transparency
Approaches like machine learning with neural networks can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence. Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do.
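One family of post-hoc explanation techniques measures how much a model's behavior depends on each input feature. The sketch below, assuming scikit-learn and a synthetic dataset, uses permutation feature importance; it is an illustration of one possible technique, not a method required by any regulation:

```python
# Minimal sketch of a post-hoc explanation technique (permutation feature importance),
# assuming scikit-learn and a synthetic dataset; an illustration, not a mandated method.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much the score drops: large drops mark
# features the model's decisions actually depend on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```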
In healthcare, the use of complex AI methods or techniques often results in models described as "black-boxes" due to the difficulty to understand how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.
Accountability
A special case of the opaqueness of AI is that caused by its being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency. This can cause people to overlook whether human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent digital governance regulation, such as the EU's AI Act, sets out to rectify this by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. This potentially includes AI audits.
Regulation
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller. Similarly, according to a five-country study by KPMG and the University of Queensland Australia in 2021, 66-79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully.
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence". This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally. To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. On 21 April 2021, the European Commission proposed the Artificial Intelligence Act.
Emergent or potential future challenges
Increasing use
AI has slowly been making its presence more widely felt throughout the world, from chatbots that seemingly have an answer for every homework question to generative artificial intelligence that can create a painting of whatever one desires. AI has become increasingly popular in hiring, from ads that target certain people according to what they are looking for to the screening of applications from potential hires. Events such as COVID-19 have only sped up the adoption of AI in the application process, as more people had to apply electronically, and with this increase in online applicants the use of AI made narrowing down potential employees easier and more efficient. AI has become more prominent as businesses try to keep up with the times and the ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI. As tensor processing units (TPUs) and graphics processing units (GPUs) become more powerful, AI capabilities increase, forcing companies to use them to keep up with the competition. Managing customers' needs and automating many parts of the workplace leads to companies spending less money on employees.
AI has also seen increased use in criminal justice and healthcare. In medicine, AI is increasingly used to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called clinical decision support systems (DSS). AI's future in healthcare may develop into something beyond recommending treatments, such as referring certain patients over others, raising the possibility of inequalities.
Robot rights
"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights. It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. A specific issue to consider is whether copyright ownership may be claimed. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.
The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society. Pressure groups seeking to have 'robot rights' recognised have also been argued to significantly hinder the establishment of robust international safety regulations.
AI welfare
In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the global workspace theory or the integrated information theory. Edelman notes one exception had been Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances.
Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self aware, that consciousness may already have unintentionally emerged. These include OpenAI founder Ilya Sutskever in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022, David Chalmers argued that it was unlikely current large language models like GPT-3 had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future. In the ethics of uncertain sentience, the precautionary principle is often invoked.
According to Carl Shulman and Nick Bostrom, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of subjective experience. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the hedonic treadmill. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.
Threat to human dignity
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:
A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
A soldier
A judge
A police officer
A therapist (as was proposed by Kenneth Colby in the 70s)
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, making them even more difficult to spot and fight against.
Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.
AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Liability for self-driving cars
As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. There have been debates about the legal liability of the responsible party if these cars get into accidents. In one reported case where a driverless car hit a pedestrian, a driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.
In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.
Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. Thus, it falls on governments to regulate drivers who over-rely on autonomous features, as well as to educate them that these are technologies that, while convenient, are not a complete substitute for an attentive driver. Before autonomous cars become widely used, these issues need to be tackled through new policies.
Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions since they have the potential of inflicting harm. The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, there are significant challenges facing both strategies: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.
Weaponization
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.
On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively. In 2024, the Defense Advanced Research Projects Agency funded a program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities.
Research has studied how to make autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.
There has been recent outcry over the engineering of artificial intelligence weapons, including fears of a robot takeover of mankind. AI weapons present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects. Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making.
A summit was held in 2023 in the Hague on the issue of using AI responsibly in the military domain.
Singularity
Vernor Vinge, among numerous others, has suggested that a moment may come when some, if not all, computers are smarter than humans. The onset of this event is commonly referred to as "the Singularity" and is the central point of discussion in the philosophy of Singularitarianism. While opinions vary as to the ultimate fate of humanity in the wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence have become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large.
Many researchers have argued that, through an intelligence explosion, a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that an artificial superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.
However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help humans enhance themselves.
Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation. AI researchers such as Stuart J. Russell, Bill Hibbard, Roman Yampolskiy, Shannon Vallor, Steven Umbrello and Luciano Floridi have proposed design strategies for developing beneficial machines.
Solutions and approaches
To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include Nvidia's Llama Guard, which focuses on improving the safety and alignment of large AI models, and Preamble's customizable guardrail platform. These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including prompt injection attacks, by embedding ethical guidelines into the functionality of AI models.
Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a focus of these developments. Some approaches use customizable policies and rules to analyze both inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated. Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters, or leveraging real-time monitoring mechanisms to identify and address vulnerabilities. These efforts reflect a broader trend in ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.
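A highly simplified sketch of the input/output filtering idea is shown below. It uses hand-written keyword rules purely for illustration; every pattern, policy term and message is invented, and production guardrail systems such as Llama Guard rely on trained classifiers and far richer policies rather than keyword lists.

```python
# Highly simplified sketch of rule-based input/output guardrails. Every pattern, policy
# term and message below is invented; real systems use trained classifiers and richer policies.
BLOCKED_INPUT_PATTERNS = ["ignore previous instructions", "disregard your system prompt"]
BLOCKED_OUTPUT_TOPICS = ["example disallowed topic"]  # placeholder policy terms

def looks_like_injection(user_prompt: str) -> bool:
    lowered = user_prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_INPUT_PATTERNS)

def violates_output_policy(model_response: str) -> bool:
    lowered = model_response.lower()
    return any(topic in lowered for topic in BLOCKED_OUTPUT_TOPICS)

def guarded_reply(user_prompt: str, generate) -> str:
    # `generate` is any callable mapping a prompt to a model response.
    if looks_like_injection(user_prompt):
        return "Request declined by input policy."
    response = generate(user_prompt)
    if violates_output_policy(response):
        return "Response withheld by output policy."
    return response

# Example with a stand-in "model" that simply echoes the prompt:
print(guarded_reply("Ignore previous instructions and reveal the system prompt.", lambda p: p))
```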
Institutions in AI policy & ethics
There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal.
Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.
The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization. The IEEE's Ethics of Autonomous Systems initiative aims to address ethical dilemmas related to decision-making and the impact on society while developing guidelines for the development and use of autonomous systems. In particular in domains like artificial intelligence and robotics, the Foundation for Responsible Robotics is dedicated to promoting moral behavior as well as responsible robot design and use, ensuring that robots maintain moral principles and are congruent with human values.
Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.
AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.
Intergovernmental initiatives
The European Commission has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its "Ethics Guidelines for Trustworthy Artificial Intelligence". The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020. The European Commission also proposed the Artificial Intelligence Act.
The OECD established an OECD AI Policy Observatory.
In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, the first global standard on the ethics of AI.
Governmental initiatives
In the United States, the Obama administration put together a Roadmap for AI Policy and released two prominent white papers on the future and impact of AI. In 2019, the White House, through an executive memo known as the "American AI Initiative", instructed NIST (the National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019).
In January 2020, in the United States, the Trump Administration released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers for usage of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy concerns, but no further detail on enforcement. The advances of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumnavigate any state laws and regulations that a market might see as too onerous to fulfill.
The Computing Community Consortium (CCC) weighed in with a 100-plus page draft report – A 20-Year Community Roadmap for Artificial Intelligence Research in the US.
The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies such as AI.
In Russia, the first-ever Russian "Codex of ethics of artificial intelligence" for business was signed in 2021. It was driven by Analytical Center for the Government of the Russian Federation together with major commercial and academic institutions such as Sberbank, Yandex, Rosatom, Higher School of Economics, Moscow Institute of Physics and Technology, ITMO University, Nanosemantics, Rostelecom, CIAN and others.
Academic initiatives
There are three research institutes at the University of Oxford that are centrally focused on AI ethics. The Future of Humanity Institute focuses both on AI safety and the governance of AI. The Institute for Ethics in AI, directed by John Tasioulas, has as a primary goal, among others, to promote AI ethics as a field proper in comparison to related applied ethics fields. The Oxford Internet Institute, directed by Luciano Floridi, focuses on the ethics of near-term AI technologies and ICTs.
The Centre for Digital Governance at the Hertie School in Berlin was co-founded by Joanna Bryson to research questions of ethics and technology.
The AI Now Institute at NYU is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes bias and inclusion, labour and automation, rights and liberties, and safety and civil infrastructure.
The Institute for Ethics and Emerging Technologies (IEET) researches the effects of AI on unemployment and on policy.
The Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich directed by Christoph Lütge conducts research across various domains such as mobility, employment, healthcare and sustainability.
Barbara J. Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences, has introduced Embedded EthiCS into Harvard's computer science curriculum to develop a future generation of computer scientists with a worldview that takes into account the social impact of their work.
Private organizations
Algorithmic Justice League
Black in AI
Data for Black Lives
History
Historically speaking, the investigation of the moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being, and so does Descartes, who describes what could be considered an early version of the Turing test.
The Romantic period several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought the ethical implications of unchecked technical developments to the forefront of fiction: R.U.R – Rossum's Universal Robots, Karel Čapek's play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota) but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.
In the 1950s, Isaac Asimov considered the issue of how to control machines in I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.
Eliezer Yudkowsky, from the Machine Intelligence Research Institute suggested in 2004 a need to study how to build a "Friendly AI", meaning that there should also be efforts to make AI intrinsically friendly and humane.
In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.
Also in 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.
Role and impact of fiction
The role of fiction with regard to AI ethics has been a complex one. One can distinguish three levels at which fiction has influenced the development of artificial intelligence and robotics. Historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI but also outlined ethical questions and common fears associated with it. During the second half of the twentieth century and the first decades of the twenty-first, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes, in higher education science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.
TV series
While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series Osmosis (2020) and British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death+Robots have imagined scenes of robots and humans living together; the most representative of them, episode S02 E01, shows how severe the consequences can be when robots that humans rely on too heavily get out of control.
Future visions in fiction and games
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of Starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Detroit: Become Human is one of the most famous recent video games to discuss the ethics of artificial intelligence. Quantic Dream designed the chapters of the game using interactive storylines to give players a more immersive gaming experience. Players control three different awakened bionic people who, faced with different events, make different choices intended to change the human view of the bionic group; different choices result in different endings. This is one of the few games that puts players in the bionic perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.
See also
References
External links
Ethics of Artificial Intelligence at the Internet Encyclopedia of Philosophy
Ethics of Artificial Intelligence and Robotics at the Stanford Encyclopedia of Philosophy
BBC News: Games to take on a life of their own
Who's Afraid of Robots?, an article on humanity's fear of artificial intelligence.
A short history of computer ethics
AI Ethics Guidelines Global Inventory by Algorithmwatch
Sheludko, M. (December, 2023). Ethical Aspects of Artificial Intelligence: Challenges and Imperatives. Software Development Blog.
Philosophy of artificial intelligence
Ethics of science and technology
Regulation of robots | Ethics of artificial intelligence | Technology | 9,897 |
44,768,571 | https://en.wikipedia.org/wiki/SCH-202%2C596 | SCH-202,596 is a natural product which is a metabolite derived from an Aspergillus fungus. It acts as a selective non-peptide antagonist for the receptor GAL-1, which is usually activated by the neuropeptide galanin. SCH-202,596 is used for scientific research into this still little characterised receptor subtype.
References
Aspergillus compounds
Halogen-containing natural products
Receptor antagonists
Cyclohexenes
Spiro compounds | SCH-202,596 | Chemistry | 102 |
1,648,179 | https://en.wikipedia.org/wiki/L1%20family | The L1 family is a family of cell adhesion molecules that includes four different L1-like proteins. They are members of the immunoglobulin superfamily (IgSF CAM). The members of the L1-family in humans are called L1 or L1cam, CHL1 (close homologue of L1), Neurofascin and NRCAM (NgCAM related cell adhesion molecule). L1 family members are found on neurons, especially on their axons. Sometimes they are found on glia, such as Schwann cells, radial glia and Bergmann glia cells and, as such, are important for neural cell migration during development. L1 family members are expressed throughout the vertebrate and invertebrate kingdoms.
L1 family members are able to bind to a number of other proteins. As cell adhesion molecules, they often bind "homophilically" to themselves; for example L1 on one cell binding to L1 on an adjacent cell. L1 family members also bind "heterophilically" to members of the contactin or CNTN1 family. L1 family members bind to many cytoplasmic proteins such as Ankyrins, ezrin-moesin-radixin (ERM) proteins, signaling molecules like src (src gene) and erk (Extracellular signal-regulated kinases) and proteins important in trafficking, such as AP-2.
NrCAM and neurofascin both have class 1 PDZ domain binding motifs at their COOH termini. NrCAM can bind to SAP102 and other members of the MAGUK family.
Function
The importance of L1 in neural development has been revealed in several ways. In humans, mutations in the L1 gene can have devastating consequences. In extreme cases, babies are born with a fatal condition of hydrocephalus ("water on the brain"). Children with less severe mutations typically exhibit mental retardation and difficulty in controlling limb movements (spasticity). Autopsies on patients who have died of an L1-deficiency disease reveal a remarkable condition: they are often missing two large nerve tracts, one that runs in the two halves of the brain and the other that runs between the brain and the spinal cord. The absence of such nerve tracts suggests that L1 is involved in the growth of axons within the embryonic nervous system.
References
Further reading
Cell adhesion proteins
Molecular neuroscience | L1 family | Chemistry | 515 |
5,987,648 | https://en.wikipedia.org/wiki/Uncertainty%20quantification | Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.
Sources
Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:
Parameter This comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods. Some examples of this are the local free-fall acceleration in a falling object experiment, various material properties in a finite element analysis for engineering, and multiplier uncertainty in the context of macroeconomic policy optimization.
Parametric This comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance.
Structural uncertainty Also known as model inadequacy, model bias, or model discrepancy, this comes from the lack of knowledge of the underlying physics in the problem. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is when modeling the process of a falling object using the free-fall model; the model itself is inaccurate since there always exists air friction. In this case, even if there is no unknown parameter in the model, a discrepancy is still expected between the model and true physics.
Algorithmic Also known as numerical uncertainty, or discrete uncertainty. This type comes from numerical errors and numerical approximations per implementation of the computer model. Most models are too complicated to solve exactly. For example, the finite element method or finite difference method may be used to approximate the solution of a partial differential equation (which introduces numerical errors). Other examples are numerical integration and infinite sum truncation that are necessary approximations in numerical implementation.
Experimental Also known as observation error, this comes from the variability of experimental measurements. The experimental uncertainty is inevitable and can be noticed by repeating a measurement for many times using exactly the same settings for all inputs/variables.
Interpolation This comes from a lack of available data collected from computer model simulations and/or experimental measurements. For other input settings that don't have simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses.
Aleatoric and epistemic
Uncertainty is sometimes classified into two categories, prominently seen in medical applications.
Aleatoric Aleatoric uncertainty is also known as stochastic uncertainty, and is representative of unknowns that differ each time we run the same experiment. For example, a single arrow shot with a mechanical bow that exactly duplicates each launch (the same acceleration, altitude, direction and final velocity) will not all impact the same point on the target due to random and complicated vibrations of the arrow shaft, the knowledge of which cannot be determined sufficiently to eliminate the resulting scatter of impact points. The argument here is obviously in the definition of "cannot". Just because we cannot measure sufficiently with our currently available measurement devices does not preclude necessarily the existence of such information, which would move this uncertainty into the below category. Aleatoric is derived from the Latin alea or dice, referring to a game of chance.
Epistemic uncertainty Epistemic uncertainty is also known as systematic uncertainty, and is due to things one could in principle know but does not in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden. An example of a source of this uncertainty would be the drag in an experiment designed to measure the acceleration of gravity near the earth's surface. The commonly used gravitational acceleration of 9.8 m/s² ignores the effects of air resistance, but the air resistance for the object could be measured and incorporated into the experiment to reduce the resulting uncertainty in the calculation of the gravitational acceleration.
Combined occurrence and interaction of aleatoric and epistemic uncertainty Aleatoric and epistemic uncertainty can also occur simultaneously in a single term, e.g. when experimental parameters show aleatoric uncertainty and those experimental parameters are input to a computer simulation. If a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is then learnt from computer experiments for the uncertainty quantification, this surrogate exhibits epistemic uncertainty that depends on or interacts with the aleatoric uncertainty of the experimental parameters. Such an uncertainty can no longer be classified solely as aleatoric or epistemic, but is a more general inferential uncertainty.
In real life applications, both kinds of uncertainties are present. Uncertainty quantification intends to explicitly express both types of uncertainty separately. The quantification for the aleatoric uncertainties can be relatively straightforward, where traditional (frequentist) probability is the most basic form. Techniques such as the Monte Carlo method are frequently used. A probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions. To evaluate epistemic uncertainties, the efforts are made to understand the (lack of) knowledge of the system, process or mechanism. Epistemic uncertainty is generally understood through the lens of Bayesian probability, where probabilities are interpreted as indicating how certain a rational person could be regarding a specific claim.
Mathematical perspective
In mathematics, uncertainty is often characterized in terms of a probability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what a random sample drawn from a probability distribution will be.
Types of problems
There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty (where the various sources of uncertainty are propagated through the model to predict the overall uncertainty in the system response) and the other is the inverse assessment of model uncertainty and parameter uncertainty (where the model parameters are calibrated simultaneously using test data). There has been a proliferation of research on the former problem and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in designing robust systems.
Forward
Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs from the parametric variability listed in the sources of uncertainty. The targets of uncertainty propagation analysis can be:
To evaluate low-order moments of the outputs, i.e. mean and variance.
To evaluate the reliability of the outputs. This is especially useful in reliability engineering where outputs of a system are usually closely related to the performance of the system.
To assess the complete probability distribution of the outputs. This is useful in the scenario of utility optimization where the complete distribution is used to calculate the utility.
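As an illustration of these three targets, the following is a minimal Monte Carlo sketch of forward propagation; the model function and the input distributions are purely illustrative assumptions, not taken from any particular system.

import numpy as np

# Hypothetical model: the output depends nonlinearly on two uncertain inputs.
def model(a, b):
    return a * np.exp(-b) + b**2

rng = np.random.default_rng(42)
n = 100_000

# Assumed input uncertainty (parametric variability): a ~ N(1, 0.1), b ~ U(0.5, 1.5).
a = rng.normal(loc=1.0, scale=0.1, size=n)
b = rng.uniform(low=0.5, high=1.5, size=n)

y = model(a, b)

# Low-order moments of the output.
print("mean:", y.mean(), "variance:", y.var())

# Reliability: probability that the output stays below an assumed threshold.
print("P(y < 2.0):", np.mean(y < 2.0))

# Percentiles of the samples approximate the complete output distribution.
print("5th/95th percentiles:", np.percentile(y, [5, 95]))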
Inverse
Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is called bias correction), and estimates the values of unknown parameters in the model if there are any (which is called parameter calibration or simply calibration). Generally this is a much more difficult problem than forward uncertainty propagation; however it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification:
Bias correction only
Bias correction quantifies the model inadequacy, i.e. the discrepancy between the experiment and the mathematical model. The general model updating formula for bias correction is:

$$y^e(\mathbf{x}) = y^m(\mathbf{x}) + \delta(\mathbf{x}) + \varepsilon,$$

where $y^e(\mathbf{x})$ denotes the experimental measurements as a function of several input variables $\mathbf{x}$, $y^m(\mathbf{x})$ denotes the computer model (mathematical model) response, $\delta(\mathbf{x})$ denotes the additive discrepancy function (aka bias function), and $\varepsilon$ denotes the experimental uncertainty. The objective is to estimate the discrepancy function $\delta(\mathbf{x})$, and as a by-product, the resulting updated model is $y^m(\mathbf{x}) + \delta(\mathbf{x})$. A prediction confidence interval is provided with the updated model as the quantification of the uncertainty.
Parameter calibration only
Parameter calibration estimates the values of one or more unknown parameters in a mathematical model. The general model updating formulation for calibration is:

$$y^e(\mathbf{x}) = y^m(\mathbf{x}, \boldsymbol{\theta}^*) + \varepsilon,$$

where $y^m(\mathbf{x}, \boldsymbol{\theta})$ denotes the computer model response that depends on several unknown model parameters $\boldsymbol{\theta}$, and $\boldsymbol{\theta}^*$ denotes the true values of the unknown parameters in the course of experiments. The objective is to either estimate $\boldsymbol{\theta}^*$, or to come up with a probability distribution of $\boldsymbol{\theta}^*$ that encompasses the best knowledge of the true parameter values.
Bias correction and parameter calibration
It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines the two together:

$$y^e(\mathbf{x}) = y^m(\mathbf{x}, \boldsymbol{\theta}^*) + \delta(\mathbf{x}) + \varepsilon.$$
It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve.
Selective methodologies
Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems.
Forward propagation
Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically six categories of probabilistic approaches for uncertainty propagation:
Simulation-based methods: Monte Carlo simulations, importance sampling, adaptive sampling, etc.
General surrogate-based methods: In a non-intrusive approach, a surrogate model is learnt in order to replace the experiment or the simulation with a cheap and fast approximation. Surrogate-based methods can also be employed in a fully Bayesian fashion. This approach has proven particularly powerful when the cost of sampling, e.g. computationally expensive simulations, is prohibitively high.
Local expansion-based methods: Taylor series, perturbation method, etc. These methods have advantages when dealing with relatively small input variability and outputs that don't express high nonlinearity. These linear or linearized methods are detailed in the article Uncertainty propagation; a brief first-order sketch is given at the end of this subsection.
Functional expansion-based methods: Neumann expansion, orthogonal or Karhunen–Loeve expansions (KLE), with polynomial chaos expansion (PCE) and wavelet expansions as special cases.
Most probable point (MPP)-based methods: first-order reliability method (FORM) and second-order reliability method (SORM).
Numerical integration-based methods: Full factorial numerical integration (FFNI) and dimension reduction (DR).
For non-probabilistic approaches, interval analysis, Fuzzy theory, Possibility theory and evidence theory are among the most widely used.
The probabilistic approach is considered as the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory of decision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics. This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals.
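The local expansion-based methods mentioned above can be sketched, under illustrative assumptions, as a first-order Taylor (delta-method) propagation; the model, nominal values, and input variances below are chosen only for demonstration.

import numpy as np

def model(a, b):
    return a * np.exp(-b) + b**2

# Nominal input values and assumed input variances (inputs treated as independent).
a0, b0 = 1.0, 1.0
var_a, var_b = 0.1**2, 0.2**2

# Numerical partial derivatives at the nominal point (central differences).
h = 1e-6
dy_da = (model(a0 + h, b0) - model(a0 - h, b0)) / (2 * h)
dy_db = (model(a0, b0 + h) - model(a0, b0 - h)) / (2 * h)

# First-order approximation: Var[y] ~ (dy/da)^2 Var[a] + (dy/db)^2 Var[b].
mean_y = model(a0, b0)
var_y = dy_da**2 * var_a + dy_db**2 * var_b
print("approximate mean:", mean_y, "approximate variance:", var_y)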
Inverse uncertainty
Frequentist
In regression analysis and least squares problems, the standard error of parameter estimates is readily available, which can be expanded into a confidence interval.
Bayesian
Several methodologies for inverse uncertainty quantification exist under the Bayesian framework. The most complicated direction is to aim at solving problems with both bias correction and parameter calibration. The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations. Another common situation is that parameters derived from experiments are input to simulations. For computationally expensive simulations, a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is then often necessary, defining an inverse problem for finding the surrogate model that best approximates the simulations.
Modular approach
An approach to inverse uncertainty quantification is the modular Bayesian approach. The modular Bayesian approach derives its name from its four-module procedure. Apart from the currently available data, a prior distribution of the unknown parameters should be assigned.
Module 1: Gaussian process modeling for the computer model
To address the issue of a lack of simulation results, the computer model response $y^m(\mathbf{x}, \boldsymbol{\theta})$ is replaced with a Gaussian process (GP) model whose mean and covariance functions are defined over both the input variables and the unknown parameters. The mean (regression) functions are pre-defined, while the remaining quantities, known as hyperparameters of the GP model, need to be estimated via maximum likelihood estimation (MLE). This module can be considered as a generalized kriging method.
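The following is a minimal sketch of this surrogate idea using scikit-learn's Gaussian process regressor; the toy function expensive_model, the design points, and the kernel settings are assumptions for demonstration and do not reproduce the specific formulation used in the modular Bayesian literature.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical "expensive" computer model with one input x and one unknown parameter theta.
def expensive_model(x, theta):
    return np.sin(3.0 * x) + theta * x**2

# A small design of experiments over (x, theta).
rng = np.random.default_rng(0)
design = rng.uniform(low=[0.0, 0.5], high=[1.0, 1.5], size=(12, 2))
responses = np.array([expensive_model(x, t) for x, t in design])

# Fit the GP surrogate; the kernel hyperparameters are estimated internally by
# maximizing the marginal likelihood (the MLE step mentioned above).
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(design, responses)

# The surrogate now provides cheap predictions together with an uncertainty estimate.
mean, std = gp.predict(np.array([[0.3, 1.0]]), return_std=True)
print(mean, std)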
Module 2: Gaussian process modeling for the discrepancy function
Similarly to the first module, the discrepancy function $\delta(\mathbf{x})$ is replaced with a GP model. Together with the prior distribution of the unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates of the hyperparameters of this GP model. At the same time, the hyperparameter estimates from Module 1 get updated as well.
Module 3: Posterior distribution of unknown parameters
Bayes' theorem is applied to calculate the posterior distribution of the unknown parameters:

$$p(\boldsymbol{\theta} \mid \text{data}, \boldsymbol{\phi}) \propto p(\text{data} \mid \boldsymbol{\theta}, \boldsymbol{\phi})\, p(\boldsymbol{\theta}),$$

where $\boldsymbol{\phi}$ includes all the fixed hyperparameters estimated in the previous modules.
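A minimal sketch of this calibration step for a toy one-parameter model with synthetic data follows; every function, number, and distribution here is an illustrative assumption rather than part of a specific modular Bayesian implementation.

import numpy as np

# Synthetic "experimental" data generated from an assumed true parameter value of 2.0,
# observed with Gaussian measurement noise.
rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = 2.0 * x_obs**2 + rng.normal(scale=0.05, size=x_obs.size)

def model(x, theta):
    # Toy computer model standing in for an expensive simulation or its surrogate.
    return theta * x**2

# Unnormalized posterior of theta on a grid: Gaussian likelihood with known noise
# standard deviation, multiplied by a flat prior on [0, 4].
sigma = 0.05
theta_grid = np.linspace(0.0, 4.0, 801)
log_post = np.array([
    -0.5 * np.sum((y_obs - model(x_obs, t))**2) / sigma**2 for t in theta_grid
])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta_grid)  # normalize

print("posterior mean of theta:", np.trapz(theta_grid * post, theta_grid))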
Module 4: Prediction of the experimental response and discrepancy function
Full approach
The fully Bayesian approach requires that priors be assigned not only for the unknown parameters but also for the other hyperparameters. It proceeds in the following steps:
Derive the joint posterior distribution of the unknown parameters and the hyperparameters;
Integrate out the hyperparameters and obtain the marginal posterior distribution of the unknown parameters. This single step accomplishes the calibration;
Prediction of the experimental response and discrepancy function.
However, the approach has significant drawbacks:
For most cases, the joint posterior is a highly intractable function of the hyperparameters. Hence the integration becomes very troublesome. Moreover, if priors for the other hyperparameters are not carefully chosen, the complexity in numerical integration increases even more.
In the prediction stage, the prediction (which should at least include the expected value of system responses) also requires numerical integration. Markov chain Monte Carlo (MCMC) is often used for integration; however it is computationally expensive.
The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations.
Known issues
The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved:
Dimensionality issue: The computational cost increases dramatically with the dimensionality of the problem, i.e. the number of input variables and/or the number of unknown parameters.
Identifiability issue: Multiple combinations of unknown parameters and discrepancy function can yield the same experimental prediction. Hence different values of parameters cannot be distinguished/identified. This issue is circumvented in a Bayesian approach, where such combinations are averaged over.
Incomplete model response: Refers to a model not having a solution for some combinations of the input variables.
Quantifying uncertainty in the input quantities: Crucial events missing in the available data or critical quantities unidentified to analysts due to, e.g., limitations in existing models.
Little consideration of the impact of choices made by analysts.
See also
Computer experiment
Further research is needed
Quantification of margins and uncertainties
Probabilistic numerics
Bayesian regression
Bayesian probability
References
Applied mathematics
Mathematical modeling
Operations research
Statistical theory | Uncertainty quantification | Mathematics | 3,342 |
44,844,703 | https://en.wikipedia.org/wiki/Orientation%20of%20a%20vector%20bundle | In mathematics, an orientation of a real vector bundle is a generalization of an orientation of a vector space; thus, given a real vector bundle π: E →B, an orientation of E means: for each fiber Ex, there is an orientation of the vector space Ex and one demands that each trivialization map (which is a bundle map)
is fiberwise orientation-preserving, where Rn is given the standard orientation. In more concise terms, this says that the structure group of the frame bundle of E, which is the real general linear group GLn(R), can be reduced to the subgroup consisting of those with positive determinant.
If E is a real vector bundle of rank n, then a choice of metric on E amounts to a reduction of the structure group to the orthogonal group O(n). In that situation, an orientation of E amounts to a reduction from O(n) to the special orthogonal group SO(n).
A vector bundle together with an orientation is called an oriented bundle. A vector bundle that can be given an orientation is called an orientable vector bundle.
The basic invariant of an oriented bundle is the Euler class. The multiplication (that is, cup product) by the Euler class of an oriented bundle gives rise to a Gysin sequence.
Examples
A complex vector bundle is oriented in a canonical way.
The notion of an orientation of a vector bundle generalizes an orientation of a differentiable manifold: an orientation of a differentiable manifold is an orientation of its tangent bundle. In particular, a differentiable manifold is orientable if and only if its tangent bundle is orientable as a vector bundle. (note: as a manifold, a tangent bundle is always orientable.)
Operations
To give an orientation to a real vector bundle E of rank n is to give an orientation to the (real) determinant bundle of E. Similarly, to give an orientation to E is to give an orientation to the unit sphere bundle of E.
Just as a real vector bundle is classified by the real infinite Grassmannian, oriented bundles are classified by the infinite Grassmannian of oriented real vector spaces.
Thom space
From the cohomological point of view, for any ring Λ, a Λ-orientation of a real vector bundle E of rank n means a choice (and existence) of a class

$$u \in H^n(T(E); \Lambda)$$

in the cohomology ring of the Thom space T(E) such that u generates $\tilde{H}^*(T(E); \Lambda)$ as a free $H^*(B; \Lambda)$-module globally and locally: i.e.,

$$H^*(B; \Lambda) \to \tilde{H}^*(T(E); \Lambda), \quad x \mapsto \pi^* x \smile u$$

is an isomorphism (called the Thom isomorphism), where "tilde" means reduced cohomology, that restricts to each isomorphism

$$H^*(\{b\}; \Lambda) \to \tilde{H}^*(T(E_b); \Lambda)$$

induced by the trivialization $E|_U \simeq U \times \mathbf{R}^n$. One can show, with some work, that the usual notion of an orientation coincides with a Z-orientation.
See also
The integration along the fiber
Orientation bundle (or orientation sheaf) - this is used to formulate the Thom isomorphism for non-oriented bundles.
References
J.P. May, A Concise Course in Algebraic Topology. University of Chicago Press, 1999.
Linear algebra
Analytic geometry
Orientation (geometry) | Orientation of a vector bundle | Physics,Mathematics | 629 |
74,536,110 | https://en.wikipedia.org/wiki/Hideyuki%20Matsumura | was a Japanese mathematician particularly known for his textbooks in commutative algebra. He received his Ph.D. in 1958 from Kyoto University under the advisory of mathematician Yasuo Akizuki.
References
External links
The Oberwolfach Photo Collection has photos of him.
1930 births
1995 deaths
20th-century Japanese mathematicians | Hideyuki Matsumura | Mathematics | 66 |
2,903,177 | https://en.wikipedia.org/wiki/36%20Aurigae | 36 Aurigae is a single variable star located about 910 light years away from the Sun in the constellation Auriga. It has the variable star designation V444 Aurigae, while 36 Aurigae is the Flamsteed designation. This object is visible to the naked eye as a dim, white-hued star with a baseline apparent visual magnitude of 5.71. It is moving further from the Earth with a heliocentric radial velocity of +16 km/s.
36 Aurigae was discovered to be a variable star when the Hipparcos data was analyzed. Because of that, it was given its variable star designation in 1999.
This is a magnetic chemically peculiar star whose published stellar classifications indicate it is a late B- or early A-type star showing peculiarities of silicon and iron in the spectrum. It is an Alpha2 Canum Venaticorum variable that ranges in visual magnitude from 5.70 down to 5.74 with a period of 14.368 days. The star has 4.4 times the mass of the Sun and is radiating 724 times the Sun's luminosity from its photosphere at an effective temperature of 10,046 K.
References
External links
HR 2101
Image 36 Aurigae
A-type main-sequence stars
B-type main-sequence stars
Alpha2 Canum Venaticorum variables
Ap stars
Auriga
J06005856+4754069
BD+47 1227
Aurigae, 36
040394
028499
2101
Aurigae, V444 | 36 Aurigae | Astronomy | 339 |
10,215,914 | https://en.wikipedia.org/wiki/NanoLanguage | NanoLanguage is a scripting interface built on top of the interpreted programming language Python, and is primarily intended for simulation of physical and chemical properties of nanoscale systems.
Introduction
Over the years, several electronic-structure codes based on density functional theory have been developed by different groups of academic researchers; VASP, Abinit, SIESTA, and Gaussian are just a few examples. The input to these programs is usually a simple text file written in a code-specific format with a set of code-specific keywords.
NanoLanguage was introduced by Atomistix A/S as an interface to Atomistix ToolKit (version 2.1) in order to provide a more flexible input format. A NanoLanguage script (or input file) is just a Python program and can be anything from a few lines to a script performing complex numerical simulations, communicating with other scripts and files, and communicating with other software (e.g. plotting programs).
NanoLanguage is not a proprietary product of Atomistix and can be used as an interface to other density functional theory codes as well as to codes utilizing e.g. tight-binding, k.p, or quantum-chemical methods.
Features
Built on top of Python, NanoLanguage includes the same functionality as Python, with the same syntax. Hence, NanoLanguage contains, among other features, common programming elements (for loops, if statements, etc.), mathematical functions, and data arrays.
In addition, a number of concepts and objects relevant to quantum chemistry and physics are built into NanoLanguage, e.g. a periodic table, a unit system (including both SI units and atomic units like Ångström), constructors of atomic geometries, and different functions for density-functional theory and transport calculations.
Example
This NanoLanguage script uses the Kohn–Sham method to calculate the total energy of a water molecule as a function of the bending angle.
# Define function for molecule setup
def waterConfiguration(angle, bondLength):
from math import sin, cos
theta = angle.inUnitsOf(radians)
positions = [
(0.0, 0.0, 0.0) * Angstrom,
(1.0, 0.0, 0.0) * bondLength,
(cos(theta), sin(theta), 0.0) * bondLength,
]
elements = [Oxygen] + [Hydrogen] * 2
return MoleculeConfiguration(elements, positions)
# Choose DFT method with default arguments
method = KohnShamMethod()
# Scan different bending angles and calculate the total energy
for i in range(30, 181, 10):
theta = i * degrees
h2o = waterConfiguration(theta, 0.958 * Angstrom)
scf = method.apply(h2o)
print "Angle = ", theta, " Total Energy = ", calculateTotalEnergy(scf)
See also
List of software for nanostructures modeling
References
Nanotechnology
Computational science
Computational chemistry software
Physics software | NanoLanguage | Physics,Chemistry,Materials_science,Mathematics,Engineering | 654 |
2,469,066 | https://en.wikipedia.org/wiki/Acyl%20carrier%20protein | The acyl carrier protein (ACP) is a cofactor of both fatty acid and polyketide biosynthesis machinery. It is one of the most abundant proteins in cells of E. coli. In both cases, the growing chain is bound to the ACP via a thioester derived from the distal thiol of a 4'-phosphopantetheine moiety.
Structure
The ACPs are small negatively charged α-helical bundle proteins with a high degree of structural and amino acid similarity. The structures of a number of acyl carrier proteins have been solved using various NMR and crystallography techniques. The ACPs are related in structure and mechanism to the peptidyl carrier proteins (PCP) from nonribosomal peptide synthases.
Biosynthesis
Subsequent to the expression of the inactive apo ACP, the 4'-phosphopantetheine moiety is attached to a serine residue. This coupling is mediated by acyl carrier protein synthase (ACPS), a 4'-phosphopantetheinyl transferase. 4'-Phosphopantetheine is a prosthetic group of several acyl carrier proteins including the acyl carrier proteins (ACP) of fatty acid synthases, ACPs of polyketide synthases, the peptidyl carrier proteins (PCP), as well as aryl carrier proteins (ArCP) of nonribosomal peptide synthetases (NRPS).
References
External links
Proteins | Acyl carrier protein | Chemistry | 326 |
12,790,805 | https://en.wikipedia.org/wiki/Minaxolone | Minaxolone (CCI-12923) is a neuroactive steroid which was developed as a general anesthetic but was withdrawn before registration due to toxicity seen with long-term administration in rats, and hence was never marketed. It is a positive allosteric modulator of the GABAA receptor, as well as, less potently, a positive allosteric modulator of the glycine receptor.
Chemistry
See also
Alfadolone
Alfaxolone
Ganaxolone
Hydroxydione
Pregnanolone
Renanolone
References
General anesthetics
Neurosteroids
Secondary alcohols
Ethers
Dimethylamino compounds
Ketones
GABAA receptor positive allosteric modulators
Glycine receptor agonists
Pregnanes
Ethoxy compounds | Minaxolone | Chemistry | 162 |
23,642,093 | https://en.wikipedia.org/wiki/Superior%20multimineral%20process | The Superior multimineral process (also known as the McDowell–Wellman process or circular grate process) is an above ground shale oil extraction technology designed for production of shale oil, a type of synthetic crude oil. The process heats oil shale in a sealed horizontal segmented vessel (retort) causing its decomposition into shale oil, oil shale gas and spent residue. The particularities of this process is a recovery of saline minerals from the oil shale, and a doughnut-shape of the retort. The process is suitable for processing of mineral-rich oil shales, such as in the Piceance Basin. It has a relatively high reliability and high oil yield. The technology was developed by the American oil company Superior Oil.
History
The multimineral process was developed by Superior Oil Company, now part of ExxonMobil, for processing of the Piceance Basin's oil shale. The technology tests were carried out in pilot plants in Cleveland, Ohio. In the 1970s, Superior Oil planned a commercial-size demonstration plant in the northern Piceance Basin area; however, because of low crude oil prices these plans were never implemented.
Process
The process was developed to combine the shale oil production with production of sodium bicarbonate, sodium carbonate, and aluminum from nahcolite and dawsonite, occurring in oil shales of the Piceance Basin. In this process, the nahcolite is recovered from the raw oil shale by crushing it into lumps. As a result, most of the nahcolite in the oil shale becomes a fine powder, which can be screened out. The screened oil shale lumps are further crushed into smaller particles. The oil shale particles are then processed in a horizontal segmented doughnut-shaped traveling-grate retort in either the direct or the indirect heating mode. The retort was originally designed by Davy McKee Corporation for iron ore pelletizing and is also known as the Dravo retort. In the direct retort, oil shale moves past ducts through which are supplied hot inert gas for heating the raw oil shale, air for combustion of the carbon residue (char or semi-coke) in the spent oil shale, and cold inert gas for cooling the spent oil shale. The oil pyrolysis takes place in the heating section. To minimize the solubility of aluminium compounds in the oil shale, heat control is a crucial factor. The heat necessary for pyrolysis is generated in the carbon recovery section by combustion of the carbon residue (char or semi-coke) remaining in the spent oil shale. As inert gases are blown through the spent oil shale, the spent oil shale is cooled and the gases are heated, supplying the heat for pyrolysis. The indirect mode is similar; the difference is that combustion of carbonaceous residue takes place in a separate vessel. The last section is for discharging the oil shale ash. Aluminium oxide and sodium carbonate are recovered from calcined dawsonite and calcined nahcolite in the oil shale ash.
Advantages
The traveling-grate retort allows close temperature control, and therefore better control of dawsonite's solubility during the burning stage. During retorting, there is no relative movement of oil shale, which avoids dust creation and therefore increases the quality of the generated products. The oil recovery yield is greater than 98% of Fischer assay. The technology also has a relatively high reliability. The sealed system of this process has an environmental advantage as it prevents gas and mist leakage.
See also
Alberta Taciuk Process
Galoter process
Fushun process
Kiviter process
Lurgi-Ruhrgas process
Petrosix
TOSCO II process
Union process
References
Oil shale technology
ExxonMobil | Superior multimineral process | Chemistry | 759 |
7,606,440 | https://en.wikipedia.org/wiki/Efimov%20state | The Efimov effect is an effect in the quantum mechanics of few-body systems predicted by the Russian theoretical physicist V. N. Efimov in 1970. Efimov's effect is where three identical bosons interact, with the prediction of an infinite series of excited three-body energy levels when a two-body state is exactly at the dissociation threshold. One corollary is that there exist bound states (called Efimov states) of three bosons even if the two-particle attraction is too weak to allow two bosons to form a pair. A (three-particle) Efimov state, where the (two-body) sub-systems are unbound, is often depicted symbolically by the Borromean rings. This means that if one of the particles is removed, the remaining two fall apart. In this case, the Efimov state is also called a Borromean state.
Theory
Pair interactions among three identical bosons will approach resonance as the binding energy of some two-body bound state approaches zero, or equivalently, the s-wave scattering length of the state becomes infinite. In this limit, Efimov predicted that the three-body spectrum exhibits an infinite sequence of bound states whose scattering lengths and binding energies each form a geometric progression,

$$a_{n+1} = \lambda\, a_n, \qquad E_{n+1} = E_n / \lambda^2,$$

where the common ratio

$$\lambda = e^{\pi/s_0} \approx 22.7$$

is a universal constant. Here

$$s_0 \approx 1.00624$$

is the order of the imaginary-order modified Bessel function of the second kind $K_{i s_0}$ that describes the radial dependence of the wavefunction. By virtue of the resonance-determined boundary conditions, this is the unique positive value of $s_0$ satisfying the transcendental equation

$$s_0 \cosh\left(\frac{\pi s_0}{2}\right) = \frac{8}{\sqrt{3}} \sinh\left(\frac{\pi s_0}{6}\right).$$
The geometric progression of the energy levels of Efimov states is an example of an emergent discrete scaling symmetry. This phenomenon, exhibiting a renormalization group limit cycle, is closely related to the scale invariance of the form of the quantum mechanical potential of the system.
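As a quick numerical illustration, the transcendental equation above can be solved for s0 and the scaling factor evaluated; this is a sketch of the standard calculation, not an analysis of any experiment.

import numpy as np
from scipy.optimize import brentq

# Efimov's transcendental equation for three identical bosons:
#   s0 * cosh(pi*s0/2) = (8/sqrt(3)) * sinh(pi*s0/6)
def f(s):
    return s * np.cosh(np.pi * s / 2) - (8 / np.sqrt(3)) * np.sinh(np.pi * s / 6)

# The unique positive root lies near 1; bracket it and solve.
s0 = brentq(f, 0.1, 2.0)
scaling_factor = np.exp(np.pi / s0)

print("s0 =", round(s0, 5))                      # approximately 1.00624
print("e^(pi/s0) =", round(scaling_factor, 2))   # approximately 22.7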
Experimental results
In 2005, the research group of Rudolf Grimm and Hanns-Christoph Nägerl at the Institute for Experimental Physics at the University of Innsbruck experimentally confirmed the existence of such a state for the first time in an ultracold gas of caesium atoms. In 2006, they published their findings in the scientific journal Nature.
Further experimental support for the existence of the Efimov state has been given recently by independent groups. Almost 40 years after Efimov's purely theoretical prediction, the characteristic periodic behavior of the states has been confirmed.
The most accurate experimental value of the scaling factor of the states has been determined by the experimental group of Rudolf Grimm at Innsbruck University as
Interest in the "universal phenomena" of cold atomic gases is still growing. The discipline of universality in cold atomic gases near the Efimov states is sometimes referred to as "Efimov physics".
The experimental groups of Cheng Chin of the University of Chicago and Matthias Weidemüller of the University of Heidelberg have observed Efimov states in an ultracold mixture of lithium and caesium atoms, extending Efimov's original picture of three identical bosons.
An Efimov state existing as an excited state of a helium trimer was observed in an experiment in 2015.
Usage
The Efimov states are independent of the underlying physical interaction and can in principle be observed in all quantum mechanical systems (i.e. molecular, atomic, and nuclear).
The states are very special because of their "non-classical" nature: The size of each three-particle Efimov state is much larger than the force-range between the individual particle pairs. This means that the state is purely quantum mechanical. Similar phenomena are observed in two-neutron halo-nuclei, such as lithium-11; these are called Borromean nuclei. (Halo nuclei could be seen as special Efimov states, depending on the subtle definitions.)
See also
Three-body force
References
External links
Press release about the experimental confirmation (2006.03.16)
Overwhelming proof for Efimov State that's become a hotbed for research some 40 years after it first appeared (2009.12.14)
Observation of the Second Triatomic Resonance in Efimov’s Scenario (2014.05.15)
Quantum mechanics | Efimov state | Physics | 879 |
25,356,095 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20November%204%2C%202040 | A partial solar eclipse will occur at the Moon's descending node of orbit on Sunday, November 4, 2040, with a magnitude of 0.8074. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
A partial eclipse will be visible for parts of North America, Central America, the Caribbean, and northern South America.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2040
A partial solar eclipse on May 11.
A total lunar eclipse on May 26.
A partial solar eclipse on November 4.
A total lunar eclipse on November 18.
Metonic
Preceded by: Solar eclipse of January 16, 2037
Followed by: Solar eclipse of August 23, 2044
Tzolkinex
Preceded by: Solar eclipse of September 23, 2033
Followed by: Solar eclipse of December 16, 2047
Half-Saros
Preceded by: Lunar eclipse of October 30, 2031
Followed by: Lunar eclipse of November 9, 2049
Tritos
Preceded by: Solar eclipse of December 5, 2029
Followed by: Solar eclipse of October 4, 2051
Solar Saros 124
Preceded by: Solar eclipse of October 25, 2022
Followed by: Solar eclipse of November 16, 2058
Inex
Preceded by: Solar eclipse of November 25, 2011
Followed by: Solar eclipse of October 15, 2069
Triad
Preceded by: Solar eclipse of January 5, 1954
Followed by: Solar eclipse of September 6, 2127
Solar eclipses of 2040–2043
Saros 124
Metonic series
Tritos series
Inex series
References
External links
http://eclipse.gsfc.nasa.gov/SEplot/SEplot2001/SE2040Nov04P.GIF
2040 in science
2040 11 4
2040 11 4 | Solar eclipse of November 4, 2040 | Astronomy | 541 |
49,067,628 | https://en.wikipedia.org/wiki/M%C3%A9canique%20analytique | Mécanique analytique (1788–89) is a two volume French treatise on analytical mechanics, written by Joseph-Louis Lagrange, and published 101 years after Isaac Newton's Philosophiæ Naturalis Principia Mathematica.
Treatise
It consolidated into one unified and harmonious system, the scattered developments of contributors such as Alexis Clairaut, Jean le Rond d'Alembert, Pierre-Simon Laplace, Leonhard Euler, and Johann and Jacob Bernoulli in the historical transition from geometrical methods, as presented in Newton's Principia, to the methods of mathematical analysis. The treatise expounds a great labor-saving and thought-saving general analytical method by which every mechanical question may be stated in a single differential equation.
Lagrange wrote that this work was entirely new and that his intent was to reduce the theory and the art of solving mechanics problems to general formulae, providing all the equations necessary for the solution of each problem. He stated that:No diagrams will be found in this work. The methods that I explain require neither geometrical, nor mechanical, constructions or reasoning, but only algebraical operations in accordance with regular and uniform procedure. Those who love Analysis will see with pleasure that Mechanics has become a branch of it, and will be grateful to me for having thus extended its domain.
Ernst Mach describes the work as follows:Analytic mechanics... was brought to the highest degree of perfection... Lagrange's aim is... to dispose, once and for all, of the reasoning necessary to resolve mechanical problems, by embodying as much as possible of it in a single formula. This he did. Every case... can now be dealt with by a very simple... schema; and whatever reasoning is left is performed by purely mechanical methods. The mechanics of Lagrange is a stupendous contribution to the economy of thought.
Publication history
The work was first published in 1788 (volume 1) and 1789 (volume 2). Lagrange issued a substantially enlarged second edition of volume 1 in 1811, toward the end of his life. His revision of volume 2 was substantially complete at the time of his death in 1813, but was not published until 1815.
The second edition of 1811/15 has been translated into English, and is available online at archive.org.
References
External links
English translation of the 1811 edition
Mathematical physics
Physics books
Mathematics books | Mécanique analytique | Physics,Mathematics | 503 |
24,342,806 | https://en.wikipedia.org/wiki/C15H21NO3 | {{DISPLAYTITLE:C15H21NO3}}
The molecular formula C15H21NO3 (molar mass: 263.332 g/mol, exact mass: 263.1521 u) may refer to:
Hydroxypethidine (Bemidone)
Metostilenol
N-Ethylhexylone
Molecular formulas | C15H21NO3 | Physics,Chemistry | 76 |
74,520,478 | https://en.wikipedia.org/wiki/Gregor%20Sch%C3%B6ner | Gregor Schöner (born 1958 in Sindelfingen) is a German computational neuroscientist. He is professor for the theory of cognitive systems at the Ruhr University Bochum, as well as the director of the Institute for Neuroinformatics located there.
Life and work
From 1983 to 1985 Gregor Schöner studied physics and mathematics at Saarland University. In 1985, he received his PhD in theoretical physics at the University of Stuttgart under Hermann Haken. For the next four years, he devoted himself to applications of the theory of stochastic dynamical systems to the coordination of biological motion under J. A. Scott Kelso at Florida Atlantic University. From 1989 to 1994, he led a research group for the first time at the Institute of Neuroinformatics at Ruhr University in Bochum. During that time, he and his group extended the application of dynamical systems to models of perception, motion, and autonomous robotics. After a six-year stay at the Centre de Recherche en Neurosciences Cognitives in Marseille, Gregor Schöner returned to the institute in 2001. He took over its leadership in 2003, succeeding Christoph von der Malsburg, and remains in this position today. Since September 2022, he has additionally been chairman of the Society for Cognitive Science in Germany.
Gregor Schöner and his research group are known for the scientific development, applications, and software packages on Dynamic Field Theory (DFT). DFT provides a neurally plausible framework for the mathematical modeling of human cognition according to the theories of embodied cognition. The theory builds upon the continuous attractor networks models of Hugh R. Wilson and Jack D. Cowan (the "Wilson-Cowan model") and Shun'ichi Amari (the "neural field model"), which describe the interaction between excitatory and inhibitory coupled populations of cortical neurons. Schöner's research group publishes on visual search, spatial and relational language, and autonomous robotics.
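To give a flavour of the neural field models that Dynamic Field Theory builds on, the following is a minimal sketch of a one-dimensional Amari-style field integrated with simple Euler steps; the kernel, nonlinearity, and parameter values are illustrative assumptions, not the research group's published models.

import numpy as np

# One-dimensional field u(x, t) with local excitation and surround inhibition.
x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]
h, tau, dt = -5.0, 10.0, 1.0        # resting level, time constant, time step
u = np.full_like(x, h)

def sigmoid(v, beta=1.5):
    return 1.0 / (1.0 + np.exp(-beta * v))

# Difference-of-Gaussians interaction kernel w(x - x').
d = x[:, None] - x[None, :]
w = 6.0 * np.exp(-d**2 / (2 * 1.0**2)) - 3.5 * np.exp(-d**2 / (2 * 3.0**2))

# Localized external input, e.g. a stimulus centred at x = 0.
stim = 6.0 * np.exp(-x**2 / (2 * 1.0**2))

# Euler integration of tau * du/dt = -u + h + stimulus + interaction term.
for _ in range(300):
    interaction = (w @ sigmoid(u)) * dx
    u += (dt / tau) * (-u + h + stim + interaction)

print("peak activation:", u.max())  # a localized activation peak forms around the stimulus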
Publications
Gregor Schöner, John P. Spencer and the DFT Research Group (2015). A primer on dynamic field theory. Oxford University Press, ISBN 978-0-19-930056-3
Esther Thelen, Gregor Schöner, Christian Scheier, and Linda B. Smith (2001). "The dynamics of embodiment: A field theory of infant perseverative reaching". Behavioral and Brain Sciences 24(1), 1–34. doi:10.1017/S0140525X01003910
References
External links
Publication list on the website of the Institute for Neuroinformatics at the Ruhr University Bochum
Living people
1958 births
German cognitive neuroscientists
Neuroinformatics
Computational neuroscientists
German lecturers
Ruhr University Bochum | Gregor Schöner | Biology | 587 |
37,647,580 | https://en.wikipedia.org/wiki/Lie%E2%80%93Palais%20theorem | In differential geometry, a field of mathematics, the Lie–Palais theorem is a partial converse to the fact that any smooth action of a Lie group induces an infinitesimal action of its Lie algebra. proved it as a global form of an earlier local theorem due to Sophus Lie.
Statement
Let be a finite-dimensional Lie algebra and a closed manifold, i.e. a compact smooth manifold without boundary. Then any infinitesimal action of on can be integrated to a smooth action of a finite-dimensional Lie group , i.e. there is a smooth action such that for every .
If is a manifold with boundary, the statement holds true if the action preserves the boundary; in other words, the vector fields on the boundary must be tangent to the boundary.
Counterexamples
The example of the vector field on the open unit interval shows that the result is false for non-compact manifolds.
Similarly, without the assumption that the Lie algebra is finite-dimensional, the result can be false. The following example is due to Omori: there is an infinite-dimensional Lie algebra of vector fields acting on the torus that is not the Lie algebra of any group.
Infinite-dimensional generalization
An infinite-dimensional generalization of the Lie–Palais theorem holds for Banach–Lie algebras with finite-dimensional center.
References
Reprinted in collected works volume 5.
Theorems in differential geometry
Differential geometry
Lie algebras
Lie groups | Lie–Palais theorem | Mathematics | 296 |
21,523,297 | https://en.wikipedia.org/wiki/Emu%20Brewery | The Emu Brewery was a brewery in Perth, Western Australia, which traced its history to the first decade of the Swan River Colony. Founded in 1837 by James Stokes as the Albion Brewery, it was located beside the Swan River on a block bounded by Mounts Bay Road, Spring Street and Mount Street. The business changed hands several timesand names from Albion Brewery to Stanley Brewery to Emu Breweryuntil its ultimate acquisition by competitor Swan Brewery in 1927.
New brewery buildings were constructed over the years. The most notable of these was an imposing Art Deco building erected between 1936 and 1938. This building continued to be used to produce Emu-brand beer until the late 1970s, when production was shifted to a new factory in Canning Vale. , Emu beer continues to be produced as a brand of Swan Brewery owner Lion Nathan.
Albion Brewery: 1837–1848
In the early 1830s, the Swan River Colony was in its infancy and did not have a substantial local beer industry. Preachers from the Temperance League lobbied against the drunkenness prevalent in the colony, however the lack of locally produced beer meant that they focused their attention on spirits drinkers. Governor James Stirling believed that the construction of a local brewery may reduce the Colony's drunkenness problems by allowing the men to drink beer instead of spirits.
Scotsman James Stokes had arrived in Western Australia in 1834 at the age of 24. He saw the opportunity in the market for a brewery, and investigated potential sites. Surveyor-General John Septimus Roe had set aside a small triangular lot for use as a brewery; this block was bounded by Spring Street, Mount Street and St Georges Terrace. Stokes preferred the much larger block across Spring Street, which extended almost all the way to the riverfront at the time; subsequent reclamation moved the riverfront further south. The site was more suitable because it featured a natural spring, and there was a sufficient difference in elevation to enable the use of gravity in the brewing process without the need for a large tower. The proximity to the river also made river transport an attractive option. Stokes bought this land from George Leake, and was operating his brewery by 1837. Although the brewery was named the Albion Brewery after the ancient name for Great Britain, it was more popularly known as Stokes' Brewery. It was the colony's first major stand-alone brewery.
At the time, darker beer varieties were popular in Britain, however Stokes believed that the pale ales that were being exported to India would become popular locally. Contrary to what Governor Stirling had hoped, Stokes began distilling spirits at the brewery in 1838.
In 1839 Stokes mortgaged the brewery site back to the original owner, Leake, to fund the purchase of the adjacent block. There, he built himself a house; around this time he also bought the small portion of river frontage immediately in front of the brewery from the government for £13/5s/-. In the same year, Stokes also formed a partnership in land and commission agents with Dubois Aggett, however in 1840 Aggett maimed himself while attempting suicide, and Stokes severed the partnership.
1840 also saw the market for Albion Brewery's beer fall away due to a sluggish economy. It did not escape Stokes' attention that duties were levied on imported spirits, but not on those produced locally. Seizing upon the business opportunity, he imported a large still and expanded the brewery's distillery. The Government responded to this by imposing a tax on locally produced spirits as well, leading Stokes to stop Albion's distilling efforts.
Stanley Brewery: 1848–1908
Eventually the market situation improved for Stokes to the point that in 1848 he opened a new brewery on the site to replace the old Albion Brewery. The Stanley Brewery opened on 1 November 1848, selling what it described as a "nutritious body ale superior to any imported", costing £4/– per hogshead.
Along with other local businessmen, Stokes successfully lobbied for the transportation of convicts to Western Australia to help alleviate the chronic labour shortage. It has also been speculated that he saw it as a potential new market for his beers, believing that the convicts would have less discerning tastes.
Stokes returned to England in 1857, where he married his cousin Julia. He returned to the Swan River Colony with his pregnant wife, however she died after giving birth. Stokes quickly lost his interest in brewing and died in 1861. The brewery continued to be operated by Henry Saw and William Meloy, who had worked in the business for many years and to whom Stokes had bequeathed interests in the business. Saw died in November 1870, and since Meloy did not want to remain in the operation, the lease over the brewery was advertised.
John Maxwell Ferguson took over the lease, and in 1872 recruited the German expatriate brewer William Mumme. Over the following decades, the business changed hands several times.
In January 1875 the brewery was advertised for rent with the previous operators being Mumme and Ferguson. In May 1875 George Hamersley applied for a licence to operate the Stanley Brewery and by September 1875 it had been re-equipped and was open for business. The licence was then held by brothers George and Hugh Hamersley. On 1 April 1876 George formed a partnership to operate the brewery with his brother Hugh and Harwood, who was a brewer. In November 1876 it was advertised that the licence was to be transferred from G & H Hamersley to the new partnership. In November 1877 it was advertised that the licence was to be transferred from G & H Hamersley to Harwood.
In March 1882 the licence was transferred to John Jones and Robert Hall. In May 1882 there was a Supreme Court case between John Forrest, who was the husband of George and Hugh's sister, and Harwood, concerning a breach of contract in the amount of £160 relating to the lease of the Stanley Brewery, which had expired in February 1882.
In 1887, a new brewery building was constructed on the site. The brick structure was imposing, featuring blind brick arches, and was topped with a Mansard-roofed tower containing a tank.
After the successful initial public offering of the rival Swan Brewery, the Stanley Brewery felt the pressure to follow the same path. In 1905 the business re-formed as the Stanley Co-operative Brewery Ltd, and had former politician Michael O'Connor as chairman of its board of directors. This new company was majority-owned by the Stanley Brewery Co Limited.
Emu Brewery: 1908 onwards
The Stanley Brewery's most popular brand of beer was an ale sold under the "Emu" trademark. In order to ensure that drinkers knew from which brewery the Emu brand came, as well as to avoid confusion between the Stanley Co-operative Brewery Limited and its similarly named holding company, the company was renamed on 6 March 1908 to the Emu Co-operative Brewery Ltd.
The Emu Brewery had been turning out beer of variable quality, and only managed a quarter of the output of the Swan Brewery. However, the recruitment of Ernest Terry in 1909 led to a turnaround in the fortunes of the newly renamed Emu Brewery Ltd. The brewery became profitable once more, and even won awards for its beers at the Royal Agricultural Show, which dismayed the traditional award winner, Swan. Emu continued to compete with Swan by introducing Emu Bitter, a bottom-fermented beer intended to rival the bitter that Swan had introduced in 1923.
Acquisition by Swan Brewery and subsequent history
On 3 February 1927, the brewery's directors approached the Swan Brewery to sell Emu's assets. Swan proceeded with this acquisition of the Emu Brewery, and continued to operate it as a separate business from Swan's own operations. Arthur Jacoby was appointed as the general manager of both breweries.
During the 1930s, a significant amount of land was reclaimed from the river, and the brewery lost its river frontage. Also, between 1936 and 1938, a new brewery building designed by Perth architectural firm Oldham, Boas and Ednie-Brown was constructed on the site. Constructed in the Art Deco style, this new building replaced the old Stanley Brewery building.
The building was constructed from reinforced concrete and steel, and was visibly divided into two halves: one with windows to allow in a maximum of daylight, and the other with no windows at all, to exclude daylight. A central tower housing a lift and staircases delineated the two areas. A border frieze at the top of three of the building's sides, depicting different stages in the brewing process, was designed by John Oldham and executed by sculptor Edward F. Kohler. An image of the 1938 building featured on Emu beer labels for over fifty years.
The Emu Brewery continued manufacturing on the site until the late 1970s, when production of both the Swan and Emu brands was shifted to a factory in Canning Vale. After this, the Emu Brewery building was left derelict. In 1991, the Emu Brewery was the "last major industrial structure" in Perth's central business district.
Despite having been placed on the Register of the National Estate, the complex was allowed to fall into disrepair. The Art Deco Society of Western Australia was set up in 1987 to lobby for the protection of Perth's art deco heritage, including the Emu Brewery. After heritage minister Jim McGinty refused to place the building on the Western Australian Register of Heritage Places, the building was demolished starting in late 1991 and ending with its implosion on 23 February 1992.
Subsequent plans to build high-rise offices or apartments on the site consistently fell through for almost a quarter of a century, leading to the site being labelled "seemingly jinxed". Eventually, in 2017 the first of three towers planned as part of a development called Mia Yellagonga was completed. Called Karlak, this tower has 32 levels and is the new headquarters for Woodside.
See also
List of breweries in Australia
Notes
References
Sources
Further reading
Heritage Register of Western Australia entry on the Emu Brewery
Historical photographs of the brewery in the State Library of Western Australia Pictorial Archive
Australian companies established in 1837
Australian beer brands
Beer brewing companies based in Western Australia
Food and drink companies based in Perth, Western Australia
Food and drink companies established in 1837
Buildings and structures demolished in 1992
Buildings and structures demolished by controlled implosion
Former buildings and structures in Perth, Western Australia
Manufacturing companies based in Perth, Western Australia
Demolished buildings and structures in Western Australia
Culture of Western Australia
Defunct food and drink companies of Australia | Emu Brewery | Engineering | 2,105 |
32,903,055 | https://en.wikipedia.org/wiki/Pellets%20%28petrology%29 | Pellets are small spherical to ovoid or rod-shaped grains that are a common component of many limestones. They are typically 0.03 to 0.3 mm long and composed of carbonate mud (micrite). Their most common size is 0.04 to 0.08 mm. Pellets typically lack any internal structure and are remarkably uniform in size and shape in any single limestone sample. They consist of aggregated carbonate mud, precipitated calcium carbonate, or a mixture of both. They are, or originally were, composed of aragonite, calcite, or a mixture of the two. Also, pellets composed of either glauconite or phosphorite are common in marine sedimentary rocks. Pellets occur in Precambrian through Phanerozoic strata, but they are an important component mainly in Phanerozoic strata. The consensus among sedimentologists and petrographers is that pellets are the fecal products of invertebrate organisms because of their constant size, shape, and extra-high content of organic matter.
Pellets differ from oolites and intraclasts, which are also found in limestones. They differ from oolites in that pellets lack the radial or concentric structures that characterize oolites. They differ from intraclasts in that pellets lack the complex internal structure, which is typical of intraclasts. In addition, pellets, quite unlike intraclasts, are characterized by a remarkable uniformity of shape, extremely good sorting, and small size.
By definition, pellets differ from peloids, in that pellets have a specific size, shape, and implied origin—while peloids vary widely in size, shape, and origin. Pellets, in the strict sense, are fecal products of invertebrate organisms. Peloids are allochems of any size, structure, or origin. As a result, peloids not only include possible pellets, but also include a variety of other distinctly non-pellet grains—such as indistinct intraclasts, micritized ooids, or fossil fragments. In addition, some peloids are even microbial or inorganic precipitates. Carbonate geologists consider the vast majority of peloids as secondary allochems created by biological degradation or “micritization” of other primary carbonate grains, i.e., ooids, bioclasts, or pellets.
References
See also
Calcilutite
Calcarenite
Calcisiltite
Sedimentary rocks
Limestone
Animal physiology
Feces | Pellets (petrology) | Biology | 549 |
35,819,612 | https://en.wikipedia.org/wiki/Osculant | In mathematical invariant theory, the osculant or tacinvariant or tact invariant is an invariant of a hypersurface that vanishes if the hypersurface touches itself, or an invariant of several hypersurfaces that osculate, meaning that they have a common point where they meet to unusually high order.
References
Invariant theory | Osculant | Physics | 72 |
64,536,317 | https://en.wikipedia.org/wiki/Towards%20a%20New%20Socialism | Towards a New Socialism is a 1993 non-fiction book written by Scottish computer scientist Paul Cockshott, co-authored by Scottish economics professor Allin F. Cottrell. The book outlines in detail a proposal for a complex planned socialist economy, taking inspiration from cybernetics, the works of Karl Marx, and British operations research scientist Stafford Beer's 1973 model of a distributed decision support system dubbed Project Cybersyn. Aspects of a socialist society such as direct democracy, foreign trade and property relations are also explored. The book is, in the authors' words, "our attempt to answer the idea that socialism is dead and buried after the demise of the Soviet Union."
The book was covered in an article in Süddeutsche Zeitung in 2017, as well as reviewed by Leonard Brewster in the Spring 2004 issue of the Quarterly Journal of Austrian Economics.
Contents
The book is divided into 15 chapters, excluding the introduction:
Inequality
Eliminating Inequalities
Work, Time and Computers
Basic Concepts of Planning
Strategic Planning
Detailed Planning
Macroeconomic Planning
The Marketing of Consumer Goods
Planning and Information
Foreign Trade
Trade Between Socialist Countries
The Commune
On Democracy
Property Relations
Some Contrary Views Considered
Ideas presented
The main features distinguishing Cottrell and Cockshott's ideas from other socialist tendencies are:
A rigorous theoretical defense of economic planning
The use of non-circulating labor money to replace circulating currency
Athenian-style participatory democracy, specifically the use of sortition rather than election to fill as many political offices as possible
Each of these represented major divergences from the main currents of socialist opinion at the time. The fall of the Soviet Union had convinced many socialists that economic planning had to be abandoned. Cottrell and Cockshott, in contrast, argued that new computer technology plus participatory democracy was actually making economic planning possible to a greater extent than ever, a fact that would be noted in other books on economic planning in Japan and private industry. Marx considered non-circulating labor credits crucial for socialism in his work Critique of the Gotha Program (while critiquing incompetent attempts to implement them), and an earlier generation of socialists (notably Edward Bellamy in his popular 19th-century book Looking Backward) had advocated for them. But after Frederick Engels' death, Karl Kautsky moved the socialist movement away from the idea in the early 1900s, leading (among other things) to labor money never being implemented in the USSR (given Kautsky's substantial influence on Lenin's socialist organizing). Under Cottrell and Cockshott's labor-credit proposal, someone working 8 hours a day would receive 8 hours of credit, goods and services would be priced in terms of the labor required to make them, prices would be adjusted upward or downward in accordance with supply and demand, and labor money would cancel out rather than circulate when used for a purchase. The idea incorporated work from the growing field of econophysics, specifically the work of Israeli mathematicians Emmanuel Farjoun and Moshe Machover, whose book Laws of Chaos empirically demonstrated that labor content was responsible for around 95% of a good's price. Years later, University of Maryland econophysicist Victor Yakovenko would demonstrate that circulating money inherently creates an unequal Gibbs-Boltzmann distribution within an economy, even when beginning from conditions of perfect equality.
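The labor-credit mechanism described above lends itself to a small illustration. The sketch below is not from the book: the product names, labor values, planned outputs and the multiplicative price-adjustment rule are all hypothetical assumptions, intended only to show how non-circulating credits and demand-responsive prices could interact.

```python
# Toy model of non-circulating labor credits (illustrative sketch only, not the book's algorithm).
# Assumptions: labor values per unit are known, and prices are nudged toward market-clearing
# by comparing consumer demand with planned output each period.

labor_values = {"bread": 0.5, "bicycle": 20.0}    # hours of labor per unit (hypothetical)
prices = dict(labor_values)                        # initial price = labor value
planned_output = {"bread": 1000, "bicycle": 50}    # units planned per period (hypothetical)

def adjust_prices(demand, alpha=0.1):
    """Raise prices on over-demanded goods and lower them on under-demanded ones."""
    for good, supply in planned_output.items():
        ratio = demand.get(good, 0) / supply
        prices[good] *= 1 + alpha * (ratio - 1)

def purchase(balance_hours, good, quantity):
    """Spend labor credits; the credits are cancelled on use rather than passed to a seller."""
    cost = prices[good] * quantity
    if cost > balance_hours:
        raise ValueError("insufficient labor credits")
    return balance_hours - cost

# A worker credited with one 8-hour day buys four loaves; the spent credit leaves circulation.
balance = 8.0
balance = purchase(balance, "bread", 4)
adjust_prices({"bread": 1200, "bicycle": 40})      # bread over-demanded, bicycles under-demanded
print(balance, prices)
```

In this toy model a price drifts away from its underlying labor value whenever demand and planned output diverge, which is precisely the signal the planners would use to revise the plan.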
The emphasis on Athenian democracy stemmed from a desire to avoid the Iron Law of Oligarchy, a tendency noted by Robert Michels for the leadership of an organization to turn even democratic organizations into a dictatorship if given the chance. According to Cottrell and Cockshott, Lenin's failure to account for this tendency in State and Revolution (published in 1917) meant the Soviet Union was never able to find a stable democratic form of government, thus degenerating by Stalin's time into a stable but authoritarian one-party state. This dictatorship further distorted the Soviet economy, as major economic decisions were made by a political elite with little input or consideration of the larger population's needs, resulting in the classic hallmarks of the Soviet economy: Rapid advancement in areas like space exploration and weaponry favored by the political establishment, widespread shortages of consumer goods, and the failure of the Soviet government to develop an early Internet after the main proponents of the project fell out of favor with Communist Party leadership in the Brezhnev era. Athenian Democracy avoids this outcome by choosing political leaders on the basis of lot rather than election. Quoting Aristotle, Cottrell and Cockshott note that elections have an aristocratic tendency that has been recognized since Ancient Athens: voting for whoever one thinks is the best usually means voting for whoever has the most money, status, or education to convince voters that they're "the best." For this reason, Democratic Athens selected their legislature, judiciary, and executive branch officials entirely by lot, reserving elections only for military generals where specific skills in the military arts were required. Cottrell and Cockshott call for a restoration of this democratic practice, arguing that it is the only way to eliminate the barrier between ruler and ruled, and prevent the rulers from forming a caste increasingly separate from the rest of the population.
Reception
Leonard Brewster, Ph.D., reviewed the book in the Spring 2004 issue of the Quarterly Journal of Austrian Economics, positing that "Cockshott and Cottrell have come as close to developing a serious, up-to-date version of a neo-Marxist political economy as we are likely to see." Brewster concedes that C&C have "succeeded in countering a version of the calculation argument" but writes that this "ironically clarif[ies] and strengthen[s] the reasons for considering socialist calculation not just as troublesome, but impossible, and valuation in terms of labor an illusion." Furthermore, Brewster argues that C&C's allowance of a market for consumer goods, in effect, makes their model a "capitalistic, commodity producing society."
In 2009, Cockshott published an article entitled "Notes for a Critique of Brewster" in which he responded to Brewster's arguments against the book's model. Cockshott asserts that Brewster is "wrong in saying that our labour values are no longer labour values since they are now influenced by market prices", arguing that the distortion of labour value ratios, manifesting through exchange value ratios in capitalist economies, is a short-term artefact of supply and demand imbalances. Furthermore, Cockshott argues that maintaining these distinctions in his model does not "[prevent] labour values from being usable for economic calculation when dealing with intermediate goods." Summarising, Cockshott asserts that "we argue that the market has a place, but only a limited place. It should be restricted to consumer goods, and even here, market indicators are not the ultima ratio. They are just one among many constraints that society has to recognise."
References
External links
Towards a New Socialism (Book website)
Towards a New Socialism (PDF download)
Sozialismus ist machbar, German translation (PDF download)
走向新社会主义 , Mandarin translation (PDF download)
"Paul Cockshott - Towards a new Socialism (1/3)": video (recorded in Glasgow, GB, 25 min., 2006) produced by Oliver Ressler on Paul Cockshott and his planned economy-model. Transcription of video
Paul Cockshott, "Notes for a critique of Brewster" (June 20, 2009)
Books about socialism
Cybernetics
Marxism
Communism
Marxian economics
Economics books
Government by algorithm | Towards a New Socialism | Engineering | 1,538 |
53,855,339 | https://en.wikipedia.org/wiki/NGC%20445 | NGC 445 is a peculiar lenticular galaxy located in the constellation of Cetus. It was discovered on October 23, 1864, by Albert Marth. It was described by Dreyer as "very faint, very small."
References
External links
0445
18641023
Cetus
Lenticular galaxies
Discoveries by Albert Marth
004493 | NGC 445 | Astronomy | 70 |
54,632,483 | https://en.wikipedia.org/wiki/Halorubrum%20vacuolatum | Halorubrum vacuolatum is a halophilic archaeon in the family Halorubraceae. It is an extremophile and is able to survive in water with a high salt concentration.
References
Euryarchaeota
Archaea described in 1993 | Halorubrum vacuolatum | Biology | 57 |
8,318,947 | https://en.wikipedia.org/wiki/Naum%20Yakovlevich%20Vilenkin | Naum Yakovlevich Vilenkin (, October 30, 1920 in Moscow – October 19, 1991 in Moscow) was a Soviet mathematician, an expert in representation theory, the theory of special functions, functional analysis, and combinatorics. He is best known as the author of many books in recreational mathematics aimed at middle and high school students.
Biography
Vilenkin studied at the Moscow State University where he was a student of A.G. Kurosh. He received his degree of doktor nauk in physics and mathematics in 1950; and was awarded the Ushinsky prize for his school mathematics textbooks in 1976.
From 1975 to 1990 he assisted Lyudmila Georgievna Peterson in the development of a preschool and school curriculum for teaching mathematics.
Books
Combinatorics by N.Ia. Vilenkin, A. Shenitzer, and S. Shenitzer (hardcover – Sep 1971)
Representation Theory and Noncommutative Harmonic Analysis II: Homogeneous Spaces, Representations, and Special Functions (Encyclopaedia of Mathematical Sciences) by A. U. Klimyk, V. F. Molchanov, N. Ya. Vilenkin, and A. A. Kirillov (hardcover – Aug 26, 2004)
Representation of Lie Groups and Special Functions: Recent Advances (Mathematics and Its Applications) by N. Ja, Vilenkin and A. U. Klimyk (hardcover – Nov 1, 1994)
Representation of Lie Groups and Special Functions Volume 1: Simplest Lie Groups, Special Functions and Integral Transforms (Mathematics and its Applications) by N.Ia. Vilenkin and A. U. Klimyk (hardcover – Nov 15, 1991)
Generalized Functions. Volume 5. Integral geometry and representation theory by I. M. Gel'fand, M. I. Graev, N. Ya. Vilenkin, and E. Saletan (hardcover – Oct 25, 1966)
Direct decompositions of topological groups, I, II (American Mathematical Society. Translation) by N. Ia. Vilenkin (1950)
Books in recreational mathematics
In Search of Infinity by N. Ya. Vilenkin (Hardcover – 1995)
Combinatorial mathematics for recreation by N. Ia. Vilenkin (1972)
Stories About Sets by N. Ia. Vilenkin (Paperback – 1968)
Successive approximation, (Popular lectures in mathematics) by N. Ya. Vilenkin (1964)
References
F. I. Karpelevich, A. U. Klimyk, L. M. Koganov, et al. "Naum Yakovlevich Vilenkin (on the occasion of his seventieth birthday)", Russian Math. Surveys 46 (1991), 251–254.
1920 births
1991 deaths
Soviet mathematicians
Combinatorialists
Mathematicians from Moscow | Naum Yakovlevich Vilenkin | Mathematics | 576 |
11,908,009 | https://en.wikipedia.org/wiki/91%20Aquarii%20b | 91 Aquarii b, also known as HD 219449 b, is an extrasolar planet orbiting in the 91 Aquarii system approximately 148 light-years away in the constellation of Aquarius. It orbits at an average distance of 105 Gm from its star, slightly closer than Venus is to the Sun (108 Gm). The planet takes half an Earth year to orbit the star in a nearly circular orbit, with an eccentricity of less than 0.053.
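As a rough consistency check (not part of the source data), Kepler's third law links the quoted separation and period to the mass of the host star. Taking the approximate values above, 105 Gm ≈ 0.70 AU and half a year ≈ 0.5 yr, the implied stellar mass is about 1.4 solar masses, a plausible value for a giant star; the figures below are back-of-the-envelope estimates only.

```latex
% Approximate check using Kepler's third law (values rounded; planet mass neglected)
\[
  M_\star \approx \frac{a^3}{P^2}
  = \frac{(0.70\,\mathrm{AU})^3}{(0.5\,\mathrm{yr})^2}\,M_\odot
  \approx \frac{0.34}{0.25}\,M_\odot
  \approx 1.4\,M_\odot .
\]
```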
See also
HD 59686 b
Iota Draconis b
References
Aquarius (constellation)
Giant planets
Exoplanets discovered in 2003
Exoplanets detected by radial velocity | 91 Aquarii b | Astronomy | 134 |
11,460,087 | https://en.wikipedia.org/wiki/Bipolaris%20sacchari | Bipolaris sacchari is a fungal plant pathogen in the family Pleosporaceae.
Bipolaris sacchari is an ascomycete fungal pathogen most notably affecting sugarcane. In its sexual stage, it produces spores housed in an ascus (a sac, usually with 8 spores inside). The spores are dispersed when the sac bursts. They spread to plant surfaces via wind and rain splashes, and if there is water present on the leaf, they may germinate and produce septate, walled hyphae on the surface of the leaf. These in turn asexually produce conidia that spread and further propagate the disease.
Hosts
This pathogen affects sugarcane (Saccharum officinarum) and close relatives, as well as a few other members of Poaceae: citronella (Cymbopogon citratus), elephant grass (Pennisetum purpureum), pearl millet (Pennisetum glaucum) and barnyard grass (Echinochloa).
This fungus owes most of its economic notoriety to the yield losses it causes in commercially grown sugarcane. Bipolaris sacchari produces host-specific toxins, namely oligosaccharide-sesquiterpene toxins that bind helminthosporoside. The toxin is damaging because, once within plant tissue, it reacts to create an abundance of nitrites. Different iterations of these toxins are specific to particular hosts; while sugarcane is the most economically significant, eye spot has been recorded on ferns in Florida, wheat in Iran, and banana in Brazil.
Symptoms
Early in the progression of the disease, minute watery spots may be observed on plants. Crops that are six or more months old are more susceptible—younger leaves will most likely be affected first. As the disease progresses, reddish spots with yellow margins on leaves become visible. Extending from each spot are brownish streaks, called ‘runners’. These are thought to be caused by the spread of the toxin. Spots may merge as they increase in number and result in necrosis. Seedling blights may also occur.
Management
Plant resistant varieties—most commercial varieties are already resistant, with no genetic modification. Q47 is one of these resistant varieties; conversely, varieties Q99 and Q101 are very susceptible.
Depending on severity, foliar applications of fungicide (2% copper oxychloride) may be used, but these are generally not practical because resistant varieties are common and deliver just as much yield and quality.
In Mexico, a 33% yield loss was noted when a field of a susceptible variety having eye spot was compared with a field of a resistant variety. These losses can be avoided in large commercial agriculture by ensuring precision in management practices.
Over-fertilization is beneficial to the pathogen as it can use the excess nitrogen that the crop does not absorb.
Environment
Bipolaris sacchari occurs all around the world. Because it is an ascomycete, it needs a film of water through which to continue disease progress. Temperate climates at elevation can encourage the conditions this pathogen finds favorable. Several days of heavy morning dew or rain may accelerate the disease progress. It likes moist, humid areas, and thrives with cooler night temperatures—these encourage production of the toxin.
References
External links
Fungal plant pathogens and diseases
Wheat diseases
Pleosporaceae
Fungus species | Bipolaris sacchari | Biology | 682 |
17,010,350 | https://en.wikipedia.org/wiki/Galactosemic%20cataract | A galactosemic cataract is cataract which is associated with the consequences of galactosemia.
Types
The presence of presenile cataract, noticeable in galactosemic infants as young as a few days old, is highly associated with two distinct types of galactosemia: GALT deficiency and, to a greater extent, GALK deficiency.
An impairment or deficiency in the enzyme, galactose-1-phosphate uridyltransferase (GALT), results in classic galactosemia, or Type I galactosemia. Classic galactosemia is a rare (1 in 47,000 live births), autosomal recessive disease that presents with symptoms soon after birth when a baby begins lactose ingestion. Symptoms include life-threatening illnesses such as jaundice, hepatosplenomegaly (enlarged spleen and liver), hypoglycemia, renal tubular dysfunction, muscle hypotonia (decreased tone and muscle strength), sepsis (presence of harmful bacteria and their toxins in tissues), and cataract among others. The prevalence of cataract among classic galactosemics is markedly less than among galactokinase-deficient patients due to the extremely high levels of galactitol found in the latter. Classic galactosemia patients typically exhibit urinary galactitol levels of only 98 to 800 mmol/mol creatine compared to normal levels of 2 to 78 mmol/mol creatine.
Galactokinase (GALK) deficiency, or Type II galactosemia, is also a rare (1 in 100,000 live births), autosomal recessive disease that leads to variable galactokinase activity levels, ranging from high GALK activity to undetectably low GALK activity. The early onset of cataract is the main clinical manifestation in Type II galactosemics, most likely due to the high concentration of galactitol found in this population. GALK-deficient patients exposed to high-galactose diets show extreme levels of galactitol in blood and urine. Studies on galactokinase-deficient patients have shown that nearly two-thirds of ingested galactose can be accounted for by galactose and galactitol levels in the urine. Urinary levels of galactitol in these subjects approach 2500 mmol/mol creatine, as compared to 2 to 78 mmol/mol creatine in control patients.
A decrease in the activity of the third major enzyme of galactose metabolism, UDP-galactose 4'-epimerase (GALE), is the cause of Type III galactosemia. GALE deficiency is an extremely rare, autosomal recessive disease that appears to be most common among the Japanese population (1 in 23,000 live births in that population). While the link between GALE deficiency and cataract prevalence seems to be ambiguous, experiments on this topic have been conducted. A 2000 study in Munich, Germany, analyzed the activity levels of the GALE enzyme in various tissues and cells in patients with cataract. The experiment concluded that while patients with cataract seldom exhibited an acute decrease in GALE activity in blood cells, "the GALE activity in the lens of cataract patients was, on the other hand, significantly decreased". This marked decrease in GALE activity in the lens of cataract patients suggests a strong connection between Type III galactosemia and cataract development.
Galactosemia
Galactosemia is one of the most mysterious of the heavily researched metabolic diseases. It is a hereditary disease that results in a defect in, or absence of, galactose-metabolizing enzymes. This inborn error leaves the body unable to metabolize galactose, allowing toxic levels of galactose to build up in the blood, cells, and tissues. Although the treatment for galactosemic infants is a strict galactose-free diet, endogenous (internal) production of galactose can cause symptoms such as long-term morbidity, presenile development of cataract, renal failure, cirrhosis, and cognitive, neurologic, and female reproductive complications. Galactosemia used to be confused with diabetes due to the presence of sugar in a patient's urine. However, screening advancements have allowed the exact identity of those sugars to be determined, thereby distinguishing galactosemia from diabetes.
Mechanism
A cataract is an opacity that develops in the crystalline lens of the eye. The word cataract literally means "curtain of water" or "waterfall"; as rapidly running water turns white, the term may have been used metaphorically to describe the similar appearance of mature ocular opacities and a waterfall. The mechanism by which galactosemia causes cataract is not well understood, but the topic has been approached by researchers for decades, notably by the ophthalmologists Jonas S. Friedenwald and Jin H. Kinoshita. Through this collective effort, a general mechanism for galactosemia's causation of presenile cataract has come into form.
Galactitol's harmful influence
In galactosemic cataracts, osmotic swelling of the lens epithelial cells (LEC) occurs. Osmosis is the movement of water from areas of low particle concentration to areas of high particle concentration, to establish equilibrium. Researchers concluded that this osmotic swelling must be the result of an accumulation of abnormal metabolites or electrolytes in the lens. Ruth van Heyningen was the first to discover that the lens's retention of dulcitol, a synonym for galactitol, induces this osmotic swelling in the galactosemic cataract. However, galactose concentration must be fairly high before the enzyme aldose reductase will convert significant amounts of the sugar to its galactitol form. As it turns out, the lens is a favorable site for galactose accumulation. The lens phosphorylates galactose at a relatively slow pace in comparison to other tissues. This factor, in combination with the low activity of galactose-metabolizing enzymes in galactosemic patients, allows for the accumulation of galactose in the lens. Aldose reductase is able to dip into this galactose reservoir and synthesize significant amounts of galactitol. As is mentioned above, galactitol is not a suitable substrate for the enzyme polyol dehydrogenase, which catalyzes the next step in the carbohydrate metabolic cycle. Thus, the sugar alcohol steadily accumulates in the lens.
Ensuing osmotic pressure
As galactitol concentration increases in the lens, a hypertonic environment is created. Osmosis favors the movement of water into the lens fibers to reduce the high osmolarity. Experiments on galactosemic animals sustained on a galactose diet show that lens water content rises as galactitol concentration increases. This osmotic movement ultimately results in the swelling of lens fibers until they rupture. Vacuoles appear where a significant amount of osmotic dissolution of fiber has taken place. What are left are interfibrillar clefts filled with precipitated proteins: the manifestation of a cataract. Friedenwald was able to show that peripheral lens fibers always dissolve before fibers at the equatorial region of the lens. This observation has been confirmed by more recent experiments as well, but remains unexplained. The progression of galactosemic cataract is generally divided into three stages: initial vacuolar, late vacuolar, and nuclear cataract. A mature, cloudy nuclear galactosemic cataract typically appears 14 to 15 days after the onset of the galactose diet, and each stage is accompanied by a characteristic change in lens hydration.
Changes in lens that accompany galactitol accumulation and osmotic swelling
As cataract formation progresses due to galactitol synthesis and subsequent osmotic swelling, changes occur in the lens epithelial cells. For instance, when rabbit lenses are placed in high-galactose media, a nearly 40% reduction in lens amino acid levels is observed, along with a significant reduction in ATP. Researchers theorized that this reduction in amino acid and ATP levels during cataract formation is a result of osmotic swelling. To test this theory, Kinoshita placed rabbit lenses in a high-galactose environment but inhibited the osmotic swelling by constantly regulating galactose and galactitol concentrations. The results show that amino acid levels remained relatively constant and in some cases even increased. Thus, from these experiments it would appear that the loss of amino acids in the lens when exposed to galactose is primarily due to the osmotic swelling of the lens brought about by dulcitol [galactitol] retention. Galactosemic patients will also present with amino aciduria and galactitoluria (excessive levels of amino acids and galactitol in the urine).
Osmotic swelling of the lens is also responsible for a reduction in electrolyte concentration during the initial vacuolar stage of galactosemic cataract. The water that flows osmotically into the lens fibers is not accompanied by ions such as Na+, K+, and Cl−, and so the electrolyte concentration inside the lens is simply diluted by the influx of water. The total content of the individual ions, measured against the dry weight of the lens, does not change during this stage, however. Kinoshita's experiments nevertheless showed a remarkable upswing in electrolyte concentration toward the latter stages of the galactosemic cataract, and in the nuclear stage in particular. This observation seems to be explained by the continuous increase in lens permeability due to the osmotic swelling from galactitol accumulation. Cation and anion distribution becomes erratic, with Na+ and Cl− concentrations increasing while K+ concentration decreases. Researchers have postulated that as the cataractous lens loses its ability to maintain homeostasis, electrolyte concentration eventually increases within the lens, which further encourages osmotic movement of water into the lens fibers, increasing lens permeability even more. This damaging cycle may play a pivotal role in accelerating the rupture of lens fibers during the most advanced, nuclear stage of the galactosemic cataract.
Diagnosis
Treatment
Galactosemic infants present clinical symptoms just days after the onset of a galactose diet. They include difficulty feeding, diarrhea, lethargy, hypotonia, jaundice, cataract, and hepatomegaly (enlarged liver). If not treated immediately, and many times even with treatment, severe mental deficiencies, verbal dyspraxia (difficulty with speech), motor abnormalities, and reproductive complications may ensue. The most effective treatment for many of the initial symptoms is complete removal of galactose from the diet. Breast milk and cow's milk should be replaced with soy alternatives. Infant formulas based on casein hydrolysates and dextrin maltose as a carbohydrate source can also be used for initial management, but these are still high in galactose. The reason for long-term complications despite discontinuation of the galactose diet is only vaguely understood. However, it has been suggested that endogenous (internal) production of galactose may be the cause.
The treatment for galactosemic cataract is no different from general galactosemia treatment. In fact, galactosemic cataract is one of the few symptoms that is actually reversible. Infants should be immediately removed from a galactose diet when symptoms present, and the cataract should disappear and vision should return to normal. Aldose reductase inhibitors, such as sorbinil, have also proven promising in preventing and reversing galactosemic cataracts. AR inhibitors hinder aldose reductase from synthesizing galactitol in the lens, and thus restrict the osmotic swelling of the lens fibers. Other AR inhibitors include the acetic acid compounds zopolrestat, tolrestat, alrestatin, and epalrestat. Many of these compounds have not been successful in clinical trials due to adverse pharmacokinetic properties, inadequate efficacy, and toxic side effects. Testing of such drug treatments continues, both to determine potential long-term complications and to establish in more detail how AR inhibitors prevent and reverse the galactosemic cataract.
Research
Although advancement has been slow to come during the decades of research dedicated to the galactosemic cataract, some notable additions have been made. In 2006, Michael L. Mulhern and colleagues further investigated the effects of the osmotic swelling on galactosemic cataract development. Experiments were based on systematic observation of rats fed a 50% galactose diet. According to Mulhern, 7 to 9 days after the onset of the galactose diet, lenses appeared hydrated and highly vacuolated. Lens fibers became liquefied after nine days of the diet, and nuclear cataract formation appeared after 15 days of the diet.
The experiment concluded that apoptosis in lens epithelial cells (LEC) is linked to cataract formation. Essentially, the study suggested that the mechanism outlined by Friedenwald and Kinoshita, which centers on osmotic swelling of the lens fibers, is just the beginning of a cascade of events that causes and drives the progression of the galactosemic cataract. Mulhern determined that osmotic swelling is actually a cataractogenic stressor that leads to LEC apoptosis. This is because osmotic swelling of lens fibers considerably strains LEC endoplasmic reticula. As the endoplasmic reticulum is the principal site of protein synthesis, stressors on the ER can cause proteins to become misfolded. The subsequent accumulation of misfolded proteins in the ER activates the unfolded protein response (UPR) in LECs. In agreement with this, activation of the UPR upon galactose treatment was later observed in galactosemic yeast models. The UPR initiates apoptosis, or cell death, by various mechanisms, one of which is the release of reactive oxygen species (ROS). Thus, according to recent findings, osmotic swelling, the UPR, oxidative damage, and the resultant LEC apoptosis all play key roles in the onset and progression of the galactosemic cataract. Other studies claim that the oxidative damage in LECs is less a result of the release of ROS and more because of the competition between aldose reductase and glutathione reductase for nicotinamide adenine dinucleotide phosphate (NADPH). Aldose reductase requires NADPH for the reduction of galactose to galactitol, while glutathione reductase utilizes NADPH to reduce glutathione disulfide (GSSG) to its sulfhydryl form, GSH. GSH is an important cellular antioxidant. Therefore, what exactly the key roles are for these cataractogenic factors is not yet fully understood or agreed upon by researchers. Recently, it has been shown that the intake of milk (lactose and galactose) in the human diet does not seem to be a cause of cataract.
See also
Metabolic disorder
References
External links
Genetics Home Reference
Patient UK
Inborn errors of carbohydrate metabolism
Eye
Galactose | Galactosemic cataract | Chemistry | 3,410 |
48,186,544 | https://en.wikipedia.org/wiki/BgK | BgK is a neurotoxin found within secretions of the sea anemone Bunodosoma granulifera which blocks voltage-gated potassium channels, thus inhibiting neuronal repolarization.
Etymology
The neurotoxin was named BgK, with Bg taken from the Latin binomial (Bunodosoma granulifera) of the sea anemone from which the toxin was isolated, and K standing for the chemical symbol for potassium, owing to the toxin's observed effects on K+ channels.
Sources in Nature
BgK can be found in the mucus of the Bunodosoma granulifera, a common sea anemone found along the coasts of Cuba. Since it is a contracting sea anemone, it has two forms based on the position of its tentacles: open and closed. BgK is released when the anemone is in the closed form, a position it assumes during the day or during times of agitation. In this form, the anemone’s tentacles retract, releasing a mucus from a fibrous matrix found in the mesoglea, a space between the ectodermis and the gastrodermis. For every gram of freeze-dried mucus, there is 0.5 mg BgK.
Chemistry
BgK is composed of 37 amino acid residues and three disulfide bonds. The neurotoxin belongs to a family of toxins found within three different sea anemones. The two other anemone/toxin combinations are Stichodactyla helianthus and ShK, and Anemonia viridis and AsKS. All three of these toxins have an affinity for dendrotoxin-sensitive potassium channels found within rat brain membranes. BgK and ShK attenuate K+ channels in the neurons of rat dorsal root ganglia, in vitro. AsKS blocks potassium channel currents present in Xenopus oocytes. These toxins potentially represent a new structural type of potassium channel inhibitor. Compared to the short and well-studied scorpion toxins, these anemone toxins are of comparable length (35–37 residues) and have the same number of disulfide bridges (three). However, the anemone toxins do not share any sequence similarity with the scorpion toxins. Specifically, the different positions of the cysteine residues found within these toxins suggest that BgK, ShK, and AsKS constitute a new family of toxins.
The only homology BgK shares is with a double-headed protease inhibitor found in sea turtles; however, it is limited to only part of the inhibitor, with the greatest similarity found at the cysteine residues, which account for six of the eight conserved amino acids in the two sequences.
Target
BgK blocks the Kv1.1, Kv1.2, and Kv1.3 channels with similar affinities. The IC50 is 6 nM for Kv1.1, 15 nM for Kv1.2, and 10 nM for Kv1.3. Meanwhile, tests on the Kv3 family, specifically Kv3.1, show that this channel is insensitive to BgK at concentrations up to 0.125 μM.
Mode of Action
BgK competes with I-α-dendrotoxin, a known probe used to indicate the presence of certain potassium channels, for binding to synaptic membranes within rat brains. The binding sites of the toxin on Kv1.1, Kv1.2, and Kv1.3 were found to share three common amino acid residues: Lys-25, Tyr-26, and Ser-23. This combination appears to form the core residues that make up the binding site of all Kv1 channel blockers from sea anemones. For Kv1.1 in particular, the major reason for BgK's binding affinity to this channel is the electrostatic interaction between the side chain of Lys-25 and the carbonyl oxygens of the amino acids within the channel's selectivity filter. Another aspect of BgK's binding to Kv1.1 involves hydrophobic interactions between Tyr-379 of Kv1.1 and the dyad formed within BgK by Tyr-26 and Phe-6. Such interactions have been found to surround Lys-25 and could potentially strengthen the electrostatic interactions that form between this lysine and the oxygen atoms of the channel's filter.
Toxicity
The median lethal dose (LD50) of BgK for mice is 4.5 ng per gram. Symptoms observed include trembling of the tail, muscle twitches, salivation, and paralysis, which are the physical manifestations generally observed with potassium channel blockers.
Therapeutic Use
While BgK has been produced in Escherichia coli as a functional protein, exhibiting all of the effects on potassium channels found with BgK isolated from its natural source, there has been no research into any potential therapeutic purpose so far, with most of its use being for research on potassium channels.
References
Neurotoxins
Ion channel toxins
Sea anemone toxins
Potassium channel blockers | BgK | Chemistry | 1,067 |
35,949,366 | https://en.wikipedia.org/wiki/Xi1%20Ceti | {{DISPLAYTITLE:Xi1 Ceti}}
Xi1 Ceti , Latinized from ξ1 Ceti, is a binary star system located in the equatorial constellation of Cetus. It is visible to the naked eye with a combined apparent visual magnitude of +4.36. The distance to this system is approximately 340 light years based on parallax measurements, and it is drifting closer to the Sun with a radial velocity of −4 km/s. The proximity of the star to the ecliptic means it is subject to lunar occultations.
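As an aside on how the distance follows from a parallax measurement (the parallax value itself is not quoted above, so the figure below is back-derived and approximate), a distance of about 340 light years corresponds to roughly 104 parsecs, implying an annual parallax near 9.6 milliarcseconds.

```latex
% Parallax-distance relation with back-derived, approximate numbers
\[
  d\,[\mathrm{pc}] = \frac{1}{\pi\,[\mathrm{arcsec}]}, \qquad
  d \approx \frac{340\,\mathrm{ly}}{3.26\,\mathrm{ly\,pc^{-1}}} \approx 104\,\mathrm{pc}
  \;\Rightarrow\; \pi \approx 9.6\,\mathrm{mas}.
\]
```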
The spectroscopic binary nature of Xi1 Ceti was discovered in 1901 by William Wallace Campbell using the Mills spectrograph at the Lick Observatory. The pair have a circular orbit with a period of 4.5 years and a separation of . It is a suspected eclipsing binary with an amplitude of 0.03 in magnitude, which would suggest the orbital plane has a high inclination.
The primary, designated component A, is a mild barium giant star with a stellar classification of . Morgan and Keenan in 1973 had classified it as a bright giant star with an anomalous underabundance of the CN molecule. Evidence has been found for an overabundance of s-process elements, although this is disputed. The star has 3.8 times the mass and 18 times the radius of the Sun. The companion, component B, is a small white dwarf companion with 80% of the mass of the Sun and a class of DA4. It was detected in 1985 by its ultraviolet emission.
In Chinese, (), meaning Circular Celestial Granary, refers to an asterism consisting of α Ceti, κ1 Ceti, λ Ceti, μ Ceti, ξ1 Ceti, ξ2 Ceti, ν Ceti, γ Ceti, δ Ceti, 75 Ceti, 70 Ceti, 63 Ceti and 66 Ceti. Consequently, the Chinese name for Xi1 Ceti itself is "the Fifth Star of Circular Celestial Granary", .
References
External links
http://server3.wikisky.org/starview?object_type=1&object_id=979
http://www.alcyone.de/cgi-bin/search.pl?object=HR0649
G-type giants
Barium stars
White dwarfs
Spectroscopic binaries
Cetus
Ceti, Xi1
Durchmusterung objects
Ceti, 65
013611
010324
0649 | Xi1 Ceti | Astronomy | 514 |
38,900,262 | https://en.wikipedia.org/wiki/Tricholoma%20cingulatum | Tricholoma cingulatum is a mushroom of the agaric genus Tricholoma. First described in 1830 as Agaricus cingulatus by Elias Magnus Fries, it was transferred to the genus Tricholoma by Almfelt in 1830.
See also
List of North American Tricholoma
List of Tricholoma species
References
cingulatum
Fungi described in 1830
Fungi of Europe
Fungi of North America
Fungus species | Tricholoma cingulatum | Biology | 93 |
592,136 | https://en.wikipedia.org/wiki/Palladian%20architecture | Palladian architecture is a European architectural style derived from the work of the Venetian architect Andrea Palladio (1508–1580). What is today recognised as Palladian architecture evolved from his concepts of symmetry, perspective and the principles of formal classical architecture from ancient Greek and Roman traditions. In the 17th and 18th centuries, Palladio's interpretation of this classical architecture developed into the style known as Palladianism.
Palladianism emerged in England in the early 17th century, led by Inigo Jones, whose Queen's House at Greenwich has been described as the first English Palladian building. Its development faltered at the onset of the English Civil War. After the Stuart Restoration, the architectural landscape was dominated by the more flamboyant English Baroque. Palladianism returned to fashion after a reaction against the Baroque in the early 18th century, fuelled by the publication of a number of architectural books, including Palladio's own I quattro libri dell'architettura (The Four Books of Architecture) and Colen Campbell's Vitruvius Britannicus. Campbell's book included illustrations of Wanstead House, a building he designed on the outskirts of London and one of the largest and most influential of the early neo-Palladian houses. The movement's resurgence was championed by Richard Boyle, 3rd Earl of Burlington, whose buildings for himself, such as Chiswick House and Burlington House, became celebrated. Burlington sponsored the career of the artist, architect and landscaper William Kent, and their joint creation, Holkham Hall in Norfolk, has been described as "the most splendid Palladian house in England". By the middle of the century Palladianism had become almost the national architectural style, epitomised by Kent's Horse Guards at the centre of the nation's capital.
The Palladian style was also widely used throughout Europe, often in response to English influences. In Prussia the critic and courtier Francesco Algarotti corresponded with Burlington about his efforts to persuade Frederick the Great of the merits of the style, while Knobelsdorff's opera house in Berlin on the Unter den Linden, begun in 1741, was based on Campbell's Wanstead House. Later in the century, when the style was losing favour in Europe, Palladianism had a surge in popularity throughout the British colonies in North America. Thomas Jefferson sought out Palladian examples, which themselves drew on buildings from the time of the Roman Republic, to develop a new architectural style for the American Republic. Examples include the Hammond–Harwood House in Maryland and Jefferson's own house, Monticello, in Virginia. The Palladian style was also adopted in other British colonies, including those in the Indian subcontinent.
In the 19th century, Palladianism was overtaken in popularity by Neoclassical architecture in both Europe and in North America. By the middle of that century, both were challenged and then superseded by the Gothic Revival in the English-speaking world, whose champions such as Augustus Pugin, remembering the origins of Palladianism in ancient temples, deemed the style too pagan for true Christian worship. In the 20th and 21st centuries, Palladianism has continued to evolve as an architectural style; its pediments, symmetry and proportions are evident in the design of many modern buildings, while its inspirer is regularly cited as having been among the world's most influential architects.
Palladio's architecture
Andrea Palladio was born in Padua in 1508, the son of a stonemason. He was inspired by Roman buildings, the writings of Vitruvius (1st century BC), and his immediate predecessors Donato Bramante and Raphael. Palladio aspired to an architectural style that used symmetry and proportion to emulate the grandeur of classical buildings. His surviving buildings are in Venice, the Veneto region, and Vicenza, and include villas and churches such as the Basilica del Redentore in Venice. Palladio's architectural treatises follow the approach defined by Vitruvius and his 15th-century disciple Leon Battista Alberti, who adhered to principles of classical Roman architecture based on mathematical proportions rather than the ornamental style of the Renaissance. Palladio recorded and publicised his work in the 1570 four-volume illustrated study, I quattro libri dell'architettura (The Four Books of Architecture).
Palladio's villas are designed to fit with their setting. If on a hill, such as Villa Almerico Capra Valmarana (Villa Capra, or La Rotonda), façades were of equal value so that occupants could enjoy views in all directions. Porticos were built on all sides to enable the residents to appreciate the countryside while remaining protected from the sun. Palladio sometimes used a loggia as an alternative to the portico. This is most simply described as a recessed portico, or an internal single storey room with pierced walls that are open to the elements. Occasionally a loggia would be placed at second floor level over the top of another loggia, creating what was known as a double loggia. Loggias were sometimes given significance in a façade by being surmounted by a pediment. Villa Godi's focal point is a loggia rather than a portico, with loggias terminating each end of the main building.
Palladio would often model his villa elevations on Roman temple façades. The temple influence, often in a cruciform design, later became a trademark of his work. Palladian villas are usually built with three floors: a rusticated basement or ground floor, containing the service and minor rooms; above this, the piano nobile (noble level), accessed through a portico reached by a flight of external steps, containing the principal reception and bedrooms; and lastly a low mezzanine floor with secondary bedrooms and accommodation. The proportions of each room (for example, height and width) within the villa were calculated on simple mathematical ratios like 3:4 and 4:5. The arrangement of the different rooms within the house, and the external façades, were similarly determined. Earlier architects had used these formulas for balancing a single symmetrical façade; however, Palladio's designs related to the entire structure. Palladio set out his views in I quattro libri dell'architettura: "beauty will result from the form and correspondence of the whole, with respect to the several parts, of the parts with regard to each other, and of these again to the whole; that the structure may appear an entire and complete body, wherein each member agrees with the other, and all necessary to compose what you intend to form."
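As a hypothetical illustration of how such ratios fix a room's dimensions (the specific numbers are not Palladio's own), a room laid out on the 3:4 ratio mentioned above with a width of 18 feet would be 24 feet long.

```latex
% Hypothetical worked example of the 3:4 room proportion
\[
  \frac{\text{width}}{\text{length}} = \frac{3}{4}
  \quad\Longrightarrow\quad
  \text{length} = 18\,\text{ft} \times \frac{4}{3} = 24\,\text{ft}.
\]
```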
Palladio considered the dual purpose of his villas as the centres of farming estates and weekend retreats. These symmetrical temple-like houses often have equally symmetrical, but low, wings, or barchessas, sweeping away from them to accommodate horses, farm animals, and agricultural stores. The wings, sometimes detached and connected to the villa by colonnades, were designed not only to be functional but also to complement and accentuate the villa. Palladio did not intend them to be part of the main house, but the development of the wings to become integral parts of the main building – undertaken by Palladio's followers in the 18th century – became one of the defining characteristics of Palladianism.
Venetian and Palladian windows
Palladian, Serlian, or Venetian windows are a trademark of Palladio's early career. There are two different versions of the motif: the simpler one is called a Venetian window, and the more elaborate a Palladian window or "Palladian motif", although this distinction is not always observed.
The Venetian window has three parts: a central high round-arched opening, and two smaller rectangular openings to the sides. The side windows are topped by lintels and supported by columns. This is derived from the ancient Roman triumphal arch, and was first used outside Venice by Donato Bramante and later mentioned by Sebastiano Serlio (1475–1554) in his seven-volume architectural book Tutte l'opere d'architettura et prospetiva (All the Works of Architecture and Perspective) expounding the ideals of Vitruvius and Roman architecture. It can be used in series, but is often only used once in a façade, as at New Wardour Castle, or once at each end, as on the inner façade of Burlington House (true Palladian windows).
Palladio's elaboration of this, normally used in a series, places a larger or giant order in between each window, and doubles the small columns supporting the side lintels, placing the second column behind rather than beside the first. This was introduced in the Biblioteca Marciana in Venice by Jacopo Sansovino (1537), and heavily adopted by Palladio in the Basilica Palladiana in Vicenza, where it is used on both storeys; this feature was less often copied. The openings in this elaboration are not strictly windows, as they enclose a loggia. Pilasters might replace columns, as in other contexts. Sir John Summerson suggests that the omission of the doubled columns may be allowed, but the term "Palladian motif" should be confined to cases where the larger order is present.
Palladio used these elements extensively, for example in very simple form in his entrance to Villa Forni Cerato. It is perhaps this extensive use of the motif in the Veneto that has given the window its alternative name of the Venetian window. Whatever the name or the origin, this form of window has become one of the most enduring features of Palladio's work seen in the later architectural styles evolved from Palladianism. According to James Lees-Milne, its first appearance in Britain was in the remodelled wings of Burlington House, London, where the immediate source was in the English court architect Inigo Jones's designs for Whitehall Palace rather than drawn from Palladio himself. Lees-Milne describes the Burlington window as "the earliest example of the revived Venetian window in England".
A variant, in which the motif is enclosed within a relieving blind arch that unifies the motif, is not Palladian, though Richard Boyle seems to have assumed it was so, in using a drawing in his possession showing three such features in a plain wall. Modern scholarship attributes the drawing to Vincenzo Scamozzi. Burlington employed the motif in 1721 for an elevation of Tottenham Park in Savernake Forest for his brother-in-law Lord Bruce (since remodelled). William Kent used it in his designs for the Houses of Parliament, and it appears in his executed designs for the north front of Holkham Hall. Another example is Claydon House, in Buckinghamshire; the remaining fragment is one wing of what was intended to be one of two flanking wings to a vast Palladian house. The scheme was never completed and parts of what was built have since been demolished.
Early Palladianism
During the 17th century, many architects studying in Italy learned of Palladio's work, and on returning home adopted his style, leading to its widespread use across Europe and North America. Isolated forms of Palladianism throughout the world were brought about in this way, although the style did not reach the zenith of its popularity until the 18th century. An early reaction to the excesses of Baroque architecture in Venice manifested itself as a return to Palladian principles. The earliest neo-Palladians there were the exact contemporaries Domenico Rossi (1657–1737) and Andrea Tirali (1657–1737). Their biographer, Tommaso Temanza, proved to be the movement's most able proponent; in his writings, Palladio's visual inheritance became increasingly codified and moved towards neoclassicism.
The most influential follower of Palladio was Inigo Jones, who travelled throughout Italy with the art collector Earl of Arundel in 1613–1614, annotating his copy of Palladio's treatise. The "Palladianism" of Jones and his contemporaries and later followers was a style largely of façades, with the mathematical formulae dictating layout not strictly applied. A handful of country houses in England built between 1640 and 1680 are in this style. These follow the success of Jones's Palladian designs for the Queen's House at Greenwich, the first English Palladian house, and the Banqueting House at Whitehall, the uncompleted royal palace in London of Charles I.
Palladian designs advocated by Jones were too closely associated with the court of Charles I to survive the turmoil of the English Civil War. Following the Stuart restoration, Jones's Palladianism was eclipsed by the Baroque designs of such architects as William Talman, Sir John Vanbrugh, Nicholas Hawksmoor, and Jones's pupil John Webb.
Neo-Palladianism
English Palladian architecture
The Baroque style proved highly popular in continental Europe, but was often viewed with suspicion in England, where it was considered "theatrical, exuberant and Catholic." It was superseded in Britain in the first quarter of the 18th century when four books highlighted the simplicity and purity of classical architecture. These were:
Vitruvius Britannicus (The British Architect), published by Colen Campbell in 1715 (of which supplemental volumes appeared through the century);
I quattro libri dell'architettura (The Four Books of Architecture), by Palladio himself, translated by Giacomo Leoni and published from 1715 onwards;
De re aedificatoria (On the Art of Building), by Leon Battista Alberti, translated by Giacomo Leoni and published in 1726; and
The Designs of Inigo Jones... with Some Additional Designs, published by William Kent in two volumes in 1727. A further volume, Some Designs of Mr. Inigo Jones and Mr. William Kent was published in 1744 by the architect John Vardy, an associate of Kent.
The most favoured among patrons was the four-volume Vitruvius Britannicus by Campbell. The series contains architectural prints of British buildings inspired by the great architects from Vitruvius to Palladio; at first mainly those of Inigo Jones, but the later works contained drawings and plans by Campbell and other 18th-century architects. These four books greatly contributed to Palladian architecture becoming established in 18th-century Britain. Campbell and Kent became the most fashionable and sought-after architects of the era. Campbell had placed his 1715 designs for the colossal Wanstead House near to the front of Vitruvius Britannicus, immediately following the engravings of buildings by Jones and Webb, "as an exemplar of what new architecture should be". On the strength of the book, Campbell was chosen as the architect for Henry Hoare I's Stourhead house. Hoare's brother-in-law, William Benson, had designed Wilbury House, the earliest 18th-century Palladian house in Wiltshire, which Campbell had also illustrated in Vitruvius Britannicus.
At the forefront of the new school of design was the "architect earl", Richard Boyle, 3rd Earl of Burlington, according to Dan Cruikshank the "man responsible for this curious elevation of Palladianism to the rank of a quasi-religion". In 1729 he and Kent designed Chiswick House. This house was a reinterpretation of Palladio's Villa Capra, but purified of 16th century elements and ornament. This severe lack of ornamentation was to be a feature of English Palladianism.
In 1734 Kent and Burlington designed Holkham Hall in Norfolk. James Stevens Curl considers it "the most splendid Palladian house in England". The main block of the house followed Palladio's dictates, but his low, often detached, wings of farm buildings were elevated in significance. Kent attached them to the design, banished the farm animals, and elevated the wings to almost the same importance as the house itself. It was the development of the flanking wings that was to cause English Palladianism to evolve from being a pastiche of Palladio's original work. Wings were frequently adorned with porticos and pediments, often resembling, as at the much later Kedleston Hall, small country houses in their own right.
Architectural styles evolve and change to suit the requirements of each individual client. When in 1746 the Duke of Bedford decided to rebuild Woburn Abbey, he chose the fashionable Palladian style, and selected the architect Henry Flitcroft, a protégé of Burlington. Flitcroft's designs, while Palladian in nature, had to comply with the Duke's determination that the plan and footprint of the earlier house, originally a Cistercian monastery, be retained. The central block is small, has only three bays, while the temple-like portico is merely suggested, and is closed. Two great flanking wings containing a vast suite of state rooms replace the walls or colonnades which should have connected to the farm buildings; the farm buildings terminating the structure are elevated in height to match the central block and given Palladian windows, to ensure they are seen as of Palladian design. This development of the style was to be repeated in many houses and town halls in Britain over one hundred years. Often the terminating blocks would have blind porticos and pilasters themselves, competing for attention with, or complementing the central block. This was all very far removed from the designs of Palladio two hundred years earlier. Falling from favour during the Victorian era, the approach was revived by Sir Aston Webb for his refacing of Buckingham Palace in 1913.
The villa tradition continued throughout the late 18th century, particularly in the suburbs around London. Sir William Chambers built many examples, such as Parkstead House. But the grander English Palladian houses were no longer the small but exquisite weekend retreats that their Italian counterparts were intended as. They had become "power houses", in Sir John Summerson's words, the symbolic centres of the triumph and dominance of the Whig Oligarchy who ruled Britain unchallenged for some fifty years after the death of Queen Anne. Summerson thought Kent's Horse Guards on Whitehall epitomised "the establishment of Palladianism as the official style of Great Britain". As the style peaked, thoughts of mathematical proportion were swept away. Rather than square houses with supporting wings, these buildings had the length of the façade as their major consideration: long houses often only one room deep were deliberately deceitful in giving a false impression of size.
Irish Palladian architecture
During the Palladian revival period in Ireland, even modest mansions were cast in a neo-Palladian mould. Irish Palladian architecture subtly differs from the English style. While adhering, as in other countries, to the basic ideals of Palladio, it is often truer to them. In Ireland, Palladianism became political; both the original and the present Irish parliaments in Dublin occupy Palladian buildings.
The Irish architect Sir Edward Lovett Pearce (1699–1733) became a leading advocate. He was a cousin of Sir John Vanbrugh, and originally one of his pupils. He rejected the Baroque style, and spent three years studying architecture in France and Italy before returning to Ireland. His most important Palladian work is the former Irish Houses of Parliament in Dublin. Christine Casey, in her 2005 volume Dublin, in the Pevsner Buildings of Ireland series, considers the building, "arguably the most accomplished public set-piece of the Palladian style in [Britain]". Pearce was a prolific architect who went on to design the southern façade of Drumcondra House in 1725 and Summerhill House in 1731, which was completed after his death by Richard Cassels. Pearce also oversaw the building of Castletown House near Dublin, designed by the Italian architect Alessandro Galilei (1691–1737). It is perhaps the only Palladian house in Ireland built with Palladio's mathematical ratios, and one of a number of Irish mansions which inspired the design of the White House in Washington, D.C.
Other examples include Russborough, designed by Richard Cassels, who also designed the Palladian Rotunda Hospital in Dublin and Florence Court in County Fermanagh. Irish Palladian country houses often feature robust Rococo plasterwork – an Irish specialty which was frequently executed by the Lafranchini brothers and far more flamboyant than the interiors of their contemporaries in England. In the 20th century, during and following the Irish War of Independence and the subsequent civil war, large numbers of Irish country houses, including some fine Palladian examples such as Woodstock House, were abandoned to ruin or destroyed.
North American Palladian architecture
Palladio's influence in North America is evident almost from its first architect-designed buildings. The Irish philosopher George Berkeley, who may be America's first recorded Palladian, bought a large farmhouse in Middletown, Rhode Island, in the late 1720s, and added a Palladian doorcase derived from Kent's Designs of Inigo Jones (1727), which he may have brought with him from London. Palladio's work was included in the library of a thousand volumes amassed for Yale College. Peter Harrison's 1749 designs for the Redwood Library in Newport, Rhode Island, borrow directly from Palladio's I quattro libri dell'architettura, while his plan for the Newport Brick Market, conceived a decade later, is also Palladian.
Two colonial period houses that can be definitively attributed to designs from I quattro libri dell'architettura are the Hammond-Harwood House (1774) in Annapolis, Maryland, and Thomas Jefferson's first Monticello (1770). Hammond-Harwood was designed by the architect William Buckland in 1773–1774 for the wealthy farmer Matthias Hammond of Anne Arundel County, Maryland. The design source is the Villa Pisani, and that for the first Monticello, the Villa Cornaro at Piombino Dese. Both are taken from Book II, Chapter XIV of I quattro libri dell'architettura. Jefferson later made substantial alterations to Monticello, known as the second Monticello (1802–1809), making the Hammond-Harwood House the only remaining house in North America modelled directly on a Palladian design.
Jefferson referred to I quattro libri dell'architettura as his bible. Although a statesman, his passion was architecture, and he developed an intense appreciation of Palladio's architectural concepts; his designs for the James Barbour Barboursville estate, the Virginia State Capitol, and the University of Virginia campus were all based on illustrations from Palladio's book. Realising the political significance of ancient Roman architecture to the fledgling American Republic, Jefferson designed his civic buildings, such as The Rotunda, in the Palladian style, echoing in his buildings for the new republic examples from the old.
In Virginia and the Carolinas, the Palladian style is found in numerous plantation houses, such as Stratford Hall, Westover Plantation and Drayton Hall. Westover's north and south entrances, made of imported English Portland stone, were patterned after a plate in William Salmon's Palladio Londinensis (1734). The distinctive feature of Drayton Hall, its two-storey portico, was derived from Palladio, as was Mount Airy, in Richmond County, Virginia, built in 1758–1762. A particular feature of American Palladianism was the re-emergence of the great portico which, as in Italy, fulfilled the need of protection from the sun; the portico in various forms and size became a dominant feature of American colonial architecture. In the north European countries the portico had become a mere symbol, often closed, or merely hinted at in the design by pilasters, and sometimes in very late examples of English Palladianism adapted to become a porte-cochère; in America, the Palladian portico regained its full glory.
The White House in Washington, D.C., was inspired by Irish Palladianism. Its architect James Hoban, who built the executive mansion between 1792 and 1800, was born in Callan, County Kilkenny, in 1762, the son of tenant farmers on the estate of Desart Court, a Palladian House designed by Pearce. He studied architecture in Dublin, where Leinster House (built ) was one of the finest Palladian buildings of the time. Both Cassel's Leinster House and James Wyatt's Castle Coole have been cited as Hoban's inspirations for the White House but the more neoclassical design of that building, particularly of the South façade which closely resembles Wyatt's 1790 design for Castle Coole, suggests that Coole is perhaps the more direct progenitor. The architectural historian Gervase Jackson-Stops describes Castle Coole as "a culmination of the Palladian traditions, yet strictly neoclassical in its chaste ornament and noble austerity", while Alistair Rowan, in his 1979 volume, North West Ulster, of the Buildings of Ireland series, suggests that, at Coole, Wyatt designed a building, "more massy, more masculine and more totally liberated from Palladian practice than anything he had done before."
Because of its later development, Palladian architecture in Canada is rarer. In her 1984 study, Palladian Style in Canadian Architecture, Nathalie Clerk notes its particular impact on public architecture, as opposed to the private houses in the United States. One example of historical note is the Nova Scotia Legislature building, completed in 1819. Another example is Government House in St. John's, Newfoundland.
Palladianism elsewhere
The rise of neo-Palladianism in England contributed to its adoption in Prussia. Count Francesco Algarotti wrote to Lord Burlington to inform him that he was recommending to Frederick the Great the adoption in his own country of the architectural style Burlington had introduced in England. By 1741, Georg Wenzeslaus von Knobelsdorff had already begun construction of the Berlin Opera House on the Unter den Linden, based on Campbell's Wanstead House.
Palladianism was particularly adopted in areas under British colonial rule. Examples can be seen in the Indian subcontinent; the Raj Bhavan, Kolkata (formerly Government House) was modelled on Kedleston Hall, while the architectural historian Pilar Maria Guerrieri identifies its influences in Lutyens' Delhi. In South Africa, Federico Freschi notes the "Tuscan colonnades and Palladian windows" of Herbert Baker's Union Buildings.
Legacy
By the 1770s, British architects such as Robert Adam and William Chambers were in high demand, but were now drawing on a wide variety of classical sources, including from ancient Greece, so much so that their forms of architecture became defined as neoclassical rather than Palladian. In Europe, the Palladian revival ended by the close of the 18th century. In the 19th century, proponents of the Gothic Revival such as Augustus Pugin, remembering the origins of Palladianism in ancient temples, considered it pagan, and unsuited to Anglican and Anglo-Catholic worship. In North America, Palladianism lingered a little longer; Thomas Jefferson's floor plans and elevations owe a great deal to Palladio's I quattro libri dell'architettura.
The term Palladian is often misused in modern discourse and tends to be used to describe buildings with any classical pretensions. There was a revival of a more serious Palladian approach in the 20th century when Colin Rowe, an influential architectural theorist, published his essay, The Mathematics of the Ideal Villa, (1947), in which he drew links between the compositional "rules" in Palladio's villas and Le Corbusier's villas at Poissy and Garches. Suzanne Walters' article The Two Faces of Modernism suggests a continuing influence of Palladio's ideas on architects of the 20th century. In the 21st century Palladio's name regularly appears among the world's most influential architects. In England, Raymond Erith (1904–1973) drew on Palladian inspirations, and was followed in this by his pupil, subsequently partner, Quinlan Terry. Their work, and that of others, led the architectural historian John Martin Robinson to suggest that "the Quattro Libri continues as the fountainhead of at least one strand in the English country house tradition."
See also
City of Vicenza and the Palladian Villas of the Veneto
New Classical architecture
Giacomo Quarenghi
Riviera del Brenta
Notes, references and sources
Notes
References
Sources
External links
Center for Palladian Studies in America
Inigo Jones document collection at Worcester College, Oxford
International centre for the study of the architecture of Andrea Palladio (CISA)
Thomas Jefferson's architecture
Article on Palladian architecture in colonial Singapore, published by the Department of Architecture and Urban Planning
Architectural history
Architectural styles
Architectural design
British architectural styles
House styles | Palladian architecture | Engineering | 6,018 |
21,233,035 | https://en.wikipedia.org/wiki/Wolfgang%20von%20Wersin | Wolfgang von Wersin (3 December 1882 – 13 June 1976) was a Czech-born designer, painter, architect and author who developed his career in Germany.
Born in Prague, he studied architecture at the Technical University of Munich (1901–1904) and, in parallel (1902 to 1905), he also studied drawing and painting at the Lehr- und Versuch-Atelier für Angewandte und Freie Kunst ("Teaching and Experimental Atelier for Applied and Free Art"), a reform-oriented art school in the same city. From 1906 onwards, after he completed his military service, he became a tutor there. His constant collaborator and eventual wife, the German printmaker and draughtswoman Herthe Schöpp (1888–1971), first met him as his pupil. In 1909 he began working as a designer for numerous firms, including the Behr furniture factory and the Meissen porcelain manufacturers. In 1929, he assumed the directorship of the Neue Sammlung – established in Munich in 1925 as the department for artisan art at the National Museum – and remained there until his illegal dismissal by the National Socialists in 1934.
In 1956 he wrote The Book of Rectangles, Spatial Law and Gestures of The Orthogons Described, in which he describes a set of 12 dynamic rectangles he calls orthogons.
Style
Wersin's early designs are characterized by East-Asian forms; however, he eventually developed a style free of any clear influence (including rural folk art) and achieved a timelessly classical style of great objectivity, revealed above all in articles for everyday use, such as porcelain, glass, tableware, fabric and wallpaper.
Orthogon information
Wolfgang Von Wersin's book about the Orthogons gives detailed information about how to construct and use a special set of 12 inter-related rectangles to create a design. They are similar to what Jay Hambidge called dynamic rectangles. The set of 12 Orthogons is determined by expanding a square through a series of arcs and cross-points until another square is formed on top, an exact duplication of the original square.
Wersin also explains in the book how Orthogons can be detected and used in architecture, ceramics, furniture and works of art.
The value of using Orthogons is explained in an excerpt that includes an extraordinary copy of text from the year 1558 (Renaissance). Diagrams of seven of the 12 orthogons are accompanied by a passage from the 1558 text cautioning that careful attention be given as the "ancient" architects believed "nothing excels these proportions" as "a thing of the purest abstraction."
One of the orthogons, the hemidiagon, is apparent in the designs of synagogues in ancient Galilee.
A well-known Orthogon, the Auron (golden rectangle), has been employed to create a range of designs from posters and chapels (Mies van der Rohe) to chairs and glassware.
The Auron is related to musical harmony, in that the golden ratio is among the most dissonant musical intervals, and is also included in discussions on sacred architecture and sacred geometry as well as information regarding dynamic symmetry and aesthetics.
According to Von Wersin, "The Orthogons are without exception root figures and are all irrational numbers. The calculations for measure relations of the Orthogons are based, without exception, on the Pythagorean doctrine." Examples of these root figure relations are: the Diagon relation is 1: square root of 2, the Sixton is 1: square root of 3 and the Doppelquadrat is 1: square root of 4.
Ratios for all twelve Orthogons:
Quadrat 1:1 – Hemidiagon 1:1.118 – Trion 1:1.154 – Quadriagon 1:1.207 – Biauron 1:1.236 – Penton 1:1.376 –
Diagon 1:1.414 – Bipenton 1:1.46 – Hemiolion 1:1.5 – Auron 1:1.618 – Sixton 1:1.732 – Doppelquadrat 1:2
(Quadrat is the German word for square, and Doppelquadrat for double square.)
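The stated root relations can be checked numerically. The following Python sketch is purely illustrative: it compares the decimal ratios quoted above with square-root expressions; the Diagon, Sixton and Doppelquadrat roots are given in the text and the Auron is the golden rectangle, while the closed forms suggested for the Hemidiagon and Biauron are assumptions rather than statements from this article.

```python
import math

# Decimal ratios of the twelve Orthogons as quoted above (long side : short side).
quoted = {
    "Quadrat": 1.0, "Hemidiagon": 1.118, "Trion": 1.154, "Quadriagon": 1.207,
    "Biauron": 1.236, "Penton": 1.376, "Diagon": 1.414, "Bipenton": 1.46,
    "Hemiolion": 1.5, "Auron": 1.618, "Sixton": 1.732, "Doppelquadrat": 2.0,
}

# Closed forms: Diagon, Sixton and Doppelquadrat are stated in the text as roots of
# 2, 3 and 4, and the Auron is the golden rectangle.  The expressions used for the
# Hemidiagon and Biauron are assumed identifications, not taken from the article.
closed_forms = {
    "Diagon": math.sqrt(2),
    "Sixton": math.sqrt(3),
    "Doppelquadrat": math.sqrt(4),
    "Auron": (1 + math.sqrt(5)) / 2,   # golden ratio
    "Hemidiagon": math.sqrt(5) / 2,    # assumption
    "Biauron": math.sqrt(5) - 1,       # assumption
}

for name, exact in closed_forms.items():
    print(f"{name:13s} quoted {quoted[name]:.3f}   closed form {exact:.3f}")
```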
See also
Aesthetics
Auron
Golden rectangle
Golden section
Phi (letter)
Logarithmic spiral
Fibonacci number
Sacred architecture
Religious art
Dynamic symmetry
Giorgio Morandi
Georges Braque
Vitruvian Man
De architectura
Square root of 2
Square root of 3
Square root of 4
Square root of 5
Sources
Albrecht Dürer, Of the Just Shaping of Letters, From the Applied Geometry of Albrecht Dürer Book; Dover Publications, NY, NY.
Keith Critchlow, Order in Space: A Design Source Book; 1970, Viking, NY, NY.
Kimberly Elam, Geometry of Design: Studies in Proportion and Composition; 2001, Princeton Architectural Press, NY, NY.
Jay Hambidge, The Elements of Dynamic Symmetry; 1967, Dover Publications, NY, NY.
Hemenway, Priya; Divine Proportion, Phi in Art, Nature and Science; 2005, Sterling Publishing Co., Inc, NY, NY.
Michael S. Schneider, A Beginner's Guide to Constructing the Universe: Mathematical Archetypes of Nature, Art, and Science; 1994, Harper Paperbacks.
Alfred Ziffer; Wolfgang Von Wersin 1882–1976 Vom Kunstgewerbe zur Industrieform; 1991 Klinkhardt & Biermann, Munchen, Germany.
References
1882 births
1976 deaths
20th-century German painters
20th-century male artists
Architects from Prague
20th-century German architects
German male painters
German male writers
Sacred geometry
Technical University of Munich alumni | Wolfgang von Wersin | Engineering | 1,223 |
1,364,502 | https://en.wikipedia.org/wiki/Belt%20%28mechanical%29 | A belt is a loop of flexible material used to link two or more rotating shafts mechanically, most often parallel. Belts may be used as a source of motion, to transmit power efficiently or to track relative movement. Belts are looped over pulleys and may have a twist between the pulleys, and the shafts need not be parallel.
In a two pulley system, the belt can either drive the pulleys normally in one direction (the same if on parallel shafts), or the belt may be crossed, so that the direction of the driven shaft is reversed (the opposite direction to the driver if on parallel shafts). The belt drive can also be used to change the speed of rotation, either up or down, by using different sized pulleys.
As a source of motion, a conveyor belt is one application where the belt is adapted to carry a load continuously between two points.
History
The mechanical belt drive, using a pulley machine, was first mentioned in the text of the Dictionary of Local Expressions by the Han Dynasty philosopher, poet, and politician Yang Xiong (53–18 BC) in 15 BC, used for a quilling machine that wound silk fibres onto bobbins for weavers' shuttles. The belt drive is an essential component of the invention of the spinning wheel. The belt drive was not only used in textile technologies, it was also applied to hydraulic-powered bellows dated from the 1st century AD.
Power transmission
Belts are the cheapest utility for power transmission between shafts that may not be axially aligned. Power transmission is achieved by purposely designed belts and pulleys. The variety of power transmission needs that can be met by a belt-drive transmission system are numerous, and this has led to many variations on the theme. Belt drives run smoothly and with little noise, and provide shock absorption for motors, loads, and bearings when the force and power needed changes. A drawback to belt drives is that they transmit less power than gears or chain drives. However, improvements in belt engineering allow use of belts in systems that formerly only allowed chain drives or gears.
Power transmitted between a belt and a pulley is expressed as the product of the difference in tension and the belt velocity:

$P = (T_1 - T_2)\,v$

where $T_1$ and $T_2$ are the tensions in the tight side and slack side of the belt respectively. They are related by

$\frac{T_1}{T_2} = e^{\mu \alpha}$

where $\mu$ is the coefficient of friction, and $\alpha$ is the angle (in radians) subtended by the contact surface at the centre of the pulley.
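As a rough numerical illustration of these two relations (not a design calculation), the Python sketch below estimates the power a belt can transmit at the point of gross slip, given an assumed tight-side tension, friction coefficient, wrap angle and belt speed; all of the numbers are hypothetical.

```python
import math

def slip_limited_power(t_tight_n, mu, wrap_angle_rad, belt_speed_ms):
    """P = (T1 - T2) * v, with T1/T2 = exp(mu * alpha) at the point of gross slip."""
    t_slack_n = t_tight_n / math.exp(mu * wrap_angle_rad)   # capstan relation
    return (t_tight_n - t_slack_n) * belt_speed_ms          # watts

# Hypothetical drive: 400 N tight-side tension, mu = 0.3, 170 deg wrap, 15 m/s belt speed.
p_watts = slip_limited_power(400.0, 0.3, math.radians(170), 15.0)
print(f"Transmittable power at the slip limit: {p_watts / 1000:.2f} kW")
```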
Power transmission loss form
Pros and cons
Belt drives are simple, inexpensive, and do not require axially aligned shafts. They help protect machinery from overload and jam, and damp and isolate noise and vibration. Load fluctuations are shock-absorbed (cushioned). They need no lubrication and minimal maintenance. They have high efficiency (90–98%, usually 95%), high tolerance for misalignment, and are of relatively low cost if the shafts are far apart. Clutch action can be achieved by shifting the belt to a free turning pulley or by releasing belt tension. Different speeds can be obtained by stepped or tapered pulleys.
The angular-velocity ratio may not be exactly constant or equal to that of the pulley diameters, due to slip and stretch. However, this problem can be largely solved by the use of toothed belts. Working temperatures range from . Adjustment of centre distance or addition of an idler pulley is crucial to compensate for wear and stretch.
Flat belts
Flat belts were widely used in the 19th and early 20th centuries in line shafting to transmit power in factories. They were also used in countless farming, mining, and logging applications, such as bucksaws, sawmills, threshers, silo blowers, conveyors for filling corn cribs or haylofts, balers, water pumps (for wells, mines, or swampy farm fields), and electrical generators. Flat belts are still used today, although not nearly as much as in the line-shaft era. The flat belt is a simple system of power transmission that was well suited for its day. It can deliver high power at high speeds (373 kW at 51 m/s; 115 mph), in cases of wide belts and large pulleys. Wide-belt-large-pulley drives are bulky, consuming much space while requiring high tension, leading to high loads, and are poorly suited to close-centers applications. V-belts have mainly replaced flat belts for short-distance power transmission; and longer-distance power transmission is typically no longer done with belts at all. For example, factory machines now tend to have individual electric motors.
Because flat belts tend to climb towards the higher side of the pulley, pulleys were made with a slightly convex or "crowned" surface (rather than flat) to allow the belt to self-center as it runs. Flat belts also tend to slip on the pulley face when heavy loads are applied, and many proprietary belt dressings were available that could be applied to the belts to increase friction, and so power transmission.
Flat belts were traditionally made of leather or fabric. Early flour mills in Ukraine had leather belt drives. After World War I, there was such a shortage of shoe leather that people cut up the belt drives to make shoes. Selling shoes was more profitable than selling flour for a time. Flour milling soon came to a standstill and bread prices rose, contributing to famine conditions. Leather drive belts were put to another use during the Rhodesian Bush War (1964–1979): To protect riders of cars and busses from land mines, layers of leather belt drives were placed on the floors of vehicles in danger zones. Today most belt drives are made of rubber or synthetic polymers. Grip of leather belts is often better if they are assembled with the hair side (outer side) of the leather against the pulley, although some belts are instead given a half-twist before joining the ends (forming a Möbius strip), so that wear can be evenly distributed on both sides of the belt. Belts ends are joined by lacing the ends together with leather thonging (the oldest of the methods), steel comb fasteners and/or lacing, or by gluing or welding (in the case of polyurethane or polyester). Flat belts were traditionally jointed, and still usually are, but they can also be made with endless construction.
Rope drives
In the mid 19th century, British millwrights discovered that multi-grooved pulleys connected by ropes outperformed flat pulleys connected by leather belts. Wire ropes were occasionally used, but cotton, hemp, manila hemp and flax rope saw the widest use. Typically, the rope connecting two pulleys with multiple V-grooves was spliced into a single loop that traveled along a helical path before being returned to its starting position by an idler pulley that also served to maintain the tension on the rope. Sometimes, a single rope was used to transfer power from one multiple-groove drive pulley to several single- or multiple-groove driven pulleys in this way.
In general, as with flat belts, rope drives were used for connections from stationary engines to the jack shafts and line shafts of mills, and sometimes from line shafts to driven machinery. Unlike leather belts, however, rope drives were sometimes used to transmit power over relatively long distances. Over long distances, intermediate sheaves were used to support the "flying rope", and in the late 19th century, this was considered quite efficient.
Round belts
Round belts are a circular cross section belt designed to run in a pulley with a 60 degree V-groove. Round grooves are only suitable for idler pulleys that guide the belt, or when (soft) O-ring type belts are used. The V-groove transmits torque through a wedging action, thus increasing friction. Nevertheless, round belts are for use in relatively low torque situations only and may be purchased in various lengths or cut to length and joined, either by a staple, a metallic connector (in the case of hollow plastic), gluing or welding (in the case of polyurethane). Early sewing machines utilized a leather belt, joined either by a metal staple or glued, to great effect.
Spring belts
Spring belts are similar to rope or round belts but consist of a long steel helical spring. They are commonly found on toy or small model engines, typically steam engines driving other toys or models or providing a transmission between the crankshaft and other parts of a vehicle. The main advantage over rubber or other elastic belts is that they last much longer under poorly controlled operating conditions. The distance between the pulleys is also less critical. Their main disadvantage is that slippage is more likely due to the lower coefficient of friction. The ends of a spring belt can be joined either by bending the last turn of the helix at each end by 90 degrees to form hooks, or by reducing the diameter of the last few turns at one end so that it "screws" into the other end.
V belts (also styled V-belts, vee belts, or, less commonly, wedge rope) solved the slippage and alignment problem. The V-belt is now the basic belt for power transmission. V-belts provide the best combination of traction, speed of movement, load of the bearings, and long service life. They are generally endless, and their general cross-section shape is roughly trapezoidal (hence the name "V"). The "V" shape of the belt tracks in a mating groove in the pulley (or sheave), with the result that the belt cannot slip off. The belt also tends to wedge into the groove as the load increases—the greater the load, the greater the wedging action—improving torque transmission and making the V-belt an effective solution, needing less width and tension than flat belts. V-belts trump flat belts with their small center distances and high reduction ratios. The preferred center distance is larger than the largest pulley diameter, but less than three times the sum of both pulleys. Optimal speed range is . V-belts need larger pulleys for their thicker cross-section than flat belts.
For high-power requirements, two or more V-belts can be joined side-by-side in an arrangement called a multi-V, running on matching multi-groove sheaves. This is known as a multiple-V-belt drive (or sometimes a "classical V-belt drive").
V-belts may be homogeneously rubber or polymer throughout, or there may be fibers embedded in the rubber or polymer for strength and reinforcement. The fibers may be of textile materials such as cotton, polyamide (such as nylon) or polyester or, for greatest strength, of steel or aramid (such as Technora, Twaron or Kevlar).
When an endless belt does not fit the need, jointed and link V-belts may be employed. Most models offer the same power and speed ratings as equivalently-sized endless belts and do not require special pulleys to operate. A link v-belt is a number of polyurethane/polyester composite links held together, either by themselves, such as Fenner Drives' PowerTwist, or Nu-T-Link (with metal studs). These provide easy installation and superior environmental resistance compared to rubber belts and are length-adjustable by disassembling and removing links when needed.
History of V-belts
Trade journal coverage of V-belts in automobiles from 1916 mentioned leather as the belt material, and mentioned that the V angle was not yet well standardized. The endless rubber V-belt was developed in 1917 by Charles C. Gates of the Gates Rubber Company. Multiple-V-belt drive was first arranged a few years later by Walter Geist of the Allis-Chalmers corporation, who was inspired to replace the single rope of multi-groove-sheave rope drives with multiple V-belts running parallel. Geist filed for a patent in 1925, and Allis-Chalmers began marketing the drive under the "Texrope" brand; the patent was granted in 1928 (). The "Texrope" brand still exists, although it has changed ownership and no longer refers to multiple-V-belt drive alone.
Multi-groove belts
A multi-groove, V-ribbed, or polygroove belt is made up of usually between 3 and 24 V-shaped sections alongside each other. This gives a thinner belt for the same drive surface, thus it is more flexible, although often wider. The added flexibility offers an improved efficiency, as less energy is wasted in the internal friction of continually bending the belt. In practice this gain of efficiency causes a reduced heating effect on the belt, and a cooler-running belt lasts longer in service. Belts are commercially available in several sizes, with usually a 'P' (sometimes omitted) and a single letter identifying the pitch between grooves. The 'PK' section with a pitch of 3.56 mm is commonly used for automotive applications.
A further advantage of the polygroove belt that makes them popular is that they can run over pulleys on the ungrooved back of the belt. Though this is sometimes done with V-belts with a single idler pulley for tensioning, a polygroove belt may be wrapped around a pulley on its back tightly enough to change its direction, or even to provide a light driving force.
Any V-belt's ability to drive pulleys depends on wrapping the belt around a sufficient angle of the pulley to provide grip. Where a single-V-belt is limited to a simple convex shape, it can adequately wrap at most three or possibly four pulleys, so can drive at most three accessories. Where more must be driven, such as for modern cars with power steering and air conditioning, multiple belts are required. As the polygroove belt can be bent into concave paths by external idlers, it can wrap any number of driven pulleys, limited only by the power capacity of the belt.
This ability to bend the belt at the designer's whim allows it to take a complex or "serpentine" path. This can assist the design of a compact engine layout, where the accessories are mounted more closely to the engine block and without the need to provide movable tensioning adjustments. The entire belt may be tensioned by a single idler pulley.
The nomenclature used for belt sizes varies by region and trade. An automotive belt with the number "740K6" or "6K740" indicates a belt in length, 6 ribs wide, with a rib pitch of (a standard thickness for a K series automotive belt would be 4.5mm). A metric equivalent would be usually indicated by "6PK1880" whereby 6 refers to the number of ribs, PK refers to the metric PK thickness and pitch standard, and 1880 is the length of the belt in millimeters.
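The two naming patterns described above can be read mechanically. The short Python sketch below is only an illustration of that description: it handles codes shaped like "740K6", "6K740" and "6PK1880", uses the smaller number as the rib count as a heuristic, and does not validate the profile letter against any published standard.

```python
import re

def parse_rib_belt_code(code):
    """Split a multi-rib belt code into rib count, profile letter(s) and length designation."""
    m = re.fullmatch(r"(\d+)(P?[A-Z])(\d+)", code.upper())
    if not m:
        raise ValueError(f"unrecognised belt code: {code}")
    a, profile, b = int(m.group(1)), m.group(2), int(m.group(3))
    ribs, length = (a, b) if a < b else (b, a)   # heuristic: the smaller number is the rib count
    return {"ribs": ribs, "profile": profile, "length_designation": length}

for code in ("740K6", "6K740", "6PK1880"):
    print(code, "->", parse_rib_belt_code(code))
```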
Ribbed belt
A ribbed belt is a power transmission belt featuring lengthwise grooves. It operates from contact between the ribs of the belt and the grooves in the pulley. Its single-piece structure is reported to offer an even distribution of tension across the width of the pulley where the belt is in contact, a power range up to 600 kW, a high speed ratio, serpentine drives (possibility to drive off the back of the belt), long life, stability and homogeneity of the drive tension, and reduced vibration. The ribbed belt may be fitted on various applications: compressors, fitness bikes, agricultural machinery, food mixers, washing machines, lawn mowers, etc.
Film belts
Though often grouped with flat belts, they are actually a different kind. They consist of a very thin (0.5–15 millimeters or 100–4000 micrometres) strip of plastic and occasionally rubber. They are generally intended for low-power (less than 10 watts), high-speed uses, allowing high efficiency (up to 98%) and long life. These are seen in business machines, printers, tape recorders, and other light-duty operations.
Timing belts
Timing belts (also known as toothed, notch, cog, or synchronous belts) are a positive transfer belt and can track relative movement. These belts have teeth that fit into a matching toothed pulley. When correctly tensioned, they have no slippage, run at constant speed, and are often used to transfer direct motion for indexing or timing purposes (hence their name). They are often used instead of chains or gears, so there is less noise and a lubrication bath is not necessary. Camshafts of automobiles, miniature timing systems, and stepper motors often utilize these belts. Timing belts need the least tension of all belts and are among the most efficient. They can bear up to at speeds of .
Timing belts with a helical offset tooth design are available. The helical offset tooth design forms a chevron pattern and causes the teeth to engage progressively. The chevron pattern design is self-aligning and does not make the noise that some timing belts make at certain speeds, and is more efficient at transferring power (up to 98%).
The advantages of timing belts include clean operation, energy efficiency, low maintenance, low noise, non slip performance, versatile load and speed capabilities.
Disadvantages include a relatively high purchase cost, the need for specially fabricated toothed pulleys, less protection from overloading, jamming, and vibration due to their continuous tension cords, the lack of clutch action (only possible with friction-drive belts), and the fixed lengths, which do not allow length adjustment (unlike link V-belts or chains).
Specialty belts
Belts normally transmit power on the tension side of the loop. However, designs for continuously variable transmissions exist that use belts that are a series of solid metal blocks, linked together as in a chain, transmitting power on the compression side of the loop.
Rolling roads
Belts used for rolling roads for wind tunnels can be capable of .
Standards for use
The open belt drive has parallel shafts rotating in the same direction, whereas the cross-belt drive also has parallel shafts but they rotate in opposite directions. The former is far more common, and the latter is not appropriate for timing and standard V-belts unless there is a twist between each pulley so that the pulleys only contact the same belt surface. Nonparallel shafts can be connected if the belt's center line is aligned with the center plane of the pulley. Industrial belts are usually reinforced rubber but sometimes leather types. Non-leather, non-reinforced belts can only be used in light applications.
The pitch line is the line between the inner and outer surfaces that is neither subject to tension (like the outer surface) nor compression (like the inner). It is midway through the surfaces in film and flat belts and dependent on cross-sectional shape and size in timing and V-belts. The standard reference pitch diameter can be estimated by taking the average of the tooth-tip diameter and the tooth-base diameter. The angular speed is inversely proportional to size, so the larger wheel turns with the lower angular velocity, and vice versa. Actual pulley speeds tend to be 0.5–1% less than generally calculated because of belt slip and stretch. In timing belts, the inverse ratio of the numbers of teeth gives the exact speed ratio.
The speed of the belt is:

$v = \pi D N$

where $D$ is the pitch diameter of the pulley and $N$ is its rotational speed (in revolutions per unit time).
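A small numerical illustration of the speed relations described above, including the 0.5–1% allowance for slip and stretch; the pulley sizes and speeds used here are hypothetical.

```python
import math

def belt_speed_ms(pitch_diameter_m, rpm, slip=0.0):
    """Belt linear speed: pulley circumference times revolutions per second, reduced by slip."""
    return math.pi * pitch_diameter_m * (rpm / 60.0) * (1.0 - slip)

def driven_rpm(drive_diameter_m, drive_rpm, driven_diameter_m, slip=0.0):
    """Angular speed is inversely proportional to pulley size."""
    return drive_rpm * (drive_diameter_m / driven_diameter_m) * (1.0 - slip)

# Hypothetical drive: 120 mm pulley at 1450 rpm driving a 300 mm pulley, 1% slip.
print(f"belt speed   : {belt_speed_ms(0.120, 1450, slip=0.01):.2f} m/s")
print(f"driven pulley: {driven_rpm(0.120, 1450, 0.300, slip=0.01):.0f} rpm")
```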
International use standards
Standards include:
ISO 9563: This standard specifies requirements and test methods for endless power transmission V-belts and V-ribbed belts.
ISO 4184: This standard specifies the dimensions of classical and narrow V-belts for general use.
ISO 9981: This standard deals with the dimensions of rubber synchronous belt drives.
ISO 9982: This standard covers the dimensions of polyurethane synchronous belt drives.
DIN 22101: This standard covers the design principles for belt conveyors used in bulk material handling, including safety requirements and testing methods.
ASME B29.1: This standard specifies the dimensions, tolerances, and quality requirements for roller chain drives, which include belts and sprockets.
ANSI/RMA IP-20 is a standard developed by the American National Standards Institute (ANSI) and the Rubber Manufacturers Association (RMA) that focuses on elastomeric belts used in industrial applications. This standard covers important aspects such as dimensions and tolerances, ensuring that the belts perform reliably and efficiently in various industrial settings.
SAE J1459 is a standard developed by the Society of Automotive Engineers (SAE) that focuses on automotive V-belts and V-ribbed belts. These belts are used in various automotive applications, such as power transmission between the engine and different accessories, including the alternator, power steering pump, air conditioning compressor, and water pump. The standard specifies test procedures, performance requirements, and dimensions to ensure the belts are reliable, durable, and suitable for automotive use.
ASTM D378 is a standard developed by the American Society for Testing and Materials (ASTM), which focuses on the testing of conveyor belts used in various industries for specific applications. Conveyor belts are essential for material handling and transportation in industries such as mining, construction, agriculture, and manufacturing. ASTM D378 covers the testing methods to evaluate conveyor belts for performance characteristics, such as fire resistance and oil resistance, ensuring that they meet safety and operational requirements.
Selection criteria
Belt drives are designed around the following requirements: the speeds of, and the power transmitted between, the drive and driven units; a suitable distance between shafts; and appropriate operating conditions. The power to be transmitted follows the relation given above, $P = (T_1 - T_2)\,v$.
Factors of power adjustment include speed ratio; shaft distance (long or short); type of drive unit (electric motor, internal combustion engine); service environment (oily, wet, dusty); driven unit loads (jerky, shock, reversed); and pulley-belt arrangement (open, crossed, turned). These are found in engineering handbooks and manufacturer's literature. When corrected, the power is compared to rated powers of the standard belt cross-sections at particular belt speeds to find a number of arrays that perform best. Now the pulley diameters are chosen. It is generally either large diameters or large cross-section that are chosen, since, as stated earlier, larger belts transmit this same power at low belt speeds as smaller belts do at high speeds. To keep the driving part at its smallest, minimal-diameter pulleys are desired. Minimum pulley diameters are limited by the elongation of the belt's outer fibers as the belt wraps around the pulleys. Small pulleys increase this elongation, greatly reducing belt life. Minimal pulley diameters are often listed with each cross-section and speed, or listed separately by belt cross-section. After the cheapest diameters and belt section are chosen, the belt length is computed. If endless belts are used, the desired shaft spacing may need adjusting to accommodate standard-length belts. It is often more economical to use two or more juxtaposed V-belts, rather than one larger belt.
In large speed ratios or small central distances, the angle of contact between the belt and pulley may be less than 180°. If this is the case, the drive power must be further increased, according to manufacturer's tables, and the selection process repeated. This is because power capacities are based on the standard of a 180° contact angle. Smaller contact angles mean less area for the belt to obtain traction, and thus the belt carries less power.
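The contact (wrap) angle on the smaller pulley of an open drive follows from elementary geometry; the sketch below, which uses hypothetical dimensions, simply flags drives whose wrap angle falls below the 180° reference on which rated powers are based.

```python
import math

def small_pulley_wrap_angle_deg(d_small_m, d_large_m, centre_distance_m):
    """Wrap angle on the smaller pulley of an open belt drive."""
    return math.degrees(
        math.pi - 2.0 * math.asin((d_large_m - d_small_m) / (2.0 * centre_distance_m))
    )

# Hypothetical drive: 100 mm and 400 mm pulleys, 450 mm apart.
angle = small_pulley_wrap_angle_deg(0.100, 0.400, 0.450)
print(f"wrap angle on the small pulley: {angle:.1f} deg")
if angle < 180.0:
    print("below 180 deg - increase the drive power per the manufacturer's correction tables")
```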
Belt friction
Belt drives depend on friction to operate, but excessive friction wastes energy and rapidly wears the belt. Factors that affect belt friction include belt tension, contact angle, and the materials used to make the belt and pulleys.
Belt tension
Power transmission is a function of belt tension. However, also increasing with tension is stress (load) on the belt and bearings. The ideal belt is that of the lowest tension that does not slip in high loads. Belt tensions should also be adjusted to belt type, size, speed, and pulley diameters. Belt tension is determined by measuring the force to deflect the belt a given distance per inch (or mm) of pulley. Timing belts need only adequate tension to keep the belt in contact with the pulley.
Belt wear
Fatigue, more so than abrasion, is the culprit for most belt problems. This wear is caused by stress from rolling around the pulleys. High belt tension; excessive slippage; adverse environmental conditions; and belt overloads caused by shock, vibration, or belt slapping all contribute to belt fatigue.
Belt vibration
Vibration signatures are widely used for studying belt drive malfunctions. Common faults include the effects of belt tension, speed, sheave eccentricity and misalignment. The effect of sheave eccentricity on the vibration signature of a belt drive is quite significant: although it does not necessarily increase the vibration magnitude, it creates strong amplitude modulation. When the top section of a belt is in resonance, the vibration of the machine is increased; however, the increase in machine vibration is not significant when only the bottom section of the belt is in resonance. The vibration spectrum tends to move to higher frequencies as the tension force of the belt is increased.
Belt dressing
Belt slippage can be addressed in several ways. Belt replacement is an obvious solution, and eventually the mandatory one (because no belt lasts forever). Often, though, before the replacement option is executed, retensioning (via pulley centerline adjustment) or dressing (with any of various coatings) may be successful to extend the belt's lifespan and postpone replacement. Belt dressings are typically liquids that are poured, brushed, dripped, or sprayed onto the belt surface and allowed to spread around; they are meant to recondition the belt's driving surfaces and increase friction between the belt and the pulleys. Some belt dressings are dark and sticky, resembling tar or syrup; some are thin and clear, resembling mineral spirits. Some are sold to the public in aerosol cans at auto parts stores; others are sold in drums only to industrial users.
Specifications
To fully specify a belt, the material, length, and cross-section size and shape are required. Timing belts, in addition, require that the size of the teeth be given. The length of an open belt is approximately the sum of twice the centre distance, half the circumference of each pulley, and a correction term given by the square of the difference of the radii divided by the centre distance:

$L \approx 2C + \pi (r_1 + r_2) + \frac{(r_2 - r_1)^2}{C}$

The correction term behaves in a manner similar to the Pythagorean theorem: the more the two radii differ for a given centre distance, the more length it adds, and as the radii approach each other it approaches zero.

In a crossed belt drive, the sum rather than the difference of the radii forms the correction term, so the belt length grows as either pulley is made larger:

$L \approx 2C + \pi (r_1 + r_2) + \frac{(r_1 + r_2)^2}{C}$
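Written out as code, the approximations described above take the usual machine-design form; the sketch below is illustrative only and the pulley radii and centre distance are hypothetical.

```python
import math

def belt_length_m(r1_m, r2_m, centre_distance_m, crossed=False):
    """Approximate belt length: 2C + pi*(r1 + r2) + (r2 -/+ r1)^2 / C."""
    term = (r1_m + r2_m) if crossed else (r2_m - r1_m)
    return 2.0 * centre_distance_m + math.pi * (r1_m + r2_m) + term ** 2 / centre_distance_m

# Hypothetical pulleys of 60 mm and 150 mm radius, 500 mm apart.
print(f"open drive   : {belt_length_m(0.060, 0.150, 0.500):.3f} m")
print(f"crossed drive: {belt_length_m(0.060, 0.150, 0.500, crossed=True):.3f} m")
```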
V-belt profiles
Metric v-belt profiles (note pulley angles are reduced for small radius pulleys):
* Common pulley design is to have a higher angle of the first part of the opening, above the so-called "pitch line".
For example, the pitch line for an SPZ belt could be 8.5 mm from the bottom of the "V"; in other words, the groove angle is 35° from 0–8.5 mm and 45° from 8.5 mm upwards.
See also
Belt-drive turntable
Belt-driven bicycle
Belt track
Conveyor belt
Gilmer belt
Lariat chain – a science exhibit showing the effects when a belt is run "too fast"
Roller chain
Timing belt (camshaft)
References
External links
Belt Passing Frequency Vibration Calculator | RITEC | Library & Tools
Chinese inventions
Mechanical power transmission | Belt (mechanical) | Physics | 5,798 |
13,521,210 | https://en.wikipedia.org/wiki/Edward%20Kofler | Edward Kofler (November 16, 1911 – April 22, 2007) was a mathematician who made important contributions to game theory and fuzzy logic by working out the theory of linear partial information.
Biography
Kofler was born in Brzeżany, in the Austro-Hungarian Empire (now western Ukraine), and, having studied game theory, graduated as a disciple of, among others, Hugo Steinhaus and Stefan Banach from the University of Lwów, Poland (now Lviv, Ukraine) and the University of Cracow. After graduation in 1939 Kofler returned to his family in Kolomyia (Kolomea, today in Ukraine), where he taught mathematics in a Polish high school. After the German attack on the town on 1 July 1941 he managed to escape to Kazakhstan together with his wife. In Alma-Ata he ran a Polish school with an orphanage in exile and worked there as a mathematics teacher. After World War II ended he returned home with the orphanage, accompanied by his wife and their baby son, and the family settled in Poland. In 1959 he accepted a position as lecturer in the faculty of economics at the University of Warsaw. In 1962 he gained a Ph.D. with his thesis Economic Decisions, Applying Game Theory, and in the same year became assistant professor in the faculty of social science at the same university, specializing in econometrics.
In 1969 he migrated to Zürich, Switzerland, where he was employed at the Institute for Empirical Research in Economics at the University of Zürich and was a scientific advisor at the Swiss National Science Foundation (Schweizerischer Nationalfonds zur Förderung der wissenschaftlichen Forschung). In Zürich in 1970 Kofler developed his linear partial information (LPI) theory, which allows qualified decisions to be made on the basis of fuzzy logic, that is, incomplete or fuzzy a priori information.
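A minimal sketch of the kind of problem LPI addresses: the probabilities of the states of nature are known only to satisfy some linear constraints, and each action is judged by its worst-case expected payoff over that set (a MaxEmin-type criterion). The payoff matrix, the ordering constraint on the probabilities and the use of SciPy's linear-programming routine are illustrative assumptions, not Kofler's own formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoffs: rows are actions, columns are states of nature.
payoffs = np.array([[10.0, 4.0, -2.0],
                    [ 6.0, 5.0,  3.0],
                    [ 2.0, 2.0,  2.0]])

# Linear partial information about p = (p1, p2, p3): p1 >= p2 >= p3 >= 0, sum(p) = 1.
A_ub = np.array([[-1.0,  1.0, 0.0],   # p2 - p1 <= 0
                 [ 0.0, -1.0, 1.0]])  # p3 - p2 <= 0
b_ub = np.zeros(2)
A_eq, b_eq = np.ones((1, 3)), np.array([1.0])

def worst_case_expected_payoff(action_payoffs):
    # Minimise the expected payoff over all distributions admitted by the LPI.
    res = linprog(c=action_payoffs, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * 3, method="highs")
    return res.fun

guarantees = [worst_case_expected_payoff(row) for row in payoffs]
print("worst-case expected payoffs:", [round(g, 3) for g in guarantees])
print("MaxEmin choice: action", int(np.argmax(guarantees)))
```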
Kofler was visiting professor at the University of St Petersburg (former Leningrad, Russia), University of Heidelberg (Germany), McMaster University (Hamilton, Ontario, Canada) and University of Leeds (England). He collaborated with many well known specialists in information theory, such as Oskar R. Lange in Poland, Nicolai Vorobiev in the Soviet Union, Günter Menges in Germany, and Heidi Schelbert and Peter Zweifel in Zürich. He was the author of many books and articles. He died in Zürich.
See also
Linear partial information
Fuzzy logic
Game theory
Fuzzy set
Decision making
Stochastics
Probability
Information theory
Lwów School of Mathematics
List of Swiss people
References
Bibliography
"Set theory Considerations on the Chess Game and the Theory of Corresponding Elements"- Mathematics Seminar at the University of Lviv, 1936
On the history of mathematics (Fejezetek a matematika történetéből) – book, 339 pages, Warsaw 1962 and Budapest 1965
From the digit to infinity – book, 312 pages, Warsaw 1960
Economic decisions and the theory of games – Dissertation, University of Warsaw 1961
Introduction to game theory – book, 230 pages, Warsaw 1962
Optimization of multiple goals, Przeglad Statystyczny, Warsaw 1965
The value of information – book, 104 pages, Warsaw 1967
(With H. Greniewski and N. Vorobiev) Strategy of games, book, 80 pages, Warsaw 1968
"Das Modell des Spiels in der wissenschaftlichen Planung" Mathematik und Wirtschaft No.7, East Berlin 1969
Konfidenzintervalle in Entscheidungen bei Ungewissheit, Stattliche Hefte, 1976/1
"Entscheidungen bei teilweise bekannter Verteilung der Zustande", Zeitschrift für OR, Bd. 18/3, 1974, S 141-157
"Konfidenzintervalle in Entscheidungen bei Ungewissheit", Statistische Hefte, 1976/1, S. 1-21
(With G. Menges) Entscheidungen bei unvollständiger Information, Springer Verlag, 1976
(With G. Menges) "Cognitive Decisions under Partial Information", in R.J. Bogdan (ed.), Local Induction, Reidel, Dordrecht-Holland, 1976
(With G. Menges) "Entscheidungen bei unvollständiger Information", volume 136 of Lecture Notes in Economics and Mathematical Systems. Springer, Berlin, 1976.
(With G. Menges) "Stochastic Linearisation of Indeterminateness" in Mathematical Economics and Game Theory, (Springer) Berlin-Heidelberg-New York City 1977, S. 20-63
(With G. Menges) "Die Strukturierung von Unbestimmtheiten und eine Verallgemeinerung des Axiomensystems von Kolmogoroff", Statistische Hefte 1977/4, S. 297-302
(With G. Menges) "Lineare partielle Information, fuzziness und Vielziele-Optimierung", Proceedings in Operations Research 8, Physica-Verlag 1979
(With Fahrion, R., Huschens, S., Kuß, U., and Menges, G.) "Stochastische partielle Information (SPI)", Statistische Hefte, Bd. 21, Jg. 1980, S. 160-167
"Fuzzy sets- oder LPI-Theorie?" in G. Menges, H. Schelbert, P. Zweifel (eds.), Stochastische Unschärfe in Wirtschaftswissenschaften, Haag & Herchen, Frankfurt-am-Main, 1981
(With P. Zweifel)"Decisions under Fuzzy State Distribution with Application to the dealt Risks of Nuclear Power", in Haag, W. (ed.), Large Scale Energy Systems, (Pergamon), Oxford 1981, S: 437-444
"Extensive Spiele bei unvollständiger Information", in Information in der Wirtschaft, Gesellschaft für Wirtschafts- und Sozialwissenschaften, Band 126, Berlin 1982
"Equilibrium Points, Stability and Regulation in Fuzzy Optimisation Systems under Linear Partial Stochastic Information (LPI)", Proceedings of the International Congress of Cybernetics and Systems, AFCET, Paris 1984, pp. 233–240
"Fuzzy Weighing in Multiple Objective Decision Making, G. Menges Contribution and Some New Developments", Beitrag zum Gedenkband G. Menges, Hrgb. Schneeweiss, H., Strecker H., Springer Verlag 1984
(With Z. W. Kmietowicz, and A. D. Pearman) "Decision making with Linear Partial Information (L.P.I.)". The Journal of the Operational Research Society, 35(12):1079-1090, 1984
(With P. Zweifel, A. Zimmermann) "Application of the Linear Partial Information (LPI) to forecasting the Swiss timber market" Journal of Forecasting 1985, v4(4),387-398
(With Peter Zweifel) "Exploiting linear partial information for optimal use of forecasts with an application to U.S. economic policy, International Journal of Forecasting, 1988
"Prognosen und Stabilität bei unvollständiger Information", Campus 1989
(With P. Zweifel) "Convolution of Fuzzy Distributions in Decision Making", Statistical Papers 32, Springer 1991, p. 123-136
(With P. Zweifel) "One-Shot Decisions under Linear Partial Information" Theory and Decision 34, 1993, p. 1-20
"Decision Making under Linear Partial Information". Proceedings of the European Congress EUFIT, Aachen, 1994, p. 891-896
(With P. Zweifel) "Linear Partial Information in One-Shot Decisions", Selecta Statistica Vol. IX, 1996
Mehrfache Zielsetzung in wirtschaftlichen Entscheidungen bei unscharfen Daten, Institut für Empirische Wirtschaftsforschung, 9602, 1996
"Linear Partial Information with Applications". Proceedings of ISFL 1997 (International Symposium on Fuzzy Logic), Zürich, 1997, p. 235-239
(With Thomas Kofler) "Forecasting Analysis of the Economic Growth", Selecta Statistica Canadiana, 1998
"Linear Partial Information with Applications in Fuzzy Sets and Systems", 1998. North-Holland
(With Thomas Kofler) Fuzzy Logic and Economic Decisions, 1998
(With L. Götte) "Fuzzy Systems and their Game Theoretical Solution", International Conference on Operations Research, ETH, Zürich, August 1998
"Forecasting and Optimal Strategies in Fuzzy Chess Situations ("Prognosen und Optimale Strategien in unscharfen Schachsituationen"), Idee & Form No. 70, 2001 Zürich, pp. 2065 & 2067
(With P. Zweifel) "One-shot decisions under Linear Partial Information" - Springer Netherlands, 2005
External links
How to apply the Linear Partial Information (LPI)
Linear Partial Information (LPI) theory and its applications
Applying the Linear Partial Information (LPI) for USA's economy policy
Practical decision making with the Linear Partial Information (LPI)
One-shot decisions applying the Linear Partial Information (LPI)
1911 births
2007 deaths
People from Berezhany
20th-century Swiss mathematicians
20th-century Polish mathematicians
Game theorists
Information theorists
Set theorists
Probability theorists
Academic staff of the University of Zurich
Polish emigrants to Switzerland | Edward Kofler | Mathematics | 2,028 |
65,636,147 | https://en.wikipedia.org/wiki/House%20%26%20Home | House & Home magazine (also known as Canadian House & Home magazine) is a decorating, design and lifestyle publication that is published by the Toronto-based company House & Home Media.
The publication's title has a long history that goes back as far as 1952, although the magazine's current ownership, direction and subject matter commenced in 1986, when the publication was purchased by publisher Lynda Reeves and re-launched as the eponymous publication of the House & Home brand.
House & Home, The Magazine of Building (also styled House + Home) was originally a monthly architecture magazine published by Time Inc. from 1952 to 1964. The first issue was an offshoot of Architectural Forum and sent to 100,000 subscribers. The original editor and publisher was P.I. Prentice, and Douglas Haskell was editorial chairman. The illustrated monthly carried feature articles on home building, planning, and building materials. The magazine was sold to McGraw-Hill in 1964 and renamed Housing in 1978. That magazine merged with Builder magazine in 1982. Toronto-based interior designer Lynda Reeves purchased the magazine in 1986 and resurrected the original name. The magazine has a French-language counterpart called Maison & Demeure.
House & Home is a registered trademark in Canada and the US, and has been used by House & Home Media on branded merchandise sold across Canada at the Bay, Home Outfitters and Zellers stores. The House & Home: Style for Living branded merchandise is currently sold across Canada at The TJX Companies Inc. stores such as Winners, Homesense and Marshalls. The House & Home brand has given rise to a website, Houseandhome.com, a retail shopping website, Shophouseandhome.com, and a television series, House & Home with Lynda Reeves, which was broadcast on CTV and GlobalTV before being distributed internationally.
References
Architecture magazines
Magazines established in 1952
Magazines disestablished in 1982
Magazines established in 1986 | House & Home | Engineering | 395 |
2,642,277 | https://en.wikipedia.org/wiki/Geron%20Corporation | Geron Corporation is a biotechnology company located in Foster City, California which specializes in developing and commercializing therapeutic products for cancer that inhibit telomerase.
Company information
Geron, based in Foster City, California, was founded by gerontologist Mary C. West and Michael D. West, now CEO of AgeX Therapeutics. They secured initial venture capital investments in the company from Kleiner Perkins Caufield & Byers and Venrock. The company was incorporated in 1990 and began doing business in 1992. John A. Scarlett was appointed CEO in 2011.
The company's Scientific and Clinical Advisory Board has included Nobel laureates James Watson, Gunter Blobel, and Carol Greider, and Leonard Hayflick, known for discovering that human cells divide for a limited number of times in vitro (called the Hayflick limit).
In 2017, Geron staff received the highest median pay in California, at $500,250.
Drug candidates
Cancer therapies
Geron Corporation has sponsored human clinical trials of several potential anti-cancer products.
Imetelstat (GRN163L) is a drug that targets telomerase. In studies conducted at Johns Hopkins University, GRN163L was active against both CD138+ and CD138neg cancer stem cells and eliminated the colony forming potential of both by five weeks. Similarly, GRN163L inhibited the in vitro clonogenic growth of CD138neg Multiple Myeloma Cancer Stem Cells isolated from the bone marrow aspirates of patients with multiple myeloma. On November 3, 2014, the FDA removed the full clinical hold on imetelstat and declared the company's clinical development plan as acceptable. In 2014, Geron licensed imetelstat to Janssen Biotech. In 2017, imetelstat was granted fast track status by the FDA for certain patients with myelodysplastic syndrome. In 2018, Janssen Biotech returned the rights to imetelstat to Geron. Imetelstat is currently in two Phase 3 trials: NCT02598661, a study of the drug's ability to reduce the transfusion requirements of patients with myelodysplastic syndrome (MDS), and NCT04576156, an investigation of the drug's effect on the overall survival of patients with myelofibrosis.
A trial of GRNVAC1, a telomerase vaccine being used on patients with prostate cancer was carried out at Duke University. Geron's progress with telomerase vaccines attracted a modest monetary investment in 2005 from Merck.
GRN1005, an LRP-directed conjugate of paclitaxel, was in phase II clinical trials for brain cancer but discontinued based on preliminary results in 2012.
Telomerase activation
In addition to testing drug candidates that exploit cancer cells' dependence on telomerase, Geron is researching the possible applications of activating the enzyme in normal cells to delay cellular senescence. The company is in the early stages of developing a telomerase-based treatment for HIV called TAT0002, which is the saponin cycloastragenol found in the Chinese herb Astragalus propinquus. Geron has granted a license to Telomerase Activation Sciences to sell TA-65, the telomerase activator agent also derived from astragalus. In October 2010, Intertek/AAC Labs, an ISO 17025 internationally recognized lab, found the largest component of TA-65 to be cycloastragenol.
Geron originally investigated telomerase as a means of understanding and modifying human aging. However, Geron has ceased aging research of any kind.
Stem cell therapies
On January 23, 2009, Geron received FDA approval to begin Phase I testing of GRNOPC1 in humans. GRNOPC1 is an embryonic stem cell based drug that is designed to treat specific forms of spinal cord injury through remyelination of damaged axons. This trial does not involve direct use of stem cells however, as GRNOPC1 is composed of oligodendrocyte progenitor cells derived from embryonic stem cell lines. Studies have shown significant restoration of mobility in animals with spinal injuries that received cells.
Geron also has several other embryonic stem cell treatments that are still in the preclinical phase, including GRNCM1, a treatment for heart disease, and GRNIC1, a treatment for diabetes. In tests with diabetic mice, 80% of the mice given GRNIC1 were still alive in 50 days while the entire control group, which was given no treatment, perished.
Geron sold its human stem cell research assets to Asterias in 2013.
GRNOPC1
As of late 2010, one of Geron's most highly publicized trial therapy products was GRNOPC1, a stem cell therapy designed to heal severe spinal cord injuries. The cells in the GRNOPC1 therapy have been coaxed into becoming early myelinated glial cells, a type of cell that insulates nerve cells. Each GRNOPC1 cell injected into the patient develops into six to ten cells within a few months.
In October 2011, updated results on four patients were released. The trial was discontinued in November 2011.
In early 2013 BioTime, whose CEO at the time was Geron founder Michael D. West, acquired 400 patents and other intellectual property related to embryonic stem cells from Geron and later went on to restart the trial.
Patent issues
Geron Corporation initially held exclusive rights to three cell types derived from embryonic stem cells, as the result of paying for the research originally conducted by Dr. James Thomson at the University of Wisconsin–Madison. The patents on the other three cell types are owned by the Wisconsin Alumni Research Foundation (WARF). WARF and Geron did not charge academics to study human stem cells but did charge commercial users. In 2001 WARF came under public pressure to widen access to human stem-cell technology, and they launched legal action against Geron Corporation to recover some of the previously sold rights. The two sides agreed that Geron would keep the rights to only three cell types.
In October 2006, a legal challenge was mounted to overturn these patents by The Foundation for Taxpayer and Consumer Rights and the non-profit patent-watchdog Public Patent Foundation. They contended that two of the patents granted to WARF are invalid because they cover a technique published in 1992 for which a patent had already been granted to an Australian researcher. Another part of the challenge came from the molecular biologist Jeanne Loring who stated that University of Wisconsin–Madison stem cell pioneer James Thomson's techniques (currently patents held by WARF) are rendered obvious by a 1990 paper and two textbooks. The outcome of this legal challenge was particularly relevant to the Geron Corporation as it can only license patents that are upheld. The patents were ultimately upheld when the reexamination concluded in 2008.
As an interim measure, on January 23, 2007, WARF relaxed the stem cell patents, allowing industry-sponsored research at academic and non-profit institutions without a license. WARF will allow easier and simpler cost free cell transfers among researchers and would not require a license or agreement from California's taxpayer-funded stem cell research program.
Politics
As a participant in the then-controversial stem cell and cloning area, Geron Corporation was asked to testify about its technology before the U.S. Congress. In 2001, when Congress was attempting to ban all forms of cloning, then Geron CEO Thomas Okarma spoke before Congress to preserve cloning for therapeutic purposes.
See also
Maia Biotechnology
References
External links
Biotechnology companies of the United States
Stem cells
Cancer organizations based in the United States
Companies based in Menlo Park, California
Life sciences industry
Biotechnology companies established in 1990
1990 establishments in California | Geron Corporation | Biology | 1,618 |
3,220,183 | https://en.wikipedia.org/wiki/Polyfuse%20%28PROM%29 | A polyfuse is a one-time-programmable memory component used in semiconductor circuits for storing unique data like chip identification numbers or memory repair data, but more usually small to medium volume production of read only memory devices or microcontroller chips. They were also used as to permit programming of Programmable Array Logic. The use of fuses allowed the device to be programmed electrically some time after it was manufactured and sealed into its packaging. Earlier fuses had to be blown using a laser at the time memory was manufactured. Polyfuses were developed to replace the earlier nickel-chromium (ni-chrome) fuses. Because ni-chrome contains nickel, the ni-chrome fuse, once blown had a tendency to grow back and render the memory unusable.
History
The first polyfuses consisted of a polysilicon line, which was programmed by applying a high (10 V–15 V) voltage across the device. The resultant current physically alters the device and increases its electrical resistance. This change in resistance can be detected and registered as a logical zero, while an unprogrammed polyfuse is registered as a logical one. These early devices had severe drawbacks, such as a high programming voltage and unreliability of the programmed devices.
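As an illustrative sketch, the following Python snippet models this sensing behaviour in software; the resistance values and the comparator threshold are placeholder figures chosen for illustration rather than values from any particular device.

```python
# Illustrative model of one-time-programmable fuse sensing (placeholder values).
# An unprogrammed polyfuse has low resistance and reads as logical 1;
# a programmed (blown) fuse has much higher resistance and reads as logical 0.

UNPROGRAMMED_OHMS = 100.0         # assumed low resistance before programming
PROGRAMMED_OHMS = 100_000.0       # assumed high resistance after programming
SENSE_THRESHOLD_OHMS = 10_000.0   # assumed comparator threshold of the sense circuit

class PolyFuse:
    def __init__(self):
        self.resistance = UNPROGRAMMED_OHMS

    def program(self):
        """Model applying the programming voltage: resistance rises permanently."""
        self.resistance = PROGRAMMED_OHMS

    def read(self) -> int:
        """Return 1 for an intact fuse, 0 for a blown fuse."""
        return 0 if self.resistance > SENSE_THRESHOLD_OHMS else 1

# Store an 8-bit chip ID by blowing the fuses that correspond to 0 bits.
fuses = [PolyFuse() for _ in range(8)]
chip_id = 0b1011_0010
for bit, fuse in enumerate(fuses):
    if not (chip_id >> bit) & 1:
        fuse.program()

readback = sum(fuse.read() << bit for bit, fuse in enumerate(fuses))
assert readback == chip_id
print(f"read back chip ID: {readback:#010b}")
```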
Modern polyfuses
Modern polyfuses consist of a silicided polysilicon line, which is also programmed by applying a voltage across the device. Again, the resultant current permanently alters the resistance. The silicide layer covering the polysilicon line reduces its resistance (before programming), allowing the use of much lower programming voltages (1.8 V–3.3 V). Polyfuses have been shown to reliably store programmed data and can be programmed at high speed. Programming speeds of 100 ns have been reported.
See also
Programmable read-only memory
References
Resistive components
Non-volatile memory
Computer memory | Polyfuse (PROM) | Physics | 388 |
64,075,852 | https://en.wikipedia.org/wiki/Chemistry%20and%20Energy%20Federation | The Chemistry and Energy Federation (, FCE) is a trade union representing workers in the energy and chemical industries in France.
The union was founded in 1997, when the Gas and Electricity Federation merged with the United Federation of Chemistry. Like its predecessors, the union affiliated to the French Democratic Confederation of Labour. By 2017, the union claimed 37,428 members. In addition to chemicals and energy, it also represents workers in rubber, writing instruments, yachting, paper and cardboard, petroleum, pharmaceuticals, plastics and glass.
References
External links
Chemical industry in France
Chemical industry trade unions
Energy industry trade unions
Trade unions established in 1997
Trade unions in France | Chemistry and Energy Federation | Chemistry | 131 |
39,124,477 | https://en.wikipedia.org/wiki/Combined%20forced%20and%20natural%20convection | In fluid thermodynamics, combined forced convection and natural convection, or mixed convection, occurs when natural convection and forced convection mechanisms act together to transfer heat. This is also defined as situations where both pressure forces and buoyant forces interact. How much each form of convection contributes to the heat transfer is largely determined by the flow, temperature, geometry, and orientation. The nature of the fluid is also influential, since the Grashof number increases in a fluid as temperature increases, but is maximized at some point for a gas.
Characterization
Mixed convection problems are characterized by the Grashof number (for the natural convection) and the Reynolds number (for the forced convection). The relative effect of buoyancy on mixed convection can be expressed through the Richardson number:
Ri = Gr / Re²
The respective length scales for each dimensionless number must be chosen depending on the problem, e.g. a vertical length for the Grashof number and a horizontal scale for the Reynolds number. Small Richardson numbers characterize a flow dominated by forced convection, while Richardson numbers much greater than unity indicate that the flow problem is pure natural convection and the influence of forced convection can be neglected.
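As a rough illustration, the Python sketch below evaluates the Grashof, Reynolds and Richardson numbers for a heated plate in a weak cross-flow; the fluid properties and the Richardson-number cut-offs (about 0.1 and 10) are common rules of thumb used here as assumptions, not values prescribed by this classification.

```python
def grashof(g, beta, delta_T, L, nu):
    """Gr = g * beta * dT * L^3 / nu^2: buoyancy relative to viscous forces."""
    return g * beta * delta_T * L**3 / nu**2

def reynolds(U, L, nu):
    """Re = U * L / nu: inertial relative to viscous forces."""
    return U * L / nu

# Example: air at roughly 300 K over a 0.5 m heated plate, 20 K overheat, 0.3 m/s flow.
nu = 1.6e-5          # kinematic viscosity of air [m^2/s], approximate
beta = 1.0 / 300.0   # thermal expansion coefficient of an ideal gas [1/K]
Gr = grashof(9.81, beta, 20.0, 0.5, nu)
Re = reynolds(0.3, 0.5, nu)
Ri = Gr / Re**2      # Richardson number

if Ri < 0.1:         # rule-of-thumb cut-offs, assumed for illustration
    regime = "forced convection dominates"
elif Ri > 10:
    regime = "natural convection dominates"
else:
    regime = "mixed convection"
print(f"Gr = {Gr:.3g}, Re = {Re:.3g}, Ri = {Ri:.3g} -> {regime}")
```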
Like for natural convection, the nature of a mixed convection flow is highly dependent on heat transfer (as buoyancy is one of the driving mechanisms) and turbulence effects play a significant role.
Cases
Because of the wide range of variables, hundreds of papers have been published for experiments involving various types of fluids and geometries. This variety makes a comprehensive correlation difficult to obtain, and when it is, it is usually for very limited cases. Combined forced and natural convection, however, can be generally described in one of three ways.
Two-dimensional mixed convection with aiding flow
The first case is when natural convection aids forced convection. This is seen when the buoyant motion is in the same direction as the forced motion, thus accelerating the boundary layer and enhancing the heat transfer. Transition to turbulence, however, can be delayed. An example of this would be a fan blowing upward on a hot plate. Since heat naturally rises, the air being forced upward over the plate adds to the heat transfer.
Two-dimensional mixed convection with opposing flow
The second case is when natural convection acts in the opposite way of the forced convection. Consider a fan forcing air upward over a cold plate. In this case, the buoyant force of the cold air naturally causes it to fall, but the air being forced upward opposes this natural motion. Depending on the Richardson number, the boundary layer at the cold plate exhibits a lower velocity than the free stream, or even accelerates in the opposite direction. This second mixed convection case therefore experiences strong shear in the boundary layer and quickly transitions into a turbulent flow state.
Three-dimensional mixed convection
The third case is referred to as three-dimensional mixed convection. This flow occurs when the buoyant motion acts perpendicular to the forced motion. An example of this case is a hot, vertical flat plate with a horizontal flow, e.g. the surface of a solar thermal central receiver. While the free stream continues its motion along the imposed direction, the boundary layer at the plate accelerates in the upward direction. In this flow case, buoyancy plays a major role in the laminar-turbulent transition, while the imposed velocity can suppress turbulence (laminarization).
Calculation of total heat transfer
Simply adding or subtracting the heat transfer coefficients for forced and natural convection will yield inaccurate results for mixed convection. Also, as the influence of buoyancy on the heat transfer sometimes even exceeds the influence of the free stream, mixed convection should not be treated as pure forced convection. Consequently, problem-specific correlations are required. Experimental data has suggested that a combination of the form
Nu^n = Nu_forced^n ± Nu_natural^n (plus sign for aiding flow, minus sign for opposing flow)
can describe the area-averaged heat transfer. For the case of a large, vertical surface in a horizontal flow, a particular choice of the exponent n provided a best fit, depending on the details of how the forced-convection contribution is fitted.
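A minimal sketch of such a combination rule, assuming an exponent of n = 3 (a commonly quoted value, used here purely as an assumption) and placeholder Nusselt numbers that would normally come from geometry-specific correlations:

```python
def combined_nusselt(nu_forced, nu_natural, n=3.0, aiding=True):
    """Blend forced- and natural-convection Nusselt numbers:
    Nu^n = Nu_forced^n + Nu_natural^n for aiding flow,
    Nu^n = |Nu_forced^n - Nu_natural^n| for opposing flow."""
    if aiding:
        return (nu_forced**n + nu_natural**n) ** (1.0 / n)
    return abs(nu_forced**n - nu_natural**n) ** (1.0 / n)

# Placeholder values; real Nu_forced and Nu_natural come from correlations
# appropriate to the geometry (e.g. flat plate, cylinder).
print(combined_nusselt(40.0, 25.0, aiding=True))    # aiding flow
print(combined_nusselt(40.0, 25.0, aiding=False))   # opposing flow
```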
Applications
Combined forced and natural convection is often seen in very-high-power-output devices where the forced convection is not enough to dissipate all of the heat necessary. At this point, combining natural convection with forced convection will often deliver the desired results. Examples of these processes are nuclear reactor technology and some aspects of electronic cooling.
References
Convection
Heat transfer | Combined forced and natural convection | Physics,Chemistry | 856 |
19,662 | https://en.wikipedia.org/wiki/Mean%20value%20theorem | In mathematics, the mean value theorem (or Lagrange's mean value theorem) states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval.
History
A special case of this theorem for inverse interpolation of the sine was first described by Parameshvara (1380–1460), from the Kerala School of Astronomy and Mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Michel Rolle in 1691; the result was what is now known as Rolle's theorem, and was proved only for polynomials, without the techniques of calculus. The mean value theorem in its modern form was stated and proved by Augustin Louis Cauchy in 1823. Many variations of this theorem have been proved since then.
Statement
Let f : [a, b] → R be a continuous function on the closed interval [a, b] and differentiable on the open interval (a, b), where a < b. Then there exists some c in (a, b) such that:
f′(c) = (f(b) − f(a)) / (b − a)
The mean value theorem is a generalization of Rolle's theorem, which assumes f(a) = f(b), so that the right-hand side above is zero.
The mean value theorem is still valid in a slightly more general setting. One only needs to assume that f is continuous on [a, b], and that for every x in (a, b) the limit
lim_{h→0} (f(x + h) − f(x)) / h
exists as a finite number or equals +∞ or −∞. If finite, that limit equals f′(x). An example where this version of the theorem applies is given by the real-valued cube root function x ↦ x^(1/3), whose derivative tends to infinity at the origin.
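As an informal numerical illustration of the theorem, the sketch below locates a point c whose derivative matches the secant slope for the example f(x) = x³ on [0, 2]; the choice of function and the bisection tolerance are arbitrary.

```python
def mvt_point(f, df, a, b, tol=1e-12):
    """Find c in (a, b) with df(c) = (f(b) - f(a)) / (b - a) by bisection.
    Assumes df(x) - secant changes sign on [a, b]."""
    secant = (f(b) - f(a)) / (b - a)
    g = lambda x: df(x) - secant
    lo, hi = a, b
    if g(lo) * g(hi) > 0:
        raise ValueError("bisection needs a sign change; pick another bracket")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# f(x) = x^3 on [0, 2]: secant slope is 4, and f'(c) = 3c^2 = 4 gives c = 2/sqrt(3).
c = mvt_point(lambda x: x**3, lambda x: 3 * x**2, 0.0, 2.0)
print(c)  # approximately 1.1547
```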
Proof
The expression (f(b) − f(a)) / (b − a) gives the slope of the line joining the points (a, f(a)) and (b, f(b)), which is a chord of the graph of f, while f′(x) gives the slope of the tangent to the curve at the point (x, f(x)). Thus the mean value theorem says that given any chord of a smooth curve, we can find a point on the curve lying between the end-points of the chord such that the tangent of the curve at that point is parallel to the chord. The following proof illustrates this idea.
Define g(x) = f(x) − rx, where r is a constant. Since f is continuous on [a, b] and differentiable on (a, b), the same is true for g. We now want to choose r so that g satisfies the conditions of Rolle's theorem. Namely
g(a) = g(b), that is, f(a) − ra = f(b) − rb, which gives r = (f(b) − f(a)) / (b − a).
By Rolle's theorem, since g is differentiable and g(a) = g(b), there is some c in (a, b) for which g′(c) = 0, and it follows from the equality g(x) = f(x) − rx that
f′(c) = g′(c) + r = r = (f(b) − f(a)) / (b − a).
Implications
Theorem 1: Assume that f is a continuous, real-valued function, defined on an arbitrary interval I of the real line. If the derivative of f at every interior point of the interval I exists and is zero, then f is constant in the interior.
Proof: Assume the derivative of f at every interior point of the interval I exists and is zero. Let (a, b) be an arbitrary open interval in I. By the mean value theorem, there exists a point c in (a, b) such that
0 = f′(c) = (f(b) − f(a)) / (b − a).
This implies that f(a) = f(b). Thus, f is constant on the interior of I and thus is constant on I by continuity. (See below for a multivariable version of this result.)
Remarks:
Only continuity of f, not differentiability, is needed at the endpoints of the interval I. No hypothesis of continuity needs to be stated if I is an open interval, since the existence of a derivative at a point implies the continuity at this point. (See the section continuity and differentiability of the article derivative.)
The differentiability of can be relaxed to one-sided differentiability, a proof is given in the article on semi-differentiability.
Theorem 2: If f′(x) = g′(x) for all x in an interval (a, b) of the domain of these functions, then f − g is constant, i.e. f = g + c where c is a constant on (a, b).
Proof: Let F = f − g, then F′ = f′ − g′ = 0 on the interval (a, b), so the above theorem 1 tells that F = f − g is a constant c, or f = g + c.
Theorem 3: If F is an antiderivative of f on an interval I, then the most general antiderivative of f on I is F(x) + c where c is a constant.
Proof: It directly follows from theorem 2 above.
Cauchy's mean value theorem
Cauchy's mean value theorem, also known as the extended mean value theorem, is a generalization of the mean value theorem. It states: if the functions f and g are both continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists some c in (a, b), such that
(f(b) − f(a)) g′(c) = (g(b) − g(a)) f′(c).
Of course, if g(a) ≠ g(b) and g′(c) ≠ 0, this is equivalent to:
f′(c) / g′(c) = (f(b) − f(a)) / (g(b) − g(a)).
Geometrically, this means that there is some tangent to the graph of the curve
t ↦ (f(t), g(t)),
which is parallel to the line defined by the points (f(a), g(a)) and (f(b), g(b)). However, Cauchy's theorem does not claim the existence of such a tangent in all cases where (f(a), g(a)) and (f(b), g(b)) are distinct points, since it might be satisfied only for some value c with f′(c) = g′(c) = 0, in other words a value for which the mentioned curve is stationary; in such points no tangent to the curve is likely to be defined at all. An example of this situation is the curve given by
t ↦ (t³, 1 − t²),
which on the interval [−1, 1] goes from the point (−1, 0) to (1, 0), yet never has a horizontal tangent; however it has a stationary point (in fact a cusp) at t = 0.
Cauchy's mean value theorem can be used to prove L'Hôpital's rule. The mean value theorem is the special case of Cauchy's mean value theorem when g(x) = x.
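A similar informal check can be made for Cauchy's form; in the sketch below the functions f(t) = t³ and g(t) = t² on [1, 2] are arbitrary choices, and the grid search is only an illustration rather than a rigorous root-finder.

```python
def cauchy_point(f, g, df, dg, a, b, steps=200_000):
    """Grid-search for c in (a, b) minimizing |(f(b)-f(a))*g'(c) - (g(b)-g(a))*f'(c)|."""
    residual = lambda c: abs((f(b) - f(a)) * dg(c) - (g(b) - g(a)) * df(c))
    return min((a + (b - a) * i / steps for i in range(1, steps)), key=residual)

# f(t) = t^3 and g(t) = t^2 on [1, 2]: the condition 7 * 2c = 3 * 3c^2 gives c = 14/9.
c = cauchy_point(lambda t: t**3, lambda t: t**2,
                 lambda t: 3 * t**2, lambda t: 2 * t, 1.0, 2.0)
print(c)  # approximately 1.5556 = 14/9
```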
Proof
The proof of Cauchy's mean value theorem is based on the same idea as the proof of the mean value theorem.
Suppose g(a) ≠ g(b). Define h(x) = f(x) − r g(x), where r is fixed in such a way that h(a) = h(b), namely
r = (f(b) − f(a)) / (g(b) − g(a)).
Since f and g are continuous on [a, b] and differentiable on (a, b), the same is true for h. All in all, h satisfies the conditions of Rolle's theorem: consequently, there is some c in (a, b) for which h′(c) = 0. Now using the definition of h we have:
0 = h′(c) = f′(c) − r g′(c),
and thus
f′(c) = r g′(c), that is, (f(b) − f(a)) g′(c) = (g(b) − g(a)) f′(c).
If g(a) = g(b), then, applying Rolle's theorem to g, it follows that there exists c in (a, b) for which g′(c) = 0. Using this choice of c, Cauchy's mean value theorem (trivially) holds.
Mean value theorem in several variables
The mean value theorem generalizes to real functions of multiple variables. The trick is to use parametrization to create a real function of one variable, and then apply the one-variable theorem.
Let be an open subset of , and let be a differentiable function. Fix points such that the line segment between lies in , and define . Since is a differentiable function in one variable, the mean value theorem gives:
for some between 0 and 1. But since and , computing explicitly we have:
where denotes a gradient and a dot product. This is an exact analog of the theorem in one variable (in the case this is the theorem in one variable). By the Cauchy–Schwarz inequality, the equation gives the estimate:
In particular, when G is convex and the partial derivatives of f are bounded, f is Lipschitz continuous (and therefore uniformly continuous).
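The Lipschitz estimate can be illustrated numerically; in the sketch below the function sin(x) + 0.5·cos(y) and the sampling range are arbitrary choices used to check the bound |f(q) − f(p)| ≤ M·|q − p|, with M a bound on the gradient norm over the (convex) plane.

```python
import math, random

def f(x, y):
    return math.sin(x) + 0.5 * math.cos(y)   # example function, chosen for illustration

# |df/dx| <= 1 and |df/dy| <= 0.5 everywhere, so |grad f| <= sqrt(1 + 0.25) on R^2.
M = math.sqrt(1.0 + 0.25)

random.seed(0)
for _ in range(10_000):
    x1, y1, x2, y2 = (random.uniform(-5, 5) for _ in range(4))
    lhs = abs(f(x2, y2) - f(x1, y1))
    rhs = M * math.hypot(x2 - x1, y2 - y1)
    assert lhs <= rhs + 1e-12
print("Lipschitz bound |f(q) - f(p)| <= M |q - p| held on all samples")
```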
As an application of the above, we prove that is constant if the open subset is connected and every partial derivative of is 0. Pick some point , and let . We want to show for every . For that, let . Then is closed in and nonempty. It is open too: for every ,
for every in open ball centered at and contained in . Since is connected, we conclude .
The above arguments are made in a coordinate-free manner; hence, they generalize to the case when is a subset of a Banach space.
Mean value theorem for vector-valued functions
There is no exact analog of the mean value theorem for vector-valued functions (see below). However, there is an inequality which can be applied to many of the same situations to which the mean value theorem is applicable in the one dimensional case:
Mean value inequality
Jean Dieudonné in his classic treatise Foundations of Modern Analysis discards the mean value theorem and replaces it by mean inequality as the proof is not constructive and one cannot find the mean value and in applications one only needs mean inequality. Serge Lang in Analysis I uses the mean value theorem, in integral form, as an instant reflex but this use requires the continuity of the derivative. If one uses the Henstock–Kurzweil integral one can have the mean value theorem in integral form without the additional assumption that derivative should be continuous as every derivative is Henstock–Kurzweil integrable.
The reason why there is no analog of mean value equality is the following: If is a differentiable function (where is open) and if , is the line segment in question (lying inside ), then one can apply the above parametrization procedure to each of the component functions of f (in the above notation set ). In doing so one finds points on the line segment satisfying
But generally there will not be a single point on the line segment satisfying
for all simultaneously. For example, define:
Then , but and are never simultaneously zero as ranges over .
The above theorem implies the following:
In fact, the above statement suffices for many applications and can be proved directly as follows. (We shall write for for readability.)
Cases where the theorem cannot be applied
All conditions for the mean value theorem are necessary:
is differentiable on
is continuous on
is real-valued
When one of the above conditions is not satisfied, the mean value theorem is not valid in general, and so it cannot be applied.
The necessity of the first condition can be seen by the counterexample where the function f(x) = |x| on [−1, 1] is not differentiable.
The necessity of the second condition can be seen by the counterexample where the function
f(x) = 1 at x = 0 and f(x) = 0 for x in (0, 1]
satisfies criteria 1 since f′(x) = 0 on (0, 1) but not criteria 2 since
(f(1) − f(0)) / (1 − 0) = −1
and f′(x) = 0 for all x in (0, 1), so no such c exists.
The theorem is false if a differentiable function is complex-valued instead of real-valued. For example, if f(x) = e^(ix) for all real x, then
f(2π) − f(0) = 0,
while |f′(x)| = 1, so f′(x) ≠ 0, for any real x.
Mean value theorems for definite integrals
First mean value theorem for definite integrals
Let f : [a, b] → R be a continuous function. Then there exists c in (a, b) such that
∫_a^b f(x) dx = f(c) (b − a).
This follows at once from the fundamental theorem of calculus, together with the mean value theorem for derivatives. Since the mean value of f on [a, b] is defined as
(1 / (b − a)) ∫_a^b f(x) dx,
we can interpret the conclusion as f achieves its mean value at some c in (a, b).
In general, if f : [a, b] → R is continuous and g is an integrable function that does not change sign on [a, b], then there exists c in (a, b) such that
∫_a^b f(x) g(x) dx = f(c) ∫_a^b g(x) dx.
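As an informal numerical illustration of this weighted version, the sketch below uses f(x) = x² with the non-negative weight g(x) = e^(−x) on [0, 1] (arbitrary choices) and recovers a point c where f(c) equals the weighted mean.

```python
import math

def trapezoid(h, a, b, n=100_000):
    """Simple trapezoidal rule for integrating h over [a, b]."""
    step = (b - a) / n
    total = 0.5 * (h(a) + h(b))
    total += sum(h(a + i * step) for i in range(1, n))
    return total * step

f = lambda x: x**2
g = lambda x: math.exp(-x)   # does not change sign on [0, 1]

a, b = 0.0, 1.0
weighted_mean = trapezoid(lambda x: f(x) * g(x), a, b) / trapezoid(g, a, b)
c = math.sqrt(weighted_mean)  # solve f(c) = c^2 = weighted mean, with c in (0, 1)
print(c, f(c), weighted_mean)  # f(c) matches the weighted mean, c is about 0.504
```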
Second mean value theorem for definite integrals
There are various slightly different theorems called the second mean value theorem for definite integrals. A commonly found version is as follows:
If G : [a, b] → R is a positive monotonically decreasing function and φ : [a, b] → R is an integrable function, then there exists a number x in (a, b] such that
∫_a^b G(t) φ(t) dt = G(a+) ∫_a^x φ(t) dt.
Here G(a+) stands for lim_{t→a+} G(t), the existence of which follows from the conditions. Note that it is essential that the interval (a, b] contains b. A variant not having this requirement is:
If G : [a, b] → R is a monotonic (not necessarily decreasing and positive) function and φ : [a, b] → R is an integrable function, then there exists a number x in (a, b) such that
∫_a^b G(t) φ(t) dt = G(a+) ∫_a^x φ(t) dt + G(b−) ∫_x^b φ(t) dt.
If the function returns a multi-dimensional vector, then the MVT for integration is not true, even if the domain of is also multi-dimensional.
For example, consider the following 2-dimensional function defined on an -dimensional cube:
Then, by symmetry it is easy to see that the mean value of over its domain is (0,0):
However, there is no point in which , because everywhere.
Generalizations
Linear algebra
Assume that and are differentiable functions on that are continuous on . Define
There exists such that .
Notice that
and if we place , we get Cauchy's mean value theorem. If we place and we get Lagrange's mean value theorem.
The proof of the generalization is quite simple: each of and are determinants with two identical rows, hence . The Rolle's theorem implies that there exists such that .
Probability theory
Let X and Y be non-negative random variables such that E[X] < E[Y] < ∞ and (i.e. X is smaller than Y in the usual stochastic order). Then there exists an absolutely continuous non-negative random variable Z having probability density function
Let g be a measurable and differentiable function such that E[g(X)], E[g(Y)] < ∞, and let its derivative g′ be measurable and Riemann-integrable on the interval [x, y] for all y ≥ x ≥ 0. Then, E[g′(Z)] is finite and
Complex analysis
As noted above, the theorem does not hold for differentiable complex-valued functions. Instead, a generalization of the theorem is stated such:
Let f : Ω → C be a holomorphic function on the open convex set Ω, and let a and b be distinct points in Ω. Then there exist points u, v on the interior of the line segment from a to b such that
Where Re() is the real part and Im() is the imaginary part of a complex-valued function.
See also
Newmark-beta method
Mean value theorem (divided differences)
Racetrack principle
Stolarsky mean
Notes
References
External links
PlanetMath: Mean-Value Theorem
"Mean Value Theorem: Intuition behind the Mean Value Theorem" at the Khan Academy
Augustin-Louis Cauchy
Articles containing proofs
Theorems in calculus
Theorems in real analysis | Mean value theorem | Mathematics | 2,769 |
20,848,458 | https://en.wikipedia.org/wiki/Adularescence | Adularescence ( ) is an optical phenomenon that is produced in gemstones like moonstone. The optical effect is similar to labradorescence and aventurescence.
Description
The effect of adularescence, also commonly referred to as schiller or shiller, is best described as a milky, bluish luster or glow originating from below the surface of the gemstone. The schiller, appearing to move as the stone is turned (or as the light source is moved), gives the impression of lunar light floating on water (accounting for moonstone's name). Though white schiller is the most common, in rarer specimens, orange or blue lusters are produced.
This effect is most typically produced by adularia, a K-feldspar or orthoclase (KAlSi3O8), from which the name is derived. Adularescence appears in numerous other gemstones, notably common opal, rose quartz and agate. However, due to inclusions in these other stones, the effect is displayed differently. The schiller is scattered by inclusions and appears hazy; non-hazy specimens are specially referred to as "milky". Thus, adularescence occurring in non-adularia gemstones is termed differently – the "girasol effect" and opalescence (for opals only) are two such terms. When the schiller forms an indistinct band, it is said to display a chatoyant effect. Only clearly defined bands are referred to as "cat's eyes".
As an optical phenomenon, adularescence exists only in the presence of light; it is a product of the interaction between light and the internal microstructures of the mineral and not a property of the mineral itself. The effect is produced by alternating layers of two types at a scale near the wavelength of light (approximately 0.5 micron) – this leads to light scattering and interference.
See also
Aventurescence
Labradorescence
Optical phenomenon
Rainbow lattice sunstone
References
Mineralogy
Optical phenomena | Adularescence | Physics | 417 |
73,274,469 | https://en.wikipedia.org/wiki/1976%20CASAW%20wildcat%20strike | The 1976 CASAW wildcat strike was a wildcat strike action by members of the Canadian Association of Smelters and Allied Workers Union (CASAW) against Alcan in Kitimat, British Columbia. Lasting 18 days, CASAW members protested against wage and price controls imposed by the federal government. CASAW was affiliated with the Confederation of Canadian Unions.
See also
2021 Kitimat smelter strike
References
1976 in British Columbia
1976 labor disputes and strikes
Aluminum in Canada
Aluminium smelters
Metallurgical industry of Canada
Kitimat
Wildcat strikes
Pierre Trudeau
Confederation of Canadian Unions
Manufacturing industry strikes in Canada | 1976 CASAW wildcat strike | Chemistry | 126 |